In logic, a functionally complete set of logical connectives or Boolean operators is one that can be used to express all possible truth tables by combining members of the set into a Boolean expression. A well-known complete set of connectives is { AND, NOT }. Each of the singleton sets { NAND } and { NOR } is functionally complete. However, the set { AND, OR } is incomplete, due to its inability to express NOT. A gate (or set of gates) that is functionally complete can also be called a universal gate (or a universal set of gates). In the context of propositional logic, functionally complete sets of connectives are also called (expressively) adequate. From the point of view of digital electronics, functional completeness means that every possible logic gate can be realized as a network of gates of the types prescribed by the set. In particular, all logic gates can be assembled from either only binary NAND gates, or only binary NOR gates.

== Introduction ==

Modern texts on logic typically take as primitive some subset of the connectives: conjunction (∧), disjunction (∨), negation (¬), material conditional (→), and possibly the biconditional (↔). Further connectives can be defined, if so desired, by defining them in terms of these primitives. For example, NOR (the negation of the disjunction, sometimes denoted ↓) can be expressed as the conjunction of two negations:

A ↓ B := ¬A ∧ ¬B

Similarly, the negation of the conjunction, NAND (sometimes denoted ↑), can be defined in terms of disjunction and negation. Every binary connective can be defined in terms of {¬, ∧, ∨, →, ↔}, which means that this set is functionally complete.
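These defining equivalences are truth-functional, so they can be checked by exhausting the four assignments to A and B; a minimal sketch in Python (the helper names are illustrative):

```python
from itertools import product

# NOR and NAND defined from negation, conjunction, and disjunction,
# as in the definitions above.
nor = lambda a, b: (not a) and (not b)   # A ↓ B := ¬A ∧ ¬B
nand = lambda a, b: (not a) or (not b)   # A ↑ B := ¬A ∨ ¬B

for a, b in product([False, True], repeat=2):
    assert nor(a, b) == (not (a or b))    # NOR is the negation of the disjunction
    assert nand(a, b) == (not (a and b))  # NAND is the negation of the conjunction
```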
However, it contains redundancy: this set is not a minimal functionally complete set, because the conditional and biconditional can be defined in terms of the other connectives as

A → B := ¬A ∨ B
A ↔ B := (A → B) ∧ (B → A).

It follows that the smaller set {¬, ∧, ∨} is also functionally complete. (Its functional completeness is also proved by the Disjunctive Normal Form Theorem.) But this is still not minimal, as ∨ can be defined as

A ∨ B := ¬(¬A ∧ ¬B).

Alternatively, ∧ may be defined in terms of ∨ in a similar manner, or ∨ may be defined in terms of →:

A ∨ B := ¬A → B.

No further simplifications are possible. Hence, every two-element set of connectives containing ¬ and one of {∧, ∨, →} is a minimal functionally complete subset of {¬, ∧, ∨, →, ↔}.

== Formal definition ==

Given the Boolean domain B = {0, 1}, a set F of Boolean functions fi : B^ni → B is functionally complete if the clone on B generated by the basic functions fi contains all functions f : B^n → B, for all strictly positive integers n ≥ 1. In other words, the set is functionally complete if every Boolean function that takes at least one variable can be expressed in terms of the functions fi. Since every Boolean function of at least one variable can be expressed in terms of binary Boolean functions, F is functionally complete if and only if every binary Boolean function can be expressed in terms of the functions in F.
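The binary-function criterion just stated lends itself to a brute-force check: generate every binary truth table obtainable from a candidate set and see whether all 16 appear. A sketch under the simplifying assumption that only binary functions are composed (function names are illustrative):

```python
from itertools import product

INPUTS = list(product([0, 1], repeat=2))  # the four assignments to (A, B)

def table(f):
    """A binary connective represented by its 4-entry truth table."""
    return tuple(f(a, b) for a, b in INPUTS)

def binary_clone(ops):
    """All binary truth tables obtainable by composing the given binary ops,
    starting from the two projections A and B (brute-force fixpoint)."""
    known = {table(lambda a, b: a), table(lambda a, b: b)}
    grew = True
    while grew:
        grew = False
        for op in ops:
            for f, g in product(list(known), repeat=2):
                h = tuple(op(x, y) for x, y in zip(f, g))
                if h not in known:
                    known.add(h)
                    grew = True
    return known

nand = lambda a, b: 1 - (a & b)
land = lambda a, b: a & b
lor = lambda a, b: a | b

assert len(binary_clone([nand])) == 16      # {NAND}: all 16 binary functions
assert len(binary_clone([land, lor])) == 4  # {AND, OR}: stuck at A, B, A∧B, A∨B
```

The second assertion illustrates the incompleteness of { AND, OR } noted above: composition of monotone operators never leaves the monotone functions, so negation is unreachable.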
A more natural condition would be that the clone generated by F consist of all functions f : B^n → B, for all integers n ≥ 0. However, the examples given above are not functionally complete in this stronger sense because it is not possible to write a nullary function, i.e. a constant expression, in terms of F if F itself does not contain at least one nullary function. With this stronger definition, the smallest functionally complete sets would have 2 elements. Another natural condition would be that the clone generated by F together with the two nullary constant functions be functionally complete or, equivalently, functionally complete in the strong sense of the previous paragraph. The example of the Boolean function given by S(x, y, z) = z if x = y and S(x, y, z) = x otherwise shows that this condition is strictly weaker than functional completeness.

== Characterization of functional completeness ==

Emil Post proved that a set of logical connectives is functionally complete if and only if it is not a subset of any of the following sets of connectives:

The monotonic connectives; changing the truth value of any connected variables from F to T without changing any from T to F never makes these connectives change their return value from T to F, e.g. ∨, ∧, ⊤, ⊥.
The affine connectives, such that each connected variable either always or never affects the truth value these connectives return, e.g. ¬, ⊤, ⊥, ↔, ↮.
The self-dual connectives, which are equal to their own de Morgan dual; if the truth values of all variables are reversed, so is the truth value these connectives return, e.g. ¬, maj(p, q, r).
The truth-preserving connectives; they return the truth value T under any interpretation that assigns T to all variables, e.g. ∨, ∧, ⊤, →, ↔.
The falsity-preserving connectives; they return the truth value F under any interpretation that assigns F to all variables, e.g. ∨, ∧, ⊥, ↛, ↮.

Post gave a complete description of the lattice of all clones (sets of operations closed under composition and containing all projections) on the two-element set {T, F}, nowadays called Post's lattice, which implies the above result as a simple corollary: the five mentioned sets of connectives are exactly the maximal nontrivial clones.

== Minimal functionally complete operator sets ==

When a single logical connective or Boolean operator is functionally complete by itself, it is called a Sheffer function or sometimes a sole sufficient operator. There are no unary operators with this property. NAND and NOR, which are dual to each other, are the only two binary Sheffer functions. These were discovered, but not published, by Charles Sanders Peirce around 1880, and rediscovered independently and published by Henry M. Sheffer in 1913. In digital electronics terminology, the binary NAND gate (↑) and the binary NOR gate (↓) are the only binary universal logic gates. The following are the minimal functionally complete sets of logical connectives with arity ≤ 2:

One element: {↑}, {↓}.
Two elements: {∨, ¬}, {∧, ¬}, {→, ¬}, {←, ¬}, {→, ⊥}, {←, ⊥}, {→, ↮}, {←, ↮}, {→, ↛}, {→, ↚}, {←, ↛}, {←, ↚}, {↛, ¬}, {↚, ¬}, {↛, ⊤}, {↚, ⊤}, {↛, ↔}, {↚, ↔}.

Three elements: {∨, ↔, ⊥}, {∨, ↔, ↮}, {∨, ↮, ⊤}, {∧, ↔, ⊥}, {∧, ↔, ↮}, {∧, ↮, ⊤}.

There are no minimal functionally complete sets of more than three at most binary logical connectives. In order to keep the lists above readable, operators that ignore one or more inputs have been omitted. For example, an operator that ignores the first input and outputs the negation of the second can be replaced by a unary negation.

== Examples ==

Examples using the completeness of NAND (↑):

¬A ≡ A ↑ A
A ∧ B ≡ ¬(A ↑ B) ≡ (A ↑ B) ↑ (A ↑ B)
A ∨ B ≡ (¬A) ↑ (¬B) ≡ (A ↑ A) ↑ (B ↑ B)

Examples using the completeness of NOR (↓):
¬A ≡ A ↓ A
A ∨ B ≡ ¬(A ↓ B) ≡ (A ↓ B) ↓ (A ↓ B)
A ∧ B ≡ (¬A) ↓ (¬B) ≡ (A ↓ A) ↓ (B ↓ B)

Note that an electronic circuit or a software function can be optimized by reuse, to reduce the number of gates. For instance, the "A ∧ B" operation, when expressed by ↑ gates, is implemented with the reuse of "A ↑ B":

X ≡ (A ↑ B); A ∧ B ≡ X ↑ X

== In other domains ==

Apart from logical connectives (Boolean operators), functional completeness can be introduced in other domains. For example, a set of reversible gates is called functionally complete if it can express every reversible operator. The 3-input Fredkin gate is a functionally complete reversible gate by itself – a sole sufficient operator. There are many other three-input universal logic gates, such as the Toffoli gate. In quantum computing, the Hadamard gate and the T gate are universal, albeit with a slightly more restrictive definition than that of functional completeness.

== Set theory ==

There is an isomorphism between the algebra of sets and the Boolean algebra; that is, they have the same structure. Consequently, if Boolean operators are mapped to set operators, the "translated" text above is valid also for sets: there are many minimal complete sets of set-theory operators that can generate any other set relations. The more popular minimal complete operator sets are {¬, ∩} and {¬, ∪}. If the universal set is forbidden, set operators are restricted to being falsity- (Ø-) preserving, and cannot be equivalent to a functionally complete Boolean algebra.
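The NAND and NOR constructions from the Examples section, including the gate-reuse identity, can be verified over all four input pairs; a small Python sketch:

```python
from itertools import product

NAND = lambda a, b: not (a and b)
NOR = lambda a, b: not (a or b)

for a, b in product([False, True], repeat=2):
    # NAND as a universal gate
    assert (not a) == NAND(a, a)
    assert (a and b) == NAND(NAND(a, b), NAND(a, b))
    assert (a or b) == NAND(NAND(a, a), NAND(b, b))
    # NOR as a universal gate (the dual constructions)
    assert (not a) == NOR(a, a)
    assert (a or b) == NOR(NOR(a, b), NOR(a, b))
    assert (a and b) == NOR(NOR(a, a), NOR(b, b))
    # gate reuse: compute X = A ↑ B once and feed it to both inputs
    x = NAND(a, b)
    assert (a and b) == NAND(x, x)
```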
== See also ==

Algebra of sets – Identities and relationships involving sets
Boolean algebra – Algebraic manipulation of "true" and "false"
Completeness (logic) – Characteristic of some logical systems
Conjunction/disjunction duality – Properties linking logical conjunction and disjunction
List of Boolean algebra topics
NAND logic – Logic constructed only from NAND gates
NOR logic – Making other gates using just NOR gates
One-instruction set computer – Abstract machine that uses only one instruction

== References ==
Wikipedia/Functional_completeness
In propositional logic, tautology is either of two commonly used rules of replacement. The rules are used to eliminate redundancy in disjunctions and conjunctions when they occur in logical proofs. They are:

The principle of idempotency of disjunction: P ∨ P ⇔ P

and the principle of idempotency of conjunction: P ∧ P ⇔ P

where "⇔" is a metalogical symbol representing "can be replaced in a logical proof with".

== Formal notation ==

Theorems are those logical formulas ϕ where ⊢ ϕ is the conclusion of a valid proof, while the equivalent semantic consequence ⊨ ϕ indicates a tautology. The tautology rule may be expressed as a sequent:

P ∨ P ⊢ P and P ∧ P ⊢ P

where ⊢ is a metalogical symbol meaning that P is a syntactic consequence of P ∨ P, in the one case, and of P ∧ P in the other, in some logical system; or as a rule of inference:

P ∨ P ∴ P and P ∧ P ∴ P

where the rule is that wherever an instance of "P ∨ P" or "P ∧ P" appears on a line of a proof, it can be replaced with "P"; or as the statement of a truth-functional tautology or theorem of propositional logic. The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as:

(P ∨ P) → P and (P ∧ P) → P

where P is a proposition expressed in some formal system.

== References ==
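Since both replacement rules are truth-functional equivalences, they can be confirmed by checking the two possible truth values of P; a trivial Python sketch:

```python
# Idempotency of disjunction (P ∨ P ⇔ P) and conjunction (P ∧ P ⇔ P),
# checked for both truth values of P.
for p in (False, True):
    assert (p or p) == p
    assert (p and p) == p
```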
Wikipedia/Tautology_(rule_of_inference)
Chaff is an algorithm for solving instances of the Boolean satisfiability problem in programming. It was designed by researchers at Princeton University. The algorithm is an instance of the DPLL algorithm with a number of enhancements for efficient implementation. == Implementations == Some available implementations of the algorithm in software are mChaff and zChaff, the latter one being the most widely known and used. zChaff was originally written by Dr. Lintao Zhang, now at Microsoft Research, hence the “z”. It is now maintained by researchers at Princeton University and available for download as both source code and binaries on Linux. zChaff is free for non-commercial use. == References == M. Moskewicz, C. Madigan, Y. Zhao, L. Zhang, S. Malik. Chaff: Engineering an Efficient SAT Solver, 39th Design Automation Conference (DAC 2001), Las Vegas, ACM 2001. Vizel, Y.; Weissenbacher, G.; Malik, S. (2015). "Boolean Satisfiability Solvers and Their Applications in Model Checking". Proceedings of the IEEE. 103 (11): 2021–2035. doi:10.1109/JPROC.2015.2455034. S2CID 10190144. == External links == Web page about zChaff
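Chaff's core is the DPLL procedure; for orientation, here is a minimal DPLL sketch in Python using DIMACS-style signed-integer literals. It shows only unit propagation and naive branching, not Chaff's watched-literal scheme or conflict-driven clause learning, and all names are illustrative:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL: clauses are lists of nonzero ints (positive = variable
    true, negative = variable false). Returns a satisfying assignment dict
    mapping variable -> bool, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}

    def simplify(clauses, lit):
        out = []
        for c in clauses:
            if lit in c:
                continue                      # clause satisfied, drop it
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                   # empty clause: conflict
            out.append(reduced)
        return out

    # unit propagation
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        clauses = simplify(clauses, lit)
        if clauses is None:
            return None
    if not clauses:
        return assignment                     # all clauses satisfied
    # branch on the first literal of the first remaining clause
    lit = clauses[0][0]
    for choice in (lit, -lit):
        trial = dict(assignment)
        trial[abs(choice)] = choice > 0
        reduced = simplify(clauses, choice)
        if reduced is not None:
            result = dpll(reduced, trial)
            if result is not None:
                return result
    return None

# (A ∨ B) ∧ (¬A ∨ B) ∧ (¬B ∨ C): satisfiable, and forces B and C true
model = dpll([[1, 2], [-1, 2], [-2, 3]])
assert model is not None and model[2] and model[3]
# (A) ∧ (¬A): unsatisfiable
assert dpll([[1], [-1]]) is None
```

Chaff's contribution was making these same steps fast on large industrial instances (lazy two-literal watching for unit propagation, the VSIDS decision heuristic, and conflict-driven learning), none of which this sketch attempts.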
Wikipedia/Chaff_algorithm
"Function and Concept" (German: "Funktion und Begriff") is a lecture delivered by Gottlob Frege in 1891. The lecture involves a clarification of his earlier distinction between concepts and objects. It was first published as an article in 1962.

== Overview ==

In general, a concept is a function whose value is always a truth value (139). A relation is a two-place function whose value is always a truth value (146). Frege draws an important distinction between concepts on the basis of their level. Frege tells us that a first-level concept is a one-place function that correlates objects with truth-values (147). First-level concepts have the value of true or false depending on whether the object falls under the concept. So, the concept F has the value the True with the argument the object named by 'Jamie' if and only if Jamie falls under the concept F (or is in the extension of F). Second-level concepts correlate concepts and relations with truth values. So, if we take the relation of identity to be the argument f, the concept expressed by the sentence:

∀x∀y f(x, y) → ∀z (f(x, z) → y = z)

correlates the relation of identity with the True. The conceptual range (Begriffsumfang in Frege 1891, p. 16) follows the truth value of the function: x² = 1 and (x + 1)² = 2(x + 1) have the same conceptual range.

== Translations ==

"On Function and Concept" in Michael Beaney, ed., The Frege Reader, Blackwell, 1997, pp. 130–148

== References ==

== External links ==

"Logical Constants"
"Chronological Catalog of Frege's Work"
List of English translations
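Frege's closing example can be restated computationally: read each concept as a function from arguments to truth values; the two concepts then have the same conceptual range because they agree on every argument, since (x + 1)² = 2(x + 1) simplifies to x² = 1. A sketch checking agreement over an illustrative finite sample:

```python
# Two concepts as functions from arguments to truth values.
f = lambda x: x**2 == 1                  # x² = 1
g = lambda x: (x + 1)**2 == 2 * (x + 1)  # (x+1)² = 2(x+1)

# Expanding: (x+1)² = 2(x+1)  ⇔  x² + 2x + 1 = 2x + 2  ⇔  x² = 1,
# so f and g agree on every argument; check a sample of rationals.
sample = [n / 4 for n in range(-40, 41)]
assert all(f(x) == g(x) for x in sample)
assert f(1) and f(-1) and not f(2)
```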
Wikipedia/Function_and_Concept
A false dilemma, also referred to as false dichotomy or false binary, is an informal fallacy based on a premise that erroneously limits what options are available. The source of the fallacy lies not in an invalid form of inference but in a false premise. This premise has the form of a disjunctive claim: it asserts that one among a number of alternatives must be true. This disjunction is problematic because it oversimplifies the choice by excluding viable alternatives, presenting the viewer with only two absolute choices when, in fact, there could be many. False dilemmas often have the form of treating two contraries, which may both be false, as contradictories, of which one is necessarily true. Various inferential schemes are associated with false dilemmas, for example, the constructive dilemma, the destructive dilemma or the disjunctive syllogism. False dilemmas are usually discussed in terms of deductive arguments, but they can also occur as defeasible arguments. The human liability to commit false dilemmas may be due to the tendency to simplify reality by ordering it through either-or-statements, which is to some extent already built into human language. This may also be connected to the tendency to insist on clear distinction while denying the vagueness of many common expressions. == Definition == A false dilemma is an informal fallacy based on a premise that erroneously limits what options are available. In its most simple form, called the fallacy of bifurcation, all but two alternatives are excluded. A fallacy is an argument, i.e. a series of premises together with a conclusion, that is unsound, i.e. not both valid and true. Fallacies are usually divided into formal and informal fallacies. Formal fallacies are unsound because of their structure, while informal fallacies are unsound because of their content. 
The problematic content in the case of the false dilemma has the form of a disjunctive claim: it asserts that one among a number of alternatives must be true. This disjunction is problematic because it oversimplifies the choice by excluding viable alternatives. Sometimes a distinction is made between a false dilemma and a false dichotomy. On this view, the term "false dichotomy" refers to the false disjunctive claim while the term "false dilemma" refers not just to this claim but to the argument based on this claim. == Types == === Disjunction with contraries === In its most common form, a false dilemma presents the alternatives as contradictories, while in truth they are merely contraries. Two propositions are contradictories if it has to be the case that one is true and the other is false. Two propositions are contraries if at most one of them can be true, but leaves open the option that both of them might be false, which is not possible in the case of contradictories. Contradictories follow the law of the excluded middle but contraries do not. For example, the sentence "the exact number of marbles in the urn is either 10 or not 10" presents two contradictory alternatives. The sentence "the exact number of marbles in the urn is either 10 or 11" presents two contrary alternatives: the urn could also contain 2 marbles or 17 marbles. A common form of using contraries in false dilemmas is to force a choice between extremes on the agent: someone is either good or bad, rich or poor, normal or abnormal. Such cases ignore that there is a continuous spectrum between the extremes that is excluded from the choice. While false dilemmas involving contraries, i.e. exclusive options, are a very common form, this is just a special case: there are also arguments with non-exclusive disjunctions that are false dilemmas. For example, a choice between security and freedom does not involve contraries since these two terms are compatible with each other. 
=== Logical forms ===

In logic, there are two main types of inferences known as dilemmas: the constructive dilemma and the destructive dilemma. In their most simple form, they can be expressed in the following way:

simple constructive: (P → Q), (R → Q), (P ∨ R) ∴ Q
simple destructive: (P → Q), (P → R), (¬Q ∨ ¬R) ∴ ¬P

The source of the fallacy is found in the disjunctive claim in the third premise, i.e. P ∨ R and ¬Q ∨ ¬R respectively. The following is an example of a false dilemma with the simple constructive form: (1) "If you tell the truth, you force your friend into a social tragedy; and therefore, are an immoral person". (2) "If you lie, you are an immoral person (since it is immoral to lie)". (3) "Either you tell the truth, or you lie". Therefore "[y]ou are an immoral person (whatever choice you make in the given situation)". This example constitutes a false dilemma because there are other choices besides telling the truth and lying, like keeping silent. A false dilemma can also occur in the form of a disjunctive syllogism:

disjunctive syllogism: (P ∨ Q), (¬P) ∴ Q

In this form, the first premise (P ∨ Q) is responsible for the fallacious inference. Lewis's trilemma is a famous example of this type of argument involving three disjuncts: "Jesus was either a liar, a lunatic, or Lord". By denying that Jesus was a liar or a lunatic, one is forced to draw the conclusion that he was God. But this leaves out various other alternatives, for example, that Jesus was a prophet.

=== Deductive and defeasible arguments ===

False dilemmas are usually discussed in terms of deductive arguments. But they can also occur as defeasible arguments.
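Both dilemma forms and the disjunctive syllogism are deductively valid, which a brute-force truth-table check confirms; the fallacy therefore lies in the false disjunctive premise, not in the inference itself. A Python sketch:

```python
from itertools import product

def implies(a, b):
    """Material conditional A → B."""
    return (not a) or b

# simple constructive dilemma: (P→Q), (R→Q), (P∨R) ∴ Q
for p, q, r in product([False, True], repeat=3):
    if implies(p, q) and implies(r, q) and (p or r):
        assert q  # conclusion holds whenever all premises hold

# simple destructive dilemma: (P→Q), (P→R), (¬Q∨¬R) ∴ ¬P
for p, q, r in product([False, True], repeat=3):
    if implies(p, q) and implies(p, r) and ((not q) or (not r)):
        assert not p

# disjunctive syllogism: (P∨Q), (¬P) ∴ Q
for p, q in product([False, True], repeat=2):
    if (p or q) and not p:
        assert q
```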
An argument is deductively valid if the truth of its premises ensures the truth of its conclusion. For a valid defeasible argument, on the other hand, it is possible for all its premises to be true and the conclusion to be false. The premises merely offer a certain degree of support for the conclusion but do not ensure it. In the case of a defeasible false dilemma, the support provided for the conclusion is overestimated since various alternatives are not considered in the disjunctive premise.

== Explanation and avoidance ==

Part of understanding fallacies involves going beyond logic to empirical psychology in order to explain why there is a tendency to commit or fall for the fallacy in question. In the case of the false dilemma, the tendency to simplify reality by ordering it through either-or statements may play an important role. This tendency is to some extent built into human language, which is full of pairs of opposites. This type of simplification is sometimes necessary to make decisions when there is not enough time to get a more detailed perspective. In order to avoid false dilemmas, the agent should become aware of additional options besides the prearranged alternatives. Critical thinking and creativity may be necessary to see through the false dichotomy and to discover new alternatives.

== Relation to distinctions and vagueness ==

Some philosophers and scholars believe that "unless a distinction can be made rigorous and precise it isn't really a distinction". An exception is analytic philosopher John Searle, who called it an incorrect assumption that produces false dichotomies. Searle insists that "it is a condition of the adequacy of a precise theory of an indeterminate phenomenon that it should precisely characterize that phenomenon as indeterminate; and a distinction is no less a distinction for allowing for a family of related, marginal, diverging cases."
Similarly, when two options are presented, they often are, although not always, two extreme points on some spectrum of possibilities; this may lend credence to the larger argument by giving the impression that the options are mutually exclusive, even though they need not be. Furthermore, the options in false dichotomies typically are presented as being collectively exhaustive, in which case the fallacy may be overcome, or at least weakened, by considering other possibilities, or perhaps by considering a whole spectrum of possibilities, as in fuzzy logic. This issue arises from real dichotomies in nature; the most prevalent example is the occurrence of an event: it either happened or it did not happen. This ontology sets a logical construct that cannot be reasonably applied to epistemology.

== Examples ==

=== False choice ===

The presentation of a false choice often reflects a deliberate attempt to eliminate several options that may occupy the middle ground on an issue. A common argument against noise pollution laws involves a false choice. It might be argued that in New York City noise should not be regulated, because if it were, a number of businesses would be required to close. This argument assumes that, for example, a bar must be shut down to prevent disturbing levels of noise emanating from it after midnight. This ignores the fact that the law could require the bar to lower its noise levels, or install soundproofing structural elements to keep the noise from excessively transmitting onto others' properties.

=== Black-and-white thinking ===

In psychology, a phenomenon related to the false dilemma is "black-and-white thinking" or "thinking in black and white". There are people who routinely engage in black-and-white thinking, an example of which is someone who categorizes other people as all good or all bad.

== Similar concepts ==

Various different terms are used to refer to false dilemmas.
Some of the following terms are equivalent to the term false dilemma, some refer to special forms of false dilemmas and others refer to closely related concepts.

Bifurcation fallacy
Black-or-white fallacy
Denying a conjunct (similar to a false dichotomy: see Formal fallacy § Denying a conjunct)
Double bind
Either/or fallacy
Fallacy of exhaustive hypotheses
Fallacy of the excluded middle
Fallacy of the false alternative
False binary
False choice
False dichotomy
Invalid disjunction
No middle ground

== See also ==

== References ==

== External links ==

The Black-or-White Fallacy Archived 6 December 2020 at the Wayback Machine entry in The Fallacy Files
Wikipedia/False_dilemma
An existential graph is a type of diagrammatic or visual notation for logical expressions, created by Charles Sanders Peirce, who wrote on graphical logic as early as 1882, and continued to develop the method until his death in 1914. They include both a separate graphical notation for logical statements and a logical calculus, a formal system of rules of inference that can be used to derive theorems.

== Background ==

Peirce found the algebraic notation (i.e. symbolic notation) of logic, especially that of predicate logic, which was still very new during his lifetime and which he himself played a major role in developing, to be philosophically unsatisfactory, because the symbols had their meaning by mere convention. In contrast, he strove for a style of writing in which the signs literally carry their meaning within them – in the terminology of his theory of signs: a system of iconic signs that resemble the represented objects and relations. Thus, the development of an iconic, graphic and – as he intended – intuitive and easy-to-learn logical system was a project that Peirce worked on throughout his life. After at least one aborted approach – the "Entitative Graphs" – the closed system of "Existential Graphs" finally emerged from 1896 onwards. Although considered by their creator to be a clearly superior and more intuitive system, as a mode of writing and as a calculus, they had no major influence on the history of logic. This has been attributed to the facts that, for one, Peirce published little on this topic, and that the published texts were not written in a very understandable way; and, for two, that the linear formula notation in the hands of experts is actually the less complex tool. Hence, the existential graphs received little attention or were seen as unwieldy. From 1963 onwards, works by Don D. Roberts and J.
Jay Zeman, in which Peirce's graphic systems were systematically examined and presented, led to a better understanding; even so, they have today found practical use within only one modern application—the conceptual graphs introduced by John F. Sowa in 1976, which are used in computer science to represent knowledge. However, existential graphs are increasingly reappearing as a subject of research in connection with a growing interest in graphical logic, which is also expressed in attempts to replace the rules of inference given by Peirce with more intuitive ones. The overall system of existential graphs is composed of three subsystems that build on each other, the alpha graphs, the beta graphs and the gamma graphs. The alpha graphs are a purely propositional logical system. Building on this, the beta graphs are a first order logical calculus. The gamma graphs, which have not yet been fully researched and were not completed by Peirce, are understood as a further development of the alpha and beta graphs. When interpreted appropriately, the gamma graphs cover higher-level predicate logic as well as modal logic. As late as 1903, Peirce began a new approach, the "Tinctured Existential Graphs," with which he wanted to replace the previous systems of alpha, beta and gamma graphs and combine their expressiveness and performance in a single new system. Like the gamma graphs, the "Tinctured Existential Graphs" remained unfinished. As calculi, the alpha, beta and gamma graphs are sound (i.e., all expressions derived as graphs are semantically valid). The alpha and beta graphs are also complete (i.e., all propositional or predicate-logically semantically valid expressions can be derived as alpha or beta graphs). 
== The graphs ==

Peirce proposed three systems of existential graphs:

alpha, isomorphic to propositional logic and the two-element Boolean algebra;
beta, isomorphic to first-order logic with identity, with all formulas closed;
gamma, (nearly) isomorphic to normal modal logic.

Alpha nests in beta and gamma. Beta does not nest in gamma, quantified modal logic being more general than put forth by Peirce.

=== Alpha ===

The syntax is:

The blank page;
Single letters or phrases written anywhere on the page;
Any graph may be enclosed by a simple closed curve called a cut or sep. A cut can be empty. Cuts can nest and concatenate at will, but must never intersect.

Any well-formed part of a graph is a subgraph. The semantics are:

The blank page denotes Truth;
Letters, phrases, subgraphs, and entire graphs may be True or False;
To enclose a subgraph with a cut is equivalent to logical negation or Boolean complementation. Hence an empty cut denotes False;
All subgraphs within a given cut are tacitly conjoined.

Hence the alpha graphs are a minimalist notation for sentential logic, grounded in the expressive adequacy of And and Not. The alpha graphs constitute a radical simplification of the two-element Boolean algebra and the truth functors. The depth of an object is the number of cuts that enclose it: the blank page has depth 0, the area inside a single cut has depth 1, and so on.

Rules of inference:

Insertion – Any subgraph may be inserted at an odd-numbered depth.
Erasure – Any subgraph at an even-numbered depth may be erased.

Rules of equivalence:

Double cut – A pair of cuts with nothing between them may be drawn around any subgraph. Likewise two nested cuts with nothing between them may be erased. This rule is equivalent to Boolean involution and double negation elimination.
Iteration/Deiteration – To understand this rule, it is best to view a graph as a tree structure having nodes and ancestors. Any subgraph P in node n may be copied into any node depending on n. Likewise, any subgraph P in node n may be erased if there exists a copy of P in some node ancestral to n (i.e., some node on which n depends). For an equivalent rule in an algebraic context, see C2 in Laws of Form.

A proof manipulates a graph by a series of steps, with each step justified by one of the above rules. If a graph can be reduced by steps to the blank page or an empty cut, it is what is now called a tautology (or the complement thereof, a contradiction). Graphs that cannot be simplified beyond a certain point are analogues of the satisfiable formulas of first-order logic.

=== Beta ===

In the case of beta graphs, the atomic expressions are no longer propositional letters (P, Q, R, ...) or statements ("It rains," "Peirce died in poverty"), but predicates in the sense of predicate logic (see there for more details), possibly abbreviated to predicate letters (F, G, H, ...). A predicate in the sense of predicate logic is a sequence of words with clearly defined spaces that becomes a propositional sentence if you insert a proper name into each space. For example, the word sequence "_ is a human" is a predicate because it gives rise to the declarative sentence "Peirce is a human" if you enter the proper name "Peirce" in the blank space. Likewise, the word sequence "_1 is richer than _2" is a predicate, because it results in the statement "Socrates is richer than Plato" if the proper names "Socrates" or "Plato" are inserted into the spaces.

=== Notation of beta graphs ===

The basic language device is the line of identity, a thickly drawn line of any form. The identity line docks onto the blank space of a predicate to show that the predicate applies to at least one individual.
In order to express that the predicate "_ is a human being" applies to at least one individual – i.e. to say that there is (at least) one human being – one writes an identity line in the blank space of the predicate "_ is a human being." The beta graphs can be read as a system in which all formulas are to be taken as closed, because all variables are implicitly quantified. If the "shallowest" part of a line of identity has even (odd) depth, the associated variable is tacitly existentially (universally) quantified. Zeman (1964) was the first to note that the beta graphs are isomorphic to first-order logic with equality (also see Zeman 1967). However, the secondary literature, especially Roberts (1973) and Shin (2002), does not agree on just how this is so. Peirce's writings do not address this question, because first-order logic was first clearly articulated only after his death, in the 1928 first edition of David Hilbert and Wilhelm Ackermann's Principles of Mathematical Logic. === Gamma === Add to the syntax of alpha a second kind of simple closed curve, written using a dashed rather than a solid line. Peirce proposed rules for this second style of cut, which can be read as the primitive unary operator of modal logic. Zeman (1964) was the first to note that the gamma graphs are equivalent to the well-known modal logics S4 and S5. Hence the gamma graphs can be read as a peculiar form of normal modal logic. This finding of Zeman's has received little attention to this day, but is nonetheless included here as a point of interest. == Peirce's role == The existential graphs are a curious offspring of Peirce the logician/mathematician with Peirce the founder of a major strand of semiotics. Peirce's graphical logic is but one of his many accomplishments in logic and mathematics. 
In a series of papers beginning in 1867, and culminating with his classic paper in the 1885 American Journal of Mathematics, Peirce developed much of the two-element Boolean algebra, propositional calculus, quantification and the predicate calculus, and some rudimentary set theory. Model theorists consider Peirce the first of their kind. He also extended De Morgan's relation algebra. He stopped short of metalogic (which eluded even Principia Mathematica). But Peirce's evolving semiotic theory led him to doubt the value of logic formulated using conventional linear notation, and to prefer that logic and mathematics be notated in two (or even three) dimensions. His work went beyond Euler's diagrams and Venn's 1880 revision thereof. Frege's 1879 work Begriffsschrift also employed a two-dimensional notation for logic, but one very different from Peirce's. Peirce's first published paper on graphical logic (reprinted in Vol. 3 of his Collected Papers) proposed a system dual (in effect) to the alpha existential graphs, called the entitative graphs. He very soon abandoned this formalism in favor of the existential graphs. In 1911 Victoria, Lady Welby showed the existential graphs to C. K. Ogden who felt they could usefully be combined with Welby's thoughts in a "less abstruse form." Otherwise they attracted little attention during his life and were invariably denigrated or ignored after his death, until the PhD theses by Roberts (1964) and Zeman (1964). == See also == Nor operator Conceptual graph Charles Sanders Peirce Propositional calculus == References == == Further reading == === Primary literature === 1931–1935 & 1958. The Collected Papers of Charles Sanders Peirce. Volume 4, Book II: "Existential Graphs", consists of paragraphs 347–584. A discussion also begins in paragraph 617. Paragraphs 347–349 (II.1.1. "Logical Diagram")—Peirce's definition "Logical Diagram (or Graph)" in Baldwin's Dictionary of Philosophy and Psychology (1902), v. 2, p. 28. 
Classics in the History of Psychology Eprint. Paragraphs 350–371 (II.1.2. "Of Euler's Diagrams")—from "Graphs" (manuscript 479) c. 1903. Paragraphs 372–584 Eprint. Paragraphs 372–393 (II.2. "Symbolic Logic")—Peirce's part of "Symbolic Logic" in Baldwin's Dictionary of Philosophy and Psychology (1902) v. 2, pp. 645–650, beginning (near second column's top) with "If symbolic logic be defined...". Paragraph 393 (Baldwin's DPP2 p. 650) is by Peirce and Christine Ladd-Franklin ("C.S.P., C.L.F."). Paragraphs 394–417 (II.3. "Existential Graphs")—from Peirce's pamphlet A Syllabus of Certain Topics of Logic, pp. 15–23, Alfred Mudge & Son, Boston (1903). Paragraphs 418–509 (II.4. "On Existential Graphs, Euler's Diagrams, and Logical Algebra")—from "Logical Tracts, No. 2" (manuscript 492), c. 1903. Paragraphs 510–529 (II.5. "The Gamma Part of Existential Graphs")—from "Lowell Lectures of 1903," Lecture IV (manuscript 467). Paragraphs 530–572 (II.6.)—"Prolegomena To an Apology For Pragmaticism" (1906), The Monist, v. XVI, n. 4, pp. 492-546. Corrections (1907) in The Monist v. XVII, p. 160. Paragraphs 573–584 (II.7. "An Improvement on the Gamma Graphs")—from "For the National Academy of Science, 1906 April Meeting in Washington" (manuscript 490). Paragraphs 617–623 (at least) (in Book III, Ch. 2, §2, paragraphs 594–642)—from "Some Amazing Mazes: Explanation of Curiosity the First", The Monist, v. XVIII, 1908, n. 3, pp. 416-464, see starting p. 440. 1992. "Lecture Three: The Logic of Relatives", Reasoning and the Logic of Things, pp. 146–164. Ketner, Kenneth Laine (editing and introduction), and Hilary Putnam (commentary). Harvard University Press. Peirce's 1898 lectures in Cambridge, Massachusetts. 1977, 2001. Semiotic and Significs: The Correspondence between C.S. Peirce and Victoria Lady Welby. Hardwick, C.S., ed. Lubbock TX: Texas Tech University Press. 2nd edition 2001. A transcription of Peirce's MS 514 (1909), edited with commentary by John Sowa. 
Currently, the chronological critical edition of Peirce's works, the Writings, extends only to 1892. Much of Peirce's work on logical graphs consists of manuscripts written after that date and still unpublished. Hence our understanding of Peirce's graphical logic is likely to change as the remaining 23 volumes of the chronological edition appear. === Secondary literature === Hammer, Eric M. (1998), "Semantics for Existential Graphs," Journal of Philosophical Logic 27: 489–503. Ketner, Kenneth Laine (1981), "The Best Example of Semiosis and Its Use in Teaching Semiotics", American Journal of Semiotics v. I, n. 1–2, pp. 47–83. Article is an introduction to existential graphs. (1990), Elements of Logic: An Introduction to Peirce's Existential Graphs, Texas Tech University Press, Lubbock, TX, 99 pages, spiral-bound. Queiroz, João & Stjernfelt, Frederik (2011), "Diagrammatical Reasoning and Peircean Logic Representation", Semiotica vol. 186 (1/4). (Special issue on Peirce's diagrammatic logic.) Roberts, Don D. (1964), "Existential Graphs and Natural Deduction" in Moore, E. C., and Robin, R. S., eds., Studies in the Philosophy of C. S. Peirce, 2nd series. Amherst MA: University of Massachusetts Press. The first publication to show any sympathy and understanding for Peirce's graphical logic. (1973). The Existential Graphs of C.S. Peirce. John Benjamins. An outgrowth of his 1963 thesis. Shin, Sun-Joo (2002), The Iconic Logic of Peirce's Graphs. MIT Press. Zalamea, Fernando (2012). Peirce's Logic of Continuity. Docent Press, Boston MA. ISBN 9780983700494. Part II: Peirce's Existential Graphs, pp. 76–162. Zeman, J. J. (1964), The Graphical Logic of C.S. Peirce. Archived 2018-09-14 at the Wayback Machine. Unpublished Ph.D. thesis submitted to the University of Chicago. (1967), "A System of Implicit Quantification," Journal of Symbolic Logic 32: 480–504. == External links == Stanford Encyclopedia of Philosophy: Peirce's Logic by Sun-Joo Shin and Eric Hammer. 
Dau, Frithjof, Peirce's Existential Graphs --- Readings and Links. An annotated bibliography on the existential graphs. Gottschall, Christian, Proof Builder Archived 2006-02-12 at the Wayback Machine — Java applet for deriving Alpha graphs. Liu, Xin-Wen, "The literature of C.S. Peirce’s Existential Graphs" (via Wayback Machine), Institute of Philosophy, Chinese Academy of Social Sciences, Beijing, PRC. Sowa, John F. "Laws, Facts, and Contexts: Foundations for Multimodal Reasoning". Retrieved 2009-10-23. (NB. Existential graphs and conceptual graphs.) Van Heuveln, Bram, "Existential Graphs. Archived 2009-08-29 at the Wayback Machine" Dept. of Cognitive Science, Rensselaer Polytechnic Institute. Alpha only. Zeman, Jay J., "Existential Graphs". With four online papers by Peirce.
Wikipedia/Entitative_graph
An existential graph is a type of diagrammatic or visual notation for logical expressions, created by Charles Sanders Peirce, who wrote on graphical logic as early as 1882 and continued to develop the method until his death in 1914. The graphs include both a separate graphical notation for logical statements and a logical calculus, a formal system of rules of inference that can be used to derive theorems. == Background == Peirce found the algebraic (i.e. symbolic) notation of logic, especially that of predicate logic, which was still very new during his lifetime and which he himself played a major role in developing, to be philosophically unsatisfactory, because its symbols have their meaning by mere convention. In contrast, he strove for a style of writing in which the signs literally carry their meaning within them – in the terminology of his theory of signs, a system of iconic signs that resemble the objects and relations they represent. Thus, the development of an iconic, graphical and – as he intended – intuitive and easy-to-learn logical system was a project that Peirce worked on throughout his life. After at least one aborted approach – the "Entitative Graphs" – the closed system of "Existential Graphs" finally emerged from 1896 onwards. Although their creator considered them a clearly superior and more intuitive system, both as a notation and as a calculus, they had no major influence on the history of logic. This has been attributed to two circumstances: first, Peirce published little on this topic, and the published texts were not written in a very understandable way; second, in the hands of experts the linear formula notation is actually the less complex tool. Hence, the existential graphs received little attention or were seen as unwieldy. From 1963 onwards, works by Don D. Roberts and J. 
Jay Zeman, in which Peirce's graphical systems were systematically examined and presented, led to a better understanding; even so, they have today found practical use within only one modern application—the conceptual graphs introduced by John F. Sowa in 1976, which are used in computer science to represent knowledge. However, existential graphs are increasingly reappearing as a subject of research in connection with a growing interest in graphical logic, which is also expressed in attempts to replace the rules of inference given by Peirce with more intuitive ones. The overall system of existential graphs is composed of three subsystems that build on each other: the alpha graphs, the beta graphs and the gamma graphs. The alpha graphs are a purely propositional logical system. Building on this, the beta graphs are a first-order logical calculus. The gamma graphs, which have not yet been fully researched and were not completed by Peirce, are understood as a further development of the alpha and beta graphs. When interpreted appropriately, the gamma graphs cover higher-order predicate logic as well as modal logic. In 1903, Peirce began a new approach, the "Tinctured Existential Graphs," with which he wanted to replace the previous systems of alpha, beta and gamma graphs and combine their expressiveness and performance in a single new system. Like the gamma graphs, the "Tinctured Existential Graphs" remained unfinished. As calculi, the alpha, beta and gamma graphs are sound (i.e., all expressions derived as graphs are semantically valid). The alpha and beta graphs are also complete (i.e., all propositionally or predicate-logically semantically valid expressions can be derived as alpha or beta graphs). 
Wikipedia/Logical_graph
The base rate fallacy, also called base rate neglect or base rate bias, is a type of fallacy in which people tend to ignore the base rate (e.g., general prevalence) in favor of the individuating information (i.e., information pertaining only to a specific case). For example, if someone hears that a friend is very shy and quiet, they might think the friend is more likely to be a librarian than a salesperson. However, there are far more salespeople than librarians overall—hence making it more likely that their friend is actually a salesperson, even if a greater proportion of librarians fit the description of being shy and quiet. Base rate neglect is a specific form of the more general extension neglect. It is also called the prosecutor's fallacy or defense attorney's fallacy when applied to the results of statistical tests (such as DNA tests) in the context of law proceedings. These terms were introduced by William C. Thompson and Edward Schumann in 1987, although it has been argued that their definition of the prosecutor's fallacy extends to many additional invalid imputations of guilt or liability that are not analyzable as errors in base rates or Bayes's theorem. == False positive paradox == An example of the base rate fallacy is the false positive paradox (also known as accuracy paradox). This paradox describes situations where there are more false positive test results than true positives (this means the classifier has a low precision). For example, if a facial recognition camera can identify wanted criminals 99% accurately, but analyzes 10,000 people a day, the high accuracy is outweighed by the number of tests; because of this, the program's list of criminals will likely have far more innocents (false positives) than criminals (true positives) because there are far more innocents than criminals overall. The probability of a positive test result is determined not only by the accuracy of the test but also by the characteristics of the sampled population. 
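The facial-recognition example can be made concrete with a quick calculation. The 99% accuracy and 10,000 scans per day come from the text; the assumption that 5 of those 10,000 people are wanted criminals is invented here purely for illustration.

```python
# Illustrative numbers: 99% accuracy and 10,000 scans/day are from the text;
# the prevalence of 5 criminals per 10,000 scans is an assumption made here.
daily_scans = 10_000
criminals = 5                                 # assumed prevalence (not from the text)
innocents = daily_scans - criminals
accuracy = 0.99                               # applied to both classes

true_positives = accuracy * criminals         # criminals correctly flagged
false_positives = (1 - accuracy) * innocents  # innocents wrongly flagged

precision = true_positives / (true_positives + false_positives)
print(f"flagged: {true_positives:.2f} criminals vs {false_positives:.2f} innocents")
print(f"share of flagged people who are criminals: {precision:.1%}")
```

Even under these generous assumptions, innocents dominate the flagged list by roughly twenty to one, because the 1% error rate applies to a vastly larger group.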
The fundamental issue is that the far higher prevalence of true negatives means that the pool of people testing positive will be dominated by false positives, given that even a small fraction of the much larger [negative] group will produce a larger number of indicated positives than the larger fraction of the much smaller [positive] group. When the prevalence, the proportion of those who have a given condition, is lower than the test's false positive rate, even tests that have a very low risk of giving a false positive in an individual case will give more false than true positives overall. It is especially counter-intuitive when interpreting a positive result in a test on a low-prevalence population after having dealt with positive results drawn from a high-prevalence population. If the false positive rate of the test is higher than the proportion of the new population with the condition, then a test administrator whose experience has been drawn from testing in a high-prevalence population may conclude from experience that a positive test result usually indicates a positive subject, when in fact a false positive is far more likely to have occurred. == Examples == === Example 1: Disease === ==== High-prevalence population ==== Imagine running an infectious disease test on a population A of 1,000 persons, of which 40% are infected. The test has a false positive rate of 5% (0.05) and a false negative rate of zero. The expected outcome of the 1,000 tests on population A would be: all 400 infected persons test positive (true positives), while of the 600 uninfected persons, 30 (5%) falsely test positive and the remaining 570 test negative. So, in population A, a person receiving a positive test could be over 93% confident (400/(30 + 400)) that it correctly indicates infection. ==== Low-prevalence population ==== Now consider the same test applied to population B, of which only 2% are infected. The expected outcome of 1,000 tests on population B would be: all 20 infected persons test positive, while of the 980 uninfected persons, 49 (5%) falsely test positive and 931 test negative. In population B, only 20 of the 69 total people with a positive test result are actually infected. 
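The two population outcomes can be reproduced with a few lines of arithmetic. The helper name `positive_predictive_value` is coined here for the sketch; the inputs are exactly the figures from the example.

```python
# Disease-test example: a 5% false positive rate and no false negatives,
# applied to a 40%-infected population (A) and a 2%-infected population (B).
def positive_predictive_value(population, prevalence, fp_rate):
    """Fraction of positive results that are true positives."""
    infected = population * prevalence
    uninfected = population - infected
    true_pos = infected               # zero false negatives: every infection is caught
    false_pos = uninfected * fp_rate
    return true_pos / (true_pos + false_pos)

ppv_a = positive_predictive_value(1000, 0.40, 0.05)   # population A
ppv_b = positive_predictive_value(1000, 0.02, 0.05)   # population B
print(f"population A: {ppv_a:.1%}")   # 400/(400 + 30), about 93%
print(f"population B: {ppv_b:.1%}")   # 20/(20 + 49), about 29%
```

The same test, with the same error rates, yields wildly different predictive values because only the prevalence changed.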
So, the probability of actually being infected after one is told that one is infected is only 29% (20/(20 + 49)) for a test that otherwise appears to be "95% accurate". A tester with experience of group A might find it a paradox that in group B, a result that had usually correctly indicated infection is now usually a false positive. The confusion of the posterior probability of infection with the prior probability of receiving a false positive is a natural error after receiving a health-threatening test result. === Example 2: Drunk drivers === Imagine that a group of police officers have breathalyzers displaying false drunkenness in 5% of the cases in which the driver is sober. However, the breathalyzers never fail to detect a truly drunk person. One in a thousand drivers is driving drunk. Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. No other information is known about them. Many would estimate the probability that the driver is drunk as high as 95%, but the correct probability is about 2%. An explanation for this is as follows. On average, for every 1,000 drivers tested: 1 driver is drunk, and it is 100% certain that for that driver there is a true positive test result, so there is 1 true positive test result; 999 drivers are not drunk, and among those drivers there are 5% false positive test results, so there are 49.95 false positive test results. Therefore, the probability that any given driver among the 1 + 49.95 = 50.95 positive test results really is drunk is 1/50.95 ≈ 0.019627. The validity of this result does, however, hinge on the validity of the initial assumption that the police officer stopped the driver truly at random, and not because of bad driving. 
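The counting argument above can be checked directly with Bayes' theorem and the law of total probability, using only the probabilities stated in the example:

```python
# Drunk-driver example: prior of 1 in 1,000, perfect sensitivity,
# 5% false positive rate on sober drivers.
p_drunk = 0.001
p_sober = 1 - p_drunk
p_pos_given_drunk = 1.00   # the breathalyzer never misses a drunk driver
p_pos_given_sober = 0.05   # 5% false positive rate

# Law of total probability: p(D), the overall chance of a positive reading.
p_pos = p_pos_given_drunk * p_drunk + p_pos_given_sober * p_sober

# Bayes' theorem: p(drunk | D).
p_drunk_given_pos = p_pos_given_drunk * p_drunk / p_pos
print(f"p(D) = {p_pos:.5f}")                       # 0.05095
print(f"p(drunk | D) = {p_drunk_given_pos:.6f}")   # 0.019627, i.e. about 2%
```

This is the same 1/50.95 ratio as in the counting argument, since 50.95 positives per 1,000 drivers is just 1,000 × p(D).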
If that or another non-arbitrary reason for stopping the driver was present, then the calculation also involves the probability of a drunk driver driving competently and a non-drunk driver driving (in-)competently. More formally, the same probability of roughly 0.02 can be established using Bayes' theorem. The goal is to find the probability that the driver is drunk given that the breathalyzer indicated they are drunk, which can be represented as p(drunk | D), where D means that the breathalyzer indicates that the driver is drunk. Using Bayes' theorem, p(drunk | D) = p(D | drunk) p(drunk) / p(D). The following information is known in this scenario: p(drunk) = 0.001, p(sober) = 0.999, p(D | drunk) = 1.00, and p(D | sober) = 0.05. As can be seen from the formula, one needs p(D) for Bayes' theorem, which can be computed from the preceding values using the law of total probability: p(D) = p(D | drunk) p(drunk) + p(D | sober) p(sober) = (1.00 × 0.001) + (0.05 × 0.999) = 0.05095. Plugging these numbers into Bayes' theorem, one finds that p(drunk | D) = (1.00 × 0.001) / 0.05095 ≈ 0.019627, which is the precision of the test. === Example 3: Terrorist identification === In a city of 1 million inhabitants, let there be 100 terrorists and 999,900 non-terrorists. 
To simplify the example, it is assumed that all people present in the city are inhabitants. Thus, the base rate probability of a randomly selected inhabitant of the city being a terrorist is 0.0001, and the base rate probability of that same inhabitant being a non-terrorist is 0.9999. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance camera and automatic facial recognition software. The software has two failure rates of 1%:

The false negative rate: If the camera scans a terrorist, a bell will ring 99% of the time, and it will fail to ring 1% of the time.
The false positive rate: If the camera scans a non-terrorist, a bell will not ring 99% of the time, but it will ring 1% of the time.

Suppose now that an inhabitant triggers the alarm. Someone committing the base rate fallacy would infer that there is a 99% probability that the detected person is a terrorist. Although the inference seems to make sense, it is actually bad reasoning, and a calculation below will show that the probability of a terrorist is actually near 1%, not near 99%. The fallacy arises from confusing the natures of two different failure rates. The 'number of non-bells per 100 terrorists' (P(¬B | T), or the probability that the bell fails to ring given the inhabitant is a terrorist) and the 'number of non-terrorists per 100 bells' (P(¬T | B), or the probability that the inhabitant is a non-terrorist given the bell rings) are unrelated quantities; one is not necessarily equal (or even close) to the other. To show this, consider what happens if an identical alarm system were set up in a second city with no terrorists at all. As in the first city, the alarm sounds for 1 out of every 100 non-terrorist inhabitants detected, but unlike in the first city, the alarm never sounds for a terrorist. Therefore, 100% of all occasions of the alarm sounding are for non-terrorists, but a false negative rate cannot even be calculated.
The 'number of non-terrorists per 100 bells' in that city is 100, yet P(T | B) = 0%. There is zero chance that a terrorist has been detected given the ringing of the bell. Imagine that the first city's entire population of one million people pass in front of the camera. About 99 of the 100 terrorists will trigger the alarm—and so will about 9,999 of the 999,900 non-terrorists. Therefore, about 10,098 people will trigger the alarm, among which about 99 will be terrorists. The probability that a person triggering the alarm actually is a terrorist is only about 99 in 10,098, which is less than 1% and very, very far below the initial guess of 99%. The base rate fallacy is so misleading in this example because there are many more non-terrorists than terrorists, and the number of false positives (non-terrorists scanned as terrorists) is so much larger than the true positives (terrorists scanned as terrorists). Multiple practitioners have argued that as the base rate of terrorism is extremely low, using data mining and predictive algorithms to identify terrorists cannot feasibly work due to the false positive paradox. Estimates of the number of false positives for each accurate result vary from over ten thousand to one billion; consequently, investigating each lead would be cost- and time-prohibitive. The level of accuracy required to make these models viable is likely unachievable. Foremost, the low base rate of terrorism also means there is a lack of data with which to make an accurate algorithm. Further, in the context of detecting terrorism false negatives are highly undesirable and thus must be minimised as much as possible; however, this requires increasing sensitivity at the cost of specificity, increasing false positives. It is also questionable whether the use of such models by law enforcement would meet the requisite burden of proof given that over 99% of results would be false positives. === Example 4: biological testing of a suspect === A crime is committed. 
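The surveillance calculation from Example 3 above (about 99 true positives among roughly 10,098 alarms) can also be scripted; a short Python sketch under the article's stated assumptions (one million inhabitants, 100 terrorists, 1% error rates):

```python
# Example 3 in counts: how many alarms are true vs. false positives.
population = 1_000_000
terrorists = 100
non_terrorists = population - terrorists   # 999,900

true_positives = 0.99 * terrorists         # about 99 terrorists trigger the alarm
false_positives = 0.01 * non_terrorists    # about 9,999 non-terrorists do too

total_alarms = true_positives + false_positives   # about 10,098
p_terrorist_given_alarm = true_positives / total_alarms
print(round(p_terrorist_given_alarm, 4))   # 0.0098, i.e. below 1%
```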
Forensic analysis determines that the perpetrator has a certain blood type shared by 10% of the population. A suspect is arrested, and found to have that same blood type. A prosecutor might charge the suspect with the crime on that basis alone, and claim at trial that the probability that the defendant is guilty is 90%. However, this conclusion is only close to correct if the defendant was selected as the main suspect based on robust evidence discovered prior to the blood test and unrelated to it. Otherwise, the reasoning presented is flawed, as it overlooks the high prior probability (that is, prior to the blood test) that he is a random innocent person. Assume, for instance, that 1000 people live in the town where the crime occurred. This means that 100 people live there who have the perpetrator's blood type, of whom only one is the true perpetrator; therefore, the true probability that the defendant is guilty – based only on the fact that his blood type matches that of the killer – is only 1%, far less than the 90% argued by the prosecutor. The prosecutor's fallacy involves assuming that the prior probability of a random match is equal to the probability that the defendant is innocent. When using it, a prosecutor questioning an expert witness may ask: "The odds of finding this evidence on an innocent man are so small that the jury can safely disregard the possibility that this defendant is innocent, correct?" The claim assumes that the probability that evidence is found on an innocent man is the same as the probability that a man is innocent given that evidence was found on him, which is not true. Whilst the former is usually small (10% in the previous example) due to good forensic evidence procedures, the latter (99% in that example) does not directly relate to it and will often be much higher, since, in fact, it depends on the likely quite high prior odds of the defendant being a random innocent person. == Examples in law == === O. J. Simpson trial === O. J. 
Simpson was tried and acquitted in 1995 for the murders of his ex-wife Nicole Brown Simpson and her friend Ronald Goldman. Crime scene blood matched Simpson's with characteristics shared by 1 in 400 people. However, the defense argued that the number of people from Los Angeles matching the sample could fill a football stadium and that the figure of 1 in 400 was useless. It would have been incorrect, and an example of prosecutor's fallacy, to rely solely on the "1 in 400" figure to deduce that a given person matching the sample would be likely to be the culprit. In the same trial, the prosecution presented evidence that Simpson had been violent toward his wife. The defense argued that there was only one woman murdered for every 2500 women who were subjected to spousal abuse, and that any history of Simpson being violent toward his wife was irrelevant to the trial. However, the reasoning behind the defense's calculation was fallacious. According to author Gerd Gigerenzer, the correct probability requires additional context: Simpson's wife had not only been subjected to domestic violence, but rather subjected to domestic violence (by Simpson) and killed (by someone). Gigerenzer writes "the chances that a batterer actually murdered his partner, given that she has been killed, is about 8 in 9 or approximately 90%". While most cases of spousal abuse do not end in murder, most cases of murder where there is a history of spousal abuse were committed by the spouse. === Sally Clark case === Sally Clark, a British woman, was accused in 1998 of having killed her first child at 11 weeks of age and then her second child at 8 weeks of age. The prosecution had expert witness Sir Roy Meadow, a professor and consultant paediatrician, testify that the probability of two children in the same family dying from SIDS is about 1 in 73 million. 
That was much less frequent than the actual rate measured in historical data – Meadow estimated it from single-SIDS death data, and the assumption that the probability of such deaths should be uncorrelated between infants. Meadow acknowledged that 1-in-73 million is not an impossibility, but argued that such accidents would happen "once every hundred years" and that, in a country of 15 million 2-child families, it is vastly more likely that the double-deaths are due to Münchausen syndrome by proxy than to such a rare accident. However, there is good reason to suppose that the likelihood of a death from SIDS in a family is significantly greater if a previous child has already died in these circumstances (a genetic predisposition to SIDS is likely to invalidate that assumed statistical independence), making some families more susceptible to SIDS and the error an outcome of the ecological fallacy. The likelihood of two SIDS deaths in the same family cannot be soundly estimated by squaring the likelihood of a single such death in all otherwise similar families. The 1-in-73 million figure greatly underestimated the chance of two successive accidents, but even if that assessment were accurate, the court seems to have missed the fact that the 1-in-73 million number meant nothing on its own. As an a priori probability, it should have been weighed against the a priori probabilities of the alternatives.
Given that two deaths had occurred, one of the following explanations must be true, and all of them are a priori extremely improbable: Two successive deaths in the same family, both by SIDS Double homicide (the prosecution's case) Other possibilities (including one homicide and one case of SIDS) It is unclear whether an estimate of the probability for the second possibility was ever proposed during the trial, or whether the comparison of the first two probabilities was understood to be the key estimate to make in the statistical analysis assessing the prosecution's case against the case for innocence. Clark was convicted in 1999, resulting in a press release by the Royal Statistical Society which pointed out the mistakes. In 2002, Ray Hill (a mathematics professor at Salford) attempted to accurately compare the chances of these two possible explanations; he concluded that successive accidents are between 4.5 and 9 times more likely than are successive murders, so that the a priori odds of Clark's guilt were between 4.5 to 1 and 9 to 1 against. After the court found that the forensic pathologist who had examined both babies had withheld exculpatory evidence, a higher court later quashed Clark's conviction, on 29 January 2003. == Findings in psychology == In experiments, people have been found to prefer individuating information over general information when the former is available. In some experiments, students were asked to estimate the grade point averages (GPAs) of hypothetical students. When given relevant statistics about GPA distribution, students tended to ignore them if given descriptive information about the particular student even if the new descriptive information was obviously of little or no relevance to school performance. This finding has been used to argue that interviews are an unnecessary part of the college admissions process because interviewers are unable to pick successful candidates better than basic statistics. 
Psychologists Daniel Kahneman and Amos Tversky attempted to explain this finding in terms of a simple rule or "heuristic" called representativeness. They argued that many judgments relating to likelihood, or to cause and effect, are based on how representative one thing is of another, or of a category. Kahneman considers base rate neglect to be a specific form of extension neglect. Richard Nisbett has argued that some attributional biases like the fundamental attribution error are instances of the base rate fallacy: people do not use the "consensus information" (the "base rate") about how others behaved in similar situations and instead prefer simpler dispositional attributions. There is considerable debate in psychology on the conditions under which people do or do not appreciate base rate information. Researchers in the heuristics-and-biases program have stressed empirical findings showing that people tend to ignore base rates and make inferences that violate certain norms of probabilistic reasoning, such as Bayes' theorem. The conclusion drawn from this line of research was that human probabilistic thinking is fundamentally flawed and error-prone. Other researchers have emphasized the link between cognitive processes and information formats, arguing that such conclusions are not generally warranted. Consider again Example 2 from above. The required inference is to estimate the (posterior) probability that a (randomly picked) driver is drunk, given that the breathalyzer test is positive. Formally, this probability can be calculated using Bayes' theorem, as shown above. However, there are different ways of presenting the relevant information. Consider the following, formally equivalent variant of the problem: 1 out of 1000 drivers are driving drunk. The breathalyzers never fail to detect a truly drunk person. For 50 out of the 999 drivers who are not drunk the breathalyzer falsely displays drunkenness. 
Suppose the policemen then stop a driver at random, and force them to take a breathalyzer test. It indicates that they are drunk. No other information is known about them. Estimate the probability the driver is really drunk. In this case, the relevant numerical information—p(drunk), p(D | drunk), p(D | sober)—is presented in terms of natural frequencies with respect to a certain reference class (see reference class problem). Empirical studies show that people's inferences correspond more closely to Bayes' rule when information is presented this way, helping to overcome base-rate neglect in laypeople and experts. As a consequence, organizations like the Cochrane Collaboration recommend using this kind of format for communicating health statistics. Teaching people to translate these kinds of Bayesian reasoning problems into natural frequency formats is more effective than merely teaching them to plug probabilities (or percentages) into Bayes' theorem. It has also been shown that graphical representations of natural frequencies (e.g., icon arrays, hypothetical outcome plots) help people to make better inferences. One important reason why natural frequency formats are helpful is that this information format facilitates the required inference because it simplifies the necessary calculations. This can be seen when using an alternative way of computing the required probability p(drunk|D): p ( d r u n k ∣ D ) = N ( d r u n k ∩ D ) N ( D ) = 1 51 = 0.0196 {\displaystyle p(\mathrm {drunk} \mid D)={\frac {N(\mathrm {drunk} \cap D)}{N(D)}}={\frac {1}{51}}=0.0196} where N(drunk ∩ D) denotes the number of drivers that are drunk and get a positive breathalyzer result, and N(D) denotes the total number of cases with a positive breathalyzer result. The equivalence of this equation to the above one follows from the axioms of probability theory, according to which N(drunk ∩ D) = N × p (D | drunk) × p (drunk). 
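The natural-frequency computation of p(drunk | D) = 1/51 is, as noted above, just a ratio of counts; a minimal Python sketch (the variable names are ours):

```python
# Natural-frequency version of the breathalyzer problem: out of 1,000 drivers,
# 1 is drunk (and always tests positive) and 50 of the 999 sober drivers
# falsely test positive.
n_drunk_and_positive = 1
n_sober_and_positive = 50
n_positive = n_drunk_and_positive + n_sober_and_positive   # 51

p_drunk_given_positive = n_drunk_and_positive / n_positive
print(round(p_drunk_given_positive, 4))  # 0.0196
```

No multiplication or normalization is needed, which is exactly the psychological advantage the text attributes to this format.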
Importantly, although this equation is formally equivalent to Bayes' rule, it is not psychologically equivalent. Using natural frequencies simplifies the inference because the required mathematical operation can be performed on natural numbers, instead of normalized fractions (i.e., probabilities), because it makes the high number of false positives more transparent, and because natural frequencies exhibit a "nested-set structure". Not every frequency format facilitates Bayesian reasoning. Natural frequencies refer to frequency information that results from natural sampling, which preserves base rate information (e.g., number of drunken drivers when taking a random sample of drivers). This is different from systematic sampling, in which base rates are fixed a priori (e.g., in scientific experiments). In the latter case it is not possible to infer the posterior probability p(drunk | positive test) from comparing the number of drivers who are drunk and test positive compared to the total number of people who get a positive breathalyzer result, because base rate information is not preserved and must be explicitly re-introduced using Bayes' theorem. == See also == Precision and recall Data dredging – Misuse of data analysis Evidence under Bayes' theorem Inductive argument – Method of logical reasoning List of cognitive biases List of paradoxes – List of statements that appear to contradict themselves Misleading vividness – Evidence relying on personal testimony Prevention paradox – Situation in epidemiology Simpson's paradox – Error in statistical reasoning with groups Intuitive statistics – cognitive phenomenon where organisms use data to make generalizations and predictions about the world R v Adams == References == == External links == The Base Rate Fallacy The Fallacy Files
Wikipedia/Base_rate_fallacy
In mathematical logic, the implicational propositional calculus is a version of classical propositional calculus that uses only one connective, called implication or conditional. In formulas, this binary operation is indicated by "implies", "if ..., then ...", "→", " → {\displaystyle \rightarrow } ", etc.

== Functional (in)completeness ==

Implication alone is not functionally complete as a logical operator because one cannot form all other two-valued truth functions from it. For example, the two-place truth function that always returns false is not definable from → and arbitrary propositional variables: any formula constructed from → and propositional variables must receive the value true when all of its variables are evaluated to true. It follows that {→} is not functionally complete. However, if one adds a nullary connective ⊥ for falsity, then one can define all other truth functions. Formulas over the resulting set of connectives {→, ⊥} are called f-implicational. If P and Q are propositions, then:

¬P is equivalent to P → ⊥
P ∧ Q is equivalent to (P → (Q → ⊥)) → ⊥
P ∨ Q is equivalent to (P → Q) → Q
P ↔ Q is equivalent to ((P → Q) → ((Q → P) → ⊥)) → ⊥

Since the above operators are known to be functionally complete, it follows that any truth function can be expressed in terms of → and ⊥.

== Axiom system ==

The following statements are considered tautologies (irreducible and intuitively true, by definition).

Axiom schema 1 is P → (Q → P).
Axiom schema 2 is (P → (Q → R)) → ((P → Q) → (P → R)).
Axiom schema 3 (Peirce's law) is ((P → Q) → P) → P.

The one non-nullary rule of inference (modus ponens) is: from P and P → Q, infer Q, where in each case P, Q, and R may be replaced by any formulas that contain only "→" as a connective. If Γ is a set of formulas and A a formula, then Γ ⊢ A {\displaystyle \Gamma \vdash A} means that A is derivable using the axioms and rules above and formulas from Γ as additional hypotheses.
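The four definitions in terms of → and ⊥ above can be verified by truth table; a minimal Python sketch (encoding → as a helper function `imp` and ⊥ as the constant `False`; these names are ours, not standard notation):

```python
from itertools import product

def imp(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

F = False  # the nullary connective ⊥ (falsity)

for P, Q in product([False, True], repeat=2):
    assert (not P) == imp(P, F)                                    # ¬P      = P → ⊥
    assert (P and Q) == imp(imp(P, imp(Q, F)), F)                  # P ∧ Q   = (P → (Q → ⊥)) → ⊥
    assert (P or Q) == imp(imp(P, Q), Q)                           # P ∨ Q   = (P → Q) → Q
    assert (P == Q) == imp(imp(imp(P, Q), imp(imp(Q, P), F)), F)   # P ↔ Q   = ((P→Q) → ((Q→P) → ⊥)) → ⊥

print("all four definitions agree with the usual connectives")
```

The check also illustrates why ⊥ is needed: every formula built from `imp` alone evaluates to true when all its variables are true, so the constant-false function is unreachable without it.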
Łukasiewicz (1948) found an axiom system for the implicational calculus that replaces the schemas 1–3 above with a single schema ((P → Q) → R) → ((R → P) → (S → P)). He also argued that there is no shorter axiom system. == Basic properties of derivation == Since all axioms and rules of the calculus are schemata, derivation is closed under substitution: If Γ ⊢ A , {\displaystyle \Gamma \vdash A,} then σ ( Γ ) ⊢ σ ( A ) , {\displaystyle \sigma (\Gamma )\vdash \sigma (A),} where σ is any substitution (of formulas using only implication). The implicational propositional calculus also satisfies the deduction theorem: If Γ , A ⊢ B {\displaystyle \Gamma ,A\vdash B} , then Γ ⊢ A → B . {\displaystyle \Gamma \vdash A\to B.} As explained in the deduction theorem article, this holds for any axiomatic extension of the system containing axiom schemas 1 and 2 above and modus ponens. == Completeness == The implicational propositional calculus is semantically complete with respect to the usual two-valued semantics of classical propositional logic. That is, if Γ is a set of implicational formulas, and A is an implicational formula entailed by Γ, then Γ ⊢ A {\displaystyle \Gamma \vdash A} . === Proof === A proof of the completeness theorem is outlined below. First, using the compactness theorem and the deduction theorem, we may reduce the completeness theorem to its special case with empty Γ, i.e., we only need to show that every tautology is derivable in the system. The proof is similar to completeness of full propositional logic, but it also uses the following idea to overcome the functional incompleteness of implication. If A and F are formulas, then A → F is equivalent to (¬A*) ∨ F, where A* is the result of replacing in A all, some, or none of the occurrences of F by falsity. Similarly, (A → F) → F is equivalent to A* ∨ F. So under some conditions, one can use them as substitutes for saying A* is false or A* is true respectively. 
We first observe some basic facts about derivability:

A → B , B → C ⊢ A → C {\displaystyle A\to B,B\to C\vdash A\to C} (1)

Indeed, we can derive A → (B → C) using Axiom 1, and then derive A → C by modus ponens (twice) from Ax. 2.

A → B ⊢ ( B → C ) → ( A → C ) {\displaystyle A\to B\vdash (B\to C)\to (A\to C)} (2)

This follows from (1) by the deduction theorem.

A → C , ( A → B ) → C ⊢ C {\displaystyle A\to C,(A\to B)\to C\vdash C} (3)

If we further assume C → B, we can derive A → B using (1), then we derive C by modus ponens. This shows A → C , ( A → B ) → C , C → B ⊢ C {\displaystyle A\to C,(A\to B)\to C,C\to B\vdash C} , and the deduction theorem gives A → C , ( A → B ) → C ⊢ ( C → B ) → C {\displaystyle A\to C,(A\to B)\to C\vdash (C\to B)\to C} . We apply Ax. 3 to obtain (3).

Let F be an arbitrary fixed formula. For any formula A, we define A0 = (A → F) and A1 = ((A → F) → F). Consider only formulas in propositional variables p1, ..., pn. We claim that for every formula A in these variables and every truth assignment e,

p 1 e ( p 1 ) , … , p n e ( p n ) ⊢ A e ( A ) {\displaystyle p_{1}^{e(p_{1})},\dots ,p_{n}^{e(p_{n})}\vdash A^{e(A)}} (4)

We prove (4) by induction on A. The base case A = pi is trivial. Let A = (B → C). We distinguish three cases:

e(C) = 1. Then also e(A) = 1. We have ( C → F ) → F ⊢ ( ( B → C ) → F ) → F {\displaystyle (C\to F)\to F\vdash ((B\to C)\to F)\to F} by applying (2) twice to the axiom C → (B → C). Since we have derived (C → F) → F by the induction hypothesis, we can infer ((B → C) → F) → F.

e(B) = 0. Then again e(A) = 1. The deduction theorem applied to (3) gives B → F ⊢ ( ( B → C ) → F ) → F . {\displaystyle B\to F\vdash ((B\to C)\to F)\to F.} Since we have derived B → F by the induction hypothesis, we can infer ((B → C) → F) → F.

e(B) = 1 and e(C) = 0. Then e(A) = 0. We have ( B → F ) → F , C → F , B → C ⊢ B → F by (1) ⊢ F by modus ponens, {\displaystyle {\begin{aligned}(B\to F)\to F,C\to F,B\to C&\vdash B\to F&&{\text{by (1)}}\\&\vdash F&&{\text{by modus ponens,}}\end{aligned}}} thus ( B → F ) → F , C → F ⊢ ( B → C ) → F {\displaystyle (B\to F)\to F,C\to F\vdash (B\to C)\to F} by the deduction theorem. We have derived (B → F) → F and C → F by the induction hypothesis, hence we can infer (B → C) → F.

This completes the proof of (4).
Now let F be a tautology in variables p1, ..., pn. We will prove by reverse induction on k = n, ..., 0 that for every assignment e,

p 1 e ( p 1 ) , … , p k e ( p k ) ⊢ F {\displaystyle p_{1}^{e(p_{1})},\dots ,p_{k}^{e(p_{k})}\vdash F} (5)

The base case k = n follows from a special case of (4), using F e ( F ) = F 1 = ( ( F → F ) → F ) {\displaystyle F^{e(F)}=F^{1}=((F\to F)\to F)} and the fact that F → F is a theorem by the deduction theorem; modus ponens then yields F. Assume that (5) holds for k + 1; we will show it for k. By applying the deduction theorem to the induction hypothesis, we obtain

p 1 e ( p 1 ) , … , p k e ( p k ) ⊢ ( p k + 1 → F ) → F , p 1 e ( p 1 ) , … , p k e ( p k ) ⊢ ( ( p k + 1 → F ) → F ) → F , {\displaystyle {\begin{aligned}p_{1}^{e(p_{1})},\dots ,p_{k}^{e(p_{k})}&\vdash (p_{k+1}\to F)\to F,\\p_{1}^{e(p_{1})},\dots ,p_{k}^{e(p_{k})}&\vdash ((p_{k+1}\to F)\to F)\to F,\end{aligned}}}

by first setting e(pk+1) = 0 and second setting e(pk+1) = 1. From this we derive (5) using modus ponens. For k = 0 we obtain that the tautology F is provable without assumptions. This is what was to be proved.

This proof is constructive. That is, given a tautology, one could actually follow the instructions and create a proof of it from the axioms. However, the length of such a proof increases exponentially with the number of propositional variables in the tautology, hence it is not a practical method for any but the very shortest tautologies.

== The Bernays–Tarski axiom system ==

The Bernays–Tarski axiom system is often used. In particular, Łukasiewicz's paper derives the Bernays–Tarski axioms from Łukasiewicz's sole axiom as a means of showing its completeness. It differs from the axiom schemas above by replacing axiom schema 2, (P→(Q→R))→((P→Q)→(P→R)), with

Axiom schema 2': (P→Q)→((Q→R)→(P→R)),

which is called hypothetical syllogism. This makes derivation of the deduction meta-theorem a little more difficult, but it can still be done. We show that from P→(Q→R) and P→Q one can derive P→R. This fact can be used in lieu of axiom schema 2 to get the meta-theorem.
1. P→(Q→R) (given)
2. P→Q (given)
3. (P→Q)→((Q→R)→(P→R)) (ax 2')
4. (Q→R)→(P→R) (mp 2,3)
5. (P→(Q→R))→(((Q→R)→(P→R))→(P→(P→R))) (ax 2')
6. ((Q→R)→(P→R))→(P→(P→R)) (mp 1,5)
7. P→(P→R) (mp 4,6)
8. (P→(P→R))→(((P→R)→R)→(P→R)) (ax 2')
9. ((P→R)→R)→(P→R) (mp 7,8)
10. (((P→R)→R)→(P→R))→(P→R) (ax 3)
11. P→R (mp 9,10) qed

== Satisfiability and validity ==

Satisfiability in the implicational propositional calculus is trivial, because every formula is satisfiable: just set all variables to true. Falsifiability in the implicational propositional calculus is NP-complete, meaning that validity (tautology) is co-NP-complete. In this case, a useful technique is to presume that the formula is not a tautology and attempt to find a valuation that makes it false. If one succeeds, then it is indeed not a tautology. If one fails, then it is a tautology.

Example of a non-tautology: Suppose [(A→B)→((C→A)→E)]→([F→((C→D)→E)]→[(A→F)→(D→E)]) is false. Then (A→B)→((C→A)→E) is true; F→((C→D)→E) is true; A→F is true; D is true; and E is false. Since D is true, C→D is true. So the truth of F→((C→D)→E) is equivalent to the truth of F→E. Then since E is false and F→E is true, we get that F is false. Since A→F is true, A is false. Thus A→B is true and (C→A)→E is true. C→A is false, so C is true. The value of B does not matter, so we can arbitrarily choose it to be true. Summing up, the valuation that sets B, C and D to be true and A, E and F to be false will make [(A→B)→((C→A)→E)]→([F→((C→D)→E)]→[(A→F)→(D→E)]) false. So it is not a tautology.

Example of a tautology: Suppose ((A→B)→C)→((C→A)→(D→A)) is false. Then (A→B)→C is true; C→A is true; D is true; and A is false. Since A is false, A→B is true. So C is true. Thus A must be true, contradicting the fact that it is false. Thus there is no valuation that makes ((A→B)→C)→((C→A)→(D→A)) false. Consequently, it is a tautology.

== Adding an axiom schema ==

What would happen if another axiom schema were added to those listed above? There are two cases: (1) it is a tautology; or (2) it is not a tautology.
If it is a tautology, then the set of theorems remains the set of tautologies as before. However, in some cases it may be possible to find significantly shorter proofs for theorems. Nevertheless, the minimum length of proofs of theorems will remain unbounded, that is, for any natural number n there will still be theorems that cannot be proved in n or fewer steps. If the new axiom schema is not a tautology, then every formula becomes a theorem (which makes the concept of a theorem useless in this case). What is more, there is then an upper bound on the minimum length of a proof of every formula, because there is a common method for proving every formula. For example, suppose the new axiom schema were ((B→C)→C)→B. Then ((A→(A→A))→(A→A))→A is an instance (one of the new axioms) and also not a tautology. But [((A→(A→A))→(A→A))→A]→A is a tautology and thus a theorem due to the old axioms (using the completeness result above). Applying modus ponens, we get that A is a theorem of the extended system. Then all one has to do to prove any formula is to replace A by the desired formula throughout the proof of A. This proof will have the same number of steps as the proof of A. == An alternative axiomatization == The axioms listed above primarily work through the deduction metatheorem to arrive at completeness. Here is another axiom system that aims directly at completeness without going through the deduction metatheorem. First we have axiom schemas that are designed to efficiently prove the subset of tautologies that contain only one propositional variable. aa 1: ꞈA→A aa 2: (A→B)→ꞈ(A→(C→B)) aa 3: A→((B→C)→ꞈ((A→B)→C)) aa 4: A→ꞈ(B→A) The proof of each such tautology would begin with two parts (hypothesis and conclusion) that are the same. Then insert additional hypotheses between them. Then insert additional tautological hypotheses (which are true even when the sole variable is false) into the original hypothesis. Then add more hypotheses outside (on the left). 
This procedure will quickly give every tautology containing only one variable. (The symbol "ꞈ" in each axiom schema indicates where the conclusion used in the completeness proof begins. It is merely a comment, not a part of the formula.) Consider any formula Φ that may contain A, B, C1, ..., Cn and ends with A as its final conclusion. Then we take

aa 5: Φ−→(Φ+→ꞈΦ)

as an axiom schema, where Φ− is the result of replacing B by A throughout Φ and Φ+ is the result of replacing B by (A→A) throughout Φ. This is a schema for axiom schemas, since there are two levels of substitution: in the first, Φ is substituted (with variations); in the second, any of the variables (including both A and B) may be replaced by arbitrary formulas of the implicational propositional calculus. This schema allows one to prove tautologies with more than one variable by considering the case when B is false (Φ−) and the case when B is true (Φ+). If the variable that is the final conclusion of a formula takes the value true, then the whole formula takes the value true regardless of the values of the other variables. Consequently, if A is true, then Φ, Φ−, Φ+ and Φ−→(Φ+→Φ) are all true. So without loss of generality, we may assume that A is false. Notice that Φ is a tautology if and only if both Φ− and Φ+ are tautologies. But while Φ has n+2 distinct variables, Φ− and Φ+ both have n+1. So the question of whether a formula is a tautology has been reduced to the question of whether certain formulas with one variable each are all tautologies. Also notice that Φ−→(Φ+→Φ) is a tautology regardless of whether Φ is, because if Φ is false then either Φ− or Φ+ will be false depending on whether B is false or true.
Examples:

Deriving Peirce's law
1. [((P→P)→P)→P]→([((P→(P→P))→P)→P]→[((P→Q)→P)→P]) (aa 5)
2. P→P (aa 1)
3. (P→P)→((P→P)→(((P→P)→P)→P)) (aa 3)
4. (P→P)→(((P→P)→P)→P) (mp 2,3)
5. ((P→P)→P)→P (mp 2,4)
6. [((P→(P→P))→P)→P]→[((P→Q)→P)→P] (mp 5,1)
7. P→(P→P) (aa 4)
8. (P→(P→P))→((P→P)→(((P→(P→P))→P)→P)) (aa 3)
9. (P→P)→(((P→(P→P))→P)→P) (mp 7,8)
10. ((P→(P→P))→P)→P (mp 2,9)
11. ((P→Q)→P)→P (mp 10,6) qed

Deriving Łukasiewicz' sole axiom
1. [((P→Q)→P)→((P→P)→(S→P))]→([((P→Q)→(P→P))→(((P→P)→P)→(S→P))]→[((P→Q)→R)→((R→P)→(S→P))]) (aa 5)
2. [((P→P)→P)→((P→P)→(S→P))]→([((P→(P→P))→P)→((P→P)→(S→P))]→[((P→Q)→P)→((P→P)→(S→P))]) (aa 5)
3. P→(S→P) (aa 4)
4. (P→(S→P))→(P→((P→P)→(S→P))) (aa 2)
5. P→((P→P)→(S→P)) (mp 3,4)
6. P→P (aa 1)
7. (P→P)→((P→((P→P)→(S→P)))→[((P→P)→P)→((P→P)→(S→P))]) (aa 3)
8. (P→((P→P)→(S→P)))→[((P→P)→P)→((P→P)→(S→P))] (mp 6,7)
9. ((P→P)→P)→((P→P)→(S→P)) (mp 5,8)
10. [((P→(P→P))→P)→((P→P)→(S→P))]→[((P→Q)→P)→((P→P)→(S→P))] (mp 9,2)
11. P→(P→P) (aa 4)
12. (P→(P→P))→((P→((P→P)→(S→P)))→[((P→(P→P))→P)→((P→P)→(S→P))]) (aa 3)
13. (P→((P→P)→(S→P)))→[((P→(P→P))→P)→((P→P)→(S→P))] (mp 11,12)
14. ((P→(P→P))→P)→((P→P)→(S→P)) (mp 5,13)
15. ((P→Q)→P)→((P→P)→(S→P)) (mp 14,10)
16. [((P→Q)→(P→P))→(((P→P)→P)→(S→P))]→[((P→Q)→R)→((R→P)→(S→P))] (mp 15,1)
17. (P→P)→((P→(S→P))→[((P→P)→P)→(S→P)]) (aa 3)
18. (P→(S→P))→[((P→P)→P)→(S→P)] (mp 6,17)
19. ((P→P)→P)→(S→P) (mp 3,18)
20. (((P→P)→P)→(S→P))→[((P→Q)→(P→P))→(((P→P)→P)→(S→P))] (aa 4)
21. ((P→Q)→(P→P))→(((P→P)→P)→(S→P)) (mp 19,20)
22. ((P→Q)→R)→((R→P)→(S→P)) (mp 21,16) qed

Using a truth table to verify Łukasiewicz' sole axiom would require consideration of 16 = 2^4 cases, since it contains 4 distinct variables. In this derivation, we were able to restrict consideration to merely 3 cases: R is false and Q is false, R is false and Q is true, and R is true. However, because we are working within the formal system of logic (instead of outside it, informally), each case required much more effort.
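The truth-table reasoning used in this article (in the satisfiability section, and for axioms such as Łukasiewicz's) is easy to mechanize; a brute-force Python sketch (function names are ours), checking the two formulas from the satisfiability section and the sole axiom:

```python
from itertools import product

def imp(p, q):
    """Material implication: p -> q."""
    return (not p) or q

def is_tautology(formula, nvars):
    """Evaluate formula at all 2**nvars valuations."""
    return all(formula(*v) for v in product([False, True], repeat=nvars))

# [(A→B)→((C→A)→E)] → ([F→((C→D)→E)] → [(A→F)→(D→E)])  -- the non-tautology
def candidate(A, B, C, D, E, F):
    return imp(imp(imp(A, B), imp(imp(C, A), E)),
               imp(imp(F, imp(imp(C, D), E)),
                   imp(imp(A, F), imp(D, E))))

# ((A→B)→C) → ((C→A)→(D→A))  -- the tautology example
def example_tautology(A, B, C, D):
    return imp(imp(imp(A, B), C), imp(imp(C, A), imp(D, A)))

# ((P→Q)→R) → ((R→P)→(S→P))  -- Łukasiewicz's sole axiom
def lukasiewicz(P, Q, R, S):
    return imp(imp(imp(P, Q), R), imp(imp(R, P), imp(S, P)))

print(is_tautology(candidate, 6))          # False
# The falsifying valuation found in the text: B, C, D true; A, E, F false.
print(candidate(False, True, True, True, False, False))  # False
print(is_tautology(example_tautology, 4))  # True
print(is_tautology(lukasiewicz, 4))        # True
```

This is the 2^n exhaustive check the text contrasts with the shorter case analysis; it is exponential in the number of variables, matching the co-NP-completeness of validity noted above.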
== See also == Deduction theorem List of logic systems § Implicational propositional calculus Peirce's law Propositional calculus Tautology (logic) Truth table Valuation (logic) == References == == Further reading == Mendelson, Elliott (1997), Introduction to Mathematical Logic, 4th ed. London: Chapman & Hall.
Wikipedia/Implicational_propositional_calculus
Constructive dilemma is a valid rule of inference of propositional logic. It is the inference that, if P implies Q and R implies S and either P or R is true, then either Q or S has to be true. In sum, if two conditionals are true and at least one of their antecedents is, then at least one of their consequents must be too. Constructive dilemma is the disjunctive version of modus ponens, whereas destructive dilemma is the disjunctive version of modus tollens. The constructive dilemma rule can be stated: ( P → Q ) , ( R → S ) , P ∨ R ∴ Q ∨ S {\displaystyle {\frac {(P\to Q),(R\to S),P\lor R}{\therefore Q\lor S}}} where the rule is that whenever instances of " P → Q {\displaystyle P\to Q} ", " R → S {\displaystyle R\to S} ", and " P ∨ R {\displaystyle P\lor R} " appear on lines of a proof, " Q ∨ S {\displaystyle Q\lor S} " can be placed on a subsequent line. == Formal notation == The constructive dilemma rule may be written in sequent notation: ( P → Q ) , ( R → S ) , ( P ∨ R ) ⊢ ( Q ∨ S ) {\displaystyle (P\to Q),(R\to S),(P\lor R)\vdash (Q\lor S)} where ⊢ {\displaystyle \vdash } is a metalogical symbol meaning that Q ∨ S {\displaystyle Q\lor S} is a syntactic consequence of P → Q {\displaystyle P\to Q} , R → S {\displaystyle R\to S} , and P ∨ R {\displaystyle P\lor R} in some logical system; and expressed as a truth-functional tautology or theorem of propositional logic: ( ( ( P → Q ) ∧ ( R → S ) ) ∧ ( P ∨ R ) ) → ( Q ∨ S ) {\displaystyle (((P\to Q)\land (R\to S))\land (P\lor R))\to (Q\lor S)} where P {\displaystyle P} , Q {\displaystyle Q} , R {\displaystyle R} and S {\displaystyle S} are propositions expressed in some formal system. == Natural language example == If I win a million dollars, I will donate it to an orphanage. If my friend wins a million dollars, he will donate it to a wildlife fund. Either I win a million dollars or my friend wins a million dollars. 
Therefore, either an orphanage will get a million dollars, or a wildlife fund will get a million dollars. The dilemma derives its name from the transfer of the disjunctive operator from the premises to the conclusion. == References ==
Wikipedia/Constructive_dilemma
An existential graph is a type of diagrammatic or visual notation for logical expressions, created by Charles Sanders Peirce, who wrote on graphical logic as early as 1882, and continued to develop the method until his death in 1914. Existential graphs include both a separate graphical notation for logical statements and a logical calculus, a formal system of rules of inference that can be used to derive theorems. == Background == Peirce found the algebraic notation (i.e. symbolic notation) of logic, especially that of predicate logic, which was still very new during his lifetime and which he himself played a major role in developing, to be philosophically unsatisfactory, because the symbols had their meaning by mere convention. In contrast, he strove for a style of writing in which the signs literally carry their meaning within them – in the terminology of his theory of signs: a system of iconic signs that resemble the represented objects and relations. Thus, the development of an iconic, graphic and – as he intended – intuitive and easy-to-learn logical system was a project that Peirce worked on throughout his life. After at least one aborted approach – the "Entitative Graphs" – the closed system of "Existential Graphs" finally emerged from 1896 onwards. Although considered by their creator to be a clearly superior and more intuitive system, as a mode of writing and as a calculus, they had no major influence on the history of logic. This has been attributed to the fact(s) that, first, Peirce published little on this topic, and the published texts were not written in a very understandable way; and, second, that the linear formula notation in the hands of experts is actually the less complex tool. Hence, the existential graphs received little attention or were seen as unwieldy. From 1963 onwards, works by Don D. Roberts and J. 
Jay Zeman, in which Peirce's graphic systems were systematically examined and presented, led to a better understanding; even so, they have today found practical use within only one modern application—the conceptual graphs introduced by John F. Sowa in 1976, which are used in computer science to represent knowledge. However, existential graphs are increasingly reappearing as a subject of research in connection with a growing interest in graphical logic, which is also expressed in attempts to replace the rules of inference given by Peirce with more intuitive ones. The overall system of existential graphs is composed of three subsystems that build on each other, the alpha graphs, the beta graphs and the gamma graphs. The alpha graphs are a purely propositional logical system. Building on this, the beta graphs are a first order logical calculus. The gamma graphs, which have not yet been fully researched and were not completed by Peirce, are understood as a further development of the alpha and beta graphs. When interpreted appropriately, the gamma graphs cover higher-level predicate logic as well as modal logic. As late as 1903, Peirce began a new approach, the "Tinctured Existential Graphs," with which he wanted to replace the previous systems of alpha, beta and gamma graphs and combine their expressiveness and performance in a single new system. Like the gamma graphs, the "Tinctured Existential Graphs" remained unfinished. As calculi, the alpha, beta and gamma graphs are sound (i.e., all expressions derived as graphs are semantically valid). The alpha and beta graphs are also complete (i.e., all propositional or predicate-logically semantically valid expressions can be derived as alpha or beta graphs). 
== The graphs == Peirce proposed three systems of existential graphs: alpha, isomorphic to propositional logic and the two-element Boolean algebra; beta, isomorphic to first-order logic with identity, with all formulas closed; gamma, (nearly) isomorphic to normal modal logic. Alpha nests in beta and gamma. Beta does not nest in gamma, quantified modal logic being more general than put forth by Peirce. === Alpha === The syntax is: The blank page; Single letters or phrases written anywhere on the page; Any graph may be enclosed by a simple closed curve called a cut or sep. A cut can be empty. Cuts can nest and concatenate at will, but must never intersect. Any well-formed part of a graph is a subgraph. The semantics are: The blank page denotes Truth; Letters, phrases, subgraphs, and entire graphs may be True or False; To enclose a subgraph with a cut is equivalent to logical negation or Boolean complementation. Hence an empty cut denotes False; All subgraphs within a given cut are tacitly conjoined. Hence the alpha graphs are a minimalist notation for sentential logic, grounded in the expressive adequacy of And and Not. The alpha graphs constitute a radical simplification of the two-element Boolean algebra and the truth functors. The depth of an object is the number of cuts that enclose it: an object on the otherwise unenclosed page has depth 0, an object inside a single cut has depth 1, an object inside two nested cuts has depth 2, and so on. Rules of inference: Insertion - Any subgraph may be inserted into an odd-numbered depth. Erasure - Any subgraph in an even-numbered depth may be erased. Rules of equivalence: Double cut - A pair of cuts with nothing between them may be drawn around any subgraph. Likewise two nested cuts with nothing between them may be erased. This rule is equivalent to Boolean involution and double negation elimination. 
Iteration/Deiteration – To understand this rule, it is best to view a graph as a tree structure having nodes and ancestors. Any subgraph P in node n may be copied into any node depending on n. Likewise, any subgraph P in node n may be erased if there exists a copy of P in some node ancestral to n (i.e., some node on which n depends). For an equivalent rule in an algebraic context, see C2 in Laws of Form. A proof manipulates a graph by a series of steps, with each step justified by one of the above rules. If a graph can be reduced by steps to the blank page or an empty cut, it is what is now called a tautology (or the complement thereof, a contradiction). Graphs that cannot be simplified beyond a certain point are analogues of the satisfiable formulas of first-order logic. === Beta === In the case of beta graphs, the atomic expressions are no longer propositional letters (P, Q, R,...) or statements ("It rains," "Peirce died in poverty"), but predicates in the sense of predicate logic (see predicate logic for more details), possibly abbreviated to predicate letters (F, G, H,...). A predicate in the sense of predicate logic is a sequence of words with clearly defined spaces that becomes a propositional sentence if you insert a proper noun into each space. For example, the word sequence "_ is a human" is a predicate because it gives rise to the declarative sentence "Peirce is a human" if you enter the proper name "Peirce" in the blank space. Likewise, the word sequence "_1 is richer than _2" is a predicate, because it results in the statement "Socrates is richer than Plato" if the proper names "Socrates" and "Plato" are inserted into the respective spaces. === Notation of beta graphs === The basic language device is the line of identity, a thickly drawn line of any form. The identity line docks onto the blank space of a predicate to show that the predicate applies to at least one individual. 
In order to express that the predicate "_ is a human being" applies to at least one individual – i.e. to say that there is (at least) one human being – one writes an identity line in the blank space of the predicate "_ is a human being". The beta graphs can be read as a system in which all formulas are to be taken as closed, because all variables are implicitly quantified. If the "shallowest" part of a line of identity has even (odd) depth, the associated variable is tacitly existentially (universally) quantified. Zeman (1964) was the first to note that the beta graphs are isomorphic to first-order logic with equality (also see Zeman 1967). However, the secondary literature, especially Roberts (1973) and Shin (2002), does not agree on just how this is so. Peirce's writings do not address this question, because first-order logic was first clearly articulated only after his death, in the 1928 first edition of David Hilbert and Wilhelm Ackermann's Principles of Mathematical Logic. === Gamma === Add to the syntax of alpha a second kind of simple closed curve, written using a dashed rather than a solid line. Peirce proposed rules for this second style of cut, which can be read as the primitive unary operator of modal logic. Zeman (1964) was the first to note that the gamma graphs are equivalent to the well-known modal logics S4 and S5. Hence the gamma graphs can be read as a peculiar form of normal modal logic. This finding of Zeman's has received little attention to this day, but is nonetheless included here as a point of interest. == Peirce's role == The existential graphs are a curious offspring of Peirce the logician/mathematician with Peirce the founder of a major strand of semiotics. Peirce's graphical logic is but one of his many accomplishments in logic and mathematics. 
In a series of papers beginning in 1867, and culminating with his classic paper in the 1885 American Journal of Mathematics, Peirce developed much of the two-element Boolean algebra, propositional calculus, quantification and the predicate calculus, and some rudimentary set theory. Model theorists consider Peirce the first of their kind. He also extended De Morgan's relation algebra. He stopped short of metalogic (which eluded even Principia Mathematica). But Peirce's evolving semiotic theory led him to doubt the value of logic formulated using conventional linear notation, and to prefer that logic and mathematics be notated in two (or even three) dimensions. His work went beyond Euler's diagrams and Venn's 1880 revision thereof. Frege's 1879 work Begriffsschrift also employed a two-dimensional notation for logic, but one very different from Peirce's. Peirce's first published paper on graphical logic (reprinted in Vol. 3 of his Collected Papers) proposed a system dual (in effect) to the alpha existential graphs, called the entitative graphs. He very soon abandoned this formalism in favor of the existential graphs. In 1911, Victoria, Lady Welby showed the existential graphs to C. K. Ogden, who felt they could usefully be combined with Welby's thoughts in a "less abstruse form." Otherwise they attracted little attention during his life and were invariably denigrated or ignored after his death, until the PhD theses by Roberts (1964) and Zeman (1964). == See also == Nor operator Conceptual graph Charles Sanders Peirce Propositional calculus == References == == Further reading == === Primary literature === 1931–1935 & 1958. The Collected Papers of Charles Sanders Peirce. Volume 4, Book II: "Existential Graphs", consists of paragraphs 347–584. A discussion also begins in paragraph 617. Paragraphs 347–349 (II.1.1. "Logical Diagram")—Peirce's definition "Logical Diagram (or Graph)" in Baldwin's Dictionary of Philosophy and Psychology (1902), v. 2, p. 28. 
Classics in the History of Psychology Eprint. Paragraphs 350–371 (II.1.2. "Of Euler's Diagrams")—from "Graphs" (manuscript 479) c. 1903. Paragraphs 372–584 Eprint. Paragraphs 372–393 (II.2. "Symbolic Logic")—Peirce's part of "Symbolic Logic" in Baldwin's Dictionary of Philosophy and Psychology (1902) v. 2, pp. 645–650, beginning (near second column's top) with "If symbolic logic be defined...". Paragraph 393 (Baldwin's DPP2 p. 650) is by Peirce and Christine Ladd-Franklin ("C.S.P., C.L.F."). Paragraphs 394–417 (II.3. "Existential Graphs")—from Peirce's pamphlet A Syllabus of Certain Topics of Logic, pp. 15–23, Alfred Mudge & Son, Boston (1903). Paragraphs 418–509 (II.4. "On Existential Graphs, Euler's Diagrams, and Logical Algebra")—from "Logical Tracts, No. 2" (manuscript 492), c. 1903. Paragraphs 510–529 (II.5. "The Gamma Part of Existential Graphs")—from "Lowell Lectures of 1903," Lecture IV (manuscript 467). Paragraphs 530–572 (II.6.)—"Prolegomena To an Apology For Pragmaticism" (1906), The Monist, v. XVI, n. 4, pp. 492-546. Corrections (1907) in The Monist v. XVII, p. 160. Paragraphs 573–584 (II.7. "An Improvement on the Gamma Graphs")—from "For the National Academy of Science, 1906 April Meeting in Washington" (manuscript 490). Paragraphs 617–623 (at least) (in Book III, Ch. 2, §2, paragraphs 594–642)—from "Some Amazing Mazes: Explanation of Curiosity the First", The Monist, v. XVIII, 1908, n. 3, pp. 416-464, see starting p. 440. 1992. "Lecture Three: The Logic of Relatives", Reasoning and the Logic of Things, pp. 146–164. Ketner, Kenneth Laine (editing and introduction), and Hilary Putnam (commentary). Harvard University Press. Peirce's 1898 lectures in Cambridge, Massachusetts. 1977, 2001. Semiotic and Significs: The Correspondence between C.S. Peirce and Victoria Lady Welby. Hardwick, C.S., ed. Lubbock TX: Texas Tech University Press. 2nd edition 2001. A transcription of Peirce's MS 514 (1909), edited with commentary by John Sowa. 
Currently, the chronological critical edition of Peirce's works, the Writings, extends only to 1892. Much of Peirce's work on logical graphs consists of manuscripts written after that date and still unpublished. Hence our understanding of Peirce's graphical logic is likely to change as the remaining 23 volumes of the chronological edition appear. === Secondary literature === Hammer, Eric M. (1998), "Semantics for Existential Graphs," Journal of Philosophical Logic 27: 489–503. Ketner, Kenneth Laine (1981), "The Best Example of Semiosis and Its Use in Teaching Semiotics", American Journal of Semiotics v. I, n. 1–2, pp. 47–83. Article is an introduction to existential graphs. (1990), Elements of Logic: An Introduction to Peirce's Existential Graphs, Texas Tech University Press, Lubbock, TX, 99 pages, spiral-bound. Queiroz, João & Stjernfelt, Frederik (2011), "Diagrammatical Reasoning and Peircean Logic Representation", Semiotica vol. 186 (1/4). (Special issue on Peirce's diagrammatic logic.) Roberts, Don D. (1964), "Existential Graphs and Natural Deduction" in Moore, E. C., and Robin, R. S., eds., Studies in the Philosophy of C. S. Peirce, 2nd series. Amherst MA: University of Massachusetts Press. The first publication to show any sympathy and understanding for Peirce's graphical logic. (1973). The Existential Graphs of C.S. Peirce. John Benjamins. An outgrowth of his 1963 thesis. Shin, Sun-Joo (2002), The Iconic Logic of Peirce's Graphs. MIT Press. Zalamea, Fernando. Peirce's Logic of Continuity. Docent Press, Boston MA. 2012. ISBN 9780983700494. Part II: Peirce's Existential Graphs, pp. 76-162. Zeman, J. J. (1964), The Graphical Logic of C.S. Peirce. Archived 2018-09-14 at the Wayback Machine Unpublished Ph.D. thesis submitted to the University of Chicago. (1967), "A System of Implicit Quantification," Journal of Symbolic Logic 32: 480–504. == External links == Stanford Encyclopedia of Philosophy: Peirce's Logic by Sun-Joo Shin and Eric Hammer. 
Dau, Frithjof, Peirce's Existential Graphs --- Readings and Links. An annotated bibliography on the existential graphs. Gottschall, Christian, Proof Builder Archived 2006-02-12 at the Wayback Machine — Java applet for deriving Alpha graphs. Liu, Xin-Wen, "The literature of C.S. Peirce’s Existential Graphs" (via Wayback Machine), Institute of Philosophy, Chinese Academy of Social Sciences, Beijing, PRC. Sowa, John F. "Laws, Facts, and Contexts: Foundations for Multimodal Reasoning". Retrieved 2009-10-23. (NB. Existential graphs and conceptual graphs.) Van Heuveln, Bram, "Existential Graphs. Archived 2009-08-29 at the Wayback Machine" Dept. of Cognitive Science, Rensselaer Polytechnic Institute. Alpha only. Zeman, Jay J., "Existential Graphs". With four online papers by Peirce.
Wikipedia/Existential_graph
Rules of inference are ways of deriving conclusions from premises. They are integral parts of formal logic, serving as norms of the logical structure of valid arguments. If an argument with true premises follows a rule of inference then the conclusion cannot be false. Modus ponens, an influential rule of inference, connects two premises of the form "if P {\displaystyle P} then Q {\displaystyle Q} " and " P {\displaystyle P} " to the conclusion " Q {\displaystyle Q} ", as in the argument "If it rains, then the ground is wet. It rains. Therefore, the ground is wet." There are many other rules of inference for different patterns of valid arguments, such as modus tollens, disjunctive syllogism, constructive dilemma, and existential generalization. Rules of inference include rules of implication, which operate only in one direction from premises to conclusions, and rules of replacement, which state that two expressions are equivalent and can be freely swapped. Rules of inference contrast with formal fallacies—invalid argument forms involving logical errors. Rules of inference belong to logical systems, and distinct logical systems use different rules of inference. Propositional logic examines the inferential patterns of simple and compound propositions. First-order logic extends propositional logic by articulating the internal structure of propositions. It introduces new rules of inference governing how this internal structure affects valid arguments. Modal logics explore concepts like possibility and necessity, examining the inferential structure of these concepts. Intuitionistic, paraconsistent, and many-valued logics propose alternative inferential patterns that differ from the traditionally dominant approach associated with classical logic. Various formalisms are used to express logical systems. 
Some employ many intuitive rules of inference to reflect how people naturally reason while others provide minimalistic frameworks to represent foundational principles without redundancy. Rules of inference are relevant to many areas, such as proofs in mathematics and automated reasoning in computer science. Their conceptual and psychological underpinnings are studied by philosophers of logic and cognitive psychologists. == Definition == A rule of inference is a way of drawing a conclusion from a set of premises. Also called inference rule and transformation rule, it is a norm of correct inferences that can be used to guide reasoning, justify conclusions, and criticize arguments. As part of deductive logic, rules of inference are argument forms that preserve the truth of the premises, meaning that the conclusion is always true if the premises are true. An inference is deductively correct or valid if it follows a valid rule of inference. Whether this is the case depends only on the form or syntactical structure of the premises and the conclusion. As a result, the actual content or concrete meaning of the statements does not affect validity. For instance, modus ponens is a rule of inference that connects two premises of the form "if P {\displaystyle P} then Q {\displaystyle Q} " and " P {\displaystyle P} " to the conclusion " Q {\displaystyle Q} ", where P {\displaystyle P} and Q {\displaystyle Q} stand for statements. Any argument with this form is valid, independent of the specific meanings of P {\displaystyle P} and Q {\displaystyle Q} , such as the argument "If it rains, then the ground is wet. It rains. Therefore, the ground is wet". In addition to modus ponens, there are many other rules of inference, such as modus tollens, disjunctive syllogism, hypothetical syllogism, constructive dilemma, and destructive dilemma. There are different formats to represent rules of inference. 
A common approach is to use a new line for each premise and separate the premises from the conclusion using a horizontal line. With this format, modus ponens is written as: P → Q P Q {\displaystyle {\begin{array}{l}P\to Q\\P\\\hline Q\end{array}}} Some logicians employ the therefore sign ( ∴ {\displaystyle \therefore } ) together with, or instead of, the horizontal line to indicate where the conclusion begins. The sequent notation, a different approach, uses a single line in which the premises are separated by commas and connected to the conclusion with the turnstile symbol ( ⊢ {\displaystyle \vdash } ), as in P → Q , P ⊢ Q {\displaystyle P\to Q,P\vdash Q} . The letters P {\displaystyle P} and Q {\displaystyle Q} in these formulas are so-called metavariables: they stand for any simple or compound proposition. Rules of inference belong to logical systems and distinct logical systems may use different rules of inference. For example, universal instantiation is a rule of inference in the system of first-order logic but not in propositional logic. Rules of inference play a central role in proofs as explicit procedures for arriving at a new line of a proof based on the preceding lines. Proofs involve a series of inferential steps and often use various rules of inference to establish the theorem they intend to demonstrate. Rules of inference are definitory rules—rules about which inferences are allowed. They contrast with strategic rules, which govern the inferential steps needed to prove a certain theorem from a specific set of premises. Mastering definitory rules by itself is not sufficient for effective reasoning since they provide little guidance on how to reach the intended conclusion. As standards or procedures governing the transformation of symbolic expressions, rules of inference are similar to mathematical functions taking premises as input and producing a conclusion as output. 
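The function analogy in the last sentence can be made concrete. In the Python sketch below (the tuple encoding of formulas and the error behavior are illustrative choices, not a standard API), modus ponens is a partial function that takes the two premises as input and produces the conclusion as output:

```python
# Modus ponens as a function from premises to conclusion.
# Formulas are either a string (an atomic proposition) or a
# tuple ('->', antecedent, consequent) for a conditional.

def modus_ponens(conditional, antecedent):
    """Given ('->', P, Q) and P, return Q; raise if the rule does not apply."""
    op, p, q = conditional
    if op != '->' or p != antecedent:
        raise ValueError("modus ponens does not apply to these premises")
    return q

# "If it rains, then the ground is wet. It rains.
#  Therefore, the ground is wet."
conclusion = modus_ponens(('->', 'rains', 'ground_wet'), 'rains')
assert conclusion == 'ground_wet'
```

The rule is a partial function: when the premises do not have the required form, no conclusion is produced, mirroring the fact that a rule of inference licenses only arguments matching its syntactic pattern.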
According to one interpretation, rules of inference are inherent in logical operators found in statements, making the meaning and function of these operators explicit without adding any additional information. Logicians distinguish two types of rules of inference: rules of implication and rules of replacement. Rules of implication, like modus ponens, operate only in one direction, meaning that the conclusion can be deduced from the premises but the premises cannot be deduced from the conclusion. Rules of replacement, by contrast, operate in both directions, stating that two expressions are equivalent and can be freely replaced with each other. In classical logic, for example, a proposition ( P {\displaystyle P} ) is equivalent to the negation of its negation ( ¬ ¬ P {\displaystyle \lnot \lnot P} ). As a result, one can infer one from the other in either direction, making it a rule of replacement. Other rules of replacement include De Morgan's laws as well as the commutative and associative properties of conjunction and disjunction. While rules of implication apply only to complete statements, rules of replacement can be applied to any part of a compound statement. One of the earliest discussions of formal rules of inference is found in antiquity in Aristotle's logic. His explanations of valid and invalid syllogisms were further refined in medieval and early modern philosophy. The development of symbolic logic in the 19th century led to the formulation of many additional rules of inference belonging to classical propositional and first-order logic. In the 20th and 21st centuries, logicians developed various non-classical systems of logic with alternative rules of inference. == Basic concepts == Rules of inference describe the structure of arguments, which consist of premises that support a conclusion. Premises and conclusions are statements or propositions about what is true. For instance, the assertion "The door is open." 
is a statement that is either true or false, while the question "Is the door open?" and the command "Open the door!" are not statements and have no truth value. An inference is a step of reasoning from premises to a conclusion while an argument is the outward expression of an inference. Logic is the study of correct reasoning and examines how to distinguish good from bad arguments. Deductive logic is the branch of logic that investigates the strongest arguments, called deductively valid arguments, for which the conclusion cannot be false if all the premises are true. This is expressed by saying that the conclusion is a logical consequence of the premises. Rules of inference belong to deductive logic and describe argument forms that fulfill this requirement. In order to precisely assess whether an argument follows a rule of inference, logicians use formal languages to express statements in a rigorous manner, similar to mathematical formulas. They combine formal languages with rules of inference to construct formal systems—frameworks for formulating propositions and drawing conclusions. Different formal systems may employ different formal languages or different rules of inference. The basic rules of inference within a formal system can often be expanded by introducing new rules of inference, known as admissible rules. Admissible rules do not change which arguments in a formal system are valid but can simplify proofs. If an admissible rule can be expressed through a combination of the system's basic rules, it is called a derived or derivable rule. Statements that can be deduced in a formal system are called theorems of this formal system. Widely-used systems of logic include propositional logic, first-order logic, and modal logic. Rules of inference only ensure that the conclusion is true if the premises are true. An argument with false premises can still be valid, but its conclusion could be false. For example, the argument "If pigs can fly, then the sky is purple. 
Pigs can fly. Therefore, the sky is purple." is valid because it follows modus ponens, even though it contains false premises. A valid argument is called a sound argument if all premises are true. Rules of inference are closely related to tautologies. In logic, a tautology is a statement that is true only because of the logical vocabulary it uses, independent of the meanings of its non-logical vocabulary. For example, the statement "if the tree is green and the sky is blue then the tree is green" is true independently of the meanings of terms like tree and green, making it a tautology. Every argument following a rule of inference can be transformed into a tautology. This is achieved by forming a conjunction (and) of all premises and connecting it through implication (if ... then ...) to the conclusion, thereby combining all the individual statements of the argument into a single statement. For example, the valid argument "The tree is green and the sky is blue. Therefore, the tree is green." can be transformed into the tautology "if the tree is green and the sky is blue then the tree is green". Rules of inference are also closely related to laws of thought, which are basic principles of logic that can take the form of tautologies. For example, the law of identity asserts that each entity is identical to itself. Other traditional laws of thought include the law of non-contradiction and the law of excluded middle. Rules of inference are not the only way to demonstrate that an argument is valid. Alternative methods include the use of truth tables, which applies to propositional logic, and truth trees, which can also be employed in first-order logic. == Systems of logic == === Classical === ==== Propositional logic ==== Propositional logic examines the inferential patterns of simple and compound propositions. It uses letters, such as P {\displaystyle P} and Q {\displaystyle Q} , to represent simple propositions. 
Compound propositions are formed by modifying or combining simple propositions with logical operators, such as ¬ {\displaystyle \lnot } (not), ∧ {\displaystyle \land } (and), ∨ {\displaystyle \lor } (or), and → {\displaystyle \to } (if ... then ...). For example, if P {\displaystyle P} stands for the statement "it is raining" and Q {\displaystyle Q} stands for the statement "the streets are wet", then ¬ P {\displaystyle \lnot P} expresses "it is not raining" and P → Q {\displaystyle P\to Q} expresses "if it is raining then the streets are wet". These logical operators are truth-functional, meaning that the truth value of a compound proposition depends only on the truth values of the simple propositions composing it. For instance, the compound proposition P ∧ Q {\displaystyle P\land Q} is only true if both P {\displaystyle P} and Q {\displaystyle Q} are true; in all other cases, it is false. Propositional logic is not concerned with the concrete meaning of propositions other than their truth values. Key rules of inference in propositional logic are modus ponens, modus tollens, hypothetical syllogism, disjunctive syllogism, and double negation elimination. Further rules include conjunction introduction, conjunction elimination, disjunction introduction, disjunction elimination, constructive dilemma, destructive dilemma, absorption, and De Morgan's laws. ==== First-order logic ==== First-order logic also employs the logical operators from propositional logic but includes additional devices to articulate the internal structure of propositions. Basic propositions in first-order logic consist of a predicate, symbolized with uppercase letters like P {\displaystyle P} and Q {\displaystyle Q} , which is applied to singular terms, symbolized with lowercase letters like a {\displaystyle a} and b {\displaystyle b} . 
For example, if a {\displaystyle a} stands for "Aristotle" and P {\displaystyle P} stands for "is a philosopher", the formula P ( a ) {\displaystyle P(a)} means that "Aristotle is a philosopher". Another innovation of first-order logic is the use of the quantifiers ∃ {\displaystyle \exists } and ∀ {\displaystyle \forall } , which express that a predicate applies to some or all individuals. For instance, the formula ∃ x P ( x ) {\displaystyle \exists xP(x)} expresses that philosophers exist while ∀ x P ( x ) {\displaystyle \forall xP(x)} expresses that everyone is a philosopher. The rules of inference from propositional logic are also valid in first-order logic. Additionally, first-order logic introduces new rules of inference that govern the role of singular terms, predicates, and quantifiers in arguments. Key rules of inference are universal instantiation and existential generalization. Other rules of inference include universal generalization and existential instantiation. === Modal logics === Modal logics are formal systems that extend propositional logic and first-order logic with additional logical operators. Alethic modal logic introduces the operator ◊ {\displaystyle \Diamond } to express that something is possible and the operator ◻ {\displaystyle \Box } to express that something is necessary. For example, if P {\displaystyle P} means that "Parvati works", then ◊ P {\displaystyle \Diamond P} means that "It is possible that Parvati works" while ◻ P {\displaystyle \Box P} means that "It is necessary that Parvati works". These two operators are related by a rule of replacement stating that ◻ P {\displaystyle \Box P} is equivalent to ¬ ◊ ¬ P {\displaystyle \lnot \Diamond \lnot P} . In other words: if something is necessarily true then it is not possible that it is not true. 
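The duality between ◻ and ◊ can be illustrated with possible-world semantics. The following Python sketch evaluates both operators over a small, invented Kripke-style model (the three worlds, the accessibility relation, and the valuation are all hypothetical, chosen only for illustration) and checks that ◻P and ¬◊¬P agree at every world:

```python
# Minimal Kripke-style model check of the duality Box P == not Diamond not P.

def diamond(world, accessible, holds):
    # Possibly P: P holds in at least one world accessible from `world`.
    return any(holds[w] for w in accessible[world])

def box(world, accessible, holds):
    # Necessarily P: P holds in every world accessible from `world`.
    return all(holds[w] for w in accessible[world])

# Toy model: three worlds, an accessibility relation, and a valuation for P.
accessible = {'w1': ['w1', 'w2'], 'w2': ['w3'], 'w3': ['w3']}
holds_P = {'w1': True, 'w2': True, 'w3': False}
not_P = {w: not v for w, v in holds_P.items()}

# The duality holds at every world of the model.
for w in accessible:
    assert box(w, accessible, holds_P) == (not diamond(w, accessible, not_P))
```

The check passes in any model, since `all(...)` over accessible worlds is the negation of `any(...)` over their complements, which is exactly the semantic content of the replacement rule.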
Further rules of inference include the necessitation rule, which asserts that a statement is necessarily true if it is provable in a formal system without any additional premises, and the distribution axiom, which allows one to derive ◊ P → ◊ Q {\displaystyle \Diamond P\to \Diamond Q} from ◊ ( P → Q ) {\displaystyle \Diamond (P\to Q)} . These rules of inference belong to system K, a weak form of modal logic with only the most basic rules of inference. Many formal systems of alethic modal logic include additional rules of inference, such as system T, which allows one to deduce P {\displaystyle P} from ◻ P {\displaystyle \Box P} . Non-alethic systems of modal logic introduce operators that behave like ◊ {\displaystyle \Diamond } and ◻ {\displaystyle \Box } in alethic modal logic, following similar rules of inference but with different meanings. Deontic logic is one type of non-alethic logic. It uses the operator P {\displaystyle P} to express that an action is permitted and the operator O {\displaystyle O} to express that an action is required, where P {\displaystyle P} behaves similarly to ◊ {\displaystyle \Diamond } and O {\displaystyle O} behaves similarly to ◻ {\displaystyle \Box } . For instance, the rule of replacement in alethic modal logic asserting that ◻ Q {\displaystyle \Box Q} is equivalent to ¬ ◊ ¬ Q {\displaystyle \lnot \Diamond \lnot Q} also applies to deontic logic. As a result, one can deduce from O Q {\displaystyle OQ} (e.g. Quinn has an obligation to help) that ¬ P ¬ Q {\displaystyle \lnot P\lnot Q} (e.g. Quinn is not permitted not to help). Other systems of modal logic include temporal modal logic, which has operators for what is always or sometimes the case, as well as doxastic and epistemic modal logics, which have operators for what people believe and know. === Others === Many other systems of logic have been proposed. 
One of the earliest systems is Aristotelian logic, according to which each statement is made up of two terms, a subject and a predicate, connected by a copula. For example, the statement "all humans are mortal" has the subject "all humans", the predicate "mortal", and the copula "are". All rules of inference in Aristotelian logic have the form of syllogisms, which consist of two premises and a conclusion. For instance, the Barbara rule of inference describes the validity of arguments of the form "All men are mortal. All Greeks are men. Therefore, all Greeks are mortal." Second-order logic extends first-order logic by allowing quantifiers to apply to predicates in addition to singular terms. For example, to express that the individuals Adam ( a {\displaystyle a} ) and Bianca ( b {\displaystyle b} ) share a property, one can use the formula ∃ X ( X ( a ) ∧ X ( b ) ) {\displaystyle \exists X(X(a)\land X(b))} . Second-order logic also comes with new rules of inference. For instance, one can infer P ( a ) {\displaystyle P(a)} (Adam is a philosopher) from ∀ X X ( a ) {\displaystyle \forall XX(a)} (every property applies to Adam). Intuitionistic logic is a non-classical variant of propositional and first-order logic. It shares with them many rules of inference, such as modus ponens, but excludes certain rules. For example, in classical logic, one can infer P {\displaystyle P} from ¬ ¬ P {\displaystyle \lnot \lnot P} using the rule of double negation elimination. However, in intuitionistic logic, this inference is invalid. As a result, every theorem that can be deduced in intuitionistic logic can also be deduced in classical logic, but some theorems provable in classical logic cannot be proven in intuitionistic logic. Paraconsistent logics revise classical logic to allow the existence of contradictions.
In logic, a contradiction happens if the same proposition is both affirmed and denied, meaning that a formal system contains both P {\displaystyle P} and ¬ P {\displaystyle \lnot P} as theorems. Classical logic prohibits contradictions because classical rules of inference lead to the principle of explosion, an admissible rule of inference that makes it possible to infer Q {\displaystyle Q} from the premises P {\displaystyle P} and ¬ P {\displaystyle \lnot P} . Since Q {\displaystyle Q} is unrelated to P {\displaystyle P} , any arbitrary statement can be deduced from a contradiction, making the affected systems useless for deciding what is true and false. Paraconsistent logics solve this problem by modifying the rules of inference in such a way that the principle of explosion is not an admissible rule of inference. As a result, it is possible to reason about inconsistent information without deriving absurd conclusions. Many-valued logics modify classical logic by introducing additional truth values. In classical logic, a proposition is either true or false with nothing in between. In many-valued logics, some propositions are neither true nor false. Kleene logic, for example, is a three-valued logic that introduces the additional truth value undefined to describe situations where information is incomplete or uncertain. Many-valued logics have adjusted rules of inference to accommodate the additional truth values. For instance, the classical rule of replacement stating that P → Q {\displaystyle P\to Q} is equivalent to ¬ P ∨ Q {\displaystyle \lnot P\lor Q} is invalid in many three-valued systems. == Formalisms == Various formalisms or proof systems have been suggested as distinct ways of codifying reasoning and demonstrating the validity of arguments. Unlike different systems of logic, these formalisms do not impact what can be proven; they only influence how proofs are formulated. 
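To make the last point about three-valued systems concrete, the following sketch uses the Łukasiewicz connectives, one well-known three-valued system in which the classical equivalence of P → Q and ¬P ∨ Q fails. The numeric encoding 1 / 0.5 / 0 for true / undefined / false is an illustrative convention, not part of the article:

```python
from itertools import product

# Three truth values: 1 = true, 0.5 = undefined, 0 = false.
# Lukasiewicz connectives, used here as one concrete three-valued system;
# other systems (e.g. Kleene's) define the conditional differently.
VALUES = (0, 0.5, 1)

def neg(p): return 1 - p
def disj(p, q): return max(p, q)
def impl(p, q): return min(1, 1 - p + q)  # Lukasiewicz implication

# In classical logic, P -> Q and (not P) or Q always agree.  Here they
# come apart exactly when both P and Q are undefined:
counterexamples = [(p, q) for p, q in product(VALUES, repeat=2)
                   if impl(p, q) != disj(neg(p), q)]
assert counterexamples == [(0.5, 0.5)]
# impl(0.5, 0.5) == 1, whereas disj(neg(0.5), 0.5) == 0.5
```

Restricting the same definitions to the classical values {0, 1} makes the two formulas agree on every row, which is why the replacement rule is valid classically.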
Influential frameworks include natural deduction systems, Hilbert systems, and sequent calculi. Natural deduction systems aim to reflect how people naturally reason by introducing many intuitive rules of inference to make logical derivations more accessible. They break complex arguments into simple steps, often using subproofs based on temporary premises. The rules of inference in natural deduction target specific logical operators, governing how an operator can be added with introduction rules or removed with elimination rules. For example, the rule of conjunction introduction asserts that one can infer P ∧ Q {\displaystyle P\land Q} from the premises P {\displaystyle P} and Q {\displaystyle Q} , thereby producing a conclusion with the conjunction operator from premises that do not contain it. Conversely, the rule of conjunction elimination asserts that one can infer P {\displaystyle P} from P ∧ Q {\displaystyle P\land Q} , thereby producing a conclusion that no longer includes the conjunction operator. Similar rules of inference are disjunction introduction and elimination, implication introduction and elimination, negation introduction and elimination, and biconditional introduction and elimination. As a result, systems of natural deduction usually include many rules of inference. Hilbert systems, by contrast, aim to provide a minimal and efficient framework of logical reasoning by including as few rules of inference as possible. Many Hilbert systems only have modus ponens as the sole rule of inference. To ensure that all theorems can be deduced from this minimal foundation, they introduce axiom schemes. An axiom scheme is a template to create axioms or true statements. It uses metavariables, which are placeholders that can be replaced by specific terms or formulas to generate an infinite number of true statements. 
For example, propositional logic can be defined with the following three axiom schemes: (1) P → ( Q → P ) {\displaystyle P\to (Q\to P)} , (2) ( P → ( Q → R ) ) → ( ( P → Q ) → ( P → R ) ) {\displaystyle (P\to (Q\to R))\to ((P\to Q)\to (P\to R))} , and (3) ( ¬ P → ¬ Q ) → ( Q → P ) {\displaystyle (\lnot P\to \lnot Q)\to (Q\to P)} . To formulate proofs, logicians create new statements from axiom schemes and then apply modus ponens to these statements to derive conclusions. Compared to natural deduction, this procedure tends to be less intuitive since its heavy reliance on symbolic manipulation can obscure the underlying logical reasoning. Sequent calculi, another approach, introduce sequents as formal representations of arguments. A sequent has the form A 1 , … , A m ⊢ B 1 , … , B n {\displaystyle A_{1},\dots ,A_{m}\vdash B_{1},\dots ,B_{n}} , where A i {\displaystyle A_{i}} and B i {\displaystyle B_{i}} stand for propositions. Sequents are conditional assertions stating that at least one B i {\displaystyle B_{i}} is true if all A i {\displaystyle A_{i}} are true. Rules of inference operate on sequents to produce additional sequents. Sequent calculi define two rules of inference for each logical operator: one to introduce it on the left side of a sequent and another to introduce it on the right side. For example, through the rule for introducing the operator ¬ {\displaystyle \lnot } on the left side, one can infer ¬ R , P ⊢ Q {\displaystyle \lnot R,P\vdash Q} from P ⊢ Q , R {\displaystyle P\vdash Q,R} . The cut rule, an additional rule of inference, makes it possible to simplify sequents by removing certain propositions. == Formal fallacies == While rules of inference describe valid patterns of deductive reasoning, formal fallacies are invalid argument forms that involve logical errors. The premises of a formal fallacy do not properly support its conclusion: the conclusion can be false even if all premises are true. 
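Every instance of the three axiom schemes listed above is a tautology, which can be confirmed by brute force over all truth-value assignments. A small sketch (the encoding of formulas as Python functions is an illustration added here):

```python
from itertools import product

def implies(p, q): return (not p) or q

# The three axiom schemes of the Hilbert system described above,
# with metavariables P, Q, R modelled as Boolean arguments.
def axiom1(p, q):    return implies(p, implies(q, p))
def axiom2(p, q, r): return implies(implies(p, implies(q, r)),
                                    implies(implies(p, q), implies(p, r)))
def axiom3(p, q):    return implies(implies(not p, not q), implies(q, p))

# Each scheme yields only true statements, whatever replaces its metavariables.
assert all(axiom1(p, q) for p, q in product([True, False], repeat=2))
assert all(axiom2(p, q, r) for p, q, r in product([True, False], repeat=3))
assert all(axiom3(p, q) for p, q in product([True, False], repeat=2))
```

This check shows the axioms are sound; deriving all tautologies from them using only modus ponens is the (harder) completeness direction.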
Formal fallacies often mimic the structure of valid rules of inference and can thereby mislead people into unknowingly committing them and accepting their conclusions. The formal fallacy of affirming the consequent concludes P {\displaystyle P} from the premises P → Q {\displaystyle P\to Q} and Q {\displaystyle Q} , as in the argument "If Leo is a cat, then Leo is an animal. Leo is an animal. Therefore, Leo is a cat." This fallacy resembles valid inferences following modus ponens, with the key difference that the fallacy swaps the second premise and the conclusion. The formal fallacy of denying the antecedent concludes ¬ Q {\displaystyle \lnot Q} from the premises P → Q {\displaystyle P\to Q} and ¬ P {\displaystyle \lnot P} , as in the argument "If Laya saw the movie, then Laya had fun. Laya did not see the movie. Therefore, Laya did not have fun." This fallacy resembles valid inferences following modus tollens, with the key difference that the fallacy swaps the second premise and the conclusion. Other formal fallacies include affirming a disjunct, the existential fallacy, and the fallacy of the undistributed middle. == In various fields == Rules of inference are relevant to many fields, especially the formal sciences, such as mathematics and computer science, where they are used to prove theorems. Mathematical proofs often start with a set of axioms to describe the logical relationships between mathematical constructs. To establish theorems, mathematicians apply rules of inference to these axioms, aiming to demonstrate that the theorems are logical consequences. Mathematical logic, a subfield of mathematics and logic, uses mathematical methods and frameworks to study rules of inference and other logical concepts. Computer science also relies on deductive reasoning, employing rules of inference to establish theorems and validate algorithms. 
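The contrast between these fallacies and their valid counterparts can be demonstrated mechanically: an argument form is invalid exactly when some truth-value assignment makes all premises true and the conclusion false. A brute-force sketch (function names are illustrative):

```python
from itertools import product

def implies(p, q): return (not p) or q

# Return every assignment that makes all premises true and the conclusion
# false -- i.e. every counterexample to the argument form.
def counterexamples(premises, conclusion):
    return [(p, q) for p, q in product([True, False], repeat=2)
            if all(f(p, q) for f in premises) and not conclusion(p, q)]

# Affirming the consequent: from P -> Q and Q, conclude P.
affirming_consequent = counterexamples(
    [lambda p, q: implies(p, q), lambda p, q: q],  # premises
    lambda p, q: p)                                # conclusion
assert affirming_consequent == [(False, True)]  # Leo: an animal, not a cat

# Modus ponens (premises P -> Q and P, conclusion Q) has no counterexample.
modus_ponens = counterexamples(
    [lambda p, q: implies(p, q), lambda p, q: p],
    lambda p, q: q)
assert modus_ponens == []
```

The single row (P false, Q true) is precisely the Leo case from the example: the conclusion "Leo is a cat" fails even though both premises hold.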
Logic programming frameworks, such as Prolog, allow developers to represent knowledge and use computation to draw inferences and solve problems. These frameworks often include an automated theorem prover, a program that uses rules of inference to generate or verify proofs automatically. Expert systems utilize automated reasoning to simulate the decision-making processes of human experts in specific fields, such as medical diagnosis, and assist in complex problem-solving tasks. They have a knowledge base to represent the facts and rules of the field and use an inference engine to extract relevant information and respond to user queries. Rules of inference are central to the philosophy of logic regarding the contrast between deductive-theoretic and model-theoretic conceptions of logical consequence. Logical consequence, a fundamental concept in logic, is the relation between the premises of a deductively valid argument and its conclusion. Conceptions of logical consequence explain the nature of this relation and the conditions under which it exists. The deductive-theoretic conception relies on rules of inference, arguing that logical consequence means that the conclusion can be deduced from the premises through a series of inferential steps. The model-theoretic conception, by contrast, focuses on how the non-logical vocabulary of statements can be interpreted. According to this view, logical consequence means that no counterexamples are possible: under no interpretation are the premises true and the conclusion false. Cognitive psychologists study mental processes, including logical reasoning. They are interested in how humans use rules of inference to draw conclusions, examining the factors that influence correctness and efficiency. They observe that humans are better at using some rules of inference than others. For example, the rate of successful inferences is higher for modus ponens than for modus tollens. 
A related topic focuses on biases that lead individuals to mistake formal fallacies for valid arguments. For instance, fallacies of the types affirming the consequent and denying the antecedent are often mistakenly accepted as valid. The assessment of arguments also depends on the concrete meaning of the propositions: individuals are more likely to accept a fallacy if its conclusion sounds plausible. == See also == Immediate inference Inference objection Law of thought List of rules of inference Logical truth Structural rule
Wikipedia/Inference_rule
A propositional directed acyclic graph (PDAG) is a data structure that is used to represent a Boolean function. A Boolean function can be represented as a rooted, directed acyclic graph of the following form: Leaves are labeled with ⊤ {\displaystyle \top } (true), ⊥ {\displaystyle \bot } (false), or a Boolean variable. Non-leaves are △ {\displaystyle \bigtriangleup } (logical and), ▽ {\displaystyle \bigtriangledown } (logical or) and ◊ {\displaystyle \Diamond } (logical not). △ {\displaystyle \bigtriangleup } - and ▽ {\displaystyle \bigtriangledown } -nodes have at least one child. ◊ {\displaystyle \Diamond } -nodes have exactly one child. Leaves labeled with ⊤ {\displaystyle \top } ( ⊥ {\displaystyle \bot } ) represent the constant Boolean function which always evaluates to 1 (0). A leaf labeled with a Boolean variable x {\displaystyle x} is interpreted as the assignment x = 1 {\displaystyle x=1} , i.e. it represents the Boolean function which evaluates to 1 if and only if x = 1 {\displaystyle x=1} . The Boolean function represented by a △ {\displaystyle \bigtriangleup } -node is the one that evaluates to 1 if and only if the Boolean functions of all its children evaluate to 1. Similarly, a ▽ {\displaystyle \bigtriangledown } -node represents the Boolean function that evaluates to 1 if and only if the Boolean function of at least one child evaluates to 1. Finally, a ◊ {\displaystyle \Diamond } -node represents the complementary Boolean function of its child, i.e. the one that evaluates to 1 if and only if the Boolean function of its child evaluates to 0. == PDAG, BDD, and NNF == Every binary decision diagram (BDD) and every negation normal form (NNF) is also a PDAG with some particular properties. For example, the Boolean function f(x1, x2, x3) = -x1 * -x2 * -x3 + x1 * x2 + x2 * x3 can be represented as a BDD, an NNF, and a PDAG. == See also == Data structure Boolean satisfiability problem Proposition Boolean circuit == References == M. Wachter & R.
Haenni, "Propositional DAGs: a New Graph-Based Language for Representing Boolean Functions", KR'06, 10th International Conference on Principles of Knowledge Representation and Reasoning, Lake District, UK, 2006. M. Wachter & R. Haenni, "Probabilistic Equivalence Checking with Propositional DAGs", Technical Report iam-2006-001, Institute of Computer Science and Applied Mathematics, University of Bern, Switzerland, 2006. M. Wachter, R. Haenni & J. Jonczy, "Reliability and Diagnostics of Modular Systems: a New Probabilistic Approach", DX'06, 18th International Workshop on Principles of Diagnosis, Peñaranda de Duero, Burgos, Spain, 2006.
Wikipedia/Propositional_directed_acyclic_graph
In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation-complete theorem-proving technique for sentences in propositional logic and first-order logic. For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the (complement of the) Boolean satisfiability problem. For first-order logic, resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic, providing a more practical method than one following from Gödel's completeness theorem. The resolution rule can be traced back to Davis and Putnam (1960); however, their algorithm required trying all ground instances of the given formula. This source of combinatorial explosion was eliminated in 1965 by John Alan Robinson's syntactical unification algorithm, which allowed one to instantiate the formula during the proof "on demand" just as far as needed to keep refutation completeness. The clause produced by a resolution rule is sometimes called a resolvent. == Resolution in propositional logic == === Resolution rule === The resolution rule in propositional logic is a single valid inference rule that produces a new clause implied by two clauses containing complementary literals. A literal is a propositional variable or the negation of a propositional variable. Two literals are said to be complements if one is the negation of the other (in the following, ¬ c {\displaystyle \lnot c} is taken to be the complement to c {\displaystyle c} ). The resulting clause contains all the literals that do not have complements. 
Formally: a 1 ∨ a 2 ∨ ⋯ ∨ c , b 1 ∨ b 2 ∨ ⋯ ∨ ¬ c a 1 ∨ a 2 ∨ ⋯ ∨ b 1 ∨ b 2 ∨ ⋯ {\displaystyle {\frac {a_{1}\lor a_{2}\lor \cdots \lor c,\quad b_{1}\lor b_{2}\lor \cdots \lor \neg c}{a_{1}\lor a_{2}\lor \cdots \lor b_{1}\lor b_{2}\lor \cdots }}} where all a i {\displaystyle a_{i}} , b i {\displaystyle b_{i}} , and c {\displaystyle c} are literals, the dividing line stands for "entails". The above may also be written as: ( ¬ a 1 ∧ ¬ a 2 ∧ ⋯ ) → c , c → ( b 1 ∨ b 2 ∨ ⋯ ) ( ¬ a 1 ∧ ¬ a 2 ∧ ⋯ ) → ( b 1 ∨ b 2 ∨ ⋯ ) {\displaystyle {\frac {(\neg a_{1}\land \neg a_{2}\land \cdots )\rightarrow c,\quad c\rightarrow (b_{1}\lor b_{2}\lor \cdots )}{(\neg a_{1}\land \neg a_{2}\land \cdots )\rightarrow (b_{1}\lor b_{2}\lor \cdots )}}} Or schematically as: Γ 1 ∪ { ℓ } Γ 2 ∪ { ℓ ¯ } Γ 1 ∪ Γ 2 | ℓ | {\displaystyle {\frac {\Gamma _{1}\cup \left\{\ell \right\}\,\,\,\,\Gamma _{2}\cup \left\{{\overline {\ell }}\right\}}{\Gamma _{1}\cup \Gamma _{2}}}|\ell |} We have the following terminology: The clauses Γ 1 ∪ { ℓ } {\displaystyle \Gamma _{1}\cup \left\{\ell \right\}} and Γ 2 ∪ { ℓ ¯ } {\displaystyle \Gamma _{2}\cup \left\{{\overline {\ell }}\right\}} are the inference's premises Γ 1 ∪ Γ 2 {\displaystyle \Gamma _{1}\cup \Gamma _{2}} (the resolvent of the premises) is its conclusion. The literal ℓ {\displaystyle \ell } is the left resolved literal, The literal ℓ ¯ {\displaystyle {\overline {\ell }}} is the right resolved literal, | ℓ | {\displaystyle |\ell |} is the resolved atom or pivot. The clause produced by the resolution rule is called the resolvent of the two input clauses. It is the principle of consensus applied to clauses rather than terms. When the two clauses contain more than one pair of complementary literals, the resolution rule can be applied (independently) for each such pair; however, the result is always a tautology. Modus ponens can be seen as a special case of resolution (of a one-literal clause and a two-literal clause). 
p → q , p q {\displaystyle {\frac {p\rightarrow q,\quad p}{q}}} is equivalent to ¬ p ∨ q , p q {\displaystyle {\frac {\lnot p\lor q,\quad p}{q}}} === A resolution technique === When coupled with a complete search algorithm, the resolution rule yields a sound and complete algorithm for deciding the satisfiability of a propositional formula, and, by extension, the validity of a sentence under a set of axioms. This resolution technique uses proof by contradiction and is based on the fact that any sentence in propositional logic can be transformed into an equivalent sentence in conjunctive normal form. The steps are as follows. All sentences in the knowledge base and the negation of the sentence to be proved (the conjecture) are conjunctively connected. The resulting sentence is transformed into a conjunctive normal form with the conjuncts viewed as elements in a set, S, of clauses. For example, ( A 1 ∨ A 2 ) ∧ ( B 1 ∨ B 2 ∨ B 3 ) ∧ ( C 1 ) {\displaystyle (A_{1}\lor A_{2})\land (B_{1}\lor B_{2}\lor B_{3})\land (C_{1})} gives rise to the set S = { A 1 ∨ A 2 , B 1 ∨ B 2 ∨ B 3 , C 1 } {\displaystyle S=\{A_{1}\lor A_{2},B_{1}\lor B_{2}\lor B_{3},C_{1}\}} . The resolution rule is applied to all possible pairs of clauses that contain complementary literals. After each application of the resolution rule, the resulting sentence is simplified by removing repeated literals. If the clause contains complementary literals, it is discarded (as a tautology). If not, and if it is not yet present in the clause set S, it is added to S, and is considered for further resolution inferences. If after applying a resolution rule the empty clause is derived, the original formula is unsatisfiable (or contradictory), and hence it can be concluded that the initial conjecture follows from the axioms. If, on the other hand, the empty clause cannot be derived, and the resolution rule cannot be applied to derive any more new clauses, the conjecture is not a theorem of the original knowledge base. 
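The procedure just described can be sketched as a naive saturation loop. The encoding below (clauses as frozensets of (atom, polarity) literals) is a convenience chosen for this illustration, not part of the original presentation:

```python
from itertools import combinations

# A clause is a frozenset of literals; a literal is (atom, polarity),
# e.g. ("p", True) for p and ("p", False) for not-p.
def resolvents(c1, c2):
    out = set()
    for (atom, sign) in c1:
        if (atom, not sign) in c2:  # complementary pair: resolve on it
            r = (c1 - {(atom, sign)}) | (c2 - {(atom, not sign)})
            # Discard tautologies (clauses containing p and not-p).
            if not any((a, not s) in r for (a, s) in r):
                out.add(frozenset(r))
    return out

def unsatisfiable(clauses):
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            new |= resolvents(c1, c2)
        if frozenset() in new:
            return True   # empty clause derived: input is contradictory
        if new <= clauses:
            return False  # saturated without deriving the empty clause
        clauses |= new

# Knowledge base {p, p -> q} plus the negated conjecture not-q:
# refuting this set proves q from the knowledge base.
kb = [frozenset({("p", True)}),
      frozenset({("p", False), ("q", True)}),   # p -> q in clause form
      frozenset({("q", False)})]                # negated conjecture
assert unsatisfiable(kb)
assert not unsatisfiable([frozenset({("p", True)})])
```

Real implementations replace the blind pairwise loop with clause indexing and subsumption checks, but the termination argument is the same: over a finite set of atoms, only finitely many clauses exist, so saturation must halt.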
One instance of this algorithm is the original Davis–Putnam algorithm, which was later refined into the DPLL algorithm, which removed the need for explicit representation of the resolvents. This description of the resolution technique uses a set S as the underlying data structure to represent resolution derivations. Lists, trees, and directed acyclic graphs are other possible and common alternatives. Tree representations are more faithful to the fact that the resolution rule is binary. Together with a sequent notation for clauses, a tree representation also makes it easy to see how the resolution rule is related to a special case of the cut-rule, restricted to atomic cut-formulas. However, tree representations are not as compact as set or list representations, because they explicitly show redundant subderivations of clauses that are used more than once in the derivation of the empty clause. Graph representations can be as compact in the number of clauses as list representations and they also store structural information regarding which clauses were resolved to derive each resolvent. ==== A simple example ==== a ∨ b , ¬ a ∨ c b ∨ c {\displaystyle {\frac {a\vee b,\quad \neg a\vee c}{b\vee c}}} In plain language: Suppose a {\displaystyle a} is false. In order for the premise a ∨ b {\displaystyle a\vee b} to be true, b {\displaystyle b} must be true. Alternatively, suppose a {\displaystyle a} is true. In order for the premise ¬ a ∨ c {\displaystyle \neg a\vee c} to be true, c {\displaystyle c} must be true. Therefore, regardless of whether a {\displaystyle a} is true or false, if both premises hold, then the conclusion b ∨ c {\displaystyle b\vee c} is true.
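The same entailment can be confirmed semantically: every truth assignment that satisfies both premises also satisfies b ∨ c. A brute-force truth-table check (an illustrative sketch):

```python
from itertools import product

# Exhaustive check that {a or b, (not a) or c} entails b or c: collect
# every assignment satisfying both premises but falsifying the conclusion.
violations = [(a, b, c) for a, b, c in product([True, False], repeat=3)
              if (a or b) and ((not a) or c) and not (b or c)]
assert violations == []  # no counterexample among the 8 assignments
```

This semantic check and the syntactic resolution step agree, as soundness of the resolution rule guarantees.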
== Resolution in first-order logic == The resolution rule can be generalized to first-order logic as follows: Γ 1 ∪ { L 1 } Γ 2 ∪ { L 2 } ( Γ 1 ∪ Γ 2 ) ϕ ϕ {\displaystyle {\frac {\Gamma _{1}\cup \left\{L_{1}\right\}\,\,\,\,\Gamma _{2}\cup \left\{L_{2}\right\}}{(\Gamma _{1}\cup \Gamma _{2})\phi }}\phi } where ϕ {\displaystyle \phi } is a most general unifier of L 1 {\displaystyle L_{1}} and L 2 ¯ {\displaystyle {\overline {L_{2}}}} , and Γ 1 {\displaystyle \Gamma _{1}} and Γ 2 {\displaystyle \Gamma _{2}} have no common variables. === Example === This rule can be applied to the clauses P ( x ) , Q ( x ) {\displaystyle P(x),Q(x)} and ¬ P ( b ) {\displaystyle \neg P(b)} , taking [ b / x ] {\displaystyle [b/x]} as unifier. Here x is a variable and b is a constant. P ( x ) , Q ( x ) ¬ P ( b ) Q ( b ) [ b / x ] {\displaystyle {\frac {P(x),Q(x)\,\,\,\,\neg P(b)}{Q(b)}}[b/x]} Here we see that: The clauses P ( x ) , Q ( x ) {\displaystyle P(x),Q(x)} and ¬ P ( b ) {\displaystyle \neg P(b)} are the inference's premises Q ( b ) {\displaystyle Q(b)} (the resolvent of the premises) is its conclusion. The literal P ( x ) {\displaystyle P(x)} is the left resolved literal, The literal ¬ P ( b ) {\displaystyle \neg P(b)} is the right resolved literal, P {\displaystyle P} is the resolved atom or pivot. [ b / x ] {\displaystyle [b/x]} is the most general unifier of the resolved literals. === Informal explanation === In first-order logic, resolution condenses the traditional syllogisms of logical inference down to a single rule. To understand how resolution works, consider the following example syllogism of term logic: All Greeks are Europeans. Homer is a Greek. Therefore, Homer is a European. Or, more generally: ∀ x . P ( x ) ⇒ Q ( x ) {\displaystyle \forall x.P(x)\Rightarrow Q(x)} P ( a ) {\displaystyle P(a)} Therefore, Q ( a ) {\displaystyle Q(a)} To recast the reasoning using the resolution technique, first the clauses must be converted to conjunctive normal form (CNF).
In this form, all quantification becomes implicit: universal quantifiers on variables (X, Y, ...) are simply omitted as understood, while existentially-quantified variables are replaced by Skolem functions. ¬ P ( x ) ∨ Q ( x ) {\displaystyle \neg P(x)\vee Q(x)} P ( a ) {\displaystyle P(a)} Therefore, Q ( a ) {\displaystyle Q(a)} So the question is, how does the resolution technique derive the last clause from the first two? The rule is simple: Find two clauses containing the same predicate, where it is negated in one clause but not in the other. Perform a unification on the two predicates. (If the unification fails, you made a bad choice of predicates. Go back to the previous step and try again.) If any unbound variables which were bound in the unified predicates also occur in other predicates in the two clauses, replace them with their bound values (terms) there as well. Discard the unified predicates, and combine the remaining ones from the two clauses into a new clause, also joined by the "∨" operator. To apply this rule to the above example, we find the predicate P occurs in negated form ¬P(X) in the first clause, and in non-negated form P(a) in the second clause. X is an unbound variable, while a is a bound value (term). Unifying the two produces the substitution X ↦ a Discarding the unified predicates, and applying this substitution to the remaining predicates (just Q(X), in this case), produces the conclusion: Q(a) For another example, consider the syllogistic form All Cretans are islanders. All islanders are liars. Therefore all Cretans are liars. Or more generally, ∀X P(X) → Q(X) ∀X Q(X) → R(X) Therefore, ∀X P(X) → R(X) In CNF, the antecedents become: ¬P(X) ∨ Q(X) ¬Q(Y) ∨ R(Y) (The variable in the second clause was renamed to make it clear that variables in different clauses are distinct.) Now, unifying Q(X) in the first clause with ¬Q(Y) in the second clause means that X and Y become the same variable anyway. 
Substituting this into the remaining clauses and combining them gives the conclusion: ¬P(X) ∨ R(X) === Factoring === The resolution rule, as defined by Robinson, also incorporated factoring, which unifies two literals in the same clause, before or during the application of resolution as defined above. The resulting inference rule is refutation-complete, in that a set of clauses is unsatisfiable if and only if there exists a derivation of the empty clause using only resolution, enhanced by factoring. An example for an unsatisfiable clause set for which factoring is needed to derive the empty clause is: ( 1 ) : P ( u ) ∨ P ( f ( u ) ) ( 2 ) : ¬ P ( v ) ∨ P ( f ( w ) ) ( 3 ) : ¬ P ( x ) ∨ ¬ P ( f ( x ) ) {\displaystyle {\begin{array}{rlcl}(1):&P(u)&\lor &P(f(u))\\(2):&\lnot P(v)&\lor &P(f(w))\\(3):&\lnot P(x)&\lor &\lnot P(f(x))\\\end{array}}} Since each clause consists of two literals, so does each possible resolvent. Therefore, by resolution without factoring, the empty clause can never be obtained. Using factoring, it can be obtained e.g. as follows: ( 4 ) : P ( u ) ∨ P ( f ( w ) ) by resolving (1) and (2), with v = f ( u ) ( 5 ) : P ( f ( w ) ) by factoring (4), with u = f ( w ) ( 6 ) : ¬ P ( f ( f ( w ′ ) ) ) by resolving (5) and (3), with w = w ′ , x = f ( w ′ ) ( 7 ) : false by resolving (5) and (6), with w = f ( w ′ ) {\displaystyle {\begin{array}{rll}(4):&P(u)\lor P(f(w))&{\text{by resolving (1) and (2), with }}v=f(u)\\(5):&P(f(w))&{\text{by factoring (4), with }}u=f(w)\\(6):&\lnot P(f(f(w')))&{\text{by resolving (5) and (3), with }}w=w',x=f(w')\\(7):&{\text{false}}&{\text{by resolving (5) and (6), with }}w=f(w')\\\end{array}}} == Non-clausal resolution == Generalizations of the above resolution rule have been devised that do not require the originating formulas to be in clausal normal form. These techniques are useful mainly in interactive theorem proving where it is important to preserve human readability of intermediate result formulas. 
Besides, they avoid combinatorial explosion during transformation to clause-form,: 98  and sometimes save resolution steps.: 425  === Non-clausal resolution in propositional logic === For propositional logic, Murray: 18  and Manna and Waldinger: 98  use the rule F [ p ] G [ p ] F [ true ] ∨ G [ false ] {\displaystyle {\begin{array}{c}F[p]\;\;\;\;\;\;\;\;\;\;G[p]\\\hline F[{\textit {true}}]\lor G[{\textit {false}}]\\\end{array}}} , where p {\displaystyle p} denotes an arbitrary formula, F [ p ] {\displaystyle F[p]} denotes a formula containing p {\displaystyle p} as a subformula, and F [ true ] {\displaystyle F[{\textit {true}}]} is built by replacing in F [ p ] {\displaystyle F[p]} every occurrence of p {\displaystyle p} by true {\displaystyle {\textit {true}}} ; likewise for G {\displaystyle G} . The resolvent F [ true ] ∨ G [ false ] {\displaystyle F[{\textit {true}}]\lor G[{\textit {false}}]} is intended to be simplified using rules like q ∧ true ⟹ q {\displaystyle q\land {\textit {true}}\implies q} , etc. In order to prevent generating useless trivial resolvents, the rule shall be applied only when p {\displaystyle p} has at least one "negative" and "positive" occurrence in F {\displaystyle F} and G {\displaystyle G} , respectively. Murray has shown that this rule is complete if augmented by appropriate logical transformation rules.: 103  Traugott uses the rule F [ p + , p − ] G [ p ] F [ G [ true ] , ¬ G [ false ] ] {\displaystyle {\begin{array}{c}F[p^{+},p^{-}]\;\;\;\;\;\;\;\;G[p]\\\hline F[G[{\textit {true}}],\lnot G[{\textit {false}}]]\\\end{array}}} , where the exponents of p {\displaystyle p} indicate the polarity of its occurrences. 
While G [ true ] {\displaystyle G[{\textit {true}}]} and G [ false ] {\displaystyle G[{\textit {false}}]} are built as before, the formula F [ G [ true ] , ¬ G [ false ] ] {\displaystyle F[G[{\textit {true}}],\lnot G[{\textit {false}}]]} is obtained by replacing each positive and each negative occurrence of p {\displaystyle p} in F {\displaystyle F} with G [ true ] {\displaystyle G[{\textit {true}}]} and G [ false ] {\displaystyle G[{\textit {false}}]} , respectively. Similar to Murray's approach, appropriate simplifying transformations are to be applied to the resolvent. Traugott proved his rule to be complete, provided ∧ , ∨ , → , ¬ {\displaystyle \land ,\lor ,\rightarrow ,\lnot } are the only connectives used in formulas.: 398–400  Traugott's resolvent is stronger than Murray's.: 395  Moreover, it does not introduce new binary junctors, thus avoiding a tendency towards clausal form in repeated resolution. However, formulas may grow longer when a small p {\displaystyle p} is replaced multiple times with a larger G [ true ] {\displaystyle G[{\textit {true}}]} and/or G [ false ] {\displaystyle G[{\textit {false}}]} .: 398  === Propositional non-clausal resolution example === As an example, starting from the user-given assumptions ( 1 ) : a → b ∧ c ( 2 ) : c → d ( 3 ) : b ∧ d → e ( 4 ) : ¬ ( a → e ) {\displaystyle {\begin{array}{rccc}(1):&a&\rightarrow &b\land c\\(2):&c&\rightarrow &d\\(3):&b\land d&\rightarrow &e\\(4):&\lnot (a&\rightarrow &e)\\\end{array}}} the Murray rule can be used as follows to infer a contradiction: ( 5 ) : ( true → d ) ∨ ( a → b ∧ false ) ⟹ d ∨ ¬ a from (2) and (1), with p = c ( 6 ) : ( b ∧ true → e ) ∨ ( false ∨ ¬ a ) ⟹ ( b → e ) ∨ ¬ a from (3) and (5), with p = d ( 7 ) : ( ( true → e ) ∨ ¬ a ) ∨ ( a → false ∧ c ) ⟹ e ∨ ¬ a ∨ ¬ a from (6) and (1), with p = b ( 8 ) : ( e ∨ ¬ true ∨ ¬ true ) ∨ ¬ ( false → e ) ⟹ e from (7) and (4), with p = a ( 9 ) : ¬ ( a → true ) ∨ false ⟹ false from (4) and (8), with p = e {\displaystyle 
{\begin{array}{rrclccl}(5):&({\textit {true}}\rightarrow d)&\lor &(a\rightarrow b\land {\textit {false}})&\implies &d\lor \lnot a&{\mbox{from (2) and (1), with }}p=c\\(6):&(b\land {\textit {true}}\rightarrow e)&\lor &({\textit {false}}\lor \lnot a)&\implies &(b\rightarrow e)\lor \lnot a&{\mbox{from (3) and (5), with }}p=d\\(7):&(({\textit {true}}\rightarrow e)\lor \lnot a)&\lor &(a\rightarrow {\textit {false}}\land c)&\implies &e\lor \lnot a\lor \lnot a&{\mbox{from (6) and (1), with }}p=b\\(8):&(e\lor \lnot {\textit {true}}\lor \lnot {\textit {true}})&\lor &\lnot ({\textit {false}}\rightarrow e)&\implies &e&{\mbox{from (7) and (4), with }}p=a\\(9):&\lnot (a\rightarrow {\textit {true}})&\lor &{\textit {false}}&\implies &{\textit {false}}&{\mbox{from (4) and (8), with }}p=e\\\end{array}}} For the same purpose, the Traugott rule can be used as follows :: 397  ( 10 ) : a → b ∧ ( true → d ) ⟹ a → b ∧ d from (1) and (2), with p = c ( 11 ) : a → ( true → e ) ⟹ a → e from (10) and (3), with p = ( b ∧ d ) ( 12 ) : ¬ true ⟹ false from (11) and (4), with p = ( a → e ) {\displaystyle {\begin{array}{rcccl}(10):&a\rightarrow b\land ({\textit {true}}\rightarrow d)&\implies &a\rightarrow b\land d&{\mbox{from (1) and (2), with }}p=c\\(11):&a\rightarrow ({\textit {true}}\rightarrow e)&\implies &a\rightarrow e&{\mbox{from (10) and (3), with }}p=(b\land d)\\(12):&\lnot {\textit {true}}&\implies &{\textit {false}}&{\mbox{from (11) and (4), with }}p=(a\rightarrow e)\\\end{array}}} From a comparison of both deductions, the following issues can be seen: Traugott's rule may yield a sharper resolvent: compare (5) and (10), which both resolve (1) and (2) on p = c {\displaystyle p=c} . Murray's rule introduced 3 new disjunction symbols: in (5), (6), and (7), while Traugott's rule did not introduce any new symbol; in this sense, Traugott's intermediate formulas resemble the user's style more closely than Murray's. 
Due to the latter issue, Traugott's rule can take advantage of the implication in assumption (4), using as p {\displaystyle p} the non-atomic formula a → e {\displaystyle a\rightarrow e} in step (12). Using Murray's rule, the semantically equivalent formula e ∨ ¬ a ∨ ¬ a {\displaystyle e\lor \lnot a\lor \lnot a} was obtained as (7); however, it could not be used as p {\displaystyle p} due to its syntactic form. === Non-clausal resolution in first-order logic === For first-order predicate logic, Murray's rule is generalized to allow distinct, but unifiable, subformulas p 1 {\displaystyle p_{1}} and p 2 {\displaystyle p_{2}} of F {\displaystyle F} and G {\displaystyle G} , respectively. If ϕ {\displaystyle \phi } is the most general unifier of p 1 {\displaystyle p_{1}} and p 2 {\displaystyle p_{2}} , then the generalized resolvent is F ϕ [ true ] ∨ G ϕ [ false ] {\displaystyle F\phi [{\textit {true}}]\lor G\phi [{\textit {false}}]} . While the rule remains sound if a more specific substitution ϕ {\displaystyle \phi } is used, no such rule applications are needed to achieve completeness. Traugott's rule is generalized to allow several pairwise distinct subformulas p 1 , … , p m {\displaystyle p_{1},\ldots ,p_{m}} of F {\displaystyle F} and p m + 1 , … , p n {\displaystyle p_{m+1},\ldots ,p_{n}} of G {\displaystyle G} , as long as p 1 , … , p n {\displaystyle p_{1},\ldots ,p_{n}} have a common most general unifier, say ϕ {\displaystyle \phi } . The generalized resolvent is obtained after applying ϕ {\displaystyle \phi } to the parent formulas, thus making the propositional version applicable. Traugott's completeness proof relies on the assumption that this fully general rule is used;: 401  it is not clear whether his rule would remain complete if restricted to p 1 = ⋯ = p m {\displaystyle p_{1}=\cdots =p_{m}} and p m + 1 = ⋯ = p n {\displaystyle p_{m+1}=\cdots =p_{n}} .
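Murray's propositional rule lends itself to a short executable sketch. The tuple encoding of formulas, the simplifier, and all function names below are illustrative assumptions introduced here, not part of Murray's system; the sketch replays step (5) of the example above and also confirms semantically that assumptions (1)-(4) admit no model.

```python
# Sketch of Murray's non-clausal resolution rule for propositional logic.
# Formulas are nested tuples: ('not', X), ('and', X, Y), ('or', X, Y),
# ('imp', X, Y); atoms are strings; truth constants are Python True/False.
# This encoding and all helper names are illustrative assumptions.
from itertools import product

def subst(f, p, val):
    """Replace every occurrence of the subformula p in f by the constant val."""
    if f == p:
        return val
    if isinstance(f, tuple):
        return (f[0],) + tuple(subst(a, p, val) for a in f[1:])
    return f

def simplify(f):
    """Apply truth-constant simplification rules bottom-up."""
    if not isinstance(f, tuple):
        return f
    op, args = f[0], [simplify(a) for a in f[1:]]
    if op == 'not':
        (x,) = args
        return (not x) if isinstance(x, bool) else ('not', x)
    x, y = args
    if op == 'and':
        if x is True: return y
        if y is True: return x
        if x is False or y is False: return False
        return ('and', x, y)
    if op == 'or':
        if x is False: return y
        if y is False: return x
        if x is True or y is True: return True
        return ('or', x, y)
    if op == 'imp':
        if x is True: return y
        if x is False or y is True: return True
        if y is False: return simplify(('not', x))
        return ('imp', x, y)

def murray_resolvent(F, G, p):
    """The resolvent F[true] ∨ G[false], simplified."""
    return simplify(('or', subst(F, p, True), subst(G, p, False)))

def holds(f, v):
    """Evaluate formula f under the assignment dict v."""
    if isinstance(f, bool): return f
    if isinstance(f, str):  return v[f]
    op = f[0]
    if op == 'not': return not holds(f[1], v)
    if op == 'and': return holds(f[1], v) and holds(f[2], v)
    if op == 'or':  return holds(f[1], v) or holds(f[2], v)
    if op == 'imp': return (not holds(f[1], v)) or holds(f[2], v)

f1 = ('imp', 'a', ('and', 'b', 'c'))        # (1): a → b∧c
f2 = ('imp', 'c', 'd')                      # (2): c → d
f3 = ('imp', ('and', 'b', 'd'), 'e')        # (3): b∧d → e
f4 = ('not', ('imp', 'a', 'e'))             # (4): ¬(a → e)

# Step (5): resolving (2) and (1) on p = c yields d ∨ ¬a.
print(murray_resolvent(f2, f1, 'c'))        # ('or', 'd', ('not', 'a'))

# The refutations in the text are confirmed semantically:
# no assignment satisfies all four assumptions.
unsat = all(not all(holds(f, dict(zip('abcde', bits))) for f in (f1, f2, f3, f4))
            for bits in product([False, True], repeat=5))
print(unsat)                                # True
```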
== Paramodulation == Paramodulation is a related technique for reasoning on sets of clauses where the predicate symbol is equality. It generates all "equal" versions of clauses, except reflexive identities. The paramodulation operation takes a positive "from" clause, which must contain an equality literal. It then searches an "into" clause with a subterm that unifies with one side of the equality. The subterm is then replaced by the other side of the equality. The general aim of paramodulation is to reduce the system to atoms, reducing the size of the terms when substituting. == Implementations == CARINE GKC Otter Prover9 SNARK SPASS Vampire Logictools online prover == See also == Condensed detachment — an earlier version of resolution Inductive logic programming Inverse resolution Logic programming Method of analytic tableaux SLD resolution == Notes == == References == Robinson, J. Alan (1965). "A Machine-Oriented Logic Based on the Resolution Principle". Journal of the ACM. 12 (1): 23–41. doi:10.1145/321250.321253. S2CID 14389185. Leitsch, Alexander (1997). The Resolution Calculus. Texts in Theoretical Computer Science. An EATCS Series. Springer. ISBN 978-3-642-60605-2. Gallier, Jean H. (1986). Logic for Computer Science: Foundations of Automatic Theorem Proving. Harper & Row. Chang, Chin-Liang; Lee, Richard Char-Tung (1987). Symbolic Logic and Mechanical Theorem Proving. Academic Press. ISBN 0-12-170350-9. == External links == Alex Sakharov. "Resolution Principle". MathWorld. Alex Sakharov. "Resolution". MathWorld.
Wikipedia/Resolution_(logic)
In mathematical logic, a theory (also called a formal theory) is a set of sentences in a formal language. In most scenarios a deductive system is first understood from context, giving rise to a formal system that combines the language with deduction rules. An element ϕ ∈ T {\displaystyle \phi \in T} of a deductively closed theory T {\displaystyle T} is then called a theorem of the theory. In many deductive systems there is usually a subset Σ ⊆ T {\displaystyle \Sigma \subseteq T} that is called "the set of axioms" of the theory T {\displaystyle T} , in which case the deductive system is also called an "axiomatic system". By definition, every axiom is automatically a theorem. A first-order theory is a set of first-order sentences (theorems) recursively obtained by the inference rules of the system applied to the set of axioms. == General theories (as expressed in formal language) == When defining theories for foundational purposes, additional care must be taken, as normal set-theoretic language may not be appropriate. The construction of a theory begins by specifying a definite non-empty conceptual class E {\displaystyle {\mathcal {E}}} , the elements of which are called statements. These initial statements are often called the primitive elements or elementary statements of the theory—to distinguish them from other statements that may be derived from them. A theory T {\displaystyle {\mathcal {T}}} is a conceptual class consisting of certain of these elementary statements. The elementary statements that belong to T {\displaystyle {\mathcal {T}}} are called the elementary theorems of T {\displaystyle {\mathcal {T}}} and are said to be true. In this way, a theory can be seen as a way of designating a subset of E {\displaystyle {\mathcal {E}}} that contains only statements that are true. This general way of designating a theory stipulates that the truth of any of its elementary statements is not known without reference to T {\displaystyle {\mathcal {T}}} .
Thus the same elementary statement may be true with respect to one theory but false with respect to another. This is reminiscent of the case in ordinary language where statements such as "He is an honest person" cannot be judged true or false without interpreting who "he" is, and, for that matter, what an "honest person" is under this theory. === Subtheories and extensions === A theory S {\displaystyle {\mathcal {S}}} is a subtheory of a theory T {\displaystyle {\mathcal {T}}} if S {\displaystyle {\mathcal {S}}} is a subset of T {\displaystyle {\mathcal {T}}} . If T {\displaystyle {\mathcal {T}}} is a subset of S {\displaystyle {\mathcal {S}}} then S {\displaystyle {\mathcal {S}}} is called an extension or a supertheory of T {\displaystyle {\mathcal {T}}} . === Deductive theories === A theory is said to be a deductive theory if T {\displaystyle {\mathcal {T}}} is an inductive class, which is to say that its content is based on some formal deductive system and that some of its elementary statements are taken as axioms. In a deductive theory, any sentence that is a logical consequence of one or more of the axioms is also a sentence of that theory.
More formally, if ⊢ {\displaystyle \vdash } is a Tarski-style consequence relation, then T {\displaystyle {\mathcal {T}}} is closed under ⊢ {\displaystyle \vdash } (and so each of its theorems is a logical consequence of its axioms) if and only if, for all sentences ϕ {\displaystyle \phi } in the language of the theory T {\displaystyle {\mathcal {T}}} , if T ⊢ ϕ {\displaystyle {\mathcal {T}}\vdash \phi } , then ϕ ∈ T {\displaystyle \phi \in {\mathcal {T}}} ; or, equivalently, if T ′ {\displaystyle {\mathcal {T}}'} is a finite subset of T {\displaystyle {\mathcal {T}}} (possibly the set of axioms of T {\displaystyle {\mathcal {T}}} in the case of finitely axiomatizable theories) and T ′ ⊢ ϕ {\displaystyle {\mathcal {T}}'\vdash \phi } , then ϕ ∈ T {\displaystyle \phi \in {\mathcal {T}}} . === Consistency and completeness === A syntactically consistent theory is a theory from which not every sentence in the underlying language can be proven (with respect to some deductive system, which is usually clear from context). In a deductive system (such as first-order logic) that satisfies the principle of explosion, this is equivalent to requiring that there is no sentence φ such that both φ and its negation can be proven from the theory. A satisfiable theory is a theory that has a model. This means there is a structure M that satisfies every sentence in the theory. Any satisfiable theory is syntactically consistent, because the structure satisfying the theory will satisfy exactly one of φ and the negation of φ, for each sentence φ. A consistent theory is sometimes defined to be a syntactically consistent theory, and sometimes defined to be a satisfiable theory. For first-order logic, the most important case, it follows from the completeness theorem that the two meanings coincide.
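The claim that every satisfiable theory is syntactically consistent can be illustrated by brute force in the propositional case. The following sketch is only a toy illustration; the example theory over the atoms p, q, r and every helper name are assumptions made here, not taken from the article.

```python
# Brute-force illustration, for a small propositional theory, that
# "satisfiable implies consistent": a theory with a model cannot entail
# both a sentence and its negation. Theory and names are illustrative.
from itertools import product

ATOMS = ('p', 'q', 'r')

def holds(f, v):
    """Evaluate a formula (nested tuples over 'not'/'and'/'or'/'imp')."""
    if isinstance(f, str): return v[f]
    op = f[0]
    if op == 'not': return not holds(f[1], v)
    if op == 'and': return holds(f[1], v) and holds(f[2], v)
    if op == 'or':  return holds(f[1], v) or holds(f[2], v)
    if op == 'imp': return (not holds(f[1], v)) or holds(f[2], v)

def models(theory):
    """All truth assignments satisfying every sentence of the theory."""
    return [dict(zip(ATOMS, bits))
            for bits in product([False, True], repeat=len(ATOMS))
            if all(holds(f, dict(zip(ATOMS, bits))) for f in theory)]

def entails(theory, f):
    """Semantic consequence: f holds in every model of the theory."""
    return all(holds(f, v) for v in models(theory))

theory = [('imp', 'p', 'q'), 'p']        # axioms: p → q, and p

print(len(models(theory)) > 0)           # True: satisfiable, hence consistent
# Consistency: no candidate sentence is entailed together with its negation.
candidates = ['p', 'q', 'r', ('imp', 'q', 'r')]
print(any(entails(theory, f) and entails(theory, ('not', f))
          for f in candidates))          # False
# The theory is nevertheless incomplete: it decides neither r nor ¬r.
print(entails(theory, 'r') or entails(theory, ('not', 'r')))   # False
```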
In other logics, such as second-order logic, there are syntactically consistent theories that are not satisfiable, such as ω-inconsistent theories. A complete consistent theory (or just a complete theory) is a consistent theory T {\displaystyle {\mathcal {T}}} such that for every sentence φ in its language, either φ is provable from T {\displaystyle {\mathcal {T}}} or T {\displaystyle {\mathcal {T}}} ∪ {\displaystyle \cup } {φ} is inconsistent. For theories closed under logical consequence, this means that for every sentence φ, either φ or its negation is contained in the theory. An incomplete theory is a consistent theory that is not complete. (See also ω-consistent theory for a stronger notion of consistency.) === Interpretation of a theory === An interpretation of a theory is the relationship between a theory and some subject matter when there is a many-to-one correspondence between certain elementary statements of the theory, and certain statements related to the subject matter. If every elementary statement in the theory has a correspondent, it is called a full interpretation; otherwise it is called a partial interpretation. === Theories associated with a structure === Each structure has several associated theories. The complete theory of a structure A is the set of all first-order sentences over the signature of A that are satisfied by A. It is denoted by Th(A). More generally, the theory of K, a class of σ-structures, is the set of all first-order σ-sentences that are satisfied by all structures in K, and is denoted by Th(K). Clearly Th(A) = Th({A}). These notions can also be defined with respect to other logics. For each σ-structure A, there are several associated theories in a larger signature σ' that extends σ by adding one new constant symbol for each element of the domain of A. (If the new constant symbols are identified with the elements of A that they represent, σ' can be taken to be σ ∪ {\displaystyle \cup } A.)
The cardinality of σ' is thus the larger of the cardinality of σ and the cardinality of A. The diagram of A consists of all atomic or negated atomic σ'-sentences that are satisfied by A and is denoted by diagA. The positive diagram of A is the set of all atomic σ'-sentences that A satisfies. It is denoted by diag+A. The elementary diagram of A is the set eldiagA of all first-order σ'-sentences that are satisfied by A or, equivalently, the complete (first-order) theory of the natural expansion of A to the signature σ'. == First-order theories == A first-order theory Q S {\displaystyle {\mathcal {QS}}} is a set of sentences in a first-order formal language Q {\displaystyle {\mathcal {Q}}} . === Derivation in a first-order theory === There are many formal derivation ("proof") systems for first-order logic. These include Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method and resolution. === Syntactic consequence in a first-order theory === A formula A is a syntactic consequence of a first-order theory Q S {\displaystyle {\mathcal {QS}}} if there is a derivation of A using only formulas in Q S {\displaystyle {\mathcal {QS}}} as non-logical axioms. Such a formula A is also called a theorem of Q S {\displaystyle {\mathcal {QS}}} . The notation " Q S ⊢ A {\displaystyle {\mathcal {QS}}\vdash A} " indicates A is a theorem of Q S {\displaystyle {\mathcal {QS}}} . === Interpretation of a first-order theory === An interpretation of a first-order theory provides a semantics for the formulas of the theory. An interpretation is said to satisfy a formula if the formula is true according to the interpretation. A model of a first-order theory Q S {\displaystyle {\mathcal {QS}}} is an interpretation in which every formula of Q S {\displaystyle {\mathcal {QS}}} is satisfied. 
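Over a finite structure, the truth of a first-order sentence is decidable by letting each quantifier range over the whole domain, so an interpretation can be checked mechanically for being a model. The mini-encoding below is a hypothetical sketch: the structure ({0, 1}, +, 0) with addition mod 2 and all names are assumptions introduced for illustration.

```python
# Minimal first-order model checking over a finite structure: quantifiers
# range over the (finite) domain, so truth is decidable by enumeration.
# The formula encoding and the example theory are illustrative assumptions.

DOMAIN = (0, 1)                      # carrier of the structure Z_2
def plus(x, y): return (x + y) % 2   # interpretation of the function symbol +

def term(t, env):
    """Evaluate a term: the constant '0', a variable, or ('+', t1, t2)."""
    if t == '0': return 0
    if isinstance(t, str): return env[t]
    return plus(term(t[1], env), term(t[2], env))

def sat(f, env):
    """Evaluate a first-order formula in the structure (DOMAIN, plus, 0)."""
    op = f[0]
    if op == 'eq':                   # ('eq', t1, t2)
        return term(f[1], env) == term(f[2], env)
    if op == 'not':
        return not sat(f[1], env)
    if op == 'forall':               # ('forall', x, body)
        return all(sat(f[2], {**env, f[1]: d}) for d in DOMAIN)
    if op == 'exists':               # ('exists', x, body)
        return any(sat(f[2], {**env, f[1]: d}) for d in DOMAIN)

theory = [
    ('forall', 'x', ('eq', ('+', 'x', '0'), 'x')),                   # identity
    ('forall', 'x', ('exists', 'y', ('eq', ('+', 'x', 'y'), '0'))),  # inverses
    ('forall', 'x', ('forall', 'y',
        ('eq', ('+', 'x', 'y'), ('+', 'y', 'x')))),                  # commutativity
]
# The interpretation satisfies every sentence, so it is a model of the theory.
print(all(sat(f, {}) for f in theory))   # True
```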
=== First-order theories with identity === A first-order theory Q S {\displaystyle {\mathcal {QS}}} is a first-order theory with identity if Q S {\displaystyle {\mathcal {QS}}} includes the identity relation symbol "=" and the reflexivity and substitution axiom schemes for this symbol. === Topics related to first-order theories === Compactness theorem Consistent set Deduction theorem Lindenbaum's lemma Löwenheim–Skolem theorem == Examples == One way to specify a theory is to define a set of axioms in a particular language. The theory can be taken to include just those axioms, or their logical or provable consequences, as desired. Theories obtained this way include ZFC and Peano arithmetic. A second way to specify a theory is to begin with a structure, and let the theory be the set of sentences that are satisfied by the structure. This is a method for producing complete theories through the semantic route, with examples including the set of true sentences under the structure (N, +, ×, 0, 1, =), where N is the set of natural numbers, and the set of true sentences under the structure (R, +, ×, 0, 1, =), where R is the set of real numbers. The first of these, called the theory of true arithmetic, cannot be written as the set of logical consequences of any enumerable set of axioms. The theory of (R, +, ×, 0, 1, =) was shown by Tarski to be decidable; it is the theory of real closed fields (see Decidability of first-order theories of the real numbers for more). == See also == Axiomatic system Interpretability List of first-order theories Mathematical theory == References == == Further reading == Hodges, Wilfrid (1997). A shorter model theory. Cambridge University Press. ISBN 0-521-58713-1.
Wikipedia/First-order_theory
Cirquent calculus is a proof calculus that manipulates graph-style constructs termed cirquents, as opposed to the traditional tree-style objects such as formulas or sequents. Cirquents come in a variety of forms, but they all share one main characteristic feature, making them different from the more traditional objects of syntactic manipulation. This feature is the ability to explicitly account for possible sharing of subcomponents between different components. For instance, it is possible to write an expression where two subexpressions F and E, while neither one is a subexpression of the other, still have a common occurrence of a subexpression G (as opposed to having two different occurrences of G, one in F and one in E). == Overview == The approach was introduced by G. Japaridze as an alternative proof theory capable of "taming" various nontrivial fragments of his computability logic, which had otherwise resisted all axiomatization attempts within the traditional proof-theoretic frameworks. The origin of the term “cirquent” is CIRcuit+seQUENT, as the simplest form of cirquents, while resembling circuits rather than formulas, can be thought of as collections of one-sided sequents (for instance, sequents of a given level of a Gentzen-style proof tree) where some sequents may have shared elements. The basic version of cirquent calculus was accompanied with an "abstract resource semantics" and the claim that the latter was an adequate formalization of the resource philosophy traditionally associated with linear logic. Based on that claim and the fact that the semantics induced a logic properly stronger than (affine) linear logic, Japaridze argued that linear logic was incomplete as a logic of resources. Furthermore, he argued that not only the deductive power but also the expressive power of linear logic was weak, for it, unlike cirquent calculus, failed to capture the ubiquitous phenomenon of resource sharing. 
The resource philosophy of cirquent calculus sees the approaches of linear logic and classical logic as two extremes: the former does not allow any sharing at all, while in the latter “everything is shared that can be shared”. Unlike cirquent calculus, neither approach thus permits mixed cases where some identical subformulas are shared and some not. Among the later-found applications of cirquent calculus was its use to define a semantics for purely propositional independence-friendly logic. The corresponding logic was axiomatized by W. Xu. Syntactically, cirquent calculi are deep inference systems with the unique feature of subformula-sharing. This feature has been shown to provide speedup for certain proofs. For instance, polynomial-size analytic proofs of the propositional pigeonhole principle have been constructed. Only quasipolynomial-size analytic proofs have been found for this principle in other deep inference systems. In resolution or analytic Gentzen-style systems, the pigeonhole principle is known to have only exponential-size proofs. == References == == Further reading == M. Bauer, “The computational complexity of propositional cirquent calculus”. Logical Methods in Computer Science 11 (2015), Issue 1, Paper 12, pp. 1–16. I. Mezhirov and N. Vereshchagin, “On abstract resource semantics and computability logic”. Journal of Computer and System Sciences 76 (2010), pp. 356–372. W. Xu and S. Liu, “Soundness and completeness of the cirquent calculus system CL6 for computability logic”. Logic Journal of the IGPL 20 (2012), pp. 317–330. W. Xu and S. Liu, “Cirquent calculus system CL8S versus calculus of structures system SKSg for propositional logic”. In: Quantitative Logic and Soft Computing. Guojun Wang, Bin Zhao and Yongming Li, eds. Singapore, World Scientific, 2012, pp. 144–149. W. Xu, “A cirquent calculus system with clustering and ranking”. Journal of Applied Logic 16 (2016), pp. 37–49. == External links == Media related to Cirquent calculus at Wikimedia Commons
Wikipedia/Cirquent_calculus
A metatheory or meta-theory is a theory whose subject matter is itself a theory. Analyses or descriptions of an existing theory would be considered meta-theories. For mathematics and mathematical logic, a metatheory is a mathematical theory about another mathematical theory. Meta-theoretical investigations are part of the philosophy of science. The topic of metascience is an attempt to use scientific knowledge to improve the practice of science itself. The study of metatheory became widespread during the 20th century after its application to various topics, including scientific linguistics and its concept of metalanguage. == Examples of metatheories == === Metascience === Metascience is the use of scientific method to study science itself. Metascience is an attempt to increase the quality of scientific research while reducing wasted activity; it uses research methods to study how research is done or can be improved. It has been described as "research on research", "the science of science", and "a bird's eye view of science". As stated by John Ioannidis, "Science is the best thing that has happened to human beings ... but we can do it better." In 1966, an early meta-research paper examined the statistical methods of 295 papers published in ten well-known medical journals. It found that, "in almost 73% of the reports read ... conclusions were drawn when the justification for these conclusions was invalid". Meta-research during the ensuing decades found many methodological flaws, inefficiencies, and bad practices in the research of numerous scientific topics. Many scientific studies could not be reproduced, particularly those involving medicine and the so-called soft sciences. The term "replication crisis" was coined during the early 2010s as part of an increasing awareness of the problem. Measures have been implemented to address the issues revealed by metascience.
These measures include the pre-registration of scientific studies and clinical trials as well as the founding of organizations such as CONSORT and the EQUATOR Network that issue guidelines for methods and reporting. There are continuing efforts to reduce the misuse of statistics, to eliminate perverse incentives from academia, to improve the peer review process, to reduce bias in scientific literature, and to increase the overall quality and efficiency of the scientific process. A major criticism of metatheory is that it is theory based on other theory. === Computational Metatheory === Computational metatheory is a conceptual and formal framework based on Theoretical Computer Science to reason about how theories in the sciences can emerge out of theoretical and empirical work. It is a computation-centered approach to problems such as which properties theories should have, what empirical evidence is relevant in a given scientific problem situation, and how discoveries affect the problem space. As such, it complements prevailing approaches to metatheorizing with a focus on the analysis of computational problems. === Metamathematics === Metamathematics was introduced into 20th-century philosophy through the work of the German mathematician David Hilbert, who in 1905 published a proposal for a proof of the consistency and completeness of mathematics. His hopes for the success of this proof were disappointed by the work of Kurt Gödel, who in 1931 used his incompleteness theorems to prove the goal of consistency and completeness to be unattainable. Nevertheless, his program of unsolved mathematical problems influenced mathematics for the rest of the 20th century. A metatheorem is defined as: "a statement about theorems.
It usually gives a criterion for getting a new theorem from an old one, either by changing its objects according to a rule" known as the duality principle, or by transferring it to another topic (e.g., from the theory of categories to the theory of groups) or to another context of the same topic (e.g., from linear transformations to matrices). === Metalogic === Metalogic is the study of the metatheory of logic. Whereas logic is the study of how logical systems can be used to construct valid and sound arguments, metalogic studies the properties of logical systems. Logic concerns the truths that may be derived using a logical system; metalogic concerns the truths that may be derived about the languages and systems that are used to express truths. The basic objects of metalogical study are formal languages, formal systems, and their interpretations. The study of interpretation of formal systems is the type of mathematical logic that is known as model theory, and the study of deductive systems is the type that is known as proof theory. === Metaphilosophy === Metaphilosophy is "the investigation of the nature of philosophy". Its subject matter includes the aims of philosophy, the boundaries of philosophy, and its methods. Thus, while philosophy characteristically inquires into the nature of being, the reality of objects, the possibility of knowledge, the nature of truth, and so on, metaphilosophy is the self-referential inquiry into the nature, purposes, and methods of the activity that makes these kinds of inquiries, by asking what philosophy itself is, what sorts of questions it should ask, how it might pose and answer them, and what it can achieve in doing so. Some consider it a topic prior and preparatory to philosophy, while others see it as inherently a part of philosophy or as automatically a part of philosophy, and still others adopt some combination of these views.
=== Metasociology === Metasociology, or sociology of sociology, is a topic of sociology that combines social theories with analysis of the effect of socio-historical contexts in sociological intellectual production. == See also == Meta-aesthetics Meta-anthropology Metacognition Metacommunication Metadata Metadiscourse Metaeconomics Meta-emotion Metaepistemology Metaethics Metageography Metahistory Metaknowledge Metalanguage Metalearning Metalinguistics Metamemory Metanarrative Meta-ontology Metaphysics Metapolitics Metapragmatics Metapsychology Metatheology == References == == External links == Media related to Metatheory at Wikimedia Commons Meta-theoretical Issues (2003), Lyle Flint
Wikipedia/Metatheory
In geometry, Cavalieri's principle, a modern implementation of the method of indivisibles, named after Bonaventura Cavalieri, is as follows: 2-dimensional case: Suppose two regions in a plane are included between two parallel lines in that plane. If every line parallel to these two lines intersects both regions in line segments of equal length, then the two regions have equal areas. 3-dimensional case: Suppose two regions in three-space (solids) are included between two parallel planes. If every plane parallel to these two planes intersects both regions in cross-sections of equal area, then the two regions have equal volumes. Today Cavalieri's principle is seen as an early step towards integral calculus, and while it is used in some forms, such as its generalization in Fubini's theorem and layer cake representation, results using Cavalieri's principle can often be shown more directly via integration. In the other direction, Cavalieri's principle grew out of the ancient Greek method of exhaustion, which used limits but did not use infinitesimals. == History == Cavalieri's principle was originally called the method of indivisibles, the name it was known by in Renaissance Europe. Cavalieri developed a complete theory of indivisibles, elaborated in his Geometria indivisibilibus continuorum nova quadam ratione promota (Geometry, advanced in a new way by the indivisibles of the continua, 1635) and his Exercitationes geometricae sex (Six geometrical exercises, 1647). While Cavalieri's work established the principle, in his publications he denied that the continuum was composed of indivisibles in an effort to avoid the associated paradoxes and religious controversies, and he did not use it to find previously unknown results. In the 3rd century BC, Archimedes, using a method resembling Cavalieri's principle, was able to find the volume of a sphere given the volumes of a cone and cylinder in his work The Method of Mechanical Theorems. 
In the 5th century AD, Zu Chongzhi and his son Zu Gengzhi established a similar method to find a sphere's volume. Neither approach, however, was known in early modern Europe. The transition from Cavalieri's indivisibles to Evangelista Torricelli's and John Wallis's infinitesimals was a major advance in the history of calculus. The indivisibles were entities of codimension 1, so that a plane figure was thought of as made out of an infinite number of 1-dimensional lines. Meanwhile, infinitesimals were entities of the same dimension as the figure they make up; thus, a plane figure would be made out of "parallelograms" of infinitesimal width. Applying the formula for the sum of an arithmetic progression, Wallis computed the area of a triangle by partitioning it into infinitesimal parallelograms of width 1/∞. == 2-dimensional == === Cycloids === N. Reed has shown how to find the area bounded by a cycloid by using Cavalieri's principle. A circle of radius r can roll in a clockwise direction upon a line below it, or in a counterclockwise direction upon a line above it. A point on the circle thereby traces out two cycloids. When the circle has rolled any particular distance, the angle through which it would have turned clockwise and that through which it would have turned counterclockwise are the same. The two points tracing the cycloids are therefore at equal heights. The line through them is therefore horizontal (i.e. parallel to the two lines on which the circle rolls). Consequently, each horizontal cross-section of the circle has the same length as the corresponding horizontal cross-section of the region bounded by the two arcs of cycloids. By Cavalieri's principle, the circle therefore has the same area as that region. Consider the rectangle bounding a single cycloid arch. From the definition of a cycloid, it has width 2πr and height 2r, so its area is four times the area of the circle.
To calculate the area within this rectangle that lies above the cycloid arch, bisect the rectangle at the midpoint where the arch meets the rectangle, rotate one piece by 180°, and overlay it on the other half of the rectangle. The new rectangle, of area twice that of the circle, consists of the "lens" region between two cycloids, whose area was calculated above to be the same as that of the circle, and the two regions that formed the region above the cycloid arch in the original rectangle. Thus, the region of the rectangle above a single complete arch of the cycloid has area equal to the area of the circle, and so, the area bounded by the arch is three times the area of the circle. == 3-dimensional == === Cones and pyramids === The fact that the volume of any pyramid, regardless of the shape of the base, including cones (circular base), is (1/3) × base × height, can be established by Cavalieri's principle if one knows only that it is true in one case. One may initially establish it in a single case by partitioning the interior of a triangular prism into three pyramidal components of equal volumes. One may show the equality of those three volumes by means of Cavalieri's principle. In fact, Cavalieri's principle or a similar infinitesimal argument is necessary to compute the volume of cones and even pyramids, which is essentially the content of Hilbert's third problem – polyhedral pyramids and cones cannot be cut and rearranged into a standard shape, and instead must be compared by infinite (infinitesimal) means. The ancient Greeks used various precursor techniques such as Archimedes's mechanical arguments or method of exhaustion to compute these volumes. === Paraboloids === Consider a cylinder of radius r {\displaystyle r} and height h {\displaystyle h} , circumscribing a paraboloid y = h ( x r ) 2 {\displaystyle y=h\left({\frac {x}{r}}\right)^{2}} whose apex is at the center of the bottom base of the cylinder and whose base is the top base of the cylinder.
Also consider the paraboloid y = h − h ( x r ) 2 {\displaystyle y=h-h\left({\frac {x}{r}}\right)^{2}} , with equal dimensions but with its apex and base flipped. For every height 0 ≤ y ≤ h {\displaystyle 0\leq y\leq h} , the disk-shaped cross-sectional area π ( 1 − y h r ) 2 {\displaystyle \pi \left({\sqrt {1-{\frac {y}{h}}}}\,r\right)^{2}} of the flipped paraboloid is equal to the ring-shaped cross-sectional area π r 2 − π ( y h r ) 2 {\displaystyle \pi r^{2}-\pi \left({\sqrt {\frac {y}{h}}}\,r\right)^{2}} of the cylinder part outside the inscribed paraboloid. Therefore, the volume of the flipped paraboloid is equal to the volume of the cylinder part outside the inscribed paraboloid. In other words, the volume of the paraboloid is π 2 r 2 h {\textstyle {\frac {\pi }{2}}r^{2}h} , half the volume of its circumscribing cylinder. === Spheres === If one knows that the volume of a cone is 1 3 ( base × height ) {\textstyle {\frac {1}{3}}\left({\text{base}}\times {\text{height}}\right)} , then one can use Cavalieri's principle to derive the fact that the volume of a sphere is 4 3 π r 3 {\textstyle {\frac {4}{3}}\pi r^{3}} , where r {\displaystyle r} is the radius. That is done as follows: Consider a sphere of radius r {\displaystyle r} and a cylinder of radius r {\displaystyle r} and height r {\displaystyle r} . Within the cylinder is the cone whose apex is at the center of one base of the cylinder and whose base is the other base of the cylinder. By the Pythagorean theorem, the plane located y {\displaystyle y} units above the "equator" intersects the sphere in a circle of radius r 2 − y 2 {\textstyle {\sqrt {r^{2}-y^{2}}}} and area π ( r 2 − y 2 ) {\displaystyle \pi \left(r^{2}-y^{2}\right)} . The area of the plane's intersection with the part of the cylinder that is outside of the cone is also π ( r 2 − y 2 ) {\displaystyle \pi \left(r^{2}-y^{2}\right)} . 
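The cross-section identity used in this sphere argument can be spot-checked numerically; in the sketch below (test values are our own) the slab sum also recovers the hemisphere volume (2/3)πr³:

```python
import math

# At height y the sphere's disk has area pi*(r^2 - y^2); the cylinder-minus-
# cone ring has area pi*r^2 - pi*y^2, since the cone's radius at height y is y.
r = 3.0
for y in [0.0, 0.5, 1.7, 2.9]:
    sphere_disk = math.pi * (r**2 - y**2)
    ring = math.pi * r**2 - math.pi * y**2
    assert abs(sphere_disk - ring) < 1e-12

# Summing the disk areas over thin slabs gives the hemisphere volume.
n = 100000
dy = r / n
hemisphere = sum(math.pi * (r**2 - ((i + 0.5) * dy) ** 2) * dy for i in range(n))
assert abs(hemisphere - (2 / 3) * math.pi * r**3) < 1e-6
```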
As can be seen, the area of the circle defined by the intersection with the sphere of a horizontal plane located at any height y {\displaystyle y} equals the area of the intersection of that plane with the part of the cylinder that is "outside" of the cone; thus, applying Cavalieri's principle, it could be said that the volume of the half sphere equals the volume of the part of the cylinder that is "outside" the cone. The aforementioned volume of the cone is 1 3 {\textstyle {\frac {1}{3}}} of the volume of the cylinder, thus the volume outside of the cone is 2 3 {\textstyle {\frac {2}{3}}} the volume of the cylinder. Therefore the volume of the upper half of the sphere is 2 3 {\textstyle {\frac {2}{3}}} of the volume of the cylinder. The volume of the cylinder is base × height = π r 2 ⋅ r = π r 3 {\displaystyle {\text{base}}\times {\text{height}}=\pi r^{2}\cdot r=\pi r^{3}} ("Base" is in units of area; "height" is in units of distance. Area × distance = volume.) Therefore the volume of the upper half-sphere is 2 3 π r 3 {\textstyle {\frac {2}{3}}\pi r^{3}} and that of the whole sphere is 4 3 π r 3 {\textstyle {\frac {4}{3}}\pi r^{3}} . === The napkin ring problem === In what is called the napkin ring problem, one shows by Cavalieri's principle that when a hole is drilled straight through the centre of a sphere where the remaining band has height h {\displaystyle h} , the volume of the remaining material surprisingly does not depend on the size of the sphere. The cross-section of the remaining ring is a plane annulus, whose area is the difference between the areas of two circles. By the Pythagorean theorem, the area of one of the two circles is π × ( r 2 − y 2 ) {\displaystyle \pi \times (r^{2}-y^{2})} , where r {\displaystyle r} is the sphere's radius and y {\displaystyle y} is the distance from the plane of the equator to the cutting plane, and that of the other is π × ( r 2 − ( h 2 ) 2 ) {\textstyle \pi \times \left(r^{2}-\left({\frac {h}{2}}\right)^{2}\right)} . 
When these are subtracted, the r 2 {\displaystyle r^{2}} cancels; hence the lack of dependence of the bottom-line answer upon r {\displaystyle r} . == Generalisation to measures == Let μ {\displaystyle \mu } be a measure on Ω ⊂ R N {\displaystyle \Omega \subset \mathbb {R} ^{N}} . Then Cavalieri's principle can be transcribed for integrable f : Ω → R + {\displaystyle f\colon \Omega \to \mathbb {R} ^{+}} as ∫ Ω f d μ = ∫ 0 ∞ μ ( { x ∈ Ω : f ( x ) > t } ) d t . {\displaystyle \int _{\Omega }f\,\mathrm {d} \mu =\int _{0}^{\infty }\mu {\bigl (}\{\,x\in \Omega :f(x)>t\,\}{\bigr )}\,\mathrm {d} t\;.} For a function f {\displaystyle f} on Ω {\displaystyle \Omega } with values in R {\displaystyle \mathbb {R} } , note that it can be rewritten as the difference of two positive functions f = f + − f − {\displaystyle f=f^{+}-f^{-}} , where f + {\displaystyle f^{+}} and f − {\displaystyle f^{-}} denote the positive and negative parts of f {\displaystyle f} respectively. == See also == Fubini's theorem (Cavalieri's principle is a particular case of Fubini's theorem) == References == == External links == Weisstein, Eric W. "Cavalieri's Principle". MathWorld. (in German) Prinzip von Cavalieri Cavalieri Integration
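As a numerical illustration of the layer-cake formula in the section above, take μ to be Lebesgue measure on Ω = [0, 1] and f(x) = x², so that μ({x : f(x) > t}) = 1 − √t for 0 ≤ t < 1; the function choice and step counts are our own:

```python
import math

# Both sides of the layer-cake formula equal 1/3 for f(x) = x^2 on [0, 1].
def lhs(n=100000):                      # integral of f over [0, 1]
    dx = 1.0 / n
    return sum(((i + 0.5) * dx) ** 2 * dx for i in range(n))

def rhs(n=100000):                      # integral of mu({f > t}) over t
    dt = 1.0 / n
    return sum((1 - math.sqrt((i + 0.5) * dt)) * dt for i in range(n))

assert abs(lhs() - 1 / 3) < 1e-6
assert abs(rhs() - 1 / 3) < 1e-4
```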
Wikipedia/Method_of_indivisibles
The Book of Lemmas or Book of Assumptions (Arabic Maʾkhūdhāt Mansūba ilā Arshimīdis) is a book attributed to Archimedes by Thābit ibn Qurra, though the authorship of the book is questionable. It consists of fifteen propositions (lemmas) on circles. == History == === Translations === The Book of Lemmas was first introduced in Arabic by Thābit ibn Qurra; he attributed the work to Archimedes. A translation from Arabic into Latin by John Greaves and revised by Samuel Foster (c. 1650) was published in 1659 as Lemmata Archimedis. Another Latin translation by Abraham Ecchellensis and edited by Giovanni A. Borelli was published in 1661 under the name Liber Assumptorum. T. L. Heath translated Heiberg's Latin work into English in his The Works of Archimedes. A more recently discovered manuscript copy of Thābit ibn Qurra's Arabic translation was translated into English by Emre Coşkun in 2018. === Authorship === The original authorship of the Book of Lemmas has been in question because in proposition four, the book refers to Archimedes in the third person; however, it has been suggested that it may have been added by the translator. Another possibility is that the Book of Lemmas may be a collection of propositions by Archimedes later collected by a Greek writer. == New geometrical figures == The Book of Lemmas introduces several new geometrical figures. === Arbelos === Archimedes first introduced the arbelos (shoemaker's knife) in proposition four of his book: If AB be the diameter of a semicircle and N any point on AB, and if semicircles be described within the first semicircle and having AN, BN as diameters respectively, the figure included between the circumferences of the three semicircles is "what Archimedes called αρβηλος"; and its area is equal to the circle on PN as diameter, where PN is perpendicular to AB and meets the original semicircle in P. The figure is used in propositions four through eight.
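The arbelos proposition lends itself to a quick coordinate check: writing AN = a and NB = b, the altitude satisfies PN² = a·b, and the arbelos area equals that of the circle on PN as diameter. The sketch below (names and sample values are our own) verifies this for a few cases:

```python
import math

# Arbelos area = big semicircle minus the two small semicircles; it equals
# the circle on PN as diameter, where PN = sqrt(a*b) is the geometric mean.
def arbelos_area(a, b):
    semi = lambda d: math.pi * (d / 2) ** 2 / 2   # semicircle on diameter d
    return semi(a + b) - semi(a) - semi(b)

for a, b in [(1.0, 2.0), (0.3, 5.0), (4.0, 4.0)]:
    pn = math.sqrt(a * b)
    circle_on_pn = math.pi * (pn / 2) ** 2
    assert abs(arbelos_area(a, b) - circle_on_pn) < 1e-12
```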
In proposition five, Archimedes introduces the Archimedes' twin circles, and in proposition eight, he makes use of what would become the Pappus chain, formally introduced by Pappus of Alexandria. === Salinon === Archimedes first introduced the salinon (salt cellar) in proposition fourteen of his book: Let ACB be a semicircle on AB as diameter, and let AD, BE be equal lengths measured along AB from A, B respectively. On AD, BE as diameters describe semicircles on the side towards C, and on DE as diameter a semicircle on the opposite side. Let the perpendicular to AB through O, the centre of the first semicircle, meet the opposite semicircles in C, F respectively. Then shall the area of the figure bounded by the circumferences of all the semicircles be equal to the area of the circle on CF as diameter. Archimedes proved that the salinon and the circle are equal in area. == Propositions == If two circles touch at A, and if CD, EF be parallel diameters in them, ADF is a straight line. Let AB be the diameter of a semicircle, and let the tangents to it at B and at any other point D on it meet in T. If now DE be drawn perpendicular to AB, and if AT, DE meet in F, then DF = FE. Let P be any point on a segment of a circle whose base is AB, and let PN be perpendicular to AB. Take D on AB so that AN = ND. If now PQ be an arc equal to the arc PA, and BQ be joined, then BQ, BD shall be equal. If AB be the diameter of a semicircle and N any point on AB, and if semicircles be described within the first semicircle and having AN, BN as diameters respectively, the figure included between the circumferences of the three semicircles is "what Archimedes called αρβηλος"; and its area is equal to the circle on PN as diameter, where PN is perpendicular to AB and meets the original semicircle in P. Let AB be the diameter of a semicircle, C any point on AB, and CD perpendicular to it, and let semicircles be described within the first semicircle and having AC, CB as diameters.
Then if two circles be drawn touching CD on different sides and each touching two of the semicircles, the circles so drawn will be equal. Let AB, the diameter of a semicircle, be divided at C so that AC = 3/2 × CB [or in any ratio]. Describe semicircles within the first semicircle and on AC, CB as diameters, and suppose a circle drawn touching all three semicircles. If GH be the diameter of this circle, to find the relation between GH and AB. If circles are circumscribed about and inscribed in a square, the circumscribed circle is double of the inscribed circle. If AB be any chord of a circle whose centre is O, and if AB be produced to C so that BC is equal to the radius; if further CO meets the circle in D and be produced to meet the circle the second time in E, the arc AE will be equal to three times the arc BD. If in a circle two chords AB, CD which do not pass through the centre intersect at right angles, then (arc AD) + (arc CB) = (arc AC) + (arc DB). Suppose that TA, TB are two tangents to a circle, while TC cuts it. Let BD be the chord through B parallel to TC, and let AD meet TC in E. Then, if EH be drawn perpendicular to BD, it will bisect it in H. If two chords AB, CD in a circle intersect at right angles in a point O, not being the centre, then AO² + BO² + CO² + DO² = (diameter)². If AB be the diameter of a semicircle, and TP, TQ the tangents to it from any point T, and if AQ, BP be joined meeting in R, then TR is perpendicular to AB. If a diameter AB of a circle meet any chord CD, not a diameter, in E, and if AM, BN be drawn perpendicular to CD, then CN = DM. Let ACB be a semicircle on AB as diameter, and let AD, BE be equal lengths measured along AB from A, B respectively. On AD, BE as diameters describe semicircles on the side towards C, and on DE as diameter a semicircle on the opposite side. Let the perpendicular to AB through O, the centre of the first semicircle, meet the opposite semicircles in C, F respectively.
Then shall the area of the figure bounded by the circumferences of all the semicircles be equal to the area of the circle on CF as diameter. Let AB be the diameter of a circle, AC a side of an inscribed regular pentagon, D the middle point of the arc AC. Join CD and produce it to meet BA produced in E; join AC, DB meeting in F, and draw FM perpendicular to AB. Then EM = (radius of circle). == References ==
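Proposition fourteen, the salinon, can likewise be spot-checked numerically. In the sketch below (our notation, not Archimedes's), AB = 2R and AD = BE = a, so that DE = 2R − 2a and CF = 2R − a:

```python
import math

# Salinon area = big semicircle + semicircle on DE - the two semicircles on
# AD and BE; it equals the area of the circle on CF = 2R - a as diameter.
def salinon_area(R, a):
    semi = lambda d: math.pi * d * d / 8          # semicircle on diameter d
    return semi(2 * R) + semi(2 * (R - a)) - 2 * semi(a)

def circle_on_cf(R, a):
    return math.pi * (2 * R - a) ** 2 / 4

for R, a in [(1.0, 0.25), (3.0, 1.0), (2.0, 2.0)]:
    assert abs(salinon_area(R, a) - circle_on_cf(R, a)) < 1e-12
```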
Wikipedia/Book_of_Lemmas
In mathematics, and specifically in measure theory, equivalence is a notion of two measures being qualitatively similar. Specifically, the two measures agree on which events have measure zero. == Definition == Let μ {\displaystyle \mu } and ν {\displaystyle \nu } be two measures on the measurable space ( X , A ) , {\displaystyle (X,{\mathcal {A}}),} and let N μ := { A ∈ A ∣ μ ( A ) = 0 } {\displaystyle {\mathcal {N}}_{\mu }:=\{A\in {\mathcal {A}}\mid \mu (A)=0\}} and N ν := { A ∈ A ∣ ν ( A ) = 0 } {\displaystyle {\mathcal {N}}_{\nu }:=\{A\in {\mathcal {A}}\mid \nu (A)=0\}} be the sets of μ {\displaystyle \mu } -null sets and ν {\displaystyle \nu } -null sets, respectively. Then the measure ν {\displaystyle \nu } is said to be absolutely continuous with respect to μ {\displaystyle \mu } if and only if N ν ⊇ N μ . {\displaystyle {\mathcal {N}}_{\nu }\supseteq {\mathcal {N}}_{\mu }.} This is denoted as ν ≪ μ . {\displaystyle \nu \ll \mu .} The two measures are called equivalent if and only if μ ≪ ν {\displaystyle \mu \ll \nu } and ν ≪ μ , {\displaystyle \nu \ll \mu ,} which is denoted as μ ∼ ν . {\displaystyle \mu \sim \nu .} That is, two measures are equivalent if they satisfy N μ = N ν . {\displaystyle {\mathcal {N}}_{\mu }={\mathcal {N}}_{\nu }.} == Examples == === On the real line === Define the two measures on the real line as μ ( A ) = ∫ A 1 [ 0 , 1 ] ( x ) d x {\displaystyle \mu (A)=\int _{A}\mathbf {1} _{[0,1]}(x)\mathrm {d} x} ν ( A ) = ∫ A x 2 1 [ 0 , 1 ] ( x ) d x {\displaystyle \nu (A)=\int _{A}x^{2}\mathbf {1} _{[0,1]}(x)\mathrm {d} x} for all Borel sets A. 
{\displaystyle A.} Then μ {\displaystyle \mu } and ν {\displaystyle \nu } are equivalent, since all sets outside of [ 0 , 1 ] {\displaystyle [0,1]} have μ {\displaystyle \mu } and ν {\displaystyle \nu } measure zero, and a set inside [ 0 , 1 ] {\displaystyle [0,1]} is a μ {\displaystyle \mu } -null set or a ν {\displaystyle \nu } -null set exactly when it is a null set with respect to Lebesgue measure. === Abstract measure space === Consider a measurable space ( X , A ) {\displaystyle (X,{\mathcal {A}})} and let μ {\displaystyle \mu } be the counting measure, so μ ( A ) = | A | , {\displaystyle \mu (A)=|A|,} where | A | {\displaystyle |A|} is the cardinality of the set A. So the counting measure has only one null set, which is the empty set. That is, N μ = { ∅ } . {\displaystyle {\mathcal {N}}_{\mu }=\{\varnothing \}.} So by the second definition, any other measure ν {\displaystyle \nu } is equivalent to the counting measure if and only if it also has just the empty set as the only ν {\displaystyle \nu } -null set. == Supporting measures == A measure μ {\displaystyle \mu } is called a supporting measure of a measure ν {\displaystyle \nu } if μ {\displaystyle \mu } is σ {\displaystyle \sigma } -finite and ν {\displaystyle \nu } is equivalent to μ . {\displaystyle \mu .} == References ==
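On a finite space, the definitions above reduce to comparing which points carry zero mass. The following sketch is purely illustrative (the function names are our own, not a standard library API):

```python
# A measure on a finite space is modelled as a dict of point masses; two
# measures are equivalent exactly when they vanish on the same points.
def null_points(measure):
    return {x for x, m in measure.items() if m == 0}

def absolutely_continuous(nu, mu):      # nu << mu: every mu-null set is nu-null
    return null_points(mu) <= null_points(nu)

def equivalent(mu, nu):
    return absolutely_continuous(mu, nu) and absolutely_continuous(nu, mu)

mu = {"a": 1.0, "b": 0.5, "c": 0.0}
nu = {"a": 2.0, "b": 0.1, "c": 0.0}
counting = {"a": 1, "b": 1, "c": 1}
assert equivalent(mu, nu)               # same null set {"c"}
assert not equivalent(mu, counting)     # the counting measure has no null points
```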
Wikipedia/Equivalence_(measure_theory)
In mathematics, nonstandard calculus is the modern application of infinitesimals, in the sense of nonstandard analysis, to infinitesimal calculus. It provides a rigorous justification for some arguments in calculus that were previously considered merely heuristic. Non-rigorous calculations with infinitesimals were widely used before Karl Weierstrass sought to replace them with the (ε, δ)-definition of limit starting in the 1870s. For almost one hundred years thereafter, mathematicians such as Richard Courant viewed infinitesimals as being naive and vague or meaningless. Contrary to such views, Abraham Robinson showed in 1960 that infinitesimals are precise, clear, and meaningful, building upon work by Edwin Hewitt and Jerzy Łoś. According to Howard Keisler, "Robinson solved a three hundred year old problem by giving a precise treatment of infinitesimals. Robinson's achievement will probably rank as one of the major mathematical advances of the twentieth century." == History == The history of nonstandard calculus began with the use of infinitely small quantities, called infinitesimals in calculus. The use of infinitesimals can be found in the foundations of calculus independently developed by Gottfried Leibniz and Isaac Newton starting in the 1660s. John Wallis refined earlier techniques of indivisibles of Cavalieri and others by exploiting an infinitesimal quantity he denoted 1 ∞ {\displaystyle {\tfrac {1}{\infty }}} in area calculations, preparing the ground for integral calculus. They drew on the work of such mathematicians as Pierre de Fermat, Isaac Barrow and René Descartes. In early calculus the use of infinitesimal quantities was criticized by a number of authors, most notably Michel Rolle and Bishop Berkeley in his book The Analyst. Several mathematicians, including Maclaurin and d'Alembert, advocated the use of limits. 
Augustin Louis Cauchy developed a versatile spectrum of foundational approaches, including a definition of continuity in terms of infinitesimals and a (somewhat imprecise) prototype of an ε, δ argument in working with differentiation. Karl Weierstrass formalized the concept of limit in the context of a (real) number system without infinitesimals. Following the work of Weierstrass, it eventually became common to base calculus on ε, δ arguments instead of infinitesimals. This approach formalized by Weierstrass came to be known as the standard calculus. After many years of the infinitesimal approach to calculus having fallen into disuse other than as an introductory pedagogical tool, use of infinitesimal quantities was finally given a rigorous foundation by Abraham Robinson in the 1960s. Robinson's approach is called nonstandard analysis to distinguish it from the standard use of limits. This approach used technical machinery from mathematical logic to create a theory of hyperreal numbers that interpret infinitesimals in a manner that allows a Leibniz-like development of the usual rules of calculus. An alternative approach, developed by Edward Nelson, finds infinitesimals on the ordinary real line itself, and involves a modification of the foundational setting by extending ZFC through the introduction of a new unary predicate "standard". 
== Motivation == To calculate the derivative f ′ {\displaystyle f'} of the function y = f ( x ) = x 2 {\displaystyle y=f(x)=x^{2}} at x, both approaches agree on the algebraic manipulations: Δ y Δ x = ( x + Δ x ) 2 − x 2 Δ x = 2 x Δ x + ( Δ x ) 2 Δ x = 2 x + Δ x ≈ 2 x {\displaystyle {\frac {\Delta y}{\Delta x}}={\frac {(x+\Delta x)^{2}-x^{2}}{\Delta x}}={\frac {2x\Delta x+(\Delta x)^{2}}{\Delta x}}=2x+\Delta x\approx 2x} This becomes a computation of the derivatives using the hyperreals if Δ x {\displaystyle \Delta x} is interpreted as an infinitesimal and the symbol " ≈ {\displaystyle \approx } " is the relation "is infinitely close to". In order to make f ' a real-valued function, the final term Δ x {\displaystyle \Delta x} is dispensed with. In the standard approach using only real numbers, that is done by taking the limit as Δ x {\displaystyle \Delta x} tends to zero. In the hyperreal approach, the quantity Δ x {\displaystyle \Delta x} is taken to be an infinitesimal, a nonzero number that is closer to 0 than to any nonzero real. The manipulations displayed above then show that Δ y / Δ x {\displaystyle \Delta y/\Delta x} is infinitely close to 2x, so the derivative of f at x is then 2x. Discarding the "error term" is accomplished by an application of the standard part function. Dispensing with infinitesimal error terms was historically considered paradoxical by some writers, most notably George Berkeley. Once the hyperreal number system (an infinitesimal-enriched continuum) is in place, one has successfully incorporated a large part of the technical difficulties at the foundational level. Thus, the epsilon, delta techniques that some believe to be the essence of analysis can be implemented once and for all at the foundational level, and the students needn't be "dressed to perform multiple-quantifier logical stunts on pretense of being taught infinitesimal calculus", to quote a recent study. 
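The manipulation above can be imitated mechanically with dual numbers, in which a formal ε with ε² = 0 stands in for the infinitesimal Δx and "taking the standard part" becomes reading off the coefficients. This is a simplified stand-in for, not a construction of, the hyperreals:

```python
# Minimal dual-number arithmetic: a value re + eps*(infinitesimal), where the
# product rule drops the (infinitesimal)^2 term, just as st() discards it.
class Dual:
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps
    def __add__(self, o):
        return Dual(self.re + o.re, self.eps + o.eps)
    def __mul__(self, o):
        return Dual(self.re * o.re, self.re * o.eps + self.eps * o.re)

def derivative(f, x):
    # st((f(x + dx) - f(x)) / dx) becomes "read off the eps coefficient"
    return f(Dual(x, 1.0)).eps

square = lambda t: t * t
assert derivative(square, 3.0) == 6.0   # matches f'(x) = 2x
```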
More specifically, the basic concepts of calculus such as continuity, derivative, and integral can be defined using infinitesimals without reference to epsilon, delta. == Keisler's textbook == Keisler's Elementary Calculus: An Infinitesimal Approach defines continuity on page 125 in terms of infinitesimals, to the exclusion of epsilon, delta methods. The derivative is defined on page 45 using infinitesimals rather than an epsilon-delta approach. The integral is defined on page 183 in terms of infinitesimals. Epsilon, delta definitions are introduced on page 282. == Definition of derivative == The hyperreals can be constructed in the framework of Zermelo–Fraenkel set theory, the standard axiomatisation of set theory used elsewhere in mathematics. To give an intuitive idea for the hyperreal approach, note that, naively speaking, nonstandard analysis postulates the existence of positive numbers ε which are infinitely small, meaning that ε is smaller than any standard positive real, yet greater than zero. Every real number x is surrounded by an infinitesimal "cloud" of hyperreal numbers infinitely close to it. To define the derivative of f at a standard real number x in this approach, one no longer needs an infinite limiting process as in standard calculus. Instead, one sets f ′ ( x ) = s t ( f ∗ ( x + ε ) − f ∗ ( x ) ε ) , {\displaystyle f'(x)=\mathrm {st} \left({\frac {f^{*}(x+\varepsilon )-f^{*}(x)}{\varepsilon }}\right),} where st is the standard part function, yielding the real number infinitely close to the hyperreal argument of st, and f ∗ {\displaystyle f^{*}} is the natural extension of f {\displaystyle f} to the hyperreals. == Continuity == A real function f is continuous at a standard real number x if for every hyperreal x' infinitely close to x, the value f(x' ) is also infinitely close to f(x). This captures Cauchy's definition of continuity as presented in his 1821 textbook Cours d'Analyse, p. 34. 
Here to be precise, f would have to be replaced by its natural hyperreal extension usually denoted f*. Using the notation ≈ {\displaystyle \approx } for the relation of being infinitely close as above, the definition can be extended to arbitrary (standard or nonstandard) points as follows: A function f is microcontinuous at x if whenever x ′ ≈ x {\displaystyle x'\approx x} , one has f ∗ ( x ′ ) ≈ f ∗ ( x ) {\displaystyle f^{*}(x')\approx f^{*}(x)} Here the point x' is assumed to be in the domain of (the natural extension of) f. The above requires fewer quantifiers than the (ε, δ)-definition familiar from standard elementary calculus: f is continuous at x if for every ε > 0, there exists a δ > 0 such that for every x' , whenever |x − x' | < δ, one has |f(x) − f(x' )| < ε. == Uniform continuity == A function f on an interval I is uniformly continuous if its natural extension f* in I* has the following property: for every pair of hyperreals x and y in I*, if x ≈ y {\displaystyle x\approx y} then f ∗ ( x ) ≈ f ∗ ( y ) {\displaystyle f^{*}(x)\approx f^{*}(y)} . In terms of microcontinuity defined in the previous section, this can be stated as follows: a real function is uniformly continuous if its natural extension f* is microcontinuous at every point of the domain of f*. This definition has a reduced quantifier complexity when compared with the standard (ε, δ)-definition. Namely, the epsilon-delta definition of uniform continuity requires four quantifiers, while the infinitesimal definition requires only two quantifiers. It has the same quantifier complexity as the definition of uniform continuity in terms of sequences in standard calculus, which however is not expressible in the first-order language of the real numbers. The hyperreal definition can be illustrated by the following three examples. 
Example 1: a function f is uniformly continuous on the semi-open interval (0,1], if and only if its natural extension f* is microcontinuous (in the sense of the formula above) at every positive infinitesimal, in addition to continuity at the standard points of the interval. Example 2: a function f is uniformly continuous on the semi-open interval [0,∞) if and only if it is continuous at the standard points of the interval, and in addition, the natural extension f* is microcontinuous at every positive infinite hyperreal point. Example 3: similarly, the failure of uniform continuity for the squaring function x 2 {\displaystyle x^{2}} is due to the absence of microcontinuity at a single infinite hyperreal point. Concerning quantifier complexity, the following remarks were made by Kevin Houston: The number of quantifiers in a mathematical statement gives a rough measure of the statement’s complexity. Statements involving three or more quantifiers can be difficult to understand. This is the main reason why it is hard to understand the rigorous definitions of limit, convergence, continuity and differentiability in analysis as they have many quantifiers. In fact, it is the alternation of the ∀ {\displaystyle \forall } and ∃ {\displaystyle \exists } that causes the complexity. Andreas Blass wrote as follows: Often ... the nonstandard definition of a concept is simpler than the standard definition (both intuitively simpler and simpler in a technical sense, such as quantifiers over lower types or fewer alternations of quantifiers). == Compactness == A set A is compact if and only if its natural extension A* has the following property: every point in A* is infinitely close to a point of A. Thus, the open interval (0,1) is not compact because its natural extension contains positive infinitesimals which are not infinitely close to any positive real number. 
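Example 3 can be echoed numerically, with a large ordinary number standing in for an infinite hyperreal (the cutoff values below are arbitrary choices of our own):

```python
# x = N and y = N + 1/N become arbitrarily close as N grows, yet
# f(y) - f(x) = 2 + 1/N^2 stays near 2 -- it is never "infinitesimal".
f = lambda x: x * x
for N in [10.0**3, 10.0**6]:
    x, y = N, N + 1 / N
    assert y - x < 1.1 / N          # the points are close (up to rounding)
    assert f(y) - f(x) > 1.9        # but the values stay far apart
```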
== Heine–Cantor theorem == The fact that a continuous function on a compact interval I is necessarily uniformly continuous (the Heine–Cantor theorem) admits a succinct hyperreal proof. Let x, y be hyperreals in the natural extension I* of I. Since I is compact, both st(x) and st(y) belong to I. If x and y were infinitely close, then by the triangle inequality, they would have the same standard part c = st ⁡ ( x ) = st ⁡ ( y ) . {\displaystyle c=\operatorname {st} (x)=\operatorname {st} (y).} Since the function is assumed continuous at c, f ( x ) ≈ f ( c ) ≈ f ( y ) , {\displaystyle f(x)\approx f(c)\approx f(y),} and therefore f(x) and f(y) are infinitely close, proving uniform continuity of f. == Why is the squaring function not uniformly continuous? == Let f(x) = x2 defined on R {\displaystyle \mathbb {R} } . Let N ∈ R ∗ {\displaystyle N\in \mathbb {R} ^{*}} be an infinite hyperreal. The hyperreal number N + 1 N {\displaystyle N+{\tfrac {1}{N}}} is infinitely close to N. Meanwhile, the difference f ( N + 1 N ) − f ( N ) = N 2 + 2 + 1 N 2 − N 2 = 2 + 1 N 2 {\displaystyle f(N+{\tfrac {1}{N}})-f(N)=N^{2}+2+{\tfrac {1}{N^{2}}}-N^{2}=2+{\tfrac {1}{N^{2}}}} is not infinitesimal. Therefore, f* fails to be microcontinuous at the hyperreal point N. Thus, the squaring function is not uniformly continuous, according to the definition in uniform continuity above. A similar proof may be given in the standard setting (Fitzpatrick 2006, Example 3.15). == Example: Dirichlet function == Consider the Dirichlet function I Q ( x ) := { 1 if x is rational , 0 if x is irrational . {\displaystyle I_{Q}(x):={\begin{cases}1&{\text{ if }}x{\text{ is rational}},\\0&{\text{ if }}x{\text{ is irrational}}.\end{cases}}} It is well known that, under the standard definition of continuity, the function is discontinuous at every point. Let us check this in terms of the hyperreal definition of continuity above, for instance let us show that the Dirichlet function is not continuous at π. 
Consider the continued fraction approximation an of π. Now let the index n be an infinite hypernatural number. By the transfer principle, the natural extension of the Dirichlet function takes the value 1 at an. Note that the hyperrational point an is infinitely close to π. Thus the natural extension of the Dirichlet function takes different values (0 and 1) at these two infinitely close points, and therefore the Dirichlet function is not continuous at π. == Limit == While the thrust of Robinson's approach is that one can dispense with the approach using multiple quantifiers, the notion of limit can be easily recaptured in terms of the standard part function st, namely lim x → a f ( x ) = L {\displaystyle \lim _{x\to a}f(x)=L} if and only if whenever the difference x − a is infinitesimal, the difference f(x) − L is infinitesimal, as well, or in formulas: if st(x) = a then st(f(x)) = L, cf. (ε, δ)-definition of limit. == Limit of sequence == Given a sequence of real numbers { x n ∣ n ∈ N } {\displaystyle \{x_{n}\mid n\in \mathbb {N} \}} , a number L ∈ R {\displaystyle L\in \mathbb {R} } is the limit of the sequence, L = lim n → ∞ x n {\displaystyle L=\lim _{n\to \infty }x_{n}} , if and only if for every infinite hypernatural n, st(xn) = L (here the extension principle is used to define xn for every hyperinteger n). This definition has no quantifier alternations. The standard (ε, δ)-style definition, on the other hand, does have quantifier alternations: L = lim n → ∞ x n ⟺ ∀ ε > 0 , ∃ N ∈ N , ∀ n ∈ N : n > N → | x n − L | < ε . {\displaystyle L=\lim _{n\to \infty }x_{n}\Longleftrightarrow \forall \varepsilon >0\;,\exists N\in \mathbb {N} \;,\forall n\in \mathbb {N} :n>N\rightarrow |x_{n}-L|<\varepsilon .} == Extreme value theorem == To show that a real continuous function f on [0,1] has a maximum, let N be an infinite hyperinteger. The interval [0, 1] has a natural hyperreal extension. The function f is also naturally extended to hyperreals between 0 and 1.
Consider the partition of the hyperreal interval [0,1] into N subintervals of equal infinitesimal length 1/N, with partition points xi = i /N as i "runs" from 0 to N. In the standard setting (when N is finite), a point with the maximal value of f can always be chosen among the N+1 points xi, by induction. Hence, by the transfer principle, there is a hyperinteger i0 such that 0 ≤ i0 ≤ N and f ( x i 0 ) ≥ f ( x i ) {\displaystyle f(x_{i_{0}})\geq f(x_{i})} for all i = 0, …, N (an alternative explanation is that every hyperfinite set admits a maximum). Consider the real point c = s t ( x i 0 ) {\displaystyle c={\rm {st}}(x_{i_{0}})} where st is the standard part function. An arbitrary real point x lies in a suitable sub-interval of the partition, namely x ∈ [ x i , x i + 1 ] {\displaystyle x\in [x_{i},x_{i+1}]} , so that st(xi) = x. Applying st to the inequality f ( x i 0 ) ≥ f ( x i ) {\displaystyle f(x_{i_{0}})\geq f(x_{i})} gives s t ( f ( x i 0 ) ) ≥ s t ( f ( x i ) ) {\displaystyle {\rm {st}}(f(x_{i_{0}}))\geq {\rm {st}}(f(x_{i}))} . By continuity of f, s t ( f ( x i 0 ) ) = f ( s t ( x i 0 ) ) = f ( c ) {\displaystyle {\rm {st}}(f(x_{i_{0}}))=f({\rm {st}}(x_{i_{0}}))=f(c)} . Hence f(c) ≥ f(x), for all x, proving c to be a maximum of the real function f. == Intermediate value theorem == As another illustration of the power of Robinson's approach, a short proof of the intermediate value theorem (Bolzano's theorem) can be given using infinitesimals. Let f be a continuous function on [a,b] such that f(a)<0 while f(b)>0. Then there exists a point c in [a,b] such that f(c)=0. The proof proceeds as follows. Let N be an infinite hyperinteger. Consider a partition of [a,b] into N intervals of equal length, with partition points xi as i runs from 0 to N. Consider the collection I of indices such that f(xi)>0. Let i0 be the least element in I (such an element exists by the transfer principle, as I is a hyperfinite set).
Then the real number c = s t ( x i 0 ) {\displaystyle c=\mathrm {st} (x_{i_{0}})} is the desired zero of f. Such a proof reduces the quantifier complexity of a standard proof of the IVT. == Basic theorems == If f is a real valued function defined on an interval [a, b], then the transfer operator applied to f, denoted by *f, is an internal, hyperreal-valued function defined on the hyperreal interval [*a, *b]. Theorem: Let f be a real-valued function defined on an interval [a, b]. Then f is differentiable at a < x < b if and only if for every non-zero infinitesimal h, the value Δ h f := st ⁡ [ ∗ f ] ( x + h ) − [ ∗ f ] ( x ) h {\displaystyle \Delta _{h}f:=\operatorname {st} {\frac {[{}^{*}\!f](x+h)-[{}^{*}\!f](x)}{h}}} is independent of h. In that case, the common value is the derivative of f at x. This fact follows from the transfer principle of nonstandard analysis and overspill. Note that a similar result holds for differentiability at the endpoints a, b provided the sign of the infinitesimal h is suitably restricted. For the second theorem, the Riemann integral is defined as the limit, if it exists, of a directed family of Riemann sums; these are sums of the form ∑ k = 0 n − 1 f ( ξ k ) ( x k + 1 − x k ) {\displaystyle \sum _{k=0}^{n-1}f(\xi _{k})(x_{k+1}-x_{k})} where a = x 0 ≤ ξ 0 ≤ x 1 ≤ … x n − 1 ≤ ξ n − 1 ≤ x n = b . {\displaystyle a=x_{0}\leq \xi _{0}\leq x_{1}\leq \ldots x_{n-1}\leq \xi _{n-1}\leq x_{n}=b.} Such a sequence of values is called a partition or mesh and sup k ( x k + 1 − x k ) {\displaystyle \sup _{k}(x_{k+1}-x_{k})} the width of the mesh. In the definition of the Riemann integral, the limit of the Riemann sums is taken as the width of the mesh goes to 0. Theorem: Let f be a real-valued function defined on an interval [a, b]. 
Then f is Riemann-integrable on [a, b] if and only if for every internal mesh of infinitesimal width, the quantity

S_M = st( Σ_{k=0}^{n−1} *f(ξ_k)(x_{k+1} − x_k) )

is independent of the mesh. In this case, the common value is the Riemann integral of f over [a, b].

== Applications ==

One immediate application is an extension of the standard definitions of differentiation and integration to internal functions on intervals of hyperreal numbers. An internal hyperreal-valued function f on [a, b] is S-differentiable at x provided

Δ_h f = st( ( f(x + h) − f(x) ) / h )

exists and is independent of the infinitesimal h; the value is the S-derivative at x.

Theorem: Suppose f is S-differentiable at every point of [a, b], where b − a is a bounded hyperreal. Suppose furthermore that |f′(x)| ≤ M for a ≤ x ≤ b. Then for some infinitesimal ε,

|f(b) − f(a)| ≤ M(b − a) + ε.

To prove this, let N be a nonstandard natural number. Divide the interval [a, b] into N subintervals by placing N − 1 equally spaced intermediate points: a = x_0 < x_1 < ⋯ < x_{N−1} < x_N = b. Then

|f(b) − f(a)| ≤ Σ_{k=0}^{N−1} |f(x_{k+1}) − f(x_k)| ≤ Σ_{k=0}^{N−1} (|f′(x_k)| + ε_k)|x_{k+1} − x_k|,

where each ε_k is infinitesimal by S-differentiability. Now the maximum of any internal set of infinitesimals is infinitesimal. Thus all the ε_k are dominated by a single infinitesimal ε.
Therefore,

|f(b) − f(a)| ≤ Σ_{k=0}^{N−1} (M + ε)(x_{k+1} − x_k) = M(b − a) + ε(b − a),

from which the result follows, since ε(b − a) is infinitesimal.

== See also ==

Adequality
Archimedes' use of infinitesimals
Criticism of nonstandard analysis
Differential (mathematics)
Elementary Calculus: An Infinitesimal Approach
Non-classical analysis
History of calculus

== Notes ==

== References ==

Fitzpatrick, Patrick (2006), Advanced Calculus, Brooks/Cole
H. Jerome Keisler: Elementary Calculus: An Approach Using Infinitesimals. First edition 1976; 2nd edition 1986. (This book is now out of print. The publisher has reverted the copyright to the author, who has made the 2nd edition available in .pdf format for downloading at http://www.math.wisc.edu/~keisler/calc.html.)
H. Jerome Keisler: Foundations of Infinitesimal Calculus, available for downloading at http://www.math.wisc.edu/~keisler/foundations.html
Blass, Andreas (1978), "Review: Martin Davis, Applied nonstandard analysis, and K. D. Stroyan and W. A. J. Luxemburg, Introduction to the theory of infinitesimals, and H. Jerome Keisler, Foundations of infinitesimal calculus", Bull. Amer. Math. Soc., 84 (1): 34–41, doi:10.1090/S0002-9904-1978-14401-2
Baron, Margaret E.: The origins of the infinitesimal calculus. Pergamon Press, Oxford-Edinburgh-New York 1969. Dover Publications, Inc., New York, 1987. (A new edition of Baron's book appeared in 2004)
"Infinitesimal calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]

== External links ==

Keisler, H. Jerome (2007). Elementary Calculus: An Infinitesimal Approach. Dover Publications. ISBN 978-0-48-648452-5. On-line version (2022)
Henle, James M.; Kleinberg, Eugene M. (1979). Infinitesimal Calculus. Dover Publications. ISBN 978-0-48-642886-4. Infinitesimal Calculus at the Internet Archive
Brief Calculus (2005, rev. 2015) by Benjamin Crowell.
This short text is designed more for self-study or review than for classroom use. Infinitesimals are used when appropriate, and are treated more rigorously than in old books like Thompson's Calculus Made Easy, but in less detail than in Keisler's Elementary Calculus: An Approach Using Infinitesimals.
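The difference-quotient characterization of the derivative in the Basic theorems section above has a crude numerical analogue that can be sketched in Python, with tiny floating-point steps standing in for infinitesimals (an illustration of the idea only; floats are not hyperreals, so "independent of h" here means "agreeing up to the scale of h"):

```python
# Numerical analogue of Δ_h f = st((*f(x+h) − *f(x))/h): for a function
# differentiable at x, the difference quotient is (nearly) independent of
# the choice of small h; for a non-differentiable one it is not.
def diff_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2   # differentiable everywhere
g = abs                # not differentiable at 0

quotients_f = {diff_quotient(f, 1.0, h) for h in (1e-6, -1e-6, 5e-7)}
quotients_g = {diff_quotient(g, 0.0, h) for h in (1e-6, -1e-6)}

# All quotients for f cluster around the derivative f'(1) = 2 ...
assert all(abs(q - 2.0) < 1e-5 for q in quotients_f)
# ... while for |x| at 0 the quotient depends on the sign of h (+1 vs -1),
# mirroring the failure of the "independent of h" condition in the theorem.
assert quotients_g == {1.0, -1.0}
```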
Wikipedia/Non-standard_calculus
Fads and Fallacies in the Name of Science (1957)—originally published in 1952 as In the Name of Science: An Entertaining Survey of the High Priests and Cultists of Science, Past and Present—was Martin Gardner's second book. A survey of what it described as pseudosciences and cult beliefs, it became a founding document in the nascent scientific skepticism movement. Michael Shermer said of it: "Modern skepticism has developed into a science-based movement, beginning with Martin Gardner's 1952 classic". The book debunks what it characterises as pseudoscience and the pseudo-scientists who propagate it. == Contents == === Synopsis === Fads and Fallacies in the Name of Science starts with a brief survey of the spread of the ideas of "cranks" and "pseudo-scientists", attacking the credulity of the popular press and the irresponsibility of publishing houses in helping to propagate these ideas. Cranks often cite historical cases where ideas were rejected which are now accepted as right. Gardner acknowledges that such cases occurred, and describes some of them, but says that times have changed: "If anything, scientific journals err on the side of permitting questionable theses to be published". Gardner acknowledges that "among older scientists ... one may occasionally meet with irrational prejudice against a new point of view", but adds that "a certain degree of dogma ... is both necessary and desirable" because otherwise "science would be reduced to shambles by having to examine every new-fangled notion that came along." Gardner says that cranks have two common characteristics. The first "and most important" is that they work in almost total isolation from the scientific community. Gardner defines the community as an efficient network of communication within scientific fields, together with a co-operative process of testing new theories. 
This process allows for apparently bizarre theories to be published—such as Einstein's theory of relativity, which initially met with considerable opposition; it was never dismissed as the work of a crackpot, and it soon met with almost universal acceptance. But the crank "stands entirely outside the closely integrated channels through which new ideas are introduced and evaluated. He does not send his findings to the recognized journals or, if he does, they are rejected for reasons which in the vast majority of cases are excellent." The second characteristic of the crank (which also contributes to his or her isolation) is the tendency to paranoia. There are five ways in which this tendency is likely to be manifested:

1. The pseudo-scientist considers himself a genius.
2. He regards other researchers as stupid, dishonest or both.
3. He believes there is a campaign against his ideas, a campaign comparable to the persecution of Galileo or Pasteur. He may attribute his "persecution" to a conspiracy by a scientific "masonry" who are unwilling to admit anyone to their inner sanctum without appropriate initiation.
4. Instead of side-stepping the mainstream, the pseudo-scientist attacks it head-on: the most revered scientist is Einstein, so Gardner writes that Einstein is the most likely establishment figure to be attacked.
5. He has a tendency to use complex jargon, often making up words and phrases. Gardner compares this to the way that schizophrenics talk in what psychiatrists call "neologisms", "words which have meaning to the patient, but sound like Jabberwocky to everyone else."

These psychological traits are in varying degrees demonstrated throughout the remaining chapters of the book, in which Gardner examines particular "fads" he labels pseudo-scientific. His writing became the source book from which many later studies of pseudo-science were taken (e.g. Encyclopedia of Pseudo-science).
=== Chapters ===

As per the subtitle of the book, "The curious theories of modern pseudoscientists and the strange, amusing and alarming cults that surround them" are discussed in the chapters as listed.

In the Name of Science – the introductory chapter
Flat and Hollow – the Flat Earth theory of Wilbur Glenn Voliva; the Hollow Earth theories of John Cleves Symmes, Jr. and Cyrus Reed Teed
Monsters of Doom – Immanuel Velikovsky's Worlds in Collision; William Whiston's A New Theory of the Earth; Ignatius Donnelly's Ragnarok; Hanns Hörbiger's Welteislehre and Hörbiger's disciple Hans Schindler Bellamy
The Forteans – Charles Fort, Tiffany Thayer and the Fortean Society; the Hutchins-Adler Great Books Movement: "most of them regard scientists, on the whole, as a stupid lot."
Flying Saucers – Kenneth Arnold, the Mantell UFO Incident; Raymond Palmer, Richard Shaver, Donald Keyhoe, Frank Scully, Gerald Heard and the Unidentified flying object movement
Zig-Zag-and-Swirl – Alfred Lawson and his "Lawsonomy"
Down with Einstein! – Joseph Battell, Thomas H. Graydon, George Francis Gillette, Jeremiah J. Callahan and others
Sir Isaac Babson – Roger Babson and the Gravity Research Foundation
Dowsing Rods and Doodlebugs – Solco Walle Tromp and radiesthesia; Kenneth Roberts, Henry Goss and their dowsing
Under the Microscope – Andrew Crosse, Henry Charlton Bastian, Charles Wentworth Littlefield and others who claimed to observe spontaneous generation of living forms
Geology versus Genesis – Philip Henry Gosse and his Omphalos; George McCready Price and The New Geology; Mortimer Adler's writings on evolution; Hilaire Belloc's debate with H. G. Wells
Lysenkoism – Lamarck and Lamarckism; Lysenko and Lysenkoism
Apologists for Hate – Hans F. K. Günther and "nordicism"; Charles Carroll, Madison Grant, Lothrop Stoddard, and "scientific racism"
Atlantis and Lemuria – Ignatius Donnelly (again), Lewis Spence and Atlantis; Madame Blavatsky, James Churchward and Lemuria
The Great Pyramid – John Taylor, Charles Piazzi Smyth, Charles Taze Russell and others with their theories about the Great Pyramid of Giza
Medical Cults – Samuel Hahnemann, The Organon of the Healing Art, and homeopathy; naturopathy, with iridiagnosis, zone therapy and Alexander technique; Andrew Taylor Still and osteopathy; Daniel D. Palmer and chiropractic
Medical Quacks – Elisha Perkins; Albert Abrams and his defender Upton Sinclair; Ruth Drown; Dinshah Pestanji Framji Ghadiali and color therapy; Gurdjieff; Aleister Crowley; Edgar Cayce (in the Appendix); Hoxsey Therapy and Krebiozen
Food Faddists – Horace Fletcher and Fletcherism; William Howard Hay and the Dr. Hay diet; vegetarianism ("We need not be concerned here with the ethical arguments..."); J. I. Rodale and organic farming; Rudolf Steiner, Ehrenfried Pfeiffer, anthroposophy and biodynamic agriculture; Gayelord Hauser; Nutrilite; Dudley J. LeBlanc and Hadacol
Throw Away Your Glasses! – William Horatio Bates, the Bates method, Aldous Huxley and The Art of Seeing
Eccentric Sexual Theories – Arabella Kenealy; Bernarr Macfadden; John R. Brinkley; Frank Harris; John Humphrey Noyes and the Oneida Community; Alice Bunker Stockham and "karezza"
Orgonomy – Wilhelm Reich and "orgone"
Dianetics – L. Ron Hubbard, Dianetics: The Modern Science of Mental Health (the term Scientology had only just been introduced when Gardner's book was published)
General Semantics, Etc. – Alfred Korzybski, Samuel I. Hayakawa and "general semantics"; Jacob L. Moreno and "psychodrama"
From Bumps to Handwriting – Francis Joseph Gall and phrenology; physiognomy; palmistry; graphology
ESP and PK – Joseph Banks Rhine, extra-sensory perception and psychokinesis; Nandor Fodor; Upton Sinclair (again) and Mental Radio; Max Freedom Long
Bridey Murphy and Other Matters – Morey Bernstein and Bridey Murphy; a final plea for orthodoxy and responsibility in publishing

== History ==

The 1957 Dover publication is a revised and expanded version of In the Name of Science, which was published by G. P. Putnam's Sons in 1952. The subtitle boldly states the book's theme: "The curious theories of modern pseudoscientists and the strange, amusing and alarming cults that surround them. A study in human gullibility". As of 2005, it had been reprinted at least 30 times. The book was expanded from an article first published in the Antioch Review in 1950, and in the preface to the first edition, Gardner thanks the Review for allowing him to develop the article as the starting point of his book. Not all material in the article is carried over to the book. For example, in the article, Gardner writes:

The reader may wonder why a competent scientist does not publish a detailed refutation of Reich's absurd biological speculations. The answer is that the informed scientist doesn't care, and would, in fact, damage his reputation by taking the time to undertake such a thankless task.

And comments in a footnote:

It is not within the scope of this paper, however, to discuss technical criteria by which hypotheses are given high, low, or negative degrees of confirmation. Our purpose is simply to glance at several examples of a type of scientific activity which fails completely to conform to scientific standards, but at the same time is the result of such intricate mental activity that it wins temporary acceptance by many laymen insufficiently informed to recognize the scientist's incompetence.
Although there obviously is no sharp line separating competent from incompetent research, and there are occasions when a scientific "orthodoxy" may delay the acceptance of novel views, the fact remains that the distance between the work of competent scientists and the speculations of a Voliva or Velikovsky is so great that a qualitative difference emerges which justifies the label of "pseudo-science." Since the time of Galileo the history of pseudo-science has been so completely outside the history of science that the two streams touch only in the rarest of instances. While in the book, Gardner writes: If someone announces that the moon is made of green cheese, the professional astronomer cannot be expected to climb down from his telescope and write a detailed refutation. “A fairly complete textbook of physics would be only part of the answer to Velikovsky,” writes Prof. Laurence J. Lafleur, in his excellent article on "Cranks and Scientists" (Scientific Monthly, Nov., 1951), "and it is therefore not surprising that the scientist does not find the undertaking worth while." And in the wrap-up of the chapter: Just as an experienced doctor is able to diagnose certain ailments the instant a new patient walks into his office, or a police officer learns to recognize criminal types from subtle behavior clues which escape the untrained eye, so we, perhaps, may learn to recognize the future scientific crank when we first encounter him. == Reception == A contemporary review in the Pittsburgh Post-Gazette particularly welcomed Gardner's critical remarks about Hoxsey Therapy and about Krebiozen, both of which were being advanced as anti-cancer measures at that time. The review concluded that the book "should help to counteract some amusing and some positively harmful cults, the existence of which is all too often promoted by irresponsible journalism." The work has often been mentioned in subsequent books and articles. 
Louis Lasagna, in his book The Doctors' Dilemmas, considered it to be a "superb account of scientific cults, fads, and frauds" and wrote that "This talented writer combines solid fact with a pleasing style." Sociologist of religion Anson D. Shupe took a generally positive attitude and praised Gardner for his humor. But he says:

If there is a single criticism to be made of Gardner ... it is that he accepts too comfortably the conventional wisdom, or accepted social reality, of current twentieth-century science and middle-class American Christianity. Somehow it is evident (to me at least) that he is implicitly making a pact with the reader to evaluate these fringe groups in terms of their own shared presumptions about what is "normal". Thus he is quite confident throwing around labels like "quack", "crank" and "preposterous". In science the use of such value judgments can be quite time-bound; likewise in religions where today's heresy may become tomorrow's orthodoxy. The odds of course are always on the side of the writer criticizing fringe groups because statistically speaking so few of them survive. However, when a group does weather its infancy and go on to prosper, invariably its original detractors look a bit more arbitrary than they did initially, and then the shoe is on the other foot.

In the 1980s a fierce interchange took place between Gardner and Colin Wilson. In The Quest for Wilhelm Reich, Wilson wrote of this book:

[Gardner] writes about various kinds of cranks with the conscious superiority of the scientist, and in most cases one can share his sense of the victory of reason. But after half a dozen chapters this non-stop superiority begins to irritate; you begin to wonder about the standards that make him so certain he is always right. He asserts that the scientist, unlike the crank, does his best to remain open-minded. So how can he be so sure that no sane person has ever seen a flying saucer, or used a dowsing rod to locate water?
And that all the people he disagrees with are unbalanced fanatics? A colleague of the positivist philosopher A. J. Ayer once remarked wryly "I wish I was as certain of anything as he seems to be about everything". Martin Gardner produces the same feeling. By Wilson's own account, up to that time he and Gardner had been friends, but Gardner took offence. In February 1989 Gardner wrote a letter published in The New York Review of Books describing Wilson as "England’s leading journalist of the occult, and a firm believer in ghosts, poltergeists, levitations, dowsing, PK (psychokinesis), ESP, and every other aspect of the psychic scene". Shortly afterwards, Wilson replied, defending himself and adding "What strikes me as so interesting is that when Mr. Gardner—and his colleagues of CSICOP—begin to denounce the 'Yahoos of the paranormal,' they manage to generate an atmosphere of such intense hysteria ...". Gardner in turn replied quoting his own earlier description of Wilson: "The former boy wonder, tall and handsome in his turtleneck sweater, has now decayed into one of those amiable eccentrics for which the land of Conan Doyle is noted. They prowl comically about the lunatic fringes of science ..." In a review of a subsequent Gardner work, Paul Stuewe of the Toronto Star called Fads and Fallacies a "hugely enjoyable demolition of pseudo-scientific nonsense". Ed Regis, writing in The New York Times, considered the book to be "the classic put-down of pseudoscience". Fellow skeptic Michael Shermer called the book "the skeptic classic of the past half-century." He noted that the mark of popularity for the book came when John W. Campbell denounced the chapter on dianetics over the radio. 
Mark Erickson, author of Science, culture and society: understanding science in the twenty-first century, noted that Gardner's book provided "a flavour of the immense optimism surrounding science in the 1950s" and that his choice of topics was "interesting", but also that his attacks on "osteopathy, chiropractic, and the Bates method for correcting eyesight would raise eyebrows amongst medical practitioners today". Gardner's own response to criticism is given in his preface:

The first edition of this book prompted many curious letters from irate readers. The most violent letters came from Reichians, furious because the book considered orgonomy alongside such (to them) outlandish cults as dianetics. Dianeticians, of course, felt the same about orgonomy. I heard from homeopaths who were insulted to find themselves in company with such frauds as osteopathy and chiropractic, and one chiropractor in Kentucky "pitied" me because I had turned my spine on God's greatest gift to suffering humanity. Several admirers of Dr. Bates favored me with letters so badly typed that I suspect the writers were in urgent need of strong spectacles. Oddly enough, most of these correspondents objected to one chapter only, thinking all the others excellent.

== See also ==

Fads and Fallacies in the Social Sciences
Survivorship bias
The Demon-Haunted World

== Notes ==

== References ==

Gardner, Martin (1950). "The hermit scientist". The Antioch Review. 10 (4): 447–457. doi:10.2307/4609447. JSTOR 4609447.
Gardner, Martin (1957), Fads and Fallacies in the Name of Science (2nd, revised & expanded ed.), Mineola, New York: Dover Publications, ISBN 0-486-20394-8, retrieved 14 November 2010. Originally published in 1952 by G.P. Putnam's Sons under the title In the Name of Science.
Wikipedia/Fads_and_Fallacies_in_the_Name_of_Science
Science Fiction Puzzle Tales is a book written by Martin Gardner. It is a book of puzzles and short stories that relate to them. == Reception == Dave Langford reviewed Science Fiction Puzzle Tales for White Dwarf #47, and stated that "Many are familiar from Gardner's former books, but he's added new twists to fool smart alecs, and often a puzzle's solution features a variant puzzle, and so on: there are three sets of answers!" == Reviews == Review by John DiPrete (1982) in Science Fiction Review, Spring 1982 Review by Dave Langford (1983) in Paperback Inferno, Volume 7, Number 3 == References ==
Wikipedia/Science_Fiction_Puzzle_Tales
The crocodile paradox, also known as crocodile sophism, is a paradox in logic in the same family of paradoxes as the liar paradox. The premise states that a crocodile, who has stolen a child, promises the parent that their child will be returned if and only if they correctly predict what the crocodile will do next. The transaction is logically smooth but unpredictable if the parent guesses that the child will be returned, but a dilemma arises for the crocodile if the parent guesses that the child will not be returned. In the case that the crocodile decides to keep the child, he violates his terms: the parent's prediction has been validated, and the child should be returned. However, in the case that the crocodile decides to give back the child, he still violates his terms, even if this decision is based on the previous result: the parent's prediction has been falsified, and the child should not be returned. The question of what the crocodile should do is therefore paradoxical, and there is no justifiable solution. The crocodile dilemma serves to expose some of the logical problems presented by metaknowledge. In this regard, it is similar in construction to the unexpected hanging paradox, which Richard Montague (1960) used to demonstrate that certain plausible assumptions about knowledge are inconsistent when tested in combination. Ancient Greek sources were the first to discuss the crocodile dilemma.

== See also ==

List of paradoxes
Self-reference
Halting problem – the usual proof of its undecidability uses a similar contradiction

== Notes ==
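The dilemma in the "child will not be returned" case can be checked mechanically by enumerating the crocodile's two possible actions against the promise. This is an illustrative sketch of the case analysis above, not something from the ancient sources:

```python
# The parent predicts: "you will NOT return the child". Enumerate the
# crocodile's two possible actions and test each one against the promise
# "the child is returned if and only if the prediction is correct".
verdicts = {}
for action in ("keep", "return"):
    prediction_correct = (action == "keep")   # the parent predicted "keep"
    child_returned = (action == "return")
    verdicts[action] = (child_returned == prediction_correct)

# Keeping the child validates the prediction (so it should be returned);
# returning it falsifies the prediction (so it should be kept).
# Neither action keeps the promise:
assert verdicts == {"keep": False, "return": False}
```

The same enumeration run with the prediction "you will return the child" makes both sides of the biconditional agree for the "return" action, which is why that branch of the paradox is unproblematic.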
Wikipedia/Crocodile_dilemma
Monetary circuit theory is a heterodox theory of monetary economics, particularly money creation, often associated with the post-Keynesian school. It holds that money is created endogenously by the banking sector, rather than exogenously by central bank lending; it is a theory of endogenous money. It is also called circuitism and the circulation approach. == Contrast with mainstream theory == The key distinction from mainstream economic theories of money creation is that circuitism holds that money is created endogenously by the banking sector, rather than exogenously by the government through central bank lending: that is, the economy creates money itself (endogenously), rather than money being provided by some outside agent (exogenously). These theoretical differences lead to a number of different consequences and policy prescriptions; circuitism rejects, among other things, the money multiplier based on reserve requirements, arguing that money is created by banks lending, which only then pulls in reserves from the central bank, rather than by re-lending money pushed in by the central bank. The money multiplier arises instead from capital adequacy ratios, i.e. the ratio of its capital to its risk-weighted assets. == Circuitist model == Circuitism is easily understood in terms of familiar bank accounts and debit card or credit card transactions: bank deposits are just an entry in a bank account book (not specie – bills and coins), and a purchase subtracts money from the buyer's account with the bank, and adds it to the seller's account with the bank. === Transactions === As with other monetary theories, circuitism distinguishes between hard money – money that is exchangeable at a given rate for some commodity, such as gold – and credit money. The theory considers credit money created by commercial banks as primary (at least in modern economies), rather than derived from central bank money – credit money drives the monetary system. 
While it does not claim that all money is credit money – historically money has often been a commodity, or exchangeable for such – basic models begin by only considering credit money, adding other types of money later. In circuitism, a monetary transaction – buying a loaf of bread, in exchange for dollars, for instance – is not a bilateral transaction (between buyer and seller) as in a barter economy, but is rather a tripartite transaction between buyer, seller, and bank. Rather than a buyer handing over a physical good in exchange for their purchase, instead there is a debit to their account at a bank, and a corresponding credit to the seller's account. This is precisely what happens in credit card or debit card transactions, and in the circuitist account, this is how all credit money transactions occur. For example, if one purchases a loaf of bread with fiat money bills, it may appear that one is purchasing the bread in exchange for the commodity of "dollar bills", but circuitism argues that one is instead simply transferring a credit, here with the issuing central bank: as the bills are not backed by anything, they are ultimately just a physical record of a credit with the central bank, not a commodity. === Monetary creation === In circuitism, as in other theories of credit money, credit money is created by a loan being extended. Crucially, this loan need not (in principle) be backed by any central bank money: the money is created from the promise (credit) embodied in the loan, not from the lending or relending of central bank money: credit is prior to reserves. When the loan is repaid, with interest, the credit money of the loan is destroyed, but reserves (equal to the interest) are created – the profit from the loan. 
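The circuitist sequence described above, where a loan creates a matching deposit and repayment destroys it again, leaving only the interest as bank income, can be sketched as toy double-entry bookkeeping. This is an illustrative model of the verbal account, not a formalism from the circuitist literature; the class and figures are invented for the example:

```python
# Toy double-entry sketch of endogenous money creation: a bank loan
# creates a deposit (new credit money) without moving pre-existing funds,
# and repayment destroys that money, leaving the interest as bank income.
class ToyBank:
    def __init__(self):
        self.loans = {}      # borrower -> outstanding principal
        self.deposits = {}   # customer -> deposit balance
        self.earnings = 0.0

    def money_supply(self):
        # Credit money in this toy economy = total deposits outstanding.
        return sum(self.deposits.values())

    def make_loan(self, borrower, amount):
        # Both sides of the balance sheet grow at once: credit is created,
        # not re-lent from anywhere else.
        self.loans[borrower] = self.loans.get(borrower, 0.0) + amount
        self.deposits[borrower] = self.deposits.get(borrower, 0.0) + amount

    def repay(self, borrower, principal, interest):
        # Principal repayment destroys deposit money; interest is income.
        self.deposits[borrower] -= principal + interest
        self.loans[borrower] -= principal
        self.earnings += interest

bank = ToyBank()
bank.make_loan("firm", 100.0)
assert bank.money_supply() == 100.0   # money created by the loan itself
bank.deposits["firm"] += 5.0          # the firm earns 5 elsewhere (exogenous here)
bank.repay("firm", principal=100.0, interest=5.0)
assert bank.money_supply() == 0.0     # loan money destroyed on repayment
assert bank.earnings == 5.0           # interest remains as bank profit
```

The deliberately awkward step where the firm must obtain the interest "from elsewhere" is the point at which fuller circuitist models diverge; in this sketch it is simply injected.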
The failure of monetary policy during depressions – central banks give money to commercial banks, but the commercial banks do not lend it out – is referred to as "pushing on a string", and is cited by circuitists in favor of their model: credit money is pulled out by loans being made, not pushed out by central banks printing money and giving it to commercial banks to lend. In 2014, economist Richard Werner conducted an empirical study to determine if, in the process of issuing a loan, banks create new money or transfer money from another account. The study involved taking out a loan with a cooperating bank and monitoring their internal records to determine if the bank transfers the funds from other accounts within or outside the bank, or whether they are newly created. The study determined that the bank did not transfer funds between any accounts when the loan was issued. == History == Circuitism was developed by French and Italian economists after World War II; it was officially presented by Augusto Graziani in (Graziani 1989), following an earlier outline in (Graziani 1984). The notion and terminology of a money circuit dates at least to 1903, when amateur economist Nicholas Johannsen wrote Der Kreislauf des Geldes und Mechanismus des Sozial-Lebens (The Circuit Theory of Money), under the pseudonym J.J.O. Lahn (Graziani 2003). In the interwar period, German and Austrian economists studied monetary circuits, under the term Kreislauf, with the term "circuit" being introduced by French economists following this usage. The main protagonists of the French approach to the monetary circuit is Alain Parguez. Today, the main defenders of the theory of the monetary circuit can be found in the work of Riccardo Realfonzo, Giuseppe Fontana and Riccardo Bellofiore in Italy; and in Canada, in the work of Marc Lavoie, Louis-Philippe Rochon and Mario Seccareccia. 
== Modeling difficulties ==

While the verbal description of circuitism has attracted interest, it has proven difficult to model mathematically. Initial efforts to model the monetary circuit proved problematic, with models exhibiting a number of unexpected and undesired properties – money disappearing immediately, for instance. These problems go by such names as "losses in circuit", "destruction of money", and the "dilemma of profit". A comprehensive model of the total monetary circuit, which is free from the above difficulties, was presented recently by Pokrovskii et al. The figure shows the money flows between the main economic agents. These agents can be imagined as immersed in the monetary environment created by the government, central bank, and many commercial banks. The production system creates all products and generates cash flows between production units as well as from production units to other economic entities. The government, as a central economic entity, represents the common interests of all members of society. It receives its share of the value produced in the form of taxes, which are returned to other economic entities in various amounts. The central bank and commercial banks inject an indefinite amount of money in coins, banknotes, and deposits into the system made up of the government and many commercial bank customers. Money circulates in the economy, providing the exchange of products. The direction of money flow is opposite to the direction of product flow; the two flows (product flow and money flow) move along the same contour towards each other, but are relatively independent. Product flow is determined by specific technological conditions; the origin of flows lies in the natural environment, and flows end with the final consumption of goods. The described scheme allows us to formulate an evolutionary system of money circulation equations (Pokrovskii and Schinckus, 2016; Schinckus et al., 2018).
Two variables are assigned to each economic agent: the volume of its deposits and the volume of its bank loans. When some assumptions and simplifications are introduced, the evolutionary system is written as a closed system of seven equations. The system determines the trajectory of evolution for a given production program, government budget and external money flows. A special feature of the approach of Pokrovskii and Schinckus (2016) is the introduction and use of global characteristics of the system: the cost of producing and maintaining circulation of one monetary unit per unit of time (κ); the ratio of the income of the banking system to social public output excluding bank income, that is, the efficiency coefficient of the system (R); and a measure of the distribution of credit money (ξ*). These quantities must be specified. The ratio κ/R is the velocity of money of the well-known quantity theory of money.

== See also ==

Modern Monetary Theory, another theory of endogenous money
Post-Keynesian economics

== References ==

== Sources ==

Graziani, Augusto (1984), "The debate on Keynes's finance motive", Monte dei Paschi di Siena Economic Notes: 5–33
Graziani, Augusto (1989), Theory of the Monetary Circuit, Thames Polytechnic, ISBN 978-0-902169-39-5
Graziani, Augusto (2003), The Monetary Theory of Production, Cambridge University Press, ISBN 978-0-521-10417-3

== Further reading ==

Realfonzo, Riccardo (2012), "Circuit Theory", in J.E. King (ed.), The Elgar Companion to Post Keynesian Economics, Edward Elgar, pp. 87–92
Realfonzo, Riccardo (2006), "The Italian Circuitist Approach", in P. Arestis and M. Sawyer (eds.), A Handbook of Alternative Monetary Economics, Edward Elgar, Cheltenham, pp. 105–120.
Wikipedia/Monetary_circuit_theory
Decision field theory (DFT) is a dynamic-cognitive approach to human decision making. It is a cognitive model that describes how people actually make decisions rather than a rational or normative theory that prescribes what people should or ought to do. It is also a dynamic rather than a static model of decision-making, because it describes how a person's preferences evolve across time until a decision is reached rather than assuming a fixed state of preference. The preference evolution process is mathematically represented as a stochastic process called a diffusion process. It is used to predict how humans make decisions under uncertainty, how decisions change under time pressure, and how choice context changes preferences. This model can be used to predict not only the choices that are made but also decision or response times. The paper "Decision Field Theory" was published by Jerome R. Busemeyer and James T. Townsend in 1993. DFT has been shown to account for many puzzling findings regarding human choice behavior, including violations of stochastic dominance, violations of strong stochastic transitivity, violations of independence between alternatives, serial-position effects on preference, speed–accuracy tradeoff effects, the inverse relation between probability and decision time, changes in decisions under time pressure, as well as preference reversals between choices and prices. DFT also offers a bridge to neuroscience. Recently, the authors of decision field theory have also begun exploring a new theoretical direction called quantum cognition. == Introduction == The name decision field theory was chosen to reflect the fact that the inspiration for this theory comes from an earlier approach, the avoidance conflict model contained in Kurt Lewin's general psychological theory, which he called field theory. DFT is a member of a general class of sequential sampling models that are commonly used in a variety of fields in cognition.
The basic idea underlying the decision process in sequential sampling models is illustrated in Figure 1 below. Suppose the decision maker is initially presented with a choice between three risky prospects, A, B, and C, at time t = 0. The horizontal axis of the figure represents deliberation time (in seconds), and the vertical axis represents preference strength. Each trajectory in the figure represents the preference state for one of the risky prospects at each moment in time. Intuitively, at each moment in time, the decision maker thinks about various payoffs of each prospect, which produces an affective reaction, or valence, to each prospect. These valences are integrated across time to produce the preference state at each moment. In this example, during the early stages of processing (between 200 and 300 ms), attention is focused on advantages favoring prospect C, but later (after 600 ms) attention shifts toward advantages favoring prospect A. The stopping rule for this process is controlled by a threshold (set equal to 1.0 in this example): the first prospect to reach the threshold is accepted, which in this case is prospect A after about two seconds. Choice probability is determined by the first option to win the race and cross the upper threshold, and decision time equals the deliberation time required by one of the prospects to reach this threshold. The threshold is an important parameter for controlling speed–accuracy tradeoffs. If the threshold is set to a lower value (about .30) in Figure 1, then prospect C would be chosen instead of prospect A (and sooner). Thus decisions can reverse under time pressure. High thresholds require a strong preference state to be reached, which allows more information about the prospects to be sampled, prolonging the deliberation process and increasing accuracy.
Low thresholds allow a weak preference state to determine the decision, which cuts off sampling of information about the prospects, shortening the deliberation process and decreasing accuracy. Under high time pressure, decision makers must choose a low threshold, but under low time pressure, a higher threshold can be used to increase accuracy. Very careful and deliberative decision makers tend to use a high threshold, and impulsive and careless decision makers use a low threshold. To provide a slightly more formal description of the theory, assume that the decision maker has a choice among three actions, and also suppose for simplicity that there are only four possible final outcomes. Thus each action is defined by a probability distribution across these four outcomes. The affective values produced by each payoff are represented by the values mj. At any moment in time, the decision maker anticipates the payoff of each action, which produces a momentary evaluation, Ui(t), for action i. This momentary evaluation is an attention-weighted average of the affective evaluation of each payoff: Ui(t) = Σ Wij(t) mj. The attention weight at time t, Wij(t), for payoff j offered by action i, is assumed to fluctuate according to a stationary stochastic process. This reflects the idea that attention is shifting from moment to moment, causing changes in the anticipated payoff of each action across time. The momentary evaluation of each action is compared with the other actions to form a valence for each action at each moment, vi(t) = Ui(t) – U.(t), where U.(t) equals the average of the momentary evaluations across all actions. The valence represents the momentary advantage or disadvantage of each action. The total valence balances out to zero so that the options cannot all become attractive simultaneously. Finally, the valences are the inputs to a dynamic system that integrates the valences over time to generate the output preference states.
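The whole process just described can be sketched as a short simulation. Everything numeric below is an assumption for illustration only: the affective values, the payoff distributions of the three actions, the gamma-noise model of fluctuating attention, and the feedback parameters of the dynamic system (a self-feedback s slightly below 1 plus weak lateral inhibition, anticipating the linear difference equation formalized below).

```python
import numpy as np

rng = np.random.default_rng(7)

# Affective values m_j of the four possible payoffs (hypothetical numbers).
m = np.array([1.0, 0.5, -0.5, -1.0])

# Each of three actions is a probability distribution over the four payoffs.
probs = np.array([[0.50, 0.20, 0.20, 0.10],
                  [0.10, 0.40, 0.40, 0.10],
                  [0.25, 0.25, 0.25, 0.25]])

# Feedback matrix S: self-feedback s on the diagonal, weak negative
# lateral inhibition off the diagonal (values chosen for illustration).
s, inhibition = 0.95, -0.02
S = np.full((3, 3), inhibition)
np.fill_diagonal(S, s)

threshold, max_steps = 1.0, 10_000
pref = np.zeros(3)                      # preference states P_i(0)

for t in range(max_steps):
    # Attention W_ij(t) fluctuates stochastically around each action's
    # payoff probabilities (a crude stand-in for attention switching).
    noise = rng.gamma(1.0, probs)
    W = noise / noise.sum(axis=1, keepdims=True)
    U = W @ m                           # momentary evaluations U_i(t)
    v = U - U.mean()                    # valences v_i(t), summing to zero
    pref = S @ pref + v                 # integrate valences over time
    if pref.max() >= threshold:         # first action to cross wins
        break

winner, decision_time = int(pref.argmax()), t + 1
print("chose action", winner, "after", decision_time, "steps")
```

Lowering `threshold` shortens `decision_time` and makes the winner noisier, mirroring the speed–accuracy tradeoff described above.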
The output preference state for action i at time t is symbolized as Pi(t). The dynamic system is described by the following linear stochastic difference equation for a small time step h in the deliberation process: Pi(t+h) = Σ sij Pj(t) + vi(t+h). The positive self-feedback coefficient, sii = s > 0, controls the memory for past input valences for a preference state. Values of sii < 1 suggest decay in the memory or impact of previous valences over time, whereas values of sii > 1 suggest growth in impact over time (primacy effects). The negative lateral feedback coefficients, sij = sji < 0 for i not equal to j, produce competition among actions so that the strong inhibit the weak. In other words, as preference for one action grows stronger, it moderates the preference for the other actions. The magnitudes of the lateral inhibitory coefficients are assumed to be an increasing function of the similarity between choice options. These lateral inhibitory coefficients are important for explaining the context effects on preference described later. Formally, this is a Markov process; matrix formulas have been mathematically derived for computing the choice probabilities and the distribution of choice response times. Decision field theory can also be seen as a dynamic and stochastic random walk theory of decision making, positioned between lower-level neural activation patterns and the more complex notions of decision making found in psychology and economics. == Explaining context effects == DFT is capable of explaining context effects that many decision making theories are unable to explain. Many classic probabilistic models of choice satisfy two rational types of choice principles.
One principle is called independence of irrelevant alternatives, and according to this principle, if the probability of choosing option X is greater than option Y when only X and Y are available, then option X should remain more likely to be chosen over Y even when a new option Z is added to the choice set. In other words, adding an option should not change the preference relation between the original pair of options. A second principle is called regularity, and according to this principle, the probability of choosing option X from a set containing only X and Y should be greater than or equal to the probability of choosing option X from a larger set containing options X, Y, and a new option Z. In other words, adding an option should only decrease the probability of choosing one of the original pair of options. However, empirical findings obtained by consumer researchers studying human choice behavior have uncovered context effects that systematically violate both of these principles. The first context effect is the similarity effect. This effect occurs with the introduction of a third option S that is similar to X but is not dominated by X. For example, suppose X is a BMW, Y is a Ford Focus, and S is an Audi. The Audi is similar to the BMW because both are not very economical but are high quality and sporty. The Ford Focus differs from the BMW and Audi because it is more economical but lower quality. Suppose in a binary choice, X is chosen more frequently than Y. Next suppose a new choice set is formed by adding an option S that is similar to X. If X is similar to S, and both are very different from Y, then people tend to view X and S as one group and Y as another option. Thus the probability of Y remains the same whether S is presented as an option or not. However, the probability of X will decrease by approximately half with the introduction of S. This causes the probability of choosing X to drop below Y when S is added to the choice set.
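The arithmetic behind this shift can be made concrete with hypothetical choice probabilities:

```python
# Binary choice {X, Y} (hypothetical numbers): X beats Y.
p_X, p_Y = 0.6, 0.4
assert p_X > p_Y

# Add S, similar to X: Y's share is unaffected, while X and S
# split X's former share roughly in half.
p_Y_new = p_Y
p_X_new = p_S_new = p_X / 2

# X now trails Y, even though nothing about X or Y changed.
assert p_X_new < p_Y_new
print(p_X_new, p_S_new, p_Y_new)  # prints 0.3 0.3 0.4
```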
This violates the independence of irrelevant alternatives property because in a binary choice, X is chosen more frequently than Y, but when S is added, Y is chosen more frequently than X. The second context effect is the compromise effect. This effect occurs when an option C is added that is a compromise between X and Y. For example, when choosing between C = Honda and X = BMW, the latter is less economical but higher quality. However, if another option Y = Ford Focus is added to the choice set, then C = Honda becomes a compromise between X = BMW and Y = Ford Focus. Suppose in a binary choice, X (BMW) is chosen more often than C (Honda). But when option Y (Ford Focus) is added to the choice set, option C (Honda) becomes the compromise between X (BMW) and Y (Ford Focus), and C is then chosen more frequently than X. This is another violation of the independence of irrelevant alternatives property because X is chosen more often than C in a binary choice, but when option Y is added to the choice set, C is chosen more often than X. The third effect is called the attraction effect. This effect occurs when a third option D is added that is very similar to X but defective compared to X. For example, D may be a new sporty car developed by a new manufacturer that is similar to option X = BMW but costs more than the BMW. Therefore, there is little or no reason to choose D over X, and in this situation D is rarely if ever chosen over X. However, adding D to a choice set boosts the probability of choosing X. In particular, the probability of choosing X from a set containing X, Y, and D is larger than the probability of choosing X from a set containing only X and Y. The defective option D makes X shine, and this attraction effect violates the principle of regularity, which says that adding another option cannot increase the popularity of an option over the original subset.
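The regularity violation in the attraction effect can likewise be put in numbers (again hypothetical, for illustration only):

```python
# Binary set {X, Y}: X is chosen half the time (hypothetical).
p_X_binary = 0.50

# Triadic set {X, Y, D}: the dominated decoy D makes X "shine",
# so X's share increases despite the extra competitor.
p_X_triadic, p_Y_triadic, p_D_triadic = 0.60, 0.35, 0.05

# Regularity demands p_X_triadic <= p_X_binary; these numbers violate it.
violates_regularity = p_X_triadic > p_X_binary
print(violates_regularity)  # prints True
```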
DFT accounts for all three effects using the same principles and the same parameters across all three findings. According to DFT, the attention-switching mechanism is crucial for producing the similarity effect, but the lateral inhibitory connections are critical for explaining the compromise and attraction effects. If the attention-switching process is eliminated, then the similarity effect disappears, and if the lateral connections are all set to zero, then the attraction and compromise effects disappear. This property of the theory entails an interesting prediction about the effects of time pressure on preferences. The contrast effects produced by lateral inhibition require time to build up, which implies that the attraction and compromise effects should become larger under prolonged deliberation (see Roe, Busemeyer & Townsend 2001). Alternatively, if context effects were produced by switching from a weighted average rule under binary choice to a quick heuristic strategy for the triadic choice, then these effects should get larger under time pressure. Empirical tests show that prolonging the decision process increases the effects and that time pressure decreases them. == Neuroscience == Decision field theory has demonstrated an ability to account for a wide range of findings from behavioral decision making for which the purely algebraic and deterministic models often used in economics and psychology cannot account. Recent studies that record neural activations in non-human primates during perceptual decision making tasks have revealed that neural firing rates closely mimic the accumulation of preference theorized by behaviorally derived diffusion models of decision making. The decision processes of sensory-motor decisions are beginning to be fairly well understood at both the behavioral and neural levels.
Typical findings indicate that neural activation regarding stimulus movement information is accumulated across time up to a threshold, and a behavioral response is made as soon as the activation in the recorded area exceeds the threshold. A conclusion that one can draw is that the neural areas responsible for planning or carrying out certain actions are also responsible for deciding which action to carry out, a decidedly embodied notion. Mathematically, the spike activation pattern, as well as the choice and response time distributions, can be well described by what are known as diffusion models, especially in two-alternative forced choice tasks. Diffusion models, such as decision field theory, can be viewed as stochastic recurrent neural network models, except that the dynamics are approximated by linear systems. The linear approximation is important for maintaining a mathematically tractable analysis of systems perturbed by noisy inputs. In addition to these neuroscience applications, diffusion models (or their discrete-time random walk analogues) have been used by cognitive scientists to model performance in a variety of tasks ranging from sensory detection and perceptual discrimination to memory recognition and categorization. Thus, diffusion models provide the potential to form a theoretical bridge between neural models of sensory-motor tasks and behavioral models of complex cognitive tasks. == Notes == == References == Busemeyer, J. R.; Diederich, A. (2002). "Survey of decision field theory" (PDF). Mathematical Social Sciences. 43 (3): 345–370. doi:10.1016/S0165-4896(02)00016-1. Busemeyer, J. R.; Johnson, J. J. (2004). "Computational models of decision making" (PDF). In Koehler, D.; Harvey, N. (eds.). Handbook of Judgment and Decision Making. Blackwell. pp. 133–154. Busemeyer, J. R.; Johnson, J. J. (2008). "Micro-process models of decision-making" (PDF). In Sun, R. (ed.). Cambridge Handbook of Computational Cognitive Modeling. Cambridge University Press.
Wikipedia/Decision_field_theory
Emotional choice theory (also referred to as the "logic of affect") is a social scientific action model to explain human decision-making. Its foundation was laid in Robin Markwica’s monograph Emotional Choices published by Oxford University Press in 2018. It is associated with its own method for identifying emotions and tracing their influences on decision-making. Emotional choice theory is considered an alternative model to rational choice theory and constructivist perspectives. == Overview == Markwica suggests that political and social scientists have generally employed two main action models to explain human decision-making: On the one hand, rational choice theory (also referred to as the "logic of consequences") views people as homo economicus and assumes that they make decisions to maximize benefit and to minimize cost. On the other hand, a constructivist perspective (also known as the "logic of appropriateness") regards people as homo sociologicus, who behave according to their social norms and identities.: 3, 36  According to Markwica, recent research in neuroscience and psychology, however, shows that decision-making can be strongly influenced by emotion. Drawing on these insights, he develops "emotional choice theory," which conceptualizes decision-makers as homo emotionalis – "emotional, social, and physiological beings whose emotions connect them to, and separate them from, significant others.": 16–17  Emotional choice theory posits that individual-level decision-making is shaped in significant ways by the interplay between people’s norms, emotions, and identities. While norms and identities are important long-term factors in the decision process, emotions function as short-term, essential motivators for change. 
These motivators kick in when persons detect events in the environment that they deem relevant to a need, goal, value, or concern.: 50–51  == The role of emotions in decision-making == Markwica contends that rational choice theory and constructivist approaches generally ignore the role of affect and emotion in decision-making. They typically treat choice selection as a conscious and reflective process based on thoughts and beliefs. Two decades of research in neuroscience, however, suggest that only a small fraction of the brain’s activities operate at the level of conscious reflection. The vast majority of its activities consist of unconscious appraisals and emotion. Markwica concludes that emotions play a significant role in shaping decision-making processes: "They inform us what we like and what we loathe, what is good and bad for us, and whether we do right or wrong. They give meaning to our relationships with others, and they generate physiological impulses to act.": 4  == The theory == Emotional choice theory is a unitary action model to organize, explain, and predict the ways in which emotions shape decision-making. One of its main assumptions is that the role of emotion in choice selection can be captured systematically by homo emotionalis. The theory seeks to lay the foundation for an affective paradigm in the political and social sciences.: 7–8  Markwica emphasizes that it is not designed to replace rational choice theory and constructivist approaches, or to negate their value. Rather, it is supposed to offer a useful complement to these perspectives. Its purpose is to enable scholars to explain a broader spectrum of decision-making.: 25  The theory is developed in four main steps: The first part defines "emotion" and specifies the model’s main assumptions. The second part outlines how culture shapes emotions, while the third part delineates how emotions influence decision-making. 
The fourth part formulates the theory’s main propositions.: 36–37  === Defining "emotion" and the theory’s main assumptions === Emotional choice theory subscribes to a definition of "emotion" as a "transient, partly biologically based, partly culturally conditioned response to a stimulus, which gives rise to a coordinated process including appraisals, feelings, bodily reactions, and expressive behavior, all of which prepare individuals to deal with the stimulus.": 4  Markwica notes that the term "emotional choice theory" and the way it contrasts with rational choice theory may create the impression that it casts emotion in opposition to rationality. However, he stresses that the model does not conceive of feeling and thinking as antithetical processes. Rather, it seeks to challenge rational choice theory’s monopoly over the notion of rationality. He argues that the rational choice understanding of rationality is problematic not for what it includes, but for what it omits. It allegedly leaves out important affective capacities that put humans in a position to make reasoned decisions. He points out that two decades of research in neuroscience and psychology has shattered the orthodox view that emotions stand in opposition to rationality. This line of work suggests that the capacity to feel is a prerequisite for reasoned judgment and rational behavior.: 21  === The influence of culture on emotions === Emotional choice theory is based on the assumption that while emotion is felt by individuals, it cannot be isolated from the social context in which it arises. It is inextricably intertwined with people’s cultural ideas and practices. This is why it is necessary to understand how emotion is molded by the cultural environment in which it is embedded.: 58  The theory draws on insights from sociology to delineate how actors’ norms about the appropriate experience and expression of affect shape their emotions. 
It does not specify the precise substantive content of norms in advance. Given that they vary from case to case, Markwica suggests that they need to be investigated inductively. The model describes the generic processes through which norms guide emotions: Norms affect emotions through what sociologist Arlie Russell Hochschild has termed "feeling rules," which inform people how to experience emotions in a given situation, and "display rules," which tell them how to express emotions.: 15  === The influence of emotions on decision-making === Emotional choice theory assumes that emotions are not only social but also corporeal experiences that are tied to an organism’s autonomic nervous system. People feel emotions physically, often before they are aware of them. It is suggested that these physiological processes can exert a profound influence on human cognition and behavior. They generate or stifle energy, which makes decision-making a continuously dynamic phenomenon. To capture this physiological dimension of emotions, the theory draws on research in psychology in general and appraisal theory in particular.: 16  Appraisal theorists have found that each discrete emotion, such as fear, anger, or sadness, has a logic of its own. It is associated with what social psychologist Jennifer Lerner has termed "appraisal tendencies": 477  and what emotion researcher Nico Frijda has called "action tendencies.": 6, 70  An emotion’s appraisal tendencies influence what and how people think, while its action tendencies shape what they want and do.: 60–70  === Emotional choice theory’s propositions === The core of emotional choice theory consists of a series of propositions about how emotions tend to influence decision-makers’ thinking and behavior through their appraisal tendencies and action tendencies: Fear often prompts an attentional bias toward potential threats and may cause actors to fight, flee, or freeze. 
Anger is associated with a sense of power and a bias in favor of high-risk options. Hope may boost creativity and persistence, but it can also further confirmation bias. Pride can both cause people to be more persistent and to disregard their own weaknesses. And humiliation can lead people to withdraw or, alternatively, to resist the humiliator.: 86–89  Markwica emphasizes that even when emotions produce powerful impulses, individuals will not necessarily act on them. Emotional choice theory restricts itself to explaining and predicting the influence of emotions on decision-making in a probabilistic fashion. It also recognizes that emotions may mix, meld, or co-occur.: 89–90  == Method for identifying emotions and tracing their influences on decision-making == Emotional choice theory is associated with a method for identifying actors’ emotions and gauging their influences on decision-making. The idea is to infer the traces that emotions leave behind in their external representations, such as physiological manifestations or verbal utterances. To begin with, the method develops a taxonomy of emotion signs that includes four categories: explicit, implicit, cognitive, and behavioral emotion signs.: 100  It then employs a combination of qualitative sentiment analysis and an interpretive approach to identify these emotion signs in self-reports (e.g., autobiographies or statements in interviews) and observer reports (e.g., eyewitness accounts). Sentiment analysis is a technique for uncovering emotion terms in texts, i.e., written words that were originally noted down by an author or expressed by a speaker. While finding explicit emotion signs is helpful, an interpretive approach is best suited to identify implicit, cognitive, and behavioral signs of emotions.: 100–101  Markwica claims that neither traditional causal analysis nor constitutive analysis can properly account for the relationship between emotions and decision-making. 
Causal analysis, which conceptualizes cause and effect as two independent, stable entities, has difficulty grasping the continuously fluent nature of emotions. Constitutive analysis, on the other hand, fails to capture the dynamic character of emotions, because it assumes a static relationship between properties and their component parts. This is why emotional choice theory relies on "process analysis" as an alternative.: 116–119  Drawing on process philosophy, the theory conceives of emotions not as "objects" or "states" but as continuous processes "all the way down.": 118  In contrast to constitution, the concept of process includes an ineliminable temporal dimension. Whereas causal analysis is based on the assumption that an independent variable causes an effect, a process approach suggests that entities connected in a process do not exist separately from each other.: 118  Finally, emotional choice theory adapts the classic causal process tracing method to this process form of explanation in order to explore the relationship between emotions and decision-making. The result is an interpretive form of the process tracing technique, which seeks to bring together an interpretive sensitivity to social contexts with a commitment to gaining cross-case insights. The idea is to first establish the local meanings of emotion norms and to then move beyond singular causality to attain a higher level of analytical generality.: 119–121  == Reception == Emotional choice theory has been met with some praise but also with strong criticisms by political and social scientists and political psychologists. For example, political scientist Dustin Tingley (Harvard University) considers the model "an intellectual tour de force" that "should be required reading for anyone in the social sciences who is doing applied research that features a role for emotions." 
In his opinion, even scholars from the rational choice school of thought would "benefit from the clear explication of how to think about emotion in strategic contexts.": 8  International relations scholar Neta Crawford (Boston University) recognizes that emotional choice theory seeks to "dramatically revise, if not overturn," our understanding of decision-making.: 672  She concludes that the model is "strong [...] on theoretical, methodological, and empirical grounds.": 671  However, she criticizes its disregard for important factors that would need to be taken into consideration to fully explain decision-making. For instance, the theory’s focus on the psychology and emotions of individual actors makes it difficult to account for group dynamics in decision-making processes, such as groupthink, in her opinion. She also finds that the theory neglects the role of ideology and gender, including norms about femininity and masculinity.: 671–672  Similarly, Matthew Costlow (National Institute of Public Policy) criticizes the model for not adequately taking into account how mental illnesses and personality disorders may influence certain emotions and people’s ability to regulate them. He notes that U.S. President Abraham Lincoln and British Prime Minister Winston Churchill suffered from depression, for example, which presumably affected their emotions and, hence, their decision-making.: 124  Political psychologist Rose McDermott (Brown University) considers emotional choice theory "remarkable for its creative integration of many facets of emotion into a single, detailed, comprehensive framework." She deems it an "important contribution" to the literature on decision-making, which can "easily serve as a foundational template for other scholars wishing to expand exploration into other emotions or other areas of application.": 7  Yet, she also notes "how deeply idiosyncratic the experience and expression of emotion is between individuals."
In her eyes, this "does not make it impossible or pointless" to apply emotional choice theory, "but it does make it more difficult, and requires more and richer information sources than other models might demand.": 6  International relations scholar Adam Lerner (University of Cambridge) wonders whether emotions and their interpretations are not too context specific – both socially and historically – for their impacts to be understood systematically across time and space with emotional choice theory. He takes issue with the model’s complexity and concludes that it offers "relatively limited yield" when compared with rigorous historical analysis.: 82  Political scientist Ignas Kalpokas (Vytautas Magnus University) regards emotional choice theory as "a long-overdue and successful attempt to conceptualize the logic of affect." He highlights the theory’s "real subversive and disruptive potential" and considers it "of particular necessity in today’s environment when traditional political models based on rationality and deliberation are crumbling in the face of populism, resurgent emotion-based identities, and post-truth." In his eyes, the model’s most significant "drawback" is the methodological difficulty of accessing another person’s emotions. When analysts are not able to obtain this information, they cannot employ the theory.: 1410  According to international relations scholar Keren Yarhi-Milo (Columbia University), the theory "proves a useful, additional approach to understanding the decision-making process of leaders." In her view, the model and its methodology "are novel and significantly advance not only our understanding of [emotions'] role in decision-making but also how to study them systematically.": 206  She highlights the theory’s assumption that "emotions themselves are shaped by the cultural milieu in which they are embedded." 
Contextualizing emotions in such a way is "important," she contends, because cultures, norms, and identities are bound to vary over time and space, which will, in turn, affect how people experience and express emotions. At the same time, Yarhi-Milo points out that the theory sacrifices parsimony by incorporating a number of psychological and cultural processes, such as the role of identity validation dynamics, compliance with norms about emotions, and the influence of individual psychological dispositions. She notes that the model’s focus on the inductive reconstruction of the cultural context of emotions puts a "significant burden" on analysts who apply it, because they need access to evidence that is typically not easy to come by.: 205  == See also == Constructivism (international relations) Decision-making models Logic of appropriateness Rational choice theory Social choice theory == References == == Further reading == Elster, Jon (2009). "Emotional Choice and Rational Choice". In Goldie, Peter (ed.). The Oxford Handbook of Philosophy of Emotion (1 ed.). Oxford University Press. pp. 263–282. doi:10.1093/oxfordhb/9780199235018.001.0001. ISBN 978-0-19-923501-8.
Wikipedia/Emotional_choice_theory
Behavioural science is the branch of science concerned with human behaviour. While the term can technically be applied to the study of behaviour amongst all living organisms, it is nearly always used with reference to humans as the primary target of investigation (though animals may be studied in some instances, e.g. with invasive techniques). The behavioural sciences sit between the conventional natural sciences and social studies in terms of scientific rigour. They encompass fields such as psychology, neuroscience, linguistics, and economics. == Scope == The behavioural sciences encompass both natural and social scientific disciplines, including various branches of psychology, neuroscience and biobehavioural sciences, behavioural economics, and certain branches of criminology, sociology and political science. This interdisciplinary nature allows behavioural scientists to coordinate findings from psychological experiments, genetics and neuroimaging, self-report studies, interspecies and cross-cultural comparisons, and correlational and longitudinal designs to understand the nature, frequency, mechanisms, causes and consequences of given behaviours. With respect to applied behavioural science and behavioural insights, the focus is usually narrower, tending to encompass cognitive psychology, social psychology and behavioural economics generally, and invoking other more specific fields (e.g. health psychology) where needed. In applied settings, behavioural scientists draw on their knowledge of cognitive biases, heuristics, and the various factors that affect decision-making to develop behaviour-change interventions or policies that 'nudge' people into acting more beneficially (see Applications below).
=== Future and emerging techniques === Robila argues that modern technologies such as artificial intelligence, machine learning, and big data, which allow behavioural patterns to be studied at much greater scale, have a promising future in behavioural science research and practice. Developing new therapies and interventions with immersive technology such as virtual reality and AI would also benefit the field. These are only a few of the many paths behavioural science may take in the future. == Applications == Insights from several pure disciplines across behavioural sciences are explored by various applied disciplines and practiced in the context of everyday life and business. Consumer behaviour, for instance, is the study of the decision-making processes consumers go through when purchasing goods or services. It studies the way consumers recognise problems and discover solutions. Behavioural science is applied in this study by examining the patterns consumers exhibit when making purchases, the factors that influence those decisions, and how to take advantage of these patterns. Organisational behaviour is the application of behavioural science in a business setting. It studies what motivates employees, how to make them work more effectively, what influences this behaviour, and how to use these patterns to achieve the company's goals. Managers often use organisational behaviour to better lead their employees. Using insights from psychology and economics, behavioural science can be leveraged to understand how individuals make decisions regarding their health, and ultimately to reduce disease burden through interventions such as loss aversion, framing, defaults, nudges, and more. Other applied disciplines of behavioural science include operations research and media psychology. 
== Differentiation from social sciences == The terms behavioural sciences and social sciences are interconnected fields that both study systematic processes of behaviour, but they differ on their level of scientific analysis for various dimensions of behaviour. Behavioural sciences abstract empirical data to investigate the decision process and communication strategies within and between organisms in a social system. This characteristically involves fields like psychology, social neuroscience, ethology, and cognitive science. In contrast, social sciences provide a perceptive framework to study the processes of a social system through impacts of a social organisation on the structural adjustment of the individual and of groups. They typically include fields like sociology, economics, public health, anthropology, demography, and political science. Many subfields of these disciplines test the boundaries between behavioural and social sciences. For example, political psychology and behavioural economics use behavioural approaches, despite the predominant focus on systemic and institutional factors in the broader fields of political science and economics. == See also == Behaviour Human behaviour loss aversion List of academic disciplines Science Fields of science Natural sciences Social sciences History of science History of technology == References == == Selected bibliography == George Devereux: From anxiety to method in the behavioral sciences, The Hague, Paris. Mouton & Co, 1967 Fred N. Kerlinger (1979). Behavioural Research: A Conceptual Approach. New York: Holt, Rinehart & Winston. ISBN 0-03-013331-9. E.D. Klemke, R. Hollinger & A.D. Kline, (eds.) (1980). Introductory Readings in the Philosophy of Science. Prometheus Books, New York. Neil J. Smelser & Paul B. Baltes, eds. (2001). International Encyclopedia of the Social & Behavioral Sciences, 26 v. Oxford: Elsevier. ISBN 978-0-08-043076-8 Mills, J. A. (1998). Control a history of behavioral psychology. 
New York University Press. == External links == Media related to Behavioral sciences at Wikimedia Commons
Wikipedia/Behavioural_sciences
In economics, utility is a measure of a certain person's satisfaction from a certain state of the world. Over time, the term has been used with at least two meanings. In a normative context, utility refers to a goal or objective that we wish to maximize, i.e., an objective function. This kind of utility bears a closer resemblance to the original utilitarian concept, developed by moral philosophers such as Jeremy Bentham and John Stuart Mill. In a descriptive context, the term refers to an apparent objective function; such a function is revealed by a person's behavior, and specifically by their preferences over lotteries, which can be any quantified choice. The relationship between these two kinds of utility functions has been a source of controversy among both economists and ethicists, with most maintaining that the two are distinct but generally related. == Utility function == Consider a set of alternatives among which a person has a preference ordering. A utility function represents that ordering if it is possible to assign a real number to each alternative in such a manner that alternative a is assigned a number greater than alternative b if and only if the individual prefers alternative a to alternative b. In this situation, someone who selects the most preferred alternative must also choose one that maximizes the associated utility function. Suppose James has utility function U = √(xy) {\displaystyle U={\sqrt {xy}}} such that x {\displaystyle x} is the number of apples and y {\displaystyle y} is the number of chocolates. Alternative A has x = 9 {\displaystyle x=9} apples and y = 16 {\displaystyle y=16} chocolates; alternative B has x = 13 {\displaystyle x=13} apples and y = 13 {\displaystyle y=13} chocolates. Putting the values x , y {\displaystyle x,y} into the utility function yields √(9 × 16) = 12 {\displaystyle {\sqrt {9\times 16}}=12} for alternative A and √(13 × 13) = 13 {\displaystyle {\sqrt {13\times 13}}=13} for B, so James prefers alternative B. 
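James's comparison can be reproduced with a few lines of code (a minimal sketch in Python; the utility function and the two alternatives are taken directly from the example):

```python
import math

def utility(apples, chocolates):
    # James's example utility function: U = sqrt(x * y)
    return math.sqrt(apples * chocolates)

u_a = utility(9, 16)   # alternative A: 9 apples, 16 chocolates
u_b = utility(13, 13)  # alternative B: 13 apples, 13 chocolates

print(u_a, u_b)   # 12.0 13.0
print(u_b > u_a)  # True: James prefers alternative B
```

Since only the ranking matters here, any increasing transformation of U would lead James to the same choice.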
In general economic terms, a utility function ranks preferences concerning a set of goods and services. Gérard Debreu derived the conditions required for a preference ordering to be representable by a utility function. For a finite set of alternatives, these require only that the preference ordering is complete (so the individual can determine which of any two alternatives is preferred or that they are indifferent), and that the preference order is transitive. Suppose the set of alternatives is not finite (for example, even if the number of goods is finite, the quantity chosen can be any real number on an interval). In that case, a continuous utility function exists representing a consumer's preferences if and only if the consumer's preferences are complete, transitive, and continuous. == Applications == Utility can be represented through sets of indifference curves, which are level curves of the function itself and which plot the combinations of commodities that an individual would accept to maintain a given level of satisfaction. Combining indifference curves with budget constraints allows for the derivation of individual demand curves. A diagram of a general indifference curve is shown below (Figure 1). The vertical and horizontal axes represent an individual's consumption of commodity Y and X respectively. All the combinations of commodity X and Y along the same indifference curve are regarded indifferently by individuals, which means all the combinations along an indifference curve result in the same utility value. Individual and social utility can be construed as the value of a utility function and a social welfare function, respectively. When coupled with production or commodity constraints, under some assumptions, these functions can be used to analyze Pareto efficiency, as illustrated by Edgeworth boxes and contract curves. Such efficiency is a major concept in welfare economics. 
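The level-curve idea can be made concrete with James's utility function from the previous section: every bundle with the same product x · y lies on the same indifference curve (a small sketch; the particular bundles are illustrative):

```python
import math

def utility(x, y):
    # James's utility function U = sqrt(x * y); its level curves are x*y = const
    return math.sqrt(x * y)

# Bundles with x * y = 36 all lie on the same indifference curve (U = 6)
curve = [(1, 36), (4, 9), (6, 6), (9, 4), (36, 1)]
assert all(utility(x, y) == 6.0 for x, y in curve)

# A bundle with a larger product sits on a higher indifference curve
assert utility(8, 8) > utility(6, 6)
```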
== Preference == While preferences are the conventional foundation of choice theory in microeconomics, it is often convenient to represent preferences with a utility function. Let X be the consumption set, the set of all mutually exclusive baskets the consumer could consume. The consumer's utility function u : X → R {\displaystyle u\colon X\to \mathbb {R} } ranks each possible outcome in the consumption set. If the consumer strictly prefers x to y or is indifferent between them, then u ( x ) ≥ u ( y ) {\displaystyle u(x)\geq u(y)} . For example, suppose a consumer's consumption set is X = {nothing, 1 apple,1 orange, 1 apple and 1 orange, 2 apples, 2 oranges}, and his utility function is u(nothing) = 0, u(1 apple) = 1, u(1 orange) = 2, u(1 apple and 1 orange) = 5, u(2 apples) = 2 and u(2 oranges) = 4. Then this consumer prefers 1 orange to 1 apple but prefers one of each to 2 oranges. In micro-economic models, there is usually a finite set of L commodities, and a consumer may consume an arbitrary amount of each commodity. This gives a consumption set of R + L {\displaystyle \mathbb {R} _{+}^{L}} , and each package x ∈ R + L {\displaystyle x\in \mathbb {R} _{+}^{L}} is a vector containing the amounts of each commodity. For the example, there are two commodities: apples and oranges. If we say apples are the first commodity, and oranges the second, then the consumption set is X = R + 2 {\displaystyle X=\mathbb {R} _{+}^{2}} and u(0, 0) = 0, u(1, 0) = 1, u(0, 1) = 2, u(1, 1) = 5, u(2, 0) = 2, u(0, 2) = 4 as before. For u to be a utility function on X, however, it must be defined for every package in X, so now the function must be defined for fractional apples and oranges too. One function that would fit these numbers is u ( x apples , x oranges ) = x apples + 2 x oranges + 2 x apples x oranges . 
{\displaystyle u(x_{\text{apples}},x_{\text{oranges}})=x_{\text{apples}}+2x_{\text{oranges}}+2x_{\text{apples}}x_{\text{oranges}}.} Preferences have three main properties: Completeness Assume an individual has two choices, A and B. By ranking the two choices, one and only one of the following relationships is true: an individual strictly prefers A (A > B); an individual strictly prefers B (B > A); an individual is indifferent between A and B (A = B). Either a ≥ b OR b ≥ a (OR both) for all (a,b) Transitivity Individuals' preferences are consistent over bundles. If an individual prefers bundle A to bundle B and bundle B to bundle C, then it can be assumed that the individual prefers bundle A to bundle C. (If a ≥ b and b ≥ c, then a ≥ c for all (a,b,c)). Non-satiation or monotonicity If bundle A contains all the goods that bundle B contains, but also includes more of at least one good than B, then the individual prefers A over B. If, for example, bundle A = {1 apple, 2 oranges} and bundle B = {1 apple, 1 orange}, then A is preferred over B. === Revealed preference === It was recognized that utility could not be measured or observed directly, so instead economists devised a way to infer relative utilities from observed choice. These 'revealed preferences', as Paul Samuelson termed them, show up, for example, in people's willingness to pay: Utility is assumed to be correlative to Desire or Want. 
It has been argued already that desires cannot be measured directly, but only indirectly, by the outward phenomena which they cause: and that in those cases with which economics is mainly concerned the measure is found by the price which a person is willing to pay for the fulfillment or satisfaction of his desire.: 78  == Functions == Utility functions, expressing utility as a function of the amounts of the various goods consumed, are treated as either cardinal or ordinal, depending on whether they are or are not interpreted as providing more information than simply the rank ordering of preferences among bundles of goods, such as information concerning the strength of preferences. === Cardinal === Cardinal utility states that the utilities obtained from consumption can be measured and ranked objectively and are representable by numbers. There are fundamental assumptions of cardinal utility. Economic agents should be able to rank different bundles of goods based on their preferences or utilities and sort different transitions between two bundles of goods. A cardinal utility function can be transformed to another utility function by a positive linear transformation (multiplying by a positive number, and adding some other number); however, both utility functions represent the same preferences. When cardinal utility is assumed, the magnitude of utility differences is treated as an ethically or behaviorally significant quantity. For example, suppose a cup of orange juice has utility of 120 "utils", a cup of tea has a utility of 80 utils, and a cup of water has a utility of 40 utils. With cardinal utility, it can be concluded that the cup of orange juice is better than the cup of tea by the same amount by which the cup of tea is better than the cup of water. This means that if a person has a cup of tea, they would be willing to take any bet with a probability, p, greater than .5 of getting a cup of juice, with a risk of getting a cup of water equal to 1-p. 
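That betting threshold follows directly from the util values: the gamble's expected utility is p · 120 + (1 − p) · 40, which exceeds the sure 80 utils of tea exactly when p > 0.5 (a minimal sketch of the arithmetic, using the util values from the example):

```python
def expected_utility(p, u_win=120, u_lose=40):
    # Probability-weighted average utility of the juice-vs-water gamble
    return p * u_win + (1 - p) * u_lose

u_tea = 80
assert expected_utility(0.5) == u_tea   # indifferent exactly at p = 0.5
assert expected_utility(0.6) > u_tea    # take the bet
assert expected_utility(0.4) < u_tea    # keep the sure cup of tea

# A positive affine rescaling of all utilities (e.g. adding 40 utils,
# i.e. relocating the "zero") leaves the betting threshold unchanged:
shift = 40
assert expected_utility(0.5, 120 + shift, 40 + shift) == u_tea + shift
```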
One cannot conclude, however, that the cup of tea is two-thirds of the goodness of the cup of juice because this conclusion would depend not only on magnitudes of utility differences but also on the "zero" of utility. For example, if the "zero" of utility were located at -40, then a cup of orange juice would be 160 utils more than zero, a cup of tea 120 utils more than zero. Cardinal utility can be considered as the assumption that utility can be measured in the same way as quantifiable characteristics such as height, weight, or temperature. Neoclassical economics has largely retreated from using cardinal utility functions as the basis of economic behavior. A notable exception is in the context of analyzing choice under conditions of risk (see below). Sometimes cardinal utility is used to aggregate utilities across persons, to create a social welfare function. === Ordinal === Instead of assigning actual numbers to different bundles, ordinal utilities are only rankings of the utilities received from different bundles of goods or services. For example, ordinal utility could tell that having two ice creams provides greater utility to an individual than one ice cream, but could not tell exactly how much extra utility the individual receives. Ordinal utility does not require individuals to specify how much extra utility they receive from the preferred bundle of goods or services in comparison to other bundles; they need only indicate which bundles they prefer. When ordinal utilities are used, differences in utils (values assumed by the utility function) are treated as ethically or behaviorally meaningless: the utility index encodes a full behavioral ordering between members of a choice set, but tells nothing about the related strength of preferences. For the above example, it would only be possible to say that juice is preferred to tea to water. Thus, ordinal utility utilizes comparisons, such as "preferred to", "no more", "less than", etc. 
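The contrast between cardinal and ordinal readings can be made concrete with the drink example: an increasing transformation of the utility index, such as squaring, preserves the ordinal ranking while destroying the cardinal differences (a sketch using the util values from above):

```python
u = {"juice": 120, "tea": 80, "water": 40}
u_sq = {k: v ** 2 for k, v in u.items()}  # a monotone (increasing) transformation

# The ordinal ranking is unchanged by the transformation...
rank = lambda d: sorted(d, key=d.get)
assert rank(u) == rank(u_sq) == ["water", "tea", "juice"]

# ...but the equal utility gaps (40 utils each) are not preserved,
# so any cardinal interpretation of the differences is lost
assert u["juice"] - u["tea"] == u["tea"] - u["water"]
assert u_sq["juice"] - u_sq["tea"] != u_sq["tea"] - u_sq["water"]
```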
If a function u ( x ) {\displaystyle u(x)} is ordinal and non-negative, it is equivalent to the function u ( x ) 2 {\displaystyle u(x)^{2}} , because taking the square is an increasing monotone (or monotonic) transformation. This means that the ordinal preference induced by these functions is the same (although they are two different functions). In contrast, if u ( x ) {\displaystyle u(x)} is cardinal, it is not equivalent to u ( x ) 2 {\displaystyle u(x)^{2}} . === Examples === In order to simplify calculations, various alternative assumptions have been made concerning details of human preferences, and these imply various alternative utility functions such as: CES (constant elasticity of substitution). Isoelastic utility Exponential utility Quasilinear utility Homothetic preferences Stone–Geary utility function Gorman polar form Greenwood–Hercowitz–Huffman preferences King–Plosser–Rebelo preferences Hyperbolic absolute risk aversion Most utility functions used for modeling or theory are well-behaved. They are usually monotonic and quasi-concave. However, it is possible for rational preferences not to be representable by a utility function. An example is lexicographic preferences which are not continuous and cannot be represented by a continuous utility function. == Marginal utility == Economists distinguish between total utility and marginal utility. Total utility is the utility of an alternative, an entire consumption bundle or situation in life. The rate of change of utility from changing the quantity of one good consumed is termed the marginal utility of that good. Marginal utility therefore measures the slope of the utility function with respect to the changes of one good. Marginal utility usually decreases with consumption of the good, the idea of "diminishing marginal utility". In calculus notation, the marginal utility of good X is M U x = ∂ U ∂ X {\displaystyle MU_{x}={\frac {\partial U}{\partial X}}} . 
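Diminishing marginal utility can be checked numerically. Taking U(x) = √x as an illustrative concave utility function for a single good (an assumption for the sketch, echoing the square-root form used earlier), a finite-difference approximation of MU = dU/dx falls as consumption rises:

```python
import math

def marginal_utility(u, x, h=1e-6):
    # Central-difference approximation of MU = dU/dx at consumption level x
    return (u(x + h) - u(x - h)) / (2 * h)

u = math.sqrt  # concave, so marginal utility should diminish

mus = [marginal_utility(u, x) for x in (1.0, 4.0, 9.0)]

# Analytically MU = 1/(2*sqrt(x)): 0.5, 0.25, 1/6 — positive but decreasing
assert all(mu > 0 for mu in mus)
assert mus[0] > mus[1] > mus[2]
assert abs(mus[1] - 0.25) < 1e-4
```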
When a good's marginal utility is positive, additional consumption of it increases utility; if zero, the consumer is satiated and indifferent about consuming more; if negative, the consumer would pay to reduce his consumption. === Law of diminishing marginal utility === Rational individuals only consume additional units of a good if doing so increases their total utility, that is, if the marginal utility of the additional unit is positive. However, the law of diminishing marginal utility means an additional unit consumed brings a lower marginal utility than that carried by the previous unit consumed. For example, drinking one bottle of water makes a thirsty person satisfied; as the consumption of water increases, he may begin to feel worse, which causes the marginal utility to decrease to zero or even become negative. Furthermore, diminishing marginal utility is also used in the analysis of progressive taxation, since a given tax is argued to cause a smaller loss of utility at higher income levels. === Marginal rate of substitution (MRS) === Marginal rate of substitution is the absolute value of the slope of the indifference curve, which measures how much an individual is willing to switch from one good to another. Expressed mathematically, M R S = − d x 2 / d x 1 {\displaystyle MRS=-\operatorname {d} \!x_{2}/\operatorname {d} \!x_{1}} keeping U(x1,x2) constant. Thus, MRS is how much of x2 an individual is willing to give up in order to consume an additional unit of x1. MRS is related to marginal utility. The relationship between marginal utility and MRS is: M R S = M U 1 M U 2 {\displaystyle MRS={\frac {MU_{1}}{MU_{2}}}} == Expected utility == Expected utility theory deals with the analysis of choices among risky projects with multiple (possibly multidimensional) outcomes. The St. Petersburg paradox was first proposed by Nicholas Bernoulli in 1713 and solved by Daniel Bernoulli in 1738, although the Swiss mathematician Gabriel Cramer proposed taking the expectation of a square-root utility function of money in an 1728 letter to N. Bernoulli. D. 
Bernoulli argued that the paradox could be resolved if decision-makers displayed risk aversion and argued for a logarithmic cardinal utility function. (Analysis of international survey data during the 21st century has shown that insofar as utility represents happiness, as for utilitarianism, it is indeed proportional to log of income.) The first important use of the expected utility theory was that of John von Neumann and Oskar Morgenstern, who used the assumption of expected utility maximization in their formulation of game theory. The expected utility is found as the probability-weighted average of the utility from each possible outcome: EU = Pr ( z ) ⋅ u ( Value ( z ) ) + Pr ( y ) ⋅ u ( Value ( y ) ) {\displaystyle {\text{EU}}=\Pr(z)\cdot u({\text{Value}}(z))+\Pr(y)\cdot u({\text{Value}}(y))} === Von Neumann–Morgenstern === Von Neumann and Morgenstern addressed situations in which the outcomes of choices are not known with certainty, but have probabilities associated with them. A notation for a lottery is as follows: if options A and B have probability p and 1 − p in the lottery, we write it as a linear combination: L = p A + ( 1 − p ) B {\displaystyle L=pA+(1-p)B} More generally, for a lottery with many possible options: L = ∑ i p i A i , {\displaystyle L=\sum _{i}p_{i}A_{i},} where ∑ i p i = 1 {\displaystyle \sum _{i}p_{i}=1} . By making some reasonable assumptions about the way choices behave, von Neumann and Morgenstern showed that if an agent can choose between the lotteries, then this agent has a utility function such that the desirability of an arbitrary lottery can be computed as a linear combination of the utilities of its parts, with the weights being their probabilities of occurring. This is termed the expected utility theorem. The required assumptions are four axioms about the properties of the agent's preference relation over 'simple lotteries', which are lotteries with just two options. 
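Daniel Bernoulli's resolution of the St. Petersburg paradox can be illustrated by applying the probability-weighted-average formula above to a truncated version of the lottery (the 60-round cutoff is an arbitrary choice for this sketch): the expected money payoff grows without bound as more rounds are allowed, while the expected logarithmic utility converges to 2 ln 2.

```python
import math

def st_petersburg_eu(utility, rounds=60):
    # Truncated St. Petersburg lottery: the prize 2^k occurs with probability 2^-k
    return sum((0.5 ** k) * utility(2 ** k) for k in range(1, rounds + 1))

# With linear utility (expected money), each round contributes exactly 1 unit,
# so the truncated expectation equals the number of rounds and diverges with it:
assert st_petersburg_eu(lambda m: m) == 60.0

# With Bernoulli's logarithmic utility, the series converges to 2*ln(2):
eu_log = st_petersburg_eu(math.log)
assert abs(eu_log - 2 * math.log(2)) < 1e-9
```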
Writing B ⪯ A {\displaystyle B\preceq A} to mean 'A is weakly preferred to B' ('A is preferred at least as much as B'), the axioms are: completeness: For any two simple lotteries L {\displaystyle L} and M {\displaystyle M} , either L ⪯ M {\displaystyle L\preceq M} or M ⪯ L {\displaystyle M\preceq L} (or both, in which case they are viewed as equally desirable). transitivity: for any three lotteries L , M , N {\displaystyle L,M,N} , if L ⪯ M {\displaystyle L\preceq M} and M ⪯ N {\displaystyle M\preceq N} , then L ⪯ N {\displaystyle L\preceq N} . convexity/continuity (Archimedean property): If L ⪯ M ⪯ N {\displaystyle L\preceq M\preceq N} , then there is a p {\displaystyle p} between 0 and 1 such that the lottery p L + ( 1 − p ) N {\displaystyle pL+(1-p)N} is equally desirable as M {\displaystyle M} . independence: for any three lotteries L , M , N {\displaystyle L,M,N} and any probability p, L ⪯ M {\displaystyle L\preceq M} if and only if p L + ( 1 − p ) N ⪯ p M + ( 1 − p ) N {\displaystyle pL+(1-p)N\preceq pM+(1-p)N} . Intuitively, if the lottery formed by the probabilistic combination of L {\displaystyle L} and N {\displaystyle N} is no more preferable than the lottery formed by the same probabilistic combination of M {\displaystyle M} and N , {\displaystyle N,} then and only then L ⪯ M {\displaystyle L\preceq M} . Axioms 3 and 4 enable us to decide about the relative utilities of two assets or lotteries. In more formal language: A von Neumann–Morgenstern utility function is a function from choices to the real numbers: u : X → R {\displaystyle u\colon X\to \mathbb {R} } which assigns a real number to every outcome in a way that represents the agent's preferences over simple lotteries. 
Using the four assumptions mentioned above, the agent will prefer a lottery L 2 {\displaystyle L_{2}} to a lottery L 1 {\displaystyle L_{1}} if and only if, for the utility function characterizing that agent, the expected utility of L 2 {\displaystyle L_{2}} is greater than the expected utility of L 1 {\displaystyle L_{1}} : L 1 ⪯ L 2 iff u ( L 1 ) ≤ u ( L 2 ) {\displaystyle L_{1}\preceq L_{2}{\text{ iff }}u(L_{1})\leq u(L_{2})} . Of all the axioms, independence is the most often discarded. A variety of generalized expected utility theories have arisen, most of which omit or relax the independence axiom. == Indirect utility == An indirect utility function gives the optimal attainable value of a given utility function, which depends on the prices of the goods and the income or wealth level that the individual possesses. === Money === One use of the indirect utility concept is the notion of the utility of money. The (indirect) utility function for money is a nonlinear function that is bounded and asymmetric about the origin. The utility function is concave in the positive region, representing the phenomenon of diminishing marginal utility. The boundedness represents the fact that beyond a certain amount money ceases being useful at all, as the size of any economy at that time is itself bounded. The asymmetry about the origin represents the fact that gaining and losing money can have radically different implications both for individuals and businesses. The non-linearity of the utility function for money has profound implications in decision-making processes: in situations where outcomes of choices influence utility by gains or losses of money, which are the norm for most business settings, the optimal choice for a given decision depends on the possible outcomes of all other decisions in the same time-period. == Budget constraints == Individuals' consumptions are constrained by their budget allowance. 
The graph of the budget line is a straight, downward-sloping line between the X and Y axes. All the consumption bundles under the budget line allow individuals to consume without using up the whole budget, as the total budget is greater than the total cost of those bundles (Figure 2). Considering only the prices and quantities of two goods in one bundle, a budget constraint can be formulated as p 1 X 1 + p 2 X 2 = Y {\displaystyle p_{1}X_{1}+p_{2}X_{2}=Y} , where p 1 {\displaystyle p_{1}} and p 2 {\displaystyle p_{2}} are the prices of the two goods, and X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} are their quantities. slope = − P ( x ) P ( y ) {\displaystyle {\text{slope}}={\frac {-P(x)}{P(y)}}} === Constrained utility optimisation === Rational consumers wish to maximise their utility. However, as they have budget constraints, a change of price affects the quantity demanded. Two factors can explain this: Purchasing power. Individuals obtain greater purchasing power when the price of a good decreases. The reduction of the price leaves individuals with money to spare, so they can afford to buy other products. Substitution effect. If the price of good A decreases, then the good becomes relatively cheaper with respect to its substitutes. Thus, individuals would consume more of good A, as their utility would increase by doing so. == Discussion and criticism == Cambridge economist Joan Robinson famously criticized utility for being a circular concept: "Utility is the quality in commodities that makes individuals want to buy them, and the fact that individuals want to buy commodities shows that they have utility".: 48  Robinson also stated that because the theory assumes that preferences are fixed, utility is not a testable assumption. 
This is so because if we observe changes of peoples' behavior in relation to a change in prices or a change in budget constraint we can never be sure to what extent the change in behavior was due to the change of price or budget constraint and how much was due to a change of preference. This criticism is similar to that of the philosopher Hans Albert who argued that the ceteris paribus (all else equal) conditions on which the marginalist theory of demand rested rendered the theory itself a meaningless tautology, incapable of being tested experimentally. In essence, a curve of demand and supply (a theoretical line of quantity of a product which would have been offered or requested for given price) is purely ontological and could never have been demonstrated empirically. Other questions of what arguments ought to be included in a utility function are difficult to answer, yet seem necessary to understanding utility. Whether people gain utility from coherence of wants, beliefs or a sense of duty is important to understanding their behavior in the utility organon. Likewise, choosing between alternatives is itself a process of determining what to consider as alternatives, a question of choice within uncertainty. An evolutionary psychology theory is that utility may be better considered as due to preferences that maximized evolutionary fitness in the ancestral environment but not necessarily in the current one. == Measuring utility functions == There are many empirical works trying to estimate the form of utility functions of agents with respect to money. == See also == Happiness economics Law of demand Marginal utility Utility maximization problem - a problem faced by consumers in a market: how to maximize their utility given their budget. Utility assessment - processes for estimating the utility functions of human subjects. == References == == Further reading == Anand, Paul (1993). Foundations of Rational Choice Under Risk. Oxford University Press. ISBN 0-19-823303-5. 
Fishburn, Peter C. (1970). Utility Theory for Decision Making. Huntington, NY: Robert E. Krieger. ISBN 0-88275-736-9. Georgescu-Roegen, Nicholas (August 1936). "The Pure Theory of Consumer's Behavior". Quarterly Journal of Economics. 50 (4): 545–593. doi:10.2307/1891094. JSTOR 1891094. Gilboa, Itzhak (2009). Theory of Decision under Uncertainty. Cambridge University Press. ISBN 978-0-521-74123-1. Kreps, David M. (1988). Notes on the Theory of Choice. Boulder, CO: West-view Press. ISBN 0-8133-7553-3. Nash, John F. (1950). "The Bargaining Problem". Econometrica. 18 (2): 155–162. doi:10.2307/1907266. JSTOR 1907266. S2CID 153422092. Neumann, John von & Morgenstern, Oskar (1944). Theory of Games and Economic Behavior. Princeton University Press. Nicholson, Walter (1978). Micro-economic Theory (Second ed.). Hinsdale: Dryden Press. pp. 53–87. ISBN 0-03-020831-9. Plous, S. (1993). The Psychology of Judgement and Decision Making. McGraw-Hill. ISBN 0-07-050477-6. Viner, Jacob (1925). "The Utility Concept in Value Theory and Its Critics". Journal of Political Economy. 33 (4): 369–387. Viner, Jacob (1925). "The Utility Concept in Value Theory and Its Critics". Journal of Political Economy. 33 (6): 638–659. == External links == Definition of Utility by Investopedia Anatomy of Cobb-Douglas Type Utility Functions in 3D Anatomy of CES Type Utility Functions in 3D Simpler Definition with example from Investopedia Maximization of Originality - redefinition of classic utility Utility Model of Marketing - Form, Place Archived 12 November 2015 at the Wayback Machine, Time Archived 30 October 2015 at the Wayback Machine, Possession and perhaps also Task
Wikipedia/Utility_theory
The hedgehog's dilemma, or sometimes the porcupine dilemma, is a metaphor about the challenges of human intimacy. It describes a situation in which a group of hedgehogs seek to move close to one another to share heat during cold weather. They must remain apart, however, as they cannot avoid hurting one another with their sharp spines. Though they all share the intention of a close reciprocal relationship, this cannot occur, for unavoidable reasons. Arthur Schopenhauer conceived this metaphor for the state of the individual in society. Despite goodwill, humans cannot be intimate without the risk of mutual harm, leading to cautious and tentative relationships. It is wise to be guarded with others for fear of getting hurt and also fear of causing hurt. The dilemma may encourage self-imposed isolation. == Schopenhauer == The concept originates in the following parable from the German philosopher Schopenhauer: One cold winter's day, a number of porcupines huddled together quite closely in order through their mutual warmth to prevent themselves from being frozen. But they soon felt the effect of their quills on one another, which made them again move apart. Now when the need for warmth once more brought them together, the drawback of the quills was repeated so that they were tossed between two evils, until they had discovered the proper distance from which they could best tolerate one another. Thus the need for society which springs from the emptiness and monotony of men's lives, drives them together; but their many unpleasant and repulsive qualities and insufferable drawbacks once more drive them apart. The mean distance which they finally discover, and which enables them to endure being together, is politeness and good manners. Whoever does not keep to this, is told in England to ‘keep his distance’. By virtue thereof, it is true that the need for mutual warmth will be only imperfectly satisfied, but on the other hand, the prick of the quills will not be felt. 
Yet whoever has a great deal of internal warmth of his own will prefer to keep away from society in order to avoid giving or receiving trouble or annoyance. — Schopenhauer (1851) Parerga and Paralipomena == Freud == It entered the realm of psychology after the tale was discovered and adopted by Sigmund Freud. Schopenhauer's tale was quoted by Freud in a footnote to his 1921 work Group Psychology and the Analysis of the Ego (German: Massenpsychologie und Ich-Analyse). Freud stated, of his trip to the United States in 1909: "I am going to the USA to catch sight of a wild porcupine and to give some lectures." == Social psychological research == The dilemma has received empirical attention within the contemporary psychological sciences. Jon Maner and his colleagues (Nathan DeWall, Roy Baumeister, and Mark Schaller) referred to Schopenhauer's "porcupine problem" when interpreting results from experiments examining how people respond to ostracism. The study showed that participants who experienced social exclusion were more likely to seek out new social bonds with others. == See also == List of paradoxes Social isolation == References ==
Wikipedia/Hedgehog's_dilemma
Mutualism is an anarchist school of thought and economic theory that advocates for workers' control of the means of production, a free market made up of individual artisans, sole proprietorships and workers' cooperatives, and occupation and use property rights. As proponents of the labour theory of value and labour theory of property, mutualists oppose all forms of economic rent, profit and non-nominal interest, which they see as relying on the exploitation of labour. Mutualists seek to construct an economy without capital accumulation or concentration of land ownership. They also encourage the establishment of workers' self-management, which they propose could be supported through the issuance of mutual credit by mutual banks, with the aim of creating a federal society. Mutualism has its roots in the utopian socialism of Robert Owen and Charles Fourier. It first developed a practical expression in Josiah Warren's community experiments in the United States, which he established according to the principles of equitable commerce based on a system of labor notes. Mutualism was first formulated into a comprehensive economic theory by the French anarchist Pierre-Joseph Proudhon, who proposed the abolition of unequal exchange and the establishment of a new economic system based on reciprocity. In order to establish such a system, he proposed the creation of a "People's Bank" that could issue mutual credit to workers and eventually replace the state; although his own attempts to establish such a system were foiled by the 1851 French coup d'état. After Proudhon's death, mutualism lost its popularity within the European anarchist movement and was eventually redefined in counterposition to anarchist communism. Proudhon's thought was instead taken up by American individualists, who came to be closely identified with mutualist economics. Joshua K. 
Ingalls and William Batchelder Greene developed mutualist theories of value, property and mutual credit, while Benjamin Tucker elaborated a mutualist critique of capitalism. The American mutualist Dyer Lum attempted to bridge the divide between communist and individualist anarchists, but many of the latter camp eventually split from the anarchist movement and embraced right-wing politics. Mutualist ideas were later implemented in local exchange trading systems and alternative currency models, but the tendency itself fell out of the popular consciousness during the 20th century. The advent of the internet generated a revived interest in mutualist economics, particularly after the publication of new works on the subject by American libertarian theorist Kevin Carson. == History == === Origins === According to Peter Kropotkin, the origins of mutualism lay in the events of the French Revolution, particularly in the federalist and directly democratic structure of the Paris Commune. The foundations of mutualist economic theory lay in the radicalism of English socialist Thomas Spence. Drawing from the Bible, as well as the liberal works of John Locke and James Harrington, Spence called for the abolition of private property, and for the workers' control of production through workers' cooperatives. The term "mutualism" was first used during the 1820s; originally defined synonymously with terms such as "mutual aid, reciprocity and fair play". In 1822, French utopian philosopher Charles Fourier used the term "convergent compound mutualism" (French: mutualisme composé convergent), in order to describe a form of progressive education that adapted the Monitorial System of pupil-teachers. === Owenism and Josiah Warren === Thomas Spence's mutualist platform was taken up by Robert Owen, with Owenism becoming a pioneering force in the history of the cooperative movement. 
By 1826, the term "mutualism" was being used by members of Owen's utopian community of New Harmony, Indiana, in order to propose a more decentralised and libertarian project. The following year, Josiah Warren left New Harmony and established the Cincinnati Time Store, which was based on his ideas of "equitable commerce". This system set cost as the limit of price and established exchange based on labor notes. In keeping with his mutualist principles, Warren refused to substantially grow his business, maintaining the equal exchange between shopkeeper and customer. He hoped the shop would eventually be the nucleus for the establishment of a "mutualist village". After running it for three years, in May 1830, Warren decided to liquidate the Time Store in an attempt to pursue his plan of building a mutualist village in Ohio. This caught the attention of Robert Owen, who proposed that they instead build a mutualist community in New York, but this project never came to fruition. Warren decided to delay the establishment of his mutualist colony until 1833. He spent the preceding years sketching out the voluntarist structure of the planned community, together with other prospective members. They established their anarchist community in Tuscarawas County, Ohio, but it was short-lived, as an influenza epidemic forced them to abandon the village by 1835. Warren returned to New Harmony, where he established another Time Store, but he received little support from other community members and shut it down in March 1844. Nevertheless, his mutualist ideas were still taken up by other radicals of the period, with George Henry Evans and his Land Reform Association championing mutualist arguments against the concentration of land ownership. Warren returned to Ohio, where he founded the mutualist community of Utopia in 1847, taking over from an earlier Fourierist community that had established the settlement. 
The community adopted labor notes as its currency and committed itself to an economy of equitable commerce, with cost as the limit of price. Utopia grew to about one hundred residents and established various enterprises, demonstrating to Warren the potential of his economic principles of decentralisation. By the 1850s, Warren himself had moved on from Ohio and established the Utopian Community of Modern Times on Long Island, New York, but his mutualist experiments would decline with the outbreak of the American Civil War. Warren's mutualist communities of Utopia and Modern Times ultimately lasted for twenty years, eventually evolving into traditional villages with only minor mutualist tendencies. === Formulation by Proudhon === The term "mutualist" was adopted in 1828 by the canuts of Lyon, who went on to lead a series of revolts throughout the 1830s. At this same time, the young French activist Pierre-Joseph Proudhon was first being drawn towards Fourier's utopian socialism; by 1843, Proudhon had joined the Mutualists in Lyon. The adoption of the term "mutualist" by Proudhon and his development of anarchism would result in a shift of its meaning and understanding. Proudhon held "mutuality" (or "exchange in kind") to be a core part of his economic theory. In his System of Economic Contradictions (1846), Proudhon described mutuality as a synthesis of "private property and collective ownership". The mutualist principle of reciprocity provided the basis for all its proposed institutions, including mutual aid, mutual credit and mutual insurance. Proudhon considered the perfect expression of mutuality to be a synallagmatic contract of equal exchange between equal individuals. Proudhon considered the unequal exchange to be at the root of the exploitation of labour and believed that a truly free market ought to be built on reciprocity, without coercion. 
He concluded that under such conditions, commerce would progress towards equal exchange, where the value of a product or service reflects only the cost of its production. He envisioned the eventual "withering away of the state" and its replacement with a system of economic contracts between individuals and collectives. Proudhon claimed that private property constituted a form of robbery, as proprietors used their title to extract labour-value from others, or as he characterised it: "the proprietor reaps where [they] did not sow". To Proudhon, wage labour represented the enclosure of the value of collective production, with the capitalist collecting economic rent from their workers in the form of profit. Proudhon's anti-capitalist economic theories stood in sharp contrast to liberal economists of the time, such as Frédéric Bastiat and Henry Charles Carey, who argued in defense of landlords and capitalists against the claims of workers. Proudhon claimed that landlords and capitalists contributed nothing to production, and that their only claim to the contrary was that they did not impede access to the means of production. In the wake of the French Revolution of 1848, Proudhon began elaborating his proposal for a "Bank of the People". He thought such a bank could guarantee mutual credit to all workers, enabling them to bring the product of their labour under the collective ownership of all that participated in production. After he was elected to the Constituent Assembly, Proudhon lobbied for the transformation of the Bank of France into such a "Bank of the People"; he proposed that a 2-3% interest rate would be enough to cover all public expenses and would allow for the abolition of taxation. Proudhon's "Bank of the People" was officially incorporated but was never able to establish itself in practice, as Proudhon was arrested and imprisoned following the rise to power of Louis Napoleon Bonaparte. 
In his final attempt at writing a comprehensive mutualist programme, The Political Capacity of the Working Classes (1865), Proudhon proposed the establishment of a "mutualist system" of workers' self-management. === Anarchism and the IWA === During the early 1860s, the French anarchist movement first began to develop an organised expression, establishing trade unions and mutual credit systems inspired by Proudhon's mutualist theories. Towards the end of his life, Proudhon himself had grown more cautious of using the term "anarchist" and instead referred to himself as a "federalist"; his followers also eschewed the "anarchist" label, instead calling themselves "mutualists" after his principle of mutual exchange. After Proudhon's death in 1865, his mutualist followers helped establish the International Workingmen's Association (IWA). In the early years of the IWA, the organisation was largely mutualist, as exemplified by its contractual demands during its 1866 Geneva Congress and 1867 Lausanne Congress. But in the following years, French mutualists began to lose control over the organisation to Russian and German communists based in Belgium, and mutualism was gradually overshadowed by other anarchist schools of thought. By the end of the 1860s, the French section of the IWA was already shifting away from mutualism, with Eugène Varlin and Benoît Malon convincing it to adopt Mikhail Bakunin's platform of collectivism; Bakunin himself held his collectivist theories to be "Proudhonism, greatly developed and taken to its ultimate conclusion". Although inspired by Proudhon's arguments for federalism, Bakunin broke from his mutualist economics and argued instead for the common ownership of land. 
By the 1870s, as divisions between the Marxists and the anti-authoritarians in the IWA grew sharper, Proudhonian mutualism gradually lost its remaining influence; although it continued to see minor developments by collectivists such as César De Paepe and Claude Pelletier, as well as in the programme of the Paris Commune of 1871. When the IWA finally split, members of the anti-authoritarian faction slowly adopted "anarchism" as the label for their philosophy. Following Bakunin's death in 1876, the Russian anarchist Peter Kropotkin had taken anarchist philosophy beyond both Proudhon's mutualism and Bakunin's collectivism. He argued instead for anarchist communism, in which resources are distributed "from each according to their ability, to each according to their needs." This quickly became the dominant form of anarchism, and by the 1880s, mutualism was slowly defined in opposition to anarchist communism. Mutualism was redefined as a non-communist form of anarchism, due to its emphasis on reciprocity, reformism and commerce. As a result, different tendencies and interpretations of mutualism started to emerge. === Development by American individualists === Warren's mutualist experiments, which inspired the American individualism of Lysander Spooner and Stephen Pearl Andrews, laid a foundation for the introduction of Proudhonian mutualism to the country. Warren himself was later retroactively labelled a "mutualist" by the individualists of this period, who were inspired by his system of "equitable commerce". American newspaper editor William Henry Channing synthesised Proudhon's mutualism with Fourierism and Christian socialism, conceiving of "the coming era of mutualism" in millenarian terms. Proudhonian mutualism was later discussed in articles by a number of American utopian theorists, including Francis George Shaw, Albert Brisbane and Charles Anderson Dana. ==== Theoretical developments ==== Two of the most important figures of this period were Joshua K. 
Ingalls and William Batchelder Greene, who were inspired by Proudhon's mutualist ideas as elaborated by Dana, synthesising them with American individualist traditions pioneered by Warren. Ingalls was a vocal proponent of the labour theory of value and an advocate of workers receiving the full product of their labor. He also argued against the concentration of land ownership, which he believed to be the principal source of social inequality, and instead called for the institution of occupancy-and-use property rights. Drawing from Esoteric Christianity, Greene presented Proudhonian mutualism as a successor to Christianity, describing it as "the religion of the coming age." Greene proposed the establishment of mutual banks, which would issue loans at a nominal interest rate. He believed these could outcompete the high interest rates charged by commercial banks and lead to wages rising to the "full value of the work done". Greene's model for a free price system, which he argued would trend towards labor-value as economic rents and social privileges were abolished, represented a break from Owen and Warren's "labor for labor" theory of exchange. Greene argued that the fruit of all human labors was a product, not only of individual effort, but also of social circumstances that aided them in that effort. In 1869, Ezra and Angela Heywood established the New England Labor Reform League (NELRL), which published the individualist anarchist magazine The Word and widely distributed the works of American mutualists such as Warren and Greene, who were also members of the organisation. From 1872 to 1876, the NELRL attempted to lobby the Massachusetts General Court to establish a mutual bank, but the effort was ultimately unsuccessful, convincing its members that state legislatures had already undergone regulatory capture by capitalists. 
==== Anti-capitalist critique ==== Inspired by the older mutualists within the NELRL, the young League member Benjamin Tucker quickly rose to prominence as one of the leading figures of individualist anarchism in the United States. Tucker integrated the earlier iterations of mutualism into a single ideological doctrine, adopting Greene's ideas on mutual banking and synthesising Proudhonian mutualism with the American individualist tradition of Warren and Spooner. Through his magazine Liberty, which he established in 1881, he contributed to the development of anarchist political thought. Drawing from the mutualism of Warren and Proudhon, he argued that the exploitation of labour derived from the authority of the state, which collaborated with capitalists in order to extract labour value in the form of "interest, rent and profit". In order to combat coercive practices that allowed the proliferation of wealth and privilege, Tucker proposed the establishment of a "free market of anarchistic socialism", in which all forms of monopoly were abolished. Tucker derided profit as antithetical to free competition and criticised capitalism for "abolishing the free market", arguing that a truly free market was governed only by the cost of production. At the center of his anti-capitalist critique were what he called the "Four Monopolies": that of money, land, tariffs and patents. While Tucker was intensely critical of capitalism, he was uninterested in speculating on the makeup of a future mutualist society. That task was eventually taken up by John Beverley Robinson, who built on Tucker's critique to advocate for cooperative economics and mutual aid. Tucker's followers dedicated themselves to elaborating mutualist projects, with Alfred B. Westrup, Herman Kuehn and Clarence Lee Swartz all writing extensively on the subject of mutual credit. 
==== Divisions and decline ==== By the beginning of the Gilded Age, Proudhonian mutualism was firmly identified with American individualism, while Tucker's followers came to define themselves in opposition to anarchist communism. Although Tucker himself did not make the distinction, anarchists such as Henry Seymour increasingly contrasted mutualism against communism, with mutualism eventually being used to refer to "non-communist anarchism". One of Tucker's disciples, Dyer Lum, attempted to bridge the divide between the American individualists and the growing labour movement, which was developing sympathies for social anarchism. In the 1880s, Lum joined the International Working People's Association (IWPA), in which he developed a laissez-faire analysis of "wage slavery" that proposed a form of occupation and use property rights and a mutual banking system. Over the years, Lum worked to develop ties between different radical tendencies, hoping to create a "pluralistic anarchistic coalition" capable of advancing a social revolution. By the time of the Haymarket affair, Lum had aligned with the IWPA's view of labor unions as a means to both combat the exploitation of labour and to prefigure a future free association of producers. In the 1890s, Lum joined the American Federation of Labor (AFL), within which he introduced workers to mutualist ideas on banking, land reform and cooperativism through his pamphlet The Economics of Anarchy. Lum was instrumental in synthesising mutualist economics with populist politics into a "uniquely American ideology", one which was taken up by the American individualist Voltairine de Cleyre. But factional divides also deepened at this time, as Tucker's individualist group in Boston and immigrant revolutionary socialists in Chicago split into opposing camps, while Lum attempted to mend relations between the two. 
Lum and de Cleyre together developed a perspective of "anarchism without adjectives", in an attempt to overcome the ongoing feud between individualist and communist anarchists. But eventually the split drove many of Tucker's disciples away from the anarchist movement and towards right-wing politics, with some like Clarence Lee Swartz coming to embrace capitalism and setting the groundwork for modern American libertarianism. === Contemporary developments === During the early 20th century, mutualist concepts were developed by the economists Ralph Borsodi and Silvio Gesell, while mutualist ideas were implemented within local exchange trading systems and by models of alternative currency. Gustav Landauer, who became a leading figure in the Bavarian Soviet Republic during the German Revolution of 1918–1919, adopted a mutualist programme of mutual credit, equal exchange and usufruct property rights. Some anarchist theorists of the late 20th century were also inspired by mutualism, including American social critic Paul Goodman, Italian essayist C. George Benello, British social historian Colin Ward, and American philosopher Peter Lamborn Wilson. In order to provide an alternative to the dominant anarchist schools of thought, Larry Gambone attempted to revive the mutualist movement; he established the Voluntary Cooperation Movement, which managed to recruit the British anarchist Jonathan Simcock and the American individualist Joe Peacott, but it was short-lived. By the end of the 20th century, mutualism had largely fallen out of the popular consciousness, only experiencing a revival in interest when the internet improved public access to old texts. American libertarian theorist Kevin Carson was central to the renewed interest in the subject, self-publishing a series of works on mutualism. 
Carson's Studies in Mutualist Political Economy proposed a synthesis of Austrian and Marxian economics, developing a form of "free-market anti-capitalism" based on Tucker's conception of mutualism. Interest in Proudhon's works was also renewed during the 21st century, with many of his texts being published by the Proudhon Library and compiled into the collection Property is Theft. == Theory == The primary aspects of mutualism are free association, free banking, reciprocity in the form of mutual aid, workplace democracy, workers' self-management, gradualism and dual power. Mutualism is often described by its proponents as advocating an anti-capitalist free market. Mutualists argue that most of the economic problems associated with capitalism amount to violations of the cost principle, or, in Josiah Warren's equivalent phrase, "cost the limit of price". The cost principle was inspired by the labour theory of value, which was popularized (although not invented) by Adam Smith in 1776; Proudhon cited Smith as an inspiration. The labor theory of value holds that the actual price of a thing (or the true cost) is the amount of labor undertaken to produce it. In Warren's formulation, cost, meaning the amount of labour required to produce a good or service, should be the limit of price. Anyone who sells goods should charge no more than the cost to himself of acquiring these goods. === Contract and federation === Mutualism holds that producers should exchange their goods at cost-value using contract systems. While Proudhon's early definitions of cost-value were based on fixed assumptions about the value of labour hours, he later redefined cost-value to include other factors such as the labour intensity, the nature of the work involved, etc. He also expanded his notion of contract into a broader notion of federation. 
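Warren's cost principle can be sketched numerically. The following is only an illustration: the goods, labour hours and the one-note-per-hour rate are invented for the example, not taken from the source.

```python
# Hypothetical illustration of Warren's "cost the limit of price" principle:
# a good's price in labour notes equals the labour expended on it, with no
# markup for profit, rent or interest.

def cost_price(labour_hours: float, rate_per_hour: float = 1.0) -> float:
    """Price under the cost principle: labour expended is the limit of price."""
    return labour_hours * rate_per_hour

# Invented goods and the hours of labour each took to produce and handle.
goods = {"sack of corn": 2.0, "pair of shoes": 6.5}

prices = {good: cost_price(hours) for good, hours in goods.items()}
assert prices["sack of corn"] == 2.0   # two hours of labour, two labour notes
assert prices["pair of shoes"] == 6.5  # no surplus accrues to the seller
```

Under this rule the seller is compensated only for the labour of producing or acquiring the goods, which is the exchange Warren practised at the Cincinnati Time Store.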
=== Free association === Mutualists argue that association is only necessary where there is an organic combination of forces. An operation requires specialization and many different workers performing tasks to complete a unified product, i.e. a factory. In this situation, workers are inherently dependent on each other; without association, they are related as subordinate and superior, master and wage-slave. An operation that an individual can perform without the help of specialized workers does not require association. Proudhon argued that peasants do not require societal form and only feigned association for solidarity in abolishing rents, buying clubs, etc. He recognized that their work is inherently sovereign and free. For Proudhon, mutualism involved creating industrial democracy. In this system, workplaces would be "handed over to democratically organised workers' associations. ... We want these associations to be models for agriculture, industry and trade, the pioneering core of that vast federation of companies and societies woven into the common cloth of the democratic social Republic". K. Steven Vincent notes in his in-depth analysis of this aspect of Proudhon's ideas that "Proudhon consistently advanced a program of industrial democracy which would return control and direction of the economy to the workers". For Proudhon, "strong workers' associations ... would enable the workers to determine jointly by election how the enterprise was to be directed and operated on a day-to-day basis". === Mutual credit === Mutualists support mutual credit and argue that free banking should be taken back by the people to establish systems of free credit. They contend that banks have a monopoly on credit, just as capitalists have a monopoly on the means of production and landlords have a land monopoly. Banks create money by lending out deposits that do not belong to them and then charging interest on the difference. 
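The mutual-credit alternative can be pictured as a zero-sum ledger in which credit is created at the moment of exchange and no interest accrues. The sketch below is a minimal illustration with invented members and amounts, loosely in the spirit of the local exchange trading systems mentioned earlier:

```python
from collections import defaultdict

class MutualCreditLedger:
    """A toy mutual-credit ledger: members start at zero, each trade debits
    the buyer and credits the seller, and balances always net to zero."""

    def __init__(self) -> None:
        self.balances = defaultdict(int)

    def exchange(self, buyer: str, seller: str, amount: int) -> None:
        self.balances[buyer] -= amount   # credit is issued to the buyer...
        self.balances[seller] += amount  # ...and owed back to the community

ledger = MutualCreditLedger()
ledger.exchange("weaver", "baker", 5)  # weaver buys bread worth 5 units
ledger.exchange("baker", "weaver", 3)  # baker buys cloth worth 3 units

assert ledger.balances["weaver"] == -2
assert ledger.balances["baker"] == 2
assert sum(ledger.balances.values()) == 0  # no net money is ever created
```

Because every credit is matched by an equal debit, the system needs no pre-existing deposits and collects no interest, which is the contrast mutualists draw with commercial banking.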
Mutualists argue that by establishing a democratically run mutual savings bank or credit union, it would be possible to issue free credit so that money could be created for the participants' benefit rather than the bankers' benefit. Individualist anarchists noted for their detailed views on mutualist banking include Pierre-Joseph Proudhon and William Batchelder Greene. === Property === Pierre-Joseph Proudhon was an anarchist and socialist philosopher who articulated thoughts on the nature of property. He claimed that "property is theft", "property is liberty", and "property is impossible". According to Colin Ward, Proudhon did not see a contradiction between these slogans. This was because Proudhon distinguished between what he considered to be two distinct forms of property often bound up in a single label. To the mutualist, this is the distinction between property created by coercion and property created by labour. Property is theft "when it is related to a landowner or capitalist whose ownership is derived from conquest or exploitation and [is] only maintained through the state, property laws, police, and an army". Property is freedom for "the peasant or artisan family [who have] a natural right to a home, land [they may] cultivate, ... to tools of a trade" and the fruits of that cultivation—but not to ownership or control of the lands and lives of others. The former is considered illegitimate property, while the latter is legitimate property. In What Is Mutualism?, Clarence Lee Swartz wrote: It is, therefore, one of the purposes of Mutualists, not only to awaken in the people the appreciation of and desire for freedom, but also to arouse in them a determination to abolish the legal restrictions now placed upon non-invasive human activities and to institute, through purely voluntary associations, such measures as will liberate all of us from the exactions of privilege and the power of concentrated capital. 
Swartz also argued that mutualism differs from anarcho-communism and other collectivist philosophies by its support of private property, writing: "One of the tests of any reform movement with regard to personal liberty is this: Will the movement prohibit or abolish private property? If it does, it is an enemy of liberty. For one of the most important criteria of freedom is the right to private property in the products of one's labor. State Socialists, Communists, Syndicalists and Communist-Anarchists deny private property." However, Proudhon warned that a society with private property would lead to statist relations between people. Unlike capitalist private-property supporters, Proudhon stressed equality. He thought that all workers should own property and have access to capital, stressing that in every cooperative, "every worker employed in the association [must have] an undivided share in the property of the company". === Usufruct === Mutualists believe that land should not be a commodity to be bought and sold, advocating for conditional titles to land based on occupancy and use norms. Mutualists disagree over whether an individual has a legitimate claim to land ownership if he is not currently using it but has already incorporated his labour into it. All mutualists agree that everything produced by human labour and machines can be owned as personal property. Mutualists reject the idea of non-personal property and non-proviso Lockean sticky property. Any property obtained through violence, bought with money gained through exploitation, or bought with money that was gained by violating usufruct property norms is considered illegitimate. == Criticism == In Europe, a contemporary critic of Proudhon was the early libertarian communist Joseph Déjacque. In opposition to Proudhon, he argued that "it is not the product of his or her labor that the worker has a right to, but to the satisfaction of his or her needs, whatever may be their nature". 
One area of disagreement between anarcho-communists and mutualists stems from Proudhon's alleged advocacy of labour vouchers to compensate individuals for their labour, and of markets or artificial markets for goods and services. However, the persistent claim that Proudhon proposed a labour currency has been challenged as a misunderstanding or misrepresentation. Like other anarcho-communists, Peter Kropotkin advocated the abolition of labour remuneration and questioned, "how can this new form of wages, the labor note, be sanctioned by those who admit that houses, fields, mills are no longer private property, that they belong to the commune or the nation?" According to George Woodcock, Kropotkin believed that a wage system, whether "administered by Banks of the People or by workers' associations through labor cheques", is a form of compulsion. Collectivist anarchist Mikhail Bakunin was an adamant critic of Proudhonian mutualism as well, stating: "How ridiculous are the ideas of the individualists of the Jean Jacques Rousseau school and of the Proudhonian mutualists who conceive society as the result of the free contract of individuals absolutely independent of one another and entering into mutual relations only because of the convention drawn up among men. As if these men had dropped from the skies, bringing with them speech, will, original thought, and as if they were alien to anything of the earth, that is, anything having social origin". == See also == == References == == Bibliography == == Further reading ==
Wikipedia/Mutualism_(economic_theory)
In decision theory, Savage's subjective expected utility model (also known as Savage's framework, Savage's axioms, or Savage's representation theorem) is a formalization of subjective expected utility (SEU) developed by Leonard J. Savage in his 1954 book The Foundations of Statistics, based on previous work by Ramsey, von Neumann and de Finetti. Savage's model concerns deriving a subjective probability distribution and a utility function such that an agent's choice under uncertainty can be represented via expected-utility maximization. His contributions to the theory of SEU consist of formalizing a framework under which such a problem is well-posed, and deriving conditions for its positive solution. == Primitives and problem == Savage's framework posits the following primitives to represent an agent's choice under uncertainty: A set of states of the world Ω {\displaystyle \Omega } , of which only one ω ∈ Ω {\displaystyle \omega \in \Omega } is true. The agent does not know the true ω {\displaystyle \omega } , so Ω {\displaystyle \Omega } represents something about which the agent is uncertain. A set of consequences X {\displaystyle X} : consequences are the objects from which the agent derives utility. A set of acts F {\displaystyle F} : acts are functions f : Ω → X {\displaystyle f:\Omega \rightarrow X} which map unknown states of the world ω ∈ Ω {\displaystyle \omega \in \Omega } to tangible consequences x ∈ X {\displaystyle x\in X} . A preference relation ≿ {\displaystyle \succsim } over acts in F {\displaystyle F} : we write f ≿ g {\displaystyle f\succsim g} to represent the scenario where, when only able to choose between f , g ∈ F {\displaystyle f,g\in F} , the agent (weakly) prefers to choose act f {\displaystyle f} . The strict preference f ≻ g {\displaystyle f\succ g} means that f ≿ g {\displaystyle f\succsim g} but it does not hold that g ≿ f {\displaystyle g\succsim f} . 
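As a concrete sketch of these primitives (the states, consequences, probabilities and utilities below are all invented for illustration), a finite state space lets acts be written as plain mappings, and the expected-utility comparison the model aims to justify becomes a one-line computation:

```python
from math import isclose

# Invented finite environment: two states of the world, three consequences.
p = {"rain": 0.3, "sun": 0.7}                    # subjective probabilities
u = {"dry": 1.0, "ice_cream": 3.0, "wet": -2.0}  # utility over consequences

# Acts map states to consequences.
f = {"rain": "dry", "sun": "ice_cream"}  # carry an umbrella
g = {"rain": "wet", "sun": "ice_cream"}  # leave it at home

def expected_utility(act: dict, p: dict, u: dict) -> float:
    """E_{w~p}[u(act(w))] over a finite state space."""
    return sum(p[w] * u[act[w]] for w in p)

assert isclose(expected_utility(f, p, u), 2.4)  # 0.3*1 + 0.7*3
assert isclose(expected_utility(g, p, u), 1.5)  # 0.3*(-2) + 0.7*3
assert expected_utility(f, p, u) > expected_utility(g, p, u)  # so f ≻ g
```

Savage's question runs in the opposite direction: given only the preference data (here, that f is chosen over g), when do some p and u of this form exist that rationalize all such choices?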
The model thus deals with conditions over the primitives ( Ω , X , F , ≿ ) {\displaystyle (\Omega ,X,F,\succsim )} , in particular over preferences ≿ {\displaystyle \succsim } , such that one can represent the agent's preferences via expected-utility with respect to some subjective probability over the states Ω {\displaystyle \Omega } : i.e., there exists a subjective probability distribution p ∈ Δ ( Ω ) {\displaystyle p\in \Delta (\Omega )} and a utility function u : X → R {\displaystyle u:X\rightarrow \mathbb {R} } such that f ≿ g ⟺ E ω ∼ p ⁡ [ u ( f ( ω ) ) ] ≥ E ω ∼ p ⁡ [ u ( g ( ω ) ) ] , {\displaystyle f\succsim g\iff \mathop {\mathbb {E} } _{\omega \sim p}[u(f(\omega ))]\geq \mathop {\mathbb {E} } _{\omega \sim p}[u(g(\omega ))],} where E ω ∼ p ⁡ [ u ( f ( ω ) ) ] := ∫ Ω u ( f ( ω ) ) d p ( ω ) {\displaystyle \mathop {\mathbb {E} } _{\omega \sim p}[u(f(\omega ))]:=\int _{\Omega }u(f(\omega )){\text{d}}p(\omega )} . The idea of the problem is to find conditions under which the agent can be thought of as choosing among acts f ∈ F {\displaystyle f\in F} as if he considered only 1) his subjective probability of each state ω ∈ Ω {\displaystyle \omega \in \Omega } and 2) the utility he derives from the consequence f ( ω ) {\displaystyle f(\omega )} obtained in each state. == Axioms == Savage posits the following axioms regarding ≿ {\displaystyle \succsim } : P1 (Preference relation): the relation ≿ {\displaystyle \succsim } is complete (for all f , g ∈ F {\displaystyle f,g\in F} , it holds that f ≿ g {\displaystyle f\succsim g} or g ≿ f {\displaystyle g\succsim f} ) and transitive. P2 (Sure-thing Principle): for any acts f , g ∈ F {\displaystyle f,g\in F} , let f E g {\displaystyle f_{E}g} be the act that gives consequence f ( ω ) {\displaystyle f(\omega )} if ω ∈ E {\displaystyle \omega \in E} and g ( ω ) {\displaystyle g(\omega )} if ω ∉ E {\displaystyle \omega \notin E} . 
Then for any event E ⊂ Ω {\displaystyle E\subset \Omega } and any acts f , g , h , h ′ ∈ F {\displaystyle f,g,h,h'\in F} , the following holds: f E h ≿ g E h ⟹ f E h ′ ≿ g E h ′ . {\displaystyle f_{E}h\succsim g_{E}h\implies f_{E}h'\succsim g_{E}h'.} In words: if you prefer act f {\displaystyle f} to act g {\displaystyle g} whether the event E {\displaystyle E} happens or not, then the consequence when E {\displaystyle E} does not happen does not matter. An event E ⊂ Ω {\displaystyle E\subset \Omega } is nonnull if the agent has preferences over consequences when E {\displaystyle E} happens: i.e., there exist f , g , h ∈ F {\displaystyle f,g,h\in F} such that f E h ≻ g E h {\displaystyle f_{E}h\succ g_{E}h} . P3 (Monotonicity in consequences): let f ≡ x {\displaystyle f\equiv x} and g ≡ y {\displaystyle g\equiv y} be constant acts. Then f ≿ g {\displaystyle f\succsim g} if and only if f E h ≿ g E h {\displaystyle f_{E}h\succsim g_{E}h} for all nonnull events E {\displaystyle E} . P4 (Independence of beliefs from tastes): for all events E , E ′ ⊂ Ω {\displaystyle E,E'\subset \Omega } and constant acts f ≡ x {\displaystyle f\equiv x} , g ≡ y {\displaystyle g\equiv y} , f ′ ≡ x ′ {\displaystyle f'\equiv x'} , g ′ ≡ y ′ {\displaystyle g'\equiv y'} such that f ≻ g {\displaystyle f\succ g} and f ′ ≻ g ′ {\displaystyle f'\succ g'} , it holds that f E g ≿ f E ′ g ⟺ f E ′ g ′ ≿ f E ′ ′ g ′ {\displaystyle f_{E}g\succsim f_{E'}g\iff f'_{E}g'\succsim f'_{E'}g'} . P5 (Non-triviality): there exist acts f , f ′ ∈ F {\displaystyle f,f'\in F} such that f ≻ f ′ {\displaystyle f\succ f'} . P6 (Continuity in events): For all acts f , g , h ∈ F {\displaystyle f,g,h\in F} such that f ≻ g {\displaystyle f\succ g} , there is a finite partition ( E i ) i = 1 n {\displaystyle (E_{i})_{i=1}^{n}} of Ω {\displaystyle \Omega } such that f ≻ g E i h {\displaystyle f\succ g_{E_{i}}h} and h E i f ≻ g {\displaystyle h_{E_{i}}f\succ g} for all i ≤ n {\displaystyle i\leq n} . 
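The composite act f E g used in the Sure-thing Principle is easy to make concrete. The sketch below uses a hypothetical four-state universe, a uniform subjective probability, and an identity utility; it checks one instance of P2 for an expected-utility agent (which any SEU representation must satisfy):

```python
# Sketch of the composite act f_E g: it agrees with f on the event E and
# with g outside E. For an expected-utility agent, swapping the shared
# "outside E" part h for h' cannot reverse the preference (axiom P2).
# States, probabilities, and payoffs are hypothetical.

states = [0, 1, 2, 3]
E = {0, 1}                                  # an event (subset of states)
p = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}    # assumed subjective probability

def composite(f, g, event):
    """Return the act f_E g: f's consequence on the event, g's elsewhere."""
    return {w: (f[w] if w in event else g[w]) for w in f}

def eu(act, u=lambda x: x):                 # identity utility for this sketch
    return sum(p[w] * u(act[w]) for w in p)

f  = {0: 5, 1: 5, 2: 0, 3: 0}
g  = {0: 2, 1: 2, 2: 0, 3: 0}
h  = {w: 1 for w in states}
h2 = {w: 9 for w in states}

# f_E h >= g_E h implies f_E h' >= g_E h' for an EU maximizer:
assert eu(composite(f, h, E)) >= eu(composite(g, h, E))
assert eu(composite(f, h2, E)) >= eu(composite(g, h2, E))
```

One instance is of course not a proof; in the SEU representation the implication holds because the common part off E contributes the same expected utility to both sides.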
The final axiom is more technical, and of importance only when X {\displaystyle X} is infinite. For any E ⊂ Ω {\displaystyle E\subset \Omega } , let ≿ E {\displaystyle \succsim _{E}} be the restriction of ≿ {\displaystyle \succsim } to E {\displaystyle E} . For any act f ∈ F {\displaystyle f\in F} and state ω ∈ Ω {\displaystyle \omega \in \Omega } , let f ω ≡ f ( ω ) {\displaystyle f_{\omega }\equiv f(\omega )} be the constant act with value f ( ω ) {\displaystyle f(\omega )} . P7: For all acts f , g ∈ F {\displaystyle f,g\in F} and events E ⊂ Ω {\displaystyle E\subset \Omega } , we have f ≿ E g ω ∀ ω ∈ E ⟹ f ≿ E g {\displaystyle f\succsim _{E}g_{\omega }{\text{ }}\forall \omega \in E\implies f\succsim _{E}g} , f ω ≿ E g ∀ ω ∈ E ⟹ f ≿ E g {\displaystyle f_{\omega }\succsim _{E}g{\text{ }}\forall \omega \in E\implies f\succsim _{E}g} . == Savage's representation theorem == Theorem: Given an environment ( Ω , X , F , ≿ ) {\displaystyle (\Omega ,X,F,\succsim )} as defined above with X {\displaystyle X} finite, the following are equivalent: 1) ≿ {\displaystyle \succsim } satisfies axioms P1-P6. 2) there exists a non-atomic, finitely additive probability measure p ∈ Δ ( Ω ) {\displaystyle p\in \Delta (\Omega )} defined on 2 Ω {\displaystyle 2^{\Omega }} and a nonconstant function u : X → R {\displaystyle u:X\rightarrow \mathbb {R} } such that, for all f , g ∈ F {\displaystyle f,g\in F} , f ≿ g ⟺ E ω ∼ p ⁡ [ u ( f ( ω ) ) ] ≥ E ω ∼ p ⁡ [ u ( g ( ω ) ) ] . {\displaystyle f\succsim g\iff \mathop {\mathbb {E} } _{\omega \sim p}[u(f(\omega ))]\geq \mathop {\mathbb {E} } _{\omega \sim p}[u(g(\omega ))].} For infinite X {\displaystyle X} , one needs axiom P7. Furthermore, in both cases, the probability measure p {\displaystyle p} is unique and the function u {\displaystyle u} is unique up to positive linear transformations. == See also == Anscombe-Aumann subjective expected utility model von Neumann-Morgenstern utility theorem == Notes == == References ==
Wikipedia/Savage's_subjective_expected_utility_model
Causal decision theory (CDT) is a school of thought within decision theory which states that, when a rational agent is confronted with a set of possible actions, one should select the action which causes the best outcome in expectation. CDT contrasts with evidential decision theory (EDT), which recommends the action which would be indicative of the best outcome if one received the "news" that it had been taken.: 7  == Informal description == Informally, causal decision theory recommends the agent to make the decision with the best expected causal consequences. For example: if eating an apple will cause you to be happy and eating an orange will cause you to be sad then you would be rational to eat the apple. One complication is the notion of expected causal consequences. Imagine that eating a good apple will cause you to be happy and eating a bad apple will cause you to be sad but you aren't sure if the apple is good or bad. In this case you don't know the causal effects of eating the apple. Instead, then, you work from the expected causal effects, where these will depend on three things: how likely you think the apple is to be good or bad; how happy eating a good apple makes you; and how sad eating a bad apple makes you. In informal terms, causal decision theory advises the agent to make the decision with the best expected causal effects. == Formal description == In a 1981 article, Allan Gibbard and William Harper explained causal decision theory as maximization of the expected utility U {\displaystyle U} of an action A {\displaystyle A} "calculated from probabilities of counterfactuals": U ( A ) = ∑ j P ( A > O j ) D ( O j ) , {\displaystyle U(A)=\sum \limits _{j}P(A>O_{j})D(O_{j}),} where D ( O j ) {\displaystyle D(O_{j})} is the desirability of outcome O j {\displaystyle O_{j}} and P ( A > O j ) {\displaystyle P(A>O_{j})} is the counterfactual probability that, if A {\displaystyle A} were done, then O j {\displaystyle O_{j}} would hold. 
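The Gibbard–Harper formula U(A) = Σ_j P(A > O_j) D(O_j) is a plain weighted sum, and can be evaluated directly. The counterfactual probabilities and desirabilities below are made-up numbers for the apple/orange example, not values from the literature:

```python
# Causal expected utility in the Gibbard–Harper form:
#   U(A) = sum_j P(A > O_j) * D(O_j),
# where P(A > O_j) is the counterfactual probability that outcome O_j
# would hold if A were done. All numbers here are illustrative.

def causal_utility(counterfactual_probs, desirability):
    """counterfactual_probs[o] = P(A > o); desirability[o] = D(o)."""
    return sum(counterfactual_probs[o] * desirability[o]
               for o in counterfactual_probs)

desirability = {"happy": 1.0, "sad": -1.0}

eat_apple  = {"happy": 0.9, "sad": 0.1}   # P(eat_apple > outcome)
eat_orange = {"happy": 0.2, "sad": 0.8}   # P(eat_orange > outcome)

print(causal_utility(eat_apple, desirability))   # 0.9 - 0.1 = 0.8
print(causal_utility(eat_orange, desirability))  # 0.2 - 0.8 = -0.6
```

CDT then recommends the action with the larger U, here eating the apple.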
== Difference from evidential decision theory == David Lewis proved that the probability of a conditional P ( A > O j ) {\displaystyle P(A>O_{j})} does not always equal the conditional probability P ( O j | A ) {\displaystyle P(O_{j}|A)} . (see also Lewis's triviality result) If that were the case, causal decision theory would be equivalent to evidential decision theory, which uses conditional probabilities. Gibbard and Harper showed that if we accept two axioms (one related to the controversial principle of the conditional excluded middle), then the statistical independence of A {\displaystyle A} and A > O j {\displaystyle A>O_{j}} suffices to guarantee that P ( A > O j ) = P ( O j | A ) {\displaystyle P(A>O_{j})=P(O_{j}|A)} . However, there are cases in which actions and conditionals are not independent. Gibbard and Harper give an example in which King David wants Bathsheba but fears that summoning her would provoke a revolt. Further, David has studied works on psychology and political science which teach him the following: Kings have two personality types, charismatic and uncharismatic. A king's degree of charisma depends on his genetic make-up and early childhood experiences, and cannot be changed in adulthood. Now, charismatic kings tend to act justly and uncharismatic kings unjustly. Successful revolts against charismatic kings are rare, whereas successful revolts against uncharismatic kings are frequent. Unjust acts themselves, though, do not cause successful revolts; the reason uncharismatic kings are prone to successful revolts is that they have a sneaky, ignoble bearing. David does not know whether or not he is charismatic; he does know that it is unjust to send for another man's wife. (p. 164) In this case, evidential decision theory recommends that David abstain from Bathsheba, while causal decision theory—noting that whether David is charismatic or uncharismatic cannot be changed—recommends sending for her. 
When required to choose between causal decision theory and evidential decision theory, philosophers usually prefer causal decision theory. == Thought experiments == Different decision theories are often examined in their recommendations for action in different thought experiments. === Newcomb's paradox === In Newcomb's paradox, there is a predictor, a player, and two boxes designated A and B. The predictor is able to reliably predict the player's choices—say, with 99% accuracy. The player is given a choice between taking only box B, or taking both boxes A and B. The player knows the following: Box A is transparent and always contains a visible $1,000. Box B is opaque, and its content has already been set by the predictor: If the predictor has predicted the player will take both boxes A and B, then box B contains nothing. If the predictor has predicted that the player will take only box B, then box B contains $1,000,000. The player does not know what the predictor predicted or what box B contains while making the choice. Should the player take both boxes, or only box B? Causal decision theory recommends taking both boxes in this scenario, because at the moment when the player must make a decision, the predictor has already made a prediction (therefore, the action of the player will not affect the outcome). Conversely, evidential decision theory (EDT) would recommend that the player take only box B, because taking only box B is strong evidence that the predictor anticipated that the player would only take box B, and therefore it is very likely that box B contains $1,000,000. By the same reasoning, choosing to take both boxes is strong evidence that the predictor knew that the player would take both boxes; therefore we should expect that box B contains nothing.: 22  == Criticism == === Vagueness === Causal decision theory (CDT) does not itself specify what algorithm to use to calculate the counterfactual probabilities. 
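The divergence between the two theories in Newcomb's paradox can be made numerically explicit. The sketch below uses the payoffs from the text and the stated 99% accuracy; the credence q in the causal calculation is a free parameter, since CDT's recommendation does not depend on it:

```python
# Newcomb's paradox: EDT conditions on the choice as evidence about the
# prediction; CDT holds the (already fixed) prediction constant.
# Payoffs: $1,000 visible in box A; $1,000,000 or $0 in box B.

ACCURACY = 0.99

# Evidential expected values: one's own choice is evidence of the prediction.
ev_one_box = ACCURACY * 1_000_000 + (1 - ACCURACY) * 0
ev_two_box = (1 - ACCURACY) * (1_000_000 + 1_000) + ACCURACY * 1_000

# Causal expected value: let q be the agent's credence that box B holds the
# million. Two-boxing adds $1,000 regardless of q (dominance).
def cdt_value(action, q):
    base = q * 1_000_000
    return base + (1_000 if action == "two" else 0)

print(ev_one_box, ev_two_box)   # 990000.0 vs 11000.0: EDT one-boxes
print(cdt_value("two", 0.5) - cdt_value("one", 0.5))  # always +1000: CDT two-boxes
```

The $1,000 causal dominance of two-boxing, against the roughly $979,000 evidential advantage of one-boxing, is exactly the conflict the paradox dramatizes.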
One proposal is the "imaging" technique suggested by Lewis: To evaluate P ( A > O j ) {\displaystyle P(A>O_{j})} , move probability mass from each possible world w {\displaystyle w} to the closest possible world w A {\displaystyle w_{A}} in which A {\displaystyle A} holds, assuming A {\displaystyle A} is possible. However, this procedure requires that we know what we would believe if we were certain of A {\displaystyle A} ; this is itself a conditional to which we might assign probability less than 1, leading to regress. === Counterexamples === There are innumerable "counterexamples" where, it is argued, a straightforward application of CDT fails to produce a defensibly "sane" decision. Philosopher Andy Egan argues this is due to a fundamental disconnect between the intuitive rational rule, "do what you expect will bring about the best results", and CDT's algorithm of "do whatever has the best expected outcome, holding fixed our initial views about the likely causal structure of the world." In this view, it is CDT's requirement to "hold fixed the agent’s unconditional credences in dependency hypotheses" that leads to irrational decisions. An early alleged counterexample is Newcomb's problem. Because your choice of one or two boxes can't causally affect the Predictor's guess, causal decision theory recommends the two-boxing strategy. However, this results in getting only $1,000, not $1,000,000. Philosophers disagree whether one-boxing or two-boxing is the "rational" strategy. Similar concerns may arise even in seemingly-straightforward problems like the prisoner's dilemma, especially when playing opposite your "twin" whose choice to cooperate or defect correlates strongly, but is not caused by, your own choice. In the "Death in Damascus" scenario, an anthropomorphic "Death" predicts where you will be tomorrow, and goes to wait for you there. As in Newcomb's problem, we postulate that Death is a reliable predictor. 
A CDT agent would be unable to process the correlation, and may as a consequence make irrational decisions: Recently, a few variants of Death in Damascus have been proposed in which following CDT’s recommendations voluntarily loses money or, relatedly, forgoes a guaranteed payoff. One example is the Adversarial Offer: "Two boxes are on offer. A buyer may purchase one or none of the boxes but not both. Each of the two boxes costs $1. Yesterday, the seller put $3 in each box that she predicted the buyer not to acquire. Both the seller and the buyer believe the seller’s prediction to be accurate with probability 0.75." Adopting the buyer's perspective, CDT reasons that at least one box contains $3. Therefore, the average box contains at least $1.50 in causal expected value, which is more than the cost. Hence, CDT requires buying one of the two boxes. However, this is profitable for the seller. Another recent counterexample is the "Psychopath Button": Paul is debating whether to press the ‘kill all psychopaths’ button. It would, he thinks, be much better to live in a world with no psychopaths. Unfortunately, Paul is quite confident that only a psychopath would press such a button. Paul very strongly prefers living in a world with psychopaths to dying. Should Paul press the button? According to Egan, "pretty much everyone" agrees that Paul should not press the button, yet CDT endorses pressing the button. Philosopher Jim Joyce, perhaps the most prominent modern defender of CDT, argues that CDT naturally is capable of taking into account any "information about what one is inclined or likely to do as evidence". == See also == == Notes == == External links == Causal Decision Theory at the Stanford Encyclopedia of Philosophy The Logic of Conditionals at the Stanford Encyclopedia of Philosophy
Wikipedia/Causal_decision_theory
Victor Vroom, a professor at Yale University and a scholar on leadership and decision-making, developed the normative model of decision-making. Drawing upon literature from the areas of leadership, group decision-making, and procedural fairness, Vroom’s model predicts the effectiveness of decision-making procedures. Specifically, Vroom’s model takes into account the situation and the importance of the decision to determine which of Vroom’s five decision-making methods will be most effective. == Decision-making processes == Vroom identified five types of decision-making processes, each varying on degree of participation by the leader. Decide: The leader makes the decision or solves the problem alone and announces his/her decision to the group. The leader may gather information from members of the group. Consult (Individually): The leader approaches group members individually and presents them with the problem. The leader records the group member’s suggestions and makes a decision, deciding whether or not to use the information provided by group members. Consult (Group): The leader holds a group meeting where he/she presents the problem to the group as a whole. All members are asked to contribute and make suggestions during the meeting. The leader makes his/her decision alone, choosing which information obtained from the group meeting to use or discard. Facilitate: The leader holds a group meeting where he/she presents the problem to the group as a whole. This differs from consulting approach as the leader ensures that his/her opinions are not given any more weight than those of the group. The decision is made by group consensus, and not solely by the leader. Delegate: The leader does not actively participate in the decision-making process. Instead, the leader provides resources (e.g., information about the problem) and encouragement. 
== Situational influence of decision-making == Vroom identified seven situational factors that leaders should consider when choosing a decision-making process. Decision significance: How will the decision affect the project’s success, or the organization as a whole? Importance of commitment: Is it important that team members are committed to the final decision? Leader’s expertise: How knowledgeable is the leader in regards to the problem(s) at hand? Likelihood of commitment: If the leader makes the decision by himself/herself, how committed would the group members be to the decision? Group support for objectives: To what degree do group members support the leader’s and organization’s objectives? Group expertise: How knowledgeable are the group members in regards to the problem(s) at hand? Team competence: How well can group members work together to solve the problem? Vroom created a number of matrices which allow leaders to take into consideration these seven situational influences in order to choose the most effective decision-making process. == Application == Vroom’s normative model of decision-making has been used in a wide array of organizational settings to help leaders select the best decision-making style and also to describe the behaviours of leaders and group members. Further, Vroom’s model has been applied to research in the areas of gender and leadership style, and cultural influences and leadership style. == References ==
Wikipedia/Normative_model_of_decision-making
The exploration–exploitation dilemma, also known as the explore–exploit tradeoff, is a fundamental concept in decision-making that arises in many domains. It is a balancing act between two opposing strategies. Exploitation involves choosing the best option based on current knowledge of the system (which may be incomplete or misleading), while exploration involves trying out new options that may lead to better outcomes in the future at the expense of an exploitation opportunity. Finding the optimal balance between these two strategies is a crucial challenge in many decision-making problems whose goal is to maximize long-term benefits. == Application in machine learning == In the context of machine learning, the exploration–exploitation tradeoff is fundamental in reinforcement learning (RL), a type of machine learning that involves training agents to make decisions based on feedback from the environment. Crucially, this feedback may be incomplete or delayed. The agent must decide whether to exploit the current best-known policy or explore new policies to improve its performance. === Multi-armed bandit methods === The multi-armed bandit (MAB) problem is a classic example of the tradeoff, and many methods have been developed for it, such as epsilon-greedy, Thompson sampling, and the upper confidence bound (UCB). See the page on MAB for details. In more complex RL situations than the MAB problem, the agent can treat each choice as a MAB, where the payoff is the expected future reward. For example, if the agent performs an epsilon-greedy method, then the agent will often "pull the best lever" by picking the action that had the best predicted expected reward (exploit). However, it would pick a random action with probability epsilon (explore). Monte Carlo tree search, for example, uses a variant of the UCB method. === Exploration problems === There are some problems that make exploration difficult. Sparse reward. 
If rewards occur only once in a long while, then the agent might not persist in exploring. Furthermore, if the space of actions is large, then the sparse reward would mean the agent would not be guided by the reward to find a good direction for deeper exploration. A standard example is Montezuma's Revenge. Deceptive reward. If some early actions give immediate small reward, but other actions give later large reward, then the agent might be lured away from exploring the other actions. Noisy TV problem. If certain observations are irreducibly noisy (such as a television showing random images), then the agent might be trapped exploring those observations (watching the television). === Exploration reward === The exploration reward (also called exploration bonus) methods convert the exploration-exploitation dilemma into a balance of exploitations. That is, instead of trying to get the agent to balance exploration and exploitation, exploration is simply treated as another form of exploitation, and the agent simply attempts to maximize the sum of rewards from exploration and exploitation. The exploration reward can be treated as a form of intrinsic reward. We write these as r t i , r t e {\displaystyle r_{t}^{i},r_{t}^{e}} , meaning the intrinsic and extrinsic rewards at time step t {\displaystyle t} . However, exploration reward is different from exploitation in two regards: The reward of exploitation is not freely chosen, but given by the environment, whereas the reward of exploration may be picked freely. Indeed, there are many different ways to design r t i {\displaystyle r_{t}^{i}} described below. The reward of exploitation is usually stationary (i.e. the same action in the same state gives the same reward), but the reward of exploration is non-stationary (i.e. the same action in the same state should give less and less reward). 
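The "balance of exploitations" idea can be sketched in a few lines: the agent simply maximizes the extrinsic reward plus an intrinsic bonus that decays as a state becomes familiar. The bonus used here, a count-based term beta / sqrt(visit count) with an assumed weight beta, is one common illustrative choice, not the only one:

```python
# Exploration reward sketch: total reward = extrinsic + intrinsic bonus.
# The count-based bonus is non-stationary by design: revisiting the same
# state yields less and less intrinsic reward. beta is an assumed weight.
import math
from collections import defaultdict

visit_counts = defaultdict(int)
BETA = 0.5

def intrinsic_reward(state):
    visit_counts[state] += 1
    return BETA / math.sqrt(visit_counts[state])

def total_reward(state, extrinsic):
    return extrinsic + intrinsic_reward(state)

print(total_reward("s0", 0.0))  # 0.5 on the first visit
print(total_reward("s0", 0.0))  # ~0.354 on the second visit: the bonus decays
```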
Count-based exploration uses N n ( s ) {\displaystyle N_{n}(s)} , the number of visits to a state s {\displaystyle s} during the time-steps 1 : n {\displaystyle 1:n} , to calculate the exploration reward. This is only possible in small and discrete state spaces. Density-based exploration extends count-based exploration by using a density model ρ n ( s ) {\displaystyle \rho _{n}(s)} . The idea is that, if a state has been visited, then nearby states are also partly-visited. In maximum entropy exploration, the entropy of the agent's policy π {\displaystyle \pi } is included as a term in the intrinsic reward. That is, r t i = − ∑ a π ( a | s t ) ln ⁡ π ( a | s t ) + ⋯ {\displaystyle r_{t}^{i}=-\sum _{a}\pi (a|s_{t})\ln \pi (a|s_{t})+\cdots } . === Prediction-based === The forward dynamics model is a function for predicting the next state based on the current state and the current action: f : ( s t , a t ) ↦ s t + 1 {\displaystyle f:(s_{t},a_{t})\mapsto s_{t+1}} . The forward dynamics model is trained as the agent plays. The model becomes better at predicting state transitions for state-action pairs that have been performed many times. A forward dynamics model can define an exploration reward by r t i = ‖ f ( s t , a t ) − s t + 1 ‖ 2 2 {\displaystyle r_{t}^{i}=\|f(s_{t},a_{t})-s_{t+1}\|_{2}^{2}} . That is, the reward is the squared error of the prediction compared to reality. This rewards the agent for performing state-action pairs that have not been done many times. This is however susceptible to the noisy TV problem. The dynamics model can be run in latent space. That is, r t i = ‖ f ( s t , a t ) − ϕ ( s t + 1 ) ‖ 2 2 {\displaystyle r_{t}^{i}=\|f(s_{t},a_{t})-\phi (s_{t+1})\|_{2}^{2}} for some featurizer ϕ {\displaystyle \phi } . The featurizer can be the identity function (i.e. ϕ ( x ) = x {\displaystyle \phi (x)=x} ), randomly generated, the encoder half of a variational autoencoder, etc. A good featurizer improves forward dynamics exploration. 
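The prediction-error bonus can be demonstrated with a deliberately tiny setup: a linear "forward dynamics model" trained by gradient descent on a single made-up transition. The dynamics, learning rate, and dimensions are all illustrative assumptions; the point is only that the bonus shrinks as a transition becomes familiar:

```python
# Prediction-error bonus from a forward dynamics model:
#   r_i = || f(s, a) - s_next ||^2.
# Here f is a tiny linear predictor of the next state from [s, a],
# updated by one SGD step per transition. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3)) * 0.1     # predicts 2-dim next state from [s0, s1, a]

def predict(s, a):
    return W @ np.array([s[0], s[1], a])

def intrinsic_reward_and_update(s, a, s_next, lr=0.05):
    global W
    err = predict(s, a) - s_next
    r_i = float(err @ err)            # squared prediction error = the bonus
    # One SGD step: frequently seen (s, a) pairs become predictable,
    # so their bonus decays over time.
    W -= lr * np.outer(err, [s[0], s[1], a])
    return r_i

s, a = np.array([1.0, 0.0]), 1.0
s_next = np.array([1.0, 1.0])         # made-up transition
bonuses = [intrinsic_reward_and_update(s, a, s_next) for _ in range(50)]
print(bonuses[0] > bonuses[-1])       # True: familiarity kills the bonus
```

A noisy TV corresponds to an s_next that is random on every visit: the error never shrinks, so the bonus never decays, which is exactly the failure mode described above.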
The Intrinsic Curiosity Module (ICM) method trains simultaneously a forward dynamics model and a featurizer. The featurizer is trained by an inverse dynamics model, which is a function for predicting the current action based on the features of the current and the next state: g : ( ϕ ( s t ) , ϕ ( s t + 1 ) ) ↦ a t {\displaystyle g:(\phi (s_{t}),\phi (s_{t+1}))\mapsto a_{t}} . By optimizing the inverse dynamics, both the inverse dynamics model and the featurizer are improved. Then, the improved featurizer improves the forward dynamics model, which improves the exploration of the agent. The Random Network Distillation (RND) method attempts to solve this problem by teacher–student distillation. Instead of a forward dynamics model, it has two models f , f ′ {\displaystyle f,f'} . The f ′ {\displaystyle f'} teacher model is fixed, and the f {\displaystyle f} student model is trained to minimize ‖ f ( s ) − f ′ ( s ) ‖ 2 2 {\displaystyle \|f(s)-f'(s)\|_{2}^{2}} on states s {\displaystyle s} . As a state is visited more and more, the student network becomes better at predicting the teacher. Meanwhile, the prediction error is also an exploration reward for the agent, and so the agent learns to perform actions that result in higher prediction error. Thus, we have a student network attempting to minimize the prediction error, while the agent attempts to maximize it, resulting in exploration. The states are normalized by subtracting a running average and dividing by a running variance, which is necessary since the teacher model is frozen. The rewards are normalized by dividing by a running variance. Exploration by disagreement trains an ensemble of forward dynamics models, each on a random subset of all ( s t , a t , s t + 1 ) {\displaystyle (s_{t},a_{t},s_{t+1})} tuples. The exploration reward is the variance of the models' predictions. === Noise === For neural network–based agents, the NoisyNet method replaces some of its neural network modules with noisy versions. 
That is, some network parameters are random variables from a probability distribution. The parameters of the distribution are themselves learnable. For example, in a linear layer y = W x + b {\displaystyle y=Wx+b} , both W , b {\displaystyle W,b} are sampled from Gaussian distributions N ( μ W , Σ W ) , N ( μ b , Σ b ) {\displaystyle {\mathcal {N}}(\mu _{W},\Sigma _{W}),{\mathcal {N}}(\mu _{b},\Sigma _{b})} at every step, and the parameters μ W , Σ W , μ b , Σ b {\displaystyle \mu _{W},\Sigma _{W},\mu _{b},\Sigma _{b}} are learned via the reparameterization trick. == References == Amin, Susan; Gomrokchi, Maziar; Satija, Harsh; Hoof, van; Precup, Doina (September 1, 2021). "A Survey of Exploration Methods in Reinforcement Learning". arXiv:2109.00157 [cs.LG].
Wikipedia/Exploration–exploitation_dilemma
Possibility theory is a mathematical theory for dealing with certain types of uncertainty and is an alternative to probability theory. It uses measures of possibility and necessity between 0 and 1, ranging from impossible to possible and unnecessary to necessary, respectively. Professor Lotfi Zadeh first introduced possibility theory in 1978 as an extension of his theory of fuzzy sets and fuzzy logic. Didier Dubois and Henri Prade further contributed to its development. Earlier, in the 1950s, economist G. L. S. Shackle proposed the min/max algebra to describe degrees of potential surprise. == Formalization of possibility == For simplicity, assume that the universe of discourse Ω is a finite set. A possibility measure is a function Π {\displaystyle \Pi } from 2 Ω {\displaystyle 2^{\Omega }} to [0, 1] such that: Axiom 1: Π ( ∅ ) = 0 {\displaystyle \Pi (\varnothing )=0} Axiom 2: Π ( Ω ) = 1 {\displaystyle \Pi (\Omega )=1} Axiom 3: Π ( U ∪ V ) = max ( Π ( U ) , Π ( V ) ) {\displaystyle \Pi (U\cup V)=\max \left(\Pi (U),\Pi (V)\right)} for any disjoint subsets U {\displaystyle U} and V {\displaystyle V} . It follows that, like probability on finite probability spaces, the possibility measure is determined by its behavior on singletons: Π ( U ) = max ω ∈ U Π ( { ω } ) . {\displaystyle \Pi (U)=\max _{\omega \in U}\Pi (\{\omega \}).} Axiom 1 can be interpreted as the assumption that Ω is an exhaustive description of future states of the world, because it means that no belief weight is given to elements outside Ω. Axiom 2 could be interpreted as the assumption that the evidence from which Π {\displaystyle \Pi } was constructed is free of any contradiction. Technically, it implies that there is at least one element in Ω with possibility 1. Axiom 3 corresponds to the additivity axiom in probabilities. However, there is an important practical difference. 
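The axioms above can be checked numerically on a small finite universe, where a possibility measure is determined by its singleton values. The singleton values below are arbitrary (one element must have possibility 1 for Axiom 2 to hold); the check also exercises the max rule for arbitrary, not just disjoint, unions:

```python
# Possibility measure on a finite universe, determined by singletons:
#   Pi(U) = max over w in U of Pi({w}),  Pi(empty) = 0.
# Singleton values are arbitrary illustrations.
from itertools import chain, combinations

universe = frozenset({"a", "b", "c"})
singleton = {"a": 0.2, "b": 1.0, "c": 0.7}   # "b" has possibility 1 (Axiom 2)

def possibility(U):
    return max((singleton[w] for w in U), default=0.0)

def subsets(s):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

assert possibility(frozenset()) == 0.0    # Axiom 1
assert possibility(universe) == 1.0       # Axiom 2
for U in subsets(universe):               # max rule for all unions
    for V in subsets(universe):
        assert possibility(U | V) == max(possibility(U), possibility(V))
```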
Possibility theory is computationally more convenient because Axioms 1–3 imply that: Π ( U ∪ V ) = max ( Π ( U ) , Π ( V ) ) {\displaystyle \Pi (U\cup V)=\max \left(\Pi (U),\Pi (V)\right)} for any subsets U {\displaystyle U} and V {\displaystyle V} . Because one can know the possibility of the union from the possibility of each component, it can be said that possibility is compositional with respect to the union operator. Note however that it is not compositional with respect to the intersection operator. Generally: Π ( U ∩ V ) ≤ min ( Π ( U ) , Π ( V ) ) ≤ max ( Π ( U ) , Π ( V ) ) . {\displaystyle \Pi (U\cap V)\leq \min \left(\Pi (U),\Pi (V)\right)\leq \max \left(\Pi (U),\Pi (V)\right).} When Ω is not finite, Axiom 3 can be replaced by: For all index sets I {\displaystyle I} , if the subsets U i , i ∈ I {\displaystyle U_{i,\,i\in I}} are pairwise disjoint, Π ( ⋃ i ∈ I U i ) = sup i ∈ I Π ( U i ) . {\displaystyle \Pi \left(\bigcup _{i\in I}U_{i}\right)=\sup _{i\in I}\Pi (U_{i}).} == Necessity == Whereas probability theory uses a single number, the probability, to describe how likely an event is to occur, possibility theory uses two concepts, the possibility and the necessity of the event. For any set U {\displaystyle U} , the necessity measure is defined by N ( U ) = 1 − Π ( U ¯ ) {\displaystyle N(U)=1-\Pi ({\overline {U}})} . In the above formula, U ¯ {\displaystyle {\overline {U}}} denotes the complement of U {\displaystyle U} , that is the elements of Ω {\displaystyle \Omega } that do not belong to U {\displaystyle U} . It is straightforward to show that: N ( U ) ≤ Π ( U ) {\displaystyle N(U)\leq \Pi (U)} for any U {\displaystyle U} and that: N ( U ∩ V ) = min ( N ( U ) , N ( V ) ) {\displaystyle N(U\cap V)=\min(N(U),N(V))} . Note that contrary to probability theory, possibility is not self-dual. 
That is, for any event U {\displaystyle U} , we only have the inequality: Π ( U ) + Π ( U ¯ ) ≥ 1 {\displaystyle \Pi (U)+\Pi ({\overline {U}})\geq 1} However, the following duality rule holds: For any event U {\displaystyle U} , either Π ( U ) = 1 {\displaystyle \Pi (U)=1} , or N ( U ) = 0 {\displaystyle N(U)=0} Accordingly, beliefs about an event can be represented by a number and a bit. == Interpretation == There are four cases that can be interpreted as follows: N ( U ) = 1 {\displaystyle N(U)=1} means that U {\displaystyle U} is necessary. U {\displaystyle U} is certainly true. It implies that Π ( U ) = 1 {\displaystyle \Pi (U)=1} . Π ( U ) = 0 {\displaystyle \Pi (U)=0} means that U {\displaystyle U} is impossible. U {\displaystyle U} is certainly false. It implies that N ( U ) = 0 {\displaystyle N(U)=0} . Π ( U ) = 1 {\displaystyle \Pi (U)=1} means that U {\displaystyle U} is possible. I would not be surprised at all if U {\displaystyle U} occurs. It leaves N ( U ) {\displaystyle N(U)} unconstrained. N ( U ) = 0 {\displaystyle N(U)=0} means that U {\displaystyle U} is unnecessary. I would not be surprised at all if U {\displaystyle U} does not occur. It leaves Π ( U ) {\displaystyle \Pi (U)} unconstrained. The intersection of the last two cases is N ( U ) = 0 {\displaystyle N(U)=0} and Π ( U ) = 1 {\displaystyle \Pi (U)=1} meaning that I believe nothing at all about U {\displaystyle U} . Because it allows for indeterminacy like this, possibility theory relates to the graduation of a many-valued logic, such as intuitionistic logic, rather than the classical two-valued logic. Note that unlike possibility, fuzzy logic is compositional with respect to both the union and the intersection operator. The relationship with fuzzy theory can be explained with the following classic example. Fuzzy logic: When a bottle is half full, it can be said that the level of truth of the proposition "The bottle is full" is 0.5. 
The word "full" is seen as a fuzzy predicate describing the amount of liquid in the bottle. Possibility theory: There is one bottle, either completely full or totally empty. The proposition "the possibility level that the bottle is full is 0.5" describes a degree of belief. One way to interpret 0.5 in that proposition is to define its meaning as: I am ready to bet that it's empty as long as the odds are even (1:1) or better, and I would not bet at any rate that it's full. == Possibility theory as an imprecise probability theory == There is an extensive formal correspondence between probability and possibility theories, where the addition operator corresponds to the maximum operator. A possibility measure can be seen as a consonant plausibility measure in the Dempster–Shafer theory of evidence. The operators of possibility theory can be seen as a hyper-cautious version of the operators of the transferable belief model, a modern development of the theory of evidence. Possibility can be seen as an upper probability: any possibility distribution defines a unique credal set of admissible probability distributions by K = { P ∣ ∀ S P ( S ) ≤ Π ( S ) } . {\displaystyle K=\{\,P\mid \forall S\ P(S)\leq \Pi (S)\,\}.} This allows one to study possibility theory using the tools of imprecise probabilities. == Necessity logic == We call generalized possibility every function satisfying Axiom 1 and Axiom 3. We call generalized necessity the dual of a generalized possibility. The generalized necessities are related to a very simple and interesting fuzzy logic called necessity logic. In the deduction apparatus of necessity logic the logical axioms are the usual classical tautologies. Also, there is only a fuzzy inference rule extending the usual modus ponens. Such a rule says that if α and α → β are proved at degree λ and μ, respectively, then we can assert β at degree min{λ,μ}. 
It is easy to see that the theories of such a logic are the generalized necessities and that the completely consistent theories coincide with the necessities (see for example Gerla 2001). == See also == Fuzzy measure theory Logical possibility Modal logic Probabilistic logic Random-fuzzy variable Transferable belief model Upper and lower probabilities == References == === Citations === === Sources === Dubois, Didier and Prade, Henri, "Possibility Theory, Probability Theory and Multiple-valued Logics: A Clarification", Annals of Mathematics and Artificial Intelligence 32:35–66, 2002. Gerla, Giangiacomo, Fuzzy Logic: Mathematical Tools for Approximate Reasoning, Kluwer Academic Publishers, Dordrecht, 2001. Kohout, Ladislav J., "Theories of Possibility: Meta-Axiomatics and Semantics", Fuzzy Sets and Systems 25:357–367, 1988. Zadeh, Lotfi, "Fuzzy Sets as the Basis for a Theory of Possibility", Fuzzy Sets and Systems 1:3–28, 1978. (Reprinted in Fuzzy Sets and Systems 100 (Supplement): 9–34, 1999.) Gaines, Brian R. and Kohout, Ladislav J., "Possible Automata", in Proceedings of the International Symposium on Multiple-Valued Logic, pp. 183–192, Bloomington, Indiana, May 13–16, 1975.
Wikipedia/Possibility_theory
In estimation theory and decision theory, a Bayes estimator or a Bayes action is an estimator or decision rule that minimizes the posterior expected value of a loss function (i.e., the posterior expected loss). Equivalently, it maximizes the posterior expectation of a utility function. An alternative way of formulating an estimator within Bayesian statistics is maximum a posteriori estimation. == Definition == Suppose an unknown parameter θ {\displaystyle \theta } is known to have a prior distribution π {\displaystyle \pi } . Let θ ^ = θ ^ ( x ) {\displaystyle {\widehat {\theta }}={\widehat {\theta }}(x)} be an estimator of θ {\displaystyle \theta } (based on some measurements x), and let L ( θ , θ ^ ) {\displaystyle L(\theta ,{\widehat {\theta }})} be a loss function, such as squared error. The Bayes risk of θ ^ {\displaystyle {\widehat {\theta }}} is defined as E π ( L ( θ , θ ^ ) ) {\displaystyle E_{\pi }(L(\theta ,{\widehat {\theta }}))} , where the expectation is taken over the probability distribution of θ {\displaystyle \theta } : this defines the risk function as a function of θ ^ {\displaystyle {\widehat {\theta }}} . An estimator θ ^ {\displaystyle {\widehat {\theta }}} is said to be a Bayes estimator if it minimizes the Bayes risk among all estimators. Equivalently, the estimator which minimizes the posterior expected loss E ( L ( θ , θ ^ ) | x ) {\displaystyle E(L(\theta ,{\widehat {\theta }})|x)} for each x {\displaystyle x} also minimizes the Bayes risk and therefore is a Bayes estimator. If the prior is improper then an estimator which minimizes the posterior expected loss for each x {\displaystyle x} is called a generalized Bayes estimator. == Examples == === Minimum mean square error estimation === The most common risk function used for Bayesian estimation is the mean square error (MSE), also called squared error risk. 
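The definitions above can be made concrete with a small numerical sketch (the prior, likelihood, and observation are hypothetical numbers): on a discretized parameter space the posterior is computed pointwise, the posterior expected squared-error loss is minimized by direct search, and the minimizer coincides with the posterior mean.

```python
# A numerical sketch with hypothetical numbers (normal prior and normal
# likelihood on a grid): the estimate minimizing the posterior expected
# squared-error loss, found by direct search, matches the posterior mean.
import math

grid = [i / 100 for i in range(-300, 301)]   # theta grid from -3.00 to 3.00
x = 1.2                                      # one observation, x | theta ~ N(theta, 1)

# prior theta ~ N(0, 1); unnormalized posterior = prior * likelihood
post = [math.exp(-t * t / 2) * math.exp(-(x - t) ** 2 / 2) for t in grid]
total = sum(post)
post = [p / total for p in post]             # normalize on the grid

def posterior_expected_loss(a):              # E[(theta - a)^2 | x]
    return sum(w * (t - a) ** 2 for w, t in zip(post, grid))

best = min(grid, key=posterior_expected_loss)
post_mean = sum(w * t for w, t in zip(post, grid))
print(round(best, 2), round(post_mean, 2))   # 0.6 0.6
```

With these numbers the closed-form conjugate result stated later in the article gives the same answer: σ² = τ² = 1 and μ = 0 yield θ̂(x) = x/2 = 0.6.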
The MSE is defined by M S E = E [ ( θ ^ ( x ) − θ ) 2 ] , {\displaystyle \mathrm {MSE} =E\left[({\widehat {\theta }}(x)-\theta )^{2}\right],} where the expectation is taken over the joint distribution of θ {\displaystyle \theta } and x {\displaystyle x} . ==== Posterior mean ==== Using the MSE as risk, the Bayes estimate of the unknown parameter is simply the mean of the posterior distribution, θ ^ ( x ) = E [ θ | x ] = ∫ θ p ( θ | x ) d θ . {\displaystyle {\widehat {\theta }}(x)=E[\theta |x]=\int \theta \,p(\theta |x)\,d\theta .} This is known as the minimum mean square error (MMSE) estimator. === Bayes estimators for conjugate priors === If there is no inherent reason to prefer one prior probability distribution over another, a conjugate prior is sometimes chosen for simplicity. A conjugate prior is defined as a prior distribution belonging to some parametric family, for which the resulting posterior distribution also belongs to the same family. This is an important property, since the Bayes estimator, as well as its statistical properties (variance, confidence interval, etc.), can all be derived from the posterior distribution. Conjugate priors are especially useful for sequential estimation, where the posterior of the current measurement is used as the prior in the next measurement. In sequential estimation, unless a conjugate prior is used, the posterior distribution typically becomes more complex with each added measurement, and the Bayes estimator cannot usually be calculated without resorting to numerical methods. Following are some examples of conjugate priors. If x | θ {\displaystyle x|\theta } is Normal, x | θ ∼ N ( θ , σ 2 ) {\displaystyle x|\theta \sim N(\theta ,\sigma ^{2})} , and the prior is normal, θ ∼ N ( μ , τ 2 ) {\displaystyle \theta \sim N(\mu ,\tau ^{2})} , then the posterior is also Normal and the Bayes estimator under MSE is given by θ ^ ( x ) = σ 2 σ 2 + τ 2 μ + τ 2 σ 2 + τ 2 x . 
{\displaystyle {\widehat {\theta }}(x)={\frac {\sigma ^{2}}{\sigma ^{2}+\tau ^{2}}}\mu +{\frac {\tau ^{2}}{\sigma ^{2}+\tau ^{2}}}x.} If x 1 , . . . , x n {\displaystyle x_{1},...,x_{n}} are iid Poisson random variables x i | θ ∼ P ( θ ) {\displaystyle x_{i}|\theta \sim P(\theta )} , and if the prior is Gamma distributed θ ∼ G ( a , b ) {\displaystyle \theta \sim G(a,b)} , then the posterior is also Gamma distributed, and the Bayes estimator under MSE is given by θ ^ ( X ) = n X ¯ + a n + b . {\displaystyle {\widehat {\theta }}(X)={\frac {n{\overline {X}}+a}{n+b}}.} If x 1 , . . . , x n {\displaystyle x_{1},...,x_{n}} are iid uniformly distributed x i | θ ∼ U ( 0 , θ ) {\displaystyle x_{i}|\theta \sim U(0,\theta )} , and if the prior is Pareto distributed θ ∼ P a ( θ 0 , a ) {\displaystyle \theta \sim Pa(\theta _{0},a)} , then the posterior is also Pareto distributed, and the Bayes estimator under MSE is given by θ ^ ( X ) = ( a + n ) max ( θ 0 , x 1 , . . . , x n ) a + n − 1 . {\displaystyle {\widehat {\theta }}(X)={\frac {(a+n)\max {(\theta _{0},x_{1},...,x_{n})}}{a+n-1}}.} === Alternative risk functions === Risk functions are chosen depending on how one measures the distance between the estimate and the unknown parameter. The MSE is the most common risk function in use, primarily due to its simplicity. However, alternative risk functions are also occasionally used. The following are several examples of such alternatives. We denote the posterior generalized distribution function by F {\displaystyle F} . ==== Posterior median and other quantiles ==== A "linear" loss function, with a > 0 {\displaystyle a>0} , which yields the posterior median as the Bayes' estimate: L ( θ , θ ^ ) = a | θ − θ ^ | {\displaystyle L(\theta ,{\widehat {\theta }})=a|\theta -{\widehat {\theta }}|} F ( θ ^ ( x ) | X ) = 1 2 . 
{\displaystyle F({\widehat {\theta }}(x)|X)={\tfrac {1}{2}}.} Another "linear" loss function assigns different "weights" a , b > 0 {\displaystyle a,b>0} to overestimation and underestimation. It yields a quantile from the posterior distribution, and is a generalization of the previous loss function: L ( θ , θ ^ ) = { a | θ − θ ^ | , for θ − θ ^ ≥ 0 b | θ − θ ^ | , for θ − θ ^ < 0 {\displaystyle L(\theta ,{\widehat {\theta }})={\begin{cases}a|\theta -{\widehat {\theta }}|,&{\mbox{for }}\theta -{\widehat {\theta }}\geq 0\\b|\theta -{\widehat {\theta }}|,&{\mbox{for }}\theta -{\widehat {\theta }}<0\end{cases}}} F ( θ ^ ( x ) | X ) = a a + b . {\displaystyle F({\widehat {\theta }}(x)|X)={\frac {a}{a+b}}.} ==== Posterior mode ==== The following loss function is trickier: it yields either the posterior mode, or a point close to it, depending on the curvature and properties of the posterior distribution. Small values of the parameter K > 0 {\displaystyle K>0} are recommended, in order to use the mode as an approximation ( L > 0 {\displaystyle L>0} ): L ( θ , θ ^ ) = { 0 , for | θ − θ ^ | < K L , for | θ − θ ^ | ≥ K . {\displaystyle L(\theta ,{\widehat {\theta }})={\begin{cases}0,&{\mbox{for }}|\theta -{\widehat {\theta }}|<K\\L,&{\mbox{for }}|\theta -{\widehat {\theta }}|\geq K.\end{cases}}} Other loss functions can be conceived, although the mean squared error is the most widely used and validated. Other loss functions are used in statistics, particularly in robust statistics. == Generalized Bayes estimators == The prior distribution p {\displaystyle p} has thus far been assumed to be a true probability distribution, in that ∫ p ( θ ) d θ = 1. {\displaystyle \int p(\theta )d\theta =1.} However, occasionally this can be a restrictive requirement. For example, there is no distribution (covering the set, R, of all real numbers) for which every real number is equally likely.
Yet, in some sense, such a "distribution" seems like a natural choice for a non-informative prior, i.e., a prior distribution which does not imply a preference for any particular value of the unknown parameter. One can still define a function p ( θ ) = 1 {\displaystyle p(\theta )=1} , but this would not be a proper probability distribution since it has infinite mass, ∫ p ( θ ) d θ = ∞ . {\displaystyle \int {p(\theta )d\theta }=\infty .} Such measures p ( θ ) {\displaystyle p(\theta )} , which are not probability distributions, are referred to as improper priors. The use of an improper prior means that the Bayes risk is undefined (since the prior is not a probability distribution and we cannot take an expectation under it). As a consequence, it is no longer meaningful to speak of a Bayes estimator that minimizes the Bayes risk. Nevertheless, in many cases, one can define the posterior distribution p ( θ | x ) = p ( x | θ ) p ( θ ) ∫ p ( x | θ ) p ( θ ) d θ . {\displaystyle p(\theta |x)={\frac {p(x|\theta )p(\theta )}{\int p(x|\theta )p(\theta )d\theta }}.} This is a definition, and not an application of Bayes' theorem, since Bayes' theorem can only be applied when all distributions are proper. However, it is not uncommon for the resulting "posterior" to be a valid probability distribution. In this case, the posterior expected loss ∫ L ( θ , a ) p ( θ | x ) d θ {\displaystyle \int {L(\theta ,a)p(\theta |x)d\theta }} is typically well-defined and finite. Recall that, for a proper prior, the Bayes estimator minimizes the posterior expected loss. When the prior is improper, an estimator which minimizes the posterior expected loss is referred to as a generalized Bayes estimator. === Example === A typical example is estimation of a location parameter with a loss function of the type L ( a − θ ) {\displaystyle L(a-\theta )} . Here θ {\displaystyle \theta } is a location parameter, i.e., p ( x | θ ) = f ( x − θ ) {\displaystyle p(x|\theta )=f(x-\theta )} . 
It is common to use the improper prior p ( θ ) = 1 {\displaystyle p(\theta )=1} in this case, especially when no other more subjective information is available. This yields p ( θ | x ) = p ( x | θ ) p ( θ ) p ( x ) = f ( x − θ ) p ( x ) {\displaystyle p(\theta |x)={\frac {p(x|\theta )p(\theta )}{p(x)}}={\frac {f(x-\theta )}{p(x)}}} so the posterior expected loss E [ L ( a − θ ) | x ] = ∫ L ( a − θ ) p ( θ | x ) d θ = 1 p ( x ) ∫ L ( a − θ ) f ( x − θ ) d θ . {\displaystyle E[L(a-\theta )|x]=\int {L(a-\theta )p(\theta |x)d\theta }={\frac {1}{p(x)}}\int L(a-\theta )f(x-\theta )d\theta .} The generalized Bayes estimator is the value a ( x ) {\displaystyle a(x)} that minimizes this expression for a given x {\displaystyle x} . This is equivalent to minimizing ∫ L ( a − θ ) f ( x − θ ) d θ {\displaystyle \int L(a-\theta )f(x-\theta )d\theta } for a given x . {\displaystyle x.} (1) In this case it can be shown that the generalized Bayes estimator has the form x + a 0 {\displaystyle x+a_{0}} , for some constant a 0 {\displaystyle a_{0}} . To see this, let a 0 {\displaystyle a_{0}} be the value minimizing (1) when x = 0 {\displaystyle x=0} . Then, given a different value x 1 {\displaystyle x_{1}} , we must minimize ∫ L ( a − θ ) f ( x 1 − θ ) d θ = ∫ L ( a − x 1 − θ ′ ) f ( − θ ′ ) d θ ′ . {\displaystyle \int L(a-\theta )f(x_{1}-\theta )d\theta =\int L(a-x_{1}-\theta ')f(-\theta ')d\theta '.} (2) This is identical to (1), except that a {\displaystyle a} has been replaced by a − x 1 {\displaystyle a-x_{1}} . Thus, the expression minimizing is given by a − x 1 = a 0 {\displaystyle a-x_{1}=a_{0}} , so that the optimal estimator has the form a ( x ) = a 0 + x . {\displaystyle a(x)=a_{0}+x.\,\!} == Empirical Bayes estimators == A Bayes estimator derived through the empirical Bayes method is called an empirical Bayes estimator. Empirical Bayes methods enable the use of auxiliary empirical data, from observations of related parameters, in the development of a Bayes estimator. 
This is done under the assumption that the estimated parameters are obtained from a common prior. For example, if independent observations of different parameters are performed, then the estimation performance of a particular parameter can sometimes be improved by using data from other observations. There are both parametric and non-parametric approaches to empirical Bayes estimation. === Example === The following is a simple example of parametric empirical Bayes estimation. Given past observations x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} having conditional distribution f ( x i | θ i ) {\displaystyle f(x_{i}|\theta _{i})} , one is interested in estimating θ n + 1 {\displaystyle \theta _{n+1}} based on x n + 1 {\displaystyle x_{n+1}} . Assume that the θ i {\displaystyle \theta _{i}} 's have a common prior π {\displaystyle \pi } which depends on unknown parameters. For example, suppose that π {\displaystyle \pi } is normal with unknown mean μ π {\displaystyle \mu _{\pi }\,\!} and variance σ π . {\displaystyle \sigma _{\pi }\,\!.} We can then use the past observations to determine the mean and variance of π {\displaystyle \pi } in the following way. First, we estimate the mean μ m {\displaystyle \mu _{m}\,\!} and variance σ m {\displaystyle \sigma _{m}\,\!} of the marginal distribution of x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} using the maximum likelihood approach: μ ^ m = 1 n ∑ x i , {\displaystyle {\widehat {\mu }}_{m}={\frac {1}{n}}\sum {x_{i}},} σ ^ m 2 = 1 n ∑ ( x i − μ ^ m ) 2 . 
{\displaystyle {\widehat {\sigma }}_{m}^{2}={\frac {1}{n}}\sum {(x_{i}-{\widehat {\mu }}_{m})^{2}}.} Next, we use the law of total expectation to compute μ m {\displaystyle \mu _{m}} and the law of total variance to compute σ m 2 {\displaystyle \sigma _{m}^{2}} such that μ m = E π [ μ f ( θ ) ] , {\displaystyle \mu _{m}=E_{\pi }[\mu _{f}(\theta )]\,\!,} σ m 2 = E π [ σ f 2 ( θ ) ] + E π [ ( μ f ( θ ) − μ m ) 2 ] , {\displaystyle \sigma _{m}^{2}=E_{\pi }[\sigma _{f}^{2}(\theta )]+E_{\pi }[(\mu _{f}(\theta )-\mu _{m})^{2}],} where μ f ( θ ) {\displaystyle \mu _{f}(\theta )} and σ f ( θ ) {\displaystyle \sigma _{f}(\theta )} are the moments of the conditional distribution f ( x i | θ i ) {\displaystyle f(x_{i}|\theta _{i})} , which are assumed to be known. In particular, suppose that μ f ( θ ) = θ {\displaystyle \mu _{f}(\theta )=\theta } and that σ f 2 ( θ ) = K {\displaystyle \sigma _{f}^{2}(\theta )=K} ; we then have μ π = μ m , {\displaystyle \mu _{\pi }=\mu _{m}\,\!,} σ π 2 = σ m 2 − σ f 2 = σ m 2 − K . {\displaystyle \sigma _{\pi }^{2}=\sigma _{m}^{2}-\sigma _{f}^{2}=\sigma _{m}^{2}-K.} Finally, we obtain the estimated moments of the prior, μ ^ π = μ ^ m , {\displaystyle {\widehat {\mu }}_{\pi }={\widehat {\mu }}_{m},} σ ^ π 2 = σ ^ m 2 − K . {\displaystyle {\widehat {\sigma }}_{\pi }^{2}={\widehat {\sigma }}_{m}^{2}-K.} For example, if x i | θ i ∼ N ( θ i , 1 ) {\displaystyle x_{i}|\theta _{i}\sim N(\theta _{i},1)} , and if we assume a normal prior (which is a conjugate prior in this case), we conclude that θ n + 1 ∼ N ( μ ^ π , σ ^ π 2 ) {\displaystyle \theta _{n+1}\sim N({\widehat {\mu }}_{\pi },{\widehat {\sigma }}_{\pi }^{2})} , from which the Bayes estimator of θ n + 1 {\displaystyle \theta _{n+1}} based on x n + 1 {\displaystyle x_{n+1}} can be calculated. == Properties == === Admissibility === Bayes rules having finite Bayes risk are typically admissible. The following are some specific examples of admissibility theorems. 
If a Bayes rule is unique, then it is admissible. For example, as stated above, under mean squared error (MSE) the Bayes rule is unique and therefore admissible. If θ belongs to a discrete set, then all Bayes rules are admissible. If θ belongs to a continuous (non-discrete) set, and if the risk function R(θ,δ) is continuous in θ for every δ, then all Bayes rules are admissible. By contrast, generalized Bayes rules often have undefined Bayes risk in the case of improper priors. These rules are often inadmissible and the verification of their admissibility can be difficult. For example, the generalized Bayes estimator of a location parameter θ based on Gaussian samples (described in the "Generalized Bayes estimators" section above) is inadmissible when the dimension p of the parameter exceeds 2, i.e., for p > 2 {\displaystyle p>2} ; this is known as Stein's phenomenon. === Asymptotic efficiency === Let θ be an unknown random variable, and suppose that x 1 , x 2 , … {\displaystyle x_{1},x_{2},\ldots } are iid samples with density f ( x i | θ ) {\displaystyle f(x_{i}|\theta )} . Let δ n = δ n ( x 1 , … , x n ) {\displaystyle \delta _{n}=\delta _{n}(x_{1},\ldots ,x_{n})} be a sequence of Bayes estimators of θ based on an increasing number of measurements. We are interested in analyzing the asymptotic performance of this sequence of estimators, i.e., the performance of δ n {\displaystyle \delta _{n}} for large n. To this end, it is customary to regard θ as a deterministic parameter whose true value is θ 0 {\displaystyle \theta _{0}} . Under specific conditions, for large samples (large values of n), the posterior density of θ is approximately normal. In other words, for large n, the effect of the prior probability on the posterior is negligible.
Moreover, if δ is the Bayes estimator under MSE risk, then it is asymptotically unbiased and it converges in distribution to the normal distribution: n ( δ n − θ 0 ) → N ( 0 , 1 I ( θ 0 ) ) , {\displaystyle {\sqrt {n}}(\delta _{n}-\theta _{0})\to N\left(0,{\frac {1}{I(\theta _{0})}}\right),} where I(θ0) is the Fisher information of θ0. It follows that the Bayes estimator δn under MSE is asymptotically efficient. Another estimator which is asymptotically normal and efficient is the maximum likelihood estimator (MLE). The relations between the maximum likelihood and Bayes estimators can be shown in the following simple example. ==== Example: estimating p in a binomial distribution ==== Consider the estimator of θ based on binomial sample x~b(θ,n) where θ denotes the probability for success. Assuming θ is distributed according to the conjugate prior, which in this case is the Beta distribution B(a,b), the posterior distribution is known to be B(a+x,b+n-x). Thus, the Bayes estimator under MSE is δ n ( x ) = E [ θ | x ] = a + x a + b + n . {\displaystyle \delta _{n}(x)=E[\theta |x]={\frac {a+x}{a+b+n}}.} The MLE in this case is x/n and so we get, δ n ( x ) = a + b a + b + n E [ θ ] + n a + b + n δ M L E . {\displaystyle \delta _{n}(x)={\frac {a+b}{a+b+n}}E[\theta ]+{\frac {n}{a+b+n}}\delta _{MLE}.} The last equation implies that, for n → ∞, the Bayes estimator (in the described problem) is close to the MLE. On the other hand, when n is small, the prior information is still relevant to the decision problem and affects the estimate. To see the relative weight of the prior information, assume that a=b; in this case each measurement brings in 1 new bit of information; the formula above shows that the prior information has the same weight as a+b bits of the new information. In applications, one often knows very little about fine details of the prior distribution; in particular, there is no reason to assume that it coincides with B(a,b) exactly. 
In such a case, one possible interpretation of this calculation is: "there is a non-pathological prior distribution with the mean value 0.5 and the standard deviation d which gives the weight of prior information equal to 1/(4d²)−1 bits of new information." Another example of the same phenomenon is the case when the prior estimate and a measurement are normally distributed. If the prior is centered at B with deviation Σ, and the measurement is centered at b with deviation σ, then the posterior is centered at α α + β B + β α + β b {\displaystyle {\frac {\alpha }{\alpha +\beta }}B+{\frac {\beta }{\alpha +\beta }}b} , with weights in this weighted average being α=σ², β=Σ². Moreover, the squared posterior deviation is Σ²σ²/(Σ²+σ²). In other words, the prior is combined with the measurement in exactly the same way as if it were an extra measurement to take into account. For example, if Σ=σ/2, then the deviation of 4 measurements combined matches the deviation of the prior (assuming that errors of measurements are independent). And the weights α,β in the formula for posterior match this: the weight of the prior is 4 times the weight of the measurement. Combining this prior with n measurements with average v results in the posterior centered at 4 4 + n B + n 4 + n v {\displaystyle {\frac {4}{4+n}}B+{\frac {n}{4+n}}v} ; in particular, the prior plays the same role as 4 measurements made in advance. In general, the prior has the weight of (σ/Σ)² measurements. Compare to the example of binomial distribution: there the prior has the weight of (σ/Σ)²−1 measurements. One can see that the exact weight does depend on the details of the distribution, but when σ≫Σ, the difference becomes small. == Practical example of Bayes estimators == The Internet Movie Database uses a formula for calculating and comparing the ratings of films by its users, including their Top Rated 250 Titles, which is claimed to give "a true Bayesian estimate".
The following Bayesian formula was initially used to calculate a weighted average score for the Top 250, though the formula has since changed: W = R v + C m v + m {\displaystyle W={Rv+Cm \over v+m}\ } where: W {\displaystyle W\ } = weighted rating R {\displaystyle R\ } = average rating for the movie as a number from 1 to 10 (mean) = (Rating) v {\displaystyle v\ } = number of votes/ratings for the movie = (votes) m {\displaystyle m\ } = weight given to the prior estimate (in this case, the number of votes IMDb deemed necessary for the average rating to approach statistical validity) C {\displaystyle C\ } = the mean vote across the whole pool (currently 7.0) Note that W is just the weighted arithmetic mean of R and C with weight vector (v, m). As the number of ratings surpasses m, the confidence of the average rating surpasses the confidence of the mean vote for all films (C), and the weighted Bayesian rating (W) approaches a straight average (R). The closer v (the number of ratings for the film) is to zero, the closer W is to C, where W is the weighted rating and C is the average rating of all films. So, in simpler terms, the fewer ratings/votes cast for a film, the more that film's weighted rating will skew towards the average across all films, while films with many ratings/votes will have a rating approaching their pure arithmetic average. IMDb's approach ensures that a film with only a few ratings, all at 10, would not rank above "The Godfather", for example, with a 9.2 average from over 500,000 ratings. == See also == Recursive Bayesian estimation Generalized expected utility == Notes == == References == Berger, James O. (1985). Statistical Decision Theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. ISBN 0-387-96098-8. MR 0804611. Lehmann, E. L.; Casella, G. (1998). Theory of Point Estimation (2nd ed.). Springer. ISBN 0-387-98502-6. Pilz, Jürgen (1991). "Bayesian estimation".
Bayesian Estimation and Experimental Design in Linear Regression Models. Chichester: John Wiley & Sons. pp. 38–117. ISBN 0-471-91732-X. == External links == "Bayesian estimator", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Bayesian_decision_theory
The decision-matrix method, also known as the Pugh method or Pugh concept selection, invented by Stuart Pugh, is a qualitative technique used to rank the multi-dimensional options of an option set. It is frequently used in engineering for making design decisions but can also be used to rank investment options, vendor options, product options or any other set of multidimensional entities. == Definition == A basic decision matrix consists of establishing a set of criteria and a group of potential candidate designs. One of these is a reference candidate design. The other designs are then compared to this reference design and ranked as better, worse, or the same on each criterion. The number of times "better" and "worse" appears for each design is then displayed, but not summed up. A weighted decision matrix operates in the same way as the basic decision matrix but introduces the concept of weighting the criteria in order of importance. The more important the criterion, the higher the weighting it should be given. == Advantages == The advantage of the decision-matrix method is that it encourages self-reflection amongst the members of a design team and helps them analyze each candidate with minimized bias (since team members can be biased towards certain designs, such as their own). Another advantage of this method is that sensitivity studies can be performed. An example of this might be to see how much one's opinion would have to change in order for a lower-ranked alternative to outrank a competing alternative. == Disadvantages == However, there are some important disadvantages of the decision-matrix method: The list of criteria options is arbitrary. There is no way to know a priori if the list is complete; it is likely that important criteria are missing. Conversely, it is possible that less important criteria are included, causing decision makers to be distracted and biased in their choice of options. Scoring methods, even with weighting, tend to equalize all the requirements.
But a few requirements are "must haves". If enough minor criteria are listed, it is possible for them to add up and select an option that misses a "must have" requirement. The values assigned to each option are guesses, not based on any quantitative measurements. In fact, the entire decision matrix can create the impression of being scientific, even though it requires no quantitative measurements of anything at all. == Morphological analysis == Morphological analysis is another form of decision matrix, employing a multi-dimensional configuration space linked by logical relationships. == See also == MCDA Belief decision matrix == References ==
Wikipedia/Decision-matrix_method
Prospect theory is a theory of behavioral economics, judgment and decision making that was developed by Daniel Kahneman and Amos Tversky in 1979. The theory was cited in the decision to award Kahneman the 2002 Nobel Memorial Prize in Economics. Based on results from controlled studies, it describes how individuals assess their loss and gain perspectives in an asymmetric manner (see loss aversion). For example, for some individuals, the pain from losing $1,000 could only be compensated by the pleasure of earning $2,000. Thus, contrary to expected utility theory (which models the decision that perfectly rational agents would make), prospect theory aims to describe the actual behavior of people. In the original formulation of the theory, the term prospect referred to the predictable results of a lottery. However, prospect theory can also be applied to the prediction of other forms of behavior and decisions. Prospect theory challenges the expected utility theory developed by John von Neumann and Oskar Morgenstern in 1944 and constitutes one of the first economic theories built using experimental methods. == History == In the draft received by the economist Richard Thaler in 1976, the term "Value Theory" was used instead of Prospect Theory. Later on, Kahneman and Tversky changed the title to Prospect Theory to avoid possible confusion. According to Kahneman, the new title was 'meaningless.' == Overview == Prospect theory stems from loss aversion: the observation that agents feel losses more strongly than equivalent gains. It centers on the idea that people derive their utility from gains and losses relative to a certain "neutral" reference point determined by their current individual situation. Thus, rather than being made by a rational agent maximizing a fixed expected utility, value decisions are made relative to the current neutral situation, not following any absolute measure of utility.
Consider two scenarios:

1. a 100% chance to gain $450, or a 50% chance to gain $1000;
2. a 100% chance to lose $500, or a 50% chance to lose $1100.

It is assumed that the agent's individual utility is proportional to the dollar amount (e.g. $1000 would be twice as useful as $500). Prospect theory suggests that:

When faced with a risky choice leading to gains, agents are risk averse, preferring the certain outcome with a lower expected utility (concave value function): they will choose the certain $450 even though the expected utility of the risky gain is higher.
When faced with a risky choice leading to losses, agents are risk seeking, preferring the outcome that has a lower expected utility but the potential to avoid losses (convex value function): they will choose the 50% chance to lose $1100 even though the expected utility is lower, due to the chance that they lose nothing at all.

These two examples are thus in contradiction with the theory of expected utility, which leads only to choices with the maximum utility. Also, the concavity for gains and convexity for losses implies diminishing marginal utility with increasing gains/losses. In other words, someone who has more money has a lower desire for a fixed amount of gain (and lower aversion to a fixed amount of loss) than someone who has less money. The theory continues with a second concept, based on the observation that people attribute excessive weight to events with low probability and insufficient weight to events with high probability. For example, individuals may unconsciously treat an outcome with a probability of 99% as if its probability were 95%, and an outcome with probability of 1% as if it had a probability of 5%. Under- and over-weighting of probabilities is importantly distinct from under- and over-estimating probabilities, a different type of cognitive bias observed for example in the overconfidence effect.
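Since the text assumes utility proportional to the dollar amount, plain expected values can stand in for expected utilities in the two scenarios above; prospect theory's claim is that agents nonetheless pick the lower-expected-value option in each case:

```python
# Expected dollar values for the two scenarios described in the text.
gain_certain = 450                 # 100% chance to gain $450
gain_risky = 0.5 * 1000            # 50% chance to gain $1000 -> EV 500
loss_certain = -500                # 100% chance to lose $500
loss_risky = 0.5 * -1100           # 50% chance to lose $1100 -> EV -550

# Risk aversion in gains: the certain $450 is chosen, forgoing EV 50.
print(gain_risky - gain_certain)   # 50.0
# Risk seeking in losses: the risky loss is chosen, forgoing EV 50.
print(loss_certain - loss_risky)   # 50.0
```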
== Model == The theory describes the decision processes in two stages: During an initial phase termed editing, outcomes of a decision are ordered according to a certain heuristic. In particular, people decide which outcomes they consider equivalent, set a reference point and then consider lesser outcomes as losses and greater ones as gains. The editing phase aims to alleviate any framing effects. It also aims to resolve isolation effects stemming from individuals' propensity to often isolate consecutive probabilities instead of treating them together. The editing process can be viewed as composed of coding, combination, segregation, cancellation, simplification and detection of dominance. In the subsequent evaluation phase, people behave as if they would compute a value (utility), based on the potential outcomes and their respective probabilities, and then choose the alternative having a higher utility. The formula that Kahneman and Tversky assume for the evaluation phase is (in its simplest form) given by: V = ∑ i = 1 n π ( p i ) v ( x i ) {\displaystyle V=\sum _{i=1}^{n}\pi (p_{i})v(x_{i})} where V {\displaystyle V} is the overall or expected utility of the outcomes to the individual making the decision, x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} are the potential outcomes and p 1 , p 2 , … , p n {\displaystyle p_{1},p_{2},\dots ,p_{n}} their respective probabilities and v {\displaystyle v} is a function that assigns a value to an outcome. The value function that passes through the reference point is s-shaped and asymmetrical. Losses hurt more than gains feel good (loss aversion). This differs from expected utility theory, in which a rational agent is indifferent to the reference point. In expected utility theory, the individual does not care how the outcome of losses and gains are framed. 
The function π {\displaystyle \pi } is a probability weighting function that captures the idea that people tend to overreact to small-probability events but underreact to large probabilities. Let ( x , p ; y , q ) {\displaystyle (x,p;y,q)} denote a prospect with outcome x {\displaystyle x} with probability p {\displaystyle p} and outcome y {\displaystyle y} with probability q {\displaystyle q} and nothing with probability 1 − p − q {\displaystyle 1-p-q} . If ( x , p ; y , q ) {\displaystyle (x,p;y,q)} is a regular prospect (i.e., either p + q < 1 {\displaystyle p+q<1} , or x ≥ 0 ≥ y {\displaystyle x\geq 0\geq y} , or x ≤ 0 ≤ y {\displaystyle x\leq 0\leq y} ), then: V ( x , p ; y , q ) = π ( p ) ν ( x ) + π ( q ) ν ( y ) {\displaystyle V(x,p;y,q)=\pi (p)\nu (x)+\pi (q)\nu (y)} However, if p + q = 1 {\displaystyle p+q=1} and either x > y > 0 {\displaystyle x>y>0} or x < y < 0 {\displaystyle x<y<0} , then: V ( x , p ; y , q ) = ν ( y ) + π ( p ) [ ν ( x ) − ν ( y ) ] {\displaystyle V(x,p;y,q)=\nu (y)+\pi (p)\left[\nu (x)-\nu (y)\right]} Applying the first equation to observed preferences over symmetric bets, it can be deduced that ν ( y ) + ν ( − y ) > ν ( x ) + ν ( − x ) {\displaystyle \nu (y)+\nu (-y)>\nu (x)+\nu (-x)} and, equivalently, ν ( − y ) − ν ( − x ) > ν ( x ) − ν ( y ) {\displaystyle \nu (-y)-\nu (-x)>\nu (x)-\nu (y)} for x > y ≥ 0 {\displaystyle x>y\geq 0} . The value function is thus defined on deviations from the reference point: generally concave for gains, commonly convex for losses, and steeper for losses than for gains.
If ( x , p ) {\displaystyle (x,p)} is equivalent to ( y , p q ) {\displaystyle (y,pq)} then ( x , p r ) {\displaystyle (x,pr)} is not preferred to ( y , p q r ) {\displaystyle (y,pqr)} . From the first equation, the equivalence gives π ( p ) ν ( x ) = π ( p q ) ν ( y ) {\displaystyle \pi (p)\nu (x)=\pi (pq)\nu (y)} and the preference gives π ( p r ) ν ( x ) ≤ π ( p q r ) ν ( y ) {\displaystyle \pi (pr)\nu (x)\leq \pi (pqr)\nu (y)} , therefore: π ( p q ) π ( p ) ≤ π ( p q r ) π ( p r ) {\displaystyle {\frac {\pi \left(pq\right)}{\pi \left(p\right)}}\leq {\frac {\pi \left(pqr\right)}{\pi \left(pr\right)}}} This means that for a fixed ratio of probabilities the decision weights are closer to unity when probabilities are low than when they are high. In prospect theory, π {\displaystyle \pi } is never linear. In the case that x > y > 0 {\displaystyle x>y>0} , p > p ′ {\displaystyle p>p'} and p + q = p ′ + q ′ < 1 , {\displaystyle p+q=p'+q'<1,} prospect ( x , p ; y , q ) {\displaystyle (x,p;y,q)} dominates prospect ( x , p ′ ; y , q ′ ) {\displaystyle (x,p';y,q')} , which means that π ( p ) ν ( x ) + π ( q ) ν ( y ) > π ( p ′ ) ν ( x ) + π ( q ′ ) ν ( y ) {\displaystyle \pi (p)\nu (x)+\pi (q)\nu (y)>\pi (p')\nu (x)+\pi (q')\nu (y)} , therefore: π ( p ) − π ( p ′ ) π ( q ′ ) − π ( q ) ≥ ν ( y ) ν ( x ) {\displaystyle {\frac {\pi \left(p\right)-\pi (p')}{\pi \left(q'\right)-\pi \left(q\right)}}\geq {\frac {\nu \left(y\right)}{\nu \left(x\right)}}} As y → x {\displaystyle y\rightarrow x} , π ( p ) − π ( p ′ ) → π ( q ′ ) − π ( q ) {\displaystyle \pi (p)-\pi (p')\rightarrow \pi (q')-\pi (q)} , and since p − p ′ = q ′ − q {\displaystyle p-p'=q'-q} , this would imply that π {\displaystyle \pi } must be linear; however, dominated alternatives are never brought to the evaluation phase, since they are eliminated in the editing phase.
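Subproportionality can be checked numerically. The weighting function below is again the 1992 parametric form, used as an illustrative assumption; the original theory derives the property from observed preferences rather than from a specific formula.

```python
# Numerical check of subproportionality, pi(pq)/pi(p) <= pi(pqr)/pi(pr),
# using the Tversky-Kahneman (1992) weighting function as an illustrative
# assumption. The probabilities p, q, r are arbitrary example values.

def pi(p, gamma=0.61):
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

p, q, r = 0.8, 0.5, 0.5
lhs = pi(p * q) / pi(p)          # ratio of decision weights at higher probabilities
rhs = pi(p * q * r) / pi(p * r)  # same probability ratio, lower probability range

# For a fixed ratio of probabilities, the ratio of decision weights is
# closer to unity when the probabilities are low.
assert lhs <= rhs
```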
Although direct violations of dominance never happen in prospect theory, it is possible that a prospect A dominates B, B dominates C but C dominates A. == Example == To see how prospect theory can be applied, consider the decision to buy insurance. Assume the probability of the insured risk is 1%, the potential loss is $1,000 and the premium is $15. If we apply prospect theory, we first need to set a reference point. This could be the current wealth or the worst case (losing $1,000). If we set the frame to the current wealth, the decision would be to either 1. Pay $15 for insurance, which yields a prospect-utility of v ( − 15 ) {\displaystyle v(-15)} , OR 2. Enter a lottery with possible outcomes of $0 (probability 99%) or −$1,000 (probability 1%), which yields a prospect-utility of π ( 0.01 ) × v ( − 1000 ) + π ( 0.99 ) × v ( 0 ) = π ( 0.01 ) × v ( − 1000 ) {\displaystyle \pi (0.01)\times v(-1000)+\pi (0.99)\times v(0)=\pi (0.01)\times v(-1000)} . According to prospect theory, π ( 0.01 ) > 0.01 {\displaystyle \pi (0.01)>0.01} , because low probabilities are usually overweighted; v ( − 15 ) / v ( − 1000 ) > 0.015 {\displaystyle v(-15)/v(-1000)>0.015} , by the convexity of value function in losses. The comparison between π ( 0.01 ) {\displaystyle \pi (0.01)} and v ( − 15 ) / v ( − 1000 ) {\displaystyle v(-15)/v(-1000)} is not immediately evident. However, for typical value and weighting functions, π ( 0.01 ) > v ( − 15 ) / v ( − 1000 ) {\displaystyle \pi (0.01)>v(-15)/v(-1000)} , and hence π ( 0.01 ) × v ( − 1000 ) < v ( − 15 ) {\displaystyle \pi (0.01)\times v(-1000)<v(-15)} . That is, a strong overweighting of small probabilities is likely to undo the effect of the convexity of v {\displaystyle v} in losses, making the insurance attractive. If we set the frame to -$1,000, we have a choice between v ( 985 ) {\displaystyle v(985)} and π ( 0.99 ) × v ( 1000 ) {\displaystyle \pi (0.99)\times v(1000)} . 
In this case, the concavity of the value function in gains and the underweighting of high probabilities can also lead to a preference for buying the insurance. The interplay of overweighting of small probabilities and concavity-convexity of the value function leads to the so-called fourfold pattern of risk attitudes: risk-averse behavior when gains have moderate probabilities or losses have small probabilities; risk-seeking behavior when losses have moderate probabilities or gains have small probabilities. Below is an example of the fourfold pattern of risk attitudes. The first item in each quadrant shows an example prospect (e.g. a 95% chance to win $10,000 is a high-probability gain). The second item shows the focal emotion that the prospect is likely to evoke. The third item indicates how most people would behave given each of the prospects (either risk averse or risk seeking). The fourth item states the expected attitudes of a potential defendant and plaintiff in discussions of settling a civil suit. Probability distortion means that people generally do not weight probabilities uniformly between 0 and 1. Low probabilities are over-weighted (that is, a person is overly concerned with low-probability outcomes), while medium to high probabilities are under-weighted (that is, a person is not concerned enough with them). The exact point at which probability goes from over-weighted to under-weighted is arbitrary, but a good point to consider is probability = 0.33. A person values probability = 0.01 much more than probability = 0 (probability = 0.01 is said to be over-weighted). However, a person assigns about the same value to probability = 0.4 and probability = 0.5, and the value of probability = 0.99 is much less than the value of probability = 1, a sure thing (probability = 0.99 is under-weighted).
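The fourfold pattern can be reproduced with illustrative value and weighting functions (the Tversky-Kahneman 1992 forms; using a single weighting curve for both gains and losses is a simplifying assumption):

```python
# The fourfold pattern of risk attitudes, reproduced with illustrative
# Tversky-Kahneman (1992) functional forms. Parameters are assumptions.

def v(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def pi(p, gamma=0.61):
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def V(x, p):
    """Prospect value of a single-outcome gamble (x with probability p)."""
    return pi(p) * v(x)

# High-probability gain: risk averse (a sure 9,500 beats 95% of 10,000).
assert v(9500) > V(10000, 0.95)
# Low-probability gain: risk seeking (5% of 10,000 beats a sure 500).
assert V(10000, 0.05) > v(500)
# High-probability loss: risk seeking (95% to lose 10,000 beats a sure -9,500).
assert V(-10000, 0.95) > v(-9500)
# Low-probability loss: risk averse (a sure -500 beats 5% to lose 10,000),
# which is the insurance-buying case discussed above.
assert v(-500) > V(-10000, 0.05)
```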
A further property of probability distortion is subcertainty: π(p) + π(1 − p) < 1, where π(p) is the probability weighting function of prospect theory. == Myopic Loss Aversion (MLA) == Myopic loss aversion (MLA), a concept derived from prospect theory, refers to the natural tendency of humans to focus on short-term losses and gains and to weigh them more heavily than long-term losses and gains. This bias can lead to seemingly poorer decision making, as individuals may focus on avoiding immediate losses instead of achieving long-term gains. An influential study of myopic loss aversion was conducted by Gneezy and Potters in 1997.[9] In this study, participants engaged in a straightforward betting game in which they could either place a bet on the outcome of a coin toss or choose not to bet at all. The participants were provided with a fixed amount of money and were tasked with maximizing their earnings over a series of rounds. The results showed that participants were more likely to place a bet when they had just lost money in the previous round, and more likely to avoid a bet when they had just won money in the previous round. This behavior is consistent with myopic loss aversion, as the participants placed greater weight on their short-term gains and losses than on their overall earnings over the course of the study. Additionally, the findings revealed that participants who were provided with a higher amount of money at the beginning of the study tended to be more risk-averse than those who were given a lower starting amount. This observation supports the diminishing sensitivity to changes in wealth predicted by prospect theory. Overall, the study by Gneezy and Potters emphasizes the existence of myopic loss aversion, demonstrating how this bias can result in non-optimal decisions.
By analyzing how prospect theory and myopic loss aversion influence decision-making, researchers and policymakers can design interventions that help people make more informed choices and attain their long-term goals. In investment decisions, myopic loss aversion can lead to overly conservative choices. For instance, investors may overreact to dips in the prices of stocks in their portfolio, triggering fear and anxiety about losing profit. This reaction can itself produce losses, as investors sell off their stock. Studies in behavioral finance have analyzed this pattern, observing a tendency to avoid high-reward options in the market because the risk of short-term loss weighs heavily on the investor. The behavioral economists Benartzi and Thaler analyzed this concept in connection with the "equity premium puzzle": the observation that stocks have historically outperformed bonds over extended periods of time. They also observed that newer investors tend not to emphasize stocks over bonds. Benartzi and Thaler linked this phenomenon to myopic loss aversion, as young investors tended to abandon stocks after minor dips in the market. This behavior can decrease market predictability: as investors act on short-term losses by selling their stocks, a ripple effect can intensify dips in the economy. When investors who are heavily influenced by a market decline sell their stocks, the increased supply of shares from mass sell-offs lowers prices further. Other investors react to the falling prices by selling in turn, potentially driving the stock price lower still.
Investor anxiety of this kind can strengthen the urge to sell investments for the sake of security, regardless of long-term profit potential. The resulting market fluctuation bears directly on market stability. An example of this effect was seen during economic crises such as the 2008 financial crash, when panic-induced sell-offs heavily impacted market stability. In the period prior to the Great Recession, a "decade-long expansion in US housing market activity peaked in 2006" and came to a halt in 2007. As the trends prior to 2008 hinted at a fall in mortgage pricing, real-estate investors reacted promptly. The mass sell-offs of mortgage-backed investments led to similar instability in other markets, including the credit markets and the stock market. == Applications == === Economics === Some behaviors observed in economics, like the disposition effect or the reversing of risk aversion/risk seeking in the case of gains or losses (termed the reflection effect), can also be explained by prospect theory. An important implication of prospect theory is that the way economic agents subjectively frame an outcome or transaction in their mind affects the utility they expect or receive. Narrow framing is a derivative result which has been documented in experimental settings by Tversky and Kahneman, whereby people evaluate new gambles in isolation, ignoring other relevant risks. This phenomenon can be seen in practice in the reaction of people to stock market fluctuations in comparison with other aspects of their overall wealth; people are more sensitive to spikes in the stock market than to changes in their labor income or the housing market. It has also been shown that narrow framing causes loss aversion among stock market investors. The work of Tversky and Kahneman is largely responsible for the advent of behavioral economics, and is used extensively in mental accounting.
=== Software === The digital age has brought the implementation of prospect theory in software. Framing and prospect theory have been applied to a diverse range of situations which appear inconsistent with standard economic rationality: the equity premium puzzle, the excess returns puzzle and long swings/PPP puzzle of exchange rates through the endogenous prospect theory of Imperfect Knowledge Economics, the status quo bias, various gambling and betting puzzles, intertemporal consumption, and the endowment effect. It has also been argued that prospect theory can explain several empirical regularities observed in the context of auctions (such as secret reserve prices) which are difficult to reconcile with standard economic theory. Online pay-per-bid auction sites are a classic example of decision making under risk. Previous attempts at predicting consumer behavior have shown that utility theory does not sufficiently describe decision making under risk. When prospect theory was added to a previously existing model that attempted to explain consumer behavior during auctions, out-of-sample predictions were shown to be more accurate than those of a corresponding expected utility model. Specifically, prospect theory was boiled down to certain elements: preference, loss aversion and probability weighting. These elements were then used to find a backward solution on 537,045 auctions. The greater accuracy may be explained by the new model's ability to correct for two behavioral irrationalities: the sunk cost fallacy and average auctioneer revenues above current retail price. These findings also imply that using prospect theory as a descriptive theory of decision making under risk is accurate in situations where risk arises through the interactions of different people.
=== Politics === Given the necessary degree of uncertainty for which prospect theory is applied, it should come as no surprise that it and other psychological models are applied extensively in the context of political decision-making. Both rational choice and game theoretical models generate significant predictive power in the analysis of politics and international relations (IR). But prospect theory, unlike the alternative models, (1) is "founded on empirical data", (2) allows and accounts for dynamic change, (3) addresses previously-ignored modular elements, (4) emphasizes the situation in the decision-making process, (5) "provides a micro-foundational basis for the explanation of larger phenomena", and (6) stresses the importance of loss in utility and value calculations. Moreover, again unlike other models, prospect theory "asks different sorts of questions, seeks different evidence, and reaches different conclusions." However, there are shortcomings inherent in prospect theory's political application, such as the dilemma regarding an actor's perceived position on the gain-loss domain spectrum, and the discordance between ideological and pragmatic (i.e. 'in the lab' versus 'in the field') assessments of an actor's propensity toward seeking or avoiding risk. That said, political scientists have applied prospect theory to a wide range of issues in domestic and comparative politics. For example, they have found that politicians are more likely to phrase a radical economic policy as one ensuring 90% employment rather than 10% unemployment, because framing it as the former puts the citizenry in a "domain of gain," which is conducive to greater populace satisfaction. On a broader scale, consider an administration debating the implementation of a controversial reform that carries a small chance of widespread revolt.
"[T]he disutility induced by loss aversion," even with minute probabilities of said insurrection, will dissuade the government from moving forward with the reform. Scholars have employed prospect theory to shed light on a number of issue areas in politics. For example, Kurt Weyland finds that political leaders do not always undertake bold and politically risky domestic initiatives when they are at the pinnacle of their power. Instead, such policies often appear to be risky gambits initiated by politically vulnerable regimes. He suggests that in Latin America, politically weakened governments were more likely to implement fundamental and economically painful market-oriented reforms, even though they were more vulnerable to political backlash. Barbara Vis and Kees van Kersbergen have reached a similar conclusion in their investigation of Italian welfare reforms. Maria Fanis uses prospect theory to show how risk acceptance can help domestic groups overcome collective action problems inherent to coalition building. She suggests that collective action is more likely in a perceived domain of loss because individuals become more willing to accept the risk of free riding by others. In Chile, this process led domestic interest groups to form unlikely political coalitions. Zeynep Somer-Topcu's research suggests that political parties respond more strongly to electoral defeat than to success in the next election cycle. As prospect theory predicts, parties are more likely to shift their policies in response to a vote loss in the previous election cycle compared to a vote gain. Lawrence Kuznar and James Lutz find that loss frames can increase individuals' support for terrorist groups. === International relations === International relations theorists have applied prospect theory to a wide range of issues in world politics, especially security-related matters.
For example, in war-time, policy-makers, when in a perceived domain of loss, are more likely to take risks that would otherwise have been avoided, e.g. "gambling on a risky rescue mission", or implementing radical domestic reform to support military efforts. Early applications of prospect theory in International Relations emphasized the potential to explain anomalies in foreign policy decision-making that remained difficult to account for on the basis of rational choice theory. They developed detailed qualitative case studies of specific foreign policy decisions to explore the role of framing effects in choice selection. For example, Rose McDermott applied prospect theory to a series of case studies in American foreign policy, including the Suez Crisis in 1956, the U-2 Crisis in 1960, the U.S. decision to admit the Iranian shah to the United States in 1979, and the U.S. decision to carry out a hostage rescue mission in 1980. Jeffrey Berejikian employed prospect theory to analyze the genesis of the Montreal Protocol, a landmark environmental agreement. William Boettcher integrated elements of prospect theory with psychological research on personality dispositions to construct a "Risk Explanation Framework," which he used to analyze foreign-policy decision making. He then evaluated the framework against six case studies on presidential foreign policy decision-making. === Insurance === Applications of prospect theory in the context of insurance seek to explain consumer choices. Sydnor (2010) suggests that the probability weighting aspect of prospect theory explains the behaviour of consumers who choose a higher premium for a reduced deductible even when the annualised claim rate is very low (approximately 5%). In a study of 50,000 customers, policyholders had four options for the deductible on their policy: $100, $250, $500 or $1,000. It was found that a $500 deductible carried a $715 annual premium, while a $1,000 deductible carried a $615 premium.
The customers who chose the $500 deductible were paying an additional $100 per year even though the chance that a claim would be made, and the deductible paid, was extremely low. Under the expected utility framework, this choice can only be rationalised by very high levels of risk aversion. Households place a greater weight on the probability that a claim will be made when choosing a policy, which suggests that the household's reference point significantly influences decisions about premiums and deductibles. This is consistent with the theory that people assign excessive weight to scenarios with low probabilities and insufficient weight to events with high probability. == Limits and extensions == The original version of prospect theory gave rise to violations of first-order stochastic dominance. That is, prospect A might be preferred to prospect B even if the probability of receiving a value x or greater is at least as high under prospect B as it is under prospect A for all values of x, and is greater for some value of x. Later theoretical improvements overcame this problem, but at the cost of introducing intransitivity in preferences. A revised version, called cumulative prospect theory, overcame this problem by using a probability weighting function derived from rank-dependent expected utility theory. Cumulative prospect theory can also be used for infinitely many or even continuous outcomes (for example, if the outcome can be any real number). An alternative solution to overcome these problems within the framework of (classical) prospect theory has been suggested as well. The reference point of the theory's inverse-S-shaped value curve can also create limitations, since the curve may be discontinuous at that point and exhibit a geometric violation. This leads to limitations in accounting for the zero-outcome effect and the absence of behavioral conditionality in risky decisions, as well as limitations in deriving the curve.
A transitionary concave-convex universal system was proposed to eliminate this limitation. Critics from the field of psychology argued that even if prospect theory arose as a descriptive model, it offers no psychological explanations for the processes stated in it. Furthermore, factors that are equally important to decision making processes have not been included in the model, such as emotion. A relatively simple ad hoc decision strategy, the priority heuristic, has been suggested as an alternative model. While it can predict the majority choice in all (one-stage) gambles in Kahneman and Tversky (1979), and predicts the majority choice better than cumulative prospect theory across four different data sets with a total of 260 problems, this heuristic fails to predict many simple decision situations that are typically not tested in experiments, and it does not explain heterogeneity between subjects. An international survey in 53 countries, published in Theory and Decision in 2017, confirmed that prospect theory describes decisions on lotteries well, not only in Western countries but across many different cultures. The study also found cultural and economic factors that systematically influence average prospect theory parameters. A study published in Nature Human Behaviour in 2020 replicated research on prospect theory and concluded that it successfully replicated: "We conclude that the empirical foundations for prospect theory replicate beyond any reasonable thresholds." == Critiques == Although prospect theory is a widely celebrated idea in behavioural economics, it does have limitations. The reference point has been argued to be difficult to determine precisely in any given context. Many external factors can influence what the reference point is, making it difficult to define what a “gain” and a “loss” actually are. Kőszegi and Rabin (2007) present the idea of a personal equilibrium in decision making.
This is essentially the premise that expectations and context have a large impact on determining the reference point and therefore the perception of “gains” and “losses”. Considering personal equilibrium and choice with risk creates even more ambiguity about the perception of what the reference point may be. Some critics have charged that while prospect theory seeks to predict what people choose, it does not adequately describe the actual process of decision-making. For example, Nathan Berg and Gerd Gigerenzer claim that neither classical economics nor prospect theory provide a convincing explanation of how people actually make decisions. They go so far as to claim that prospect theory is even more demanding of cognitive resources than classical expected utility theory. Moreover, scholars have raised doubts about the degree to which framing effects matter. For instance, John List argues that framing effects diminish in complex decision environments. His experimental evidence suggests that as actors gain experience with the consequences of competitive markets, they behave more like rational actors and the impact of prospect theory diminishes. Steven Kachelmeier and Mohamed Shehata find little support for prospect theory among experimental subjects in China. They do not, however, make a cultural argument against prospect theory. Rather, they conclude that when payoffs are large relative to net wealth, the effect of prospect theory diminishes. == See also == == Notes == == Further reading == Baron, Jonathan (2006). Thinking and Deciding (4th ed.). Cambridge University Press. ISBN 978-1-139-46602-8. Retrieved March 10, 2016. Dacey, Raymond; Zielonka, Piotr (2013). "High volatility eliminates the disposition effect in a market crisis". Decyzje. 10 (20): 5–20. doi:10.7206/DEC.1733-0092.9. Easterlin, Richard A. "Does Economic Growth Improve the Human Lot?", in Abramovitz, Moses; David, Paul A.; Reder, Melvin Warren (1974). 
Nations and Households in Economic Growth: Essays in Honor of Moses Abramovitz. Academic Press. ISBN 978-0-12-205050-3. Retrieved March 10, 2016. Frank, Robert H. (1997). "The frame of reference as a public good". The Economic Journal. 107 (445): 1832–1847. CiteSeerX 10.1.1.205.3040. doi:10.1111/j.1468-0297.1997.tb00086.x. ISSN 0013-0133. Kahneman, Daniel (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. ISBN 978-1-4299-6935-2. Retrieved March 10, 2016. Kahneman, Daniel; Tversky, Amos (1979). "Prospect Theory: An Analysis of Decision under Risk" (PDF). Econometrica. 47 (2): 263–291. CiteSeerX 10.1.1.407.1910. doi:10.2307/1914185. ISSN 0012-9682. JSTOR 1914185. Kahneman, Daniel, Jack L. Knetsch, and Richard H. Thaler (1991). “Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias.” Journal of Economic Perspectives 5 (1): 193–206. Kahneman, Daniel, and Amos Tversky, eds. (2000). Choices, Values, and Frames. Cambridge: Cambridge University Press. Lynn, John A. (1999). The Wars of Louis XIV 1667-1714. Routledge. ISBN 9780582056299. Retrieved March 10, 2016. McDermott, Rose; Fowler, James H.; Smirnov, Oleg (2008). "On the Evolutionary Origin of Prospect Theory Preferences". The Journal of Politics. 70 (2): 335–350. doi:10.1017/S0022381608080341. ISSN 0022-3816. S2CID 1788641. Post, Thierry; van den Assem, Martijn J; Baltussen, Guido; Thaler, Richard H (2008). "Deal or No Deal? Decision Making under Risk in a Large-Payoff Game Show". American Economic Review. 98 (1): 38–71. doi:10.1257/aer.98.1.38. hdl:10419/86601. ISSN 0002-8282. S2CID 12816022. Quattrone, George A., and Amos Tversky (1988). “Contrasting Rational and Psychological Analyses of Political Choice.” American Political Science Review 82 (3): 719–736. Shafir, Eldar; LeBoeuf, Robyn A. (2002). "Rationality". Annual Review of Psychology. 53 (1): 491–517. doi:10.1146/annurev.psych.53.100901.135213. ISSN 0066-4308. PMID 11752494. Tversky, Amos; Kahneman, Daniel (1986). 
"Rational Choice and the Framing of Decisions" (PDF). The Journal of Business. 59 (S4): S251. CiteSeerX 10.1.1.463.1334. doi:10.1086/296365. S2CID 2817965. Tversky, Amos; Kahneman, Daniel (1992). "Advances in prospect theory: Cumulative representation of uncertainty". Journal of Risk and Uncertainty. 5 (4): 297–323. CiteSeerX 10.1.1.320.8769. doi:10.1007/BF00122574. ISSN 0895-5646. S2CID 8456150. == External links == An introduction to Prospect Theory Prospect Theory
Wikipedia/Prospect_theory
The analytic network process (ANP) is a more general form of the analytic hierarchy process (AHP) used in multi-criteria decision analysis. AHP structures a decision problem into a hierarchy with a goal, decision criteria, and alternatives, while the ANP structures it as a network. Both then use a system of pairwise comparisons to measure the weights of the components of the structure, and finally to rank the alternatives in the decision. ANP can be used for both best alternative selection and judgmental forecasting. == Hierarchy vs. network == In the AHP, each element in the hierarchy is considered to be independent of all the others—the decision criteria are considered to be independent of one another, and the alternatives are considered to be independent of the decision criteria and of each other. But in many real-world cases, there is interdependence among the items and the alternatives. ANP does not require independence among elements, so it can be used as an effective tool in these cases. To illustrate this, consider a simple decision about buying an automobile. The decision maker may want to decide among several moderately-priced full-size sedans. He might choose to base his decision on only three factors: purchase price, safety, and comfort. Both the AHP and ANP would provide useful frameworks to use in making his decision. The AHP would assume that purchase price, safety, and comfort are independent of one another, and would evaluate each of the sedans independently on those criteria. The ANP would allow consideration of the interdependence of price, safety, and comfort. If one could get more safety or comfort by paying more for the automobile (or less by paying less), the ANP could take that into account. Similarly, the ANP could allow the decision criteria to be affected by the traits of the cars under consideration. If, for example, all the cars are very, very safe, the importance of safety as a decision criterion could appropriately be reduced. 
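The pairwise-comparison machinery shared by AHP and ANP can be sketched for the automobile example: priorities are commonly taken as the principal eigenvector of a reciprocal judgment matrix, approximated here by power iteration. The specific judgments below are invented for illustration.

```python
# Priorities from a reciprocal pairwise-comparison matrix, as used in both
# AHP and ANP. The judgments (price vs. safety vs. comfort) are a made-up
# example: entry A[i][j] states how much more important criterion i is
# than criterion j, and A[j][i] = 1 / A[i][j].

A = [
    [1.0, 3.0, 5.0],   # price judged 3x as important as safety, 5x comfort
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

w = [1.0] * len(A)
for _ in range(100):  # power iteration converges to the principal eigenvector
    w = [sum(a_ij * w_j for a_ij, w_j in zip(row, w)) for row in A]
    s = sum(w)
    w = [x / s for x in w]

# Priorities for price, safety, comfort: they sum to 1 and preserve the
# order of the judgments (price > safety > comfort).
print([round(x, 3) for x in w])
```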
== Literature and community == Academic articles about ANP appear in journals dealing with the decision sciences, and several books have been written on the subject. There are numerous practical applications of ANP, many of them involving complex decisions about benefits (B), opportunities (O), costs (C) and risks (R). Studying these applications can be very useful in understanding the complexities of the ANP. The literature contains hundreds of elaborately worked out examples of the process, developed by executives, managers, engineers, MBA and Ph.D. students and others from many countries. About a hundred such uses are illustrated and discussed in The Encyclicon, a dictionary of decisions with dependence and feedback. Academics and practitioners meet biennially at the International Symposium on the Analytic Hierarchy Process (ISAHP), which, despite its name, devotes considerable attention to the ANP. == The steps == Understanding of the ANP is best achieved by using ANP software to work with previously-completed decisions. One of the field's standard texts lists the following steps: Make sure that you understand the decision problem in detail, including its objectives, criteria and subcriteria, actors and their objectives and the possible outcomes of that decision. Give details of influences that determine how that decision may come out. Determine the control criteria and subcriteria in the four control hierarchies one each for the benefits, opportunities, costs and risks of that decision and obtain their priorities from paired comparison matrices. You may use the same control criteria and perhaps subcriteria for all of the four merits. If a control criterion or subcriterion has a global priority of 3% or less, you may consider carefully eliminating it from further consideration. The software automatically deals only with those criteria or subcriteria that have subnets under them. 
For benefits and opportunities, ask what gives the most benefits or presents the greatest opportunity to influence fulfillment of that control criterion. For costs and risks, ask what incurs the most cost or faces the greatest risk. Sometimes (very rarely), the comparisons are made simply in terms of benefits, opportunities, costs, and risks by aggregating all the criteria of each BOCR merit into that merit.

Determine a complete set of network clusters (components) and their elements that are relevant to each and every control criterion. To organize the development of the model, number and arrange the clusters and their elements in a convenient way (perhaps in a column). Use identical labels to represent the same cluster and the same elements for all the control criteria.

For each control criterion or subcriterion, determine the appropriate subset of clusters of the comprehensive set, with their elements, and connect them according to their outer and inner dependence influences. An arrow is drawn from a cluster to any cluster whose elements influence it.

Determine the approach you want to follow in the analysis of each cluster or element: influencing other clusters and elements with respect to a criterion (the suggested approach), or being influenced by other clusters and elements. The chosen sense (influencing or being influenced) must apply to all the criteria in the four control hierarchies for the entire decision.

For each control criterion, construct the supermatrix by laying out the clusters in the order they are numbered, with all the elements in each cluster listed both vertically on the left and horizontally at the top. Enter in the appropriate position the priorities derived from the paired comparisons as subcolumns of the corresponding column of the supermatrix.
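The supermatrix layout described above can be sketched in code. Everything below is invented for illustration: two hypothetical clusters (criteria {price, safety} and alternatives {car1, car2}), with each column holding the priority vector of the elements that influence that column's element.

```python
import numpy as np

# Element order: [price, safety, car1, car2]
n = 4
W = np.zeros((n, n))

# Each column is the priority subcolumn (from paired comparisons)
# of the elements influencing the column element; zeros mean no
# influence. All numbers are invented for the sketch.
W[2:4, 0] = [0.8, 0.2]  # alternatives' priorities w.r.t. price
W[2:4, 1] = [0.3, 0.7]  # alternatives' priorities w.r.t. safety
W[0:2, 2] = [0.6, 0.4]  # criteria's priorities w.r.t. car1 (feedback)
W[0:2, 3] = [0.5, 0.5]  # criteria's priorities w.r.t. car2 (feedback)

# After cluster weighting, every column must be stochastic (sum to 1);
# here each column element is influenced by a single cluster, so the
# subcolumns already satisfy this.
assert np.allclose(W.sum(axis=0), 1.0)
print(W)
```

When a column element is influenced by several clusters, the cluster comparison weights of the next step scale the corresponding column blocks so that each column again sums to one.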
Perform paired comparisons on the elements within the clusters themselves, according to their influence on each element in another cluster they are connected to (outer dependence) or on elements in their own cluster (inner dependence). In making comparisons, always have a criterion in mind: comparisons of elements, in terms of which element influences a given third element more strongly than another, are made with a control criterion or subcriterion of the control hierarchy in mind.

Perform paired comparisons on the clusters as they influence each cluster to which they are connected, with respect to the given control criterion. The derived weights are used to weight the elements of the corresponding column blocks of the supermatrix; assign a zero when there is no influence. The result is the weighted, column-stochastic supermatrix.

Compute the limit priorities of the stochastic supermatrix according to whether it is irreducible (primitive or imprimitive [cyclic]) or reducible with one being a simple or a multiple root, and whether the system is cyclic or not. Two kinds of outcomes are possible. In the first, all the columns of the matrix are identical, and each gives the relative priorities of the elements, from which the priorities of the elements in each cluster are normalized to one. In the second, the limit cycles in blocks, and the different limits are summed, averaged, and again normalized to one for each cluster. Although the priority vectors are entered in the supermatrix in normalized form, the limit priorities are put in idealized form because the control criteria do not depend on the alternatives.

Synthesize the limiting priorities by weighting each idealized limit vector by the weight of its control criterion and adding the resulting vectors for each of the four merits: Benefits (B), Opportunities (O), Costs (C) and Risks (R). There are now four vectors, one for each of the four merits.
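The limit-priority and synthesis steps can be sketched numerically. The supermatrix entries, merit priorities, and strategic ratings below are all invented for illustration; the sketch covers only the simplest case, a primitive (irreducible, aperiodic) weighted supermatrix, where raising the matrix to powers converges to a limit whose identical columns hold the limit priorities.

```python
import numpy as np

# A small weighted (column-stochastic) supermatrix; values invented.
W = np.array([
    [0.0, 0.5, 0.3],
    [0.6, 0.0, 0.7],
    [0.4, 0.5, 0.0],
])

# Raise W to successive powers until the product stabilizes.
L = W.copy()
for _ in range(200):
    nxt = L @ W
    if np.allclose(nxt, L, atol=1e-12):
        break
    L = nxt
limit = L[:, 0]   # any column of the limit matrix

# Synthesis across the four merits: suppose each merit's network
# yielded idealized priorities for two alternatives (invented numbers).
B = np.array([1.0, 0.8]); O = np.array([0.9, 1.0])
C = np.array([0.7, 1.0]); R = np.array([1.0, 0.6])

ratio = (B * O) / (C * R)               # multiplicative BO/CR synthesis
wB, wO, wC, wR = 0.4, 0.2, 0.25, 0.15   # normalized strategic ratings
additive = wB*B + wO*O - (wC*C + wR*R)  # weighted b + o - c - r

print(limit, ratio, additive)
```

The two synthesis formulas can rank alternatives differently, which is why the text distinguishes the ratio answer preferred under limited resources from the subtractive overall synthesis.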
An answer involving ratio values of the merits is obtained by forming the ratio BiOi / (CiRi) for each alternative i from the four vectors, that is, the product of its benefit and opportunity priorities divided by the product of its cost and risk priorities. The synthesized ideals for all the control criteria under each merit may result in an ideal whose priority is less than one for that merit; only an alternative that is ideal for all the control criteria under a merit receives the value one after synthesis for that merit. The alternative with the largest ratio is chosen for some decisions; companies and individuals with limited resources often prefer this type of synthesis.

Determine strategic criteria and their priorities to rate the top-ranked (ideal) alternative for each of the four merits, one at a time. Normalize the four ratings thus obtained and use them to calculate the overall synthesis of the four vectors: for each alternative, subtract the sum of the weighted costs and risks from the sum of the weighted benefits and opportunities.

Perform sensitivity analysis on the final outcome. Sensitivity analysis is concerned with "what if" kinds of questions, checking whether the final answer is stable to changes in the inputs, whether judgments or priorities. Of special interest is whether such changes alter the order of the alternatives. How significant a change is can be measured with the Compatibility Index of the original outcome and each new outcome.

== See also ==
Analytic hierarchy process
Decision making
Decision making software
L. L. Thurstone
Law of comparative judgment
Multi-criteria decision analysis
Ordinal Priority Approach
Pairwise comparison
Preference
Thomas L. Saaty

== References ==

== External links ==
Superdecisions tutorial software for the ANP
Fuzzy-trace theory (FTT) is a theory of cognition originally proposed by Valerie F. Reyna and Charles Brainerd to explain cognitive phenomena, particularly in memory and reasoning. FTT posits two types of memory processes (verbatim and gist) and, therefore, it is often referred to as a dual process theory of memory. According to FTT, retrieval of verbatim traces (recollective retrieval) is characterized by mental reinstatement of the contextual features of a past event, whereas retrieval of gist traces (nonrecollective retrieval) is not. In fact, gist processes form representations of an event's semantic features rather than its surface details, the latter being a property of verbatim processes. The theory has been used in areas such as cognitive psychology, human development, and social psychology to explain, for instance, false memory and its development, probability judgments, medical decision making, risk perception and estimation, and biases and fallacies in decision making. FTT can explain phenomena involving both true memories (i.e., memories about events that actually happened) and false memories (i.e., memories about events that never happened).

== History ==
FTT was initially proposed in the 1990s as an attempt to unify findings from the memory and reasoning domains that could not be predicted or explained by earlier approaches to cognition and its development (e.g., constructivism and information processing). One such challenge was the statistical independence between memory and reasoning, that is, memory for the background facts of problem situations is often unrelated to accuracy in reasoning tasks. Such findings called for a rethinking of the memory-reasoning relation, which in FTT took the form of a dual-process theory linking basic concepts from psycholinguistics and Gestalt theory to memory and reasoning. More specifically, FTT posits that people form two types of mental representations of a past event, called verbatim and gist traces.
Gist traces are fuzzy representations of a past event (e.g., its bottom-line meaning), hence the name fuzzy-trace theory, whereas verbatim traces are detailed representations of a past event. Although people are capable of processing both verbatim and gist information, they prefer to reason with gist traces rather than verbatim. This implies, for example, that even if people are capable of understanding ratio concepts like probabilities and prevalence rates, which are the standard for the presentation of health- and risk-related data, their choice in decision situations will usually be governed by the bottom-line meaning of it (e.g., "the risk is high" or "the risk is low"; "the outcome is bad" or "the outcome is good") rather than the actual numbers. More importantly, in FTT, memory-reasoning independence can be explained in terms of preferred modes of processing when one performs a memory task (e.g., retrieval of verbatim traces) relative to when one performs a reasoning task (e.g., preference for reasoning with gist traces). In 1999, a similar approach was applied to human vision. It suggested that human vision has two types of processing: one that aggregates local spatial receptive fields, and one that parses the local receptive field. People used prior experience, gists, to decide which process dominates a perceptual decision. The work attempted to link Gestalt theory and psychophysics (i.e., independent linear filters). This theory was further developed into fuzzy image processing and used in information processing technology and edge detection. == Memory == FTT posits two types of memory processes (verbatim and gist) and, therefore, it is often referred to as a dual process theory of memory. According to FTT, retrieval of verbatim traces (recollective retrieval) is characterized by mental reinstatement of the contextual features of a past event, whereas retrieval of gist traces (nonrecollective retrieval) is not. 
In fact, gist processes form representations of an event's semantic features rather than its surface details, the latter being a property of verbatim processes. In the memory domain, FTT's notion of verbatim and gist representations has been influential in explaining true memories (i.e., memories about events that actually happened) as well as false memories (i.e., memories about events that never happened). The following five principles have been used to predict and explain true and false memory phenomena: === Principles === ==== Process independence ==== ===== Parallel storage ===== The principle of parallel storage asserts that the encoding and storage of verbatim and gist information operate in parallel rather than in a serial fashion. For instance, suppose that a person is presented with the word "apple" in red color. On the one hand, according to the principle of parallel storage of verbatim and gist traces, verbatim features of the target item (e.g., the word was apple, it was presented in red, printed in boldface and italic, and all but the first letter were presented in lowercase) and gist features (e.g., the word was a type of fruit) would be encoded and stored simultaneously via distinct pathways. Conversely, if verbatim and gist traces are stored in a serial fashion, then gist features of the target item (the word was a type of fruit) would be derived from its verbatim features and, therefore, the formation of gist traces would depend on the encoding and storage of verbatim traces. The latter idea was often assumed by early memory models. However, despite the intuitive appeal of the serial processing approach, research suggests that the encoding and storage of gist traces do not depend on verbatim ones. Several studies have converged on the finding that the meaning of target items is encoded independently of, and even prior to, the encoding of the surface form of the same items. 
Ankrum and Palmer, for example, found that when participants are presented with a familiar word (e.g., apple) for a very brief period (100 milliseconds), they are able to identify the word itself ("was it apple?") better than its letters ("did it contain the letter L?").

===== Dissociated retrieval =====
Similar to the principle of parallel storage, retrieval of verbatim and gist traces also occurs via dissociated pathways. According to the principle of dissociated retrieval, recollective and nonrecollective retrieval processes are independent of each other. Consequently, this principle allows verbatim and gist processes to be differentially influenced by factors such as the type of retrieval cues and the availability of each form of representation. In connection with Tulving's encoding specificity principle, items that were actually presented in the past are better cues for verbatim traces than items that were not. Similarly, items that were not presented in the past but preserve the meaning of presented items are usually better cues for gist traces. Suppose, for example, that the subjects of an experiment are presented with a word list containing several dog breeds, such as poodle, bulldog, greyhound, doberman, beagle, collie, boxer, mastiff, husky, and terrier. During a recognition test, the words poodle, spaniel, and chair are presented. According to the principle of dissociated retrieval, retrieval of verbatim and gist traces does not depend on each other and, therefore, different types of test probes might serve as better cues to one type of trace than another. In this example, test probes such as poodle (targets, or studied items) will be better retrieval cues for verbatim traces than gist, whereas test probes such as spaniel (related distractors: non-studied items related to targets) will be better retrieval cues for gist traces than verbatim.
Chair, on the other hand, would neither be a better cue for verbatim traces nor for gist traces because it was not presented and is not related to dogs. If verbatim and gist processes were dependent, then factors that affect one process would also affect the other in the same direction. However, several experiments showing, for example, differential forgetting rates between memory for the surface details and memory for the bottom-line meaning of past events favor the notion of dissociated retrieval of verbatim and gist traces. In the case of forgetting rates, those experiments have shown that, over time, verbatim traces become inaccessible at a faster rate than gist traces. Brainerd, Reyna, and Kneer, for instance, found that delay drives true recognition rates (supported by both verbatim and gist traces) and false recognition rates (supported by gist and suppressed by verbatim traces) in opposite directions, namely true memory decays over time while false memory increases. ==== Opponent processes in false memory ==== The principle of opponent processes describes the interaction between verbatim and gist processes in creating true and false memories. Whereas true memory is supported by both verbatim and gist processes, false memory is supported by gist processes and suppressed by verbatim processes. In other words, verbatim and gist processes work in opposition to one another when it comes to false memories. Suppose, for example, that one is presented with a word list such as lemon, apple, pear, and citrus. During a recognition test, the items lemon (target), orange (related distractor), and fan (unrelated distractor) are shown. In this case, retrieval of a gist trace (fruits) supports acceptance of both test probes lemon (true memory) and orange (false memory), whereas retrieval of a verbatim trace (lemon) only supports acceptance of the test probe lemon. 
In addition, retrieval of an exclusory verbatim trace ("I saw only the words lemon, apple, pear, and citrus") suppresses acceptance of false but related items such as orange through an operation known as recollection rejection. If neither verbatim nor gist traces are retrieved, then one might accept any test probe on the basis of response bias. This principle plays a key role in FTT's explanation of experimental dissociations between true and false memories (e.g., when a variable affects one type of memory without affecting the other, or when it produces opposite effects on them). The time of exposure of each word during study and the number of repetitions have been shown to produce such dissociations. More specifically, while true memory follows a monotonically increasing function when plotted against presentation duration, false memory rates exhibit an inverted-U pattern. Similarly, repetition is monotonically related to true memory (true memory increases as a function of the number of repetitions) and non-monotonically related to false memory (repetition produces an inverted-U relation with false memory).

==== Retrieval phenomenology ====
Retrieval phenomenologies are spontaneous mental experiences associated with the act of remembering. They were first systematically characterized by E. K. Strong in the early 1900s. Strong identified two distinct types of introspective phenomena associated with memory retrieval that have since been termed recollection (or remembrance) and familiarity. Whereas the former is characterized as retrieval associated with recollection of past experiences, the latter lacks such association. The two forms of experience can be illustrated by everyday expressions such as "I remember that!" (recollection) and "That seems familiar..." (familiarity). In FTT, retrieval of verbatim traces often produces recollective phenomenology and thus is frequently referred to as recollective retrieval.
However, one feature of FTT is that recollective phenomenology is not particular to one type of memory process as posited by other dual-process theories of memory. Instead, FTT posits that retrieval of gist traces can also produce recollective phenomenology under some circumstances. When gist resemblance between a false item and memory is high and compelling, this gives rise to a phenomenon called phantom recollection, which is a vivid, but false, memory deemed to be true. ==== Developmental variability in dual processes ==== The principle of developmental variability in dual processes posits that verbatim and gist processes show variability across the lifespan. More specifically, verbatim and gist processes have been shown to improve between early childhood and young adulthood. Regarding verbatim processes, older children are better at retrieval of verbatim traces than younger children, although even very young children (4-year-olds) are able to retrieve verbatim information at above chance level. For instance, source memory accuracy greatly increases between 4-year-olds and 6-year-olds, and memory for nonsense words (i.e., words without a meaning, such as neppez) has been shown to increase between 7- and 10-year-olds. Gist processes also improve with age. For example, semantic clustering in free recall increases from 8-year-olds to 14-year-olds, and meaning connection across words and sentences has been shown to improve between 6- and 9-year-olds. In particular, the notion that gist memory improves with age plays a central role in FTT's prediction of age increases in false memory, a counterintuitive pattern that has been called developmental reversal. Regarding old age, several studies suggest that verbatim memory declines between early and late adulthood, while gist memory remains fairly stable. Experiments indicate that older adults perform worse on tasks that require retrieval of surface features from studied items relative to younger adults. 
In addition, results with measurement models that quantify verbatim and gist processes indicate that older adults are less able to use verbatim traces during recall than younger adults.

=== False memories ===
When people try to remember past events (e.g., a birthday party or their most recent dinner), they often commit two types of errors: errors of omission and errors of commission. The former is known as forgetting, while the latter is better known as false memory. False memories can be separated into spontaneous and implanted false memories. Spontaneous false memories result from endogenous (internal) processes, such as meaning processing, while implanted false memories are the result of exogenous (external) processes, such as the suggestion of false information by an outside source (e.g., an interviewer asking misleading questions). Early research suggested that younger children are more susceptible to suggestion of false information than adults. However, research has since indicated that younger children are much less likely to form false memories than older children and adults. Moreover, contrary to common sense, true memories are not more stable than false ones: studies have shown that false memories are actually more persistent than true memories. According to FTT, such a pattern arises because false memories are supported by memory traces that are less susceptible to interference and forgetting (gist traces) than the traces that suppress them and also support true memories (verbatim traces). FTT is not merely a model of false memory but rather a model that explains how memory interacts with higher reasoning processes. Essentially, the gist and verbatim traces of whatever the subject is experiencing have a major effect on the information that the subject falsely remembers.
Verbatim and gist traces support memory performance because retrieval can draw on either kind of trace, depending on the retrieval cues available, the accessibility of each kind of memory, and forgetting. Although not primarily a model of false memory, FTT is able to predict true and false memories associated with narratives and sentences, which is especially relevant to eyewitness testimony. Five explanatory principles underlie FTT's account of false memory, laying out the differences between experiences dealing with gist and verbatim traces:

The storage of verbatim and gist traces is parallel: the surface content and the meaning content of an experience are stored alongside each other. Verbatim traces represent the exact surface forms of directly experienced events; gist traces are stored at many levels of specificity.

The retrieval of gist and verbatim traces is dissociated: retrieval cues work best with verbatim traces when the subject directly experienced the events in question, whereas events that were not explicitly experienced are better cued through gist traces. Surface memories held in verbatim traces typically decline more rapidly than memories that deal with meaning.

False memory and the dual-opponent processes: the effects of retrieval cues on false memory typically differ between verbatim and gist traces. Gist traces support false memory, because the meaning an item has for the subject makes it seem familiar, whereas verbatim processes suppress false memory by undoing that familiarity. This rule has an exception when a false memory is presented to the subject as a suggestion: in that case, retrieval cues for both gist and verbatim traces of the suggestion support the false memory, while verbatim traces of the originally experienced events suppress it.
Variability in development: retrieval of both gist and verbatim memory varies across development, and both improve from childhood into adulthood. This is especially true of gist traces, as the ability to connect meaning across different items and events improves with age.

Retrieval phenomenology: gist and verbatim processes both support vivid remembering. Retrieval of either kind of trace can be accompanied by an experience of remembering, although recollections based on gist traces tend to be more generic, whereas those based on verbatim traces involve conscious reinstatement of the original experience.

FTT also lays out the differences between true and false memories. It predicts the associations and dissociations between them, that is, which associations and dissociations are observed under which conditions; dissociations emerge in situations that involve reliance on verbatim traces, because true and false memories are then based on different kinds of representations. FTT may also help explain the effects of false memory, misinformation, and false recognition in children, and how these vary with development. While many false memories may be perceived as "dumb," recent studies have shown that FTT can account for "smart" false memories, which arise from awareness of the meaning of certain experiences. Although false memory research is still young, FTT has been applied to real-world settings and has been effective in explaining multiple false-memory phenomena. In explaining false memories, FTT specifies both how false memories come to be judged as true and how gist and verbatim traces give rise to them.

== Reasoning and decision-making ==
FTT, as it applies to reasoning, is adapted from dual process models of human cognition.
It differs from the traditional dual process model in that it makes a distinction between impulsivity and intuition—which are combined in System 1 according to traditional dual process theories—and then makes the claim that expertise and advanced cognition relies on intuition. The distinction between intuition and analysis depends on what kind of representation is used to process information. The mental representations described by FTT are categorized as either gist or verbatim representations: Gist representations are bottom-line understandings of the meaning of information or experience, and are used in intuitive gist processing. Verbatim representations are the precise and detailed representations of the exact information or experience, and are used in analytic verbatim processing. Generally, most adults display what is called a "fuzzy processing preference," meaning that they rely on the least precise gist representations necessary to make a decision, despite parallel processing of both gist and verbatim representations. Both processes increase with age, though the verbatim process develops sooner than the gist, and is thus more heavily relied on in adolescence. In this regard, the theory expands on research that has illustrated the role of memory representations in reasoning processes, the intersection of which has been previously underexplored. However, in certain circumstances, FTT predicts independence between memory and reasoning, specifically between reasoning tasks that rely on gist representations and memory tests that rely on verbatim representations. An example of this is research between the risky choice framing task and working memory, in which better working memory is not associated with a reduction in bias. 
FTT thus explains inconsistencies or biases in reasoning as dependent on retrieval cues that access stored values and principles, which are gist representations that can be filtered through experience and cultural, affective, and developmental factors. This dependence on gist results in a vulnerability of reasoning to processing interference from overlapping classes of events, but it can also explain expert reasoning, in that a person can treat superficially different reasoning problems in the same way if the problems share an underlying gist.

=== Risk perception and probability judgments ===
FTT posits that when people are presented with statistical information, they extract representations of the gist of the information (qualitatively) as well as the exact verbatim information (quantitatively). The gist that is encoded is often a basic categorical distinction between no risk and some risk. However, in situations where both choices in the decision carry a level of uncertainty or risk, another level of precision is required, e.g., low risk or high risk. An illustration of this principle can be found in FTT's explanation of the common framing effect.

==== Framing effects ====
Framing effects occur when linguistically different descriptions of equivalent options lead to inconsistent choices. A famous example of a risky choice framing task is the Asian Disease Problem. This task requires the participants to imagine that their country is about to face a disease which is expected to kill 600 people. They have to choose between two programs to combat this disease. Subjects are presented with options that are framed as either gains (lives saved) or losses (lives lost). In the gain frame, program A saves 200 people for certain (categorical gist: some people are saved), while program B saves all 600 people with probability 1/3 and no one with probability 2/3 (gist: some people are saved or no one is saved). In the loss frame, program C lets 400 people die for certain (gist: some people die), while program D lets no one die with probability 1/3 and all 600 people die with probability 2/3 (gist: no one dies or some people die).
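A quick check confirms that all four Asian Disease programs have the same expected outcome; the arithmetic below uses the standard numbers of the task (600 lives at stake).

```python
# Expected outcomes of the four programs.
expected_saved_A = 200                      # sure option, gain frame
expected_saved_B = (1/3) * 600 + (2/3) * 0  # risky option, gain frame
expected_dead_C = 400                       # sure option, loss frame
expected_dead_D = (2/3) * 600 + (1/3) * 0   # risky option, loss frame

# Every program amounts to 200 expected survivors (400 expected deaths),
# so a consistent decision maker should be indifferent across frames.
print(expected_saved_A, expected_saved_B,
      600 - expected_dead_C, 600 - expected_dead_D)
```

The framing effect is precisely that people are not indifferent: their preferences flip between the gain and loss descriptions of these numerically identical options.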
It is commonly found that people prefer the sure option when the options are framed as gains (program A) and the risky option when they are framed as losses (program D), despite the fact that the expected values for all the programs are equivalent. This is in contrast to a normative point of view that would indicate that if respondents prefer the sure option in the positive frame, they should also prefer the sure option in the negative frame. The explanation for this effect according to FTT is that people will tend to operate on the simplest gist that is permitted to make a decision. In the case of this framing question, the gain frame presents a situation in which people prefer the gist of some people being saved to the possibility that some are saved or no one could be saved, and conversely, that the possibility of some people dying or no one dying is preferable to the option that some people will surely die. Critical tests have been conducted to provide evidence in support of this explanation in favor of other theoretical explanations (i.e., Prospect theory) by presenting a modified version of this task that eliminates some mathematically redundant wording, e.g., program B would instead indicate that "If program B is adopted, there is 1/3 probability that 600 people will be saved." FTT predicts, in this case, that the elimination of the additional gist (the explicit possible death in program B) would result in indifference and eliminate the framing effect, which is indeed what was found. ==== Probability judgments and risk ==== The dual-process assumption of FTT has also been used to explain common biases of probability judgment, including the conjunction and disjunction fallacies. The conjunction fallacy occurs when people mistakenly judge a specific set of circumstances to be more probable than a more general set that includes the specific set. 
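The class-inclusion fact underlying the conjunction fallacy can be verified mechanically: for any joint distribution over two events, the probability of the conjunction can never exceed either marginal. The sketch below checks this over random joint distributions; the event labels merely allude to the Linda problem discussed next and carry no data from any study.

```python
import random

random.seed(0)
for _ in range(1000):
    # A random joint distribution over the four (teller?, feminist?)
    # outcomes: p_tf = P(teller & feminist), p_tn = P(teller & not
    # feminist), p_nf and p_nn the two non-teller cells.
    raw = [random.random() for _ in range(4)]
    total = sum(raw)
    p_tf, p_tn, p_nf, p_nn = (x / total for x in raw)
    p_teller = p_tf + p_tn
    # "Teller and feminist" is a subset of "teller", so its
    # probability can never be larger than the marginal.
    assert p_tf <= p_teller
print("P(teller and feminist) <= P(teller) held in every sample")
```

Judging the conjunction as more probable than the marginal therefore violates probability theory no matter what one believes about Linda.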
This fallacy is famously demonstrated by the Linda problem: given a description of a woman named Linda as an outspoken philosophy major who is concerned about discrimination and social justice, people will judge "Linda is a bank teller and is active in the feminist movement" to be more probable than "Linda is a bank teller", despite the fact that the latter statement is entirely inclusive of the former. FTT explains that this phenomenon is not a matter of encoding, given that priming participants to understand the inclusive nature of the categories tends not to reduce the bias. Instead, it is the result of the salience of relational gist, which contributes to a tendency to judge relative numerosity instead of merely applying the principle of class inclusion. Errors of probability perception are also associated with the theory's predictions of contradictory relationships between risk perception and risky behavior: specifically, endorsement of accurate principles of objective risk is actually associated with greater risk-taking, whereas measures that assess global, gist-based judgments of risk have a protective effect (consistent with other predictions from FTT in the field of medical decision making). Since gist processing develops after verbatim processing as people age, this finding helps explain the increase in risk-taking that occurs during adolescence.

=== Management and economics ===
FTT has also been applied in the domains of consumer behavior and economics. For example, since the theory posits that people rely primarily on gist representations in making decisions, and that culture and experience can affect consumers' gist representations, factors such as cultural similarity and personal relevance have been used to explain consumers' perceptions of the risk of food-borne contamination and their intentions to reduce consumption of certain foods.
In other words, one's evaluation of how "at-risk" he or she is can be influenced both by specific information learned and by fuzzy representations of cultural experience and perceived proximity. In practice, this resulted in greater consumer concern when the threat of a food-borne illness was described in a culturally similar location, regardless of geographical proximity or other verbatim details. Evidence was also found in consumer research in support of FTT's "editing" hypothesis, namely that extremely low-probability risks can be simplified by gist processing to be represented as "essentially nil." For example, one study found that people were willing to pay more for a safer product if safety was expressed relatively (i.e., product A is safer than product B) than they were if safety was expressed with statistics of actual incidence of safety hazards. This result is in contrast to most prescriptive decision rules, which predict that formally equivalent methods of communicating risk information should have identical effects on risk-taking behavior, even if the pertinent displays are different. These findings are predicted by FTT (and related models), which suggest that people reason on the basis of simplified representations rather than on the literal information available. === Medical decision-making === Like other people, clinicians apply cognitive heuristics and fall into systematic errors which affect decisions in everyday life. Research has shown that patients and their physicians have difficulty understanding a host of numerical concepts, especially risks and probabilities, and this often implies some problems with numeracy, or mathematical proficiency. For example, physicians and patients both demonstrated great difficulty understanding the probabilities of certain genetic risks and were prone to the same errors, despite vast differences in medical knowledge.
Though traditional dual process theory generally predicts that decisions made by computation are superior to those made by intuition, FTT assumes the opposite: that intuitive processing is more sophisticated and is capable of making better decisions, and that increases in expertise are accompanied by reliance on intuitive, gist-based reasoning rather than on literal, verbatim reasoning. FTT predicts that simply educating people with statistics regarding risk factors can hinder prevention efforts. Due to low prevalence of HIV or cancer, for example, people tend to overestimate their risks, and consequently interventions stressing the actual numbers may move people toward complacency as opposed to risk reduction. When women learn that their actual risks for breast cancer are lower than they thought, they return for screening at a lower rate. Also, some interventions to discourage adolescent drug use by presenting the risks have been shown to be ineffective or can even backfire. The conclusion drawn from this evidence is that health-care professionals and health policymakers need to package, present, and explain information in more meaningful ways that facilitate forming an appropriate gist. Such strategies would include explaining quantities qualitatively, displaying information visually, and tailoring the format to trigger the appropriate gist and to cue the retrieval of health-related knowledge and values. Web-based interventions have been designed using these principles, which have been found to increase the patient's willingness to escalate care, as well as gain knowledge and make an informed choice. == Implications == Theory-driven research using principles from FTT provides empirically supported recommendations that can be applied in many fields. For example, it provides specific recommendations regarding interventions aiming at reducing adolescent risk taking. 
Moreover, according to FTT, precise information does not necessarily work to communicate health-related information, which has obvious implications for public policy and procedures for improving treatment adherence in particular. Specifically, FTT principles suggest examples of how to display risk proportions so that they are comprehensible to both patients and health care professionals: Explain quantities qualitatively. Do not rely solely on numbers when presenting information. Explain quantities, percentages, and probabilities verbally, stressing conceptual understanding (the bottom-line meaning of information) over precise memorization of verbatim facts or numbers (e.g., a 20% chance of breast cancer is actually a "high" risk). Provide verbal guidance in disentangling classes and class-inclusion relationships. Display information visually. When it is necessary to present information numerically, arrange numbers so that meaningful patterns or relationships among them are obvious. Make use of graphical displays which help people extract the relevant gist. Useful formats for conveying relative risks and other comparative information include simple bar graphs and risk ladders. Pie charts are good for representing relative proportions. Line graphs are optimal for conveying the gist of a linear trend, such as survival and mortality curves or the effectiveness of a drug over time. Stacked bar graphs are useful for showing absolute risks; and Venn diagrams, two-by-two grids, and 100-square grids are useful for disentangling numerators and denominators and for eliminating errors from probability judgments. Avoid distracting gists. The class-inclusion confusion is especially likely to produce errors when visually or emotionally salient details, a story, or a stereotype draws attention away from the relevant data in the direction of extraneous information.
For example, given a display of seven cows and three horses, children are asked whether there are more cows or more animals. Until the age of ten, children often respond that there are more cows than animals, even after counting the number in each class aloud correctly. However, young children in the previous example are more likely to answer the problem correctly when they are not shown a picture with the visually hard-to-ignore detail, that is, several figures of cows. Facilitate reexamination of problems. Encourage people to reexamine problems and edit their initial judgments. Although gist for quantities tends to be more available than the verbatim numbers, people can and do attend to the numbers to correct their first gist-based impressions when cued to do so and when they are given the time and opportunity, which can help reduce errors. In addition, memory principles in FTT provide recommendations for eyewitness testimony. Children are often called upon to testify in courts, most commonly in cases of maltreatment, divorce, and child custody. Contrary to common sense, FTT posits that children can be reliable witnesses as long as they are encouraged to report verbatim memories and their reports are protected from suggestion of false information. More specifically: Children should be interviewed as soon as possible after the target event to reduce exposure to false suggestions and to facilitate retrieval of verbatim memories before their rapid decay. When reminding a witness of a target event, interviewers should present pictures or photos rather than words to describe it. Pictures of the actual target event help to increase retrieval of true memories as they are better cues to verbatim memories than words. Avoid repeated questioning. FTT predicts, for example, that the repetition of questions that restate the gist of a piece of false information can increase the probability of false memories during subsequent interviews.
Do not give children negative feedback about their performance during an interview. This procedure prompts children to provide additional information that is often false rather than true. == See also == Behavioral economics Cognitive development Decision-making Developmental psychology Framing Reason Risk == References ==
Wikipedia/Fuzzy-trace_theory
Choice modelling attempts to model the decision process of an individual or segment via revealed preferences or stated preferences made in a particular context or contexts. Typically, it attempts to use discrete choices (A over B; B over A, B & C) in order to infer positions of the items (A, B and C) on some relevant latent scale (typically "utility" in economics and various related fields). Indeed many alternative models exist in econometrics, marketing, sociometrics and other fields, including utility maximization, optimization applied to consumer theory, and a plethora of other identification strategies which may be more or less accurate depending on the data, sample, hypothesis and the particular decision being modelled. In addition, choice modelling is regarded as the most suitable method for estimating consumers' willingness to pay for quality improvements in multiple dimensions. == Related terms == There are a number of terms which are considered to be synonyms with the term choice modelling. Some are accurate (although typically discipline or continent specific) and some are used in industry applications, although considered inaccurate in academia (such as conjoint analysis). These include the following: Stated preference discrete choice modeling Discrete choice Choice experiment Stated preference studies Conjoint analysis Controlled experiments Although disagreements in terminology persist, it is notable that the academic journal intended to provide a cross-disciplinary source of new and empirical research into the field is called the Journal of Choice Modelling. == Theoretical background == The theory behind choice modelling was developed independently by economists and mathematical psychologists. The origins of choice modelling can be traced to Thurstone's research into food preferences in the 1920s and to random utility theory. 
In economics, random utility theory was then developed by Daniel McFadden, and in mathematical psychology primarily by Duncan Luce and Anthony Marley. In essence, choice modelling assumes that the utility (benefit, or value) that an individual derives from item A over item B is a function of the frequency with which (s)he chooses item A over item B in repeated choices. Due to his use of the normal distribution, Thurstone was unable to generalise this binary choice into a multinomial choice framework (which required the multinomial logistic regression rather than the probit link function), which is why the method languished for over 30 years. However, in the 1960s through 1980s the method was axiomatised and applied in a variety of types of study. == Distinction between revealed and stated preference studies == Choice modelling is used in both revealed preference (RP) and stated preference (SP) studies. RP studies use the choices made already by individuals to estimate the value they ascribe to items: they "reveal their preferences, and hence values (utilities), by their choices". SP studies use the choices made by individuals under experimental conditions to estimate these values: they "state their preferences via their choices". McFadden successfully used revealed preferences (made in previous transport studies) to predict the demand for the Bay Area Rapid Transit (BART) before it was built. Luce and Marley had previously axiomatised random utility theory but had not used it in a real world application; furthermore, they spent many years testing the method in SP studies involving psychology students. == History == McFadden's work earned him the Nobel Memorial Prize in Economic Sciences in 2000. However, much of the work in choice modelling had for almost 20 years been proceeding in the field of stated preferences.
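The random-utility link between choice frequency and utility described above can be illustrated with a minimal sketch. The utility values are hypothetical; the probit link corresponds to Thurstone's use of the normal distribution, while the logit link is the form that later made multinomial generalisation tractable:

```python
import math

def probit_choice_prob(u_a, u_b):
    """Thurstonian (Case V) link: P(A over B) = Phi(u_a - u_b),
    where Phi is the standard normal CDF."""
    d = u_a - u_b
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))

def logit_choice_prob(u_a, u_b):
    """Logit link: P(A over B) = 1 / (1 + exp(-(u_a - u_b)))."""
    return 1.0 / (1.0 + math.exp(-(u_a - u_b)))

# Equal utilities imply indifference (a 50/50 choice) under both links;
# a large utility advantage implies A is chosen almost always.
p_indifferent = logit_choice_prob(1.0, 1.0)
p_strong = probit_choice_prob(2.0, 0.0)
```

Both links are monotone in the utility difference, which is why observed choice frequencies can be inverted to recover relative utilities.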
Such work arose in various disciplines, originally transport and marketing, due to the need to predict demand for new products that were potentially expensive to produce. This work drew heavily on the fields of conjoint analysis and design of experiments, in order to: Present to consumers goods or services that were defined by particular features (attributes) that had levels, e.g. "price" with levels "$10, $20, $30"; "follow-up service" with levels "no warranty, 10 year warranty"; Present configurations of these goods that minimised the number of choices needed in order to estimate the consumer's utility function (decision rule). Specifically, the aim was to present the minimum number of pairs/triples etc. of (for example) mobile/cell phones in order that the analyst might estimate the value the consumer derived (in monetary units) from every possible feature of a phone. In contrast to much of the work in conjoint analysis, discrete choices (A versus B; B versus A, B & C) were to be made, rather than ratings on category rating scales (Likert scales). David Hensher and Jordan Louviere are widely credited with the first stated preference choice models. Together with others, including Joffre Swait and Moshe Ben-Akiva, they remained pivotal figures who, over the next three decades, helped develop and disseminate the methods in the fields of transport and marketing. However, many other figures, predominantly working in transport economics and marketing, contributed to theory and practice and helped disseminate the work widely. == Relationship with conjoint analysis == Choice modelling from the outset suffered from a lack of standardisation of terminology and all the terms given above have been used to describe it. However, the largest disagreement has proved to be geographical: in the Americas, following industry practice there, the term "choice-based conjoint analysis" has come to dominate.
This reflected a desire that choice modelling (1) retain the attribute and level structure inherited from conjoint analysis, but (2) use discrete choices, rather than numerical ratings, as the outcome measure elicited from consumers. Elsewhere in the world, the term discrete choice experiment has come to dominate in virtually all disciplines. Louviere (marketing and transport) and colleagues in environmental and health economics came to disavow the American terminology, claiming that it was misleading and disguised a fundamental difference between discrete choice experiments and traditional conjoint methods: discrete choice experiments have a testable theory of human decision-making underpinning them (random utility theory), whilst conjoint methods are simply a way of decomposing the value of a good using statistical designs, from numerical ratings that have no psychological theory to explain what the rating scale numbers mean. == Designing a choice model == Designing a choice model or discrete choice experiment (DCE) generally follows the following steps: Identifying the good or service to be valued; Deciding on what attributes and levels fully describe the good or service; Constructing an experimental design that is appropriate for those attributes and levels, either from a design catalogue, or via a software program; Constructing the survey, replacing the design codes (numbers) with the relevant attribute levels; Administering the survey to a sample of respondents in any of a number of formats including paper and pen, but increasingly via web surveys; Analysing the data using appropriate models, often beginning with the multinomial logistic regression model, given its attractive properties in terms of consistency with economic demand theory.
=== Identifying the good or service to be valued === This is often the easiest task, typically defined by: the research question in an academic study, or the needs of the client (in the context of a consumer good or service) === Deciding on what attributes and levels fully describe the good or service === A good or service, for instance mobile (cell) phone, is typically described by a number of attributes (features). Phones are often described by shape, size, memory, brand, etc. The attributes to be varied in the DCE must be all those that are of interest to respondents. Omitting key attributes typically causes respondents to make inferences (guesses) about those missing from the DCE, leading to omitted variable problems. The levels must typically include all those currently available, and often are expanded to include those that are possible in future – this is particularly useful in guiding product development. === Constructing an experimental design that is appropriate for those attributes and levels, either from a design catalogue, or via a software program === A strength of DCEs and conjoint analyses is that they typically present a subset of the full factorial. For example, a phone with two brands, three shapes, three sizes and four amounts of memory has 2x3x3x4=72 possible configurations. This is the full factorial and in most cases is too large to administer to respondents. Subsets of the full factorial can be produced in a variety of ways but in general they have the following aim: to enable estimation of a certain limited number of parameters describing the good: main effects (for example the value associated with brand, holding all else equal), two-way interactions (for example the value associated with this brand and the smallest size, that brand and the smallest size), etc. This is typically achieved by deliberately confounding higher order interactions with lower order interactions. 
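The full-factorial arithmetic in the phone example above can be reproduced in a few lines; the attribute names and level labels below are illustrative placeholders, not drawn from any actual study:

```python
from itertools import product

# Hypothetical phone attributes: 2 brands, 3 shapes, 3 sizes, 4 memory options
attributes = {
    "brand":  ["brand A", "brand B"],
    "shape":  ["bar", "slider", "flip"],
    "size":   ["small", "medium", "large"],
    "memory": ["8GB", "16GB", "32GB", "64GB"],
}

# The full factorial is the Cartesian product of all attribute levels:
# 2 * 3 * 3 * 4 = 72 possible configurations
full_factorial = list(product(*attributes.values()))
```

Seventy-two profiles is already too many to show every respondent, which is exactly why fractional designs estimating only main effects (and perhaps selected interactions) are used in practice.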
For example, two-way and three-way interactions may be confounded with main effects. This has the following consequences: The number of profiles (configurations) is significantly reduced; A regression coefficient for a given main effect is unbiased if and only if the confounded terms (higher order interactions) are zero; A regression coefficient is biased in an unknown direction and with an unknown magnitude if the confounded interaction terms are non-zero; No correction can be made at the analysis stage to solve the problem, should the confounded terms be non-zero. Thus, researchers have repeatedly been warned that design involves critical decisions to be made concerning whether two-way and higher order interactions are likely to be non-zero; making a mistake at the design stage effectively invalidates the results, since the hypothesis that higher order interactions are non-zero is untestable. Designs are available from catalogues and statistical programs. Traditionally they had the property of orthogonality, where all attribute levels can be estimated independently of each other. This ensures zero collinearity and can be explained using the following example. Imagine a car dealership that sells both luxury cars and used low-end vehicles. Using the utility maximisation principle and assuming an MNL model, we hypothesise that the decision to buy a car from this dealership is the sum of the individual contribution of each of the following to the total utility: Price Marque (BMW, Chrysler, Mitsubishi) Origin (German, American) Performance Using multinomial regression on the sales data, however, will not tell us what we want to know.
The reason is that much of the data is collinear, since cars at this dealership are either: high performance, expensive German cars; or low performance, cheap American cars. There is not enough information, nor will there ever be enough, to tell us whether people are buying cars because they are European, because they are a BMW or because they are high performance. This is a fundamental reason why RP data are often unsuitable and why SP data are required. In RP data these three attributes always co-occur and in this case are perfectly correlated. That is: all BMWs are made in Germany and are of high performance. These three attributes: origin, marque and performance are said to be collinear or non-orthogonal. Only in experimental conditions, via SP data, can performance and price be varied independently and have their effects decomposed. An experimental design (below) in a choice experiment is a strict scheme for controlling and presenting hypothetical scenarios, or choice sets, to respondents. For the same experiment, different designs could be used, each with different properties. The best design depends on the objectives of the exercise. It is the experimental design that drives the experiment and the ultimate capabilities of the model. Many very efficient designs exist in the public domain that allow near optimal experiments to be performed. For example, the Latin square 16^17 design allows the estimation of all main effects of a product that could have up to 16^17 (approximately 295 followed by eighteen zeros) configurations. Furthermore, this could be achieved within a sample frame of only around 256 respondents. Below is an example of a much smaller design. This is a 3^4 main-effects design. This design would allow the estimation of main-effects utilities from 81 (3^4) possible product configurations, assuming all higher order interactions are zero.
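The small main-effects design for four three-level attributes described above can be realised as the standard L9 orthogonal array: nine profiles instead of all 81. The sketch below hardcodes the classic L9 array and checks its two defining properties, level balance within each column and pairwise orthogonality (every pair of columns contains each level combination equally often):

```python
from itertools import combinations
from collections import Counter

# Standard L9(3^4) orthogonal array: 9 profiles, 4 attributes at 3 levels each.
L9 = [
    (0, 0, 0, 0),
    (0, 1, 1, 1),
    (0, 2, 2, 2),
    (1, 0, 1, 2),
    (1, 1, 2, 0),
    (1, 2, 0, 1),
    (2, 0, 2, 1),
    (2, 1, 0, 2),
    (2, 2, 1, 0),
]

def is_balanced(design):
    """Each level appears equally often in every column."""
    cols = list(zip(*design))
    return all(len(set(Counter(col).values())) == 1 for col in cols)

def is_orthogonal(design):
    """Every pair of columns contains each of the 9 level pairs equally often."""
    cols = list(zip(*design))
    for a, b in combinations(cols, 2):
        counts = Counter(zip(a, b))
        if len(counts) != 9 or len(set(counts.values())) != 1:
            return False
    return True
```

Because the array is orthogonal, each attribute's main effect can be estimated independently of the others; the trade-off, as the text notes, is that higher-order interactions are confounded with those main effects.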
A sample of around 20 respondents could model the main effects of all 81 possible product configurations with statistically significant results. Some examples of other experimental designs commonly used: Balanced incomplete block designs (BIBD) Random designs Main effects Higher order interaction designs Full factorial More recently, efficient designs have been produced. These typically minimise functions of the variance of the (unknown but estimated) parameters. A common function is the D-efficiency of the parameters. The aim of these designs is to reduce the sample size required to achieve statistical significance of the estimated utility parameters. Such designs have often incorporated Bayesian priors for the parameters, to further improve statistical precision. Highly efficient designs have become extremely popular, given the costs of recruiting larger numbers of respondents. However, key figures in the development of these designs have warned of possible limitations, most notably the following. Design efficiency is typically maximised when good A and good B are as different as possible: for instance, every attribute (feature) defining the phone differs across A and B. This forces the respondent to trade across price, brand, size, memory, etc.; no attribute has the same level in both A and B. This may impose cognitive burden on the respondent, leading him/her to use simplifying heuristics ("always choose the cheapest phone") that do not reflect his/her true utility function (decision rule). Recent empirical work has confirmed that respondents do indeed use different decision rules when answering a less efficient design compared to a highly efficient design. It is worth reiterating, however, that small designs that estimate main effects typically do so by deliberately confounding higher order interactions with the main effects.
This means that unless those interactions are zero in practice, the analyst will obtain biased estimates of the main effects. Furthermore, (s)he has (1) no way of testing this, and (2) no way of correcting it in analysis. This emphasises the crucial role of design in DCEs. === Constructing the survey === Constructing the survey typically involves: Doing a "find and replace" in order that the experimental design codes (typically numbers as given in the example above) are replaced by the attribute levels of the good in question. Putting the resulting configurations (for instance types of mobile/cell phones) into a broader survey that may include questions pertaining to sociodemographics of the respondents. This may aid in segmenting the data at the analysis stage: for example, males may differ from females in their preferences. === Administering the survey to a sample of respondents in any of a number of formats including paper and pen, but increasingly via web surveys === Traditionally, DCEs were administered via paper and pen methods. Increasingly, with the power of the web, internet surveys have become the norm. These have advantages in terms of cost, randomising respondents to different versions of the survey, and using screening. An example of the latter would be to achieve balance in gender: if too many males answered, they can be screened out in order that the number of females matches that of males. === Analysing the data using appropriate models, often beginning with the multinomial logistic regression model, given its attractive properties in terms of consistency with economic demand theory === Analysing the data from a DCE requires the analyst to assume a particular type of decision rule, or functional form of the utility equation in economists' terms. This is usually dictated by the design: if a main effects design has been used then two-way and higher order interaction terms cannot be included in the model. Regression models are then typically estimated.
These often begin with the conditional logit model - traditionally, although slightly misleadingly, referred to as the multinomial logistic (MNL) regression model by choice modellers. The MNL model converts the observed choice frequencies (being estimated probabilities, on a ratio scale) into utility estimates (on an interval scale) via the logistic function. The utility (value) associated with every attribute level can be estimated, thus allowing the analyst to construct the total utility of any possible configuration (in this case, of car or phone). However, a DCE may alternatively be used to estimate non-market environmental benefits and costs. == Strengths == Forces respondents to consider trade-offs between attributes; Makes the frame of reference explicit to respondents via the inclusion of an array of attributes and product alternatives; Enables implicit prices to be estimated for attributes; Enables welfare impacts to be estimated for multiple scenarios; Can be used to estimate the level of customer demand for alternative 'service product' in non-monetary terms; and Potentially reduces the incentive for respondents to behave strategically. == Weaknesses == Discrete choices provide only ordinal data, which provides less information than ratio or interval data; Inferences from ordinal data, to produce estimates on an interval/ratio scale, require assumptions about error distributions and the respondent's decision rule (functional form of the utility function); Fractional factorial designs used in practice deliberately confound two-way and higher order interactions with lower order (typically main effects) estimates in order to make the design small: if the higher order interactions are non-zero then main effects are biased, with no way for the analyst to know or correct this ex post; Non-probabilistic (deterministic) decision-making by the individual violates random utility theory: under a random utility model, utility estimates become infinite. 
There is one fundamental weakness of all limited dependent variable models such as logit and probit models: the means (true positions) and variances on the latent scale are perfectly confounded. In other words, they cannot be separated. == The mean-variance confound == Yatchew and Griliches first proved that means and variances are confounded in limited dependent variable models (where the dependent variable takes any of a discrete set of values rather than a continuous one as in conventional linear regression). This limitation becomes acute in choice modelling for the following reason: a large estimated beta from the MNL regression model or any other choice model can mean: Respondents place the item high up on the latent scale (they value it highly), or Respondents do not place the item high up on the scale BUT they are very certain of their preferences, consistently (frequently) choosing the item over others presented alongside, or Some combination of (1) and (2). This has significant implications for the interpretation of the output of a regression model. All statistical programs "solve" the mean-variance confound by setting the variance equal to a constant; all estimated beta coefficients are, in fact, an estimated beta multiplied by an estimated lambda (an inverse function of the variance). This tempts the analyst to ignore the problem. However, (s)he must consider whether a set of large beta coefficients reflects strong preferences (a large true beta) or consistency in choices (a large true lambda), or some combination of the two. Dividing all estimates by one of them (typically that of the price variable) cancels the confounded lambda term from numerator and denominator. This solves the problem, with the added benefit that it provides economists with the respondent's willingness to pay for each attribute level.
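The confound and the ratio "trick" can be illustrated numerically. In the hypothetical sketch below, scaling the true coefficients by lambda (the inverse-variance scale factor) changes every estimated beta, yet the willingness-to-pay ratio is unaffected because lambda cancels:

```python
# Hypothetical true part-worths: one quality attribute and price
BETA_QUALITY = 0.8
BETA_PRICE = -0.4

def estimated_betas(lam):
    """What the analyst actually recovers: beta * lambda, where lambda
    (an inverse function of the error variance) cannot be separated
    from beta in any limited dependent variable model."""
    return BETA_QUALITY * lam, BETA_PRICE * lam

def willingness_to_pay(lam):
    """WTP for the quality attribute: the lambda terms cancel in the ratio."""
    b_quality, b_price = estimated_betas(lam)
    return -b_quality / b_price

# Same preferences, different choice consistency (lambda): betas differ,
# but WTP is identical.
wtp_low_consistency = willingness_to_pay(0.5)
wtp_high_consistency = willingness_to_pay(2.0)
```

This is exactly why a doubled set of coefficients can reflect either stronger preferences or more consistent choosers, while the price-normalised ratios stay interpretable.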
However, the finding that results estimated in "utility space" do not match those estimated in "willingness to pay space" suggests that the confound problem is not solved by this "trick": variances may be attribute specific or some other function of the variables (which would explain the discrepancy). This is a subject of current research in the field. == Versus traditional ratings-based conjoint methods == Major problems with ratings questions that do not occur with choice models are: no trade-off information. A risk with ratings is that respondents tend not to differentiate between perceived 'good' attributes and rate them all as attractive. variant personal scales. Different individuals value a '2' on a scale of 1 to 5 differently. Aggregation of the frequencies of each of the scale measures has no theoretical basis. no relative measure. How does an analyst compare something rated a 1 to something rated a 2? Is one twice as good as the other? Again there is no theoretical way of aggregating the data. == Other types == === Ranking === Rankings do tend to force the individual to indicate relative preferences for the items of interest. Thus the trade-offs between these can, as in a DCE, typically be estimated. However, ranking models must test whether the same utility function is being estimated at every ranking depth: e.g. the same estimates (up to variance scale) must result from the bottom-rank data as from the top-rank data. === Best–worst scaling === Best–worst scaling (BWS) is a well-regarded alternative to ratings and ranking. It asks people to choose their most and least preferred options from a range of alternatives. By subtracting or integrating across the choice probabilities, utility scores for each alternative can be estimated on an interval or ratio scale, for individuals and/or groups. Various psychological models may be utilised by individuals to produce best–worst data, including the MaxDiff model.
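A common first step in analysing best–worst data is the simple best-minus-worst count score, which approximates the scale values that more formal models estimate. The sketch below uses invented responses for three items:

```python
from collections import Counter

# Hypothetical BWS tasks: each tuple is (item chosen as best, item chosen
# as worst) by one respondent in one choice set containing A, B and C.
responses = [
    ("A", "C"), ("A", "B"), ("B", "C"),
    ("A", "C"), ("B", "C"), ("C", "B"),
]

def bws_scores(responses):
    """Best-minus-worst count score for each item: times chosen best
    minus times chosen worst."""
    best = Counter(b for b, _ in responses)
    worst = Counter(w for _, w in responses)
    items = set(best) | set(worst)
    return {item: best[item] - worst[item] for item in items}

scores = bws_scores(responses)
```

Here item A is never chosen as worst and usually as best, so it tops the scale; C is the reverse. Under the MaxDiff model these count scores are monotonically related to the underlying utilities.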
== Uses == Choice modelling is particularly useful for: Predicting uptake and refining new product development Estimating the implied willingness to pay (WTP) for goods and services Product or service viability testing Estimating the effects of product characteristics on consumer choice Variations of product attributes Understanding brand value and preference, including for products like colleges Demand estimates and optimum pricing Transportation demand Evacuation and disaster investigations and forecasting The section on "Applications" of discrete choice provides further details on how this type of modelling can be applied in different fields. === Occupational choice model === In economics, an occupational choice model is a model that seeks to answer why people enter into different occupations. In the model, at each moment, the person decides whether to continue working in the previous occupation, to work in some other occupation, or not to be employed. In some versions of the model, an individual chooses the occupation for which the present value of his expected income is a maximum. However, in other versions, risk aversion may drive people to work in the same occupation as before. == See also == Consumer choice Discrete choice Outline of management == References == == External links == Media related to Choice modelling at Wikimedia Commons Curated bibliography at IDEAS/RePEc
Wikipedia/Choice_modelling
In psychological theories of motivation, the Rubicon model, more completely the Rubicon model of action phases, makes a distinction between motivational and volitional processes. The Rubicon model "defines clear boundaries between motivational and action phases." The first boundary "separates the motivational process of the predecisional phase from the volitional processes of the postdecisional phase." Another boundary is that between the initiation and the conclusion of an action. A self-regulatory feedback model incorporating these interfaces was proposed later by others, as illustrated in the figure. The name "Rubicon model" derives from the tale of Caesar's crossing the Rubicon River, a point of no return, thereby revealing his intentions. According to the Rubicon model, every action includes such a point of no return at which the individual moves from goal setting to goal striving. "Once subjects move from planning and goal-setting to the implementation of plans, they cross a metaphorical Rubicon. That is, their goals are typically protected and fostered by self-regulatory activity rather than reconsidered or changed, often even when challenged." — Lyn Corno, The best laid plans, p. 15 (quoted by Rauber) The Rubicon model addresses four questions, as identified by Achtziger and Gollwitzer: How do people select their goals? How do they plan the execution of their goals? How do they enact their plans? How do they evaluate their efforts to accomplish a specific goal? The study of these issues is undertaken by both the fields of cognitive neuroscience and social psychology. A possible connection between these approaches is brain imaging work attempting to relate volition to neuroanatomy. == Background == Human action coordinates such aspects of human behavior as perception, thought, emotion, and skills to classify goals as attainable or unattainable and then to engage or disengage in trying to attain these goals.
According to Heckhausen & Heckhausen, "Research based on the Rubicon model of action phases has provided a wealth of empirical evidence for mental and behavioral resources being orchestrated in this manner." Engagement and disengagement with goals affect personal distress over the unachievable. "By having new goals available, and reengaging in those new goals, a person can reduce distress ... while continuing to derive a sense of purpose in life by finding other pursuits of value." == See also == Cognitive psychology Executive functions Delayed gratification Motivation Self-control Self-efficacy Self-regulated learning Self-regulation theory == References ==
Wikipedia/Rubicon_model
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss. == Examples == === Regret === Leonard J. Savage argued that, when using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the circumstances been known and the decision that was in fact taken before they were known. === Quadratic loss function === The use of a quadratic loss function is common, for example when using least squares techniques.
It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is {\displaystyle \lambda (x)=C(t-x)^{2}\;} for some constant C; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the squared error loss (SEL). Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function. The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used. Because it squares deviations, the quadratic loss gives outliers disproportionate weight, so alternatives like the Huber, Log-Cosh and SMAE losses are used when the data has many large outliers. === 0-1 loss function === In statistics and decision theory, a frequently used loss function is the 0-1 loss function {\displaystyle L({\hat {y}},y)=\left[{\hat {y}}\neq y\right]} using Iverson bracket notation, i.e. it evaluates to 1 when {\displaystyle {\hat {y}}\neq y} , and 0 otherwise. == Constructing loss and objective functions == In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation.
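The two losses just defined can be sketched in a few lines of code. This is our own illustration (the function names are not from any particular library); it also checks the claim that the constant C never changes which decision minimizes the quadratic loss:

```python
import numpy as np

def quadratic_loss(t, x, C=1.0):
    """Squared-error loss lambda(x) = C * (t - x)**2."""
    return C * (t - x) ** 2

def zero_one_loss(y_hat, y):
    """0-1 loss [y_hat != y], written with the Iverson bracket."""
    return int(y_hat != y)

# The constant C rescales the loss but never changes which decision
# minimizes it, which is why it can be set equal to 1:
xs = np.linspace(-1.0, 3.0, 401)
best_C1 = xs[np.argmin(quadratic_loss(1.0, xs, C=1.0))]
best_C5 = xs[np.argmin(quadratic_loss(1.0, xs, C=5.0))]
assert best_C1 == best_C5 == 1.0  # both minimized at the target t = 1
```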
In other situations, the decision maker's preference must be elicited and represented by a scalar-valued function (also called a utility function) in a form suitable for optimization — a problem that Ragnar Frisch highlighted in his Nobel Prize lecture. The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences. In particular, Andranik Tangian showed that the most usable objective functions — quadratic and additive — are determined by a few indifference points. He used this property in models for constructing these objective functions from either ordinal or cardinal data that were elicited through computer-assisted interviews with decision makers. Among other things, he constructed objective functions to optimally distribute budgets for 16 Westphalian universities and the European subsidies for equalizing unemployment rates among 271 German regions. == Expected loss == In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X. === Statistics === Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms. ==== Frequentist expected loss ==== We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the probability distribution, Pθ, of the observed data, X. This is also referred to as the risk function of the decision rule δ and the parameter θ. Here the decision rule depends on the outcome of X. The risk function is given by:
{\displaystyle R(\theta ,\delta )=\operatorname {E} _{\theta }L{\big (}\theta ,\delta (X){\big )}=\int _{X}L{\big (}\theta ,\delta (x){\big )}\,\mathrm {d} P_{\theta }(x).} Here, θ is a fixed but possibly unknown state of nature, X is a vector of observations stochastically drawn from a population, {\displaystyle \operatorname {E} _{\theta }} is the expectation over all population values of X, dPθ is a probability measure over the event space of X (parametrized by θ), and the integral is evaluated over the entire support of X. ==== Bayes Risk ==== In a Bayesian approach, the expectation is calculated using the prior distribution π* of the parameter θ: {\displaystyle \rho (\pi ^{*},a)=\int _{\Theta }\int _{\mathbf {X}}L(\theta ,a({\mathbf {x}}))\,\mathrm {d} P({\mathbf {x}}\vert \theta )\,\mathrm {d} \pi ^{*}(\theta )=\int _{\mathbf {X}}\int _{\Theta }L(\theta ,a({\mathbf {x}}))\,\mathrm {d} \pi ^{*}(\theta \vert {\mathbf {x}})\,\mathrm {d} M({\mathbf {x}})} where m(x) is known as the predictive likelihood wherein θ has been "integrated out," π*(θ | x) is the posterior distribution, and the order of integration has been changed. One should then choose the action a* which minimizes this expected loss, which is referred to as the Bayes risk. In the latter equation, the integrand inside dx is known as the posterior risk, and minimizing it with respect to decision a also minimizes the overall Bayes risk. This optimal decision, a*, is known as the Bayes (decision) rule - it minimizes the average loss over all possible states of nature θ, over all possible (probability-weighted) data outcomes.
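The frequentist risk defined above can be approximated by simulation. The sketch below is our own construction (not from the article): it uses squared-error loss with an i.i.d. N(θ, 1) sample, for which theory gives R(θ, sample mean) = 1/n regardless of θ:

```python
import numpy as np

rng = np.random.default_rng(0)

def risk(theta, delta, n=10, reps=20000):
    """Monte Carlo estimate of R(theta, delta) = E_theta[L(theta, delta(X))]
    under squared-error loss, for an i.i.d. N(theta, 1) sample of size n."""
    X = rng.normal(theta, 1.0, size=(reps, n))
    estimates = np.array([delta(x) for x in X])  # apply the decision rule
    return float(np.mean((theta - estimates) ** 2))

# The sample mean has risk 1/n = 0.1 here, independent of theta:
assert abs(risk(2.0, np.mean, n=10) - 0.1) < 0.01
assert abs(risk(-5.0, np.mean, n=10) - 0.1) < 0.01
```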
One advantage of the Bayesian approach is that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the actual frequentist optimal decision rule as a function of all possible observations is a much more difficult problem. Of equal importance, though, the Bayes rule reflects consideration of loss outcomes under different states of nature, θ. ==== Examples in statistics ==== For a scalar parameter θ, a decision function whose output {\displaystyle {\hat {\theta }}} is an estimate of θ, and a quadratic loss function (squared error loss) {\displaystyle L(\theta ,{\hat {\theta }})=(\theta -{\hat {\theta }})^{2},} the risk function becomes the mean squared error of the estimate, {\displaystyle R(\theta ,{\hat {\theta }})=\operatorname {E} _{\theta }\left[(\theta -{\hat {\theta }})^{2}\right].} An estimator found by minimizing the mean squared error estimates the posterior distribution's mean. In density estimation, the unknown parameter is the probability density itself. The loss function is typically chosen to be a norm in an appropriate function space. For example, for the L2 norm, {\displaystyle L(f,{\hat {f}})=\|f-{\hat {f}}\|_{2}^{2}\,,} the risk function becomes the mean integrated squared error {\displaystyle R(f,{\hat {f}})=\operatorname {E} \left(\|f-{\hat {f}}\|^{2}\right).\,} === Economic choice under uncertainty === In economics, decision-making under uncertainty is often modelled using the von Neumann–Morgenstern utility function of the uncertain variable of interest, such as end-of-period wealth. Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized. == Decision rules == A decision rule makes a choice using an optimality criterion.
Some commonly used criteria are:
* Minimax: choose the decision rule with the lowest worst loss — that is, minimize the worst-case (maximum possible) loss: {\displaystyle {\underset {\delta }{\operatorname {arg\,min} }}\ \max _{\theta \in \Theta }\ R(\theta ,\delta ).}
* Invariance: choose the decision rule which satisfies an invariance requirement.
* Choose the decision rule with the lowest average loss (i.e. minimize the expected value of the loss function): {\displaystyle {\underset {\delta }{\operatorname {arg\,min} }}\operatorname {E} _{\theta \in \Theta }[R(\theta ,\delta )]={\underset {\delta }{\operatorname {arg\,min} }}\ \int _{\theta \in \Theta }R(\theta ,\delta )\,p(\theta )\,d\theta .}
== Selecting a loss function ==
Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances. A common example involves estimating "location". Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent is risk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth.
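The minimax and lowest-average-loss criteria above can be contrasted on a toy risk table. The numbers below are invented for illustration; the point is that the two criteria can select different rules:

```python
import numpy as np

# R[i, j] = risk of decision rule i in state of nature j (illustrative numbers).
R = np.array([[1.0, 4.0, 2.0],    # rule 0: low risk in most states, bad worst case
              [2.5, 2.5, 2.5]])   # rule 1: constant risk in every state
prior = np.array([1/3, 1/3, 1/3]) # weights p(theta) for the average-loss criterion

minimax_choice = int(np.argmin(R.max(axis=1)))  # minimize the worst-case loss
average_choice = int(np.argmin(R @ prior))      # minimize the average (expected) loss

assert minimax_choice == 1  # worst cases: 4.0 vs 2.5
assert average_choice == 0  # average losses: ~2.33 vs 2.5
```

The constant-risk rule wins under minimax while the other rule wins on average, which is why the criteria are stated separately in the text.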
For risk-averse or risk-loving agents, loss is measured as the negative of a utility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for example mortality or morbidity in the field of public health or safety engineering. For most optimization algorithms, it is desirable to have a loss function that is globally continuous and differentiable. Two very commonly used loss functions are the squared loss, {\displaystyle L(a)=a^{2}} , and the absolute loss, {\displaystyle L(a)=|a|} . However, the absolute loss has the disadvantage that it is not differentiable at {\displaystyle a=0} . The squared loss has the disadvantage that it tends to be dominated by outliers—when summing over a set of {\displaystyle a} 's (as in {\textstyle \sum _{i=1}^{n}L(a_{i})} ), the final sum tends to be the result of a few particularly large a-values, rather than an expression of the average a-value. The choice of a loss function is not arbitrary: it is very restrictive, and sometimes the loss function may be characterized by its desirable properties. Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of i.i.d. observations, the principle of complete information, and some others. W. Edwards Deming and Nassim Nicholas Taleb argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after cannot, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early.
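The location-estimation claim above (mean for squared loss, median for absolute loss) and the outlier sensitivity of the squared loss can be checked numerically. The data values below are invented for illustration:

```python
import numpy as np

data = np.array([1.0, 2.0, 2.0, 3.0, 50.0])  # one large outlier

grid = np.linspace(0.0, 60.0, 6001)  # candidate location estimates
sq_best = grid[np.argmin([np.sum((data - a) ** 2) for a in grid])]
abs_best = grid[np.argmin([np.sum(np.abs(data - a)) for a in grid])]

# Squared loss is pulled toward the outlier; absolute loss is not.
assert abs(sq_best - data.mean()) < 1e-6       # mean = 11.6
assert abs(abs_best - np.median(data)) < 1e-6  # median = 2.0
```

The mean (11.6) is dragged far from the bulk of the data by the single outlier, while the median (2.0) stays with it, which is the trade-off the text describes.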
In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than classical smooth, continuous, symmetric, differentiable cases. == See also == Bayesian regret Loss functions for classification Discounted maximum loss Hinge loss Scoring rule Statistical risk == References == == Further reading == Aretz, Kevin; Bartram, Söhnke M.; Pope, Peter F. (April–June 2011). "Asymmetric Loss Functions and the Rationality of Expected Stock Returns" (PDF). International Journal of Forecasting. 27 (2): 413–437. doi:10.1016/j.ijforecast.2009.10.008. SSRN 889323. Berger, James O. (1985). Statistical decision theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. Bibcode:1985sdtb.book.....B. ISBN 978-0-387-96098-2. MR 0804611. Cecchetti, S. (2000). "Making monetary policy: Objectives and rules". Oxford Review of Economic Policy. 16 (4): 43–59. doi:10.1093/oxrep/16.4.43. Horowitz, Ann R. (1987). "Loss functions and public policy". Journal of Macroeconomics. 9 (4): 489–504. doi:10.1016/0164-0704(87)90016-4. Waud, Roger N. (1976). "Asymmetric Policymaker Utility Functions and Optimal Policy under Uncertainty". Econometrica. 44 (1): 53–66. doi:10.2307/1911380. JSTOR 1911380.
Wikipedia/Risk_function
In decision theory, the Anscombe-Aumann subjective expected utility model (also known as the Anscombe-Aumann framework, Anscombe-Aumann approach, or Anscombe-Aumann representation theorem) is a framework for formalizing subjective expected utility (SEU) developed by Frank Anscombe and Robert Aumann. Anscombe and Aumann's approach can be seen as an extension of Savage's framework to deal with more general acts, leading to a simplification of Savage's representation theorem. It can also be described as a middle-course theory that deals with both objective uncertainty (as in the von Neumann-Morgenstern framework) and subjective uncertainty (as in Savage's framework). The Anscombe-Aumann framework builds upon previous work by Savage, von Neumann, and Morgenstern on the theory of choice under uncertainty and the formalization of SEU. It has since become one of the standard approaches to choice under uncertainty, serving as the basis for alternative models of decision theory such as maxmin expected utility, multiplier preferences and Choquet expected utility. == Setup == === Roulette lotteries and horse lotteries === The Anscombe-Aumann framework is essentially the same as Savage's, dealing with primitives {\displaystyle (\Omega ,X,F,\succsim )} . The only difference is that now the set of acts {\displaystyle F} consists of functions {\displaystyle f:\Omega \to \Delta (X)} , where {\displaystyle \Delta (X)} is the set of lotteries over outcomes {\displaystyle X} . This way, Anscombe and Aumann differentiate between the subjective uncertainty over the states {\displaystyle \Omega } (referred to as a horse lottery) and the objective uncertainty given by the acts {\displaystyle f} (referred to as roulette lotteries). Importantly, such an assumption greatly simplifies the proof of an expected utility representation, since it gives the set {\displaystyle F} a linear structure inherited from {\displaystyle \Delta (X)} .
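The horse/roulette structure can be made concrete with a small numerical sketch. All states, outcomes, utilities and probabilities below are invented for illustration: an act maps each state to an objective lottery over outcomes, the agent holds subjective probabilities over states, and acts mix statewise:

```python
outcomes = ["apple", "orange"]
u = {"apple": 1.0, "orange": 2.0}     # utility over outcomes
p = {"rain": 0.3, "sun": 0.7}         # subjective probabilities over states

# Acts: state -> probability vector over outcomes (a "roulette" in each state).
f = {"rain": [1.0, 0.0], "sun": [0.0, 1.0]}  # apple if rain, orange if sun
g = {"rain": [0.5, 0.5], "sun": [0.5, 0.5]}  # a fair coin in every state

def seu(act):
    """Subjective expected utility: sum_w p(w) * E_{x ~ act(w)}[u(x)]."""
    return sum(p[w] * sum(q * u[x] for q, x in zip(act[w], outcomes)) for w in p)

def mix(alpha, a, b):
    """The statewise mixture alpha*a + (1 - alpha)*b of two acts."""
    return {w: [alpha * qa + (1 - alpha) * qb for qa, qb in zip(a[w], b[w])]
            for w in a}

assert abs(seu(f) - 1.7) < 1e-9  # 0.3*u(apple) + 0.7*u(orange)
assert abs(seu(g) - 1.5) < 1e-9  # the coin is worth 1.5 in every state
# SEU is linear in the mixing operation, which is the structure that
# simplifies the representation proof:
h = mix(0.25, f, g)
assert abs(seu(h) - (0.25 * seu(f) + 0.75 * seu(g))) < 1e-9
```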
In particular, we can define a mixing operation: given any two acts {\displaystyle f,g\in F} and {\displaystyle \alpha \in [0,1]} , we have the act {\displaystyle \alpha f+(1-\alpha )g\in F} defined by {\displaystyle (\alpha f+(1-\alpha )g)(\omega )=\alpha f(\omega )+(1-\alpha )g(\omega )\in \Delta (X)} for all {\displaystyle \omega \in \Omega } . === Expected utility representation === As in Savage's model, we want to derive conditions on the primitives {\displaystyle (\Omega ,X,F,\succsim )} such that the preference {\displaystyle \succsim } can be represented by expected-utility maximization. Since acts are now themselves lotteries, however, such a representation involves a probability distribution {\displaystyle p\in \Delta (\Omega )} and a utility function {\displaystyle u:X\to \mathbb {R} } which must satisfy {\displaystyle f\succsim g\iff \int _{\Omega }\mathop {\mathbb {E} } _{x\sim f(\omega )}\left[u(x)\right]{\text{d}}p(\omega )\geq \int _{\Omega }\mathop {\mathbb {E} } _{x\sim g(\omega )}\left[u(x)\right]{\text{d}}p(\omega ).} == Axioms == Anscombe and Aumann posit the following axioms regarding {\displaystyle \succsim } : Axiom 1 (Preference relation): {\displaystyle \succsim } is complete (for all {\displaystyle f,g\in F} , it is true that {\displaystyle f\succsim g} or {\displaystyle g\succsim f} ) and transitive. Axiom 2 (Independence axiom): given {\displaystyle f,g\in F} , we have that {\displaystyle f\succsim g\iff \alpha f+(1-\alpha )h\succsim \alpha g+(1-\alpha )h} for any {\displaystyle h\in F} and {\displaystyle \alpha \in [0,1]} .
Axiom 3 (Archimedean axiom): for any {\displaystyle f,g,h} such that {\displaystyle f\succ g\succ h} , there exist {\displaystyle \alpha ,\beta \in (0,1)} such that {\displaystyle \alpha f+(1-\alpha )h\succ g\succ \beta f+(1-\beta )h.} For any act {\displaystyle f\in F} and state {\displaystyle \omega \in \Omega } , let {\displaystyle f_{\omega }\equiv f(\omega )} be the constant act with value {\displaystyle f(\omega )} . Axiom 4 (Monotonicity): given acts {\displaystyle f,g\in F} , we have {\displaystyle f_{\omega }\succsim g_{\omega }{\text{ }}\forall \omega \in \Omega \implies f\succsim g.} Axiom 5 (Non-triviality): there exist acts {\displaystyle f,f'\in F} such that {\displaystyle f\succ f'} . == Anscombe-Aumann representation theorem == Theorem: given an environment {\displaystyle (\Omega ,X,F,\succsim )} , the preference relation {\displaystyle \succsim } satisfies Axioms 1-5 if and only if there exist a probability distribution {\displaystyle p\in \Delta (\Omega )} and a non-constant utility function {\displaystyle u:X\to \mathbb {R} } such that {\displaystyle f\succsim g\iff \int _{\Omega }\mathop {\mathbb {E} } _{x\sim f(\omega )}\left[u(x)\right]{\text{d}}p(\omega )\geq \int _{\Omega }\mathop {\mathbb {E} } _{x\sim g(\omega )}\left[u(x)\right]{\text{d}}p(\omega )} for all acts {\displaystyle f,g} . Furthermore, {\displaystyle p} is unique and {\displaystyle u} is unique up to positive affine transformations. == See also == Savage's subjective expected utility model von Neumann-Morgenstern utility theorem == Notes == == References ==
Wikipedia/Anscombe-Aumann_subjective_expected_utility_model
In economics, utility is a measure of a certain person's satisfaction from a certain state of the world. Over time, the term has been used with at least two meanings. In a normative context, utility refers to a goal or objective that we wish to maximize, i.e., an objective function. This kind of utility bears a closer resemblance to the original utilitarian concept, developed by moral philosophers such as Jeremy Bentham and John Stuart Mill. In a descriptive context, the term refers to an apparent objective function; such a function is revealed by a person's behavior, and specifically by their preferences over lotteries, which can be any quantified choice. The relationship between these two kinds of utility functions has been a source of controversy among both economists and ethicists, with most maintaining that the two are distinct but generally related. == Utility function == Consider a set of alternatives among which a person has a preference ordering. A utility function represents that ordering if it is possible to assign a real number to each alternative in such a manner that alternative a is assigned a number greater than alternative b if and only if the individual prefers alternative a to alternative b. In this situation, someone who selects the most preferred alternative must also choose one that maximizes the associated utility function. Suppose James has utility function {\displaystyle U={\sqrt {xy}}} such that {\displaystyle x} is the number of apples and {\displaystyle y} is the number of chocolates. Alternative A has {\displaystyle x=9} apples and {\displaystyle y=16} chocolates; alternative B has {\displaystyle x=13} apples and {\displaystyle y=13} chocolates. Putting the values {\displaystyle x,y} into the utility function yields {\displaystyle {\sqrt {9\times 16}}=12} for alternative A and {\displaystyle {\sqrt {13\times 13}}=13} for B, so James prefers alternative B.
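James's comparison can be checked directly; a minimal sketch of the arithmetic:

```python
import math

def U(x, y):
    """James's utility over x apples and y chocolates: U = sqrt(x*y)."""
    return math.sqrt(x * y)

utility_A = U(9, 16)   # alternative A: 9 apples, 16 chocolates
utility_B = U(13, 13)  # alternative B: 13 apples, 13 chocolates

assert utility_A == 12.0
assert utility_B == 13.0
assert utility_B > utility_A  # so James prefers alternative B
```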
In general economic terms, a utility function ranks preferences concerning a set of goods and services. Gérard Debreu derived the conditions required for a preference ordering to be representable by a utility function. For a finite set of alternatives, these require only that the preference ordering is complete (so the individual can determine which of any two alternatives is preferred, or that they are indifferent) and that the preference order is transitive. Suppose the set of alternatives is not finite (for example, even if the number of goods is finite, the quantity chosen can be any real number on an interval). In that case, a continuous utility function exists representing a consumer's preferences if and only if the consumer's preferences are complete, transitive, and continuous. == Applications == Utility can be represented through sets of indifference curves, which are level curves of the function itself and which plot the combinations of commodities that an individual would accept to maintain a given level of satisfaction. Combining indifference curves with budget constraints allows for the derivation of individual demand curves. A diagram of a general indifference curve is shown below (Figure 1). The vertical and horizontal axes represent an individual's consumption of commodity Y and X respectively. All the combinations of commodity X and Y along the same indifference curve are regarded indifferently by individuals, which means all the combinations along an indifference curve result in the same utility value. Individual and social utility can be construed as the value of a utility function and a social welfare function, respectively. When coupled with production or commodity constraints, under some assumptions these functions can be used to analyze Pareto efficiency, such as illustrated by Edgeworth boxes in contract curves. Such efficiency is a major concept in welfare economics.
== Preference == While preferences are the conventional foundation of choice theory in microeconomics, it is often convenient to represent preferences with a utility function. Let X be the consumption set, the set of all mutually exclusive baskets the consumer could consume. The consumer's utility function u : X → R {\displaystyle u\colon X\to \mathbb {R} } ranks each possible outcome in the consumption set. If the consumer strictly prefers x to y or is indifferent between them, then u ( x ) ≥ u ( y ) {\displaystyle u(x)\geq u(y)} . For example, suppose a consumer's consumption set is X = {nothing, 1 apple,1 orange, 1 apple and 1 orange, 2 apples, 2 oranges}, and his utility function is u(nothing) = 0, u(1 apple) = 1, u(1 orange) = 2, u(1 apple and 1 orange) = 5, u(2 apples) = 2 and u(2 oranges) = 4. Then this consumer prefers 1 orange to 1 apple but prefers one of each to 2 oranges. In micro-economic models, there is usually a finite set of L commodities, and a consumer may consume an arbitrary amount of each commodity. This gives a consumption set of R + L {\displaystyle \mathbb {R} _{+}^{L}} , and each package x ∈ R + L {\displaystyle x\in \mathbb {R} _{+}^{L}} is a vector containing the amounts of each commodity. For the example, there are two commodities: apples and oranges. If we say apples are the first commodity, and oranges the second, then the consumption set is X = R + 2 {\displaystyle X=\mathbb {R} _{+}^{2}} and u(0, 0) = 0, u(1, 0) = 1, u(0, 1) = 2, u(1, 1) = 5, u(2, 0) = 2, u(0, 2) = 4 as before. For u to be a utility function on X, however, it must be defined for every package in X, so now the function must be defined for fractional apples and oranges too. One function that would fit these numbers is u ( x apples , x oranges ) = x apples + 2 x oranges + 2 x apples x oranges . 
{\displaystyle u(x_{\text{apples}},x_{\text{oranges}})=x_{\text{apples}}+2x_{\text{oranges}}+2x_{\text{apples}}x_{\text{oranges}}.} Preferences have three main properties:
* Completeness: Assume an individual has two choices, A and B. By ranking the two choices, one and only one of the following relationships is true: the individual strictly prefers A (A > B); the individual strictly prefers B (B > A); the individual is indifferent between A and B (A = B). Either a ≥ b or b ≥ a (or both) for all (a, b).
* Transitivity: Individuals' preferences are consistent over bundles. If an individual prefers bundle A to bundle B and bundle B to bundle C, then it can be assumed that the individual prefers bundle A to bundle C. (If a ≥ b and b ≥ c, then a ≥ c for all (a, b, c).)
* Non-satiation or monotonicity: If bundle A contains all the goods that bundle B contains, but A also includes more of at least one good than B, then the individual prefers A over B. If, for example, bundle A = {1 apple, 2 oranges} and bundle B = {1 apple, 1 orange}, then A is preferred over B.
=== Revealed preference ===
It was recognized that utility could not be measured or observed directly, so instead economists devised a way to infer relative utilities from observed choice. These 'revealed preferences', as termed by Paul Samuelson, were revealed e.g. in people's willingness to pay: Utility is assumed to be correlative to Desire or Want.
It has been argued already that desires cannot be measured directly, but only indirectly, by the outward phenomena which they cause: and that in those cases with which economics is mainly concerned the measure is found by the price which a person is willing to pay for the fulfillment or satisfaction of his desire.: 78  == Functions == Utility functions, expressing utility as a function of the amounts of the various goods consumed, are treated as either cardinal or ordinal, depending on whether they are or are not interpreted as providing more information than simply the rank ordering of preferences among bundles of goods, such as information concerning the strength of preferences. === Cardinal === Cardinal utility states that the utilities obtained from consumption can be measured and ranked objectively and are representable by numbers. There are fundamental assumptions of cardinal utility. Economic agents should be able to rank different bundles of goods based on their preferences or utilities and sort different transitions between two bundles of goods. A cardinal utility function can be transformed to another utility function by a positive linear transformation (multiplying by a positive number, and adding some other number); however, both utility functions represent the same preferences. When cardinal utility is assumed, the magnitude of utility differences is treated as an ethically or behaviorally significant quantity. For example, suppose a cup of orange juice has utility of 120 "utils", a cup of tea has a utility of 80 utils, and a cup of water has a utility of 40 utils. With cardinal utility, it can be concluded that the cup of orange juice is better than the cup of tea by the same amount by which the cup of tea is better than the cup of water. This means that if a person has a cup of tea, they would be willing to take any bet with a probability, p, greater than .5 of getting a cup of juice, with a risk of getting a cup of water equal to 1-p. 
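The betting claim above follows from expected utility with the stated util numbers; a quick check (the helper name is ours):

```python
u_juice, u_tea, u_water = 120.0, 80.0, 40.0  # utils from the example

def bet_utility(p):
    """Expected utility of a gamble: juice with probability p, water with 1 - p."""
    return p * u_juice + (1 - p) * u_water

# The gamble beats a sure cup of tea exactly when p > 0.5:
assert bet_utility(0.5) == u_tea
assert bet_utility(0.6) > u_tea
assert bet_utility(0.4) < u_tea
```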
One cannot conclude, however, that the cup of tea is two-thirds of the goodness of the cup of juice, because this conclusion would depend not only on magnitudes of utility differences but also on the "zero" of utility. For example, if the "zero" of utility were located at -40, then a cup of orange juice would be 160 utils more than zero, and a cup of tea 120 utils more than zero. Cardinal utility amounts to the assumption that utility can be measured in the same way as quantifiable characteristics such as height, weight, or temperature. Neoclassical economics has largely retreated from using cardinal utility functions as the basis of economic behavior. A notable exception is in the context of analyzing choice under conditions of risk (see below). Sometimes cardinal utility is used to aggregate utilities across persons, to create a social welfare function. === Ordinal === Instead of assigning actual numbers to different bundles, ordinal utilities are only the rankings of utilities received from different bundles of goods or services. For example, ordinal utility could tell us that having two ice creams provides greater utility to an individual than one ice cream, but could not tell us exactly how much extra utility the individual receives. Ordinal utility does not require individuals to specify how much extra utility they receive from the preferred bundle of goods or services in comparison to other bundles; they only need to state which bundles they prefer. When ordinal utilities are used, differences in utils (values assumed by the utility function) are treated as ethically or behaviorally meaningless: the utility index encodes a full behavioral ordering between members of a choice set, but tells nothing about the related strength of preferences. For the above example, it would only be possible to say that juice is preferred to tea to water. Thus, ordinal utility utilizes comparisons, such as "preferred to", "no more", "less than", etc.
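Since an ordinal index carries only ranking information, any increasing transformation of it leaves the implied behavior unchanged. A sketch with the juice/tea/water example (the index numbers are invented; only their order matters):

```python
u = {"water": 1.0, "tea": 2.0, "juice": 3.0}   # an ordinal utility index
v = {k: val ** 2 for k, val in u.items()}      # an increasing monotone transform

ranking_u = sorted(u, key=u.get)  # bundles ordered from least to most preferred
ranking_v = sorted(v, key=v.get)
assert ranking_u == ranking_v == ["water", "tea", "juice"]

# Magnitudes of differences are NOT preserved, only the order:
assert v["juice"] - v["tea"] != u["juice"] - u["tea"]
```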
If a function u ( x ) {\displaystyle u(x)} is ordinal and non-negative, it is equivalent to the function u ( x ) 2 {\displaystyle u(x)^{2}} , because taking the square is an increasing monotone (or monotonic) transformation. This means that the ordinal preference induced by these functions is the same (although they are two different functions). In contrast, if u ( x ) {\displaystyle u(x)} is cardinal, it is not equivalent to u ( x ) 2 {\displaystyle u(x)^{2}} . === Examples === In order to simplify calculations, various alternative assumptions have been made concerning details of human preferences, and these imply various alternative utility functions such as: CES (constant elasticity of substitution) Isoelastic utility Exponential utility Quasilinear utility Homothetic preferences Stone–Geary utility function Gorman polar form Greenwood–Hercowitz–Huffman preferences King–Plosser–Rebelo preferences Hyperbolic absolute risk aversion Most utility functions used for modeling or theory are well-behaved. They are usually monotonic and quasi-concave. However, it is possible for rational preferences not to be representable by a utility function. An example is lexicographic preferences, which are not continuous and cannot be represented by a continuous utility function. == Marginal utility == Economists distinguish between total utility and marginal utility. Total utility is the utility of an alternative, an entire consumption bundle or situation in life. The rate of change of utility from changing the quantity of one good consumed is termed the marginal utility of that good. Marginal utility therefore measures the slope of the utility function with respect to changes in the quantity of one good. Marginal utility usually decreases with consumption of the good, the idea of "diminishing marginal utility". In calculus notation, the marginal utility of good X is M U x = ∂ U ∂ X {\displaystyle MU_{x}={\frac {\partial U}{\partial X}}} .
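The partial-derivative definition of marginal utility can be approximated numerically. A minimal sketch, assuming a hypothetical single-good square-root utility (chosen only because it exhibits diminishing marginal utility):

```python
# Sketch: marginal utility as the numerical derivative of a hypothetical
# utility function U(x) = sqrt(x) of the quantity x of one good.
def U(x):
    return x ** 0.5

def marginal_utility(x, h=1e-6):
    # central finite-difference approximation of dU/dx
    return (U(x + h) - U(x - h)) / (2 * h)

mu_at_1 = marginal_utility(1.0)    # analytically 0.5
mu_at_4 = marginal_utility(4.0)    # analytically 0.25
# Diminishing marginal utility: the derivative falls as consumption rises.
```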
When a good's marginal utility is positive, additional consumption of it increases utility; if zero, the consumer is satiated and indifferent about consuming more; if negative, the consumer would pay to reduce his consumption. === Law of diminishing marginal utility === Rational individuals consume additional units of a good only while doing so adds to their utility, that is, only while marginal utility is positive. The law of diminishing marginal utility states that each additional unit consumed brings a lower marginal utility than the previous unit. For example, drinking one bottle of water makes a thirsty person satisfied; as his consumption of water increases, he may begin to feel worse, which causes the marginal utility to decrease to zero or even become negative. The law is also used in analyzing progressive taxation, since under diminishing marginal utility a given tax imposes a smaller utility loss the higher the taxpayer's income. === Marginal rate of substitution (MRS) === The marginal rate of substitution is the absolute value of the slope of the indifference curve, which measures how much of one good an individual is willing to give up for another. In mathematical terms, M R S = − d x 2 / d x 1 {\displaystyle MRS=-\operatorname {d} \!x_{2}/\operatorname {d} \!x_{1}} keeping U(x1,x2) constant. Thus, MRS is how much of x2 an individual is willing to give up to consume an additional amount of x1. MRS is related to marginal utility. The relationship between marginal utility and MRS is: M R S = M U 1 M U 2 {\displaystyle MRS={\frac {MU_{1}}{MU_{2}}}} == Expected utility == Expected utility theory deals with the analysis of choices among risky projects with multiple (possibly multidimensional) outcomes. The St. Petersburg paradox was first proposed by Nicholas Bernoulli in 1713 and solved by Daniel Bernoulli in 1738, although the Swiss mathematician Gabriel Cramer proposed taking the expectation of a square-root utility function of money in a 1728 letter to N. Bernoulli. D.
Bernoulli argued that the paradox could be resolved if decision-makers displayed risk aversion, and argued for a logarithmic cardinal utility function. (Analysis of international survey data during the 21st century has shown that insofar as utility represents happiness, as for utilitarianism, it is indeed proportional to the log of income.) The first important use of the expected utility theory was that of John von Neumann and Oskar Morgenstern, who used the assumption of expected utility maximization in their formulation of game theory. Expected utility is the probability-weighted average of the utility from each possible outcome: EU = Pr ( z ) ⋅ u ( Value ( z ) ) + Pr ( y ) ⋅ u ( Value ( y ) ) {\displaystyle {\text{EU}}=\Pr(z)\cdot u({\text{Value}}(z))+\Pr(y)\cdot u({\text{Value}}(y))} === Von Neumann–Morgenstern === Von Neumann and Morgenstern addressed situations in which the outcomes of choices are not known with certainty, but have probabilities associated with them. A notation for a lottery is as follows: if options A and B have probability p and 1 − p in the lottery, we write it as a linear combination: L = p A + ( 1 − p ) B {\displaystyle L=pA+(1-p)B} More generally, for a lottery with many possible options: L = ∑ i p i A i , {\displaystyle L=\sum _{i}p_{i}A_{i},} where ∑ i p i = 1 {\displaystyle \sum _{i}p_{i}=1} . By making some reasonable assumptions about the way choices behave, von Neumann and Morgenstern showed that if an agent can choose between the lotteries, then this agent has a utility function such that the desirability of an arbitrary lottery can be computed as a linear combination of the utilities of its parts, with the weights being their probabilities of occurring. This is termed the expected utility theorem. The required assumptions are four axioms about the properties of the agent's preference relation over 'simple lotteries', which are lotteries with just two options.
Writing B ⪯ A {\displaystyle B\preceq A} to mean 'A is weakly preferred to B' ('A is preferred at least as much as B'), the axioms are: completeness: For any two simple lotteries L {\displaystyle L} and M {\displaystyle M} , either L ⪯ M {\displaystyle L\preceq M} or M ⪯ L {\displaystyle M\preceq L} (or both, in which case they are viewed as equally desirable). transitivity: for any three lotteries L , M , N {\displaystyle L,M,N} , if L ⪯ M {\displaystyle L\preceq M} and M ⪯ N {\displaystyle M\preceq N} , then L ⪯ N {\displaystyle L\preceq N} . convexity/continuity (Archimedean property): If L ⪯ M ⪯ N {\displaystyle L\preceq M\preceq N} , then there is a p {\displaystyle p} between 0 and 1 such that the lottery p L + ( 1 − p ) N {\displaystyle pL+(1-p)N} is equally desirable as M {\displaystyle M} . independence: for any three lotteries L , M , N {\displaystyle L,M,N} and any probability p, L ⪯ M {\displaystyle L\preceq M} if and only if p L + ( 1 − p ) N ⪯ p M + ( 1 − p ) N {\displaystyle pL+(1-p)N\preceq pM+(1-p)N} . Intuitively, if the lottery formed by the probabilistic combination of L {\displaystyle L} and N {\displaystyle N} is no more preferable than the lottery formed by the same probabilistic combination of M {\displaystyle M} and N , {\displaystyle N,} then and only then L ⪯ M {\displaystyle L\preceq M} . Axioms 3 and 4 enable us to decide about the relative utilities of two assets or lotteries. In more formal language: A von Neumann–Morgenstern utility function is a function from choices to the real numbers: u : X → R {\displaystyle u\colon X\to \mathbb {R} } which assigns a real number to every outcome in a way that represents the agent's preferences over simple lotteries. 
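Ranking lotteries by expected utility, as the theorem describes, can be sketched numerically. In the sketch below the logarithmic utility function and the outcome values are illustrative assumptions (log utility makes the agent risk-averse, in the spirit of Bernoulli's resolution of the St. Petersburg paradox):

```python
# Sketch: comparing lotteries by their probability-weighted utilities.
# The log utility and the monetary outcomes are illustrative assumptions.
import math

def u(outcome):
    return math.log(outcome)       # concave, hence risk-averse

def expected_utility(lottery):
    """lottery: list of (probability, outcome) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in lottery) - 1.0) < 1e-12
    return sum(p * u(x) for p, x in lottery)

L1 = [(0.5, 50.0), (0.5, 150.0)]   # risky lottery, expected value 100
L2 = [(1.0, 100.0)]                # 100 with certainty

# L1 ⪯ L2 iff Eu(L1) <= Eu(L2); a risk-averse agent prefers the sure thing.
prefers_safe = expected_utility(L1) <= expected_utility(L2)
```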
Using the four assumptions mentioned above, the agent will prefer a lottery L 2 {\displaystyle L_{2}} to a lottery L 1 {\displaystyle L_{1}} if and only if, for the utility function characterizing that agent, the expected utility of L 2 {\displaystyle L_{2}} is greater than the expected utility of L 1 {\displaystyle L_{1}} : L 1 ⪯ L 2 iff E u ( L 1 ) ≤ E u ( L 2 ) {\displaystyle L_{1}\preceq L_{2}{\text{ iff }}Eu(L_{1})\leq Eu(L_{2})} . Of all the axioms, independence is the most often discarded. A variety of generalized expected utility theories have arisen, most of which omit or relax the independence axiom. == Indirect utility == An indirect utility function gives the optimal attainable value of a given utility function, which depends on the prices of the goods and the income or wealth level that the individual possesses. === Money === One use of the indirect utility concept is the notion of the utility of money. The (indirect) utility function for money is a nonlinear function that is bounded and asymmetric about the origin. The utility function is concave in the positive region, representing the phenomenon of diminishing marginal utility. The boundedness represents the fact that beyond a certain amount money ceases being useful at all, as the size of any economy at that time is itself bounded. The asymmetry about the origin represents the fact that gaining and losing money can have radically different implications both for individuals and businesses. The non-linearity of the utility function for money has profound implications in decision-making processes: in situations where outcomes of choices influence utility through gains or losses of money, which are the norm for most business settings, the optimal choice for a given decision depends on the possible outcomes of all other decisions in the same time period. == Budget constraints == Individuals' consumption is constrained by their budget.
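Choosing the best affordable bundle under a budget constraint can be sketched with a brute-force search. The Cobb-Douglas utility function, the prices and the income below are all illustrative assumptions, not values from the text:

```python
# Sketch: maximizing a hypothetical Cobb-Douglas utility U = (x1*x2)**0.5
# subject to the budget constraint p1*x1 + p2*x2 = Y, by searching over
# bundles that exactly exhaust the budget.
p1, p2, Y = 2.0, 1.0, 100.0     # illustrative prices and income

def U(x1, x2):
    return (x1 * x2) ** 0.5

# Candidate bundles on the budget line.
candidates = [(i * 0.01, (Y - p1 * i * 0.01) / p2) for i in range(1, 5000)]
best = max(candidates, key=lambda b: U(*b))

# With equal Cobb-Douglas exponents, half the budget goes to each good:
# x1* = Y / (2 * p1) = 25 and x2* = Y / (2 * p2) = 50.
```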
The graph of the budget line is a straight, downward-sloping line between the X and Y axes. All the bundles of consumption below the budget line allow individuals to consume without using the whole budget, as the total budget is greater than the total cost of the bundle (Figure 2). Considering only the prices and quantities of two goods in one bundle, a budget constraint can be formulated as p 1 X 1 + p 2 X 2 = Y {\displaystyle p_{1}X_{1}+p_{2}X_{2}=Y} , where p 1 {\displaystyle p_{1}} and p 2 {\displaystyle p_{2}} are the prices of the two goods and X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} are their quantities. The slope of the budget line is slope = − p 1 p 2 {\displaystyle {\text{slope}}={\frac {-p_{1}}{p_{2}}}} === Constrained utility optimisation === Rational consumers wish to maximise their utility. However, as they face budget constraints, a change in price affects the quantity demanded. Two factors explain this: Purchasing power. Individuals obtain greater purchasing power when the price of a good decreases. The reduction of the price allows individuals to increase their savings so they can afford to buy other products. Substitution effect. If the price of good A decreases, then the good becomes relatively cheaper with respect to its substitutes. Thus, individuals would consume more of good A, as their utility would increase by doing so. == Discussion and criticism == Cambridge economist Joan Robinson famously criticized utility for being a circular concept: "Utility is the quality in commodities that makes individuals want to buy them, and the fact that individuals want to buy commodities shows that they have utility".: 48  Robinson also stated that because the theory assumes preferences are fixed, utility is not a testable assumption.
This is so because if we observe changes in people's behavior in relation to a change in prices or a change in budget constraint, we can never be sure to what extent the change in behavior was due to the change in price or budget constraint and how much was due to a change in preferences. This criticism is similar to that of the philosopher Hans Albert, who argued that the ceteris paribus (all else equal) conditions on which the marginalist theory of demand rested rendered the theory itself a meaningless tautology, incapable of being tested experimentally. In essence, a curve of demand and supply (a theoretical line of the quantity of a product which would have been offered or requested at a given price) is purely ontological and could never be demonstrated empirically. Other questions of what arguments ought to be included in a utility function are difficult to answer, yet seem necessary to understanding utility. Whether people gain utility from coherence of wants, beliefs or a sense of duty is important to understanding their behavior in the utility organon. Likewise, choosing between alternatives is itself a process of determining what to consider as alternatives, a question of choice within uncertainty. An evolutionary psychology theory is that utility may be better considered as due to preferences that maximized evolutionary fitness in the ancestral environment but not necessarily in the current one. == Measuring utility functions == Many empirical works have tried to estimate the form of agents' utility functions with respect to money. == See also == Happiness economics Law of demand Marginal utility Utility maximization problem - a problem faced by consumers in a market: how to maximize their utility given their budget. Utility assessment - processes for estimating the utility functions of human subjects. == References == == Further reading == Anand, Paul (1993). Foundations of Rational Choice Under Risk. Oxford University Press. ISBN 0-19-823303-5.
Fishburn, Peter C. (1970). Utility Theory for Decision Making. Huntington, NY: Robert E. Krieger. ISBN 0-88275-736-9. Georgescu-Roegen, Nicholas (August 1936). "The Pure Theory of Consumer's Behavior". Quarterly Journal of Economics. 50 (4): 545–593. doi:10.2307/1891094. JSTOR 1891094. Gilboa, Itzhak (2009). Theory of Decision under Uncertainty. Cambridge University Press. ISBN 978-0-521-74123-1. Kreps, David M. (1988). Notes on the Theory of Choice. Boulder, CO: Westview Press. ISBN 0-8133-7553-3. Nash, John F. (1950). "The Bargaining Problem". Econometrica. 18 (2): 155–162. doi:10.2307/1907266. JSTOR 1907266. S2CID 153422092. Neumann, John von & Morgenstern, Oskar (1944). Theory of Games and Economic Behavior. Princeton University Press. Nicholson, Walter (1978). Micro-economic Theory (Second ed.). Hinsdale: Dryden Press. pp. 53–87. ISBN 0-03-020831-9. Plous, S. (1993). The Psychology of Judgment and Decision Making. McGraw-Hill. ISBN 0-07-050477-6. Viner, Jacob (1925). "The Utility Concept in Value Theory and Its Critics". Journal of Political Economy. 33 (4): 369–387. Viner, Jacob (1925). "The Utility Concept in Value Theory and Its Critics". Journal of Political Economy. 33 (6): 638–659. == External links == Definition of Utility by Investopedia Anatomy of Cobb-Douglas Type Utility Functions in 3D Anatomy of CES Type Utility Functions in 3D Simpler Definition with example from Investopedia Maximization of Originality - redefinition of classic utility Utility Model of Marketing - Form, Place Archived 12 November 2015 at the Wayback Machine, Time Archived 30 October 2015 at the Wayback Machine, Possession and perhaps also Task
Wikipedia/Utility_function
An interest rate is the amount of interest due per period, as a proportion of the amount lent, deposited, or borrowed (called the principal sum). The total interest on an amount lent or borrowed depends on the principal sum, the interest rate, the compounding frequency, and the length of time over which it is lent, deposited, or borrowed. The annual interest rate is the rate over a period of one year. Other interest rates apply over different periods, such as a month or a day, but they are usually annualized. The interest rate has been characterized as "an index of the preference . . . for a dollar of present [income] over a dollar of future income". The borrower wants, or needs, to have money sooner, and is willing to pay a fee—the interest rate—for that privilege. == Influencing factors == Interest rates vary according to: the government's directives to the central bank to accomplish the government's goals the currency of the principal sum lent or borrowed the term to maturity of the investment the perceived default probability of the borrower supply and demand in the market the amount of collateral special features like call provisions reserve requirements compensating balance as well as other factors. == Example == A company borrows capital from a bank to buy assets for its business. In return, the bank charges the company interest. (The lender might also require rights over the new assets as collateral.) A bank will use the capital deposited by individuals to make loans to its clients. In return, the bank pays interest to individuals who have deposited their capital. The amount of the interest payment depends on the interest rate and the amount of capital deposited. == Related terms == Base rate usually refers to the annualized effective interest rate offered on overnight deposits by the central bank or other monetary authority. The annual percentage rate (APR) may refer either to a nominal APR or an effective APR (EAPR).
The difference between the two is that the EAPR accounts for fees and compounding, while the nominal APR does not. The annual equivalent rate (AER), also called the effective annual rate, is used to help consumers compare products with different compounding frequencies on a common basis, but does not account for fees. A discount rate is applied to calculate present value. For an interest-bearing security, the coupon rate is the annual coupon amount (the coupon paid per year) per unit of par value, whereas the current yield is the annual coupon divided by the security's current market price. Yield to maturity is a bond's expected internal rate of return, assuming it will be held to maturity, that is, the discount rate which equates all remaining cash flows to the investor (all remaining coupons and repayment of the par value at maturity) with the current market price. In banking, there are deposit interest rates and loan interest rates; depending on how a rate responds to market supply and demand, it may be a fixed interest rate or a floating interest rate. == Monetary policy == Interest rate targets are a vital tool of monetary policy and are taken into account when dealing with variables like investment, inflation, and unemployment. The central banks of countries generally tend to reduce interest rates when they wish to increase investment and consumption in the country's economy. However, a low interest rate as a macro-economic policy can be risky and may lead to the creation of an economic bubble, in which large amounts of investment are poured into the real-estate market and stock market. In developed economies, interest-rate adjustments are thus made to keep inflation within a target range for the health of economic activities, or to cap the interest rate concurrently with economic growth to safeguard economic momentum.
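The compounding effect that separates a nominal APR from the effective annual rate can be sketched numerically (the 12% nominal rate is an illustrative assumption, and fees are ignored, as the AER does not account for them):

```python
# Sketch: converting a nominal annual rate to an effective annual rate
# for different compounding frequencies.
def effective_annual_rate(nominal, periods_per_year):
    return (1 + nominal / periods_per_year) ** periods_per_year - 1

nominal = 0.12                                       # 12% nominal APR
aer_monthly = effective_annual_rate(nominal, 12)     # about 12.68%
aer_daily = effective_annual_rate(nominal, 365)      # about 12.75%
# More frequent compounding yields a higher effective rate.
```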
== History == In the past two centuries, interest rates have been variously set either by national governments or central banks. For example, the Federal Reserve federal funds rate in the United States has varied between about 0.25% and 19% from 1954 to 2008, while the Bank of England base rate varied between 0.5% and 15% from 1989 to 2009, and Germany experienced rates close to 90% in the 1920s down to about 2% in the 2000s. During an attempt to tackle spiraling hyperinflation in 2007, the Central Bank of Zimbabwe increased interest rates for borrowing to 800%. The interest rates on prime credits in the late 1970s and early 1980s were far higher than had been recorded – higher than previous US peaks since 1800, than British peaks since 1700, or than Dutch peaks since 1600; "since modern capital markets came into existence, there have never been such high long-term rates" as in this period. Before modern capital markets, there have been accounts that savings deposits could achieve an annual return of at least 25% and up to as high as 50%. == Reasons for changes == Political short-term gain: Lowering interest rates can give the economy a short-run boost. Under normal conditions, most economists think a cut in interest rates will only give a short term gain in economic activity that will soon be offset by inflation. The quick boost can influence elections. Most economists advocate independent central banks to limit the influence of politics on interest rates. Deferred consumption: When money is loaned the lender delays spending the money on consumption goods. Since according to time preference theory people prefer goods now to goods later, in a free market there will be a positive interest rate. Inflationary expectations: Most economies generally exhibit inflation, meaning a given amount of money buys fewer goods in the future than it will now. The borrower needs to compensate the lender for this. 
Alternative investments: The lender has a choice between using his money in different investments. If he chooses one, he forgoes the returns from all the others. Different investments effectively compete for funds. Risks of investment: There is always a risk that the borrower will go bankrupt, abscond, die, or otherwise default on the loan. This means that a lender generally charges a risk premium to ensure that, across his investments, he is compensated for those that fail. Liquidity preference: People prefer to have their resources available in a form that can immediately be exchanged, rather than a form that takes time to realize. Taxes: Because some of the gains from interest may be subject to taxes, the lender may insist on a higher rate to make up for this loss. Banks: Banks tend to change the interest rate to either slow down or speed up economic growth, either raising interest rates to slow the economy down or lowering interest rates to promote economic growth. Economy: Interest rates fluctuate with the state of the economy. In general, if the economy is strong, interest rates will be high; if the economy is weak, interest rates will be low. == Real versus nominal == The nominal interest rate is the rate of interest with no adjustment for inflation. For example, suppose someone deposits $100 with a bank for one year, and they receive interest of $10 (before tax), so at the end of the year, their balance is $110 (before tax). In this case, regardless of the rate of inflation, the nominal interest rate is 10% per annum (before tax). The real interest rate measures the growth in real value of the loan plus interest, taking inflation into account. The repayment of principal plus interest is measured in real terms compared against the buying power of the amount at the time it was borrowed, lent, deposited or invested.
If inflation is 10%, then the $110 in the account at the end of the year has the same purchasing power (that is, buys the same amount) as the $100 had a year ago. The real interest rate is zero in this case. The real interest rate is given by the Fisher equation: r = 1 + i 1 + p − 1 {\displaystyle r={\frac {1+i}{1+p}}-1\,\!} where p is the inflation rate. For low rates and short periods, the linear approximation applies: r ≈ i − p {\displaystyle r\approx i-p\,\!} The Fisher equation applies both ex ante and ex post. Ex ante, the rates are projected rates, whereas ex post, the rates are historical. == Market rates == There is a market for investments, including the money market, bond market, stock market, and currency market as well as retail banking. Interest rates reflect: The risk-free cost of capital Expected inflation Risk premium Transaction costs === Inflationary expectations === According to the theory of rational expectations, borrowers and lenders form an expectation of inflation in the future. The acceptable nominal interest rate at which they are willing and able to borrow or lend includes the real interest rate they require to receive, or are willing to pay, plus the rate of inflation they expect. Under behavioral expectations, the formation of expectations deviates from rational expectations due to cognitive limitations and information processing costs. Agents may exhibit myopia (limited attention) to certain economic variables, form expectations based on simplified heuristics, or update their beliefs more gradually than under full rationality. These behavioral frictions can affect monetary policy transmission and optimal policy design. === Risk === The level of risk in investments is taken into consideration. Riskier investments such as shares and junk bonds are normally expected to deliver higher returns than safer ones like government bonds. 
The additional return above the risk-free nominal interest rate which is expected from a risky investment is the risk premium. The risk premium an investor requires on an investment depends on the risk preferences of the investor. Evidence suggests that most lenders are risk-averse. A maturity risk premium applied to a longer-term investment reflects a higher perceived risk of default. There are four kinds of risk: repricing risk basis risk yield curve risk optionality === Liquidity preference === Most economic agents exhibit a liquidity preference, defined as the propensity to hold cash or highly liquid assets over less fungible investments, reflecting both precautionary and transactional motives. Liquidity preference manifests in the yield differential between assets of varying maturities and convertibility costs, where cash provides immediate transaction capability with zero conversion costs. This preference creates a term structure of required returns, exemplified by the higher yields typically demanded for longer-duration assets. For instance, while a 1-year loan offers relatively rapid convertibility to cash, a 10-year loan commands a greater liquidity premium. However, the existence of deep secondary markets can partially mitigate illiquidity costs, as evidenced by US Treasury bonds, which maintain significant liquidity despite longer maturities due to their unique status as a safe asset and the associated financial sector stability benefits. === A market model === A basic interest rate pricing model for an asset is i n = i r + p e + r p + l p {\displaystyle i_{n}=i_{r}+p_{e}+r_{p}+l_{p}\,\!} where in is the nominal interest rate on a given investment ir is the risk-free return to capital i*n is the nominal interest rate on a short-term risk-free liquid bond (such as U.S. treasury bills). 
rp is a risk premium reflecting the length of the investment and the likelihood the borrower will default. lp is a liquidity premium (reflecting the perceived difficulty of converting the asset into money and thus into goods). pe is the expected inflation rate. Assuming perfect information, pe is the same for all participants in the market, and the interest rate model simplifies to i n = i n ∗ + r p + l p {\displaystyle i_{n}=i_{n}^{*}+r_{p}+l_{p}\,\!} === Spread === The spread of interest rates is the lending rate minus the deposit rate. This spread covers the operating costs of banks providing loans and deposits. A negative spread occurs when the deposit rate is higher than the lending rate. == In macroeconomics == === Output, unemployment and inflation === Interest rates affect economic activity broadly, which is the reason why they are normally the main instrument of the monetary policies conducted by central banks. Changes in interest rates affect firms' investment behaviour by raising or lowering the opportunity cost of investing. Interest rate changes also affect asset prices like stock prices and house prices, which in turn influence households' consumption decisions through a wealth effect. Additionally, international interest rate differentials affect exchange rates and consequently exports and imports. These various channels are collectively known as the monetary transmission mechanism. Consumption, investment and net exports are all important components of aggregate demand. Consequently, by influencing the general interest rate level, monetary policy can affect overall demand for goods and services in the economy and hence output and employment. Changes in employment will over time affect wage setting, which in turn affects pricing and ultimately inflation. The relation between employment (or unemployment) and inflation is known as the Phillips curve.
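The interest-rate pricing model described in the market-model section can be sketched with hypothetical component values (all the numbers below are illustrative assumptions):

```python
# Sketch of the pricing model i_n = i_n* + r_p + l_p, where the short-term
# risk-free nominal rate i_n* is i_r + p_e under the model's linear form.
def nominal_rate(risk_free_real, expected_inflation,
                 risk_premium, liquidity_premium):
    short_term_risk_free = risk_free_real + expected_inflation  # i_n*
    return short_term_risk_free + risk_premium + liquidity_premium

i_n = nominal_rate(
    risk_free_real=0.02,       # i_r, illustrative
    expected_inflation=0.03,   # p_e, illustrative
    risk_premium=0.015,        # r_p, illustrative
    liquidity_premium=0.005,   # l_p, illustrative
)   # a 7% nominal rate for these inputs
```

Note that, as the article's mathematical note points out, adding rates in this way is a linear approximation; the exact relation between real rate, inflation and nominal rate is multiplicative.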
For economies maintaining a fixed exchange rate system, determining the interest rate is also an important instrument of monetary policy, as international capital flows are in part determined by interest rate differentials between countries. === Interest rate setting in the United States === The Federal Reserve (often referred to as 'the Fed') implements monetary policy largely by targeting the federal funds rate (FFR). This is the rate that banks charge each other for overnight loans of federal funds, which are the reserves held by banks at the Fed. Until the 2008 financial crisis, the Fed relied on open market operations, i.e. selling and buying securities in the open market, to adjust the supply of reserve balances so as to keep the FFR close to the Fed's target. Since 2008, however, the actual conduct of monetary policy implementation has changed considerably, with the Fed instead using various administered interest rates (i.e., interest rates that are set directly by the Fed rather than being determined by the market forces of supply and demand) as the primary tools to steer short-term market interest rates towards its policy target. == Impact on savings and pensions == Financial economists such as World Pensions Council (WPC) researchers have argued that durably low interest rates in most G20 countries will have an adverse impact on the funding positions of pension funds, as "without returns that outstrip inflation, pension investors face the real value of their savings declining rather than ratcheting up over the next few years". Current interest rates in savings accounts often fail to keep up with the pace of inflation. From 1982 until 2012, most Western economies experienced a period of low inflation combined with relatively high returns on investments across all asset classes including government bonds.
This brought a certain sense of complacency amongst some pension actuarial consultants and regulators, making it seem reasonable to use optimistic economic assumptions to calculate the present value of future pension liabilities. == Mathematical note == Because interest and inflation are generally given as percentage increases, the formulae above are (linear) approximations. For instance, i n = i r + p e {\displaystyle i_{n}=i_{r}+p_{e}\,\!} is only approximate. In reality, the relationship is ( 1 + i n ) = ( 1 + i r ) ( 1 + p e ) {\displaystyle (1+i_{n})=(1+i_{r})(1+p_{e})\,\!} so i r = 1 + i n 1 + p e − 1 {\displaystyle i_{r}={\frac {1+i_{n}}{1+p_{e}}}-1\,\!} The two approximations, eliminating higher order terms, are: ( 1 + x ) ( 1 + y ) = 1 + x + y + x y ≈ 1 + x + y 1 1 + x = 1 − x + x 2 − x 3 + ⋯ ≈ 1 − x {\displaystyle {\begin{aligned}(1+x)(1+y)&=1+x+y+xy&&\approx 1+x+y\\{\frac {1}{1+x}}&=1-x+x^{2}-x^{3}+\cdots &&\approx 1-x\end{aligned}}} The formulae in this article are exact if logarithmic units are used for relative changes, or equivalently if logarithms of indices are used in place of rates, and hold even for large relative changes. == Zero rate policy == A so-called "zero interest-rate policy" (ZIRP) is a very low—near-zero—central bank target interest rate. At this zero lower bound the central bank faces difficulties with conventional monetary policy, because it is generally believed that market interest rates cannot realistically be pushed down into negative territory. In the United States, the policy was used in 2008-2015 (2008 financial crisis) and 2020-2022 (COVID-19 pandemic). == Negative nominal or real rates == Nominal interest rates are normally positive, but not always. In contrast, real interest rates can be negative, when nominal interest rates are below inflation. 
When this is done via government policy (for example, via reserve requirements), this is deemed financial repression, and was practiced by countries such as the United States and United Kingdom following World War II (from 1945) until the late 1970s or early 1980s (during and following the Post–World War II economic expansion). In the late 1970s, United States Treasury securities with negative real interest rates were deemed certificates of confiscation. === On central bank reserves === A so-called "negative interest rate policy" (NIRP) is a negative (below zero) central bank target interest rate. ==== Theory ==== Given the alternative of holding cash, and thus earning 0%, rather than lending it out, profit-seeking lenders will not lend below 0%, as that will guarantee a loss, and a bank offering a negative deposit rate will find few takers, as savers will instead hold cash. Negative interest rates have been proposed in the past, notably in the late 19th century by Silvio Gesell. A negative interest rate can be described (as by Gesell) as a "tax on holding money"; he proposed it as the Freigeld (free money) component of his Freiwirtschaft (free economy) system. To prevent people from holding cash (and thus earning 0%), Gesell suggested issuing money for a limited duration, after which it must be exchanged for new bills; attempts to hold money thus result in it expiring and becoming worthless. Along similar lines, John Maynard Keynes approvingly cited the idea of a carrying tax on money (1936, The General Theory of Employment, Interest and Money), but dismissed it due to administrative difficulties. More recently, a carry tax on currency was proposed by a Federal Reserve employee (Marvin Goodfriend) in 1999, to be implemented via magnetic strips on bills, deducting the carry tax upon deposit, the tax being based on how long the bill had been held.
It has been proposed that a negative interest rate can in principle be levied on existing paper currency via a serial number lottery, such as randomly choosing a number 0 through 9 and declaring that notes whose serial number ends in that digit are worthless, yielding an average 10% loss of paper cash holdings to hoarders; a drawn two-digit number could match the last two digits on the note for a 1% loss. This was proposed by an anonymous student of Greg Mankiw, though more as a thought experiment than a genuine proposal. ==== Practice ==== Both the European Central Bank starting in 2014 and the Bank of Japan starting in early 2016 pursued the policy on top of their earlier and continuing quantitative easing policies. The latter's policy was said at its inception to be trying to "change Japan's 'deflationary mindset.'" In 2016 Sweden, Denmark and Switzerland—not directly participants in the Euro currency zone—also had NIRPs in place. Countries such as Sweden and Denmark have set negative interest on reserves—that is to say, they have charged interest on reserves. In July 2009, Sweden's central bank, the Riksbank, set its policy repo rate, the interest rate on its one-week deposit facility, at 0.25%, at the same time as setting its overnight deposit rate at −0.25%. The existence of the negative overnight deposit rate was a technical consequence of the fact that overnight deposit rates are generally set at 0.5% below or 0.75% below the policy rate. The Riksbank studied the impact of these changes and stated in a commentary report that they led to no disruptions in Swedish financial markets. === On government bond yields === During the European debt crisis, government bonds of some countries (Switzerland, Denmark, Germany, Finland, the Netherlands and Austria) have been sold at negative yields.
Suggested explanations include desire for safety and protection against the eurozone breaking up (in which case some eurozone countries might redenominate their debt into a stronger currency). === On corporate bond yields === For practical purposes, investors and academics typically view the yields on government or quasi-government bonds guaranteed by a small number of the most creditworthy governments (United Kingdom, United States, Switzerland, EU, Japan) as effectively having negligible default risk. As financial theory would predict, investors and academics typically do not view non-government guaranteed corporate bonds in the same way. Most credit analysts value them at a spread to similar government bonds with similar duration, geographic exposure, and currency exposure. Through 2018, only a few such corporate bonds had traded at negative nominal interest rates. The most notable example of this was Nestlé, some of whose AAA-rated bonds traded at a negative nominal interest rate in 2015. However, some academics and investors believe this may have been influenced by volatility in the currency market during this period. == See also == Forward rate Interest expense List of sovereign states by central bank interest rates Macroeconomics Rate of return Short-rate model Spot rate == Notes == == References ==
Wikipedia/Interest_rate
In decision theory, regret aversion (or anticipated regret) describes how the human emotional response of regret can influence decision-making under uncertainty. When individuals make choices without complete information, they often experience regret if they later discover that a different choice would have produced a better outcome. This regret can be quantified as the difference in value between the actual decision made and what would have been the optimal decision in hindsight. Unlike traditional models that consider regret as merely a post-decision emotional response, the theory of regret aversion proposes that decision-makers actively anticipate potential future regret and incorporate this anticipation into their current decision-making process. This anticipation can lead individuals to make choices specifically designed to minimize the possibility of experiencing regret later, even if those choices are not optimal from a purely probabilistic expected-value perspective. Regret is a powerful negative emotion with significant social and reputational implications, playing a central role in how humans learn from experience and in the psychology of risk aversion. The conscious anticipation of regret creates a feedback loop that elevates regret from being simply an emotional reaction—often modeled as mere human behavior—into a key factor in rational choice behavior that can be formally modeled in decision theory. This anticipatory mechanism helps explain various observed decision patterns that deviate from standard expected utility theory, including status quo bias, inaction inertia, and the tendency to avoid decisions that might lead to easily imagined counterfactual scenarios where a better outcome would have occurred. == Description == Regret theory is a model in theoretical economics simultaneously developed in 1982 by Graham Loomes and Robert Sugden, David E. Bell, and Peter C. Fishburn. 
Regret theory models choice under uncertainty by taking into account the effect of anticipated regret; several other authors subsequently improved upon it. It incorporates into the utility function a regret term that depends negatively on the realized outcome and positively on the best alternative outcome given the resolution of uncertainty. This regret term is usually an increasing, continuous and non-negative function subtracted from the traditional utility index. Preferences of this type always violate transitivity in the traditional sense, although most satisfy a weaker version. For independent lotteries, when regret is evaluated over the difference between utilities and then averaged over all combinations of outcomes, regret can still be transitive, but only for a specific form of the regret functional: it has been shown that only the hyperbolic sine function maintains this property. This form of regret inherits most of the desired features, such as respecting first-order stochastic dominance, risk aversion for logarithmic utilities, and the ability to explain the Allais paradox. Regret aversion is not only a theoretical economics model but also a cognitive bias, in which a decision is made so as to avoid regretting an alternative decision. Regret aversion can operate through fear of either commission or omission: the prospect of committing an error, or of passing up an opportunity, that one seeks to avoid. Regret, the sadness or disappointment felt over something that has happened, can be rationalized for a given decision, but it can also guide preferences and lead people astray; this can contribute to the spread of disinformation when outcomes are not seen as one's personal responsibility. == Evidence == Several experiments over both incentivized and hypothetical choices attest to the magnitude of this effect.
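The regret-augmented evaluation described above can be sketched for a pair of independent lotteries. The logarithmic utility and the unit-scale hyperbolic-sine regret function below are illustrative choices, not a canonical parameterization:

```python
import math

def u(x):
    # logarithmic utility (the text notes risk aversion holds for log utilities)
    return math.log(x)

def regret_value(A, B, k=1.0):
    # Value of lottery A evaluated against alternative B: expected utility
    # minus an expected regret term, averaged over all combinations of
    # outcomes of the two independent lotteries.  Each lottery is a list of
    # (probability, payoff) pairs; the regret function is hyperbolic sine
    # with an assumed scale k.
    total = 0.0
    for pa, a in A:
        for pb, b in B:
            shortfall = max(u(b) - u(a), 0.0)  # regret only if B did better
            total += pa * pb * (u(a) - math.sinh(k * shortfall))
    return total

# A is chosen over B when regret_value(A, B) >= regret_value(B, A)
A = [(1.0, 40.0)]                # $40 for certain
B = [(0.5, 100.0), (0.5, 1.0)]   # risky alternative (payoffs kept positive for log utility)
prefers_certain = regret_value(A, B) > regret_value(B, A)
```

With these assumptions the certain payment is preferred, because the large utility shortfall in the gamble's bad outcome is penalized steeply by the regret term.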
Experiments in first price auctions show that by manipulating the feedback the participants expect to receive, significant differences in the average bids are observed. In particular, "loser's regret" can be induced by revealing the winning bid to all participants in the auction, thereby revealing to the losers whether they would have been able to make a profit and how much it could have been (a participant who has a valuation of $50, bids $30 and finds out the winning bid was $35 will also learn that he or she could have earned as much as $15 by bidding anything over $35). This in turn allows for the possibility of regret, and if bidders correctly anticipate this, they will tend to bid higher than in the case where no feedback on the winning bid is provided, in order to decrease the possibility of regret. In decisions over lotteries, experiments also provide supporting evidence of anticipated regret. As in the case of first price auctions, differences in feedback over the resolution of the uncertainty can create the possibility of regret and, if this is anticipated, induce different preferences. For example, when faced with a choice between $40 with certainty and a coin toss that pays $100 if the outcome is guessed correctly and $0 otherwise, the certain payment alternative minimizes not only the risk but also the possibility of regret, since typically the coin will not be tossed (and thus the uncertainty not resolved), while if the coin toss is chosen, the outcome that pays $0 will induce regret. If the coin is tossed regardless of the chosen alternative, then the alternative payoff will always be known, and no choice will eliminate the possibility of regret. === Anticipated regret versus experienced regret === Anticipated regret tends to be overestimated for both choices and actions over which people perceive themselves to be responsible.
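The auction example above reduces to a small calculation. The helper below is a hypothetical illustration of how revealed feedback quantifies loser's regret in a first price auction:

```python
def losers_regret(valuation, own_bid, winning_bid):
    # A losing bidder who learns the winning bid also learns the profit that
    # was available: bidding just above the winning bid would have won at
    # (essentially) that price, since in a first price auction the winner
    # pays their own bid.
    if own_bid >= winning_bid:
        return 0.0  # did not lose, so no foregone profit is revealed
    return max(valuation - winning_bid, 0.0)

# The text's example: valuation $50, bid $30, winning bid $35.
foregone = losers_regret(50.0, 30.0, 35.0)  # 15.0
```

Withholding the winning bid from losers removes exactly this quantity from their feedback, which is the manipulation the experiments exploit.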
People are particularly likely to overestimate the regret they will feel when missing a desired outcome by a narrow margin. In one study, for example, commuters predicted they would experience greater regret if they missed a train by 1 minute than if they missed it by 5 minutes, but commuters who actually missed their train by 1 or 5 minutes experienced equal (and lower than predicted) amounts of regret. Commuters appeared to overestimate the regret they would feel when missing the train by a narrow margin because they tended to underestimate the extent to which they would attribute missing the train to external causes (e.g., missing their wallet or spending less time in the shower). == Applications == Besides the traditional setting of choices over lotteries, regret aversion has been proposed as an explanation for the typically observed overbidding in first price auctions, and for the disposition effect, among others. == Minimax regret == The minimax regret approach, originally presented by Leonard Savage in 1951, is to minimize the worst-case regret. The aim is to perform as closely as possible to the optimal course. Since the minimax criterion is applied here to the regret (the difference or ratio of the payoffs) rather than to the payoff itself, it is not as pessimistic as the ordinary minimax approach. Similar approaches have been used in a variety of areas, such as hypothesis testing, prediction, and economics. One benefit of minimax regret (as opposed to expected regret) is that it is independent of the probabilities of the various outcomes, which are often hard to estimate: if regret can be accurately computed, one can reliably use minimax regret. This approach differs from standard minimax in that it uses differences or ratios between outcomes, and thus requires interval or ratio measurements, whereas standard minimax requires only ordinal measurements (rankings).
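Computing a minimax regret choice from a payoff table is mechanical. The sketch below uses a three-asset returns table (stocks, bonds, money market under rising, static and falling interest rates) whose figures are chosen to match those quoted in the investment example that follows:

```python
import numpy as np

# Returns by choice (rows) under each interest-rate outcome
# (columns: rates rise, rates static, rates fall).
returns = np.array([
    [-4.0, 4.0, 12.0],   # stocks
    [-2.0, 3.0,  8.0],   # bonds
    [ 3.0, 2.0,  1.0],   # money market
])
choices = ["stocks", "bonds", "money market"]

best = returns.max(axis=0)         # best achievable return per outcome
regret = best - returns            # regret table
worst_regret = regret.max(axis=1)  # worst-case regret of each choice

minimax_choice = choices[int(worst_regret.argmin())]  # "bonds" (regret <= 5)

# The mixed portfolio from the example: 61.1% stocks, 38.9% money market.
w = np.array([0.611, 0.0, 0.389])
mixed_worst_regret = (best - w @ returns).max()       # about 4.28
```

The mixed portfolio does better than any pure choice because it roughly equalizes regret across the rate scenarios.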
=== Example === Suppose an investor has to choose between investing in stocks, bonds or the money market, and the total return depends on what happens to interest rates. The following table shows some possible returns:

                    Rates rise   Static rates   Rates fall
    Stocks              −4            4             12
    Bonds               −2            3              8
    Money market         3            2              1
    Best choice          3            4             12

The crude maximin choice based on returns would be to invest in the money market, ensuring a return of at least 1. However, if interest rates fell then the regret associated with this choice would be large. This would be 11, which is the difference between the 12 which could have been received if the outcome had been known in advance and the 1 received. A mixed portfolio of about 11.1% in stocks and 88.9% in the money market would have ensured a return of at least 2.22; but, if interest rates fell, there would be a regret of about 9.78. The regret table for this example, constructed by subtracting actual returns from best returns, is as follows:

                    Rates rise   Static rates   Rates fall
    Stocks               7            0              0
    Bonds                5            1              4
    Money market         0            2             11

Therefore, using a minimax choice based on regret, the best course would be to invest in bonds, ensuring a regret of no worse than 5. A mixed investment portfolio would do even better: 61.1% invested in stocks, and 38.9% in the money market would produce a regret no worse than about 4.28. == Example: Linear estimation setting == What follows is an illustration of how the concept of regret can be used to design a linear estimator. In this example, the problem is to construct a linear estimator of a finite-dimensional parameter vector x {\displaystyle x} from its noisy linear measurement with known noise covariance structure. The loss of reconstruction of x {\displaystyle x} is measured using the mean-squared error (MSE). The unknown parameter vector is known to lie in an ellipsoid E {\displaystyle E} centered at zero. The regret is defined to be the difference between the MSE of the linear estimator that doesn't know the parameter x {\displaystyle x} , and the MSE of the linear estimator that knows x {\displaystyle x} .
Also, since the estimator is restricted to be linear, the zero MSE cannot be achieved in the latter case. In this case, the solution of a convex optimization problem gives the optimal, minimax regret-minimizing linear estimator, which can be seen by the following argument. According to the assumptions, the observed vector y {\displaystyle y} and the unknown deterministic parameter vector x {\displaystyle x} are tied by the linear model y = H x + w {\displaystyle y=Hx+w} where H {\displaystyle H} is a known n × m {\displaystyle n\times m} matrix with full column rank m {\displaystyle m} , and w {\displaystyle w} is a zero mean random vector with a known covariance matrix C w {\displaystyle C_{w}} . Let x ^ = G y {\displaystyle {\hat {x}}=Gy} be a linear estimate of x {\displaystyle x} from y {\displaystyle y} , where G {\displaystyle G} is some m × n {\displaystyle m\times n} matrix. The MSE of this estimator is given by M S E = E ( | | x ^ − x | | 2 ) = T r ( G C w G ∗ ) + x ∗ ( I − G H ) ∗ ( I − G H ) x . {\displaystyle MSE=E\left(||{\hat {x}}-x||^{2}\right)=Tr(GC_{w}G^{*})+x^{*}(I-GH)^{*}(I-GH)x.} Since the MSE depends explicitly on x {\displaystyle x} it cannot be minimized directly. Instead, the concept of regret can be used in order to define a linear estimator with good MSE performance. To define the regret here, consider a linear estimator that knows the value of the parameter x {\displaystyle x} , i.e., the matrix G {\displaystyle G} can explicitly depend on x {\displaystyle x} : x ^ o = G ( x ) y . {\displaystyle {\hat {x}}^{o}=G(x)y.} The MSE of x ^ o {\displaystyle {\hat {x}}^{o}} is M S E o = E ( | | x ^ o − x | | 2 ) = T r ( G ( x ) C w G ( x ) ∗ ) + x ∗ ( I − G ( x ) H ) ∗ ( I − G ( x ) H ) x . 
{\displaystyle MSE^{o}=E\left(||{\hat {x}}^{o}-x||^{2}\right)=Tr(G(x)C_{w}G(x)^{*})+x^{*}(I-G(x)H)^{*}(I-G(x)H)x.} To find the optimal G ( x ) {\displaystyle G(x)} , M S E o {\displaystyle MSE^{o}} is differentiated with respect to G {\displaystyle G} and the derivative is equated to 0 getting G ( x ) = x x ∗ H ∗ ( C w + H x x ∗ H ∗ ) − 1 . {\displaystyle G(x)=xx^{*}H^{*}(C_{w}+Hxx^{*}H^{*})^{-1}.} Then, using the Matrix Inversion Lemma G ( x ) = 1 1 + x ∗ H ∗ C w − 1 H x x x ∗ H ∗ C w − 1 . {\displaystyle G(x)={\frac {1}{1+x^{*}H^{*}C_{w}^{-1}Hx}}xx^{*}H^{*}C_{w}^{-1}.} Substituting this G ( x ) {\displaystyle G(x)} back into M S E o {\displaystyle MSE^{o}} , one gets M S E o = x ∗ x 1 + x ∗ H ∗ C w − 1 H x . {\displaystyle MSE^{o}={\frac {x^{*}x}{1+x^{*}H^{*}C_{w}^{-1}Hx}}.} This is the smallest MSE achievable with a linear estimate that knows x {\displaystyle x} . In practice this MSE cannot be achieved, but it serves as a bound on the optimal MSE. The regret of using the linear estimator specified by G {\displaystyle G} is equal to R ( x , G ) = M S E − M S E o = T r ( G C w G ∗ ) + x ∗ ( I − G H ) ∗ ( I − G H ) x − x ∗ x 1 + x ∗ H ∗ C w − 1 H x . {\displaystyle R(x,G)=MSE-MSE^{o}=Tr(GC_{w}G^{*})+x^{*}(I-GH)^{*}(I-GH)x-{\frac {x^{*}x}{1+x^{*}H^{*}C_{w}^{-1}Hx}}.} The minimax regret approach here is to minimize the worst-case regret, i.e., sup x ∈ E R ( x , G ) . {\displaystyle \sup _{x\in E}R(x,G).} This will allow a performance as close as possible to the best achievable performance in the worst case of the parameter x {\displaystyle x} . Although this problem appears difficult, it is an instance of convex optimization and in particular a numerical solution can be efficiently calculated. Similar ideas can be used when x {\displaystyle x} is random with uncertainty in the covariance matrix. == Regret in principal-agent problems == Camara, Hartline and Johnsen study principal-agent problems. 
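The closed-form expressions in the estimation example above can be checked numerically. The sketch below draws an arbitrary real-valued model (H, C_w, x) and verifies that the clairvoyant estimator's direct MSE matches the closed form MSE^o = x*x / (1 + x* H* C_w^{-1} H x):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
H = rng.standard_normal((n, m))   # known model matrix, full column rank
A = rng.standard_normal((n, n))
Cw = A @ A.T + np.eye(n)          # known noise covariance (positive definite)
x = rng.standard_normal(m)        # the parameter the clairvoyant estimator knows

Cw_inv = np.linalg.inv(Cw)
s = x @ H.T @ Cw_inv @ H @ x      # scalar x* H* C_w^{-1} H x

# G(x) after applying the Matrix Inversion Lemma
G = np.outer(x, x) @ H.T @ Cw_inv / (1.0 + s)

# Direct MSE: Tr(G C_w G*) + ||(I - G H) x||^2
mse_direct = np.trace(G @ Cw @ G.T) + np.linalg.norm((np.eye(m) - G @ H) @ x) ** 2

# Closed form: MSE^o = x*x / (1 + s)
mse_closed = (x @ x) / (1.0 + s)
```

The two agree to machine precision for real data; the derivation in the text uses conjugate transposes and so covers the complex case as well.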
These are incomplete-information games between two players called Principal and Agent, whose payoffs depend on a state of nature known only by the Agent. The Principal commits to a policy, then the agent responds, and then the state of nature is revealed. They assume that the principal and agent interact repeatedly, and may learn over time from the state history, using reinforcement learning. They assume that the agent is driven by regret-aversion. In particular, the agent minimizes his counterfactual internal regret. Based on this assumption, they develop mechanisms that minimize the principal's regret. Collina, Roth and Shao improve their mechanism both in running-time and in the bounds for regret (as a function of the number of distinct states of nature). == See also == Regret-free mechanism Competitive regret Decision theory Info-gap decision theory Loss function Minimax Swap regret Wald's maximin model == References == == External links == "TUTORIAL G05: Decision theory". Archived from the original on 3 July 2015.
Wikipedia/Regret_(decision_theory)
Info-gap decision theory seeks to optimize robustness to failure under severe uncertainty, in particular applying sensitivity analysis of the stability radius type to perturbations in the value of a given estimate of the parameter of interest. It has some connections with Wald's maximin model; some authors distinguish them, others consider them instances of the same principle. It was developed by Yakov Ben-Haim, has found many applications, and has been described as a theory for decision-making under "severe uncertainty". It has been criticized as unsuited for this purpose, and alternatives have been proposed, including such classical approaches as robust optimization. == Applications == Info-gap theory has generated a substantial literature. It has been studied or applied in a range of areas including engineering, biological conservation, theoretical biology, homeland security, economics, project management and statistics. Foundational issues related to info-gap theory have also been studied. === Engineering === A typical engineering application is the vibration analysis of a cracked beam, where the location, size, shape and orientation of the crack are unknown and greatly influence the vibration dynamics. Very little is usually known about these spatial and geometrical uncertainties. The info-gap analysis allows one to model these uncertainties, and to determine the degree of robustness, to these uncertainties, of properties such as vibration amplitude, natural frequencies, and natural modes of vibration. Another example is the structural design of a building subject to uncertain loads such as from wind or earthquakes. The response of the structure depends strongly on the spatial and temporal distribution of the loads. However, storms and earthquakes are highly idiosyncratic events, and the interaction between the event and the structure involves very site-specific mechanical properties which are rarely known.
The info-gap analysis enables the design of the structure to enhance structural immunity against uncertain deviations from design-base or estimated worst-case loads. Another engineering application involves the design of a neural net for detecting faults in a mechanical system, based on real-time measurements. A major difficulty is that faults are highly idiosyncratic, so that training data for the neural net will tend to differ substantially from data obtained from real-time faults after the net has been trained. The info-gap robustness strategy enables one to design the neural net to be robust to the disparity between training data and future real events. === Biology === Conservation biologists face info-gaps in using biological models; info-gap robustness curves have been used to select among management options for spruce-budworm populations in Eastern Canada, and Burgman uses the fact that the robustness curves of different alternatives can intersect. === Project management === Project management is another area where info-gap uncertainty is common. The project manager often has very limited information about the duration and cost of some of the tasks in the project, and info-gap robustness can assist in project planning and integration. === Financial economics === Financial economics is another area where the future is unpredictable, which may be either pernicious or propitious. Info-gap robustness and opportuneness analyses can assist in portfolio design, credit rationing, and other applications.
A more general criticism of decision making under uncertainty is the impact of outsized, unexpected events, ones that are not captured by the model. This is discussed particularly in black swan theory, and info-gap, used in isolation, is vulnerable to this, as are a fortiori all decision theories that use a fixed universe of possibilities, notably probabilistic ones. Sniedovich raises two objections to info-gap decision theory, one substantive, one scholarly: (1) the info-gap uncertainty model is flawed and oversold: one should consider the entire range of possibilities, not subsets of it, and Sniedovich argues that info-gap decision theory is therefore a "voodoo decision theory"; and (2) info-gap is maximin: Ben-Haim states (Ben-Haim 1999, pp. 271–2) that "robust reliability is emphatically not a [min-max] worst-case analysis"; note that Ben-Haim compares info-gap to minimax, while Sniedovich considers it a case of maximin. Sniedovich has challenged the validity of info-gap theory for making decisions under severe uncertainty. He notes that the info-gap robustness function is "local" to the region around u ~ {\displaystyle \displaystyle {\tilde {u}}} , where u ~ {\displaystyle \displaystyle {\tilde {u}}} is likely to be substantially in error. === Maximin === Symbolically, the robustness is max α {\displaystyle \alpha } assuming a min (worst-case) outcome, or maximin. In other words, while it is not a maximin analysis of outcomes over the universe of uncertainty, it is a maximin analysis over a properly construed decision space. Ben-Haim argues that info-gap's robustness model is not min-max/maximin analysis because it is not a worst-case analysis of outcomes; it is a satisficing model, not an optimization model – a (straightforward) maximin analysis would consider worst-case outcomes over the entire space, which, since uncertainty is often potentially unbounded, would yield an unboundedly bad worst case.
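The robustness function at issue in this debate can be made concrete with a toy model. Everything in the sketch below (the performance model R(q, u) = q·u, the requirement, and the interval uncertainty ball) is an illustrative assumption:

```python
def robustness(q, u_est, r_c):
    # Info-gap robustness of decision q > 0 for the toy performance model
    # R(q, u) = q * u with requirement R >= r_c, under the uncertainty ball
    # |u - u_est| <= alpha.  The worst case in the ball is u = u_est - alpha,
    # so the largest alpha still satisfying q * (u_est - alpha) >= r_c is:
    return max(u_est - r_c / q, 0.0)

# A bolder decision (larger q) tolerates a larger error in the estimate.
high = robustness(q=2.0, u_est=5.0, r_c=6.0)  # 2.0
# If the requirement fails even at the estimate, robustness is zero.
low = robustness(q=1.0, u_est=5.0, r_c=6.0)   # 0.0
```

This makes the two sides of the dispute visible: the value is maximin over the ball (worst case within radius alpha) but satisficing in the outcome (meet r_c rather than optimize it), and it is local, since everything is measured from the estimate u_est.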
=== Stability radius === Sniedovich has shown that info-gap's robustness model is a simple stability radius model, namely a local stability model of the generic form ρ ^ ( p ~ ) := max { ρ ≥ 0 : p ∈ P ( s ) , ∀ p ∈ B ( ρ , p ~ ) } {\displaystyle {\hat {\rho }}({\tilde {p}}):=\max \ \{\rho \geq 0:p\in P(s),\forall p\in B(\rho ,{\tilde {p}})\}} where B ( ρ , p ~ ) {\displaystyle B(\rho ,{\tilde {p}})} denotes a ball of radius ρ {\displaystyle \rho } centered at p ~ {\displaystyle {\tilde {p}}} and P ( s ) {\displaystyle P(s)} denotes the set of values of p {\displaystyle p} that satisfy pre-determined stability conditions. In other words, info-gap's robustness model is a stability radius model characterized by a stability requirement of the form r c ≤ R ( q , p ) {\displaystyle r_{c}\leq R(q,p)} . Since stability radius models are designed for the analysis of small perturbations in a given nominal value of a parameter, Sniedovich argues that info-gap's robustness model is unsuitable for the treatment of severe uncertainty characterized by a poor estimate and a vast uncertainty space. == Discussion == === Satisficing and bounded rationality === It is correct that the info-gap robustness function is local, and has restricted quantitative value in some cases. However, a major purpose of decision analysis is to provide focus for subjective judgments. That is, regardless of the formal analysis, a framework for discussion is provided. Without entering into any particular framework, or characteristics of frameworks in general, discussion follows about proposals for such frameworks. Simon introduced the idea of bounded rationality. Limitations on knowledge, understanding, and computational capability constrain the ability of decision makers to identify optimal choices. Simon advocated satisficing rather than optimizing: seeking adequate (rather than optimal) outcomes given available resources. 
Schwartz, Conlisk and others discuss extensive evidence for the phenomenon of bounded rationality among human decision makers, as well as for the advantages of satisficing when knowledge and understanding are deficient. The info-gap robustness function provides a means of implementing a satisficing strategy under bounded rationality. For instance, in discussing bounded rationality and satisficing in conservation and environmental management, Burgman notes that "Info-gap theory ... can function sensibly when there are 'severe' knowledge gaps." The info-gap robustness and opportuneness functions provide "a formal framework to explore the kinds of speculations that occur intuitively when examining decision options." Burgman then proceeds to develop an info-gap robust-satisficing strategy for protecting the endangered orange-bellied parrot. Similarly, Vinot, Cogan and Cipolla discuss engineering design and note that "the downside of a model-based analysis lies in the knowledge that the model behavior is only an approximation to the real system behavior. Hence the question of the honest designer: how sensitive is my measure of design success to uncertainties in my system representation? ... It is evident that if model-based analysis is to be used with any level of confidence then ... [one must] attempt to satisfy an acceptable sub-optimal level of performance while remaining maximally robust to the system uncertainties." They proceed to develop an info-gap robust-satisficing design procedure for an aerospace application. == Alternatives == Of course, decision in the face of uncertainty is nothing new, and attempts to deal with it have a long history. A number of authors have noted and discussed similarities and differences between info-gap robustness and minimax or worst-case methods. Sniedovich has demonstrated formally that the info-gap robustness function can be represented as a maximin optimization, and is thus related to Wald's maximin model.
Sniedovich has claimed that info-gap's robustness analysis is conducted in the neighborhood of an estimate that is likely to be substantially wrong, concluding that the resulting robustness function is equally likely to be substantially wrong. On the other hand, the estimate is the best one has, so it is useful to know if it can err greatly and still yield an acceptable outcome. This critical question raises the issue of whether robustness (as defined by info-gap theory) is qualified to judge whether confidence is warranted, and how it compares to methods used to inform decisions under uncertainty using considerations not limited to the neighborhood of a bad initial guess. Answers to these questions vary with the particular problem at hand. Some general comments follow. === Sensitivity analysis === Sensitivity analysis – how sensitive conclusions are to input assumptions – can be performed independently of a model of uncertainty: most simply, one may take two different assumed values for an input and compare the conclusions. From this perspective, info-gap can be seen as a technique of sensitivity analysis, though by no means the only one. === Robust optimization === The robust optimization literature provides methods and techniques that take a global approach to robustness analysis. These methods directly address decision under severe uncertainty, and have been used for this purpose for more than thirty years. Wald's maximin model is the main instrument used by these methods. The principal difference between the maximin model employed by info-gap and the various maximin models employed by robust optimization methods is in the manner in which the total region of uncertainty is incorporated in the robustness model. Info-gap takes a local approach that concentrates on the immediate neighborhood of the estimate.
In sharp contrast, robust optimization methods set out to incorporate in the analysis the entire region of uncertainty, or at least an adequate representation thereof. In fact, some of these methods do not even use an estimate. === Comparative analysis === Classical decision theory offers two approaches to decision-making under severe uncertainty, namely maximin and Laplace's principle of insufficient reason (assume all outcomes equally likely); these may be considered alternative solutions to the problem info-gap addresses. Further, as discussed at decision theory: alternatives to probability theory, probabilists, particularly Bayesian probabilists, argue that optimal decision rules (formally, admissible decision rules) can always be derived by probabilistic methods (this is the statement of the complete class theorems), and thus that non-probabilistic methods such as info-gap are unnecessary and do not yield new or better decision rules. ==== Maximin ==== As attested by the rich literature on robust optimization, maximin provides a wide range of methods for decision making in the face of severe uncertainty. Indeed, as discussed in criticism of info-gap decision theory, info-gap's robustness model can be interpreted as an instance of the general maximin model. ==== Bayesian analysis ==== As for Laplace's principle of insufficient reason, in this context it is convenient to view it as an instance of Bayesian analysis. The essence of the Bayesian analysis is applying probabilities for the different possible realizations of the uncertain parameters. In the case of Knightian (non-probabilistic) uncertainty, these probabilities represent the decision maker's "degree of belief" in a specific realization. In our example, suppose there are only five possible realizations of the uncertain revenue-to-allocation function. The decision maker believes that the estimated function is the most likely, and that the likelihood decreases as the difference from the estimate increases.
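A minimal numerical version of this setup can be sketched as follows; the five revenue functions, the prior weights, and the allocation grid are all invented purely for illustration:

```python
import numpy as np

# Five hypothetical realizations of the revenue-to-allocation function,
# indexed by a shift s away from the central estimate (s = 0).
shifts = (-0.4, -0.2, 0.0, 0.2, 0.4)
realizations = [lambda q, s=s: (1.0 + s) * q * (1.0 - q) for s in shifts]

# Prior "degree of belief": the estimate is most likely, and the likelihood
# decreases with distance from it.
prior = np.array([0.1, 0.2, 0.4, 0.2, 0.1])

allocations = np.linspace(0.0, 1.0, 11)
expected_revenue = np.array(
    [sum(p * f(q) for p, f in zip(prior, realizations)) for q in allocations]
)
best_allocation = allocations[expected_revenue.argmax()]
```

The same prior also yields tail probabilities (the chance of an unacceptable revenue) for each allocation; the hard step, as the text notes, is justifying the prior itself.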
Figure 11 exemplifies such a probability distribution. Now, for any allocation, one can construct a probability distribution of the revenue, based on these prior beliefs. The decision maker can then choose the allocation with the highest expected revenue, or with the lowest probability of an unacceptable revenue, etc. The most problematic step of this analysis is the choice of the probabilities of the realizations. When there is extensive and relevant past experience, an expert may use this experience to construct a probability distribution. But even with extensive past experience, when some parameters change, the expert may only be able to estimate that A {\displaystyle A} is more likely than B {\displaystyle B} , but will not be able to reliably quantify this difference. Furthermore, when conditions change drastically, or when there is no past experience at all, it may prove difficult even to estimate whether A {\displaystyle A} is more likely than B {\displaystyle B} . Nevertheless, methodologically speaking, this difficulty is not as problematic as basing the analysis of a problem subject to severe uncertainty on a single point estimate and its immediate neighborhood, as done by info-gap. Moreover, in contrast to info-gap, this approach is global rather than local. Still, it must be stressed that Bayesian analysis does not expressly concern itself with the question of robustness. Bayesian analysis raises the issue of learning from experience and adjusting probabilities accordingly. In other words, decision is not a one-stop process, but profits from a sequence of decisions and observations. == See also == == Notes == == References == == External links == Info-Gap Theory and Its Applications, further information on info-gap theory What is Info-Gap Theory? informal introduction Making Responsible Decisions (When it Seems that You Can't): Engineering Design and Strategic Planning Under Severe Uncertainty How Did Info-Gap Theory Start? How Does it Grow?
Frequently Asked Questions about info-gap theory Info-Gap Campaign Archived 2011-07-14 at the Wayback Machine, further analysis and critique of info-gap Frequently Asked Questions about Info-Gap Decision Theory (PDF)
Wikipedia/Info-gap_decision_theory
Real business-cycle theory (RBC theory) is a class of new classical macroeconomics models in which business-cycle fluctuations are accounted for by real, in contrast to nominal, shocks. RBC theory sees business cycle fluctuations as the efficient response to exogenous changes in the real economic environment. That is, the level of national output necessarily maximizes expected utility. In RBC models, business cycles are described as "real" because they reflect optimal adjustments by economic agents rather than failures of markets to clear. As a result, RBC theory suggests that governments should concentrate on long-term structural change rather than intervention through discretionary fiscal or monetary policy. These ideas are strongly associated with freshwater economics within the neoclassical economics tradition, particularly the Chicago School of Economics. == Business cycles == If we were to take snapshots of an economy at different points in time, no two photos would look alike. This occurs for two reasons: Many advanced economies exhibit sustained growth over time. That is, snapshots taken many years apart will most likely depict higher levels of economic activity in the later period. There exist seemingly random fluctuations around this growth trend. Thus, given two snapshots, predicting the latter with the earlier is nearly impossible. A common way to observe such behavior is by looking at a time series of an economy's output, more specifically gross national product (GNP). This is just the value of the goods and services produced by a country's businesses and workers. Figure 1 shows the time series of real GNP for the United States from 1954 to 2005. While we see continuous output growth, it is not a steady increase. There are times of faster growth and times of slower growth. Figure 2 transforms these levels into growth rates of real GNP and extracts a smoother growth trend. The Hodrick–Prescott filter is a common method to obtain this trend. 
The basic idea is to find a balance between the extent to which the general growth trend follows the cyclical movement (since the long-term growth rate is not likely to be perfectly constant) and how smooth it is. The HP filter identifies the longer-term fluctuations as part of the growth trend while classifying the more jumpy fluctuations as part of the cyclical component. Observe the difference between this growth component and the jerkier data. Economists refer to these cyclical movements about the trend as business cycles. Figure 3 explicitly captures such deviations. Note the horizontal axis at 0. A point on this line indicates that there was no deviation from the trend that year. All other points above and below the line imply deviations. Using log real GNP, the distance between any point and the 0 line roughly equals the percentage deviation from the long-run growth trend. Also, note that the Y-axis uses very small values. This indicates that the deviations in real GNP are comparatively small and might be attributable to measurement errors rather than real deviations. We call large positive deviations (those above the zero axis) peaks. We call relatively large negative deviations (those below the zero axis) troughs. A series of positive deviations leading to peaks are booms, and a series of negative deviations leading to troughs are recessions. At a glance, the deviations look like a string of waves bunched together—nothing about it appears consistent. To explain the causes of such fluctuations may seem rather difficult, given these irregularities. However, considering other macroeconomic variables, we will observe patterns in these irregularities. For example, consider Figure 4, which depicts fluctuations in output and consumption spending, i.e., what people buy and use at any given period. Observe how the peaks and troughs align at almost the same places and how the upturns and downturns coincide. 
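The penalized-least-squares idea behind the Hodrick–Prescott filter can be sketched in a few lines of NumPy. This dense-matrix version is a minimal illustration, not a production implementation (library versions use sparse matrices); the smoothing parameter 1600 is the conventional choice for quarterly data.

```python
import numpy as np

def hp_filter(y, lamb=1600.0):
    """Split a series into trend and cycle via the Hodrick-Prescott filter.

    The trend minimizes sum((y_t - tau_t)^2) + lamb * sum((second diff of tau)^2),
    whose closed-form solution is trend = (I + lamb * D'D)^-1 y, where D is
    the (n-2) x n second-difference matrix.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Build the second-difference operator D.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lamb * (D.T @ D), y)
    cycle = y - trend
    return trend, cycle
```

A sanity check on the behavior described above: for a perfectly linear series the second differences of the trend are zero, so the filter assigns everything to the trend and the cyclical component vanishes.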
We might predict that other similar data may exhibit similar qualities. For example: (a) labor, hours worked; (b) productivity, how effectively firms use their capital and labor; (c) investment, the amount of capital saved to help future endeavors; and (d) capital stock, the value of machines, buildings and other equipment that help firms produce their goods. While Figure 5 shows a similar story for investment, the relationship with capital in Figure 6 departs from the story. We need to pin down a better story; one way is to look at some statistics. == Stylized facts == We can infer several regularities by eyeballing the data, sometimes called stylized facts. One is persistence. For example, if we take any point in the series above the trend (the x-axis in Figure 3), the probability the next period is still above the trend is very high. However, this persistence wears out over time. Economic activity in the short run is quite predictable, but due to the irregular long-term nature of fluctuations, forecasting in the long run is much more difficult, if not impossible. Another regularity is cyclical variability. Column A of Table 1 lists a measure of this with standard deviations. The magnitudes of fluctuations in output and hours worked are nearly equal. Consumption and productivity are similarly much smoother than output, while investment fluctuates much more than output. The capital stock is the least volatile of the indicators. Yet another regularity is the co-movement between output and the other macroeconomic variables. Figures 4 – 6 illustrate such a relationship. We can measure this in more detail using correlations, as in column B of Table 1. A procyclical variable has a positive correlation since it usually increases during booms and decreases during recessions. Vice versa, a countercyclical variable has a negative correlation. An acyclical variable with a correlation close to zero implies no systematic relationship to the business cycle.
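The two statistics just described, cyclical variability (column A of Table 1) and co-movement with output (column B), amount to a standard deviation and a correlation computed on the detrended series. A minimal sketch, where the input arrays are hypothetical percent deviations from trend:

```python
import numpy as np

def business_cycle_moments(output_dev, other_dev):
    """Volatility and co-movement statistics for detrended series.

    output_dev, other_dev: arrays of percent deviations from trend.
    Returns the standard deviations (cyclical variability) and the
    correlation with output (procyclical > 0, countercyclical < 0,
    acyclical near 0).
    """
    output_dev = np.asarray(output_dev, dtype=float)
    other_dev = np.asarray(other_dev, dtype=float)
    return {
        "std_output": float(np.std(output_dev)),
        "std_series": float(np.std(other_dev)),
        "corr_with_output": float(np.corrcoef(output_dev, other_dev)[0, 1]),
    }
```

For example, a series constructed to move with output yields a correlation near +1 (procyclical), and its mirror image yields a correlation near -1 (countercyclical).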
We find that productivity is slightly procyclical, which suggests workers and capital are more productive when the economy is experiencing a boom. They are not quite as productive when the economy is experiencing a slowdown. Similar explanations follow for consumption and investment, which are strongly procyclical. Labor is also procyclical, while capital stock appears acyclical. Observing these similarities yet seemingly non-deterministic fluctuations in trends, the question arises as to why this occurs. Since people prefer economic booms over recessions, if everyone in the economy makes optimal decisions, these fluctuations must be caused by something outside the decision-making process. So, the key question is: "What main factor influences and subsequently changes the decisions of all actors in an economy?" Economists have come up with many ideas to answer the above question. The one which currently dominates the academic literature on real business cycle theory was introduced by Finn E. Kydland and Edward C. Prescott in their 1982 work Time to Build And Aggregate Fluctuations. They envisioned this factor as technological shocks—i.e., random fluctuations in the productivity level that shifted the constant growth trend up or down. Examples of such shocks include innovations, bad weather, increases in the price of imported oil, stricter environmental and safety regulations, etc. The general gist is that something directly changes the effectiveness of capital and/or labor. This affects the decisions of workers and firms, who in turn change what they buy and produce and thus eventually affect output. Given these shocks, RBC models predict time sequences of allocation for consumption, investment, etc. But exactly how do these productivity shocks cause ups and downs in economic activity? Consider a positive but temporary shock to productivity. This momentarily increases the effectiveness of workers and capital, allowing a given level of capital and labor to produce more output.
Individuals face two types of tradeoffs. One is the consumption-investment decision. Since productivity is higher, people have more output to consume. An individual might choose to consume all of it today. But if he values future consumption, all that extra output might not be worth consuming today. Instead, he may consume some but invest the rest in capital to enhance production in subsequent periods and thus increase future consumption. This explains why investment spending is more volatile than consumption. The life-cycle hypothesis argues that households base their consumption decisions on expected lifetime income, so they prefer "smooth" consumption over time. They will thus save (and invest) in periods of high income and defer consumption to periods of low income. The other decision is the labor-leisure tradeoff. Higher productivity encourages substituting current work for future work since workers will earn more per hour today compared to tomorrow. More labor and less leisure results in greater output, consumption, and investment today. On the other hand, there is an opposing effect: since workers earn more, they may not want to work as much today and in the future. However, given the procyclical nature of labor, it seems that the above substitution effect dominates this income effect. The basic RBC model predicts that, given a temporary shock, output, consumption, investment, and labor all rise above their long-term trends, forming a positive deviation. Furthermore, since more investment means more capital is available, a short-lived shock may impact the future. That is, above-trend behavior may persist even after the shock disappears. This capital accumulation is often referred to as an internal "propagation mechanism" since it may increase the persistence of shocks to output. A string of such productivity shocks will likely result in a boom. Similarly, recessions follow a string of bad shocks to the economy.
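The propagation mechanism can be illustrated with a deliberately simplified model: a fixed saving rate and a geometrically decaying productivity shock. This is not the Kydland–Prescott model, and all parameter values below are illustrative assumptions; the point is only that capital accumulated during the shock keeps output above its steady state even after productivity has returned to normal.

```python
import numpy as np

def simulate_shock(T=40, alpha=0.3, delta=0.1, s=0.2, rho=0.5, shock=0.1):
    """Minimal sketch of internal propagation of a temporary shock.

    Production y = z * k^alpha, capital k' = (1-delta)k + s*y, and the
    productivity shock decays geometrically: z - 1 shrinks by factor rho
    each period. Fixed saving rate s stands in for the household's
    consumption-investment decision.
    """
    # Steady state with z = 1 solves delta * k = s * k^alpha.
    k_ss = (s / delta) ** (1.0 / (1.0 - alpha))
    y_ss = k_ss ** alpha
    k, z = k_ss, 1.0 + shock          # economy starts at steady state, shock hits
    y_path, z_path = [], []
    for _ in range(T):
        y = z * k ** alpha
        y_path.append(y)
        z_path.append(z)
        k = (1.0 - delta) * k + s * y  # capital accumulation (propagation)
        z = 1.0 + rho * (z - 1.0)      # shock dies out geometrically
    return np.array(y_path), np.array(z_path), y_ss
```

Running the sketch, the shock itself is essentially gone after twenty periods, yet output is still above trend because the extra capital built up early on depreciates only gradually.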
Without shocks, the economy would continue following the growth trend with no business cycles. To quantitatively match the stylized facts in Table 1, Kydland and Prescott introduced calibration techniques. Using this methodology, the model closely mimics many business cycle properties. Yet current RBC models have not fully explained all behavior, and neoclassical economists are still searching for better variations. The main assumption in RBC theory is that individuals and firms respond optimally over the long run. It follows that business cycles exhibited in an economy are chosen in preference to no business cycles. This is not to say that people like to be in a recession. Slumps are preceded by an undesirable productivity shock, which constrains the situation. However, given these new constraints, people will still achieve the best outcomes possible, and markets will react efficiently. So when there is a slump, people choose to be in it because, given the situation, it is the best solution. This suggests laissez-faire (non-intervention) is the best policy of the government towards the economy, but given the abstract nature of the model, this has been debated. A precursor to RBC theory was developed by monetary economists Milton Friedman and Robert Lucas in the early 1970s. They envisioned that misperception of wages influenced people's decisions. Booms and recessions occurred when workers perceived wages as higher or lower than they were. This meant they worked and consumed more or less than otherwise. There would be no booms or recessions in a world of perfect information. === Calibration === Unlike estimation, which is usually used for constructing economic models, calibration only returns to the drawing board to change the model in the face of overwhelming evidence against the model being correct; this inverts the burden of proof away from the model builder. It is changing the model to fit the data. 
Since RBC models explain data ex post, it is very difficult to falsify any one model that could be hypothesized to explain the data. RBC models are highly sample-specific, leading some to believe they have little or no predictive power. === Structural variables === Crucial to RBC models, "plausible values" for structural variables such as the discount and capital depreciation rates are used to create simulated variable paths. These tend to be estimated from econometric studies, with 95% confidence intervals. If the full range of possible values for these variables is used, correlation coefficients between actual and simulated paths of economic variables can shift wildly, leading some to question how successful a model that only achieves a coefficient of 80% is. == Criticisms == The real business cycle theory relies on three assumptions which, according to economists such as Greg Mankiw and Larry Summers, are unrealistic: 1. Large and sudden changes in available production technology drive the model. Summers noted that Prescott could not suggest any specific technological shock for an actual downturn apart from the oil price shock in the 1970s. Furthermore, there is no microeconomic evidence for the large real shocks that need to drive these models. Real business cycle models, as a rule, are not subjected to tests against competing alternatives which are easy to support (Summers 1986). 2. Unemployment reflects changes in the amount people want to work. Economist Kevin D. Hoover argued that this assumption would mean that 25% unemployment at the height of the Great Depression (1933) would be the result of a mass decision to take a long vacation. 3. Monetary policy is irrelevant to economic fluctuations. Nowadays, it is widely agreed that wages and prices do not adjust as quickly as needed to restore equilibrium. Therefore, most economists, even among the new classicists, do not accept the policy-ineffectiveness proposition.
Another major criticism is that real business cycle models can not account for the dynamics displayed by the U.S. gross national product. As Larry Summers said: "(My view is that) real business cycle models of the type urged on us by [Ed] Prescott have nothing to do with the business cycle phenomena observed in the United States or other capitalist economies." —(Summers 1986) == See also == == References == == Further reading == Cooley, Thomas F. (1995). Frontiers of Business Cycle Research. Princeton: Princeton University Press. ISBN 978-0-691-04323-4. Gomes, Joao; Greenwood, Jeremy; Rebelo, Sergio (2001). "Equilibrium Unemployment" (PDF). Journal of Monetary Economics. 48 (1): 109–152. doi:10.1016/S0304-3932(01)00071-X. S2CID 2503384. Hansen, Gary D. (1985). "Indivisible labor and the business cycle". Journal of Monetary Economics. 16 (3): 309–327. CiteSeerX 10.1.1.335.3000. doi:10.1016/0304-3932(85)90039-X. Heijdra, Ben J. (2009). "Real Business Cycles". Foundations of Modern Macroeconomics (2nd ed.). Oxford: Oxford University Press. pp. 495–552. ISBN 978-0-19-921069-5. Kydland, Finn E.; Prescott, Edward C. (1982). "Time to Build and Aggregate Fluctuations". Econometrica. 50 (6): 1345–1370. doi:10.2307/1913386. JSTOR 1913386. Long, John B. Jr.; Plosser, Charles (1983). "Real Business Cycles". Journal of Political Economy. 91 (1): 39–69. doi:10.1086/261128. S2CID 62882227. Lucas, Robert E. Jr. (1977). "Understanding Business Cycles". Carnegie-Rochester Conference Series on Public Policy. 5: 7–29. doi:10.1016/0167-2231(77)90002-1. Plosser, Charles I. (1989). "Understanding real business cycles". Journal of Economic Perspectives. 3 (3): 51–77. doi:10.1257/jep.3.3.51. JSTOR 1942760. Romer, David (2011). "Real-Business-Cycle Theory". Advanced Macroeconomics (Fourth ed.). New York: McGraw-Hill. pp. 189–237. ISBN 978-0-07-351137-5.
Wikipedia/Real_business-cycle_theory
Microeconomics is a branch of economics that studies the behavior of individuals and firms in making decisions regarding the allocation of scarce resources and the interactions among these individuals and firms. Microeconomics focuses on the study of individual markets, sectors, or industries as opposed to the economy as a whole, which is studied in macroeconomics. One goal of microeconomics is to analyze the market mechanisms that establish relative prices among goods and services and allocate limited resources among alternative uses. Microeconomics shows conditions under which free markets lead to desirable allocations. It also analyzes market failure, where markets fail to produce efficient results. While microeconomics focuses on firms and individuals, macroeconomics focuses on the total of economic activity, dealing with the issues of growth, inflation, and unemployment—and with national policies relating to these issues. Microeconomics also deals with the effects of economic policies (such as changing taxation levels) on microeconomic behavior and thus on the aforementioned aspects of the economy. Particularly in the wake of the Lucas critique, much of modern macroeconomic theory has been built upon microfoundations—i.e., based upon basic assumptions about micro-level behavior. == Assumptions and definitions == Microeconomic study historically has been performed according to general equilibrium theory, developed by Léon Walras in Elements of Pure Economics (1874), and partial equilibrium theory, introduced by Alfred Marshall in Principles of Economics (1890). Microeconomic theory typically begins with the study of a single rational and utility-maximizing individual. To economists, rationality means an individual possesses stable preferences that are both complete and transitive. The technical assumption that preference relations are continuous is needed to ensure the existence of a utility function.
Although microeconomic theory can continue without this assumption, it would make comparative statics impossible since there is no guarantee that the resulting utility function would be differentiable. Microeconomic theory progresses by defining a competitive budget set, which is a subset of the consumption set. It is at this point that economists make the technical assumption that preferences are locally non-satiated. Without the assumption of LNS (local non-satiation), there is no guarantee that a utility-maximizing individual would exhaust their entire budget. With the necessary tools and assumptions in place, the utility maximization problem (UMP) is developed. The utility maximization problem is the heart of consumer theory. The utility maximization problem attempts to explain the action axiom by imposing rationality axioms on consumer preferences and then mathematically modeling and analyzing the consequences. The utility maximization problem serves not only as the mathematical foundation of consumer theory but as a metaphysical explanation of it as well. That is, the utility maximization problem is used by economists not only to explain what or how individuals make choices, but also why individuals make choices. The utility maximization problem is a constrained optimization problem in which an individual seeks to maximize utility subject to a budget constraint. Economists use the extreme value theorem to guarantee that a solution to the utility maximization problem exists. That is, since the budget constraint is both bounded and closed, a solution to the utility maximization problem exists. Economists call the solution to the utility maximization problem a Walrasian demand function or correspondence. The utility maximization problem has so far been developed by taking consumer tastes (i.e. consumer utility) as primitive. However, an alternative way to develop microeconomic theory is by taking consumer choice as primitive.
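A concrete instance of the utility maximization problem is the textbook Cobb-Douglas case, where the Walrasian demand has a closed form: with exponents summing to one, the consumer spends the fixed share a_i of wealth on good i. The function below is an illustrative sketch of that standard result, not a general UMP solver.

```python
def cobb_douglas_demand(alphas, prices, wealth):
    """Walrasian demand for Cobb-Douglas utility u(x) = prod_i x_i^a_i.

    Normalizing the exponents to sum to one, the solution to the UMP is
    x_i = a_i * wealth / p_i: each good receives the budget share a_i.
    """
    total = sum(alphas)
    return [a / total * wealth / p for a, p in zip(alphas, prices)]
```

Note that the demand exhausts the budget, consistent with local non-satiation: summing p_i * x_i over goods returns the full wealth.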
This model of microeconomic theory is referred to as revealed preference theory. The theory of supply and demand usually assumes that markets are perfectly competitive. This implies that there are many buyers and sellers in the market and none of them have the capacity to significantly influence prices of goods and services. In many real-life transactions, the assumption fails because some individual buyers or sellers have the ability to influence prices. Quite often, a sophisticated analysis is required to understand the demand-supply equation of a good. However, the theory works well in situations meeting these assumptions. Mainstream economics does not assume a priori that markets are preferable to other forms of social organization. In fact, much analysis is devoted to cases where market failures lead to resource allocation that is suboptimal and creates deadweight loss. A classic example of suboptimal resource allocation is that of a public good. In such cases, economists may attempt to find policies that avoid waste, either directly by government control, indirectly by regulation that induces market participants to act in a manner consistent with optimal welfare, or by creating "missing markets" to enable efficient trading where none had previously existed. This is studied in the field of collective action and public choice theory. "Optimal welfare" usually takes on a Paretian norm, which is a mathematical application of the Kaldor–Hicks method. This can diverge from the utilitarian goal of maximizing utility because it does not consider the distribution of goods between people. In positive economics (microeconomics), claims of market failure have limited implications unless the economist's own beliefs are mixed with the theory. The demand for various commodities by individuals is generally thought of as the outcome of a utility-maximizing process, with each individual trying to maximize their own utility under a budget constraint and a given consumption set.
=== Allocation of scarce resources === Individuals and firms need to allocate limited resources to ensure all agents in the economy are well off. Firms decide which goods and services to produce, considering low costs involving labour, materials and capital as well as potential profit margins. Consumers choose the goods and services they want that will maximize their happiness, taking into account their limited wealth. The government can make these allocation decisions, or they can be made independently by consumers and firms. For example, in the former Soviet Union, the government played a part in informing car manufacturers which cars to produce and which consumers would gain access to a car. == History == Economists commonly consider themselves microeconomists or macroeconomists. The difference between microeconomics and macroeconomics likely was introduced in 1933 by the Norwegian economist Ragnar Frisch, the co-recipient of the first Nobel Memorial Prize in Economic Sciences in 1969. However, Frisch did not actually use the word "microeconomics", instead drawing distinctions between "micro-dynamic" and "macro-dynamic" analysis in a way similar to how the words "microeconomics" and "macroeconomics" are used today. The first known use of the term "microeconomics" in a published article was from Pieter de Wolff in 1941, who broadened the term "micro-dynamics" into "microeconomics". == Microeconomic theory == === Consumer demand theory === Consumer demand theory relates preferences for the consumption of both goods and services to the consumption expenditures; ultimately, this relationship between preferences and consumption expenditures is used to relate preferences to consumer demand curves. The link between personal preferences, consumption and the demand curve is one of the most closely studied relations in economics.
It is a way of analyzing how consumers may achieve equilibrium between preferences and expenditures by maximizing utility subject to consumer budget constraints. === Production theory === Production theory is the study of production, or the economic process of converting inputs into outputs. Production uses resources to create a good or service that is suitable for use, gift-giving in a gift economy, or exchange in a market economy. This can include manufacturing, storing, shipping, and packaging. Some economists define production broadly as all economic activity other than consumption. They see every commercial activity other than the final purchase as some form of production. === Cost-of-production theory of value === The cost-of-production theory of value states that the price of an object or condition is determined by the sum of the cost of the resources that went into making it. The cost can comprise any of the factors of production (including labor, capital, or land) and taxation. Technology can be viewed either as a form of fixed capital (e.g. an industrial plant) or circulating capital (e.g. intermediate goods). In the mathematical model for the cost of production, the short-run total cost is equal to fixed cost plus total variable cost. The fixed cost refers to the cost that is incurred regardless of how much the firm produces. The variable cost is a function of the quantity of an object being produced. The cost function can be used to characterize production through the duality theory in economics, developed mainly by Ronald Shephard (1953, 1970) and other scholars (Sickles & Zelenyuk, 2019, ch. 2). === Fixed and variable costs === Fixed cost (FC) – This cost does not change with output. It includes business expenses such as rent, salaries and utility bills. Variable cost (VC) – This cost changes as output changes. This includes raw materials, delivery costs and production supplies. 
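The short-run cost identity stated above, TC(q) = FC + VC(q), can be written out directly. The linear variable-cost form below is an illustrative assumption; any function of quantity would do.

```python
def short_run_total_cost(fixed_cost, unit_variable_cost, quantity):
    """Short-run total cost TC(q) = FC + VC(q), with VC linear in q here.

    fixed_cost is incurred regardless of output (rent, salaries);
    the variable part scales with quantity (materials, delivery).
    """
    return fixed_cost + unit_variable_cost * quantity

def average_cost(fixed_cost, unit_variable_cost, quantity):
    """Average cost TC(q) / q: the fixed cost is spread over more units as q grows."""
    return short_run_total_cost(fixed_cost, unit_variable_cost, quantity) / quantity
```

With this cost structure, average cost falls as output rises, since the same fixed cost is divided among more units.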
Over a short time period (a few months), most costs are fixed costs, as the firm will have to pay for salaries, contracted shipment and materials used to produce various goods. Over a longer time period (2-3 years), costs can become variable. Firms can decide to reduce output, purchase fewer materials and even sell some machinery. Over 10 years, most costs become variable as workers can be laid off or new machinery can be bought to replace the old machinery. Sunk costs – This is a fixed cost that has already been incurred and cannot be recovered. An example of this is R&D spending, as in the pharmaceutical industry. Hundreds of millions of dollars are spent to achieve new drug breakthroughs, but this is challenging as it is increasingly harder to find new breakthroughs and meet tighter regulatory standards. Thus many projects are written off, leading to losses of millions of dollars. === Opportunity cost === Opportunity cost is closely related to the idea of time constraints. One can do only one thing at a time, which means that, inevitably, one is always giving up other things. The opportunity cost of any activity is the value of the next-best alternative thing one may have done instead. Opportunity cost depends only on the value of the next-best alternative. It does not matter whether one has five alternatives or 5,000. Opportunity costs can tell when not to do something as well as when to do something. For example, one may like waffles, but like chocolate even more. If someone offers only waffles, one would take them. But if offered waffles or chocolate, one would take the chocolate. The opportunity cost of eating waffles is sacrificing the chance to eat chocolate. Because the cost of not eating the chocolate is higher than the benefits of eating the waffles, it makes no sense to choose waffles. Of course, if one chooses chocolate, they are still faced with the opportunity cost of giving up having waffles.
But one is willing to do that because the waffle's opportunity cost is lower than the benefits of the chocolate. Opportunity costs are unavoidable constraints on behavior because one has to decide what's best and give up the next-best alternative. === Price theory === Microeconomics is also known as price theory to highlight the significance of prices in relation to buyers and sellers, as these agents determine prices through their individual actions. Price theory is a field of economics that uses the supply and demand framework to explain and predict human behavior. It is associated with the Chicago School of Economics. Price theory studies competitive equilibrium in markets to yield testable hypotheses that can be rejected. Price theory is not the same as microeconomics. Strategic behavior, such as the interactions among sellers in a market where they are few, is a significant part of microeconomics but is not emphasized in price theory. Price theorists focus on competition, believing it to be a reasonable description of most markets that leaves room to study additional aspects of tastes and technology. As a result, price theory tends to use less game theory than microeconomics does. Price theory focuses on how agents respond to prices, but its framework can be applied to a wide variety of socioeconomic issues that might not seem to involve prices at first glance. Price theorists have influenced several other fields, including developing public choice theory and law and economics. Price theory has been applied to issues previously thought of as outside the purview of economics, such as criminal justice, marriage, and addiction. == Microeconomic models == === Supply and demand === Supply and demand is an economic model of price determination in a perfectly competitive market.
It concludes that in a perfectly competitive market with no externalities, per unit taxes, or price controls, the unit price for a particular good is the price at which the quantity demanded by consumers equals the quantity supplied by producers. This price results in a stable economic equilibrium. Prices and quantities have been described as the most directly observable attributes of goods produced and exchanged in a market economy. The theory of supply and demand is an organizing principle for explaining how prices coordinate the amounts produced and consumed. In microeconomics, it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price-setting power. For a given market of a commodity, demand is the relation of the quantity that all buyers would be prepared to purchase at each unit price of the good. Demand is often represented by a table or a graph showing price and quantity demanded (as in the figure). Demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. A term for this is "constrained utility maximization" (with income and wealth as the constraints on demand). Here, utility refers to the hypothesized relation of each individual consumer for ranking different commodity bundles as more or less preferred. The law of demand states that, in general, price and quantity demanded in a given market are inversely related. That is, the higher the price of a product, the less of it people would be prepared to buy (other things unchanged). As the price of a commodity falls, consumers move toward it from relatively more expensive goods (the substitution effect). In addition, purchasing power from the price decline increases ability to buy (the income effect). 
Other factors can change demand; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. All determinants are predominantly taken as constant factors of demand and supply. Supply is the relation between the price of a good and the quantity available for sale at that price. It may be represented as a table or graph relating price and quantity supplied. Producers, for example business firms, are hypothesized to be profit maximizers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. Supply is typically represented as a function relating price and quantity, if other factors are unchanged. That is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. The higher price makes it profitable to increase production. Just as on the demand side, the position of the supply can shift, say from a change in the price of a productive input or a technical improvement. The "Law of Supply" states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. Here as well, the determinants of supply, such as price of substitutes, cost of production, technology applied and various factors of inputs of production are all taken to be constant for a specific time period of evaluation of supply. Market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. At a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. This is posited to bid the price up. At a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. This pushes the price down. 
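The equilibrium logic just described has a closed form in the simplest linear case: with demand Qd = a - b*p and supply Qs = c + d*p, setting Qd = Qs gives p* = (a - c) / (b + d). A minimal sketch with hypothetical coefficients:

```python
def linear_equilibrium(a, b, c, d):
    """Equilibrium of linear demand Qd = a - b*p and supply Qs = c + d*p.

    Setting Qd = Qs yields p* = (a - c) / (b + d); the equilibrium
    quantity then follows from either curve. At any price above p*
    there is a surplus (Qs > Qd), and below p* a shortage (Qd > Qs),
    which is what pushes the price back toward equilibrium.
    """
    p_star = (a - c) / (b + d)
    q_star = a - b * p_star
    return p_star, q_star
```

For example, with demand Qd = 10 - p and supply Qs = 2 + p, the curves cross at a price of 4 and a quantity of 6; at a price of 3 the quantity demanded (7) exceeds the quantity supplied (5), a shortage that bids the price up.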
The model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilize at the price that makes quantity supplied equal to quantity demanded. Similarly, demand-and-supply theory predicts a new price-quantity combination from a shift in demand (as in the figure), or in supply. For a given quantity of a consumer good, the point on the demand curve indicates the value, or marginal utility, to consumers for that unit. It measures what the consumer would be prepared to pay for that unit. The corresponding point on the supply curve measures marginal cost, the increase in total cost to the supplier for the corresponding unit of the good. The price in equilibrium is determined by supply and demand. In a perfectly competitive market, supply and demand equate marginal cost and marginal utility at equilibrium. On the supply side of the market, some factors of production are described as (relatively) variable in the short run, which affects the cost of changing output levels. Their usage rates can be changed easily, such as electrical power, raw-material inputs, and overtime and temp work. Other inputs are relatively fixed, such as plant and equipment and key personnel. In the long run, all inputs may be adjusted by management. These distinctions translate to differences in the elasticity (responsiveness) of the supply curve in the short and long runs and corresponding differences in the price-quantity change from a shift on the supply or demand side of the market. Marginalist theory, such as above, describes the consumers as attempting to reach most-preferred positions, subject to income and wealth constraints while producers attempt to maximize profits subject to their own constraints, including demand for goods produced, technology, and the price of inputs. For the consumer, that point comes where marginal utility of a good, net of price, reaches zero, leaving no net gain from further consumption increases.
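The consumer's stopping rule just described can be sketched with invented numbers: consumption expands while marginal utility net of price is positive, and stops where it reaches zero.

```python
PRICE = 4  # hypothetical price of the good
# Hypothetical diminishing marginal utility of the 1st, 2nd, ... unit,
# measured in money terms (numbers invented for illustration).
marginal_utility = [10, 8, 6, 4, 2]

# Consume each unit whose net marginal gain (MU - price) is >= 0; the
# 4th unit yields exactly zero net gain, and a 5th would be a net loss.
quantity = 0
for mu in marginal_utility:
    if mu - PRICE < 0:
        break
    quantity += 1
# quantity is 4: consumption stops where marginal utility net of price is zero
```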
Analogously, the producer compares marginal revenue (identical to price for the perfect competitor) against the marginal cost of a good, with marginal profit the difference. At the point where marginal profit reaches zero, further increases in production of the good stop. For movement to market equilibrium and for changes in equilibrium, price and quantity also change "at the margin": more-or-less of something, rather than necessarily all-or-nothing. Other applications of demand and supply include the distribution of income among the factors of production, including labor and capital, through factor markets. In a competitive labor market, for example, the quantity of labor employed and the price of labor (the wage rate) depend on the demand for labor (from employers for production) and the supply of labor (from potential workers). Labor economics examines the interaction of workers and employers through such markets to explain patterns and changes of wages and other labor income, labor mobility, and (un)employment, productivity through human capital, and related public-policy issues. Demand-and-supply analysis is used to explain the behavior of perfectly competitive markets, but as a standard of comparison it can be extended to any type of market. It can also be generalized to explain variables across the economy, for example, total output (estimated as real GDP) and the general price level, as studied in macroeconomics. Tracing the qualitative and quantitative effects of variables that change supply and demand, whether in the short or long run, is a standard exercise in applied economics. Economic theory may also specify conditions such that supply and demand through the market is an efficient mechanism for allocating resources.
== Market structure == Market structure refers to features of a market, including the number of firms in the market, the distribution of market shares between them, product uniformity across firms, how easy it is for firms to enter and exit the market, and forms of competition in the market. A market structure can have several types of interacting market systems. Different forms of markets are a feature of capitalism and market socialism, with advocates of state socialism often criticizing markets and aiming to replace them with varying degrees of government-directed economic planning. Competition acts as a regulatory mechanism for market systems, with government providing regulations where the market cannot be expected to regulate itself. Regulations help to mitigate negative externalities of goods and services when the private equilibrium of the market does not match the social equilibrium. One example is building codes: in a market regulated purely by competition, serious injuries or deaths might have to occur before companies began improving structural safety, since consumers may at first be neither aware of nor concerned enough about safety issues to pressure companies to provide them, and companies would have a profit motive not to provide proper safety features because doing so would cut into their profits. The concept of "market type" is different from the concept of "market structure". Nevertheless, there are a variety of types of markets. The different market structures produce cost curves based on the type of structure present. The different curves are developed based on the costs of production, specifically the graph contains marginal cost, average total cost, average variable cost, average fixed cost, and marginal revenue, which is sometimes equal to the demand, average revenue, and price in a price-taking firm.
=== Perfect competition === Perfect competition is a situation in which numerous small firms producing identical products compete against each other in a given industry. Perfect competition leads to firms producing the socially optimal output level at the minimum possible cost per unit. Firms in perfect competition are "price takers" (they do not have enough market power to profitably increase the price of their goods or services). A good example would be that of digital marketplaces, such as eBay, on which many different sellers sell similar products to many different buyers. Consumers in a perfectly competitive market have perfect knowledge about the products that are being sold in this market. === Imperfect competition === Imperfect competition is a type of market structure showing some but not all features of competitive markets. In perfect competition, market power is not achievable because the large number of producers creates intense competition. Therefore, prices are brought down to the marginal cost level. In a monopoly, market power is achieved by one firm, leading to prices higher than the marginal cost level. Between these two types of markets are firms that are neither perfectly competitive nor monopolistic. Firms such as Pepsi and Coke, or Sony, Nintendo, and Microsoft, dominate the cola and video game industries, respectively. These firms are in imperfect competition. === Monopolistic competition === Monopolistic competition is a situation in which many firms with slightly different products compete. Production costs are above what may be achieved by perfectly competitive firms, but society benefits from the product differentiation. Examples of industries with market structures similar to monopolistic competition include restaurants, cereal, clothing, shoes, and service industries in large cities. === Monopoly === A monopoly is a market structure in which a market or industry is dominated by a single supplier of a particular good or service.
Because monopolies have no competition, they tend to sell goods and services at a higher price and produce below the socially optimal output level. However, not all monopolies are a bad thing, especially in industries where multiple firms would result in more costs than benefits (i.e. natural monopolies). Natural monopoly: A monopoly in an industry where one producer can produce output at a lower cost than many small producers. === Oligopoly === An oligopoly is a market structure in which a market or industry is dominated by a small number of firms (oligopolists). Oligopolies can create the incentive for firms to engage in collusion and form cartels that reduce competition leading to higher prices for consumers and less overall market output. Alternatively, oligopolies can be fiercely competitive and engage in flamboyant advertising campaigns. Duopoly: A special case of an oligopoly, with only two firms. Game theory can elucidate behavior in duopolies and oligopolies. === Monopsony === A monopsony is a market where there is only one buyer and many sellers. === Bilateral monopoly === A bilateral monopoly is a market consisting of both a monopoly (a single seller) and a monopsony (a single buyer). === Oligopsony === An oligopsony is a market where there are a few buyers and many sellers. == Game theory == Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents. The term "game" here implies the study of any strategic interaction between people. Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers & acquisitions pricing, fair division, duopolies, oligopolies, social network formation, agent-based computational economics, general equilibrium, mechanism design, and voting systems, and across such broad areas as experimental economics, behavioral economics, information economics, industrial organization, and political economy. 
== Information economics == Information economics is a branch of microeconomic theory that studies how information and information systems affect an economy and economic decisions. Information has special characteristics. It is easy to create but hard to trust. It is easy to spread but hard to control. It influences many decisions. These special characteristics (as compared with other types of goods) complicate many standard economic theories. The economics of information has recently become of great interest to many, possibly owing to the rise of information-based companies inside the technology industry. From a game-theoretic approach, the usual assumption that agents have complete information can be loosened to further examine the consequences of having incomplete information. This gives rise to many results which are applicable to real-life situations. For example, if one does loosen this assumption, then it is possible to scrutinize the actions of agents in situations of uncertainty. It is also possible to more fully understand the impacts, both positive and negative, of agents seeking out or acquiring information. == Applied == Applied microeconomics includes a range of specialized areas of study, many of which draw on methods from other fields. Economic history examines the evolution of the economy and economic institutions, using methods and techniques from the fields of economics, history, geography, sociology, psychology, and political science. Education economics examines the organization of education provision and its implication for efficiency and equity, including the effects of education on productivity. Financial economics examines topics such as the structure of optimal portfolios, the rate of return to capital, econometric analysis of security returns, and corporate financial behavior. Health economics examines the organization of health care systems, including the role of the health care workforce and health insurance programs.
Industrial organization examines topics such as the entry and exit of firms, innovation, and the role of trademarks. Law and economics applies microeconomic principles to the selection and enforcement of competing legal regimes and their relative efficiencies. Political economy examines the role of political institutions in determining policy outcomes. Public economics examines the design of government tax and expenditure policies and economic effects of these policies (e.g., social insurance programs). Urban economics, which examines the challenges faced by cities, such as sprawl, air and water pollution, traffic congestion, and poverty, draws on the fields of urban geography and sociology. Labor economics examines primarily labor markets, but comprises a large range of public policy issues such as immigration, minimum wages, or inequality. == See also == Macroeconomics First-order approach Critique of political economy == References == == Further reading == == External links == X-Lab: A Collaborative Micro-Economics and Social Sciences Research Laboratory Simulations in Microeconomics Archived 2010-10-31 at the Wayback Machine A brief history of microeconomics
Wikipedia/Price_theory
Evidential decision theory (EDT) is a school of thought within decision theory which states that, when a rational agent is confronted with a set of possible actions, one should select the action with the highest news value, that is, the action which would be indicative of the best outcome in expectation if one received the "news" that it had been taken. In other words, it recommends to "do what you most want to learn that you will do." EDT contrasts with causal decision theory (CDT), which prescribes taking the action that will causally produce the best outcome. While these two theories agree in many cases, they give different verdicts in certain philosophical thought experiments. For example, EDT prescribes taking only one box in Newcomb's paradox, while CDT recommends taking both boxes. == Formal description == In a 1976 paper, Allan Gibbard and William Harper distinguished between two kinds of expected utility maximization. EDT proposes to maximize the expected utility of actions computed using conditional probabilities, namely

V(A) = Σ_j P(O_j | A) D(O_j),

where D(O_j) is the desirability of outcome O_j and P(O_j | A) is the conditional probability of O_j given that action A occurs. This is in contrast to the counterfactual formulation of expected utility used by causal decision theory,

U(A) = Σ_j P(A □→ O_j) D(O_j),

where the expression P(A □→ O_j) indicates the probability of outcome O_j in the counterfactual situation in which action A is performed.
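The evidential formula can be made concrete as a short function. A minimal sketch; the outcome labels, desirabilities, and probabilities below are invented for illustration:

```python
# News value V(A) = sum over outcomes O of P(O | A) * D(O).
def evidential_value(p_outcome_given_action, desirability):
    # p_outcome_given_action: dict mapping outcome -> P(outcome | action)
    # desirability: dict mapping outcome -> D(outcome)
    return sum(p * desirability[o]
               for o, p in p_outcome_given_action.items())

# Toy example: the action that is stronger evidence of the good outcome
# has the higher news value, so EDT selects it.
D = {"good": 100, "bad": 0}
v1 = evidential_value({"good": 0.9, "bad": 0.1}, D)  # news value 90
v2 = evidential_value({"good": 0.2, "bad": 0.8}, D)  # news value 20
```

CDT's U(A) has the same arithmetic shape but would be fed counterfactual probabilities P(A □→ O) instead of conditionals, which is exactly where the two theories can diverge.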
Since P(A □→ O_j) and P(O_j | A) are not always equal, these formulations of expected utility are not equivalent, leading to differences in the actions prescribed by EDT and CDT. == Thought experiments == Different decision theories are often examined in their recommendations for action in different thought experiments. === Newcomb's paradox === In Newcomb's paradox, there is a predictor, a player, and two boxes designated A and B. The predictor is able to reliably predict the player's choices, say with 99% accuracy. The player is given a choice between taking only box B, or taking both boxes A and B. The player knows the following: Box A is transparent and always contains a visible $1,000. Box B is opaque, and its content has already been set by the predictor: If the predictor has predicted the player will take both boxes A and B, then box B contains nothing. If the predictor has predicted that the player will take only box B, then box B contains $1,000,000. The player does not know what the predictor predicted or what box B contains while making the choice. Should the player take both boxes, or only box B? Evidential decision theory recommends taking only box B in this scenario, because taking only box B is strong evidence that the predictor anticipated that the player would only take box B, and therefore it is very likely that box B contains $1,000,000. Conversely, choosing to take both boxes is strong evidence that the predictor knew that the player would take both boxes; therefore we should expect that box B contains nothing. Causal decision theory (CDT), by contrast, recommends that the player take both boxes, because by the time the player chooses, the predictor has already made its prediction (so the player's action cannot affect the contents of the boxes).
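EDT's one-boxing verdict can be checked numerically, using the 99% predictor accuracy and the payoffs from the story (a sketch; nothing beyond those figures is assumed):

```python
ACCURACY = 0.99  # probability the predictor correctly predicted the chosen action

# News value of each choice: P(box B is full | action) weights the payoffs.
# Taking only B is strong evidence B holds $1,000,000.
v_one_box = ACCURACY * 1_000_000 + (1 - ACCURACY) * 0
# Taking both is strong evidence B is empty, leaving mostly the visible $1,000.
v_two_box = (1 - ACCURACY) * (1_000_000 + 1_000) + ACCURACY * 1_000

# EDT recommends the action with the higher news value: take only box B.
```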
Formally, the expected utilities in EDT are

V(take only B) = P(1M in box B | take only B) × $1,000,000 + P(nothing in box B | take only B) × $0
= 0.99 × $1,000,000 + 0.01 × $0 = $990,000

V(take both boxes) = P(1M in box B | take both boxes) × $1,001,000 + P(nothing in box B | take both boxes) × $1,000
= 0.01 × $1,001,000 + 0.99 × $1,000 = $11,000

Since V(take only B) > V(take both boxes), EDT recommends taking only box B. === Twin prisoner's dilemma === In this variation on the Prisoner's Dilemma thought experiment, an agent must choose whether to cooperate or defect against her psychological twin, whose reasoning processes are exactly analogous to her own. Aomame and her psychological twin are put in separate rooms and cannot communicate. If they both cooperate, they each get $5. If they both defect, they each get $1. If one cooperates and the other defects, the defector gets $10 and the cooperator gets $0. Assuming Aomame only cares about her individual payout, what should she do? Evidential decision theory recommends cooperating in this situation, because Aomame's decision to cooperate is strong evidence that her psychological twin will also cooperate, meaning that her expected payoff is $5. On the other hand, if Aomame defects, this would be strong evidence that her twin will also defect, resulting in an expected payoff of $1.
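Treating Aomame's choice as decisive evidence about her twin's choice, this comparison can be sketched in a few lines (payoffs taken from the text):

```python
# P(twin mirrors Aomame's choice) is taken as 1, as in the text's
# assumption of exactly analogous reasoning processes.
p_mirror = 1.0

# News value of each action for Aomame's own payout.
v_cooperate = p_mirror * 5 + (1 - p_mirror) * 0    # both cooperate: $5
v_defect = (1 - p_mirror) * 10 + p_mirror * 1      # both defect: $1

# EDT recommends cooperating, since the $5 news value beats $1.
```

A CDT analysis of the same table would hold the twin's choice fixed and find defection dominant, which is exactly the disagreement the thought experiment is built to expose.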
Formally, the expected utilities are

V(Aomame cooperates) = P(twin cooperates | Aomame cooperates) × $5 + P(twin defects | Aomame cooperates) × $0
= 1 × $5 + 0 × $0 = $5

V(Aomame defects) = P(twin cooperates | Aomame defects) × $10 + P(twin defects | Aomame defects) × $1
= 0 × $10 + 1 × $1 = $1.

Since V(Aomame cooperates) > V(Aomame defects), EDT recommends cooperating. == Other supporting arguments == Even if one puts less credence on evidential decision theory, it may be reasonable to act as if EDT were true. Namely, because EDT can involve the actions of many correlated decision-makers, its stakes may be higher than causal decision theory and thus take priority. == Criticism == David Lewis has characterized evidential decision theory as promoting "an irrational policy of managing the news". James M. Joyce asserted, "Rational agents choose acts on the basis of their causal efficacy, not their auspiciousness; they act to bring about good results even when doing so might betoken bad news." == See also == == References == == External links == Causal Decision Theory at the Stanford Encyclopedia of Philosophy
Wikipedia/Evidential_decision_theory
Modern monetary theory or modern money theory (MMT) is a heterodox macroeconomic theory that describes currency as a public monopoly and unemployment as evidence that a currency monopolist is overly restricting the supply of the financial assets needed to pay taxes and satisfy savings desires. According to MMT, governments do not need to worry about accumulating debt since they can pay interest by printing money. MMT argues that the primary risk once the economy reaches full employment is inflation, which acts as the only constraint on spending. MMT also argues that inflation can be controlled by increasing taxes on everyone to reduce the spending capacity of the private sector. MMT is opposed to the mainstream understanding of macroeconomic theory and has been criticized heavily by many mainstream economists. MMT is also strongly opposed by members of the Austrian school of economics. == Principles == MMT's main tenets are that a government that issues its own fiat money:

Can pay for goods, services, and financial assets without a need to first collect money in the form of taxes or debt issuance in advance of such purchases
Cannot be forced to default on debt denominated in its own currency
Is limited in its money creation and purchases only by inflation, which accelerates once the real resources (labour, capital and natural resources) of the economy are utilized at full employment
Should strengthen automatic stabilisers to control demand-pull inflation, rather than relying upon discretionary tax changes
Issues bonds as a monetary policy device, rather than as a funding device
Uses taxation to provide the fiscal space to spend without causing inflation and also to give a value to the currency

Taxation is often said in MMT not to fund the spending of a currency-issuing government, but without it no real spending is possible. The first four MMT tenets do not conflict with mainstream economics understanding of how money creation and inflation works.
However, MMT economists disagree with mainstream economics about the fifth tenet: the impact of government deficits on interest rates. == History == MMT synthesizes ideas from the state theory of money of Georg Friedrich Knapp (also known as chartalism) and the credit theory of money of Alfred Mitchell-Innes, the functional finance proposals of Abba Lerner, Hyman Minsky's views on the banking system and Wynne Godley's sectoral balances approach. Knapp wrote in 1905 that "money is a creature of law", rather than a commodity. Knapp contrasted his state theory of money with the Gold Standard view of "metallism", where the value of a unit of currency depends on the quantity of precious metal it contains or for which it may be exchanged. He said that the state can create pure paper money and make it exchangeable by recognizing it as legal tender, with the criterion for the money of a state being "that which is accepted at the public pay offices". The prevailing view of money was that it had evolved from systems of barter to become a medium of exchange because it represented a durable commodity which had some use value, but proponents of MMT such as Randall Wray and Mathew Forstater said that more general statements appearing to support a chartalist view of tax-driven paper money appear in the earlier writings of many classical economists, including Adam Smith, Jean-Baptiste Say, J. S. Mill, Karl Marx, and William Stanley Jevons. Alfred Mitchell-Innes wrote in 1914 that money exists not as a medium of exchange but as a standard of deferred payment, with government money being debt the government may reclaim through taxation. Innes said: Whenever a tax is imposed, each taxpayer becomes responsible for the redemption of a small part of the debt which the government has contracted by its issues of money, whether coins, certificates, notes, drafts on the treasury, or by whatever name this money is called. 
He has to acquire his portion of the debt from some holder of a coin or certificate or other form of government money, and present it to the Treasury in liquidation of his legal debt. He has to redeem or cancel that portion of the debt ... The redemption of government debt by taxation is the basic law of coinage and of any issue of government 'money' in whatever form. Knapp and "chartalism" are referenced by John Maynard Keynes in the opening pages of his 1930 Treatise on Money and appear to have influenced Keynesian ideas on the role of the state in the economy. By 1947, when Abba Lerner wrote his article "Money as a Creature of the State", economists had largely abandoned the idea that the value of money was closely linked to gold. Lerner said that responsibility for avoiding inflation and depressions lay with the state because of its ability to create or tax away money. Hyman Minsky seemed to favor a chartalist approach to understanding money creation in his Stabilizing an Unstable Economy, while Basil Moore, in his book Horizontalists and Verticalists, lists the differences between bank money and state money. In 1996, Wynne Godley wrote an article on his sectoral balances approach, which MMT draws from. Economists Warren Mosler, L. Randall Wray, Stephanie Kelton, Bill Mitchell and Pavlina R. Tcherneva are largely responsible for reviving the idea of chartalism as an explanation of money creation; Wray refers to this revived formulation as neo-chartalism. Rodger Malcolm Mitchell's book Free Money (1996) describes in layman's terms the essence of chartalism. Pavlina R. Tcherneva has developed the first mathematical framework for MMT and has largely focused on developing the idea of the job guarantee. Bill Mitchell, professor of economics and Director of the Centre of Full Employment and Equity (CoFEE) at the University of Newcastle in Australia, coined the term 'modern monetary theory'. 
In their 2008 book Full Employment Abandoned, Mitchell and Joan Muysken use the term to explain monetary systems in which national governments have a monopoly on issuing fiat currency and where a floating exchange rate frees monetary policy from the need to protect foreign exchange reserves. Some contemporary proponents, such as Wray, place MMT within post-Keynesian economics, while MMT has been proposed as an alternative or complementary theory to monetary circuit theory, both being forms of endogenous money, i.e., money created within the economy, as by government deficit spending or bank lending, rather than from outside, perhaps with gold. In the complementary view, MMT explains the "vertical" (government-to-private and vice versa) interactions, while circuit theory is a model of the "horizontal" (private-to-private) interactions. By 2013, MMT had attracted a popular following through academic blogs and other websites. In 2019, MMT became a major topic of debate after U.S. Representative Alexandria Ocasio-Cortez said in January that the theory should be a larger part of the conversation. In February 2019, Macroeconomics became the first academic textbook based on the theory, published by Bill Mitchell, Randall Wray, and Martin Watts. MMT became increasingly used by chief economists and Wall Street executives for economic forecasts and investment strategies. The theory was also intensely debated by lawmakers in Japan, which was planning to raise taxes after years of deficit spending. In June 2020, Stephanie Kelton's MMT book The Deficit Myth became a New York Times bestseller. In 2020 the Sri Lankan Central Bank, under the governor W. D. Lakshman, cited MMT as a justification for adopting unconventional monetary policy, which was continued by Ajith Nivard Cabraal. This has been heavily criticized and widely cited as causing accelerating inflation and exacerbating the Sri Lankan economic crisis. 
MMT scholars Stephanie Kelton and Fadhel Kaboub maintain that the Sri Lankan government's fiscal and monetary policy bore little resemblance to the recommendations of MMT economists. == Theoretical approach == In sovereign financial systems, banks can create money, but these "horizontal" transactions do not increase net financial assets because assets are offset by liabilities. According to MMT advocates, "The balance sheet of the government does not include any domestic monetary instrument on its asset side; it owns no money. All monetary instruments issued by the government are on its liability side and are created and destroyed with spending and taxing or bond offerings." In MMT, "vertical money" enters circulation through government spending. Taxation, together with its legal-tender power to discharge debt, establishes fiat money as currency, giving it value by creating demand for it in the form of a private tax obligation. In addition, fines, fees, and licenses create demand for the currency. This currency can be issued by the domestic government or by using a foreign, accepted currency. An ongoing tax obligation, in concert with private confidence and acceptance of the currency, underpins the value of the currency. Because the government can issue its own currency at will, MMT maintains that the level of taxation relative to government spending (the government's deficit spending or budget surplus) is in reality a policy tool that regulates inflation and unemployment, and not a means of funding the government's activities by itself. The approach of MMT typically reverses theories of governmental austerity. The policy implications of the two are likewise typically opposed. === Vertical transactions === MMT labels a transaction between a government entity (public sector) and a non-government entity (private sector) as a "vertical transaction". The government sector includes the treasury and central bank.
The non-government sector includes domestic and foreign private individuals and firms (including the private banking system) and foreign buyers and sellers of the currency. == Interaction between government and the banking sector == MMT is based on an account of the "operational realities" of interactions between the government and its central bank, and the commercial banking sector, with proponents like Scott Fullwiler arguing that understanding reserve accounting is critical to understanding monetary policy options. A sovereign government typically has an operating account with the country's central bank. From this account, the government can spend and also receive taxes and other inflows. Each commercial bank also has an account with the central bank, by means of which it manages its reserves (that is, money for clearing and settling interbank transactions). When a government spends money, its central bank debits its Treasury's operating account and credits the reserve accounts of the commercial banks. The commercial bank of the final recipient will then credit up this recipient's deposit account by issuing bank money. This spending increases the total reserve deposits in the commercial bank sector. Taxation works in reverse: taxpayers have their bank deposit accounts debited, along with their bank's reserve account being debited to pay the government; thus, deposits in the commercial banking sector fall. === Government bonds and interest rate maintenance === Virtually all central banks set an interest rate target, and most now establish administered rates to anchor the short-term overnight interest rate at their target. These administered rates include interest paid directly on reserve balances held by commercial banks, a discount rate charged to banks for borrowing reserves directly from the central bank, and an Overnight Reverse Repurchase (ON RRP) facility rate paid to banks for temporarily forgoing reserves in exchange for Treasury securities. 
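The debit-and-credit sequence described above can be sketched as a toy ledger. The account names and amounts are invented for illustration; only the direction of each entry follows the text:

```python
# Toy balances: the Treasury's operating account at the central bank, a
# commercial bank's reserve account, and a household's bank deposit.
balances = {"treasury": 1_000, "bank_reserves": 500, "household_deposit": 200}

def government_spends(amount):
    # The central bank debits the Treasury's operating account and credits
    # the commercial bank's reserves; the bank then credits the recipient's
    # deposit account with bank money.
    balances["treasury"] -= amount
    balances["bank_reserves"] += amount
    balances["household_deposit"] += amount

def government_taxes(amount):
    # Taxation works in reverse: the taxpayer's deposit and the bank's
    # reserves are debited, and the Treasury account is credited.
    balances["household_deposit"] -= amount
    balances["bank_reserves"] -= amount
    balances["treasury"] += amount

government_spends(100)  # reserves and deposits in the banking sector rise
government_taxes(40)    # reserves and deposits fall
```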
The ON RRP facility is a type of open market operation to help ensure interest rates remain at a target level. According to MMT, the issuing of government bonds is best understood as an operation to offset government spending rather than a requirement to finance it. In most countries, commercial banks' reserve accounts with the central bank must have a positive balance at the end of every day; in some countries, the amount is specifically set as a proportion of the liabilities a bank has, i.e., its customer deposits. This is known as a reserve requirement. At the end of every day, a commercial bank will have to examine the status of its reserve account. Those that are in deficit have the option of borrowing the required funds from the Central Bank, where they may be charged a lending rate (sometimes known as a discount window or discount rate) on the amount they borrow. On the other hand, the banks that have excess reserves can simply leave them with the central bank and earn a support rate from the central bank. Some countries, such as Japan, have a support rate of zero. Banks with more reserves than they need will be willing to lend to banks with a reserve shortage on the interbank lending market. The surplus banks will want to earn a higher rate than the support rate that the central bank pays on reserves; whereas the deficit banks will want to pay a lower interest rate than the discount rate the central bank charges for borrowing. Thus, they will lend to each other until each bank has reached its reserve requirement. In a balanced system, where there are just enough total reserves for all the banks to meet requirements, the short-term interbank lending rate will be in between the support rate and the discount rate. Under an MMT framework where government spending injects new reserves into the commercial banking system, and taxes withdraw them from the banking system, government activity would have an instant effect on interbank lending.
If, on a particular day, the government spends more than it taxes, reserves have been added to the banking system (see vertical transactions). This action typically leads to a system-wide surplus of reserves, with competition between banks seeking to lend their excess reserves forcing the short-term interest rate down to the support rate (or to zero if a support rate is not in place). At this point, banks will simply keep their reserve surplus with their central bank and earn the support rate. The opposite case is when the government receives more taxes on a particular day than it spends; then there may be a system-wide deficit of reserves. Consequently, surplus funds will be in demand on the interbank market, and thus the short-term interest rate will rise towards the discount rate. Thus, if the central bank wants to maintain a target interest rate somewhere between the support rate and the discount rate, it must manage the liquidity in the system to ensure that the correct amount of reserves is on hand in the banking system. Central banks manage liquidity by buying and selling government bonds on the open market. When excess reserves are in the banking system, the central bank sells bonds, removing reserves from the banking system, because private individuals pay for the bonds. When insufficient reserves are in the system, the central bank buys government bonds from the private sector, adding reserves to the banking system. The central bank buys bonds by simply creating money; the purchase is not financed in any way and is a net injection of reserves into the banking system. If a central bank is to maintain a target interest rate, then it must buy and sell government bonds on the open market in order to maintain the correct amount of reserves in the system.
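The reserve mechanics described in this section can be put together in a toy simulation: net government spending injects reserves, taxation drains them, the overnight rate moves inside the support-rate/discount-rate corridor, and an open market operation restores the required reserve level. This is an illustrative sketch only; the class, rates, and reserve figures are assumptions made for this example, not part of the MMT literature:

```python
# Toy model of vertical transactions, the interest-rate corridor, and open
# market operations. All names and numbers are illustrative assumptions.

SUPPORT_RATE = 0.02    # paid on reserves left at the central bank
DISCOUNT_RATE = 0.04   # charged for borrowing reserves from the central bank

class BankingSystem:
    def __init__(self, reserves, required_reserves):
        self.reserves = reserves
        self.required = required_reserves

    def government_spends(self, amount):
        self.reserves += amount     # vertical transaction: reserves injected

    def government_taxes(self, amount):
        self.reserves -= amount     # vertical transaction: reserves withdrawn

    def interbank_rate(self, target=0.03):
        # A system-wide surplus pushes the rate to the floor, a shortage
        # toward the ceiling; a balanced system trades at the target.
        if self.reserves > self.required:
            return SUPPORT_RATE
        if self.reserves < self.required:
            return DISCOUNT_RATE
        return target

    def open_market_operation(self):
        # Bond sales drain a surplus (negative result); bond purchases
        # add reserves (positive result).
        adjustment = self.required - self.reserves
        self.reserves += adjustment
        return adjustment

system = BankingSystem(reserves=100, required_reserves=100)
system.government_spends(30)
system.government_taxes(10)            # net spending of 20 leaves a surplus
print(system.interbank_rate())         # 0.02: rate falls to the support rate
print(system.open_market_operation())  # -20: central bank sells bonds
print(system.interbank_rate())         # 0.03: back at the target rate
```

The sketch only captures the sign of the reserve position; a fuller model would track individual banks trading bilaterally on the interbank market.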
== Horizontal transactions == MMT economists describe any transactions within the private sector as "horizontal" transactions, including the expansion of the broad money supply through the extension of credit by banks. MMT economists regard the concept of the money multiplier, in which a bank is completely constrained in its lending by the deposits it holds and its capital requirement, as misleading. Rather than being a practical limitation on lending, the cost of borrowing funds from the interbank market (or the central bank) represents a profitability consideration when the private bank lends in excess of its reserve or capital requirements (see interaction between government and the banking sector). Effects on employment are used as evidence that a currency monopolist is overly restricting the supply of the financial assets needed to pay taxes and satisfy savings desires. According to MMT, bank credit should be regarded as a "leverage" of the monetary base and should not be regarded as increasing the net financial assets held by an economy: only the government or central bank is able to issue high-powered money with no corresponding liability. Stephanie Kelton said that bank money is generally accepted in settlement of debt and taxes because of state guarantees, but that state-issued high-powered money sits atop a "hierarchy of money". == Foreign sector == === Imports and exports === MMT proponents such as Warren Mosler say that trade deficits are sustainable and beneficial to the standard of living in the short term. Imports are an economic benefit to the importing nation because they provide the nation with real goods. Exports, however, are an economic cost to the exporting nation because it is losing real goods that it could have consumed. Currency transferred to foreign ownership, however, represents a future claim over goods of that nation.
Cheap imports may also cause the failure of local firms providing similar goods at higher prices, and hence unemployment, but MMT proponents label that consideration as a subjective value-based one, rather than an economic-based one: It is up to a nation to decide whether it values the benefit of cheaper imports more than it values employment in a particular industry. Similarly a nation overly dependent on imports may face a supply shock if the exchange rate drops significantly, though central banks can and do trade on foreign exchange markets to avoid shocks to the exchange rate. === Foreign sector and government === MMT says that as long as demand exists for the issuer's currency, whether the bond holder is foreign or not, governments can never be insolvent when the debt obligations are in their own currency; this is because the government is not constrained in creating its own fiat currency (although the bond holder may affect the exchange rate by converting to local currency). MMT does agree with mainstream economics that debt in a foreign currency is a fiscal risk to governments, because the indebted government cannot create foreign currency. In this case, the only way the government can repay its foreign debt is to ensure that its currency is continually in high demand by foreigners over the period that it wishes to repay its debt; an exchange rate collapse would potentially multiply the debt many times over asymptotically, making it impossible to repay. In that case, the government can default, or attempt to shift to an export-led strategy or raise interest rates to attract foreign investment in the currency. Either one negatively affects the economy. == Policy implications == Economist Stephanie Kelton explained several points made by MMT in March 2019: Under MMT, fiscal policy (i.e., government taxing and spending decisions) is the primary means of achieving full employment, establishing the budget deficit at the level necessary to reach that goal. 
In mainstream economics, monetary policy (i.e., Central Bank adjustment of interest rates and its balance sheet) is the primary mechanism, assuming there is some interest rate low enough to achieve full employment. Kelton said that "cutting interest rates is ineffective in a slump" because businesses, expecting weak profits and few customers, will not invest even at very low interest rates. Government interest expenses are proportional to interest rates, so raising rates is a form of stimulus (it increases the budget deficit and injects money into the private sector, other things being equal); cutting rates is a form of austerity. Achieving full employment can be administered via a centrally-funded job guarantee, which acts as an automatic stabilizer. When private sector jobs are plentiful, the government spending on guaranteed jobs is lower, and vice versa. Under MMT, expansionary fiscal policy, i.e., money creation to fund purchases, can increase bank reserves, which can lower interest rates. In mainstream economics, expansionary fiscal policy, i.e., debt issuance and spending, can result in higher interest rates, crowding out economic activity. Economist John T. Harvey explained several of the premises of MMT and their policy implications in March 2019: The private sector treats labor as a cost to be minimized, so it cannot be expected to achieve full employment without government creating jobs, too, such as through a job guarantee. The public sector's deficit is the private sector's surplus and vice versa, by accounting identity, which increased private sector debt during the Clinton-era budget surpluses. Creating money activates idle resources, mainly labor. Not doing so is immoral. Demand can be insensitive to interest rate changes, so a key mainstream assumption, that lower interest rates lead to higher demand, is questionable. There is a "free lunch" in creating money to fund government expenditure to achieve full employment. 
Unemployment is a burden; full employment is not. Creating money alone does not cause inflation; spending it when the economy is at full employment can. MMT says that "borrowing" is a misnomer when applied to a sovereign government's fiscal operations, because the government is merely accepting its own IOUs, and nobody can borrow back their own debt instruments. Sovereign government goes into debt by issuing its own liabilities that are financial wealth to the private sector. "Private debt is debt, but government debt is financial wealth to the private sector." In this theory, sovereign government is not financially constrained in its ability to spend; the government can afford to buy anything that is for sale in currency that it issues; there may, however, be political constraints, like a debt ceiling law. The only constraint is that excessive spending by any sector of the economy, whether households, firms, or public, could cause inflationary pressures. MMT economists advocate a government-funded job guarantee scheme to eliminate involuntary unemployment. Proponents say that this activity can be consistent with price stability because it targets unemployment directly rather than attempting to increase private sector job creation indirectly through a much larger economic stimulus, and maintains a "buffer stock" of labor that can readily switch to the private sector when jobs become available. A job guarantee program could also be considered an automatic stabilizer to the economy, expanding when private sector activity cools down and shrinking in size when private sector activity heats up. MMT economists also say quantitative easing (QE) is unlikely to have the effects that its advocates hope for. Under MMT, QE – the purchasing of government debt by central banks – is simply an asset swap, exchanging interest-bearing dollars for non-interest-bearing dollars. 
The net result of this procedure is not to inject new investment into the real economy, but instead to drive up asset prices, shifting money from government bonds into other assets such as equities, which enhances economic inequality. The Bank of England's analysis of QE confirms that it has disproportionately benefited the wealthiest. MMT economists say that inflation can be better controlled (than by setting interest rates) with new or increased taxes to remove extra money from the economy. These tax increases would fall on everyone, not just billionaires, since the majority of spending is by average Americans. == Comparison of MMT with mainstream Keynesian economics == MMT can be compared and contrasted with mainstream Keynesian economics in a variety of ways. == Criticism == A 2019 survey of leading economists by the University of Chicago Booth's Initiative on Global Markets showed a unanimous rejection of assertions attributed by the survey to MMT: "Countries that borrow in their own currency should not worry about government deficits because they can always create money to finance their debt" and "Countries that borrow in their own currency can finance as much real government spending as they want by creating money". Directly responding to the survey, MMT economist William K. Black said "MMT scholars do not make or support either claim." Multiple MMT academics regard the attribution of these claims as a smear. Freiwirtschaft economist Felix Fuders argues that the growth imperative created by modern monetary theory has harmful environmental, mental, and social consequences. Fuders concluded that it is impossible to meaningfully address the problem of unsustainable growth or fulfill the sustainable development goals proposed by the United Nations without completely overhauling the monetary system in favor of demurrage currency.
The post-Keynesian economist Thomas Palley has stated that MMT is largely a restatement of elementary Keynesian economics, but prone to "over-simplistic analysis" and understating the risks of its policy implications. Palley has disagreed with proponents of MMT who have asserted that standard Keynesian analysis does not fully capture the accounting identities and financial restraints on a government that can issue its own money. He said that these insights are well captured by standard Keynesian stock-flow consistent IS-LM models, and have been well understood by Keynesian economists for decades. He claimed MMT "assumes away the problem of fiscal–monetary conflict" – that is, that the governmental body that creates the spending budget (e.g. the legislature) may refuse to cooperate with the governmental body that controls the money supply (e.g., the central bank). He stated the policies proposed by MMT proponents would cause serious financial instability in an open economy with flexible exchange rates, while using fixed exchange rates would restore hard financial constraints on the government and "undermines MMT's main claim about sovereign money freeing governments from standard market disciplines and financial constraints". Furthermore, Palley has asserted that MMT lacks a plausible theory of inflation, particularly in the context of full employment in the employer of last resort policy first proposed by Hyman Minsky and advocated by Bill Mitchell and other MMT theorists; of a lack of appreciation of the financial instability that could be caused by permanently zero interest rates; and of overstating the importance of government-created money. Palley concludes that MMT provides no new insights about monetary theory, while making unsubstantiated claims about macroeconomic policy, and that MMT has only received attention recently due to it being a "policy polemic for depressed times". 
Marc Lavoie has said that whilst the neochartalist argument is "essentially correct", many of its counter-intuitive claims depend on a "confusing" and "fictitious" consolidation of government and central banking operations, which is what Palley calls "the problem of fiscal–monetary conflict". New Keynesian economist and recipient of the Nobel Prize in Economics, Paul Krugman, asserted that MMT goes too far in its support for government budget deficits and ignores the inflationary implications of maintaining budget deficits when the economy is growing. Krugman accused MMT devotees of engaging in "calvinball" – a game from the comic strip Calvin and Hobbes in which the players change the rules at whim. Austrian School economist Robert P. Murphy stated that MMT is "dead wrong" and that "the MMT worldview doesn't live up to its promises". He said that MMT's claim that cutting government deficits erodes private saving is true "only for the portion of private saving that is not invested", and that the national accounting identities used to explain this aspect of MMT could equally be used to support arguments that government deficits "crowd out" private sector investment. The chartalist view of money itself, and the MMT emphasis on the importance of taxes in driving money, is also a source of criticism. In 2015, three MMT economists, Scott Fullwiler, Stephanie Kelton, and L. Randall Wray, addressed what they saw as the main criticisms being made. == See also == Everything bubble Friedman's k-percent rule - money supply should be increased at a fixed percentage Debt-based monetary system - monetary system in which commercial banks create new money as debt == References == This article incorporates text by Yasuhito Tanaka available under the CC BY 4.0 license. == Further reading == == External links == January 2012: Modern Monetary Theory: A Debate (Brett Fiebiger critiques and Scott Fullwiler, Stephanie Kelton, L.
Randall Wray respond; Political Economy Research Institute, Amherst, MA) June 2012: Knut Wicksell and origins of modern monetary theory (Lars Pålsson Syll) September 2020: Degrowth and MMT: A thought Experiment (Jason Hickel) The Modern Money Network is currently headquartered at Columbia University in the city of New York. October 2023: Finding The Money at IMDb is a documentary film about an underdog group of MMT economists on a mission to instigate a paradigm shift by flipping our understanding of the national debt, and the nature of money, upside down.
Wikipedia/Modern_monetary_theory
Rational choice modeling refers to the use of decision theory (the theory of rational choice) as a set of guidelines to help understand economic and social behavior. The theory tries to approximate, predict, or mathematically model human behavior by analyzing the behavior of a rational actor facing the same costs and benefits. Rational choice models are most closely associated with economics, where mathematical analysis of behavior is standard. However, they are widely used throughout the social sciences, and are commonly applied to cognitive science, criminology, political science, and sociology. == Overview == The basic premise of rational choice theory is that the decisions made by individual actors will collectively produce aggregate social behaviour. The theory also assumes that individuals have preferences over the available choice alternatives. These preferences are assumed to be complete and transitive. Completeness refers to the individual being able to say which of the options they prefer (i.e. the individual prefers A over B, prefers B over A, or is indifferent between the two). Transitivity means that if the individual weakly prefers option A over B and weakly prefers option B over C, then the individual weakly prefers A over C. The rational agent will then perform their own cost–benefit analysis using a variety of criteria to determine their self-determined best choice of action. One version of rationality is instrumental rationality, which involves achieving a goal using the most cost-effective method without reflecting on the worthiness of that goal. Duncan Snidal emphasises that the goals are not restricted to self-regarding, selfish, or material interests. They also include other-regarding, altruistic, as well as normative or ideational goals. Rational choice theory does not claim to describe the choice process, but rather it helps predict the outcome and pattern of choice.
It is consequently assumed that the individual is self-interested, a "homo economicus". Here, the individual comes to a decision that optimizes their preferences by balancing costs and benefits. Rational choice theory proposes that choice involves two steps. First, the feasible region (the set of available and relevant actions) is determined by the financial, legal, social, physical, or emotional restrictions the agent faces. Second, a choice is made from within the feasible region on the basis of the agent's preference ordering. The concept of rationality used in rational choice theory is different from the colloquial and most philosophical use of the word. In this sense, "rational" behaviour can refer to "sensible", "predictable", or "in a thoughtful, clear-headed manner." Rational choice theory uses a much more narrow definition of rationality. At its most basic level, behavior is rational if it is reflective and consistent (across time and different choice situations). More specifically, behavior is only considered irrational if it is logically incoherent, i.e. self-contradictory. Early neoclassical economists writing about rational choice, including William Stanley Jevons, assumed that agents make consumption choices so as to maximize their happiness, or utility. Contemporary theory bases rational choice on a set of choice axioms that need to be satisfied, and typically does not specify where the goal (preferences, desires) comes from. It mandates just a consistent ranking of the alternatives.: 501  Individuals choose the best action according to their personal preferences and the constraints facing them. == Actions, assumptions, and individual preferences == Rational choice theory can be viewed in different contexts.
At an individual level, the theory suggests that the agent will decide on the action (or outcome) they most prefer. If the actions (or outcomes) are evaluated in terms of costs and benefits, the choice with the maximum net benefit will be chosen by the rational individual. Rational behaviour is not solely driven by monetary gain, but can also be driven by emotional motives. The theory can be applied to general settings outside of those identified by costs and benefits. In general, rational decision making entails choosing among all available alternatives the alternative that the individual most prefers. The "alternatives" can be a set of actions ("what to do?") or a set of objects ("what to choose/buy"). In the case of actions, what the individual really cares about are the outcomes that result from each possible action. Actions, in this case, are only an instrument for obtaining a particular outcome. === Formal statement === The available alternatives are often expressed as a set of objects, for example a set of j exhaustive and exclusive actions: A = { a 1 , … , a i , … , a j } {\displaystyle A=\{a_{1},\ldots ,a_{i},\ldots ,a_{j}\}} For example, if a person can choose to vote for either Roger or Sara or to abstain, their set of possible alternatives is: A = { Vote for Roger, Vote for Sara, Abstain } {\displaystyle A=\{{\text{Vote for Roger, Vote for Sara, Abstain}}\}} The theory makes two technical assumptions about individuals' preferences over alternatives: Completeness – for any two alternatives ai and aj in the set, either ai is preferred to aj, or aj is preferred to ai, or the individual is indifferent between ai and aj. In other words, all pairs of alternatives can be compared with each other. Transitivity – if alternative a1 is preferred to a2, and alternative a2 is preferred to a3, then a1 is preferred to a3.
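The two technical assumptions can be checked mechanically for a finite set of alternatives, and a relation satisfying them yields a most-preferred element. A sketch, in which the encoding of weak preference as a set of ordered pairs is an assumption of the example:

```python
from itertools import product

# Encode "x is weakly preferred to y" as the ordered pair (x, y). This
# relation ranks Sara above Roger above abstaining, as in the running example.
alternatives = ["Vote for Roger", "Vote for Sara", "Abstain"]
weakly_prefers = {(x, x) for x in alternatives} | {
    ("Vote for Sara", "Vote for Roger"),
    ("Vote for Roger", "Abstain"),
    ("Vote for Sara", "Abstain"),
}

def is_complete(prefers, alts):
    # Every pair of alternatives must be comparable in at least one direction.
    return all((x, y) in prefers or (y, x) in prefers
               for x, y in product(alts, repeat=2))

def is_transitive(prefers, alts):
    # Whenever x is weakly preferred to y and y to z, x must be preferred to z.
    return all((x, z) in prefers
               for x, y, z in product(alts, repeat=3)
               if (x, y) in prefers and (y, z) in prefers)

def best(prefers, alts):
    # A maximal element: an alternative weakly preferred to every other.
    return next(x for x in alts if all((x, y) in prefers for y in alts))

assert is_complete(weakly_prefers, alternatives)
assert is_transitive(weakly_prefers, alternatives)
print(best(weakly_prefers, alternatives))  # Vote for Sara
```

Dropping any pair from the relation (say, leaving Roger and Abstain uncompared) makes `is_complete` fail, which is exactly the failure mode completeness rules out.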
Together, completeness and transitivity imply that, given a set of exhaustive and exclusive actions to choose from, an individual can rank the elements of this set in terms of their preferences in an internally consistent way (the ranking constitutes a total ordering, minus some assumptions), and the set has at least one maximal element. The preference between two alternatives can be: Strict preference occurs when an individual prefers a1 to a2 and does not view them as equally preferred. Weak preference implies that the individual either strictly prefers a1 over a2 or is indifferent between them. Indifference occurs when an individual neither prefers a1 to a2, nor a2 to a1. Since (by completeness) the individual does not refuse a comparison, they must therefore be indifferent in this case. Research since the 1980s sought to develop models that weaken these assumptions and argue that some cases of such behaviour can be considered rational. However, the Dutch book theorems show that this comes at a major cost to internal coherence: an agent whose preferences violate any of the Von Neumann–Morgenstern axioms can be led to accept a series of bets that guarantees a loss. The most severe consequences are associated with violating independence of irrelevant alternatives or transitivity of preferences, or with fully abandoning completeness rather than weakening it to "asymptotic" completeness. == Utility maximization == Often preferences are described by their utility function or payoff function. This is an ordinal number that an individual assigns over the available actions, such as: u ( a i ) > u ( a j ) . {\displaystyle u\left(a_{i}\right)>u\left(a_{j}\right).} The individual's preferences are then expressed as the relation between these ordinal assignments. For example, if an individual prefers the candidate Sara over Roger over abstaining, their preferences would have the relation: u ( Sara ) > u ( Roger ) > u ( abstain ) .
{\displaystyle u\left({\text{Sara}}\right)>u\left({\text{Roger}}\right)>u\left({\text{abstain}}\right).} A preference relation that, as above, satisfies completeness, transitivity, and, in addition, continuity can be equivalently represented by a utility function. == Benefits == The rational choice approach allows preferences to be represented as real-valued utility functions. Economic decision making then becomes a problem of maximizing this utility function, subject to constraints (e.g. a budget). This has many advantages. It provides a compact theory that makes empirical predictions with a relatively sparse model – just a description of the agent's objectives and constraints. Furthermore, optimization theory is a well-developed field of mathematics. These two factors make rational choice models tractable compared to other approaches to choice. Most importantly, this approach is strikingly general. It has been used to analyze not only personal and household choices about traditional economic matters like consumption and savings, but also choices about education, marriage, child-bearing, migration, crime and so on, as well as business decisions about output, investment, hiring, entry, exit, etc. with varying degrees of success. In the field of political science, rational choice theory has been used to help predict human decision making and model future behaviour; it is therefore useful in creating effective public policy, and enables the government to develop solutions quickly and efficiently. Despite the empirical shortcomings of rational choice theory, the flexibility and tractability of rational choice models (and the lack of equally powerful alternatives) lead to them still being widely used. == Applications == Rational choice theory has become increasingly employed in social sciences other than economics, such as sociology, evolutionary theory and political science in recent decades.
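As the Benefits section above notes, economic decision making becomes a problem of maximizing a utility function subject to constraints such as a budget. A brute-force sketch; the goods, prices, budget, and square-root utility are all illustrative assumptions, not drawn from the rational choice literature:

```python
from itertools import product

# Brute-force utility maximization over integer consumption bundles subject
# to a budget constraint. Prices, budget, and the utility function are
# illustrative assumptions.
prices = {"bread": 2, "milk": 3}
budget = 12

def utility(bundle):
    # Square roots give diminishing marginal utility for each good.
    return sum(qty ** 0.5 for qty in bundle.values())

# Enumerate every affordable bundle and keep the one with the highest utility.
affordable = [
    {"bread": b, "milk": m}
    for b, m in product(range(7), range(5))
    if prices["bread"] * b + prices["milk"] * m <= budget
]
best_bundle = max(affordable, key=utility)
print(best_bundle)  # {'bread': 3, 'milk': 2}, which spends the whole budget
```

The same pattern works for any finite choice set; with continuous quantities one would instead hand the constrained problem to a numerical solver.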
Rational choice theory has had far-reaching impacts on the study of political science, especially in fields like the study of interest groups, elections, behaviour in legislatures, coalitions, and bureaucracy. In these fields, the use of rational choice theory to explain broad social phenomena is the subject of controversy. === Rational choice theory in political science === Rational choice theory provides a framework to explain why groups of rational individuals can come to collectively irrational decisions. For example, while at the individual level a group of people may have common interests, applying a rational choice framework to their individually rational preferences can explain group-level outcomes that fail to accomplish any one individual's preferred objectives. Rational choice theory provides a framework to describe outcomes like this as the product of rational agents performing their own cost–benefit analysis to maximize their self-interests, a process that doesn't always align with the group's preferences. ==== Rational choice in voting behavior ==== Rational choice theory can be used to explain significant shifts in voter behaviour, the most pronounced of which occur in times of economic trouble. As an example from economic policy, economist Anthony Downs concluded that a high-income voter "votes for whatever party he believes would provide him with the highest utility income from government action", using rational choice theory to explain people's income as their justification for their preferred tax rate. Downs' work provides a framework for analyzing tax-rate preference in a rational choice framework. He argues that an individual votes if it is in their rational interest to do so. Downs models this utility function as B + D > C, where B is the benefit the voter derives from their preferred candidate winning, D is the satisfaction derived from voting and C is the cost of voting.
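The calculus just described reduces to a one-line decision rule. A minimal sketch with illustrative numbers:

```python
# Downs's voting calculus as given above: vote when B + D > C.
def rational_to_vote(benefit, duty, cost):
    # benefit: B, the benefit derived from the preferred candidate winning
    # duty: D, the satisfaction derived from the act of voting
    # cost: C, the cost of voting (time, effort, information)
    return benefit + duty > cost

print(rational_to_vote(benefit=5.0, duty=1.0, cost=2.0))  # True
print(rational_to_vote(benefit=0.5, duty=0.2, cost=2.0))  # False
```

Richer variants of the model scale B by the probability of casting the decisive vote, which is why the D term is needed to explain turnout in large electorates.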
From this framework we can determine that parties have moved their policy outlook to be more centrist in order to maximise the number of voters they have for support. It is from this very simple framework that more complex adjustments can be made to describe the success of politicians as an outcome of their ability or failure to satisfy the utility function of individual voters. ==== Rational choice theory in international relations ==== Rational choice theory has become one of the major tools used to study international relations. Proponents of its use in this field typically assume that states and the policies crafted at the national level are the outcome of self-interested, politically shrewd actors including, but not limited to, politicians, lobbyists, businesspeople, activists, regular voters and any other individual in the national audience. The use of rational choice theory as a framework to predict political behavior has led to a rich literature that describes the trajectory of policy to varying degrees of success. For example, some scholars have examined how states can make credible threats to deter other states from a (nuclear) attack. Others have explored under what conditions states wage war against each other. Yet others have investigated under what circumstances the threat and imposition of international economic sanctions tend to succeed and when they are likely to fail. === Rational choice theory in social interactions === Rational choice theory and social exchange theory involve looking at all social relations in the form of costs and rewards, both tangible and intangible. According to Abell, rational choice theory is "understanding individual actors... as acting, or more likely interacting, in a manner such that they can be deemed to be doing the best they can for themselves, given their objectives, resources, circumstances, as they see them".
Rational choice theory has been used to understand complex social phenomena, which derive from the actions and motivations of individuals. Individuals are often highly motivated by their wants and needs. Making calculative decisions is considered rational action. Individuals often make calculative decisions in social situations by weighing the pros and cons of an action taken towards another person. The decision to act on a rational decision is also dependent on the unforeseen benefits of the friendship. Homans mentions that the actions of humans are motivated by punishment or rewards. This reinforcement through punishments or rewards determines the course of action taken by a person in a social situation as well. Individuals are motivated by mutual reinforcement and are also fundamentally motivated by the approval of others. Attaining the approval of others has become a generalized means of exchange, along with money, in both social and economic exchanges. Economic exchange involves the exchange of goods or services; social exchange involves the exchange of approval and certain other valued behaviors. Rational choice theory, in this instance, heavily emphasizes the individual's interest as a starting point for making social decisions. Despite differing viewpoints about rational choice theory, it all comes down to the individual as the basic unit of theory. Even though sharing, cooperation and cultural norms emerge, they all stem from an individual's initial concern for the self. G. S. Becker offers an example of how rational choice can be applied to personal decisions, specifically regarding the rationale behind decisions on whether to marry or divorce another individual.
Due to the self-serving drive from which the theory of rational choice is derived, Becker concludes that people marry if the expected utility from the marriage exceeds the utility one would gain from remaining single, and in the same way couples would separate should the utility of being together be less than expected and provide less (economic) benefit than being separated would. Since the theory behind rational choice is that individuals will take the course of action that best serves their personal interests, when considering relationships it is still assumed that they will display such a mentality due to deep-rooted, self-interested aspects of human nature. Social exchange theory and rational choice theory both come down to an individual's efforts to meet their own personal needs and interests through the choices they make. Even though some choices may be made sincerely for the welfare of others at the time, both theories point to the benefits received in return. These returns may be received immediately or in the future, be they tangible or not. Coleman discussed a number of theories to elaborate on the premises and promises of rational choice theory. One of the concepts he introduced was trust, in which "individuals place trust, in both judgement and performance of others, based on rational considerations of what is best, given the alternatives they confront". In a social situation, there has to be a level of trust among the individuals. He noted that this level of trust is a consideration that an individual takes into account before deciding on a rational action towards another individual. It affects the social situation as one navigates the risks and benefits of an action. By assessing the possible outcomes or alternatives to an action towards another individual, the person is making a calculated decision. In another situation, such as making a bet, you are calculating the possible loss and how much can be won.
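One way to make that calculation concrete is expected value; the probabilities and stakes below are illustrative assumptions:

```python
# Weigh the possible loss against what can be won, as described above.
def expected_value(p_win, payout, stake):
    # Gain `payout` with probability p_win; lose `stake` otherwise.
    return p_win * payout - (1 - p_win) * stake

def place_bet(p_win, payout, stake):
    # Rational rule: bet only when the expected value is positive.
    return expected_value(p_win, payout, stake) > 0

print(place_bet(p_win=0.5, payout=30, stake=10))  # True: EV = 15 - 5 = +10
print(place_bet(p_win=0.1, payout=30, stake=10))  # False: EV = 3 - 9 = -6
```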
If the chances of winning exceed the cost of losing, the rational decision is to place the bet. Therefore, the decision to place trust in another individual involves the same rational calculations that are involved in the decision to make a bet. Although rational choice theory is used in both economic and social settings, there are similarities and differences between the two. The concept of reward parallels that of reinforcement, while the concept of cost parallels that of punishment. However, the underlying assumptions differ in the two contexts: in a social setting, the focus is often on current or past reinforcements, with no guarantee of immediate tangible or intangible returns from another individual in the future, whereas in economics, decisions are made with heavier emphasis on future rewards. Though the two perspectives differ in focus, both reflect how individuals make rational decisions when weighing immediate against long-term circumstances. == Criticism == Both the assumptions and the behavioral predictions of rational choice theory have sparked criticism from various camps. === The limits of rationality === As mentioned above, some economists, such as Herbert Simon, have developed models of bounded rationality, which hope to be more psychologically plausible without completely abandoning the idea that reason underlies decision-making processes. Simon argues that factors such as imperfect information, uncertainty and time constraints all affect and limit our rationality, and therefore our decision-making. Furthermore, his concepts of 'satisficing' and 'optimizing' suggest that, because of these factors, we sometimes settle for a decision that is good enough rather than the best one.
Other economists have developed more theories of human decision-making that allow for the roles of uncertainty, institutions, and determination of individual tastes by their socioeconomic environment (cf. Fernandez-Huerga, 2008). === Philosophical critiques === Martin Hollis and Edward J. Nell's 1975 book offers both a philosophical critique of neo-classical economics and an innovation in the field of economic methodology. Further, they outlined an alternative vision to neo-classicism based on a rationalist theory of knowledge. Within neo-classicism, the authors addressed consumer behaviour (in the form of indifference curves and simple versions of revealed preference theory) and marginalist producer behaviour in both product and factor markets. Both are based on rational optimizing behaviour. They consider imperfect as well as perfect markets, since neo-classical thinking embraces many market varieties and disposes of a whole system for their classification. However, the authors believe that the issues arising from basic maximizing models have extensive implications for econometric methodology (Hollis and Nell, 1975, p. 2). In particular it is this class of models – rational behaviour as maximizing behaviour – which provides support for specification and identification. And this, they argue, is where the flaw is to be found. Hollis and Nell (1975) argued that positivism (broadly conceived) has provided neo-classicism with important support, which they then show to be unfounded. They base their critique of neo-classicism not only on their critique of positivism but also on the alternative they propose, rationalism. Indeed, they argue that rationality is central to neo-classical economics – as rational choice – and that this conception of rationality is misused. Demands are made of it that it cannot fulfill. Ultimately, individuals do not always act rationally or conduct themselves in a utility-maximising manner. Duncan K. Foley (2003, p.
1) has also provided an important criticism of the concept of rationality and its role in economics. He argued that: “Rationality” has played a central role in shaping and establishing the hegemony of contemporary mainstream economics. As the specific claims of robust neoclassicism fade into the history of economic thought, an orientation toward situating explanations of economic phenomena in relation to rationality has increasingly become the touchstone by which mainstream economists identify themselves and recognize each other. This is not so much a question of adherence to any particular conception of rationality, but of taking rationality of individual behavior as the unquestioned starting point of economic analysis. Foley (2003, p. 9) went on to argue that: The concept of rationality, to use Hegelian language, represents the relations of modern capitalist society one-sidedly. The burden of rational-actor theory is the assertion that ‘naturally’ constituted individuals facing existential conflicts over scarce resources would rationally impose on themselves the institutional structures of modern capitalist society, or something approximating them. But this way of looking at matters systematically neglects the ways in which modern capitalist society and its social relations in fact constitute the ‘rational’, calculating individual. The well-known limitations of rational-actor theory, its static quality, its logical antinomies, its vulnerability to arguments of infinite regress, its failure to develop a progressive concrete research program, can all be traced to this starting-point. More recently, Edward J. Nell and Karim Errouaki (2011, Ch. 1) argued that: The DNA of neoclassical economics is defective. Neither the induction problem nor the problems of methodological individualism can be solved within the framework of neoclassical assumptions. The neoclassical approach is to call on rational economic man to solve both.
Economic relationships that reflect rational choice should be ‘projectible’. But that attributes a deductive power to ‘rational’ that it cannot have consistently with positivist (or even pragmatist) assumptions (which require deductions to be simply analytic). To make rational calculations projectible, the agents may be assumed to have idealized abilities, especially foresight; but then the induction problem is out of reach because the agents of the world do not resemble those of the model. The agents of the model can be abstract, but they cannot be endowed with powers actual agents could not have. This also undermines methodological individualism; if behaviour cannot be reliably predicted on the basis of the ‘rational choices of agents’, a social order cannot reliably follow from the choices of agents. === Psychological critiques === The validity of rational choice theory has been generally refuted by the results of research in behavioral psychology. The alternative theory that arises from these discrepancies, prospect theory, has sometimes been presented as a revision of, or alternative to, rational choice theory. Daniel Kahneman's work on prospect theory has been notably elaborated by research undertaken and supervised by Jonathan Haidt and other scholars. === Empirical critiques === In their 1994 work, Pathologies of Rational Choice Theory, Donald P. Green and Ian Shapiro argue that the empirical outputs of rational choice theory have been limited. They contend that much of the applicable literature, at least in political science, was done with weak statistical methods and that, when corrected, many of the empirical outcomes no longer hold. Viewed in this light, rational choice theory has contributed very little to the overall understanding of political interaction – an amount certainly disproportionately weak relative to its prominence in the literature.
Yet, they concede that cutting-edge research, by scholars well-versed in the general scholarship of their fields (such as work on the U.S. Congress by Keith Krehbiel, Gary Cox, and Mathew McCubbins), has generated valuable scientific progress. === Methodological critiques === Schram and Caterino (2006) contains a fundamental methodological criticism of rational choice theory for promoting the view that the natural science model is the only appropriate methodology in social science and that political science should follow this model, with its emphasis on quantification and mathematization. Schram and Caterino argue instead for methodological pluralism. The same argument is made by William E. Connolly, who in his work Neuropolitics shows that advances in neuroscience further illuminate some of the problematic practices of rational choice theory. === Sociological critiques === Pierre Bourdieu fiercely opposed rational choice theory as grounded in a misunderstanding of how social agents operate. Bourdieu argued that social agents do not continuously calculate according to explicit rational and economic criteria. According to Bourdieu, social agents operate according to an implicit practical logic – a practical sense – and bodily dispositions. Social agents act according to their "feel for the game" (the "feel" being, roughly, habitus, and the "game" being the field). Other social scientists, inspired in part by Bourdieu's thinking, have expressed concern about the inappropriate use of economic metaphors in other contexts, suggesting that this may have political implications. The argument they make is that by treating everything as a kind of "economy" they make a particular vision of the way an economy works seem more natural. Thus, they suggest, rational choice is as much ideological as it is scientific.
==== Criticism based on motivational assumptions ==== Rational choice theorists discuss individual values and structural elements as equally important determinants of outcomes. However, for methodological reasons, empirical applications usually place more emphasis on social structural determinants. Therefore, in line with structural functionalism and social network analysis perspectives, rational choice explanations are considered mainstream in sociology. ==== Criticism based on the assumption of realism ==== Some of the scepticism among sociologists regarding rational choice stems from a misunderstanding of its lack of realist assumptions. Social research has shown that social agents often act on habit or impulse, through the power of emotion: agents predict the expected consequences of options in stock markets and economic crises and choose the best option through collective "emotional drives," implying social forces rather than "rational" choices. However, sociology commonly misunderstands rational choice in its critique of the theory. Rational choice theory does not explain what rational people would do in a given situation; that falls under decision theory. Rather, it focuses on social outcomes rather than individual outcomes: social outcomes are identified as stable equilibria in which individuals have no incentive to deviate from their course of action. The orientation of individuals' behaviour toward such social outcomes may be unintended or undesirable, and the conclusions generated in such cases are relegated to the "study of irrational behaviour". === Criticism based on the biopolitical paradigm === The basic assumptions of rational choice theory do not take into account external factors (social, cultural, economic) that interfere with autonomous decision-making.
Representatives of the biopolitical paradigm, such as Michel Foucault, drew attention to the micro-power structures that shape the soul, body and mind and thus impose certain decisions on individuals from the top down. Humans – according to the assumptions of the biopolitical paradigm – therefore conform to dominant social and cultural systems rather than to their own subjectively defined goals, which they would otherwise seek to achieve through rational and optimal decisions. === Critiques on the basis of evolutionary psychology === An evolutionary psychology perspective suggests that many of the seeming contradictions and biases regarding rational choice can be explained as being rational in the context of maximizing biological fitness in the ancestral environment, but not necessarily in the current one. Thus, when living at subsistence level, where a reduction of resources may have meant death, it may have been rational to place a greater value on losses than on gains. Proponents argue it may also explain differences between groups. === Critiques on the basis of emotion research === Proponents of emotional choice theory criticize the rational choice paradigm by drawing on new findings from emotion research in psychology and neuroscience. They point out that rational choice theory is generally based on the assumption that decision-making is a conscious and reflective process based on thoughts and beliefs. It presumes that people decide on the basis of calculation and deliberation. However, cumulative research in neuroscience suggests that only a small part of the brain's activities operates at the level of conscious reflection; the vast majority consists of unconscious appraisals and emotions. The significance of emotions in decision-making has generally been ignored by rational choice theory, according to these critics.
Moreover, emotional choice theorists contend that the rational choice paradigm has difficulty incorporating emotions into its models, because it cannot account for the social nature of emotions. Even though emotions are felt by individuals, psychologists and sociologists have shown that emotions cannot be isolated from the social environment in which they arise. Emotions are inextricably intertwined with people's social norms and identities, which are typically outside the scope of standard rational choice models. Emotional choice theory seeks to capture not only the social but also the physiological and dynamic character of emotions. It represents a unitary action model to organize, explain, and predict the ways in which emotions shape decision-making. === The difference between public and private spheres === Herbert Gintis has also provided an important criticism of rational choice theory. He argued that rationality differs between the public and private spheres: the public sphere is what you do in collective action, and the private sphere is what you do in your private life. Gintis argues that this is because "models of rational choice in the private sphere treat agents' choices as instrumental", whereas "behaviour in the public sphere, by contrast, is largely non-instrumental because it is non-consequential". Individuals make no difference to the outcome, "much as single molecules make no difference to the properties of the gas" (Gintis). This is a weakness of rational choice theory, as it shows that in situations such as voting in an election, the rational decision for the individual would be not to vote, since their vote makes no difference to the outcome of the election. However, if everyone were to act in this way, democratic society would collapse, as no one would vote. Therefore, rational choice theory does not describe how everything in the economic and political world works, and other factors of human behaviour are at play.
== See also == == Notes == == References == Abella, Alex (2008). Soldiers of Reason: The RAND Corporation and the Rise of the American Empire. New York: Harcourt. Allingham, Michael (2002). Choice Theory: A Very Short Introduction, Oxford, ISBN 978-0192803030. Anand, P. (1993). Foundations of Rational Choice Under Risk, Oxford: Oxford University Press. Amadae, S. M. (2003). Rationalizing Capitalist Democracy: The Cold War Origins of Rational Choice Liberalism, Chicago: University of Chicago Press. Arrow, Kenneth J. ([1987] 1989). "Economic Theory and the Hypothesis of Rationality," in The New Palgrave: Utility and Probability, pp. 25–39. Bicchieri, Cristina (1993). Rationality and Coordination. Cambridge University Press. Bicchieri, Cristina (2003). "Rationality and Game Theory", in The Handbook of Rationality, The Oxford Reference Library of Philosophy, Oxford University Press. Cristian Maquieira (Jan 2019). Japan's Withdrawal from the International Whaling Commission: A Disaster that Could Have Been Avoided. Available at: [2], November 2019. Downs, Anthony (1957). An Economic Theory of Democracy. Harper. Downs, Anthony (1957). "An Economic Theory of Political Action in a Democracy," Journal of Political Economy, Vol. 65, No. 2, pp. 135–150. Coleman, James S. (1990). Foundations of Social Theory. Dixon, Huw (2001). Surfing Economics, Pearson. Especially chapters 7 and 8. Elster, Jon (1979). Ulysses and the Sirens, Cambridge University Press. Elster, Jon (1989). Nuts and Bolts for the Social Sciences, Cambridge University Press. Elster, Jon (2007). Explaining Social Behavior – more Nuts and Bolts for the Social Sciences, Cambridge University Press. Fernandez-Huerga (2008). "The Economic Behavior of Human Beings: The Institutionalist/Post-Keynesian Model," Journal of Economic Issues, vol. 42, no. 3, September. Schram, Sanford F. and Brian Caterino, eds. (2006). Making Political Science Matter: Debating Knowledge, Research, and Method.
New York and London: New York University Press. Walsh, Vivian (1996). Rationality, Allocation, and Reproduction, Oxford. Martin Hollis and Edward J. Nell (1975). Rational Economic Man. Cambridge: Cambridge University Press. Foley, D. K. (1989). Ideology and Methodology. An unpublished lecture to Berkeley graduate students in 1989 discussing personal and collective survival strategies for non-mainstream economists. Foley, D. K. (1998). Introduction (chapter 1) in Peter S. Albin, Barriers and Bounds to Rationality: Essays on Economic Complexity and Dynamics in Interactive Systems. Princeton: Princeton University Press. Foley, D. K. (2003). Rationality and Ideology in Economics. Lecture in the World Political Economy course at the Graduate Faculty of New School UM, New School. Boland, L. (1982). The Foundations of Economic Method. London: George Allen & Unwin. Edward J. Nell and Errouaki, K. (2011). Rational Econometric Man. Cheltenham: E. Elgar. Pierre Bourdieu (2005). The Social Structures of the Economy, Polity. Calhoun, C. et al. (1992). Pierre Bourdieu: Critical Perspectives. University of Chicago Press. Gary Browning, Abigail Halcli, Frank Webster (2000). Understanding Contemporary Society: Theories of the Present, London: Sage Publications. Grenfell, M. (2011). Bourdieu, Language and Linguistics. London: Continuum. Grenfell, M. (ed.) (2008). Pierre Bourdieu: Key Concepts. London: Acumen Press. Herbert Gintis, "Rational Choice and Political Behaviour", lecture at the Centre for the Study of Governance and Society (CSGS), YouTube video, 23:57, Nov 21, 2018. == Further reading == Gilboa, Itzhak (2010). Rational Choice. Cambridge, MA: MIT Press. Green, Donald P., and Justin Fox (2007). "Rational Choice Theory," in The SAGE Handbook of Social Science Methodology, edited by William Outhwaite and Stephen P. Turner. London: Sage, pp. 269–282. Kydd, Andrew H. (2008).
"Methodological Individualism and Rational Choice," The Oxford Handbook of International Relations, edited by Christian Reus-Smit and Duncan Snidal. Oxford: Oxford University Press, pp. 425–443. Mas-Colell, A., M. D. Whinston, and J. R. Green (1995). Microeconomic Theory. Oxford: Oxford University Press. Nedergaard, Peter (July 2006). "The 2003 reform of the Common Agricultural Policy: against all odds or rational explanations?" (PDF). Journal of European Integration. 28 (3): 203–223. doi:10.1080/07036330600785749. S2CID 154437960. == External links == Rational Choice Theory at the Stanford Encyclopedia of Philosophy Rational Choice Theory – Article by John Scott The New Nostradamus – on the use by Bruce Bueno de Mesquita of rational choice theory in political forecasting To See The Future, Use The Logic Of Self-Interest – NPR audio clip
Wikipedia/Rational_choice_models
The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory (DST), is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories. First introduced by Arthur P. Dempster in the context of statistical inference, the theory was later developed by Glenn Shafer into a general framework for modeling epistemic uncertainty—a mathematical theory of evidence. The theory allows one to combine evidence from different sources and arrive at a degree of belief (represented by a mathematical object called belief function) that takes into account all the available evidence. In a narrow sense, the term Dempster–Shafer theory refers to the original conception of the theory by Dempster and Shafer. However, it is more common to use the term in the wider sense of the same general approach, as adapted to specific kinds of situations. In particular, many authors have proposed different rules for combining evidence, often with a view to handling conflicts in evidence better. The early contributions have also been the starting points of many important developments, including the transferable belief model and the theory of hints. == Overview == Dempster–Shafer theory is a generalization of the Bayesian theory of subjective probability. Belief functions base degrees of belief (or confidence, or trust) for one question on the subjective probabilities for a related question. The degrees of belief themselves may or may not have the mathematical properties of probabilities; how much they differ depends on how closely the two questions are related. Put another way, it is a way of representing epistemic plausibilities, but it can yield answers that contradict those arrived at using probability theory. 
Often used as a method of sensor fusion, Dempster–Shafer theory is based on two ideas: obtaining degrees of belief for one question from subjective probabilities for a related question, and Dempster's rule for combining such degrees of belief when they are based on independent items of evidence. In essence, the degree of belief in a proposition depends primarily upon the number of answers (to the related questions) containing the proposition, and the subjective probability of each answer. Also contributing are the rules of combination that reflect general assumptions about the data. In this formalism a degree of belief (also referred to as a mass) is represented as a belief function rather than a Bayesian probability distribution. Probability values are assigned to sets of possibilities rather than single events: their appeal rests on the fact they naturally encode evidence in favor of propositions. Dempster–Shafer theory assigns its masses to all of the subsets of the set of states of a system—in set-theoretic terms, the power set of the states. For instance, assume a situation where there are two possible states of a system. For this system, any belief function assigns mass to the first state, the second, to both, and to neither. === Belief and plausibility === Shafer's formalism starts from a set of possibilities under consideration, for instance numerical values of a variable, or pairs of linguistic variables like "date and place of origin of a relic" (asking whether it is antique or a recent fake). A hypothesis is represented by a subset of this frame of discernment, like "(Ming dynasty, China)", or "(19th century, Germany)".: p.35f.  Shafer's framework allows for belief about such propositions to be represented as intervals, bounded by two values, belief (or support) and plausibility: belief ≤ plausibility. 
In a first step, subjective probabilities (masses) are assigned to all subsets of the frame; usually, only a restricted number of sets will have non-zero mass (focal elements).: 39f.  Belief in a hypothesis is constituted by the sum of the masses of all subsets of the hypothesis-set. It is the amount of belief that directly supports either the given hypothesis or a more specific one, thus forming a lower bound on its probability. Belief (usually denoted Bel) measures the strength of the evidence in favor of a proposition p. It ranges from 0 (indicating no evidence) to 1 (denoting certainty). Plausibility is 1 minus the sum of the masses of all sets whose intersection with the hypothesis is empty. Or, it can be obtained as the sum of the masses of all sets whose intersection with the hypothesis is not empty. It is an upper bound on the possibility that the hypothesis could be true, because there is only so much evidence that contradicts that hypothesis. Plausibility (denoted by Pl) is thus related to Bel by Pl(p) = 1 − Bel(~p). It also ranges from 0 to 1 and measures the extent to which evidence in favor of ~p leaves room for belief in p. For example, suppose we have a belief of 0.5 for a proposition, say "the cat in the box is dead." This means that we have evidence that allows us to state strongly that the proposition is true with a confidence of 0.5. However, the evidence contrary to that hypothesis (i.e. "the cat is alive") only has a confidence of 0.2. The remaining mass of 0.3 (the gap between the 0.5 supporting evidence on the one hand, and the 0.2 contrary evidence on the other) is "indeterminate," meaning that the cat could either be dead or alive. This interval represents the level of uncertainty based on the evidence in the system. The "neither" hypothesis is set to zero by definition (it corresponds to "no solution"). The orthogonal hypotheses "Alive" and "Dead" have probabilities of 0.2 and 0.5, respectively. 
This could correspond to "Live/Dead Cat Detector" signals, which have respective reliabilities of 0.2 and 0.5. Finally, the all-encompassing "Either" hypothesis (which simply acknowledges there is a cat in the box) picks up the slack so that the sum of the masses is 1. The belief for the "Alive" and "Dead" hypotheses matches their corresponding masses because they have no subsets; belief for "Either" consists of the sum of all three masses (Either, Alive, and Dead) because "Alive" and "Dead" are each subsets of "Either". The "Alive" plausibility is 1 − m(Dead): 0.5 and the "Dead" plausibility is 1 − m(Alive): 0.8. Put another way, the "Alive" plausibility is m(Alive) + m(Either) and the "Dead" plausibility is m(Dead) + m(Either). Finally, the "Either" plausibility sums m(Alive) + m(Dead) + m(Either). The universal hypothesis ("Either") will always have 100% belief and plausibility—it acts as a checksum of sorts. Here is a somewhat more elaborate example where the behavior of belief and plausibility begins to emerge. We're looking through a variety of detector systems at a single faraway signal light, which can only be coloured in one of three colours (red, yellow, or green):
DST lets us extract the value of this sensor's evidence. Also, in DST the empty set is considered to have zero mass, meaning here that the signal light system exists and we are examining its possible states, not speculating as to whether it exists at all. === Combining beliefs === Beliefs from different sources can be combined with various fusion operators to model specific situations of belief fusion, e.g. with Dempster's rule of combination, which combines belief constraints that are dictated by independent belief sources, such as in the case of combining hints or combining preferences. Note that the probability masses from propositions that contradict each other can be used to obtain a measure of conflict between the independent belief sources. Other situations can be modeled with different fusion operators, such as cumulative fusion of beliefs from independent sources, which can be modeled with the cumulative fusion operator. Dempster's rule of combination is sometimes interpreted as an approximate generalisation of Bayes' rule. In this interpretation the priors and conditionals need not be specified, unlike traditional Bayesian methods, which often use a symmetry (minimax error) argument to assign prior probabilities to random variables (e.g. assigning 0.5 to binary values for which no information is available about which is more likely). However, any information contained in the missing priors and conditionals is not used in Dempster's rule of combination unless it can be obtained indirectly—and arguably is then available for calculation using Bayes equations. Dempster–Shafer theory allows one to specify a degree of ignorance in this situation instead of being forced to supply prior probabilities that add to unity. This sort of situation, and whether there is a real distinction between risk and ignorance, has been extensively discussed by statisticians and economists. 
See, for example, the contrasting views of Daniel Ellsberg, Howard Raiffa, Kenneth Arrow and Frank Knight. == Formal definition == Let X be the universe: the set representing all possible states of a system under consideration. The power set $2^X$ is the set of all subsets of X, including the empty set $\emptyset$. For example, if $X = \{a, b\}$, then $2^X = \{\emptyset, \{a\}, \{b\}, X\}$. The elements of the power set can be taken to represent propositions concerning the actual state of the system, by containing all and only the states in which the proposition is true. The theory of evidence assigns a belief mass to each element of the power set. Formally, a function $m : 2^X \rightarrow [0,1]$ is called a basic belief assignment (BBA) when it has two properties. First, the mass of the empty set is zero: $m(\emptyset) = 0$. Second, the masses of all the members of the power set add up to a total of 1: $\sum_{A \in 2^X} m(A) = 1$. The mass m(A) of A, a given member of the power set, expresses the proportion of all relevant and available evidence that supports the claim that the actual state belongs to A but to no particular subset of A. The value of m(A) pertains only to the set A and makes no additional claims about any subsets of A, each of which has, by definition, its own mass. From the mass assignments, the upper and lower bounds of a probability interval can be defined. This interval contains the precise probability of a set of interest (in the classical sense), and is bounded by two non-additive continuous measures called belief (or support) and plausibility: $\operatorname{bel}(A) \leq P(A) \leq \operatorname{pl}(A)$.
The belief bel(A) for a set A is defined as the sum of all the masses of subsets of the set of interest: $\operatorname{bel}(A) = \sum_{B \mid B \subseteq A} m(B)$. The plausibility pl(A) is the sum of all the masses of the sets B that intersect the set of interest A: $\operatorname{pl}(A) = \sum_{B \mid B \cap A \neq \emptyset} m(B)$. The two measures are related to each other as follows: $\operatorname{pl}(A) = 1 - \operatorname{bel}({\overline{A}})$. And conversely, for finite A, given the belief measure bel(B) for all subsets B of A, we can find the masses m(A) with the following inverse function: $m(A) = \sum_{B \mid B \subseteq A} (-1)^{|A-B|} \operatorname{bel}(B)$, where |A − B| is the difference of the cardinalities of the two sets. It follows from the last two equations that, for a finite set X, one needs to know only one of the three (mass, belief, or plausibility) to deduce the other two; though one may need to know the values for many sets in order to calculate one of the other values for a particular set. In the case of an infinite X, there can be well-defined belief and plausibility functions but no well-defined mass function. == Dempster's rule of combination == The problem we now face is how to combine two independent sets of probability mass assignments in specific situations. In case different sources express their beliefs over the frame in terms of belief constraints, such as in the case of giving hints or in the case of expressing preferences, then Dempster's rule of combination is the appropriate fusion operator.
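The belief and plausibility definitions above translate directly into code. The following is a minimal Python sketch (the function names are illustrative, not from any standard library), using the masses from the cat example in the previous section: m(Alive) = 0.2, m(Dead) = 0.5, m(Either) = 0.3.

```python
def belief(mass, hypothesis):
    # bel(A): sum of the masses of all non-empty subsets of A
    return sum(m for s, m in mass.items() if s and s <= hypothesis)

def plausibility(mass, hypothesis):
    # pl(A): sum of the masses of all focal sets that intersect A
    return sum(m for s, m in mass.items() if s & hypothesis)

# Cat example: mass assigned to "Alive", "Dead", and "Either"
alive, dead = frozenset({"alive"}), frozenset({"dead"})
either = alive | dead
mass = {alive: 0.2, dead: 0.5, either: 0.3}

print(belief(mass, alive))                  # 0.2
print(plausibility(mass, alive))            # 1 - m(Dead) = 0.5
print(round(plausibility(mass, dead), 10))  # 1 - m(Alive) = 0.8
print(round(belief(mass, either), 10))      # 1.0
```

Here bel and pl bound the unknown true probability: for "Dead" the interval is [0.5, 0.8], and for "Alive" it is [0.2, 0.5], reproducing the 0.3 of indeterminate mass described in the cat example.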
This rule derives common shared belief between multiple sources and ignores all the conflicting (non-shared) belief through a normalization factor. Use of that rule in situations other than the combination of belief constraints has come under serious criticism, such as the case of fusing separate belief estimates from multiple sources that are to be integrated in a cumulative manner, not as constraints. Cumulative fusion means that all probability masses from the different sources are reflected in the derived belief, so no probability mass is ignored. Specifically, the combination (called the joint mass) is calculated from the two sets of masses m1 and m2 in the following manner: m 1 , 2 ( ∅ ) = 0 {\displaystyle m_{1,2}(\emptyset )=0\,} m 1 , 2 ( A ) = ( m 1 ⊕ m 2 ) ( A ) = 1 1 − K ∑ B ∩ C = A ≠ ∅ m 1 ( B ) m 2 ( C ) {\displaystyle m_{1,2}(A)=(m_{1}\oplus m_{2})(A)={\frac {1}{1-K}}\sum _{B\cap C=A\neq \emptyset }m_{1}(B)m_{2}(C)\,\!} where K = ∑ B ∩ C = ∅ m 1 ( B ) m 2 ( C ) . {\displaystyle K=\sum _{B\cap C=\emptyset }m_{1}(B)m_{2}(C).\,} K is a measure of the amount of conflict between the two mass sets. === Effects of conflict === The normalization factor above, 1 − K, has the effect of completely ignoring conflict and attributing any mass associated with conflict to the empty set. This combination rule for evidence can therefore produce counterintuitive results, as we show next. ==== Example producing correct results in case of high conflict ==== The following example shows how Dempster's rule produces intuitive results when applied in a preference fusion situation, even when there is high conflict. Suppose that two friends, Alice and Bob, want to see a film at the cinema one evening, and that there are only three films showing: X, Y and Z. Alice expresses her preference for film X with probability 0.99, and her preference for film Y with a probability of only 0.01.
Bob expresses his preference for film Z with probability 0.99, and his preference for film Y with a probability of only 0.01. When combining the preferences with Dempster's rule of combination it turns out that their combined preference results in probability 1.0 for film Y, because it is the only film that they both agree to see. Interpreted in this way, Dempster's rule of combination produces intuitive results even in the case of totally conflicting beliefs. Assume that Alice prefers film X with probability 1.0, and that Bob prefers film Z with probability 1.0. When trying to combine their preferences with Dempster's rule it turns out that it is undefined in this case, which means that there is no solution. This would mean that they cannot agree on seeing any film together, so they do not go to the cinema together that evening. However, the semantics of interpreting preference as a probability is vague: if it is referring to the probability of seeing film X tonight, then we face the fallacy of the excluded middle: the event that actually occurs, seeing none of the films tonight, has a probability mass of 0. ==== Example producing counter-intuitive results in case of high conflict ==== An example with exactly the same numerical values was introduced by Lotfi Zadeh in 1979, to point out counter-intuitive results generated by Dempster's rule when there is a high degree of conflict. The example goes as follows: Suppose that one has two equi-reliable doctors and one doctor believes a patient has either a brain tumor, with a probability (i.e. a basic belief assignment, or mass of belief) of 0.99; or meningitis, with a probability of only 0.01. A second doctor believes the patient has a concussion, with a probability of 0.99, and believes the patient suffers from meningitis, with a probability of only 0.01.
Applying Dempster's rule to combine these two sets of masses of belief, one finally gets m(meningitis) = 1 (meningitis is diagnosed with 100 percent confidence). Such a result goes against common sense, since both doctors agree that there is little chance that the patient has meningitis. This example has been the starting point of many research works trying to find a solid justification for Dempster's rule and for the foundations of Dempster–Shafer theory, or to show the inconsistencies of this theory. ==== Example producing counter-intuitive results in case of low conflict ==== The following example shows where Dempster's rule produces a counter-intuitive result, even when there is low conflict. Suppose that one doctor believes a patient has either a brain tumor, with a probability of 0.99, or meningitis, with a probability of only 0.01. A second doctor also believes the patient has a brain tumor, with a probability of 0.99, and believes the patient suffers from concussion, with a probability of only 0.01. If we calculate m (brain tumor) with Dempster's rule, we obtain m ( brain tumor ) = Bel ⁡ ( brain tumor ) = 1. {\displaystyle m({\text{brain tumor}})=\operatorname {Bel} ({\text{brain tumor}})=1.\,} This result implies complete support for the diagnosis of a brain tumor, which both doctors believed very likely. The agreement arises from the low degree of conflict between the two sets of evidence represented by the two doctors' opinions. In either case, it would be reasonable to expect that: m ( brain tumor ) < 1 and Bel ⁡ ( brain tumor ) < 1 , {\displaystyle m({\text{brain tumor}})<1{\text{ and }}\operatorname {Bel} ({\text{brain tumor}})<1,\,} since the existence of non-zero belief probabilities for other diagnoses implies less than complete support for the brain tumor diagnosis.
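Dempster's rule, and Zadeh's high-conflict example in particular, can be reproduced in a few lines of Python. This is a minimal sketch (function and variable names are my own, not from the source); focal elements are frozensets and masses are plain floats:

```python
from itertools import product

def dempster_combine(m1, m2):
    # Dempster's rule: multiply the masses of every pair of focal
    # elements, keep non-empty intersections, renormalize by 1 - K
    joint, K = {}, 0.0
    for (B, v1), (C, v2) in product(m1.items(), m2.items()):
        inter = B & C
        if inter:
            joint[inter] = joint.get(inter, 0.0) + v1 * v2
        else:
            K += v1 * v2  # conflicting (empty-intersection) mass
    if K == 1.0:
        raise ValueError("total conflict: combination undefined")
    return {A: v / (1.0 - K) for A, v in joint.items()}

# Zadeh's example: frame {tumor, meningitis, concussion}
doc1 = {frozenset({'tumor'}): 0.99, frozenset({'meningitis'}): 0.01}
doc2 = {frozenset({'concussion'}): 0.99, frozenset({'meningitis'}): 0.01}
combined = dempster_combine(doc1, doc2)
# Only the tiny meningitis masses agree, so after normalization all
# surviving mass lands there: combined[{meningitis}] is approximately 1.0
```

Here K = 0.9999, and the single agreeing product 0.01 × 0.01 = 0.0001 is renormalized to 1, illustrating the counter-intuitive outcome described above.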
== Dempster–Shafer as a generalisation of Bayesian theory == As in Dempster–Shafer theory, a Bayesian belief function bel : 2 X → [ 0 , 1 ] {\displaystyle \operatorname {bel} :2^{X}\rightarrow [0,1]\,\!} has the properties bel ⁡ ( ∅ ) = 0 {\displaystyle \operatorname {bel} (\emptyset )=0} and bel ⁡ ( X ) = 1 {\displaystyle \operatorname {bel} (X)=1} . The third condition, however, is subsumed by, but relaxed in DS theory: p. 19  If A ∩ B = ∅ , then bel ⁡ ( A ∪ B ) = bel ⁡ ( A ) + bel ⁡ ( B ) . {\displaystyle {\text{If }}A\cap B=\emptyset ,{\text{ then}}\operatorname {bel} (A\cup B)=\operatorname {bel} (A)+\operatorname {bel} (B).} Either of the following conditions implies the Bayesian special case of the DS theory: p. 37, 45  bel ⁡ ( A ) + bel ⁡ ( A ¯ ) = 1 for all A ⊆ X . {\displaystyle \operatorname {bel} (A)+\operatorname {bel} ({\bar {A}})=1{\text{ for all }}A\subseteq X.} For finite X, all focal elements of the belief function are singletons. As an example of how the two approaches differ, a Bayesian could model the color of a car as a probability distribution over (red, green, blue), assigning one number to each color. Dempster–Shafer would assign numbers to each of (red, green, blue, (red or green), (red or blue), (green or blue), (red or green or blue)). These numbers do not have to be coherent; for example, Bel(red)+Bel(green) does not have to equal Bel(red or green). Thus, Bayes' conditional probability can be considered as a special case of Dempster's rule of combination: p. 19f.  However, it lacks many (if not most) of the properties that make Bayes' rule intuitively desirable, leading some to argue that it cannot be considered a generalization in any meaningful sense. For example, DS theory violates the requirements for Cox's theorem, which implies that it cannot be considered a coherent (contradiction-free) generalization of classical logic—specifically, DS theory violates the requirement that a statement be either true or false (but not both).
As a result, DS theory is subject to the Dutch Book argument, implying that any agent using DS theory would agree to a series of bets that result in a guaranteed loss. (Note: Some of the criticism was later found to be erroneous and inappropriate.) == Bayesian approximation == The Bayesian approximation reduces a given bpa m {\displaystyle m} to a (discrete) probability distribution, i.e. only singleton subsets of the frame of discernment are allowed to be focal elements of the approximated version m _ {\displaystyle {\underline {m}}} of m {\displaystyle m} : m _ ( A ) = { ∑ B | A ⊆ B m ( B ) ∑ C m ( C ) ⋅ | C | , | A | = 1 0 , otherwise {\displaystyle {\underline {m}}(A)=\left\{{\begin{aligned}&{\frac {\sum \limits _{B|A\subseteq B}m(B)}{\sum \limits _{C}m(C)\cdot |C|}},&|A|=1\\&0,&{\text{otherwise}}\end{aligned}}\right.} It is useful for those who are interested only in single-state hypotheses. == Criticism == Judea Pearl (1988a, chapter 9; 1988b and 1990) has argued that it is misleading to interpret belief functions as representing either "probabilities of an event," or "the confidence one has in the probabilities assigned to various outcomes," or "degrees of belief (or confidence, or trust) in a proposition," or "degree of ignorance in a situation." Instead, belief functions represent the probability that a given proposition is provable from a set of other propositions, to which probabilities are assigned. Confusing probabilities of truth with probabilities of provability may lead to counterintuitive results in reasoning tasks such as (1) representing incomplete knowledge, (2) belief-updating and (3) evidence pooling. He further demonstrated that, if partial knowledge is encoded and updated by belief function methods, the resulting beliefs cannot serve as a basis for rational decisions.
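The Bayesian approximation formula above is straightforward to implement. The following is a minimal sketch under the stated definition (the bpa and function name are hypothetical, not from the source):

```python
def bayesian_approximation(m):
    # Numerator: for each singleton {x}, sum the masses of all focal
    # elements B containing x; denominator: sum_C m(C) * |C|
    denom = sum(v * len(C) for C, v in m.items())
    frame = frozenset().union(*m)
    return {x: sum(v for B, v in m.items() if x in B) / denom
            for x in frame}

# Hypothetical bpa with one non-singleton focal element
m = {frozenset({'a'}): 0.4, frozenset({'a', 'b'}): 0.6}
p = bayesian_approximation(m)
# denom = 0.4*1 + 0.6*2 = 1.6, so p['a'] = 1.0/1.6 = 0.625 and
# p['b'] = 0.6/1.6 = 0.375; the result sums to 1 as a distribution should
```

Note how the mass on the non-singleton set {a, b} is spread over its members, yielding an ordinary discrete probability distribution over the frame.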
Kłopotek and Wierzchoń proposed to interpret the Dempster–Shafer theory in terms of statistics of decision tables (of the rough set theory), whereby the operator of combining evidence should be seen as the relational join of decision tables. In another interpretation M. A. Kłopotek and S. T. Wierzchoń propose to view this theory as describing destructive material processing (under loss of properties), e.g. as in some semiconductor production processes. Under both interpretations reasoning in DST gives correct results, contrary to the earlier probabilistic interpretations, criticized by Pearl in the cited papers and by other researchers. Jøsang proved that Dempster's rule of combination is actually a method for fusing belief constraints. It only represents an approximate fusion operator in other situations, such as cumulative fusion of beliefs, and generally produces incorrect results in such situations. The confusion around the validity of Dempster's rule therefore originates in the failure to interpret correctly the nature of the situations to be modeled. Dempster's rule of combination always produces correct and intuitive results when fusing belief constraints from different sources. Yang et al. prove that Dempster's rule is inherently probabilistic, extends Bayes' rule and reduces to Bayes' rule when precise probabilities are available and fully reliable, regardless of whether the prior is uniform. They further extend Bayes' and Dempster's rules to the Evidential Reasoning (ER) rule, which remains probabilistic when probabilities are not precise or fully reliable. They address some criticisms of the behaviour of Dempster's rule from a probabilistic perspective and explain the rationality of the behaviour. == Relational measures == In considering preferences one might use the partial order of a lattice instead of the total order of the real line as found in Dempster–Shafer theory. Indeed, Gunther Schmidt has proposed this modification and outlined the method.
Given a set of criteria C and a bounded lattice L with ordering ≤, Schmidt defines a relational measure to be a function μ from the power set of C into L that respects the order ⊆ on P {\displaystyle \mathbb {P} } (C): A ⊆ B ⟹ μ ( A ) ≤ μ ( B ) {\displaystyle A\subseteq B\implies \mu (A)\leq \mu (B)} and such that μ takes the empty subset of C to the least element of L, and takes C to the greatest element of L. Schmidt compares μ with the belief function of Shafer, and he also considers a method of combining measures generalizing the approach of Dempster (when new evidence is combined with previously held evidence). He also introduces a relational integral and compares it to the Choquet integral and Sugeno integral. Any relation m between C and L may be introduced as a "direct valuation", then processed with the calculus of relations to obtain a possibility measure μ. == See also == == References == == Further reading == Yang, J. B. and Xu, D. L. (2013) "Evidential Reasoning Rule for Evidence Combination", Artificial Intelligence, 205: 1–29. Yager, R. R., & Liu, L. (2008). Classic works of the Dempster–Shafer theory of belief functions. Studies in fuzziness and soft computing, v. 219. Berlin: Springer. ISBN 978-3-540-25381-5. Joseph C. Giarratano and Gary D. Riley (2005); Expert Systems: principles and programming, ed. Thomson Course Tech., ISBN 0-534-38447-1 Beynon, M., Curry, B. and Morgan, P. (2000) "The Dempster–Shafer theory of evidence: an alternative approach to multicriteria decision modelling", Omega, Vol. 28, pp. 37–50 == External links == BFAS: Belief Functions and Applications Society
Wikipedia/Dempster–Shafer_theory
In mechanics, the net force is the sum of all the forces acting on an object. For example, if two forces are acting upon an object in opposite directions, and one force is greater than the other, the forces can be replaced with a single force that is the difference of the greater and smaller force. That force is the net force. When forces act upon an object, they cause it to accelerate. The net force is the combined effect of all the forces on the object's acceleration, as described by Newton's second law of motion. When the net force is applied at a specific point on an object, the associated torque can be calculated. The combination of the net force and the associated torque is called the resultant force, which causes the object to move in the same way as all the forces acting upon it would if they were applied individually. It is possible for all the forces acting upon an object to produce no torque at all. This happens when the net force is applied along the line of action. In some texts, the terms resultant force and net force are used as if they mean the same thing. This is not always true, especially in complex topics like the motion of spinning objects or situations where everything is perfectly balanced, known as static equilibrium. In these cases, it is important to understand that "net force" and "resultant force" can have distinct meanings. == Concept == In physics, a force is considered a vector quantity. This means that it not only has a size (or magnitude) but also a direction in which it acts. We typically represent force with the symbol F in boldface, or sometimes, we place an arrow over the symbol to indicate its vector nature, like this: F {\displaystyle \mathbf {F} } . When we need to visually represent a force, we draw a line segment. This segment starts at a point A, where the force is applied, and ends at another point B. This line not only gives us the direction of the force (from A to B) but also its magnitude: the longer the line, the stronger the force.
One of the essential concepts in physics is that forces can be added together, which is the basis of vector addition. This concept has been central to physics since the times of Galileo and Newton, forming the cornerstone of vector calculus, which came into its own in the late 1800s and early 1900s. The picture to the right shows how to add two forces using the "tip-to-tail" method. This method involves drawing the first force a {\displaystyle {\mathbf {\mathbf {a} }}} , and then drawing the second force b {\displaystyle {\mathbf {\mathbf {b} }}} starting from the tip of the first. The resulting force, or "total" force, F t = a + b {\displaystyle \mathbf {F} _{t}={\mathbf {\mathbf {a} }}+{\mathbf {\mathbf {b} }}} , is then drawn from the start of the first force (the tail) to the end of the second force (the tip). Grasping this concept is fundamental to understanding how forces interact and combine to influence the motion and equilibrium of objects. When forces are applied to an extended body (a body that's not a single point), they can be applied at different points. Such forces are called 'bound vectors'. It's important to remember that to add these forces together, they need to be considered at the same point. The concept of "net force" comes into play when you look at the total effect of all of these forces on the body. However, the net force alone may not reproduce the effect of the original forces on the motion of the body. This is because, besides the net force, the 'torque' or rotational effect associated with these forces also matters. The net force must be applied at the right point, and with the right associated torque, to replicate the effect of the original forces. When the net force and the appropriate torque are applied at a single point, they together constitute what is known as the resultant force. This resultant force-and-torque combination will have the same effect on the body as all the original forces and their associated torques.
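Tip-to-tail addition of forces reduces to component-wise summation. A minimal Python illustration (the specific force values and function name are hypothetical):

```python
def add_forces(*forces):
    # Tip-to-tail addition is component-wise summation of the vectors
    return tuple(sum(cs) for cs in zip(*forces))

a = (3.0, 0.0)  # 3 N along +x
b = (0.0, 4.0)  # 4 N along +y
net = add_forces(a, b)                          # (3.0, 4.0)
magnitude = (net[0] ** 2 + net[1] ** 2) ** 0.5  # 5.0 N (3-4-5 triangle)
```

The magnitude follows from the Pythagorean theorem, matching the geometric picture of the diagonal drawn from the tail of the first force to the tip of the second.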
== Parallelogram rule for the addition of forces == A force is known as a bound vector, which means it has a direction and magnitude and a point of application. A convenient way to define a force is by a line segment from a point A to a point B. If we denote the coordinates of these points as A = (Ax, Ay, Az) and B = (Bx, By, Bz), then the force vector applied at A is given by F = B − A = ( B x − A x , B y − A y , B z − A z ) . {\displaystyle \mathbf {F} =\mathbf {B} -\mathbf {A} =(B_{x}-A_{x},B_{y}-A_{y},B_{z}-A_{z}).} The length of the vector B − A {\displaystyle \mathbf {\mathbf {B}} -\mathbf {\mathbf {A}} } defines the magnitude of F {\displaystyle \mathbf {\mathbf {F}} } and is given by | F | = ( B x − A x ) 2 + ( B y − A y ) 2 + ( B z − A z ) 2 . {\displaystyle |\mathbf {F} |={\sqrt {(B_{x}-A_{x})^{2}+(B_{y}-A_{y})^{2}+(B_{z}-A_{z})^{2}}}.} The sum of two forces F1 and F2 applied at A can be computed from the sum of the segments that define them. Let F1 = B−A and F2 = D−A, then the sum of these two vectors is F = F 1 + F 2 = B − A + D − A , {\displaystyle \mathbf {F} =\mathbf {F} _{1}+\mathbf {F} _{2}=\mathbf {B} -\mathbf {A} +\mathbf {D} -\mathbf {A} ,} which can be written as F = F 1 + F 2 = 2 ( B + D 2 − A ) = 2 ( E − A ) , {\displaystyle \mathbf {F} =\mathbf {F} _{1}+\mathbf {F} _{2}=2\left({\frac {\mathbf {B} +\mathbf {D} }{2}}-\mathbf {A} \right)=2(\mathbf {E} -\mathbf {A} ),} where E is the midpoint of the segment BD that joins the points B and D. Thus, the sum of the forces F1 and F2 is twice the segment joining A to the midpoint E of the segment joining the endpoints B and D of the two forces. The doubling of this length is easily achieved by defining segments BC and DC parallel to AD and AB, respectively, to complete the parallelogram ABCD. The diagonal AC of this parallelogram is the sum of the two force vectors. This is known as the parallelogram rule for the addition of forces.
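The midpoint identity F1 + F2 = 2(E − A) derived above can be verified numerically. A short sketch with hypothetical coordinates (the points A, B, D are illustrative, not from the source):

```python
# Forces F1 = B - A and F2 = D - A applied at point A
A = (1.0, 1.0, 0.0)
B = (4.0, 2.0, 0.0)
D = (2.0, 5.0, 0.0)

sub = lambda P, Q: tuple(p - q for p, q in zip(P, Q))
F1, F2 = sub(B, A), sub(D, A)
total = tuple(x + y for x, y in zip(F1, F2))   # (4.0, 5.0, 0.0)

# E is the midpoint of segment BD; the sum equals 2(E - A)
E = tuple((b + d) / 2 for b, d in zip(B, D))
assert total == tuple(2 * e for e in sub(E, A))
```

Geometrically, 2(E − A) is the diagonal AC of the parallelogram ABCD, so the assertion is exactly the parallelogram rule in coordinates.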
== Translation and rotation due to a force == === Point forces === When a force acts on a particle, it is applied to a single point (the particle volume is negligible): this is a point force and the particle is its application point. But an external force on an extended body (object) can be applied to a number of its constituent particles, i.e. can be "spread" over some volume or surface of the body. However, determining its rotational effect on the body requires that we specify its point of application (actually, the line of application, as explained below). The problem is usually resolved in the following ways: Often, the volume or surface on which the force acts is relatively small compared to the size of the body, so that it can be approximated by a point. It is usually not difficult to determine whether the error caused by such approximation is acceptable. If it is not acceptable (obviously e.g. in the case of gravitational force), such "volume/surface" force should be described as a system of forces (components), each acting on a single particle, and then the calculation should be done for each of them separately. Such a calculation is typically simplified by the use of differential elements of the body volume/surface, and the integral calculus. In a number of cases, though, it can be shown that such a system of forces may be replaced by a single point force without the actual calculation (as in the case of uniform gravitational force). In any case, the analysis of the rigid body motion begins with the point force model. And when a force acting on a body is shown graphically, the oriented line segment representing the force is usually drawn so as to "begin" (or "end") at the application point. === Rigid bodies === In the example shown in the diagram opposite, a single force F {\displaystyle \mathbf {F} } acts at the application point H on a free rigid body. The body has the mass m {\displaystyle m} and its center of mass is the point C. 
In the constant mass approximation, the force causes changes in the body motion described by the following expressions: a = F m {\displaystyle \mathbf {a} ={\mathbf {F} \over m}} is the center of mass acceleration; and α = τ I {\displaystyle \mathbf {\alpha } ={\mathbf {\tau } \over I}} is the angular acceleration of the body. In the second expression, τ {\displaystyle \mathbf {\tau } } is the torque or moment of force, whereas I {\displaystyle I} is the moment of inertia of the body. A torque caused by a force F {\displaystyle \mathbf {F} } is a vector quantity defined with respect to some reference point: τ = r × F {\displaystyle \mathbf {\tau } =\mathbf {r} \times \mathbf {F} } is the torque vector, and τ = F k {\displaystyle \ \tau =Fk} is the amount of torque. The vector r {\displaystyle \mathbf {r} } is the position vector of the force application point, and in this example it is drawn from the center of mass as the reference point (see diagram). The straight line segment k {\displaystyle k} is the lever arm of the force F {\displaystyle \mathbf {F} } with respect to the center of mass. As the illustration suggests, the torque does not change (the same lever arm) if the application point is moved along the line of the application of the force (dotted black line). More formally, this follows from the properties of the vector product, and shows that the rotational effect of the force depends only on the position of its line of application, and not on the particular choice of the point of application along that line. The torque vector is perpendicular to the plane defined by the force and the vector r {\displaystyle \mathbf {r} } , and in this example, it is directed towards the observer; the angular acceleration vector has the same direction. The right-hand rule relates this direction to the clockwise or counterclockwise rotation in the plane of the drawing.
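The relations a = F/m, τ = r × F and α = τ/I can be checked numerically. The sketch below applies them to the homogeneous-disc numbers worked out in the next paragraph; the planar geometry (force along +y, applied 0.6 m along +x from the center of mass) is a hypothetical arrangement chosen to match the stated lever arm:

```python
def torque_2d(r, F):
    # Out-of-plane component of r x F: the torque about the reference point
    return r[0] * F[1] - r[1] * F[0]

# Disc: mass 0.5 kg, radius 0.8 m; force 2 N with a 0.6 m lever arm
m, radius = 0.5, 0.8
r, F = (0.6, 0.0), (0.0, 2.0)

tau = torque_2d(r, F)                    # 1.2 N*m
I = m * radius ** 2 / 2                  # 0.16 kg*m^2 (homogeneous disc)
alpha = tau / I                          # 7.5 rad/s^2
a = (F[0] ** 2 + F[1] ** 2) ** 0.5 / m   # 4.0 m/s^2
```

Sliding the application point along the force's line (e.g. r = (0.6, 1.0)) leaves `tau` unchanged, which is the invariance of the torque under motion along the line of application noted above.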
The moment of inertia I {\displaystyle I} is calculated with respect to the axis through the center of mass that is parallel with the torque. If the body shown in the illustration is a homogeneous disc, this moment of inertia is I = m r 2 / 2 {\displaystyle I=mr^{2}/2} . If the disc has the mass 0.5 kg and the radius 0.8 m, the moment of inertia is 0.16 kgm2. If the amount of force is 2 N, and the lever arm 0.6 m, the amount of torque is 1.2 Nm. At the instant shown, the force gives to the disc the angular acceleration α = τ/I = 7.5 rad/s2, and to its center of mass it gives the linear acceleration a = F/m = 4 m/s2. == Resultant force == The resultant force and torque replace the effects of a system of forces acting on the movement of a rigid body. An interesting special case is a torque-free resultant, which can be found as follows: Vector addition is used to find the net force; Use the equation to determine the point of application with zero torque: r × F R = ∑ i = 1 N ( r i × F i ) {\displaystyle \mathbf {r} \times \mathbf {F} _{\mathrm {R} }=\sum _{i=1}^{N}(\mathbf {r} _{i}\times \mathbf {F} _{i})} where F R {\displaystyle \mathbf {F} _{\mathrm {R} }} is the net force, r {\displaystyle \mathbf {r} } locates its application point, and individual forces are F i {\displaystyle \mathbf {F} _{i}} with application points r i {\displaystyle \mathbf {r} _{i}} . It may be that there is no point of application that yields a torque-free resultant. The diagram opposite illustrates simple graphical methods for finding the line of application of the resultant force of simple planar systems: Lines of application of the actual forces F 1 {\displaystyle \mathbf {F} _{1}} and F 2 {\displaystyle \mathbf {F} _{2}} on the leftmost illustration intersect. After vector addition is performed "at the location of F 1 {\displaystyle \mathbf {F} _{1}} ", the net force obtained is translated so that its line of application passes through the common intersection point.
With respect to that point all torques are zero, so the torque of the resultant force F R {\displaystyle \mathbf {F} _{\mathrm {R} }} is equal to the sum of the torques of the actual forces. The illustration in the middle of the diagram shows two parallel actual forces. After vector addition "at the location of F 2 {\displaystyle \mathbf {F} _{2}} ", the net force is translated to the appropriate line of application, where it becomes the resultant force F R {\displaystyle \mathbf {F} _{\mathrm {R} }} . The procedure is based on decomposition of all forces into components for which the lines of application (pale dotted lines) intersect at one point (the so-called pole, arbitrarily set at the right side of the illustration). Then the arguments from the previous case are applied to the forces and their components to demonstrate the torque relationships. The rightmost illustration shows a couple, two equal but opposite forces for which the amount of the net force is zero, but which produce the net torque τ = F d {\displaystyle \tau =Fd} where d {\displaystyle \ d} is the distance between their lines of application. Since the net force is zero, this torque is described as a "pure" torque. == Usage == In general, a system of forces acting on a rigid body can always be replaced by one force plus one pure (see previous section) torque. The force is the net force, but to calculate the additional torque, the net force must be assigned a line of action. The line of action can be selected arbitrarily, but the additional pure torque depends on this choice. In a special case, it is possible to find such a line of action that this additional torque is zero. The resultant force and torque can be determined for any configuration of forces. However, an interesting special case is a torque-free resultant. This is useful, both conceptually and practically, because the body moves without rotating as if it were a particle.
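Finding a torque-free resultant by solving r × F_R = Σ r_i × F_i can be sketched for the parallel-forces case in 2D. The force values and geometry below are hypothetical, chosen to make the arithmetic transparent:

```python
# Two parallel 2D forces given as (application point, force) pairs
forces = [((0.0, 0.0), (0.0, 2.0)),   # 2 N upward at the origin
          ((3.0, 0.0), (0.0, 4.0))]   # 4 N upward at x = 3 m

F_net = (sum(F[0] for _, F in forces), sum(F[1] for _, F in forces))
tau_total = sum(r[0] * F[1] - r[1] * F[0] for r, F in forces)

# Torque-free resultant: solve r x F_net = tau_total for a point on
# the x-axis, i.e. x * F_net_y = tau_total
x_app = tau_total / F_net[1]   # 12 / 6 = 2.0 m
```

The resultant (6 N upward) acts at x = 2 m, between the two forces and closer to the larger one, which is the familiar lever-balance result.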
Some authors do not distinguish the resultant force from the net force and use the terms as synonyms. == See also == Screw theory Center of mass Centers of gravity in non-uniform fields == References ==
Wikipedia/Net_force
In physics and engineering, a free body diagram (FBD; also called a force diagram) is a graphical illustration used to visualize the applied forces, moments, and resulting reactions on a free body in a given condition. It depicts a body or connected bodies with all the applied forces and moments, and reactions, which act on the body(ies). The body may consist of multiple internal members (such as a truss), or be a compact body (such as a beam). A series of free bodies and other diagrams may be necessary to solve complex problems. Sometimes in order to calculate the resultant force graphically the applied forces are arranged as the edges of a polygon of forces or force polygon (see § Polygon of forces). == Free body == A body is said to be "free" when it is singled out from other bodies for the purposes of dynamic or static analysis. The object does not have to be "free" in the sense of being unforced, and it may or may not be in a state of equilibrium; rather, it is not fixed in place and is thus "free" to move in response to forces and torques it may experience. Figure 1 shows, on the left, green, red, and blue widgets stacked on top of each other, and for some reason the red cylinder happens to be the body of interest. (It may be necessary to calculate the stress to which it is subjected, for example.) On the right, the red cylinder has become the free body. In figure 2, the interest has shifted to just the left half of the red cylinder and so now it is the free body on the right. The example illustrates the context sensitivity of the term "free body". A cylinder can be part of a free body, it can be a free body by itself, and, as it is composed of parts, any of those parts may be a free body in itself. Figure 1 and 2 are not yet free body diagrams. In a completed free body diagram, the free body would be shown with forces acting on it. 
== Purpose == Free body diagrams are used to visualize forces and moments applied to a body and to calculate reactions in mechanics and design problems. These diagrams are frequently used both to determine the loading of individual structural components and to calculate internal forces within a structure. They are used by most engineering disciplines from biomechanics to structural engineering. In the educational environment, a free body diagram is an important step in understanding certain topics, such as statics, dynamics and other forms of classical mechanics. == Features == A free body diagram is not a scaled drawing, it is a diagram. The symbols used in a free body diagram depend upon how a body is modeled. Free body diagrams consist of: A simplified version of the body (often a dot or a box) Forces shown as straight arrows pointing in the direction they act on the body Moments shown as curves with an arrow head or a vector with two arrow heads pointing in the direction they act on the body One or more reference coordinate systems By convention, reactions to applied forces are shown with hash marks through the stem of the vector The number of forces and moments shown depends upon the specific problem and the assumptions made. Common assumptions are neglecting air resistance and friction and assuming rigid body action. In statics all forces and moments must balance to zero; the physical interpretation is that if they do not, the body is accelerating and the principles of statics do not apply. In dynamics the resultant forces and moments can be non-zero. Free body diagrams may not represent an entire physical body. Portions of a body can be selected for analysis. This technique allows calculation of internal forces, making them appear external so they can be analyzed. This can be used multiple times to calculate internal forces at different locations within a physical body.
For example, consider a gymnast performing the iron cross: modeling the ropes and person together allows calculation of the overall forces (body weight, neglecting rope weight, breezes, buoyancy, electrostatics, relativity, rotation of the earth, etc.). Removing the person and showing only one rope gives the direction of the rope force. Looking only at the person, the forces on the hands can be calculated. Looking only at the arm then gives the forces and moments at the shoulder, and so on, until the component to be analyzed can be isolated. === Modeling the body === A body may be modeled in three ways: a particle. This model may be used when any rotational effects are zero or are of no interest even though the body itself may be extended. The body may be represented by a small symbolic blob and the diagram reduces to a set of concurrent arrows. A force on a particle is a bound vector. rigid extended. Stresses and strains are of no interest but rotational effects are. A force arrow should lie along the line of force, but where along the line is irrelevant. A force on an extended rigid body is a sliding vector. non-rigid extended. The point of application of a force becomes crucial and has to be indicated on the diagram. A force on a non-rigid body is a bound vector. Some use the tail of the arrow to indicate the point of application. Others use the tip. === What is included === An FBD represents the body of interest and the external forces acting on it. The body: This is usually a schematic depending on the body (particle/extended, rigid/non-rigid) and on what questions are to be answered. Thus if rotation of the body and torque is under consideration, an indication of size and shape of the body is needed. For example, the brake dive of a motorcycle cannot be found from a single point, and a sketch with finite dimensions is required. The external forces: These are indicated by labelled arrows.
In a fully solved problem, a force arrow is capable of indicating the direction and the line of action; the magnitude; the point of application; and whether it is a reaction, as opposed to an applied force, shown by a hash through the stem of the arrow. Often a provisional free body diagram is drawn before everything is known. The purpose of the diagram is to help to determine the magnitude, direction, and point of application of the external loads. When a force is originally drawn, its length may not indicate the magnitude; its line may not correspond to the exact line of action; even its orientation may not be correct. External forces known to have negligible effect on the analysis may be omitted after careful consideration (e.g. buoyancy forces of the air in the analysis of a chair, or atmospheric pressure in the analysis of a frying pan). External forces acting on an object may include friction, gravity, normal force, drag, tension, or a human force due to pushing or pulling. When in a non-inertial reference frame (see coordinate system, below), fictitious forces, such as the centrifugal pseudoforce, are appropriate. At least one coordinate system is always included, and chosen for convenience. Judicious selection of a coordinate system can make defining the vectors simpler when writing the equations of motion or statics. The x direction may be chosen to point down the ramp in an inclined plane problem, for example. In that case the friction force only has an x component, and the normal force only has a y component. The force of gravity would then have components in both the x and y directions: mg sin(θ) in the x and mg cos(θ) in the y, where θ is the angle between the ramp and the horizontal. === Exclusions === A free body diagram should not show: Bodies other than the free body. Constraints. (The body is not free from constraints; the constraints have just been replaced by the forces and moments exerted on the body.) Forces exerted by the free body. 
(A diagram showing the forces exerted both on and by a body is likely to be confusing, since all the forces will cancel out. By Newton's third law, if body A exerts a force on body B then B exerts an equal and opposite force on A. This should not be confused with the equal and opposite forces that are necessary to hold a body in equilibrium.) Internal forces. (For example, if an entire truss is being analyzed, the forces between the individual truss members are not included.) Velocity or acceleration vectors. == Analysis == In an analysis, a free body diagram is used by summing all forces and moments (often accomplished along or about each of the axes). When the sum of all forces and moments is zero, the body is at rest or moving and/or rotating at a constant velocity, by Newton's first law. If the sum is not zero, then the body is accelerating in a direction or about an axis, according to Newton's second law. === Forces not aligned to an axis === Determining the sum of the forces and moments is straightforward if they are aligned with coordinate axes, but it is more complex if some are not. It is convenient to use the components of the forces, in which case the symbols ΣFx and ΣFy are used instead of ΣF (the variable M is used for moments). Forces and moments that are at an angle to a coordinate axis can be rewritten as two vectors that are equivalent to the original (or three, for three-dimensional problems), each directed along one of the axes: Fx and Fy. == Example: A block on an inclined plane == A simple free-body diagram, shown above, of a block on a ramp illustrates this. All external supports and structures have been replaced by the forces they generate. These include: mg: the product of the mass of the block and the gravitational acceleration: its weight. N: the normal force of the ramp. Ff: the friction force of the ramp. The force vectors show the direction and point of application and are labelled with their magnitude. 
It contains a coordinate system that can be used when describing the vectors. Some care is needed in interpreting the diagram. The normal force has been shown to act at the midpoint of the base, but if the block is in static equilibrium its true location is directly below the centre of mass, where the weight acts, because that is necessary to compensate for the moment of the friction force. Unlike the weight and normal force, which are expected to act at the tip of the arrow, the friction force is a sliding vector, so the point of application is not relevant; the friction acts along the whole base. == Polygon of forces == In the case of two applied forces, their sum (resultant force) can be found graphically using a parallelogram of forces. To graphically determine the resultant force of multiple forces, the acting forces can be arranged as edges of a polygon by attaching the beginning of one force vector to the end of another in an arbitrary order. The resultant force is then given by the missing edge that closes the polygon. In the diagram, the forces P1 to P6 are applied to the point O. The polygon is constructed starting with P1 and P2 using the parallelogram of forces (vertex a). The process is repeated (adding P3 yields the vertex b, etc.). The remaining edge of the polygon O-e represents the resultant force R. == Kinetic diagram == In dynamics a kinetic diagram is a pictorial device used in analyzing mechanics problems when there is determined to be a net force and/or moment acting on a body. They are related to and often used with free body diagrams, but depict only the net force and moment rather than all of the forces being considered. Kinetic diagrams are not required to solve dynamics problems; their use in teaching dynamics is argued against by some in favor of other methods that they view as simpler. They appear in some dynamics texts but are absent in others. 
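The inclined-plane analysis above can be checked numerically. A minimal sketch (the mass and angle are assumed values for illustration), resolving the block's weight in the ramp-aligned coordinate system and reading off the equilibrium values of N and Ff:

```python
import math

def incline_forces(m, theta_deg, g=9.81):
    """Resolve the weight of a block on a ramp into the ramp-aligned
    coordinate system: x pointing down the slope, y normal to it."""
    theta = math.radians(theta_deg)
    w_x = m * g * math.sin(theta)  # weight component pulling the block down the slope
    w_y = m * g * math.cos(theta)  # weight component pressing the block into the ramp
    # In static equilibrium the force sums along each axis vanish, so the
    # normal force balances w_y and the friction force balances w_x.
    return w_y, w_x  # (N, Ff)

N, Ff = incline_forces(m=10.0, theta_deg=30.0)
```

Because the two components are just the weight split along perpendicular axes, N² + Ff² recovers (mg)², which is a quick sanity check on any such decomposition.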
== See also == Classical mechanics Force field analysis – applications of force diagram in social science Kinematic diagram Physics Shear and moment diagrams Strength of materials
In physics and engineering, a resultant force is the single force and associated torque obtained by combining a system of forces and torques acting on a rigid body via vector addition. The defining feature of a resultant force, or resultant force-torque, is that it has the same effect on the rigid body as the original system of forces. Calculating and visualizing the resultant force on a body is done through computational analysis, or (in the case of sufficiently simple systems) a free body diagram. The point of application of the resultant force determines its associated torque. The term resultant force should be understood to refer to both the forces and torques acting on a rigid body, which is why some use the term resultant force–torque. The force equal to the resultant force in magnitude, yet pointed in the opposite direction, is called an equilibrant force. == Illustration == The diagram illustrates simple graphical methods for finding the line of application of the resultant force of simple planar systems. Lines of application of the actual forces F → 1 {\displaystyle {\scriptstyle {\vec {F}}_{1}}} and F → 2 {\displaystyle \scriptstyle {\vec {F}}_{2}} in the leftmost illustration intersect. After vector addition is performed "at the location of F → 1 {\displaystyle \scriptstyle {\vec {F}}_{1}} ", the net force obtained is translated so that its line of application passes through the common intersection point. With respect to that point all torques are zero, so the torque of the resultant force F → R {\displaystyle \scriptstyle {\vec {F}}_{R}} is equal to the sum of the torques of the actual forces. The illustration in the middle of the diagram shows two parallel actual forces. After vector addition "at the location of F → 2 {\displaystyle \scriptstyle {\vec {F}}_{2}} ", the net force is translated to the appropriate line of application, where it becomes the resultant force F → R {\displaystyle \scriptstyle {\vec {F}}_{R}} . 
The procedure is based on a decomposition of all forces into components for which the lines of application (pale dotted lines) intersect at one point (the so-called pole, arbitrarily set at the right side of the illustration). Then the arguments from the previous case are applied to the forces and their components to demonstrate the torque relationships. The rightmost illustration shows a couple, two equal but opposite forces for which the magnitude of the net force is zero, but they produce the net torque τ = F d {\displaystyle \scriptstyle \tau =Fd} where d {\displaystyle \scriptstyle d} is the distance between their lines of application. This is "pure" torque, since there is no resultant force. == Bound vector == A force applied to a body has a point of application. The effect of the force is different for different points of application. For this reason a force is called a bound vector, which means that it is bound to its point of application. Forces applied at the same point can be added together to obtain the same effect on the body. However, forces with different points of application cannot be added together and maintain the same effect on the body. It is a simple matter to change the point of application of a force by introducing equal and opposite forces at two different points of application that produce a pure torque on the body. In this way, all of the forces acting on a body can be moved to the same point of application with associated torques. A system of forces on a rigid body is combined by moving the forces to the same point of application and computing the associated torques. The sum of these forces and torques yields the resultant force-torque. == Associated torque == If a point R is selected as the point of application of the resultant force F of a system of n forces Fi then the associated torque T is determined from the formulas F = ∑ i = 1 n F i , {\displaystyle \mathbf {F} =\sum _{i=1}^{n}\mathbf {F} _{i},} and T = ∑ i = 1 n ( R i − R ) × F i . 
{\displaystyle \mathbf {T} =\sum _{i=1}^{n}(\mathbf {R} _{i}-\mathbf {R} )\times \mathbf {F} _{i}.} It is useful to note that the point of application R of the resultant force may be anywhere along the line of action of F without changing the value of the associated torque. To see this add the vector kF to the point of application R in the calculation of the associated torque, T = ∑ i = 1 n ( R i − ( R + k F ) ) × F i . {\displaystyle \mathbf {T} =\sum _{i=1}^{n}(\mathbf {R} _{i}-(\mathbf {R} +k\mathbf {F} ))\times \mathbf {F} _{i}.} The right side of this equation can be separated into the original formula for T plus the additional term including kF, T = ∑ i = 1 n ( R i − R ) × F i − ∑ i = 1 n k F × F i = ∑ i = 1 n ( R i − R ) × F i , {\displaystyle \mathbf {T} =\sum _{i=1}^{n}(\mathbf {R} _{i}-\mathbf {R} )\times \mathbf {F} _{i}-\sum _{i=1}^{n}k\mathbf {F} \times \mathbf {F} _{i}=\sum _{i=1}^{n}(\mathbf {R} _{i}-\mathbf {R} )\times \mathbf {F} _{i},} because the second term is zero. To see this notice that F is the sum of the vectors Fi which yields ∑ i = 1 n k F × F i = k F × ( ∑ i = 1 n F i ) = 0 , {\displaystyle \sum _{i=1}^{n}k\mathbf {F} \times \mathbf {F} _{i}=k\mathbf {F} \times (\sum _{i=1}^{n}\mathbf {F} _{i})=0,} thus the value of the associated torque is unchanged. == Torque-free resultant == It is useful to consider whether there is a point of application R such that the associated torque is zero. This point is defined by the property R × F = ∑ i = 1 n R i × F i , {\displaystyle \mathbf {R} \times \mathbf {F} =\sum _{i=1}^{n}\mathbf {R} _{i}\times \mathbf {F} _{i},} where F is resultant force and Fi form the system of forces. Notice that this equation for R has a solution only if the sum of the individual torques on the right side yield a vector that is perpendicular to F. Thus, the condition that a system of forces has a torque-free resultant can be written as F ⋅ ( ∑ i = 1 n R i × F i ) = 0. 
{\displaystyle \mathbf {F} \cdot (\sum _{i=1}^{n}\mathbf {R} _{i}\times \mathbf {F} _{i})=0.} If this condition is satisfied then there is a point of application for the resultant which results in a pure force. If this condition is not satisfied, then the system of forces includes a pure torque for every point of application. == Wrench == The forces and torques acting on a rigid body can be assembled into the pair of vectors called a wrench. If a system of forces and torques has a net resultant force F and a net resultant torque T, then the entire system can be replaced by a force F and an arbitrarily located couple that yields a torque of T. In general, if F and T are orthogonal, it is possible to derive a radial vector R such that R × F = T {\displaystyle \mathbf {R} \times \mathbf {F} =\mathbf {T} } , meaning that the single force F, acting at displacement R, can replace the system. If the system is zero-force (torque only), it is termed a screw and is mathematically formulated as screw theory. The resultant force and torque on a rigid body obtained from a system of forces Fi i=1,...,n, is simply the sum of the individual wrenches Wi, that is W = ∑ i = 1 n W i = ∑ i = 1 n ( F i , R i × F i ) . {\displaystyle {\mathsf {W}}=\sum _{i=1}^{n}{\mathsf {W}}_{i}=\sum _{i=1}^{n}(\mathbf {F} _{i},\mathbf {R} _{i}\times \mathbf {F} _{i}).} Notice that the case of two equal but opposite forces F and -F acting at points A and B respectively, yields the resultant W=(F-F, A×F - B× F) = (0, (A-B)×F). This shows that wrenches of the form W=(0, T) can be interpreted as pure torques.
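The formulas above are easy to verify numerically. A minimal Python sketch (the application points and forces are made-up illustrative values): it sums a system of forces, computes the associated torque T = Σ(Ri − R) × Fi about a chosen point R, and checks that sliding R along the line of action of F (to R + kF) leaves the torque unchanged, as derived above.

```python
def cross(a, b):
    """3D cross product of tuples a and b."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def add(a, b): return tuple(x + y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def scale(k, a): return tuple(k * x for x in a)

def resultant(points, forces, R=(0.0, 0.0, 0.0)):
    """Resultant force F = sum F_i and associated torque
    T = sum (R_i - R) x F_i about the point of application R."""
    F = (0.0, 0.0, 0.0)
    T = (0.0, 0.0, 0.0)
    for Ri, Fi in zip(points, forces):
        F = add(F, Fi)
        T = add(T, cross(sub(Ri, R), Fi))
    return F, T

points = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
forces = [(0.0, 3.0, 0.0), (1.0, 0.0, 0.0)]
F, T = resultant(points, forces)

# Sliding R along the line of action of F leaves T unchanged, because the
# extra term sum_i k F x F_i = k F x F vanishes.
F2, T2 = resultant(points, forces, R=scale(2.0, F))
```

For this planar system F · T = 0, so by the torque-free condition above a point of application exists at which the system reduces to a pure force.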
Borůvka's algorithm is a greedy algorithm for finding a minimum spanning tree in a graph, or a minimum spanning forest in the case of a graph that is not connected. It was first published in 1926 by Otakar Borůvka as a method of constructing an efficient electricity network for Moravia. The algorithm was rediscovered by Choquet in 1938; again by Florek, Łukasiewicz, Perkal, Steinhaus, and Zubrzycki in 1951; and again by Georges Sollin in 1965. This algorithm is frequently called Sollin's algorithm, especially in the parallel computing literature. The algorithm begins by finding the minimum-weight edge incident to each vertex of the graph, and adding all of those edges to the forest. Then, it repeats a similar process of finding the minimum-weight edge from each tree constructed so far to a different tree, and adding all of those edges to the forest. Each repetition of this process reduces the number of trees, within each connected component of the graph, to at most half of its former value, so after logarithmically many repetitions the process finishes. When it does, the set of edges it has added forms the minimum spanning forest. == Pseudocode == The following pseudocode illustrates a basic implementation of Borůvka's algorithm. In the conditional clauses, every edge uv is considered cheaper than "None". The purpose of the completed variable is to determine whether the forest F is yet a spanning forest. If edges do not have distinct weights, then a consistent tie-breaking rule must be used, e.g. based on some total order on vertices or edges. This can be achieved by representing vertices as integers and comparing them directly; comparing their memory addresses; etc. A tie-breaking rule is necessary to ensure that the created graph is indeed a forest, that is, it does not contain cycles. For example, consider a triangle graph with nodes {a,b,c} and all edges of weight 1. 
Then a cycle could be created if we select ab as the minimal weight edge for {a}, bc for {b}, and ca for {c}. A tie-breaking rule which orders edges first by source, then by destination, will prevent creation of a cycle, resulting in the minimal spanning tree {ab, bc}.

algorithm Borůvka is
    input: A weighted undirected graph G = (V, E).
    output: F, a minimum spanning forest of G.

    Initialize a forest F to (V, E′) where E′ = {}.
    completed := false

    while not completed do
        Find the connected components of F and assign to each vertex its component
        Initialize the cheapest edge for each component to "None"
        for each edge uv in E, where u and v are in different components of F:
            let wx be the cheapest edge for the component of u
            if is-preferred-over(uv, wx) then
                Set uv as the cheapest edge for the component of u
            let yz be the cheapest edge for the component of v
            if is-preferred-over(uv, yz) then
                Set uv as the cheapest edge for the component of v
        if all components have cheapest edge set to "None" then
            // no more trees can be merged -- we are finished
            completed := true
        else
            completed := false
            for each component whose cheapest edge is not "None" do
                Add its cheapest edge to E′

function is-preferred-over(edge1, edge2) is
    return (edge2 is "None") or
           (weight(edge1) < weight(edge2)) or
           (weight(edge1) = weight(edge2) and tie-breaking-rule(edge1, edge2))

function tie-breaking-rule(edge1, edge2) is
    The tie-breaking rule; returns true if and only if edge1 is preferred over edge2 in the case of a tie.

As an optimization, one could remove from G each edge that is found to connect two vertices in the same component, so that it does not contribute to the time for searching for cheapest edges in later components. == Complexity == Borůvka's algorithm can be shown to take O(log V) iterations of the outer loop until it terminates, and therefore to run in time O(E log V), where E is the number of edges, and V is the number of vertices in G (assuming E ≥ V). 
In planar graphs, and more generally in families of graphs closed under graph minor operations, it can be made to run in linear time, by removing all but the cheapest edge between each pair of components after each stage of the algorithm. == Other algorithms == Other algorithms for this problem include Prim's algorithm and Kruskal's algorithm. Fast parallel algorithms can be obtained by combining Prim's algorithm with Borůvka's. A faster randomized minimum spanning tree algorithm based in part on Borůvka's algorithm due to Karger, Klein, and Tarjan runs in expected O(E) time. The best known (deterministic) minimum spanning tree algorithm by Bernard Chazelle is also based in part on Borůvka's and runs in O(E α(E,V)) time, where α is the inverse Ackermann function. These randomized and deterministic algorithms combine steps of Borůvka's algorithm, reducing the number of components that remain to be connected, with steps of a different type that reduce the number of edges between pairs of components.
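The pseudocode above translates directly into a short program. A minimal Python sketch (vertices are assumed to be the integers 0..n−1, edges are (weight, u, v) tuples, and comparing whole tuples serves as the consistent tie-breaking rule):

```python
def boruvka(n, edges):
    """Minimum spanning forest of an undirected graph on vertices 0..n-1.
    edges: list of (weight, u, v) tuples. Ties are broken by comparing the
    tuples themselves, which gives a total order on edges."""
    parent = list(range(n))

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forest = []
    completed = False
    while not completed:
        cheapest = {}                  # component root -> cheapest outgoing edge
        for e in edges:
            w, u, v = e
            ru, rv = find(u), find(v)
            if ru == rv:
                continue               # internal edge of a component: skip
            for r in (ru, rv):
                if r not in cheapest or e < cheapest[r]:
                    cheapest[r] = e
        if not cheapest:
            completed = True           # no more trees can be merged
        else:
            for w, u, v in cheapest.values():
                ru, rv = find(u), find(v)
                if ru != rv:           # the same edge may be picked by both sides
                    parent[ru] = rv
                    forest.append((w, u, v))
    return forest

# Triangle {a, b, c} with all weights 1 (a=0, b=1, c=2): the tuple
# tie-breaking rule avoids the cycle and yields {ab, bc}.
mst = boruvka(3, [(1, 0, 1), (1, 1, 2), (1, 2, 0)])
```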
Dijkstra's algorithm ( DYKE-strəz) is an algorithm for finding the shortest paths between nodes in a weighted graph, which may represent, for example, a road network. It was conceived by computer scientist Edsger W. Dijkstra in 1956 and published three years later. Dijkstra's algorithm finds the shortest path from a given source node to every other node.: 196–206  It can be used to find the shortest path to a specific destination node, by terminating the algorithm after determining the shortest path to the destination node. For example, if the nodes of the graph represent cities, and the costs of edges represent the distances between pairs of cities connected by a direct road, then Dijkstra's algorithm can be used to find the shortest route between one city and all other cities. A common application of shortest path algorithms is network routing protocols, most notably IS-IS (Intermediate System to Intermediate System) and OSPF (Open Shortest Path First). It is also employed as a subroutine in algorithms such as Johnson's algorithm. The algorithm uses a min-priority queue data structure for selecting the shortest paths known so far. Before more advanced priority queue structures were discovered, Dijkstra's original algorithm ran in Θ ( | V | 2 ) {\displaystyle \Theta (|V|^{2})} time, where | V | {\displaystyle |V|} is the number of nodes. Fredman & Tarjan 1984 proposed a Fibonacci heap priority queue to optimize the running time complexity to Θ ( | E | + | V | log ⁡ | V | ) {\displaystyle \Theta (|E|+|V|\log |V|)} . This is asymptotically the fastest known single-source shortest-path algorithm for arbitrary directed graphs with unbounded non-negative weights. However, specialized cases (such as bounded/integer weights, directed acyclic graphs etc.) can be improved further. If preprocessing is allowed, algorithms such as contraction hierarchies can be up to seven orders of magnitude faster. 
Dijkstra's algorithm is commonly used on graphs where the edge weights are positive integers or real numbers. It can be generalized to any graph where the edge weights are partially ordered, provided the subsequent labels (a subsequent label is produced when traversing an edge) are monotonically non-decreasing. In many fields, particularly artificial intelligence, Dijkstra's algorithm or a variant offers a uniform cost search and is formulated as an instance of the more general idea of best-first search. == History == What is the shortest way to travel from Rotterdam to Groningen, in general: from given city to given city. It is the algorithm for the shortest path, which I designed in about twenty minutes. One morning I was shopping in Amsterdam with my young fiancée, and tired, we sat down on the café terrace to drink a cup of coffee and I was just thinking about whether I could do this, and I then designed the algorithm for the shortest path. As I said, it was a twenty-minute invention. In fact, it was published in '59, three years later. The publication is still readable, it is, in fact, quite nice. One of the reasons that it is so nice was that I designed it without pencil and paper. I learned later that one of the advantages of designing without pencil and paper is that you are almost forced to avoid all avoidable complexities. Eventually, that algorithm became to my great amazement, one of the cornerstones of my fame. Dijkstra thought about the shortest path problem while working as a programmer at the Mathematical Center in Amsterdam in 1956. He wanted to demonstrate the capabilities of the new ARMAC computer. His objective was to choose a problem and a computer solution that non-computing people could understand. He designed the shortest path algorithm and later implemented it for ARMAC for a slightly simplified transportation map of 64 cities in the Netherlands (he limited it to 64, so that 6 bits would be sufficient to encode the city number). 
A year later, he came across another problem advanced by hardware engineers working on the institute's next computer: minimize the amount of wire needed to connect the pins on the machine's back panel. As a solution, he re-discovered Prim's minimal spanning tree algorithm (known earlier to Jarník, and also rediscovered by Prim). Dijkstra published the algorithm in 1959, two years after Prim and 29 years after Jarník. == Algorithm == The algorithm requires a starting node, and computes the shortest distance from that starting node to each other node. Dijkstra's algorithm starts with infinite distances and tries to improve them step by step: Create a set of all unvisited nodes: the unvisited set. Assign to every node a distance from start value: for the starting node, it is zero, and for all other nodes, it is infinity, since initially no path is known to these nodes. During execution, the distance of a node N is the length of the shortest path discovered so far between the starting node and N. From the unvisited set, select the current node to be the one with the smallest (finite) distance; initially, this is the starting node (distance zero). If the unvisited set is empty, or contains only nodes with infinite distance (which are unreachable), then the algorithm terminates by skipping to step 6. If the only concern is the path to a target node, the algorithm terminates once the current node is the target node. Otherwise, the algorithm continues. For the current node, consider all of its unvisited neighbors and update their distances through the current node; compare the newly calculated distance to the one currently assigned to the neighbor and assign the smaller one to it. For example, if the current node A is marked with a distance of 6, and the edge connecting it with its neighbor B has length 2, then the distance to B through A is 6 + 2 = 8. If B was previously marked with a distance greater than 8, then update it to 8 (the path to B through A is shorter). 
Otherwise, keep its current distance (the path to B through A is not the shortest). After considering all of the current node's unvisited neighbors, the current node is removed from the unvisited set. Thus a visited node is never rechecked, which is correct because the distance recorded on the current node is minimal (as ensured in step 3), and thus final. Repeat from step 3. Once the loop exits (steps 3–5), every visited node contains its shortest distance from the starting node. == Description == The shortest path between two intersections on a city map can be found by this algorithm using pencil and paper. Every intersection is listed on a separate line: one is the starting point and is labeled (given a distance of) 0. Every other intersection is initially labeled with a distance of infinity. This is done to note that no path to these intersections has yet been established. At each iteration one intersection becomes the current intersection. For the first iteration, this is the starting point. From the current intersection, the distance to every neighbor (directly-connected) intersection is assessed by summing the label (value) of the current intersection and the distance to the neighbor and then relabeling the neighbor with the lesser of that sum and the neighbor's existing label. I.e., the neighbor is relabeled if the path to it through the current intersection is shorter than previously assessed paths. If so, mark the road to the neighbor with an arrow pointing to it, and erase any other arrow that points to it. After the distances to each of the current intersection's neighbors have been assessed, the current intersection is marked as visited. The unvisited intersection with the smallest label becomes the current intersection and the process repeats until all nodes with labels less than the destination's label have been visited. Once no unvisited nodes remain with a label smaller than the destination's label, the remaining arrows show the shortest path. 
== Pseudocode == In the following pseudocode, dist is an array that contains the current distances from the source to other vertices, i.e. dist[u] is the current distance from the source to the vertex u. The prev array contains pointers to previous-hop nodes on the shortest path from source to the given vertex (equivalently, it is the next-hop on the path from the given vertex to the source). The code u ← vertex in Q with min dist[u] searches for the vertex u in the vertex set Q that has the least dist[u] value. Graph.Edges(u, v) returns the length of the edge joining (i.e. the distance between) the two neighbor-nodes u and v. The variable alt on line 14 is the length of the path from the source node to the neighbor node v if it were to go through u. If this path is shorter than the current shortest path recorded for v, then the distance of v is updated to alt.

 1  function Dijkstra(Graph, source):
 2
 3      for each vertex v in Graph.Vertices:
 4          dist[v] ← INFINITY
 5          prev[v] ← UNDEFINED
 6          add v to Q
 7      dist[source] ← 0
 8
 9      while Q is not empty:
10          u ← vertex in Q with minimum dist[u]
11          Q.remove(u)
12
13          for each neighbor v of u still in Q:
14              alt ← dist[u] + Graph.Edges(u, v)
15              if alt < dist[v]:
16                  dist[v] ← alt
17                  prev[v] ← u
18
19      return dist[], prev[]

To find the shortest path between vertices source and target, the search terminates after line 10 if u = target. The shortest path from source to target can be obtained by reverse iteration:

1  S ← empty sequence
2  u ← target
3  if prev[u] is defined or u = source:  // Proceed if the vertex is reachable
4      while u is defined:               // Construct the shortest path with a stack S
5          S.push(u)                     // Push the vertex onto the stack
6          u ← prev[u]                   // Traverse from target to source

Now sequence S is the list of vertices constituting one of the shortest paths from source to target, or the empty sequence if no path exists. A more general problem is to find all the shortest paths between source and target (there might be several of the same length). 
Then instead of storing only a single node in each entry of prev[], all nodes satisfying the relaxation condition can be stored. For example, if both r and source connect to target and they lie on different shortest paths through target (because the edge cost is the same in both cases), then both r and source are added to prev[target]. When the algorithm completes, the prev[] data structure describes a graph that is a subset of the original graph with some edges removed. Its key property is that if the algorithm was run with some starting node, then every path from that node to any other node in the new graph is a shortest path between those nodes in the original graph, and all paths of that length from the original graph are present in the new graph. Then to actually find all these shortest paths between two given nodes, a path finding algorithm on the new graph, such as depth-first search, would work. === Using a priority queue === A min-priority queue is an abstract data type that provides 3 basic operations: add_with_priority(), decrease_priority() and extract_min(). As mentioned earlier, using such a data structure can lead to faster computing times than using a basic queue. Notably, Fibonacci heap or Brodal queue offer optimal implementations for those 3 operations. 
As the algorithm is slightly different in appearance, it is mentioned here, in pseudocode as well:

 1  function Dijkstra(Graph, source):
 2      Q ← Queue storing vertex priority
 3
 4      dist[source] ← 0                      // Initialization
 5      Q.add_with_priority(source, 0)        // associated priority equals dist[·]
 6
 7      for each vertex v in Graph.Vertices:
 8          if v ≠ source
 9              prev[v] ← UNDEFINED           // Predecessor of v
10              dist[v] ← INFINITY            // Unknown distance from source to v
11              Q.add_with_priority(v, INFINITY)
12
13
14      while Q is not empty:                 // The main loop
15          u ← Q.extract_min()               // Remove and return best vertex
16          for each neighbor v of u:         // Go through all v neighbors of u
17              alt ← dist[u] + Graph.Edges(u, v)
18              if alt < dist[v]:
19                  prev[v] ← u
20                  dist[v] ← alt
21                  Q.decrease_priority(v, alt)
22
23      return (dist, prev)

Instead of filling the priority queue with all nodes in the initialization phase, it is possible to initialize it to contain only source; then, inside the if alt < dist[v] block, the decrease_priority() becomes an add_with_priority() operation.: 198  Yet another alternative is to add nodes unconditionally to the priority queue and to instead check after extraction (u ← Q.extract_min()) that it isn't revisiting, or that no shorter connection was found yet in the if alt < dist[v] block. This can be done by additionally extracting the associated priority p from the queue and only processing further if p == dist[u] inside the while Q is not empty loop. These alternatives can use entirely array-based priority queues without decrease-key functionality, which have been found to achieve even faster computing times in practice. However, the difference in performance was found to be narrower for denser graphs. 
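The last alternative just described — pushing nodes unconditionally and discarding stale queue entries after extraction — maps directly onto priority queues without decrease-key, such as Python's heapq. A minimal sketch (the graph and its vertex names are made-up illustrative values):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a graph given as
    {u: [(v, weight), ...]} with non-negative edge weights."""
    dist = {v: float('inf') for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    queue = [(0, source)]                        # Q.add_with_priority(source, 0)
    while queue:
        p, u = heapq.heappop(queue)              # u <- Q.extract_min()
        if p > dist[u]:
            continue                             # stale entry: a shorter path to u was found
        for v, w in graph[u]:
            alt = dist[u] + w
            if alt < dist[v]:
                dist[v] = alt
                prev[v] = u
                heapq.heappush(queue, (alt, v))  # push again instead of decrease_priority
    return dist, prev

graph = {
    'a': [('b', 7), ('c', 9), ('f', 14)],
    'b': [('a', 7), ('c', 10), ('d', 15)],
    'c': [('a', 9), ('b', 10), ('d', 11), ('f', 2)],
    'd': [('b', 15), ('c', 11), ('e', 6)],
    'e': [('d', 6), ('f', 9)],
    'f': [('a', 14), ('c', 2), ('e', 9)],
}
dist, prev = dijkstra(graph, 'a')

# Reverse iteration over prev reconstructs a shortest path, as in the
# earlier pseudocode.
path, u = [], 'e'
while u is not None:
    path.append(u)
    u = prev[u]
path.reverse()
```

The check p > dist[u] replaces decrease_priority(): a vertex may appear several times in the heap, but only its entry with the smallest priority is processed.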
Invariant hypothesis: For each visited node v, dist[v] is the shortest distance from source to v, and for each unvisited node u, dist[u] is the shortest distance from source to u when traveling via visited nodes only, or infinity if no such path exists. (Note: we do not assume dist[u] is the actual shortest distance for unvisited nodes, while dist[v] is the actual shortest distance) === Base case === The base case is when there is just one visited node, source. Its distance is defined to be zero, which is the shortest distance, since negative weights are not allowed. Hence, the hypothesis holds. === Induction === Assuming that the hypothesis holds for k {\displaystyle k} visited nodes, to show it holds for k + 1 {\displaystyle k+1} nodes, let u be the next visited node, i.e. the node with minimum dist[u]. The claim is that dist[u] is the shortest distance from source to u. The proof is by contradiction. If a shorter path were available, then this shorter path either contains another unvisited node or not. In the former case, let w be the first unvisited node on this shorter path. By induction, the shortest paths from source to u and w through visited nodes only have costs dist[u] and dist[w] respectively. This means that the cost of going from source to u via w is at least dist[w] plus the minimal cost of going from w to u. As the edge costs are positive, the minimal cost of going from w to u is a positive number. However, dist[u] is at most dist[w] because otherwise w would have been picked by the priority queue instead of u. This is a contradiction, since it has already been established that dist[w] + a positive number < dist[u]. In the latter case, let w be the last-but-one node on this shorter path. That means dist[w] + Graph.Edges[w,u] < dist[u]. That is a contradiction because by the time w is visited, it should have set dist[u] to at most dist[w] + Graph.Edges[w,u].
For all other visited nodes v, dist[v] is already known to be the shortest distance from source, because of the inductive hypothesis, and these values are unchanged. After processing u, it is still true that for each unvisited node w, dist[w] is the shortest distance from source to w using visited nodes only. Any shorter path that did not use u would already have been found, and if a shorter path used u, it would have been updated when processing u. After all nodes are visited, the shortest path from source to any node v consists only of visited nodes. Therefore, dist[v] is the shortest distance. == Running time == Bounds of the running time of Dijkstra's algorithm on a graph with edges E and vertices V can be expressed as a function of the number of edges, denoted | E | {\displaystyle |E|} , and the number of vertices, denoted | V | {\displaystyle |V|} , using big-O notation. The complexity bound depends mainly on the data structure used to represent the set Q. In the following, upper bounds can be simplified because | E | {\displaystyle |E|} is O ( | V | 2 ) {\displaystyle O(|V|^{2})} for any simple graph, but that simplification disregards the fact that in some problems, other upper bounds on | E | {\displaystyle |E|} may hold. For any data structure for the vertex set Q, the running time is: Θ ( | E | ⋅ T d k + | V | ⋅ T e m ) , {\displaystyle \Theta (|E|\cdot T_{\mathrm {dk} }+|V|\cdot T_{\mathrm {em} }),} where T d k {\displaystyle T_{\mathrm {dk} }} and T e m {\displaystyle T_{\mathrm {em} }} are the complexities of the decrease-key and extract-minimum operations in Q, respectively. The simplest version of Dijkstra's algorithm stores the vertex set Q as a linked list or array, and edges as an adjacency list or matrix. In this case, extract-minimum is simply a linear search through all vertices in Q, so the running time is Θ ( | E | + | V | 2 ) = Θ ( | V | 2 ) {\displaystyle \Theta (|E|+|V|^{2})=\Theta (|V|^{2})} .
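The simplest variant just described, with Q as a plain set and a linear-search extract-minimum, can be sketched as follows. The dict-of-dicts graph representation (every vertex a key, mapped to its neighbor-to-weight dict) is an illustrative assumption:

```python
def dijkstra_array(graph, source):
    """Simplest Dijkstra variant: the vertex set Q is a plain set and
    extract-minimum is a linear search, giving the Theta(|V|^2) bound.

    graph: dict mapping vertex -> dict of neighbor -> edge weight.
    """
    INF = float("inf")
    dist = {v: INF for v in graph}
    dist[source] = 0
    prev = {}
    Q = set(graph)
    while Q:
        u = min(Q, key=lambda v: dist[v])   # linear-search extract-min
        Q.remove(u)
        for v, w in graph[u].items():       # relax every outgoing edge of u
            alt = dist[u] + w
            if alt < dist[v]:
                dist[v] = alt
                prev[v] = u
    return dist, prev
```

Each of the |V| iterations scans up to |V| entries in Q, which is where the quadratic term comes from; edge relaxations contribute only Θ(|E|) in total.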
For sparse graphs, that is, graphs with far fewer than | V | 2 {\displaystyle |V|^{2}} edges, Dijkstra's algorithm can be implemented more efficiently by storing the graph in the form of adjacency lists and using a self-balancing binary search tree, binary heap, pairing heap, Fibonacci heap or a priority heap as a priority queue to implement extracting minimum efficiently. To perform decrease-key steps in a binary heap efficiently, it is necessary to use an auxiliary data structure that maps each vertex to its position in the heap, and to update this structure as the priority queue Q changes. With a self-balancing binary search tree or binary heap, the algorithm requires Θ ( ( | E | + | V | ) log ⁡ | V | ) {\displaystyle \Theta ((|E|+|V|)\log |V|)} time in the worst case; for connected graphs this time bound can be simplified to Θ ( | E | log ⁡ | V | ) {\displaystyle \Theta (|E|\log |V|)} . The Fibonacci heap improves this to Θ ( | E | + | V | log ⁡ | V | ) . {\displaystyle \Theta (|E|+|V|\log |V|).} When using binary heaps, the average case time complexity is lower than the worst-case: assuming edge costs are drawn independently from a common probability distribution, the expected number of decrease-key operations is bounded by Θ ( | V | log ⁡ ( | E | / | V | ) ) {\displaystyle \Theta (|V|\log(|E|/|V|))} , giving a total running time of O ( | E | + | V | log ⁡ | E | | V | log ⁡ | V | ) . {\displaystyle O\left(|E|+|V|\log {\frac {|E|}{|V|}}\log |V|\right).} === Practical optimizations and infinite graphs === In common presentations of Dijkstra's algorithm, initially all nodes are entered into the priority queue.
This is, however, not necessary: the algorithm can start with a priority queue that contains only one item, and insert new items as they are discovered (instead of doing a decrease-key, check whether the key is in the queue; if it is, decrease its key, otherwise insert it). This variant has the same worst-case bounds as the common variant, but maintains a smaller priority queue in practice, speeding up queue operations. Moreover, not inserting all nodes in a graph makes it possible to extend the algorithm to find the shortest path from a single source to the closest of a set of target nodes on infinite graphs or those too large to represent in memory. The resulting algorithm is called uniform-cost search (UCS) in the artificial intelligence literature and can be expressed in pseudocode as

procedure uniform_cost_search(start) is
    node ← start
    frontier ← priority queue containing node only
    expanded ← empty set
    do
        if frontier is empty then
            return failure
        node ← frontier.pop()
        if node is a goal state then
            return solution(node)
        expanded.add(node)
        for each of node's neighbors n do
            if n is not in expanded and not in frontier then
                frontier.add(n)
            else if n is in frontier with higher cost
                replace existing node with n

Its complexity can be expressed in an alternative way for very large graphs: when C* is the length of the shortest path from the start node to any node satisfying the "goal" predicate, each edge has cost at least ε, and the number of neighbors per node is bounded by b, then the algorithm's worst-case time and space complexity are both in O(b^(1+⌊C*/ε⌋)).
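The UCS pseudocode above maps naturally onto Python's heapq. Since heapq offers no in-place replacement, the "replace existing node with n" step is emulated here by lazy deletion (stale frontier entries are skipped when popped); the `neighbors` and `is_goal` callables are illustrative assumptions supplied by the caller:

```python
import heapq

def uniform_cost_search(start, is_goal, neighbors):
    """Uniform-cost search sketch. `neighbors(node)` yields (next_node, cost)
    pairs and `is_goal(node)` tests the goal predicate."""
    frontier = [(0, start)]
    best = {start: 0}                # cheapest known cost to reach each node
    parent = {start: None}
    expanded = set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node in expanded or cost > best[node]:
            continue                 # already expanded, or a stale entry
        if is_goal(node):
            path = []
            while node is not None:  # reconstruct the solution path
                path.append(node)
                node = parent[node]
            return cost, path[::-1]
        expanded.add(node)
        for n, c in neighbors(node):
            new_cost = cost + c
            if n not in expanded and new_cost < best.get(n, float("inf")):
                best[n] = new_cost
                parent[n] = node
                heapq.heappush(frontier, (new_cost, n))
    return None                      # failure: no goal state is reachable
```

Because the goal test happens on extraction rather than on generation, the first goal popped is guaranteed to be reached at minimum cost.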
Further optimizations for the single-target case include bidirectional variants, goal-directed variants such as the A* algorithm (see § Related problems and algorithms), graph pruning to determine which nodes are likely to form the middle segment of shortest paths (reach-based routing), and hierarchical decompositions of the input graph that reduce s–t routing to connecting s and t to their respective "transit nodes" followed by shortest-path computation between these transit nodes using a "highway". Combinations of such techniques may be needed for optimal practical performance on specific problems. === Optimality for comparison-sorting by distance === As well as simply computing distances and paths, Dijkstra's algorithm can be used to sort vertices by their distances from a given starting vertex. In 2023, Haeupler, Rozhoň, Tětek, Hladík, and Tarjan (one of the inventors of the 1984 Fibonacci heap) proved that, for this sorting problem on a positively-weighted directed graph, a version of Dijkstra's algorithm with a special heap data structure has a runtime and number of comparisons that are within a constant factor of optimal among comparison-based algorithms for the same sorting problem on the same graph and starting vertex but with variable edge weights. To achieve this, they use a comparison-based heap whose cost of returning/removing the minimum element from the heap is logarithmic in the number of elements inserted after it rather than in the number of elements in the heap. === Specialized variants === When arc weights are small integers (bounded by a parameter C {\displaystyle C} ), specialized queues can be used for increased speed. The first algorithm of this type was Dial's algorithm for graphs with positive integer edge weights, which uses a bucket queue to obtain a running time O ( | E | + | V | C ) {\displaystyle O(|E|+|V|C)} .
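A sketch of Dial's algorithm follows. Since every pending distance lies within a window of width C of the current distance d, a circular array of C + 1 buckets indexed by distance modulo C + 1 can replace the priority queue; the stale-entry check stands in for an explicit decrease-key:

```python
def dial(graph, source, C):
    """Dial's algorithm sketch for positive integer edge weights in 1..C.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns the dict of shortest distances from source to reachable nodes.
    """
    INF = float("inf")
    dist = {source: 0}
    nbuckets = C + 1                       # bucket d % (C+1) holds distance d
    buckets = [[] for _ in range(nbuckets)]
    buckets[0].append(source)
    pending = 1                            # entries currently sitting in buckets
    d = 0
    while pending:
        bucket = buckets[d % nbuckets]
        while bucket:
            u = bucket.pop()
            pending -= 1
            if dist[u] != d:
                continue                   # stale entry from an earlier relaxation
            for v, w in graph.get(u, ()):
                alt = d + w                # 1 <= w <= C keeps alt inside the window
                if alt < dist.get(v, INF):
                    dist[v] = alt
                    buckets[alt % nbuckets].append(v)
                    pending += 1
        d += 1
    return dist
```

Each node is finalized when its bucket is reached, so the scan over d costs O(|V|C) and the relaxations cost O(|E|), matching the bound stated above.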
The use of a Van Emde Boas tree as the priority queue brings the complexity to O ( | E | + | V | log ⁡ C / log ⁡ log ⁡ | V | C ) {\displaystyle O(|E|+|V|\log C/\log \log |V|C)} . Another interesting variant based on a combination of a new radix heap and the well-known Fibonacci heap runs in time O ( | E | + | V | log ⁡ C ) {\displaystyle O(|E|+|V|{\sqrt {\log C}})} . Finally, the best algorithms in this special case run in O ( | E | log ⁡ log ⁡ | V | ) {\displaystyle O(|E|\log \log |V|)} time and O ( | E | + | V | min { ( log ⁡ | V | ) 1 / 3 + ε , ( log ⁡ C ) 1 / 4 + ε } ) {\displaystyle O(|E|+|V|\min\{(\log |V|)^{1/3+\varepsilon },(\log C)^{1/4+\varepsilon }\})} time. == Related problems and algorithms == Dijkstra's original algorithm can be extended with modifications. For example, sometimes it is desirable to present solutions which are less than mathematically optimal. To obtain a ranked list of less-than-optimal solutions, the optimal solution is first calculated. A single edge appearing in the optimal solution is removed from the graph, and the optimum solution to this new graph is calculated. Each edge of the original solution is suppressed in turn and a new shortest path is calculated. The secondary solutions are then ranked and presented after the first optimal solution. Dijkstra's algorithm is usually the working principle behind link-state routing protocols; OSPF and IS-IS are the most common. Unlike Dijkstra's algorithm, the Bellman–Ford algorithm can be used on graphs with negative edge weights, as long as the graph contains no negative cycle reachable from the source vertex s. The presence of such cycles means that no shortest path can be found, since the label becomes lower each time the cycle is traversed. (This statement assumes that a "path" is allowed to repeat vertices. In graph theory that is normally not allowed. In theoretical computer science it often is allowed.)
It is possible to adapt Dijkstra's algorithm to handle negative weights by combining it with the Bellman–Ford algorithm (to remove negative edges and detect negative cycles): Johnson's algorithm. The A* algorithm is a generalization of Dijkstra's algorithm that reduces the size of the subgraph that must be explored, if additional information is available that provides a lower bound on the distance to the target. The process that underlies Dijkstra's algorithm is similar to the greedy process used in Prim's algorithm. Prim's algorithm finds a minimum spanning tree that connects all nodes in the graph; Dijkstra's algorithm is concerned with the shortest path between only two nodes. Prim's does not evaluate the total weight of the path from the starting node, only the individual edges. Breadth-first search can be viewed as a special case of Dijkstra's algorithm on unweighted graphs, where the priority queue degenerates into a FIFO queue. The fast marching method can be viewed as a continuous version of Dijkstra's algorithm which computes the geodesic distance on a triangle mesh. === Dynamic programming perspective === From a dynamic programming point of view, Dijkstra's algorithm is a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by the Reaching method. In fact, Dijkstra's explanation of the logic behind the algorithm: Problem 2. Find the path of minimum total length between two given nodes P and Q. We use the fact that, if R is a node on the minimal path from P to Q, knowledge of the latter implies the knowledge of the minimal path from P to R. is a paraphrasing of Bellman's Principle of Optimality in the context of the shortest path problem.
== See also == A* search algorithm Bellman–Ford algorithm Euclidean shortest path Floyd–Warshall algorithm Johnson's algorithm Longest path problem Parallel all-pairs shortest path algorithm == Notes == == References == Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Section 24.3: Dijkstra's algorithm". Introduction to Algorithms (Second ed.). MIT Press and McGraw–Hill. pp. 595–601. ISBN 0-262-03293-7. Dial, Robert B. (1969). "Algorithm 360: Shortest-path forest with topological ordering [H]". Communications of the ACM. 12 (11): 632–633. doi:10.1145/363269.363610. S2CID 6754003. Fredman, Michael Lawrence; Tarjan, Robert E. (1984). Fibonacci heaps and their uses in improved network optimization algorithms. 25th Annual Symposium on Foundations of Computer Science. IEEE. pp. 338–346. doi:10.1109/SFCS.1984.715934. Fredman, Michael Lawrence; Tarjan, Robert E. (1987). "Fibonacci heaps and their uses in improved network optimization algorithms". Journal of the Association for Computing Machinery. 34 (3): 596–615. doi:10.1145/28869.28874. S2CID 7904683. Zhan, F. Benjamin; Noon, Charles E. (February 1998). "Shortest Path Algorithms: An Evaluation Using Real Road Networks". Transportation Science. 32 (1): 65–73. doi:10.1287/trsc.32.1.65. S2CID 14986297. Leyzorek, M.; Gray, R. S.; Johnson, A. A.; Ladew, W. C.; Meaker, Jr., S. R.; Petry, R. M.; Seitz, R. N. (1957). Investigation of Model Techniques – First Annual Report – 6 June 1956 – 1 July 1957 – A Study of Model Techniques for Communication Systems. Cleveland, Ohio: Case Institute of Technology. Knuth, D.E. (1977). "A Generalization of Dijkstra's Algorithm". Information Processing Letters. 6 (1): 1–5. doi:10.1016/0020-0190(77)90002-3. Ahuja, Ravindra K.; Mehlhorn, Kurt; Orlin, James B.; Tarjan, Robert E. (April 1990). "Faster Algorithms for the Shortest Path Problem" (PDF). Journal of the ACM. 37 (2): 213–223. doi:10.1145/77600.77615. hdl:1721.1/47994. S2CID 5499589. 
Raman, Rajeev (1997). "Recent results on the single-source shortest paths problem". SIGACT News. 28 (2): 81–87. doi:10.1145/261342.261352. S2CID 18031586. Thorup, Mikkel (2000). "On RAM priority Queues". SIAM Journal on Computing. 30 (1): 86–109. doi:10.1137/S0097539795288246. S2CID 5221089. Thorup, Mikkel (1999). "Undirected single-source shortest paths with positive integer weights in linear time". Journal of the ACM. 46 (3): 362–394. doi:10.1145/316542.316548. S2CID 207654795. == External links == Oral history interview with Edsger W. Dijkstra, Charles Babbage Institute, University of Minnesota, Minneapolis Implementation of Dijkstra's algorithm using TDD, Robert Cecil Martin, The Clean Code Blog
Wikipedia/Dijkstra's_algorithm
In statistics, response surface methodology (RSM) explores the relationships between several explanatory variables and one or more response variables. RSM is an empirical technique that employs mathematical and statistical methods to relate input variables, otherwise known as factors, to the response. RSM became widely used because the alternatives, such as purely theoretical models, can be cumbersome to use, time-consuming, inefficient, error-prone, and unreliable. The method was introduced by George E. P. Box and K. B. Wilson in 1951. The main idea of RSM is to use a sequence of designed experiments to obtain an optimal response. Box and Wilson suggest using a second-degree polynomial model to do this. They acknowledge that this model is only an approximation, but they use it because such a model is easy to estimate and apply, even when little is known about the process. Statistical approaches such as RSM can be employed to maximize the production of a substance of interest by optimizing the operational factors. More recently, RSM combined with a suitable design of experiments (DoE) has become extensively used for formulation optimization. In contrast to conventional methods, statistical techniques can determine the interactions among process variables. == Basic approach of response surface methodology == An easy way to estimate a first-degree polynomial model is to use a factorial experiment or a fractional factorial design. This is sufficient to determine which explanatory variables affect the response variable(s) of interest. Once it is suspected that only significant explanatory variables remain, a more complicated design, such as a central composite design, can be implemented to estimate a second-degree polynomial model, which is still only an approximation at best. However, the second-degree model can be used to optimize (maximize, minimize, or attain a specific target for) the response variable(s) of interest.
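As a minimal illustration of the basic approach, the sketch below fits a full second-degree polynomial model in two factors by ordinary least squares and locates the stationary point of the fitted surface. The data are synthetic, generated from a known quadratic purely for illustration:

```python
import numpy as np

# Second-degree response surface model:
#   y ≈ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
rng = np.random.default_rng(0)
x1, x2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1, x2 = x1.ravel(), x2.ravel()
# Synthetic responses: true optimum at (0.3, -0.2), small measurement noise
y = 10 - (x1 - 0.3) ** 2 - 2 * (x2 + 0.2) ** 2 + rng.normal(0, 0.01, x1.size)

# Design matrix of the full quadratic model, fit by ordinary least squares
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point of the fitted surface: solve grad = 0, i.e. 2*B @ x = -g,
# with B the symmetric matrix of second-order coefficients and g the linear part
B = np.array([[b[3], b[5] / 2], [b[5] / 2, b[4]]])
g = b[1:3]
x_s = np.linalg.solve(2 * B, -g)
print(x_s)   # close to the true optimum (0.3, -0.2)
```

In an actual RSM study the design points would come from a central composite or similar design rather than a plain grid, and the nature of the stationary point (maximum, minimum, or saddle) would be checked via the eigenvalues of B.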
== Important RSM properties and features == Orthogonality: the property that allows the individual effects of the k factors to be estimated independently without (or with minimal) confounding. Orthogonality also provides minimum-variance estimates of the model coefficients, so that they are uncorrelated. Rotatability: the property of rotating the points of the design about the center of the factor space; the moments of the distribution of the design points are constant. Uniformity: a third property, used in central composite designs (CCDs) to control the number of center points, is uniform precision (or uniformity). == Special geometries == === Cube === Cubic designs are discussed by Kiefer, by Atkinson, Donev, and Tobias and by Hardin and Sloane. === Sphere === Spherical designs are discussed by Kiefer and by Hardin and Sloane. === Simplex geometry and mixture experiments === Mixture experiments are discussed in many books on the design of experiments, and in the response-surface methodology textbooks of Box and Draper and of Atkinson, Donev and Tobias. An extensive discussion and survey appears in the advanced textbook by John Cornell. == Extensions == === Multiple objective functions === Some extensions of response surface methodology deal with the multiple response problem. Multiple response variables create difficulty because what is optimal for one response may not be optimal for other responses. Other extensions are used to reduce variability in a single response while targeting a specific value, or attaining a near maximum or minimum while preventing variability in that response from getting too large. == Practical concerns == Response surface methodology uses statistical models, and therefore practitioners need to be aware that even the best statistical model is an approximation to reality. In practice, both the models and the parameter values are unknown, and subject to uncertainty on top of ignorance.
Of course, an estimated optimum point need not be optimum in reality, because of the errors of the estimates and of the inadequacies of the model. Nonetheless, response surface methodology has an effective track record of helping researchers improve products and services: For example, Box's original response-surface modeling enabled chemical engineers to improve a process that had been stuck at a saddle-point for years. The engineers had not been able to afford to fit a cubic three-level design to estimate a quadratic model, and their biased linear models estimated the gradient to be zero. Box's design reduced the costs of experimentation so that a quadratic model could be fit, which led to a (long-sought) ascent direction. == See also == Box–Behnken design Central composite design Gradient-enhanced kriging (GEK) IOSO method based on response-surface methodology Optimal designs Plackett–Burman design Polynomial and rational function modeling Polynomial regression Probabilistic design Surrogate model Bayesian Optimization == References == Box, G.E.P.; Wilson, K.B. (1951). "On the Experimental Attainment of Optimum Conditions". Journal of the Royal Statistical Society, Series B. 13 (1): 1–45. doi:10.1111/j.2517-6161.1951.tb00067.x. Box, G. E. P. and Draper, Norman. 2007. Response Surfaces, Mixtures, and Ridge Analyses, Second Edition [of Empirical Model-Building and Response Surfaces, 1987], Wiley. Atkinson, A.C.; Donev, A.N.; Tobias, R.D. (2007). Optimum Experimental Designs, with SAS. Oxford University Press. pp. 511+xvi. ISBN 978-0-19-929660-6. Cornell, John (2002). Experiments with Mixtures: Designs, Models, and the Analysis of Mixture Data (third ed.). Wiley. ISBN 978-0-471-07916-3. Goos, Peter (2002). The Optimal Design of Blocked and Split-plot Experiments. Lecture Notes in Statistics. Vol. 164. Springer. ISBN 978-0-387-95515-5. Kiefer, Jack Carl (1985). L. D. Brown; et al. (eds.). Jack Carl Kiefer Collected Papers III Design of Experiments. Springer-Verlag.
ISBN 978-0-387-96004-3. Pukelsheim, Friedrich (2006). Optimal Design of Experiments. SIAM. ISBN 978-0-89871-604-7. Hardin, R.H.; Sloane, N.J.A. (1993). "A New Approach to the Construction of Optimal Designs" (PDF). Journal of Statistical Planning and Inference. 37 (3): 339–369. doi:10.1016/0378-3758(93)90112-J. Hardin, R.H.; Sloane, N.J.A. "Computer-Generated Minimal (and Larger) Response Surface Designs: (I) The Sphere" (PDF). Hardin, R.H.; Sloane, N.J.A. "Computer-Generated Minimal (and Larger) Response Surface Designs: (II) The Cube" (PDF). Ghosh, S.; Rao, C. R., eds. (1996). Design and Analysis of Experiments. Handbook of Statistics. Vol. 13. North-Holland. ISBN 978-0-444-82061-7. Draper, Norman; Lin, Dennis K.J. (1996). "11 Response surface designs". Response surface designs. Handbook of Statistics. Vol. 13. pp. 343–375. doi:10.1016/S0169-7161(96)13013-3. ISBN 9780444820617. Gaffke, Norbert; Heiligers, Berthold (1996). "30 Approximate designs for polynomial regression: Invariance, admissibility, and optimality". Approximate designs for polynomial regression: Invariance, admissibility, and optimality. Handbook of Statistics. Vol. 13. pp. 1149–99. doi:10.1016/S0169-7161(96)13032-7. ISBN 9780444820617. === Historical === Gergonne, J. D. (1974) [1815]. "The application of the method of least squares to the interpolation of sequences". Historia Mathematica. 1 (4) (Translated by Ralph St. John and S. M. Stigler from the 1815 French ed.): 439–447. doi:10.1016/0315-0860(74)90034-2. Stigler, Stephen M. (1974). "Gergonne's 1815 paper on the design and analysis of polynomial regression experiments". Historia Mathematica. 1 (4): 431–9. doi:10.1016/0315-0860(74)90033-0. Peirce, C. S. (1876). "Note on the Theory of the Economy of Research" (PDF). Coast Survey Report. Appendix No. 14: 197–201. Reprinted in Collected Papers of Charles Sanders Peirce. Vol. 7. 1958. paragraphs 139–157, and in Peirce, C. S. (July–August 1967). "Note on the Theory of the Economy of Research". 
Operations Research. 15 (4): 643–8. doi:10.1287/opre.15.4.643. JSTOR 168276. Smith, Kirstine (1918). "On the Standard Deviations of Adjusted and Interpolated Values of an Observed Polynomial Function and its Constants and the Guidance They Give Towards a Proper Choice of the Distribution of the Observations". Biometrika. 12 (1/2): 1–85. doi:10.2307/2331929. JSTOR 2331929. == External links == Response surface designs
Wikipedia/Response_surface_methodology
Powell's dog leg method, also called Powell's hybrid method, is an iterative optimisation algorithm for the solution of non-linear least squares problems, introduced in 1970 by Michael J. D. Powell. Similarly to the Levenberg–Marquardt algorithm, it combines the Gauss–Newton algorithm with gradient descent, but it uses an explicit trust region. At each iteration, if the step from the Gauss–Newton algorithm is within the trust region, it is used to update the current solution. If not, the algorithm searches for the minimum of the objective function along the steepest descent direction, known as the Cauchy point. If the Cauchy point is outside of the trust region, it is truncated to the trust region boundary and taken as the new solution. If the Cauchy point is inside the trust region, the new solution is taken at the intersection between the trust region boundary and the line joining the Cauchy point and the Gauss–Newton step (dog leg step). The name of the method derives from the resemblance between the construction of the dog leg step and the shape of a dogleg hole in golf. == Formulation == Given a least squares problem in the form F ( x ) = 1 2 ‖ f ( x ) ‖ 2 = 1 2 ∑ i = 1 m ( f i ( x ) ) 2 {\displaystyle F({\boldsymbol {x}})={\frac {1}{2}}\left\|{\boldsymbol {f}}({\boldsymbol {x}})\right\|^{2}={\frac {1}{2}}\sum _{i=1}^{m}\left(f_{i}({\boldsymbol {x}})\right)^{2}} with f i : R n → R {\displaystyle f_{i}:\mathbb {R} ^{n}\to \mathbb {R} } , Powell's dog leg method finds the optimal point x ∗ = argmin x ⁡ F ( x ) {\displaystyle {\boldsymbol {x}}^{*}=\operatorname {argmin} _{\boldsymbol {x}}F({\boldsymbol {x}})} by constructing a sequence x k = x k − 1 + δ k {\displaystyle {\boldsymbol {x}}_{k}={\boldsymbol {x}}_{k-1}+\delta _{k}} that converges to x ∗ {\displaystyle {\boldsymbol {x}}^{*}} .
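The step selection described above can be sketched with NumPy as follows; the Gauss–Newton step, the steepest descent direction, and the Cauchy parameter t use the formulas derived in the remainder of this section. This is a sketch of a single step only, not a full trust-region loop (no radius update or step acceptance), and it assumes J has full column rank:

```python
import numpy as np

def dogleg_step(J, f, delta):
    """One Powell dog-leg step for residual vector f and Jacobian J,
    with trust region radius delta."""
    g = J.T @ f                                    # gradient of F
    d_sd = -g                                      # steepest descent direction
    t = (d_sd @ d_sd) / np.linalg.norm(J @ d_sd) ** 2   # Cauchy point parameter
    d_gn = np.linalg.solve(J.T @ J, -J.T @ f)      # Gauss-Newton step

    if np.linalg.norm(d_gn) <= delta:
        return d_gn                                # GN step fits in the region
    if t * np.linalg.norm(d_sd) >= delta:
        # Cauchy point on or outside the boundary: truncated steepest descent
        return (delta / np.linalg.norm(d_sd)) * d_sd
    # Dog leg: intersect the segment from the Cauchy point to the GN step
    # with the trust region boundary, solving ||a + s*b||^2 = delta^2, s in [0, 1]
    a = t * d_sd
    b = d_gn - a
    aa, ab, bb = a @ a, a @ b, b @ b
    s = (-ab + np.sqrt(ab**2 + bb * (delta**2 - aa))) / bb
    return a + s * b
```

In the dog leg case the returned step lies exactly on the trust region boundary, which is what the quadratic in s enforces.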
At a given iteration, the Gauss–Newton step is given by δ g n = − ( J ⊤ J ) − 1 J ⊤ f ( x ) {\displaystyle {\boldsymbol {\delta _{gn}}}=-\left({\boldsymbol {J}}^{\top }{\boldsymbol {J}}\right)^{-1}{\boldsymbol {J}}^{\top }{\boldsymbol {f}}({\boldsymbol {x}})} where J = ( ∂ f i ∂ x j ) {\displaystyle {\boldsymbol {J}}=\left({\frac {\partial {f_{i}}}{\partial {x_{j}}}}\right)} is the Jacobian matrix, while the steepest descent direction is given by δ s d = − J ⊤ f ( x ) . {\displaystyle {\boldsymbol {\delta _{sd}}}=-{\boldsymbol {J}}^{\top }{\boldsymbol {f}}({\boldsymbol {x}}).} The objective function is linearised along the steepest descent direction F ( x + t δ s d ) ≈ 1 2 ‖ f ( x ) + t J ( x ) δ s d ‖ 2 = F ( x ) + t δ s d ⊤ J ⊤ f ( x ) + 1 2 t 2 ‖ J δ s d ‖ 2 . {\displaystyle {\begin{aligned}F({\boldsymbol {x}}+t{\boldsymbol {\delta _{sd}}})&\approx {\frac {1}{2}}\left\|{\boldsymbol {f}}({\boldsymbol {x}})+t{\boldsymbol {J}}({\boldsymbol {x}}){\boldsymbol {\delta _{sd}}}\right\|^{2}\\&=F({\boldsymbol {x}})+t{\boldsymbol {\delta _{sd}}}^{\top }{\boldsymbol {J}}^{\top }{\boldsymbol {f}}({\boldsymbol {x}})+{\frac {1}{2}}t^{2}\left\|{\boldsymbol {J}}{\boldsymbol {\delta _{sd}}}\right\|^{2}.\end{aligned}}} To compute the value of the parameter t {\displaystyle t} at the Cauchy point, the derivative of the last expression with respect to t {\displaystyle t} is imposed to be equal to zero, giving t = − δ s d ⊤ J ⊤ f ( x ) ‖ J δ s d ‖ 2 = ‖ δ s d ‖ 2 ‖ J δ s d ‖ 2 . 
{\displaystyle t=-{\frac {{\boldsymbol {\delta _{sd}}}^{\top }{\boldsymbol {J}}^{\top }{\boldsymbol {f}}({\boldsymbol {x}})}{\left\|{\boldsymbol {J}}{\boldsymbol {\delta _{sd}}}\right\|^{2}}}={\frac {\left\|{\boldsymbol {\delta _{sd}}}\right\|^{2}}{\left\|{\boldsymbol {J}}{\boldsymbol {\delta _{sd}}}\right\|^{2}}}.} Given a trust region of radius Δ {\displaystyle \Delta } , Powell's dog leg method selects the update step δ k {\displaystyle {\boldsymbol {\delta _{k}}}} as equal to: δ g n {\displaystyle {\boldsymbol {\delta _{gn}}}} , if the Gauss–Newton step is within the trust region ( ‖ δ g n ‖ ≤ Δ {\displaystyle \left\|{\boldsymbol {\delta _{gn}}}\right\|\leq \Delta } ); Δ ‖ δ s d ‖ δ s d {\displaystyle {\frac {\Delta }{\left\|{\boldsymbol {\delta _{sd}}}\right\|}}{\boldsymbol {\delta _{sd}}}} if both the Gauss–Newton and the steepest descent steps are outside the trust region ( t ‖ δ s d ‖ > Δ {\displaystyle t\left\|{\boldsymbol {\delta _{sd}}}\right\|>\Delta } ); t δ s d + s ( δ g n − t δ s d ) {\displaystyle t{\boldsymbol {\delta _{sd}}}+s\left({\boldsymbol {\delta _{gn}}}-t{\boldsymbol {\delta _{sd}}}\right)} with s {\displaystyle s} such that ‖ δ ‖ = Δ {\displaystyle \left\|{\boldsymbol {\delta }}\right\|=\Delta } , if the Gauss–Newton step is outside the trust region but the steepest descent step is inside (dog leg step). == References == == Sources == Lourakis, M.L.A.; Argyros, A.A. (2005). "Is Levenberg-Marquardt the most efficient optimization algorithm for implementing bundle adjustment?". Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1. pp. 1526–1531. doi:10.1109/ICCV.2005.128. ISBN 0-7695-2334-X. S2CID 16542484. Yuan, Ya-xiang (2000). "A review of trust region algorithms for optimization". Iciam. Vol. 99. Powell, M.J.D. (1970). "A new algorithm for unconstrained optimization". In Rosen, J.B.; Mangasarian, O.L.; Ritter, K. (eds.). Nonlinear Programming. New York: Academic Press. pp. 31–66. Powell, M.J.D. (1970). 
"A hybrid method for nonlinear equations". In Rabinowitz, P. (ed.). Numerical Methods for Nonlinear Algebraic Equations. London: Gordon and Breach Science. pp. 87–144. == External links == "Equation Solving Algorithms". MathWorks.
Wikipedia/Powell's_dog_leg_method
In numerical analysis, a quasi-Newton method is an iterative numerical method used either to find zeroes or to find local maxima and minima of functions via a recurrence formula much like the one for Newton's method, except using approximations of the derivatives of the functions in place of exact derivatives. Newton's method requires the Jacobian matrix of all partial derivatives of a multivariate function when used to search for zeros or the Hessian matrix when used for finding extrema. Quasi-Newton methods, on the other hand, can be used when the Jacobian matrices or Hessian matrices are unavailable or are impractical to compute at every iteration. Some iterative methods that reduce to Newton's method, such as sequential quadratic programming, may also be considered quasi-Newton methods. == Search for zeros: root finding == Newton's method to find zeroes of a function g {\displaystyle g} of multiple variables is given by x n + 1 = x n − [ J g ( x n ) ] − 1 g ( x n ) {\displaystyle x_{n+1}=x_{n}-[J_{g}(x_{n})]^{-1}g(x_{n})} , where [ J g ( x n ) ] − 1 {\displaystyle [J_{g}(x_{n})]^{-1}} is the left inverse of the Jacobian matrix J g ( x n ) {\displaystyle J_{g}(x_{n})} of g {\displaystyle g} evaluated at x n {\displaystyle x_{n}} . Strictly speaking, any method that replaces the exact Jacobian J g ( x n ) {\displaystyle J_{g}(x_{n})} with an approximation is a quasi-Newton method. For instance, the chord method (where J g ( x n ) {\displaystyle J_{g}(x_{n})} is replaced by J g ( x 0 ) {\displaystyle J_{g}(x_{0})} for all iterations) is a simple example. The methods given below for optimization refer to an important subclass of quasi-Newton methods, secant methods. Using methods developed to find extrema in order to find zeroes is not always a good idea, as the majority of the methods used to find extrema require the matrix that is used to be symmetric.
While this holds in the context of the search for extrema, it rarely holds when searching for zeroes. Broyden's "good" and "bad" methods are two quasi-Newton methods commonly applied to find zeroes. Other methods that can be used are the column-updating method, the inverse column-updating method, the quasi-Newton least squares method and the quasi-Newton inverse least squares method. More recently, quasi-Newton methods have been applied to find the solution of multiple coupled systems of equations (e.g. fluid–structure interaction problems or interaction problems in physics). They allow the solution to be found by solving each constituent system separately (which is simpler than the global system) in a cyclic, iterative fashion until the solution of the global system is found. == Search for extrema: optimization == The search for a minimum or maximum of a scalar-valued function is closely related to the search for the zeroes of the gradient of that function. Therefore, quasi-Newton methods can be readily applied to find extrema of a function. In other words, if g {\displaystyle g} is the gradient of f {\displaystyle f} , then searching for the zeroes of the vector-valued function g {\displaystyle g} corresponds to the search for the extrema of the scalar-valued function f {\displaystyle f} ; the Jacobian of g {\displaystyle g} now becomes the Hessian of f {\displaystyle f} . The main difference is that the Hessian matrix is a symmetric matrix, unlike the Jacobian when searching for zeroes. Most quasi-Newton methods used in optimization exploit this symmetry. In optimization, quasi-Newton methods (a special case of variable-metric methods) are algorithms for finding local maxima and minima of functions. Quasi-Newton methods for optimization are based on Newton's method to find the stationary points of a function, points where the gradient is 0.
Newton's method assumes that the function can be locally approximated as a quadratic in the region around the optimum, and uses the first and second derivatives to find the stationary point. In higher dimensions, Newton's method uses the gradient and the Hessian matrix of second derivatives of the function to be minimized. In quasi-Newton methods the Hessian matrix does not need to be computed. The Hessian is updated by analyzing successive gradient vectors instead. Quasi-Newton methods are a generalization of the secant method to find the root of the first derivative for multidimensional problems. In multiple dimensions the secant equation is under-determined, and quasi-Newton methods differ in how they constrain the solution, typically by adding a simple low-rank update to the current estimate of the Hessian. The first quasi-Newton algorithm was proposed by William C. Davidon, a physicist working at Argonne National Laboratory. He developed it in 1959: the DFP updating formula, which was later popularized by Fletcher and Powell in 1963, but is rarely used today. The most common quasi-Newton algorithms are currently the SR1 formula (for "symmetric rank-one"), the BHHH method, the widespread BFGS method (suggested independently by Broyden, Fletcher, Goldfarb, and Shanno, in 1970), and its low-memory extension L-BFGS. The Broyden class is a linear combination of the DFP and BFGS methods. The SR1 formula does not guarantee the update matrix to maintain positive-definiteness and can be used for indefinite problems. Broyden's method does not require the update matrix to be symmetric and is used to find the root of a general system of equations (rather than the gradient) by updating the Jacobian (rather than the Hessian). One of the chief advantages of quasi-Newton methods over Newton's method is that the Hessian matrix (or, in the case of quasi-Newton methods, its approximation) B {\displaystyle B} does not need to be inverted.
Newton's method, and its derivatives such as interior point methods, require the Hessian to be inverted, which is typically implemented by solving a system of linear equations and is often quite costly. In contrast, quasi-Newton methods usually generate an estimate of B − 1 {\displaystyle B^{-1}} directly. As in Newton's method, one uses a second-order approximation to find the minimum of a function f ( x ) {\displaystyle f(x)} . The Taylor series of f ( x ) {\displaystyle f(x)} around an iterate is f ( x k + Δ x ) ≈ f ( x k ) + ∇ f ( x k ) T Δ x + 1 2 Δ x T B Δ x , {\displaystyle f(x_{k}+\Delta x)\approx f(x_{k})+\nabla f(x_{k})^{\mathrm {T} }\,\Delta x+{\frac {1}{2}}\Delta x^{\mathrm {T} }B\,\Delta x,} where ( ∇ f {\displaystyle \nabla f} ) is the gradient, and B {\displaystyle B} an approximation to the Hessian matrix. The gradient of this approximation (with respect to Δ x {\displaystyle \Delta x} ) is ∇ f ( x k + Δ x ) ≈ ∇ f ( x k ) + B Δ x , {\displaystyle \nabla f(x_{k}+\Delta x)\approx \nabla f(x_{k})+B\,\Delta x,} and setting this gradient to zero (which is the goal of optimization) provides the Newton step: Δ x = − B − 1 ∇ f ( x k ) . {\displaystyle \Delta x=-B^{-1}\nabla f(x_{k}).} The Hessian approximation B {\displaystyle B} is chosen to satisfy ∇ f ( x k + Δ x ) = ∇ f ( x k ) + B Δ x , {\displaystyle \nabla f(x_{k}+\Delta x)=\nabla f(x_{k})+B\,\Delta x,} which is called the secant equation (the Taylor series of the gradient itself). In more than one dimension B {\displaystyle B} is underdetermined. In one dimension, solving for B {\displaystyle B} and applying the Newton's step with the updated value is equivalent to the secant method. The various quasi-Newton methods differ in their choice of the solution to the secant equation (in one dimension, all the variants are equivalent). 
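The one-dimensional equivalence noted above can be made concrete (an illustrative sketch, not from the article; the example function is an arbitrary choice): in one dimension the secant equation determines the scalar B uniquely as a difference quotient of the derivative, and the Newton step with that B is exactly a secant-method step applied to f'.

```python
def secant_min(df, x0, x1, tol=1e-12, max_iter=100):
    """1-D quasi-Newton: the secant equation fixes the scalar B as the
    slope of df between successive iterates; the Newton step with that
    B reproduces the secant method applied to the derivative df."""
    for _ in range(max_iter):
        B = (df(x1) - df(x0)) / (x1 - x0)   # secant equation in one dimension
        x0, x1 = x1, x1 - df(x1) / B        # Newton step with approximate B
        if abs(x1 - x0) < tol:
            break
    return x1

# f(x) = (x - 3)^2 + 1 has df(x) = 2*(x - 3) and a minimum at x = 3.
xmin = secant_min(lambda x: 2.0 * (x - 3.0), 0.0, 1.0)
```

For a quadratic f the difference quotient recovers the exact second derivative, so the iteration lands on the stationary point in a single step.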
Most methods (but with exceptions, such as Broyden's method) seek a symmetric solution ( B T = B {\displaystyle B^{T}=B} ); furthermore, the variants listed below can be motivated by finding an update B k + 1 {\displaystyle B_{k+1}} that is as close as possible to B k {\displaystyle B_{k}} in some norm; that is, B k + 1 = argmin B ⁡ ‖ B − B k ‖ V {\displaystyle B_{k+1}=\operatorname {argmin} _{B}\|B-B_{k}\|_{V}} , where V {\displaystyle V} is some positive-definite matrix that defines the norm. An approximate initial value B 0 = β I {\displaystyle B_{0}=\beta I} is often sufficient to achieve rapid convergence, although there is no general strategy to choose β {\displaystyle \beta } . Note that B 0 {\displaystyle B_{0}} should be positive-definite. The unknown x k {\displaystyle x_{k}} is updated by applying the Newton step calculated using the current approximate Hessian matrix B k {\displaystyle B_{k}} : Δ x k = − α k B k − 1 ∇ f ( x k ) {\displaystyle \Delta x_{k}=-\alpha _{k}B_{k}^{-1}\nabla f(x_{k})} , with α {\displaystyle \alpha } chosen to satisfy the Wolfe conditions; x k + 1 = x k + Δ x k {\displaystyle x_{k+1}=x_{k}+\Delta x_{k}} . The gradient is computed at the new point ∇ f ( x k + 1 ) {\displaystyle \nabla f(x_{k+1})} , and y k = ∇ f ( x k + 1 ) − ∇ f ( x k ) {\displaystyle y_{k}=\nabla f(x_{k+1})-\nabla f(x_{k})} is used to update the approximate Hessian B k + 1 {\displaystyle B_{k+1}} , or directly its inverse H k + 1 = B k + 1 − 1 {\displaystyle H_{k+1}=B_{k+1}^{-1}} using the Sherman–Morrison formula. A key property of the BFGS and DFP updates is that if B k {\displaystyle B_{k}} is positive-definite, and α k {\displaystyle \alpha _{k}} is chosen to satisfy the Wolfe conditions, then B k + 1 {\displaystyle B_{k+1}} is also positive-definite. The most popular update formulas are the BFGS, DFP, and SR1 formulas discussed above; other methods are Pearson's method, McCormick's method, the Powell symmetric Broyden (PSB) method and Greenstadt's method.
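The iteration just described can be sketched in a few dozen lines. This is an illustration rather than a library-quality implementation (not from the article): a simple backtracking (Armijo) search stands in for a full Wolfe line search, the inverse H_k ≈ B_k^{-1} is maintained directly via the BFGS inverse update, and the update is skipped unless the curvature condition s_k^T y_k > 0 holds, which preserves positive-definiteness. The test function and tolerances are arbitrary choices.

```python
import numpy as np

def bfgs_minimize(f, grad, x0, tol=1e-8, max_iter=100):
    """Minimal BFGS sketch maintaining H_k, the inverse Hessian estimate."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                   # H_0 = I, i.e. beta = 1
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                  # quasi-Newton search direction
        alpha = 1.0                 # backtracking (Armijo) step choice
        while alpha > 1e-12 and f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
        s = alpha * p
        g_new = grad(x + s)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:              # curvature condition; keeps H positive-definite
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x + s, g_new
    return x

# Convex quadratic test problem: minimum at (1, 2).
f = lambda x: (x[0] - 1.0)**2 + 10.0 * (x[1] - 2.0)**2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] - 2.0)])
xopt = bfgs_minimize(f, grad, [0.0, 0.0])
```

Note that only gradient evaluations drive the Hessian estimate: no second derivatives are ever computed, and no linear system is solved, since H_k is updated directly.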
These recursive low-rank matrix updates can also be represented as an initial matrix plus a low-rank correction. This is the compact quasi-Newton representation, which is particularly effective for constrained and/or large problems. == Relationship to matrix inversion == When f {\displaystyle f} is a convex quadratic function with positive-definite Hessian B {\displaystyle B} , one would expect the matrices H k {\displaystyle H_{k}} generated by a quasi-Newton method to converge to the inverse Hessian H = B − 1 {\displaystyle H=B^{-1}} . This is indeed the case for the class of quasi-Newton methods based on least-change updates. == Notable implementations == Implementations of quasi-Newton methods are available in many programming languages. Notable open source implementations include: GNU Octave uses a form of BFGS in its fsolve function, with trust region extensions. GNU Scientific Library implements the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. ALGLIB implements (L)BFGS in C++ and C#. R's general-purpose optimizer routine optim uses the BFGS method via method="BFGS". In the SciPy extension to Python, the scipy.optimize.minimize function includes, among other methods, a BFGS implementation; the legacy scipy.optimize.fmin_bfgs routine is also available. Notable proprietary implementations include: Mathematica includes quasi-Newton solvers. The NAG Library contains several routines for minimizing or maximizing a function which use quasi-Newton algorithms. In MATLAB's Optimization Toolbox, the fminunc function uses (among other methods) the BFGS quasi-Newton method. Many of the constrained methods of the Optimization toolbox use BFGS and the variant L-BFGS. == See also == == References == == Further reading == Bonnans, J. F.; Gilbert, J. Ch.; Lemaréchal, C.; Sagastizábal, C. A. (2006). Numerical Optimization : Theoretical and Numerical Aspects (Second ed.). Springer. ISBN 3-540-35445-X.
Fletcher, Roger (1987), Practical methods of optimization (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-91547-8. Nocedal, Jorge; Wright, Stephen J. (1999). "Quasi-Newton Methods". Numerical Optimization. New York: Springer. pp. 192–221. ISBN 0-387-98793-2. Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Section 10.9. Quasi-Newton or Variable Metric Methods in Multidimensions". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Scales, L. E. (1985). Introduction to Non-Linear Optimization. New York: MacMillan. pp. 84–106. ISBN 0-333-32552-4.
Wikipedia/Quasi-Newton_method
In constrained optimization, a field of mathematics, a barrier function is a continuous function whose value increases to infinity as its argument approaches the boundary of the feasible region of an optimization problem. Such functions are used to replace inequality constraints by a penalizing term in the objective function that is easier to handle. A barrier function is also called an interior penalty function, as it is a penalty function that forces the solution to remain within the interior of the feasible region. The two most common types of barrier functions are inverse barrier functions and logarithmic barrier functions. Resumption of interest in logarithmic barrier functions was motivated by their connection with primal-dual interior point methods. == Motivation == Consider the following constrained optimization problem: minimize f(x) subject to x ≤ b where b is some constant. If one wishes to remove the inequality constraint, the problem can be reformulated as minimize f(x) + c(x), where c(x) = ∞ if x > b, and zero otherwise. This problem is equivalent to the first. It gets rid of the inequality, but introduces the issue that the penalty function c, and therefore the objective function f(x) + c(x), is discontinuous, preventing the use of calculus to solve it. A barrier function, now, is a continuous approximation g to c that tends to infinity as x approaches b from below. Using such a function, a new optimization problem is formulated, viz. minimize f(x) + μ g(x) where μ > 0 is a free parameter. This problem is not equivalent to the original, but as μ approaches zero, it becomes an ever-better approximation. == Logarithmic barrier function == For logarithmic barrier functions, g ( x , b ) {\displaystyle g(x,b)} is defined as − log ⁡ ( b − x ) {\displaystyle -\log(b-x)} when x < b {\displaystyle x<b} and ∞ {\displaystyle \infty } otherwise (in one dimension; see below for a definition in higher dimensions).
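As a toy illustration (not from the article; the objective, grid search, and μ schedule are arbitrary choices), the one-dimensional logarithmic barrier approximates a constrained optimum: for minimize −x subject to x ≤ 2, the barrier subproblem min −x + μ·(−log(2 − x)) has the analytic minimizer x = 2 − μ, which approaches the constrained optimum x = 2 as μ shrinks.

```python
import numpy as np

def log_barrier(A, b, x):
    """Logarithmic barrier g(x) for the constraints Ax < b;
    returns +inf outside the strict interior."""
    slack = b - A @ x
    if np.any(slack <= 0.0):
        return np.inf
    return -np.sum(np.log(slack))

# Toy problem: minimize f(x) = -x subject to x <= 2.  Setting the
# derivative of -x - mu*log(2 - x) to zero gives the minimizer 2 - mu.
A = np.array([[1.0]])
b = np.array([2.0])
for mu in (1.0, 0.1, 0.01):
    xs = np.linspace(0.0, 1.9999, 20001)
    vals = -xs + mu * np.array([log_barrier(A, b, np.array([x])) for x in xs])
    x_star = xs[np.argmin(vals)]        # close to 2 - mu for each mu
```

The brute-force grid search stands in for the Newton solves an interior-point method would use on each subproblem; the point is only that the barrier minimizer tracks 2 − μ toward the boundary.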
This essentially relies on the fact that log ⁡ t {\displaystyle \log t} tends to negative infinity as t {\displaystyle t} tends to 0. This introduces a gradient to the function being optimized which favors less extreme values of x {\displaystyle x} (in this case values lower than b {\displaystyle b} ), while having relatively low impact on the function away from these extremes. Logarithmic barrier functions may be favored over less computationally expensive inverse barrier functions depending on the function being optimized. === Higher dimensions === Extending to higher dimensions is simple, provided each dimension is independent. For each variable x i {\displaystyle x_{i}} which should be limited to be strictly lower than b i {\displaystyle b_{i}} , add − log ⁡ ( b i − x i ) {\displaystyle -\log(b_{i}-x_{i})} . === Formal definition === Minimize c T x {\displaystyle \mathbf {c} ^{T}x} subject to a i T x ≤ b i , i = 1 , … , m {\displaystyle \mathbf {a} _{i}^{T}x\leq b_{i},i=1,\ldots ,m} Assume strictly feasible: { x | A x < b } ≠ ∅ {\displaystyle \{\mathbf {x} |Ax<b\}\neq \emptyset } Define logarithmic barrier g ( x ) = { ∑ i = 1 m − log ⁡ ( b i − a i T x ) for A x < b + ∞ otherwise {\displaystyle g(x)={\begin{cases}\sum _{i=1}^{m}-\log(b_{i}-a_{i}^{T}x)&{\text{for }}Ax<b\\+\infty &{\text{otherwise}}\end{cases}}} == See also == Penalty method Augmented Lagrangian method == References == == External links == Lecture 14: Barrier method from Professor Lieven Vandenberghe of UCLA
Wikipedia/Barrier_function
Karmarkar's algorithm is an algorithm introduced by Narendra Karmarkar in 1984 for solving linear programming problems. It was the first reasonably efficient algorithm that solves these problems in polynomial time. The ellipsoid method is also polynomial time but proved to be inefficient in practice. Denoting by n {\displaystyle n} the number of variables, m the number of inequality constraints, and L {\displaystyle L} the number of bits of input to the algorithm, Karmarkar's algorithm requires O ( m 1.5 n 2 L ) {\displaystyle O(m^{1.5}n^{2}L)} operations on O ( L ) {\displaystyle O(L)} -digit numbers, as compared to O ( n 3 ( n + m ) L ) {\displaystyle O(n^{3}(n+m)L)} such operations for the ellipsoid algorithm. In "square" problems, when m is in O(n), Karmarkar's algorithm requires O ( n 3.5 L ) {\displaystyle O(n^{3.5}L)} operations on O ( L ) {\displaystyle O(L)} -digit numbers, as compared to O ( n 4 L ) {\displaystyle O(n^{4}L)} such operations for the ellipsoid algorithm. The runtime of Karmarkar's algorithm is thus O ( n 3.5 L 2 ⋅ log ⁡ L ⋅ log ⁡ log ⁡ L ) , {\displaystyle O(n^{3.5}L^{2}\cdot \log L\cdot \log \log L),} using FFT-based multiplication (see Big O notation). Karmarkar's algorithm falls within the class of interior-point methods: the current guess for the solution does not follow the boundary of the feasible set as in the simplex method, but moves through the interior of the feasible region, improving the approximation of the optimal solution by a definite fraction with every iteration and converging to an optimal solution with rational data. == The algorithm == Consider a linear programming problem in matrix form: Karmarkar's algorithm determines the next feasible direction toward optimality and scales back by a factor 0 < γ ≤ 1. It is described in a number of sources. Karmarkar also has extended the method to solve problems with integer constraints and non-convex problems. 
== Example == Consider the linear program maximize x 1 + x 2 subject to 2 p x 1 + x 2 ≤ p 2 + 1 , p = 0.0 , 0.1 , 0.2 , … , 0.9 , 1.0. {\displaystyle {\begin{array}{lrclr}{\text{maximize}}&x_{1}+x_{2}\\{\text{subject to}}&2px_{1}+x_{2}&\leq &p^{2}+1,&p=0.0,0.1,0.2,\ldots ,0.9,1.0.\end{array}}} That is, there are 2 variables x 1 , x 2 {\displaystyle x_{1},x_{2}} and 11 constraints associated with varying values of p {\displaystyle p} . This figure shows each iteration of the algorithm as red circle points. The constraints are shown as blue lines. == Patent controversy == At the time he invented the algorithm, Karmarkar was employed by IBM as a postdoctoral fellow in the IBM San Jose Research Laboratory in California. On August 11, 1983 he gave a seminar at Stanford University explaining the algorithm, with his affiliation still listed as IBM. By the fall of 1983 Karmarkar started to work at AT&T and submitted his paper to the 1984 ACM Symposium on Theory of Computing (STOC, held April 30 - May 2, 1984) stating AT&T Bell Laboratories as his affiliation. After applying the algorithm to optimizing AT&T's telephone network, they realized that his invention could be of practical importance. In April 1985, AT&T promptly applied for a patent on his algorithm. The patent became more fuel for the ongoing controversy over the issue of software patents. This left many mathematicians uneasy, such as Ronald Rivest (himself one of the holders of the patent on the RSA algorithm), who expressed the opinion that research proceeded on the basis that algorithms should be free. Even before the patent was actually granted, it was argued that there might have been prior art that was applicable. Mathematicians who specialized in numerical analysis, including Philip Gill and others, claimed that Karmarkar's algorithm is equivalent to a projected Newton barrier method with a logarithmic barrier function, if the parameters are chosen suitably. 
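Karmarkar's projective algorithm is involved; as an illustration of the interior-point idea only (this sketch uses the simpler, closely related affine-scaling iteration, not Karmarkar's projective method or the patented procedure), the example linear program above can be solved approximately as follows. The step-back factor γ = 0.5, the iteration cap, and the stopping threshold are arbitrary choices.

```python
import numpy as np

# The example LP: maximize x1 + x2 subject to 2*p*x1 + x2 <= p^2 + 1
# for p = 0.0, 0.1, ..., 1.0 (11 constraints); the optimal value is 1.25,
# attained on the constraint for p = 0.5 (whose normal equals the objective).
ps = np.linspace(0.0, 1.0, 11)
A = np.column_stack([2.0 * ps, np.ones_like(ps)])
b = ps**2 + 1.0
c = np.array([1.0, 1.0])

x = np.zeros(2)          # strictly feasible start: all slacks b - A@x positive
gamma = 0.5              # step-back factor, 0 < gamma <= 1
for _ in range(200):
    s = b - A @ x                        # positive slacks
    if np.min(s) < 1e-9:                 # close enough to the boundary; stop
        break
    M = A.T @ (A / s[:, None]**2)        # A^T diag(s)^-2 A
    dx = np.linalg.solve(M, c)           # slack-scaled ascent direction
    As = A @ dx
    if np.all(As <= 0):
        break                            # would indicate an unbounded direction
    alpha = gamma * np.min(s[As > 0] / As[As > 0])
    x = x + alpha * dx

obj = c @ x              # approaches the optimal value 1.25 from below
```

Each iterate stays strictly inside the feasible polytope, as the step rule leaves every slack positive, and the objective increases toward 1.25 rather than moving vertex to vertex as the simplex method would.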
Legal scholar Andrew Chin opines that Gill's argument was flawed, insofar as the method they describe does not constitute an "algorithm", since it requires choices of parameters that don't follow from the internal logic of the method, but rely on external guidance, essentially from Karmarkar's algorithm. Furthermore, Karmarkar's contributions are considered far from obvious in light of all prior work, including Fiacco-McCormick, Gill and others cited by Saltzman. The patent, U.S. patent 4,744,028 ("Methods and apparatus for efficient resource allocation"), was granted in May 1988 in recognition of the essential originality of Karmarkar's work. AT&T designed a vector multi-processor computer system specifically to run Karmarkar's algorithm, calling the resulting combination of hardware and software KORBX, and marketed this system at a price of US$8.9 million. Its first customer was the Pentagon. Opponents of software patents have further argued that the patents ruined the positive interaction cycles that previously characterized the relationship between researchers in linear programming and industry, and specifically that it isolated Karmarkar himself from the network of mathematical researchers in his field. The patent itself expired in April 2006, and the algorithm is presently in the public domain. The United States Supreme Court has held that mathematics cannot be patented in Gottschalk v. Benson. In that case, the Court first addressed whether computer algorithms could be patented, and it held that they could not because the patent system does not protect ideas and similar abstractions. In Diamond v. Diehr, the Supreme Court stated, "A mathematical formula as such is not accorded the protection of our patent laws, and this principle cannot be circumvented by attempting to limit the use of the formula to a particular technological environment." In Mayo Collaborative Services v.
Prometheus Labs., Inc., the Supreme Court explained further that "simply implementing a mathematical principle on a physical machine, namely a computer, [i]s not a patentable application of that principle." == Applications == Karmarkar's algorithm was used by the US Army for logistic planning during the Gulf War. == References == Adler, Ilan; Karmarkar, Narendra; Resende, Mauricio G.C.; Veiga, Geraldo (1989). "An Implementation of Karmarkar's Algorithm for Linear Programming". Mathematical Programming. 44 (1–3): 297–335. doi:10.1007/bf01587095. S2CID 12851754. Narendra Karmarkar (1984). "A New Polynomial Time Algorithm for Linear Programming", Combinatorica, Vol 4, nr. 4, p. 373–395.
Wikipedia/Karmarkar's_algorithm
In the design of experiments, optimal experimental designs (or optimum designs) are a class of experimental designs that are optimal with respect to some statistical criterion. The creation of this field of statistics has been credited to Danish statistician Kirstine Smith. In the design of experiments for estimating statistical models, optimal designs allow parameters to be estimated without bias and with minimum variance. A non-optimal design requires a greater number of experimental runs to estimate the parameters with the same precision as an optimal design. In practical terms, optimal experiments can reduce the costs of experimentation. The optimality of a design depends on the statistical model and is assessed with respect to a statistical criterion, which is related to the variance-matrix of the estimator. Specifying an appropriate model and specifying a suitable criterion function both require understanding of statistical theory and practical knowledge of designing experiments. == Advantages == Optimal designs offer three advantages over sub-optimal experimental designs:
Optimal designs reduce the costs of experimentation by allowing statistical models to be estimated with fewer experimental runs.
Optimal designs can accommodate multiple types of factors, such as process, mixture, and discrete factors.
Designs can be optimized when the design-space is constrained, for example, when the mathematical process-space contains factor-settings that are practically infeasible (e.g. due to safety concerns).
== Minimizing the variance of estimators == Experimental designs are evaluated using statistical criteria. It is known that the least squares estimator minimizes the variance of mean-unbiased estimators (under the conditions of the Gauss–Markov theorem). In the estimation theory for statistical models with one real parameter, the reciprocal of the variance of an ("efficient") estimator is called the "Fisher information" for that estimator.
Because of this reciprocity, minimizing the variance corresponds to maximizing the information. When the statistical model has several parameters, however, the mean of the parameter-estimator is a vector and its variance is a matrix. The inverse matrix of the variance-matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated. Using statistical theory, statisticians compress the information-matrix using real-valued summary statistics; being real-valued functions, these "information criteria" can be maximized. The traditional optimality-criteria are invariants of the information matrix; algebraically, the traditional optimality-criteria are functionals of the eigenvalues of the information matrix.
A-optimality ("average" or trace): One criterion is A-optimality, which seeks to minimize the trace of the inverse of the information matrix. This criterion results in minimizing the average variance of the estimates of the regression coefficients.
C-optimality: This criterion minimizes the variance of a best linear unbiased estimator of a predetermined linear combination of model parameters.
D-optimality (determinant): A popular criterion is D-optimality, which seeks to minimize |(X'X)−1|, or equivalently maximize the determinant of the information matrix X'X of the design. This criterion results in maximizing the differential Shannon information content of the parameter estimates.
E-optimality (eigenvalue): Another design is E-optimality, which maximizes the minimum eigenvalue of the information matrix.
S-optimality: This criterion maximizes a quantity measuring the mutual column orthogonality of X and the determinant of the information matrix.
T-optimality: This criterion maximizes the discrepancy between two proposed models at the design locations.
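The matrix-based criteria above can be computed directly for a small design. The sketch below is an illustration (not from the article; the quadratic model and the two candidate designs are arbitrary choices): it compares a naive equally spaced 9-run design on [−1, 1] with the classical design that places three runs at each of −1, 0, +1.

```python
import numpy as np

def criteria(points):
    """A-, D-, and E-criteria for a quadratic model
    f(x) = b0 + b1*x + b2*x^2 fitted at the given design points."""
    X = np.column_stack([np.ones_like(points), points, points**2])
    M = X.T @ X                              # information matrix X'X
    return {
        "A": np.trace(np.linalg.inv(M)),     # A-optimal designs minimize this
        "D": np.linalg.det(M),               # D-optimal designs maximize this
        "E": np.min(np.linalg.eigvalsh(M)),  # E-optimal designs maximize this
    }

# Equally spaced 9-run design versus three runs at each of -1, 0, +1.
naive = criteria(np.linspace(-1.0, 1.0, 9))
classic = criteria(np.repeat([-1.0, 0.0, 1.0], 3))
```

For this model the three-point design is better on all three criteria simultaneously (smaller A, larger D, larger E), despite using the same number of runs.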
Other optimality-criteria are concerned with the variance of predictions:
G-optimality: A popular criterion is G-optimality, which seeks to minimize the maximum entry in the diagonal of the hat matrix X(X'X)−1X'. This has the effect of minimizing the maximum variance of the predicted values.
I-optimality (integrated): A second criterion on prediction variance is I-optimality, which seeks to minimize the average prediction variance over the design space.
V-optimality (variance): A third criterion on prediction variance is V-optimality, which seeks to minimize the average prediction variance over a set of m specific points.
=== Contrasts === In many applications, the statistician is most concerned with a "parameter of interest" rather than with "nuisance parameters". More generally, statisticians consider linear combinations of parameters, which are estimated via linear combinations of treatment-means in the design of experiments and in the analysis of variance; such linear combinations are called contrasts. Statisticians can use appropriate optimality-criteria for such parameters of interest and for contrasts. == Implementation == Catalogs of optimal designs occur in books and in software libraries. In addition, major statistical systems like SAS and R have procedures for optimizing a design according to a user's specification. The experimenter must specify a model for the design and an optimality-criterion before the method can compute an optimal design. == Practical considerations == Some advanced topics in optimal design require more statistical theory and practical knowledge in designing experiments. === Model dependence and robustness === Since the optimality criterion of most optimal designs is based on some function of the information matrix, the 'optimality' of a given design is model dependent: While an optimal design is best for that model, its performance may deteriorate on other models.
On other models, an optimal design can be either better or worse than a non-optimal design. Therefore, it is important to benchmark the performance of designs under alternative models. === Choosing an optimality criterion and robustness === The choice of an appropriate optimality criterion requires some thought, and it is useful to benchmark the performance of designs with respect to several optimality criteria. Cornell writes that since the [traditional optimality] criteria . . . are variance-minimizing criteria, . . . a design that is optimal for a given model using one of the . . . criteria is usually near-optimal for the same model with respect to the other criteria. Indeed, there are several classes of designs for which all the traditional optimality-criteria agree, according to the theory of "universal optimality" of Kiefer. The experience of practitioners like Cornell and the "universal optimality" theory of Kiefer suggest that robustness with respect to changes in the optimality-criterion is much greater than is robustness with respect to changes in the model. ==== Flexible optimality criteria and convex analysis ==== High-quality statistical software provide a combination of libraries of optimal designs or iterative methods for constructing approximately optimal designs, depending on the model specified and the optimality criterion. Users may use a standard optimality-criterion or may program a custom-made criterion. All of the traditional optimality-criteria are convex (or concave) functions, and therefore optimal-designs are amenable to the mathematical theory of convex analysis and their computation can use specialized methods of convex minimization. The practitioner need not select exactly one traditional, optimality-criterion, but can specify a custom criterion. 
In particular, the practitioner can specify a convex criterion using the maxima of convex optimality-criteria and nonnegative combinations of optimality criteria (since these operations preserve convex functions). For convex optimality criteria, the Kiefer-Wolfowitz equivalence theorem allows the practitioner to verify that a given design is globally optimal. The Kiefer-Wolfowitz equivalence theorem is related to the Legendre-Fenchel conjugacy for convex functions. If an optimality-criterion lacks convexity, then finding a global optimum and verifying its optimality often are difficult. === Model uncertainty and Bayesian approaches === ==== Model selection ==== When scientists wish to test several theories, a statistician can design an experiment that allows optimal tests between specified models. Such "discrimination experiments" are especially important in the biostatistics supporting pharmacokinetics and pharmacodynamics, following the work of Cox and Atkinson. ==== Bayesian experimental design ==== When practitioners need to consider multiple models, they can specify a probability-measure on the models and then select any design maximizing the expected value of such an experiment. Such probability-based optimal-designs are called optimal Bayesian designs. Such Bayesian designs are used especially for generalized linear models (where the response follows an exponential-family distribution). The use of a Bayesian design does not force statisticians to use Bayesian methods to analyze the data, however. Indeed, the "Bayesian" label for probability-based experimental-designs is disliked by some researchers. Alternative terminology for "Bayesian" optimality includes "on-average" optimality or "population" optimality. == Iterative experimentation == Scientific experimentation is an iterative process, and statisticians have developed several approaches to the optimal design of sequential experiments.
=== Sequential analysis === Sequential analysis was pioneered by Abraham Wald. In 1972, Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs were surveyed later by S. Zacks. Of course, much work on the optimal design of experiments is related to the theory of optimal decisions, especially the statistical decision theory of Abraham Wald. === Response-surface methodology === Optimal designs for response-surface models are discussed in the textbook by Atkinson, Donev and Tobias, and in the survey of Gaffke and Heiligers and in the mathematical text of Pukelsheim. The blocking of optimal designs is discussed in the textbook of Atkinson, Donev and Tobias and also in the monograph by Goos. The earliest optimal designs were developed to estimate the parameters of regression models with continuous variables, for example, by J. D. Gergonne in 1815 (Stigler). In English, two early contributions were made by Charles S. Peirce and Kirstine Smith. Pioneering designs for multivariate response-surfaces were proposed by George E. P. Box. However, Box's designs have few optimality properties. Indeed, the Box–Behnken design requires excessive experimental runs when the number of variables exceeds three. Box's "central-composite" designs require more experimental runs than do the optimal designs of Kôno. === System identification and stochastic approximation === The optimization of sequential experimentation is studied also in stochastic programming and in systems and control. Popular methods include stochastic approximation and other methods of stochastic optimization. Much of this research has been associated with the subdiscipline of system identification. In computational optimal control, D. Judin & A. Nemirovskii and Boris Polyak have described methods that are more efficient than the (Armijo-style) step-size rules introduced by G. E. P. Box in response-surface methodology.
Adaptive designs are used in clinical trials, and optimal adaptive designs are surveyed in the Handbook of Experimental Designs chapter by Shelemyahu Zacks. == Specifying the number of experimental runs == === Using a computer to find a good design === There are several methods of finding an optimal design, given an a priori restriction on the number of experimental runs or replications. Some of these methods are discussed by Atkinson, Donev and Tobias and in the paper by Hardin and Sloane. Of course, fixing the number of experimental runs a priori would be impractical. Prudent statisticians examine the other optimal designs, whose number of experimental runs differ. === Discretizing probability-measure designs === In the mathematical theory on optimal experiments, an optimal design can be a probability measure that is supported on an infinite set of observation-locations. Such optimal probability-measure designs solve a mathematical problem that neglects to specify the cost of observations and experimental runs. Nonetheless, such optimal probability-measure designs can be discretized to furnish approximately optimal designs. In some cases, a finite set of observation-locations suffices to support an optimal design. Such a result was proved by Kôno and Kiefer in their works on response-surface designs for quadratic models. The Kôno–Kiefer analysis explains why optimal designs for response-surfaces can have discrete supports, which are very similar to those of the less efficient designs that have been traditional in response surface methodology. == History == In 1815, an article on optimal designs for polynomial regression was published by Joseph Diaz Gergonne, according to Stigler. Charles S. Peirce proposed an economic theory of scientific experimentation in 1876, which sought to maximize the precision of the estimates. Peirce's optimal allocation immediately improved the accuracy of gravitational experiments and was used for decades by Peirce and his colleagues.
In his 1882 published lecture at Johns Hopkins University, Peirce introduced experimental design with these words: Logic will not undertake to inform you what kind of experiments you ought to make in order best to determine the acceleration of gravity, or the value of the Ohm; but it will tell you how to proceed to form a plan of experimentation. [....] Unfortunately practice generally precedes theory, and it is the usual fate of mankind to get things done in some boggling way first, and find out afterward how they could have been done much more easily and perfectly. Kirstine Smith proposed optimal designs for polynomial models in 1918. (Kirstine Smith had been a student of the Danish statistician Thorvald N. Thiele and was working with Karl Pearson in London.) == See also == == Notes == == References == Atkinson, A. C.; Donev, A. N.; Tobias, R. D. (2007). Optimum experimental designs, with SAS. Oxford University Press. pp. 511+xvi. ISBN 978-0-19-929660-6. Chernoff, Herman (1972). Sequential analysis and optimal design. Society for Industrial and Applied Mathematics. ISBN 978-0-89871-006-9. Fedorov, V. V. (1972). Theory of Optimal Experiments. Academic Press. Fedorov, Valerii V.; Hackl, Peter (1997). Model-Oriented Design of Experiments. Lecture Notes in Statistics. Vol. 125. Springer-Verlag. Goos, Peter (2002). The Optimal Design of Blocked and Split-plot Experiments. Lecture Notes in Statistics. Vol. 164. Springer. Kiefer, Jack Carl (1985). Brown; Olkin, Ingram; Sacks, Jerome; et al. (eds.). Jack Carl Kiefer: Collected papers III—Design of experiments. Springer-Verlag and the Institute of Mathematical Statistics. pp. 718+xxv. ISBN 978-0-387-96004-3. Logothetis, N.; Wynn, H. P. (1989). Quality through design: Experimental design, off-line quality control, and Taguchi's contributions. Oxford U. P. pp. 464+xi. ISBN 978-0-19-851993-5. Nordström, Kenneth (May 1999). "The life and work of Gustav Elfving". Statistical Science. 14 (2): 174–196. doi:10.1214/ss/1009212244. 
JSTOR 2676737. MR 1722074. Pukelsheim, Friedrich (2006). Optimal design of experiments. Classics in Applied Mathematics. Vol. 50 (republication with errata-list and new preface of Wiley (0-471-61971-X) 1993 ed.). Society for Industrial and Applied Mathematics. pp. 454+xxxii. ISBN 978-0-89871-604-7. Shah, Kirti R. & Sinha, Bikas K. (1989). Theory of Optimal Designs. Lecture Notes in Statistics. Vol. 54. Springer-Verlag. pp. 171+viii. ISBN 978-0-387-96991-6. == Further reading == === Textbooks for practitioners and students === ==== Textbooks emphasizing regression and response-surface methodology ==== The textbook by Atkinson, Donev and Tobias has been used for short courses for industrial practitioners as well as university courses. Atkinson, A. C.; Donev, A. N.; Tobias, R. D. (2007). Optimum experimental designs, with SAS. Oxford University Press. pp. 511+xvi. ISBN 978-0-19-929660-6. Logothetis, N.; Wynn, H. P. (1989). Quality through design: Experimental design, off-line quality control, and Taguchi's contributions. Oxford U. P. pp. 464+xi. ISBN 978-0-19-851993-5. ==== Textbooks emphasizing block designs ==== Optimal block designs are discussed by Bailey and by Bapat. The first chapter of Bapat's book reviews the linear algebra used by Bailey (or the advanced books below). Bailey's exercises and discussion of randomization both emphasize statistical concepts (rather than algebraic computations). Bailey, R. A. (2008). Design of Comparative Experiments. Cambridge U. P. ISBN 978-0-521-68357-9. Draft available on-line. (Especially Chapter 11.8 "Optimality") Bapat, R. B. (2000). Linear Algebra and Linear Models (Second ed.). Springer. ISBN 978-0-387-98871-9. (Chapter 5 "Block designs and optimality", pages 99–111) Optimal block designs are discussed in the advanced monograph by Shah and Sinha and in the survey-articles by Cheng and by Majumdar. === Books for professional statisticians and researchers === Chernoff, Herman (1972). Sequential Analysis and Optimal Design. 
SIAM. ISBN 978-0-89871-006-9. Fedorov, V. V. (1972). Theory of Optimal Experiments. Academic Press. Fedorov, Valerii V.; Hackl, Peter (1997). Model-Oriented Design of Experiments. Vol. 125. Springer-Verlag. Goos, Peter (2002). The Optimal Design of Blocked and Split-plot Experiments. Vol. 164. Springer. Goos, Peter & Jones, Bradley (2011). Optimal design of experiments: a case study approach. Chichester Wiley. p. 304. ISBN 978-0-470-74461-1. Kiefer, Jack Carl. (1985). Brown, Lawrence D.; Olkin, Ingram; Jerome Sacks; Wynn, Henry P (eds.). Jack Carl Kiefer Collected Papers III Design of Experiments. Springer-Verlag and the Institute of Mathematical Statistics. ISBN 978-0-387-96004-3. Pukelsheim, Friedrich (2006). Optimal Design of Experiments. Vol. 50. Society for Industrial and Applied Mathematics. ISBN 978-0-89871-604-7. Republication with errata-list and new preface of Wiley (0-471-61971-X) 1993 Shah, Kirti R. & Sinha, Bikas K. (1989). Theory of Optimal Designs. Vol. 54. Springer-Verlag. ISBN 978-0-387-96991-6. === Articles and chapters === Chaloner, Kathryn & Verdinelli, Isabella (1995). "Bayesian Experimental Design: A Review". Statistical Science. 10 (3): 273–304. CiteSeerX 10.1.1.29.5355. doi:10.1214/ss/1177009939. Ghosh, S.; Rao, C. R., eds. (1996). Design and Analysis of Experiments. Handbook of Statistics. Vol. 13. North-Holland. ISBN 978-0-444-82061-7. "Model Robust Designs". Design and Analysis of Experiments. Handbook of Statistics. pp. 1055–1099. Cheng, C.-S. "Optimal Design: Exact Theory". Design and Analysis of Experiments. Handbook of Statistics. pp. 977–1006. DasGupta, A. "Review of Optimal Bayesian Designs". Design and Analysis of Experiments. Handbook of Statistics. pp. 1099–1148. Gaffke, N. & Heiligers, B. "Approximate Designs for Polynomial Regression: Invariance, Admissibility, and Optimality". Design and Analysis of Experiments. Handbook of Statistics. pp. 1149–1199. Majumdar, D. "Optimal and Efficient Treatment-Control Designs". 
Design and Analysis of Experiments. Handbook of Statistics. pp. 1007–1054. Stufken, J. "Optimal Crossover Designs". Design and Analysis of Experiments. Handbook of Statistics. pp. 63–90. Zacks, S. "Adaptive Designs for Parametric Models". Design and Analysis of Experiments. Handbook of Statistics. pp. 151–180. Kôno, Kazumasa (1962). "Optimum designs for quadratic regression on k-cube" (PDF). Memoirs of the Faculty of Science. Kyushu University. Series A. Mathematics. 16 (2): 114–122. doi:10.2206/kyushumfs.16.114. === Historical === Gergonne, J. D. (November 1974) [1815]. "The application of the method of least squares to the interpolation of sequences". Historia Mathematica. 1 (4) (Translated by Ralph St. John and S. M. Stigler from the 1815 French ed.): 439–447. doi:10.1016/0315-0860(74)90034-2. Stigler, Stephen M. (November 1974). "Gergonne's 1815 paper on the design and analysis of polynomial regression experiments". Historia Mathematica. 1 (4): 431–439. doi:10.1016/0315-0860(74)90033-0. Peirce, C. S (1876). "Note on the Theory of the Economy of Research". Coast Survey Report: 197–201. (Appendix No. 14). NOAA PDF Eprint. Reprinted in Collected Papers of Charles Sanders Peirce. Vol. 7. 1958. paragraphs 139–157, and in Peirce, C. S. (July–August 1967). "Note on the Theory of the Economy of Research". Operations Research. 15 (4): 643–648. doi:10.1287/opre.15.4.643. JSTOR 168276. Smith, Kirstine (1918). "On the Standard Deviations of Adjusted and Interpolated Values of an Observed Polynomial Function and its Constants and the Guidance They Give Towards a Proper Choice of the Distribution of the Observations". Biometrika. 12 (1/2): 1–85. doi:10.2307/2331929. JSTOR 2331929.
Wikipedia/Optimal_design
In mathematics, the spiral optimization (SPO) algorithm is a metaheuristic inspired by spiral phenomena in nature. The first SPO algorithm was proposed for two-dimensional unconstrained optimization based on two-dimensional spiral models. This was extended to n-dimensional problems by generalizing the two-dimensional spiral model to an n-dimensional spiral model. There are two effective settings for the SPO algorithm: the periodic descent direction setting and the convergence setting. == Metaphor == The motivation for focusing on spiral phenomena is the insight that the dynamics that generate logarithmic spirals exhibit both diversification and intensification behavior. The diversification behavior supports a global search (exploration), while the intensification behavior enables an intensive search around the best solution found so far (exploitation). == Algorithm == The SPO algorithm is a multipoint search algorithm that requires no gradient of the objective function; it uses multiple spiral models that can be described as deterministic dynamical systems. As search points follow logarithmic spiral trajectories towards the common center, defined as the current best point, better solutions can be found and the common center can be updated. The general SPO algorithm for a minimization problem under the maximum iteration k max {\displaystyle k_{\max }} (termination criterion) is as follows: 0) Set the number of search points m ≥ 2 {\displaystyle m\geq 2} and the maximum iteration number k max {\displaystyle k_{\max }} . 1) Place the initial search points x i ( 0 ) ∈ R n ( i = 1 , … , m ) {\displaystyle x_{i}(0)\in \mathbb {R} ^{n}~(i=1,\ldots ,m)} and determine the center x ⋆ ( 0 ) = x i b ( 0 ) {\displaystyle x^{\star }(0)=x_{i_{\text{b}}}(0)} , i b = argmin i = 1 , … , m ⁡ { f ( x i ( 0 ) ) } {\displaystyle \displaystyle i_{\text{b}}=\mathop {\text{argmin}} _{i=1,\ldots ,m}\{f(x_{i}(0))\}} , and then set k = 0 {\displaystyle k=0} . 
2) Decide the step rate r ( k ) {\displaystyle r(k)} by a rule. 3) Update the search points: x i ( k + 1 ) = x ⋆ ( k ) + r ( k ) R ( θ ) ( x i ( k ) − x ⋆ ( k ) ) ( i = 1 , … , m ) . {\displaystyle x_{i}(k+1)=x^{\star }(k)+r(k)R(\theta )(x_{i}(k)-x^{\star }(k))\quad (i=1,\ldots ,m).} 4) Update the center: x ⋆ ( k + 1 ) = { x i b ( k + 1 ) ( if f ( x i b ( k + 1 ) ) < f ( x ⋆ ( k ) ) ) , x ⋆ ( k ) ( otherwise ) , {\displaystyle x^{\star }(k+1)={\begin{cases}x_{i_{\text{b}}}(k+1)&{\big (}{\text{if }}f(x_{i_{\text{b}}}(k+1))<f(x^{\star }(k)){\big )},\\x^{\star }(k)&{\big (}{\text{otherwise}}{\big )},\end{cases}}} where i b = argmin i = 1 , … , m ⁡ { f ( x i ( k + 1 ) ) } {\displaystyle \displaystyle i_{\text{b}}=\mathop {\text{argmin}} _{i=1,\ldots ,m}\{f(x_{i}(k+1))\}} . 5) Set k := k + 1 {\displaystyle k:=k+1} . If k = k max {\displaystyle k=k_{\max }} is satisfied then terminate and output x ⋆ ( k ) {\displaystyle x^{\star }(k)} . Otherwise, return to Step 2). == Setting == The search performance depends on setting the composite rotation matrix R ( θ ) {\displaystyle R(\theta )} , the step rate r ( k ) {\displaystyle r(k)} , and the initial points x i ( 0 ) ( i = 1 , … , m ) {\displaystyle x_{i}(0)~(i=1,\ldots ,m)} . The following settings are new and effective. === Setting 1 (Periodic Descent Direction Setting) === This setting is an effective setting for high dimensional problems under the maximum iteration k max {\displaystyle k_{\max }} . The conditions on R ( θ ) {\displaystyle R(\theta )} and x i ( 0 ) ( i = 1 , … , m ) {\displaystyle x_{i}(0)~(i=1,\ldots ,m)} together ensure that the spiral models generate descent directions periodically. The condition of r ( k ) {\displaystyle r(k)} works to utilize the periodic descent directions under the search termination k max {\displaystyle k_{\max }} . 
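As an illustration, the iteration in Steps 0)–5) can be sketched in Python. This is a minimal sketch, not code from the original papers: it uses the Setting 1 composite rotation matrix and constant step rate described in this section, and the sphere objective is a hypothetical test function.

```python
import numpy as np

def spo_minimize(f, n, m=10, k_max=100, delta=1e-3, seed=0):
    """Minimal sketch of the spiral optimization (SPO) algorithm.

    Uses the Setting 1 composite rotation matrix R (a cyclic shift of
    the coordinates with one sign flip) and the constant step rate
    r = delta**(1/k_max).
    """
    rng = np.random.default_rng(seed)
    # Setting 1 rotation matrix: first row (0, ..., 0, -1), then I_{n-1}.
    R = np.zeros((n, n))
    R[0, n - 1] = -1.0
    R[1:, : n - 1] = np.eye(n - 1)
    r = delta ** (1.0 / k_max)  # constant step rate

    # Steps 0)-1): random initial points; center = current best point.
    X = rng.uniform(-5.0, 5.0, size=(m, n))
    center = X[np.argmin([f(x) for x in X])].copy()
    history = [f(center)]

    for _ in range(k_max):
        # Step 3): spiral every point around the center (row-vector form).
        X = center + r * (X - center) @ R.T
        # Step 4): replace the center only by a strictly better point.
        best = X[np.argmin([f(x) for x in X])]
        if f(best) < f(center):
            center = best.copy()
        history.append(f(center))
    return center, history

sphere = lambda x: float(np.sum(x ** 2))  # hypothetical test objective
x_star, history = spo_minimize(sphere, n=4)
```

Because the center is replaced only by strictly better points, the recorded objective values are nonincreasing: the rotation provides diversification while the contraction r < 1 provides intensification.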
Set R ( θ ) {\displaystyle R(\theta )} as follows: R ( θ ) = [ 0 n − 1 ⊤ − 1 I n − 1 0 n − 1 ] {\displaystyle R(\theta )={\begin{bmatrix}0_{n-1}^{\top }&-1\\I_{n-1}&0_{n-1}\\\end{bmatrix}}} where I n − 1 {\displaystyle I_{n-1}} is the ( n − 1 ) × ( n − 1 ) {\displaystyle (n-1)\times (n-1)} identity matrix and 0 n − 1 {\displaystyle 0_{n-1}} is the ( n − 1 ) × 1 {\displaystyle (n-1)\times 1} zero vector. Place the initial points x i ( 0 ) ∈ R n {\displaystyle x_{i}(0)\in \mathbb {R} ^{n}} ( i = 1 , … , m ) {\displaystyle (i=1,\ldots ,m)} at random to satisfy the following condition: min i = 1 , … , m { max j = 1 , … , m { rank [ d j , i ( 0 ) R ( θ ) d j , i ( 0 ) ⋯ R ( θ ) 2 n − 1 d j , i ( 0 ) ] } } = n {\displaystyle \min _{i=1,\ldots ,m}\{\max _{j=1,\ldots ,m}{\bigl \{}{\text{rank}}{\bigl [}d_{j,i}(0)~R(\theta )d_{j,i}(0)~~\cdots ~~R(\theta )^{2n-1}d_{j,i}(0){\bigr ]}{\bigr \}}{\bigr \}}=n} where d j , i ( 0 ) = x j ( 0 ) − x i ( 0 ) {\displaystyle d_{j,i}(0)=x_{j}(0)-x_{i}(0)} . Note that this condition is almost surely satisfied by a random placement, so in practice no explicit check is needed. Set r ( k ) {\displaystyle r(k)} at Step 2) as follows: r ( k ) = r = δ k max (constant value) {\displaystyle r(k)=r={\sqrt[{k_{\max }}]{\delta }}~~~~{\text{(constant value)}}} where δ > 0 {\displaystyle \delta >0} is sufficiently small, such as δ = 1 / k max {\displaystyle \delta =1/k_{\max }} or δ = 10 − 3 {\displaystyle \delta =10^{-3}} . === Setting 2 (Convergence Setting) === This setting ensures that the SPO algorithm converges to a stationary point under the maximum iteration k max = ∞ {\displaystyle k_{\max }=\infty } . The settings of R ( θ ) {\displaystyle R(\theta )} and the initial points x i ( 0 ) ( i = 1 , … , m ) {\displaystyle x_{i}(0)~(i=1,\ldots ,m)} are the same as in Setting 1 above. The setting of r ( k ) {\displaystyle r(k)} is as follows. 
Set r ( k ) {\displaystyle r(k)} at Step 2) as follows: r ( k ) = { 1 ( k ⋆ ≦ k ≦ k ⋆ + 2 n − 1 ) , h ( k ≧ k ⋆ + 2 n ) , {\displaystyle r(k)={\begin{cases}1&(k^{\star }\leqq k\leqq k^{\star }+2n-1),\\h&(k\geqq k^{\star }+2n),\end{cases}}} where k ⋆ {\displaystyle k^{\star }} is the iteration at which the center was most recently updated at Step 4) and h = δ 2 n , δ ∈ ( 0 , 1 ) {\displaystyle h={\sqrt[{2n}]{\delta }},\delta \in (0,1)} such as δ = 0.5 {\displaystyle \delta =0.5} . The following rules about k ⋆ {\displaystyle k^{\star }} must therefore be added to the algorithm: •(Step 1) k ⋆ = 0 {\displaystyle k^{\star }=0} . •(Step 4) If x ⋆ ( k + 1 ) ≠ x ⋆ ( k ) {\displaystyle x^{\star }(k+1)\neq x^{\star }(k)} then k ⋆ = k + 1 {\displaystyle k^{\star }=k+1} . == Future works == The algorithms with the above settings are deterministic. Incorporating random operations therefore makes the algorithm more powerful for global optimization. Cruz-Duarte et al. demonstrated this by including stochastic disturbances in the spiral search trajectories, but the direction remains open to further study. Finding an appropriate balance between diversification and intensification spirals, depending on the target problem class (including k max {\displaystyle k_{\max }} ), is important for enhancing performance. == Extended works == Many extended studies have been conducted on the SPO due to its simple structure and concept; these studies have helped improve its global search performance and proposed novel applications. == References ==
Wikipedia/Spiral_optimization_algorithm
In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems. Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information. It does so by gradually improving an approximation to the Hessian matrix of the loss function, obtained only from gradient evaluations (or approximate gradient evaluations) via a generalized secant method. Since the updates of the BFGS curvature matrix do not require matrix inversion, its computational complexity is only O ( n 2 ) {\displaystyle {\mathcal {O}}(n^{2})} , compared to O ( n 3 ) {\displaystyle {\mathcal {O}}(n^{3})} in Newton's method. Also in common use is L-BFGS, which is a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The BFGS-B variant handles simple box constraints. The BFGS matrix also admits a compact representation, which makes it better suited for large constrained problems. The algorithm is named after Charles George Broyden, Roger Fletcher, Donald Goldfarb and David Shanno. == Rationale == The optimization problem is to minimize f ( x ) {\displaystyle f(\mathbf {x} )} , where x {\displaystyle \mathbf {x} } is a vector in R n {\displaystyle \mathbb {R} ^{n}} , and f {\displaystyle f} is a differentiable scalar function. There are no constraints on the values that x {\displaystyle \mathbf {x} } can take. The algorithm begins at an initial estimate x 0 {\displaystyle \mathbf {x} _{0}} for the optimal value and proceeds iteratively to get a better estimate at each stage. 
The search direction pk at stage k is given by the solution of the analogue of the Newton equation: B k p k = − ∇ f ( x k ) , {\displaystyle B_{k}\mathbf {p} _{k}=-\nabla f(\mathbf {x} _{k}),} where B k {\displaystyle B_{k}} is an approximation to the Hessian matrix at x k {\displaystyle \mathbf {x} _{k}} , which is updated iteratively at each stage, and ∇ f ( x k ) {\displaystyle \nabla f(\mathbf {x} _{k})} is the gradient of the function evaluated at xk. A line search in the direction pk is then used to find the next point xk+1 by minimizing f ( x k + γ p k ) {\displaystyle f(\mathbf {x} _{k}+\gamma \mathbf {p} _{k})} over the scalar γ > 0. {\displaystyle \gamma >0.} The quasi-Newton condition imposed on the update of B k {\displaystyle B_{k}} is B k + 1 ( x k + 1 − x k ) = ∇ f ( x k + 1 ) − ∇ f ( x k ) . {\displaystyle B_{k+1}(\mathbf {x} _{k+1}-\mathbf {x} _{k})=\nabla f(\mathbf {x} _{k+1})-\nabla f(\mathbf {x} _{k}).} Let y k = ∇ f ( x k + 1 ) − ∇ f ( x k ) {\displaystyle \mathbf {y} _{k}=\nabla f(\mathbf {x} _{k+1})-\nabla f(\mathbf {x} _{k})} and s k = x k + 1 − x k {\displaystyle \mathbf {s} _{k}=\mathbf {x} _{k+1}-\mathbf {x} _{k}} , then B k + 1 {\displaystyle B_{k+1}} satisfies B k + 1 s k = y k {\displaystyle B_{k+1}\mathbf {s} _{k}=\mathbf {y} _{k}} , which is the secant equation. The curvature condition s k ⊤ y k > 0 {\displaystyle \mathbf {s} _{k}^{\top }\mathbf {y} _{k}>0} should be satisfied for B k + 1 {\displaystyle B_{k+1}} to be positive definite, which can be verified by pre-multiplying the secant equation with s k T {\displaystyle \mathbf {s} _{k}^{T}} . If the function is not strongly convex, then the condition has to be enforced explicitly e.g. by finding a point xk+1 satisfying the Wolfe conditions, which entail the curvature condition, using line search. 
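The rationale above maps directly onto code. Below is a minimal NumPy sketch, an illustration rather than a production implementation: it uses a simple Armijo backtracking line search as a stand-in for a full Wolfe line search, skips the Hessian update when the curvature condition fails, and the quadratic objective is a hypothetical example.

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-6, max_iter=100):
    """Sketch of BFGS maintaining the Hessian approximation B_k."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)  # B_0 = I, so the first step is a gradient step
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        p = np.linalg.solve(B, -g)  # solve B_k p_k = -grad f(x_k)
        # Armijo backtracking line search (a crude stand-in for Wolfe).
        alpha, c = 1.0, 1e-4
        while f(x + alpha * p) > f(x) + c * alpha * (g @ p) and alpha > 1e-10:
            alpha *= 0.5
        s = alpha * p        # s_k = x_{k+1} - x_k
        y = grad(x + s) - g  # y_k = grad f(x_{k+1}) - grad f(x_k)
        if s @ y > 1e-12:    # curvature condition s_k^T y_k > 0
            Bs = B @ s
            # Rank-two BFGS update of B_k.
            B = B + np.outer(y, y) / (s @ y) - np.outer(Bs, Bs) / (s @ Bs)
        x = x + s
    return x

# Hypothetical example: minimize the convex quadratic 0.5 x^T A x - b^T x,
# whose unique minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = bfgs(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b, np.zeros(2))
```

Skipping the update when s_k^T y_k ≤ 0 is only a crude safeguard; a Wolfe line search (or a damped update) is the standard way to guarantee the curvature condition.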
Instead of requiring the full Hessian matrix at the point x k + 1 {\displaystyle \mathbf {x} _{k+1}} to be computed as B k + 1 {\displaystyle B_{k+1}} , the approximate Hessian at stage k is updated by the addition of two matrices: B k + 1 = B k + U k + V k . {\displaystyle B_{k+1}=B_{k}+U_{k}+V_{k}.} Both U k {\displaystyle U_{k}} and V k {\displaystyle V_{k}} are symmetric rank-one matrices, but their sum is a rank-two update matrix. The BFGS and DFP updating matrices both differ from their predecessor by a rank-two matrix. A simpler alternative, the symmetric rank-one method, uses a rank-one update but does not guarantee positive definiteness. In order to maintain the symmetry and positive definiteness of B k + 1 {\displaystyle B_{k+1}} , the update form can be chosen as B k + 1 = B k + α u u ⊤ + β v v ⊤ {\displaystyle B_{k+1}=B_{k}+\alpha \mathbf {u} \mathbf {u} ^{\top }+\beta \mathbf {v} \mathbf {v} ^{\top }} . Imposing the secant condition B k + 1 s k = y k {\displaystyle B_{k+1}\mathbf {s} _{k}=\mathbf {y} _{k}} and choosing u = y k {\displaystyle \mathbf {u} =\mathbf {y} _{k}} and v = B k s k {\displaystyle \mathbf {v} =B_{k}\mathbf {s} _{k}} , we obtain: α = 1 y k T s k , {\displaystyle \alpha ={\frac {1}{\mathbf {y} _{k}^{T}\mathbf {s} _{k}}},} β = − 1 s k T B k s k . {\displaystyle \beta =-{\frac {1}{\mathbf {s} _{k}^{T}B_{k}\mathbf {s} _{k}}}.} Finally, we substitute α {\displaystyle \alpha } and β {\displaystyle \beta } into B k + 1 = B k + α u u ⊤ + β v v ⊤ {\displaystyle B_{k+1}=B_{k}+\alpha \mathbf {u} \mathbf {u} ^{\top }+\beta \mathbf {v} \mathbf {v} ^{\top }} and get the update equation of B k + 1 {\displaystyle B_{k+1}} : B k + 1 = B k + y k y k T y k T s k − B k s k s k T B k T s k T B k s k . 
{\displaystyle B_{k+1}=B_{k}+{\frac {\mathbf {y} _{k}\mathbf {y} _{k}^{\mathrm {T} }}{\mathbf {y} _{k}^{\mathrm {T} }\mathbf {s} _{k}}}-{\frac {B_{k}\mathbf {s} _{k}\mathbf {s} _{k}^{\mathrm {T} }B_{k}^{\mathrm {T} }}{\mathbf {s} _{k}^{\mathrm {T} }B_{k}\mathbf {s} _{k}}}.} == Algorithm == Consider the following unconstrained optimization problem minimize x ∈ R n f ( x ) , {\displaystyle {\begin{aligned}{\underset {\mathbf {x} \in \mathbb {R} ^{n}}{\text{minimize}}}\quad &f(\mathbf {x} ),\end{aligned}}} where f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is a nonlinear objective function. From an initial guess x 0 ∈ R n {\displaystyle \mathbf {x} _{0}\in \mathbb {R} ^{n}} and an initial guess of the Hessian matrix B 0 ∈ R n × n {\displaystyle B_{0}\in \mathbb {R} ^{n\times n}} the following steps are repeated as x k {\displaystyle \mathbf {x} _{k}} converges to the solution: Obtain a direction p k {\displaystyle \mathbf {p} _{k}} by solving B k p k = − ∇ f ( x k ) {\displaystyle B_{k}\mathbf {p} _{k}=-\nabla f(\mathbf {x} _{k})} . Perform a one-dimensional optimization (line search) to find an acceptable stepsize α k {\displaystyle \alpha _{k}} in the direction found in the first step. If an exact line search is performed, then α k = arg ⁡ min f ( x k + α p k ) {\displaystyle \alpha _{k}=\arg \min f(\mathbf {x} _{k}+\alpha \mathbf {p} _{k})} . In practice, an inexact line search usually suffices, with an acceptable α k {\displaystyle \alpha _{k}} satisfying Wolfe conditions. Set s k = α k p k {\displaystyle \mathbf {s} _{k}=\alpha _{k}\mathbf {p} _{k}} and update x k + 1 = x k + s k {\displaystyle \mathbf {x} _{k+1}=\mathbf {x} _{k}+\mathbf {s} _{k}} . y k = ∇ f ( x k + 1 ) − ∇ f ( x k ) {\displaystyle \mathbf {y} _{k}={\nabla f(\mathbf {x} _{k+1})-\nabla f(\mathbf {x} _{k})}} . 
B k + 1 = B k + y k y k T y k T s k − B k s k s k T B k T s k T B k s k {\displaystyle B_{k+1}=B_{k}+{\frac {\mathbf {y} _{k}\mathbf {y} _{k}^{\mathrm {T} }}{\mathbf {y} _{k}^{\mathrm {T} }\mathbf {s} _{k}}}-{\frac {B_{k}\mathbf {s} _{k}\mathbf {s} _{k}^{\mathrm {T} }B_{k}^{\mathrm {T} }}{\mathbf {s} _{k}^{\mathrm {T} }B_{k}\mathbf {s} _{k}}}} . Convergence can be determined by observing the norm of the gradient; given some ϵ > 0 {\displaystyle \epsilon >0} , one may stop the algorithm when | | ∇ f ( x k ) | | ≤ ϵ . {\displaystyle ||\nabla f(\mathbf {x} _{k})||\leq \epsilon .} If B 0 {\displaystyle B_{0}} is initialized with B 0 = I {\displaystyle B_{0}=I} , the first step will be equivalent to a gradient descent, but further steps are more and more refined by B k {\displaystyle B_{k}} , the approximation to the Hessian. The first step of the algorithm is carried out using the inverse of the matrix B k {\displaystyle B_{k}} , which can be obtained efficiently by applying the Sherman–Morrison formula to the step 5 of the algorithm, giving B k + 1 − 1 = ( I − s k y k T y k T s k ) B k − 1 ( I − y k s k T y k T s k ) + s k s k T y k T s k . {\displaystyle B_{k+1}^{-1}=\left(I-{\frac {\mathbf {s} _{k}\mathbf {y} _{k}^{T}}{\mathbf {y} _{k}^{T}\mathbf {s} _{k}}}\right)B_{k}^{-1}\left(I-{\frac {\mathbf {y} _{k}\mathbf {s} _{k}^{T}}{\mathbf {y} _{k}^{T}\mathbf {s} _{k}}}\right)+{\frac {\mathbf {s} _{k}\mathbf {s} _{k}^{T}}{\mathbf {y} _{k}^{T}\mathbf {s} _{k}}}.} This can be computed efficiently without temporary matrices, recognizing that B k − 1 {\displaystyle B_{k}^{-1}} is symmetric, and that y k T B k − 1 y k {\displaystyle \mathbf {y} _{k}^{\mathrm {T} }B_{k}^{-1}\mathbf {y} _{k}} and s k T y k {\displaystyle \mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k}} are scalars, using an expansion such as B k + 1 − 1 = B k − 1 + ( s k T y k + y k T B k − 1 y k ) ( s k s k T ) ( s k T y k ) 2 − B k − 1 y k s k T + s k y k T B k − 1 s k T y k . 
{\displaystyle B_{k+1}^{-1}=B_{k}^{-1}+{\frac {(\mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k}+\mathbf {y} _{k}^{\mathrm {T} }B_{k}^{-1}\mathbf {y} _{k})(\mathbf {s} _{k}\mathbf {s} _{k}^{\mathrm {T} })}{(\mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k})^{2}}}-{\frac {B_{k}^{-1}\mathbf {y} _{k}\mathbf {s} _{k}^{\mathrm {T} }+\mathbf {s} _{k}\mathbf {y} _{k}^{\mathrm {T} }B_{k}^{-1}}{\mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k}}}.} Therefore, in order to avoid any matrix inversion, the inverse of the Hessian can be approximated instead of the Hessian itself: H k = def B k − 1 . {\displaystyle H_{k}{\overset {\operatorname {def} }{=}}B_{k}^{-1}.} From an initial guess x 0 {\displaystyle \mathbf {x} _{0}} and an approximate inverted Hessian matrix H 0 {\displaystyle H_{0}} the following steps are repeated as x k {\displaystyle \mathbf {x} _{k}} converges to the solution: Obtain a direction p k {\displaystyle \mathbf {p} _{k}} by solving p k = − H k ∇ f ( x k ) {\displaystyle \mathbf {p} _{k}=-H_{k}\nabla f(\mathbf {x} _{k})} . Perform a one-dimensional optimization (line search) to find an acceptable stepsize α k {\displaystyle \alpha _{k}} in the direction found in the first step. If an exact line search is performed, then α k = arg ⁡ min f ( x k + α p k ) {\displaystyle \alpha _{k}=\arg \min f(\mathbf {x} _{k}+\alpha \mathbf {p} _{k})} . In practice, an inexact line search usually suffices, with an acceptable α k {\displaystyle \alpha _{k}} satisfying Wolfe conditions. Set s k = α k p k {\displaystyle \mathbf {s} _{k}=\alpha _{k}\mathbf {p} _{k}} and update x k + 1 = x k + s k {\displaystyle \mathbf {x} _{k+1}=\mathbf {x} _{k}+\mathbf {s} _{k}} . y k = ∇ f ( x k + 1 ) − ∇ f ( x k ) {\displaystyle \mathbf {y} _{k}={\nabla f(\mathbf {x} _{k+1})-\nabla f(\mathbf {x} _{k})}} . 
H k + 1 = H k + ( s k T y k + y k T H k y k ) ( s k s k T ) ( s k T y k ) 2 − H k y k s k T + s k y k T H k s k T y k {\displaystyle H_{k+1}=H_{k}+{\frac {(\mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k}+\mathbf {y} _{k}^{\mathrm {T} }H_{k}\mathbf {y} _{k})(\mathbf {s} _{k}\mathbf {s} _{k}^{\mathrm {T} })}{(\mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k})^{2}}}-{\frac {H_{k}\mathbf {y} _{k}\mathbf {s} _{k}^{\mathrm {T} }+\mathbf {s} _{k}\mathbf {y} _{k}^{\mathrm {T} }H_{k}}{\mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k}}}} . In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix. == Further developments == The BFGS update formula heavily relies on the curvature s k ⊤ y k {\displaystyle \mathbf {s} _{k}^{\top }\mathbf {y} _{k}} being strictly positive and bounded away from zero. This condition is satisfied when we perform a line search with Wolfe conditions on a convex target. However, some real-life applications (like Sequential Quadratic Programming methods) routinely produce negative or nearly zero curvatures. This can occur when optimizing a nonconvex target or when employing a trust-region approach instead of a line search. It is also possible to produce spurious values due to noise in the target. In such cases, one of the so-called damped BFGS updates can be used, which modify s k {\displaystyle \mathbf {s} _{k}} and/or y k {\displaystyle \mathbf {y} _{k}} in order to obtain a more robust update. == Notable implementations == Notable open source implementations are: ALGLIB implements BFGS and its limited-memory version in C++ and C#. GNU Octave uses a form of BFGS in its fsolve function, with trust region extensions. 
The GSL implements BFGS as gsl_multimin_fdfminimizer_vector_bfgs2. In R, the BFGS algorithm (and the L-BFGS-B version that allows box constraints) is implemented as an option of the base function optim(). In SciPy, the scipy.optimize.fmin_bfgs function implements BFGS. It is also possible to run BFGS using any of the L-BFGS algorithms by setting the parameter L to a very large number. It is also one of the default methods used when running scipy.optimize.minimize with no constraints. In Julia, the Optim.jl package implements BFGS and L-BFGS as a solver option to the optimize() function (among other options). Notable proprietary implementations include: The large scale nonlinear optimization software Artelys Knitro implements, among others, both BFGS and L-BFGS algorithms. In the MATLAB Optimization Toolbox, the fminunc function uses BFGS with cubic line search when the problem size is set to "medium scale." Mathematica includes BFGS. LS-DYNA also uses BFGS to solve implicit problems. == See also == == References == == Further reading == Avriel, Mordecai (2003), Nonlinear Programming: Analysis and Methods, Dover Publishing, ISBN 978-0-486-43227-4 Bonnans, J. Frédéric; Gilbert, J. Charles; Lemaréchal, Claude; Sagastizábal, Claudia A. (2006), "Newtonian Methods", Numerical Optimization: Theoretical and Practical Aspects (Second ed.), Berlin: Springer, pp. 51–66, ISBN 3-540-35445-X Fletcher, Roger (1987), Practical Methods of Optimization (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-91547-8 Luenberger, David G.; Ye, Yinyu (2008), Linear and nonlinear programming, International Series in Operations Research & Management Science, vol. 116 (Third ed.), New York: Springer, pp. xiv+546, ISBN 978-0-387-74502-2, MR 2423726 Kelley, C. T. (1999), Iterative Methods for Optimization, Philadelphia: Society for Industrial and Applied Mathematics, pp. 71–86, ISBN 0-89871-433-8
Wikipedia/BFGS_method
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. A notable example of an approximation algorithm that provides both is the classic approximation algorithm of Lenstra, Shmoys and Tardos for scheduling on unrelated parallel machines. The design and analysis of approximation algorithms crucially involves a mathematical proof certifying the quality of the returned solutions in the worst case. This distinguishes them from heuristics such as annealing or genetic algorithms, which find reasonably good solutions on some inputs, but provide no clear indication at the outset on when they may succeed or fail. There is widespread interest in theoretical computer science to better understand the limits to which we can approximate certain famous optimization problems. 
For example, one of the long-standing open questions in computer science is to determine whether there is an algorithm that outperforms the 2-approximation for the Steiner Forest problem by Agrawal et al. The desire to understand hard optimization problems from the perspective of approximability is motivated by the discovery of surprising mathematical connections and broadly applicable techniques to design algorithms for hard optimization problems. One well-known example of the former is the Goemans–Williamson algorithm for maximum cut, which solves a graph theoretic problem using high dimensional geometry. == Introduction == A simple example of an approximation algorithm is one for the minimum vertex cover problem, where the goal is to choose the smallest set of vertices such that every edge in the input graph contains at least one chosen vertex. One way to find a vertex cover is to repeat the following process: find an uncovered edge, add both its endpoints to the cover, and remove all edges incident to either vertex from the graph. As any vertex cover of the input graph must use a distinct vertex to cover each edge that was considered in the process (since it forms a matching), the vertex cover produced, therefore, is at most twice as large as the optimal one. In other words, this is a constant-factor approximation algorithm with an approximation factor of 2. Under the recent unique games conjecture, this factor is even the best possible one. NP-hard problems vary greatly in their approximability; some, such as the knapsack problem, can be approximated within a multiplicative factor 1 + ϵ {\displaystyle 1+\epsilon } , for any fixed ϵ > 0 {\displaystyle \epsilon >0} , and therefore produce solutions arbitrarily close to the optimum (such a family of approximation algorithms is called a polynomial-time approximation scheme or PTAS). 
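The edge-by-edge process for vertex cover described above is short enough to state directly in code. The following Python sketch assumes the graph is given as a list of edges; the example graph is hypothetical.

```python
def vertex_cover_2approx(edges):
    """Matching-based 2-approximation for minimum vertex cover.

    Repeatedly take an uncovered edge and add BOTH endpoints to the
    cover. The chosen edges form a matching, and any vertex cover must
    use a distinct vertex per matching edge, so |cover| <= 2 * OPT.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge is still uncovered
            cover.update((u, v))
    return cover

# Hypothetical example: the path 1-2-3-4 plus the chord 2-4.
edges = [(1, 2), (2, 3), (3, 4), (2, 4)]
cover = vertex_cover_2approx(edges)
# Here cover == {1, 2, 3, 4}, while the optimum {2, 3} has size 2,
# so the factor-2 bound is tight on this instance.
```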
Others are impossible to approximate within any constant, or even polynomial, factor unless P = NP, as in the case of the maximum clique problem. Therefore, an important benefit of studying approximation algorithms is a fine-grained classification of the difficulty of various NP-hard problems beyond the one afforded by the theory of NP-completeness. In other words, although NP-complete problems may be equivalent (under polynomial-time reductions) to each other from the perspective of exact solutions, the corresponding optimization problems behave very differently from the perspective of approximate solutions. == Algorithm design techniques == By now there are several established techniques to design approximation algorithms. These include the following ones. Greedy algorithm Local search Enumeration and dynamic programming (which is also often used for parameterized approximations) Solving a convex programming relaxation to get a fractional solution. Then converting this fractional solution into a feasible solution by some appropriate rounding. The popular relaxations include the following. Linear programming relaxations Semidefinite programming relaxations Primal-dual methods Dual fitting Embedding the problem in some metric and then solving the problem on the metric. This is also known as metric embedding. Random sampling and the use of randomness in general in conjunction with the methods above. == A posteriori guarantees == While approximation algorithms always provide an a priori worst case guarantee (be it additive or multiplicative), in some cases they also provide an a posteriori guarantee that is often much better. This is often the case for algorithms that work by solving a convex relaxation of the optimization problem on the given input. For example, there is a different approximation algorithm for minimum vertex cover that solves a linear programming relaxation to find a vertex cover that is at most twice the value of the relaxation. 
Since the value of the relaxation is never larger than the size of the optimal vertex cover, this yields another 2-approximation algorithm. While this is similar to the a priori guarantee of the previous approximation algorithm, the a posteriori guarantee of the latter can be much better (indeed, this is the case when the value of the LP relaxation is far from the size of the optimal vertex cover). == Hardness of approximation == Approximation algorithms as a research area is closely related to and informed by inapproximability theory, where the non-existence of efficient algorithms with certain approximation ratios is proved (conditioned on widely believed hypotheses such as the P ≠ NP conjecture) by means of reductions. In the case of the metric traveling salesman problem, the best known inapproximability result rules out algorithms with an approximation ratio less than 123/122 ≈ 1.008196 unless P = NP (Karpinski, Lampis, and Schmied). Coupled with the knowledge of the existence of Christofides' 1.5 approximation algorithm, this tells us that the threshold of approximability for metric traveling salesman (if it exists) is somewhere between 123/122 and 1.5. While inapproximability results have been proved since the 1970s, such results were obtained by ad hoc means and no systematic understanding was available at the time. It is only since the 1990 result of Feige, Goldwasser, Lovász, Safra and Szegedy on the inapproximability of Independent Set and the famous PCP theorem that modern tools for proving inapproximability results were uncovered. The PCP theorem, for example, shows that Johnson's 1974 approximation algorithms for Max SAT, set cover, independent set and coloring all achieve the optimal approximation ratio, assuming P ≠ NP. == Practicality == Not all approximation algorithms are suitable for direct practical applications. 
Some involve solving non-trivial linear programming/semidefinite relaxations (which may themselves invoke the ellipsoid algorithm), complex data structures, or sophisticated algorithmic techniques, leading to difficult implementation issues or improved running time performance (over exact algorithms) only on impractically large inputs. Implementation and running time issues aside, the guarantees provided by approximation algorithms may themselves not be strong enough to justify their consideration in practice. Despite their inability to be used "out of the box" in practical applications, the ideas and insights behind the design of such algorithms can often be incorporated in other ways in practical algorithms. In this way, the study of even very expensive algorithms is not a completely theoretical pursuit as they can yield valuable insights. In other cases, even if the initial results are of purely theoretical interest, over time, with an improved understanding, the algorithms may be refined to become more practical. One such example is the initial PTAS for Euclidean TSP by Sanjeev Arora (and independently by Joseph Mitchell) which had a prohibitive running time of n O ( 1 / ϵ ) {\displaystyle n^{O(1/\epsilon )}} for a 1 + ϵ {\displaystyle 1+\epsilon } approximation. Yet, within a year these ideas were incorporated into a near-linear time O ( n log ⁡ n ) {\displaystyle O(n\log n)} algorithm for any constant ϵ > 0 {\displaystyle \epsilon >0} . 
== Structure of approximation algorithms == Given an optimization problem: Π ⊆ I × S {\displaystyle \Pi \subseteq I\times S} where Π {\displaystyle \Pi } is the optimization problem (viewed as a relation between inputs and solutions), I {\displaystyle I} the set of inputs and S {\displaystyle S} the set of solutions, we can define the cost function: c : S → R + {\displaystyle c:S\rightarrow \mathbb {R} ^{+}} and the set of feasible solutions: ∀ i ∈ I , S ( i ) = { s ∈ S : i Π s } {\displaystyle \forall i\in I,S(i)=\{s\in S:i\,\Pi \,s\}} The goal is to find a best solution s ∗ {\displaystyle s^{*}} for the maximization or minimization problem: s ∗ ∈ S ( i ) {\displaystyle s^{*}\in S(i)} , c ( s ∗ ) = min / max c ( S ( i ) ) {\displaystyle c(s^{*})=\min /\max \ c(S(i))} Given a feasible solution s ∈ S ( i ) {\displaystyle s\in S(i)} , with s ≠ s ∗ {\displaystyle s\neq s^{*}} , we want a guarantee on the quality of the solution, i.e. a bound on its performance (the approximation factor). Specifically, writing A Π ( i ) ∈ S ( i ) {\displaystyle A_{\Pi }(i)\in S(i)} for the solution returned by the algorithm, the algorithm has an approximation factor (or approximation ratio) of ρ ( n ) {\displaystyle \rho (n)} if, ∀ i ∈ I {\displaystyle \forall i\in I} such that | i | = n {\displaystyle |i|=n} , we have: for a minimization problem: c ( A Π ( i ) ) c ( s ∗ ( i ) ) ≤ ρ ( n ) {\displaystyle {\frac {c(A_{\Pi }(i))}{c(s^{*}(i))}}\leq \rho (n)} , i.e. the cost of the solution returned by the algorithm, divided by the cost of an optimal solution, is at most ρ ( n ) {\displaystyle \rho (n)} ; for a maximization problem: c ( s ∗ ( i ) ) c ( A Π ( i ) ) ≤ ρ ( n ) {\displaystyle {\frac {c(s^{*}(i))}{c(A_{\Pi }(i))}}\leq \rho (n)} , i.e. the cost of an optimal solution, divided by the cost of the solution returned by the algorithm, is at most ρ ( n ) {\displaystyle \rho (n)} . The approximation can be proven tight (a tight approximation) by demonstrating that there exist instances on which the algorithm performs exactly at the approximation limit. 
In this case, it's enough to construct an input instance designed to force the algorithm into a worst-case scenario. == Performance guarantees == For some approximation algorithms it is possible to prove certain properties about the approximation of the optimum result. For example, a ρ-approximation algorithm A is defined to be an algorithm for which it has been proven that the value/cost, f(x), of the approximate solution A(x) to an instance x will not be more (or less, depending on the situation) than a factor ρ times the value, OPT, of an optimum solution. { O P T ≤ f ( x ) ≤ ρ O P T , if ρ > 1 ; ρ O P T ≤ f ( x ) ≤ O P T , if ρ < 1. {\displaystyle {\begin{cases}\mathrm {OPT} \leq f(x)\leq \rho \mathrm {OPT} ,\qquad {\mbox{if }}\rho >1;\\\rho \mathrm {OPT} \leq f(x)\leq \mathrm {OPT} ,\qquad {\mbox{if }}\rho <1.\end{cases}}} The factor ρ is called the relative performance guarantee. An approximation algorithm has an absolute performance guarantee or bounded error c, if it has been proven for every instance x that ( O P T − c ) ≤ f ( x ) ≤ ( O P T + c ) . {\displaystyle (\mathrm {OPT} -c)\leq f(x)\leq (\mathrm {OPT} +c).} Similarly, the performance guarantee, R(x,y), of a solution y to an instance x is defined as R ( x , y ) = max ( O P T f ( y ) , f ( y ) O P T ) , {\displaystyle R(x,y)=\max \left({\frac {OPT}{f(y)}},{\frac {f(y)}{OPT}}\right),} where f(y) is the value/cost of the solution y for the instance x. Clearly, the performance guarantee is greater than or equal to 1 and equal to 1 if and only if y is an optimal solution. If an algorithm A guarantees to return solutions with a performance guarantee of at most r(n), then A is said to be an r(n)-approximation algorithm and has an approximation ratio of r(n). Likewise, a problem with an r(n)-approximation algorithm is said to be r(n)-approximable or have an approximation ratio of r(n). 
For minimization problems, the two different guarantees provide the same result, while for maximization problems a relative performance guarantee of ρ is equivalent to a performance guarantee of r = ρ − 1 {\displaystyle r=\rho ^{-1}} . In the literature, both definitions are common, but it is clear which definition is meant since, for maximization problems, ρ ≤ 1 while r ≥ 1. The absolute performance guarantee P A {\displaystyle \mathrm {P} _{A}} of some approximation algorithm A, where x refers to an instance of a problem, and where R A ( x ) {\displaystyle R_{A}(x)} is the performance guarantee of A on x (i.e. ρ for problem instance x) is: P A = inf { r ≥ 1 ∣ R A ( x ) ≤ r , ∀ x } . {\displaystyle \mathrm {P} _{A}=\inf\{r\geq 1\mid R_{A}(x)\leq r,\forall x\}.} That is to say that P A {\displaystyle \mathrm {P} _{A}} is the tightest bound on the approximation ratio, r, over all possible instances of the problem; equivalently, it is the worst-case ratio that one sees over all instances. Likewise, the asymptotic performance ratio R A ∞ {\displaystyle R_{A}^{\infty }} is: R A ∞ = inf { r ≥ 1 ∣ ∃ n ∈ Z + , R A ( x ) ≤ r , ∀ x , | x | ≥ n } . {\displaystyle R_{A}^{\infty }=\inf\{r\geq 1\mid \exists n\in \mathbb {Z} ^{+},R_{A}(x)\leq r,\forall x,|x|\geq n\}.} That is to say that it is the same as the absolute performance ratio, but with a lower bound n on the size of problem instances. These two types of ratios are used because there exist algorithms where the difference between the two is significant. == Epsilon terms == In the literature, an approximation ratio for a maximization (minimization) problem of c - ϵ (min: c + ϵ) means that the algorithm has an approximation ratio of c ∓ ϵ for arbitrary ϵ > 0, but that the ratio has not been (or cannot be) shown for ϵ = 0. An example of this is the optimal inapproximability ratio of 7 / 8 + ϵ for satisfiable MAX-3SAT instances due to Johan Håstad, meaning that achieving a ratio of 7 / 8 + ϵ is NP-hard for every fixed ϵ > 0. As mentioned previously, when c = 1, the problem is said to have a polynomial-time approximation scheme. 
An ϵ-term may appear when an approximation algorithm introduces a multiplicative error and a constant error while the minimum optimum of instances of size n goes to infinity as n does. In this case, the approximation ratio is c ∓ k / OPT = c ∓ o(1) for some constants c and k. Given arbitrary ϵ > 0, one can choose a large enough N such that the term k / OPT < ϵ for every n ≥ N. For every fixed ϵ, instances of size n < N can be solved by brute force, thereby showing an approximation ratio of c ∓ ϵ for every ϵ > 0. == See also ==
- Domination analysis considers guarantees in terms of the rank of the computed solution.
- PTAS - a type of approximation algorithm that takes the approximation ratio as a parameter
- Parameterized approximation algorithm - a type of approximation algorithm that runs in FPT time
- APX is the class of problems with some constant-factor approximation algorithm
- Approximation-preserving reduction
- Exact algorithm
== Citations == == References == Vazirani, Vijay V. (2003). Approximation Algorithms. Berlin: Springer. ISBN 978-3-540-65367-7. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 35: Approximation Algorithms, pp. 1022–1056. Dorit S. Hochbaum, ed. Approximation Algorithms for NP-Hard problems, PWS Publishing Company, 1997. ISBN 0-534-94968-1. Chapter 9: Various Notions of Approximations: Good, Better, Best, and More. Williamson, David P.; Shmoys, David B. (April 26, 2011), The Design of Approximation Algorithms, Cambridge University Press, ISBN 978-0521195270 == External links == Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski and Gerhard Woeginger, A compendium of NP optimization problems.
Wikipedia/Approximation_algorithm
A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage. In many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic can yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time. For example, a greedy strategy for the travelling salesman problem (which is of high computational complexity) is the following heuristic: "At each step of the journey, visit the nearest unvisited city." This heuristic does not intend to find the best solution, but it terminates in a reasonable number of steps; finding an optimal solution to such a complex problem typically requires unreasonably many steps. In mathematical optimization, greedy algorithms optimally solve combinatorial problems having the properties of matroids and give constant-factor approximations to optimization problems with the submodular structure. == Specifics == Greedy algorithms produce good solutions on some mathematical problems, but not on others. Most problems for which they work will have two properties: Greedy choice property Whichever choice seems best at a given moment can be made and then (recursively) solve the remaining sub-problems. The choice made by a greedy algorithm may depend on choices made so far, but not on future choices or all the solutions to the subproblem. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one. In other words, a greedy algorithm never reconsiders its choices. This is the main difference from dynamic programming, which is exhaustive and is guaranteed to find the solution. After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage and may reconsider the previous stage's algorithmic path to the solution. 
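The nearest-unvisited-city rule quoted above can be sketched in a few lines (an illustrative snippet; the distance matrix below is a made-up example):

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP heuristic: from the current city, always visit the
    nearest city not yet visited.  `dist[i][j]` is the distance between
    cities i and j.  Terminates quickly, but is generally suboptimal."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        here = tour[-1]
        nxt = min(unvisited, key=lambda city: dist[here][city])  # greedy choice
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(nearest_neighbour_tour(dist))  # [0, 1, 3, 2]
```

Note that the heuristic never reconsiders an earlier choice, which is exactly why it can be led into arbitrarily bad tours on adversarial instances.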
Optimal substructure "A problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to the sub-problems." === Correctness Proofs === A common technique for proving the correctness of greedy algorithms uses an inductive exchange argument. The exchange argument demonstrates that any solution different from the greedy solution can be transformed into the greedy solution without degrading its quality. This proof pattern typically follows these steps (by contradiction):
1. Assume there exists an optimal solution different from the greedy solution.
2. Identify the first point where the optimal and greedy solutions differ.
3. Prove that exchanging the optimal choice for the greedy choice at this point cannot worsen the solution.
4. Conclude by induction that there must exist an optimal solution identical to the greedy solution.
In some cases, an additional step may be needed to prove that no optimal solution can strictly improve upon the greedy solution. === Cases of failure === Greedy algorithms fail to produce the optimal solution for many other problems and may even produce the unique worst possible solution. One example is the travelling salesman problem mentioned above: for each number of cities, there is an assignment of distances between the cities for which the nearest-neighbour heuristic produces the unique worst possible tour. For other possible examples, see horizon effect. == Types == Greedy algorithms can be characterized as being 'short sighted', and also as 'non-recoverable'. They are ideal only for problems that have an 'optimal substructure'. Despite this, for many simple problems, the best-suited algorithms are greedy. It is important, however, to note that the greedy algorithm can be used as a selection algorithm to prioritize options within a search, or branch-and-bound algorithm. 
There are a few variations to the greedy algorithm:
- Pure greedy algorithms
- Orthogonal greedy algorithms
- Relaxed greedy algorithms
== Theory == Greedy algorithms have a long history of study in combinatorial optimization and theoretical computer science. Greedy heuristics are known to produce suboptimal results on many problems, and so natural questions are:
- For which problems do greedy algorithms perform optimally?
- For which problems do greedy algorithms guarantee an approximately optimal solution?
- For which problems are greedy algorithms guaranteed not to produce an optimal solution?
A large body of literature exists answering these questions for general classes of problems, such as matroids, as well as for specific problems, such as set cover. === Matroids === A matroid is a mathematical structure that generalizes the notion of linear independence from vector spaces to arbitrary sets. If an optimization problem has the structure of a matroid, then the appropriate greedy algorithm will solve it optimally. === Submodular functions === A function f {\displaystyle f} defined on subsets of a set Ω {\displaystyle \Omega } is called submodular if for every S , T ⊆ Ω {\displaystyle S,T\subseteq \Omega } we have that f ( S ) + f ( T ) ≥ f ( S ∪ T ) + f ( S ∩ T ) {\displaystyle f(S)+f(T)\geq f(S\cup T)+f(S\cap T)} . Suppose one wants to find a set S {\displaystyle S} which maximizes f {\displaystyle f} . The greedy algorithm, which builds up a set S {\displaystyle S} by incrementally adding the element which increases f {\displaystyle f} the most at each step, produces as output a set whose value is at least ( 1 − 1 / e ) max X ⊆ Ω f ( X ) {\displaystyle (1-1/e)\max _{X\subseteq \Omega }f(X)} . That is, greedy performs within a constant factor of ( 1 − 1 / e ) ≈ 0.63 {\displaystyle (1-1/e)\approx 0.63} of the optimal solution. 
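The greedy rule just described can be illustrated on maximum coverage, a classic monotone submodular objective (an illustrative sketch; the budget k on the number of chosen sets is a cardinality constraint, the setting in which the (1 − 1/e) bound is usually stated, and the input sets are made-up values):

```python
def greedy_max_coverage(sets, k):
    """Greedy maximization of the (monotone, submodular) coverage
    function f(S) = |union of chosen sets|: at each of k steps, pick
    the set whose addition increases coverage the most.  This achieves
    at least a (1 - 1/e) fraction of the optimal coverage."""
    covered, chosen = set(), []
    for _ in range(k):
        gains = [len(s - covered) for s in sets]   # marginal gain of each set
        best = max(range(len(sets)), key=gains.__getitem__)
        if gains[best] == 0:
            break                                  # nothing new can be covered
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, k=2)
print(chosen, sorted(covered))  # [2, 0] [1, 2, 3, 4, 5, 6, 7]
```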
Similar guarantees are provable when additional constraints, such as cardinality constraints, are imposed on the output, though often slight variations on the greedy algorithm are required. === Other problems with guarantees === Other problems for which the greedy algorithm gives a strong guarantee, but not an optimal solution, include:
- Set cover
- The Steiner tree problem
- Load balancing
- Independent set
Many of these problems have matching lower bounds; i.e., the greedy algorithm does not perform better than the guarantee in the worst case. == Applications == Greedy algorithms typically (but not always) fail to find the globally optimal solution because they usually do not operate exhaustively on all the data. They can make commitments to certain choices too early, preventing them from finding the best overall solution later. For example, all known greedy coloring algorithms for the graph coloring problem and all other NP-complete problems do not consistently find optimum solutions. Nevertheless, they are useful because they are quick to think up and often give good approximations to the optimum. If a greedy algorithm can be proven to yield the global optimum for a given problem class, it typically becomes the method of choice because it is faster than other optimization methods like dynamic programming. Examples of such greedy algorithms are Kruskal's algorithm and Prim's algorithm for finding minimum spanning trees and the algorithm for finding optimum Huffman trees. Greedy algorithms appear in network routing as well. Using greedy routing, a message is forwarded to the neighbouring node which is "closest" to the destination. The notion of a node's location (and hence "closeness") may be determined by its physical location, as in geographic routing used by ad hoc networks. Location may also be an entirely artificial construct as in small world routing and distributed hash table. 
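As a concrete instance of a provably optimal greedy algorithm mentioned above, Kruskal's minimum-spanning-tree algorithm can be sketched in a few lines (a minimal illustration with a naive union-find; the example graph is made up):

```python
def kruskal_mst(n, edges):
    """Kruskal's greedy MST algorithm: scan the edges in increasing
    weight order and keep an edge whenever it joins two different
    components (tracked with a union-find structure)."""
    parent = list(range(n))

    def find(x):                      # find component representative
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):     # greedy: cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                  # edge connects two components: keep it
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, 0, 1), (3, 1, 2), (2, 0, 2), (4, 1, 3), (5, 2, 3)]  # (weight, u, v)
mst, total = kruskal_mst(4, edges)
print(total)  # 7
```

The greedy choice is safe here precisely because spanning trees form a matroid, so the exchange argument sketched earlier goes through.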
== Examples == The activity selection problem is characteristic of this class of problems, where the goal is to pick the maximum number of activities that do not clash with each other. In the Macintosh computer game Crystal Quest the objective is to collect crystals, in a fashion similar to the travelling salesman problem. The game has a demo mode, where the game uses a greedy algorithm to go to every crystal. The artificial intelligence does not account for obstacles, so the demo mode often ends quickly. The matching pursuit is an example of a greedy algorithm applied on signal approximation. A greedy algorithm finds the optimal solution to Malfatti's problem of finding three disjoint circles within a given triangle that maximize the total area of the circles; it is conjectured that the same greedy algorithm is optimal for any number of circles. A greedy algorithm is used to construct a Huffman tree during Huffman coding where it finds an optimal solution. In decision tree learning, greedy algorithms are commonly used, however they are not guaranteed to find the optimal solution. One popular such algorithm is the ID3 algorithm for decision tree construction. Dijkstra's algorithm and the related A* search algorithm are verifiably optimal greedy algorithms for graph search and shortest path finding. A* search is conditionally optimal, requiring an "admissible heuristic" that will not overestimate path costs. Kruskal's algorithm and Prim's algorithm are greedy algorithms for constructing minimum spanning trees of a given connected graph. They always find an optimal solution, which may not be unique in general. The Sequitur and Lempel-Ziv-Welch algorithms are greedy algorithms for grammar induction. == See also == == References == === Sources === == External links == "Greedy algorithm", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Gift, Noah. "Python greedy coin example".
Wikipedia/Greedy_algorithm
The Nelder–Mead method (also downhill simplex method, amoeba method, or polytope method) is a numerical method used to find the minimum or maximum of an objective function in a multidimensional space. It is a direct search method (based on function comparison) and is often applied to nonlinear optimization problems for which derivatives may not be known. However, the Nelder–Mead technique is a heuristic search method that can converge to non-stationary points on problems that can be solved by alternative methods. The Nelder–Mead technique was proposed by John Nelder and Roger Mead in 1965, as a development of the method of Spendley et al. == Overview == The method uses the concept of a simplex, which is a special polytope of n + 1 vertices in n dimensions. Examples of simplices include a line segment in one-dimensional space, a triangle in two-dimensional space, a tetrahedron in three-dimensional space, and so forth. The method approximates a local optimum of a problem with n variables when the objective function varies smoothly and is unimodal. Typical implementations minimize functions, and we maximize f ( x ) {\displaystyle f(\mathbf {x} )} by minimizing − f ( x ) {\displaystyle -f(\mathbf {x} )} . For example, a suspension bridge engineer has to choose how thick each strut, cable, and pier must be. These elements are interdependent, but it is not easy to visualize the impact of changing any specific element. Simulation of such complicated structures is often extremely computationally expensive to run, possibly taking upwards of hours per execution. The Nelder–Mead method requires, in the original variant, no more than two evaluations per iteration, except for the shrink operation described later, which is attractive compared to some other direct-search optimization methods. However, the overall number of iterations to proposed optimum may be high. Nelder–Mead in n dimensions maintains a set of n + 1 test points arranged as a simplex. 
It then extrapolates the behavior of the objective function measured at each test point in order to find a new test point and to replace one of the old test points with the new one, and so the technique progresses. The simplest approach is to replace the worst point with a point reflected through the centroid of the remaining n points. If this point is better than the best current point, then we can try stretching exponentially out along this line. On the other hand, if this new point isn't much better than the previous value, then we are stepping across a valley, so we shrink the simplex towards a better point. An intuitive explanation of the algorithm from "Numerical Recipes": The downhill simplex method now takes a series of steps, most steps just moving the point of the simplex where the function is largest (“highest point”) through the opposite face of the simplex to a lower point. These steps are called reflections, and they are constructed to conserve the volume of the simplex (and hence maintain its nondegeneracy). When it can do so, the method expands the simplex in one or another direction to take larger steps. When it reaches a “valley floor”, the method contracts itself in the transverse direction and tries to ooze down the valley. If there is a situation where the simplex is trying to “pass through the eye of a needle”, it contracts itself in all directions, pulling itself in around its lowest (best) point. Unlike modern optimization methods, the Nelder–Mead heuristic can converge to a non-stationary point, unless the problem satisfies stronger conditions than are necessary for modern methods. Modern improvements over the Nelder–Mead heuristic have been known since 1979. Many variations exist depending on the actual nature of the problem being solved. A common variant uses a constant-size, small simplex that roughly follows the gradient direction (which gives steepest descent). 
Visualize a small triangle on an elevation map flip-flopping its way down a valley to a local bottom. This method is also known as the flexible polyhedron method. This, however, tends to perform poorly against the method described in this article because it makes small, unnecessary steps in areas of little interest. == One possible variation of the NM algorithm == (This approximates the procedure in the original Nelder–Mead article.) We are trying to minimize the function f ( x ) {\displaystyle f(\mathbf {x} )} , where x ∈ R n {\displaystyle \mathbf {x} \in \mathbb {R} ^{n}} . Our current test points are x 1 , … , x n + 1 {\displaystyle \mathbf {x} _{1},\ldots ,\mathbf {x} _{n+1}} . Note: α {\displaystyle \alpha } , γ {\displaystyle \gamma } , ρ {\displaystyle \rho } and σ {\displaystyle \sigma } are respectively the reflection, expansion, contraction and shrink coefficients. Standard values are α = 1 {\displaystyle \alpha =1} , γ = 2 {\displaystyle \gamma =2} , ρ = 1 / 2 {\displaystyle \rho =1/2} and σ = 1 / 2 {\displaystyle \sigma =1/2} . For the reflection, since x n + 1 {\displaystyle \mathbf {x} _{n+1}} is the vertex with the higher associated value among the vertices, we can expect to find a lower value at the reflection of x n + 1 {\displaystyle \mathbf {x} _{n+1}} in the opposite face formed by all vertices x i {\displaystyle \mathbf {x} _{i}} except x n + 1 {\displaystyle \mathbf {x} _{n+1}} . For the expansion, if the reflection point x r {\displaystyle \mathbf {x} _{r}} is the new minimum along the vertices, we can expect to find interesting values along the direction from x o {\displaystyle \mathbf {x} _{o}} to x r {\displaystyle \mathbf {x} _{r}} . Concerning the contraction, if f ( x r ) > f ( x n ) {\displaystyle f(\mathbf {x} _{r})>f(\mathbf {x} _{n})} , we can expect that a better value will be inside the simplex formed by all the vertices x i {\displaystyle \mathbf {x} _{i}} . 
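The variant described above, with the standard coefficients α = 1, γ = 2, ρ = 1/2 and σ = 1/2, can be sketched in pure Python (an illustrative implementation, not a robust solver: it uses only inside contraction followed by the shrink step, the initial simplex takes a fixed step along each axis, and the termination test is the standard deviation of the function values over the simplex):

```python
import statistics

def nelder_mead(f, x0, step=0.5, max_iter=1000, tol=1e-12):
    """Minimal Nelder-Mead sketch: reflection (alpha=1), expansion
    (gamma=2), contraction (rho=1/2) and shrink (sigma=1/2)."""
    simplex = [list(x0)]
    for i in range(len(x0)):                     # initial simplex: fixed axis steps
        v = list(x0)
        v[i] += step
        simplex.append(v)

    def along(a, b, t):                          # the point a + t*(b - a)
        return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

    for _ in range(max_iter):
        simplex.sort(key=f)                      # best first, worst last
        if statistics.pstdev(f(x) for x in simplex) < tol:
            break                                # Nelder & Mead's termination test
        best, second_worst, worst = simplex[0], simplex[-2], simplex[-1]
        centroid = [sum(c) / (len(simplex) - 1) for c in zip(*simplex[:-1])]
        xr = along(worst, centroid, 2.0)         # reflect worst through centroid
        if f(xr) < f(best):
            xe = along(worst, centroid, 3.0)     # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(second_worst):
            simplex[-1] = xr                     # accept the reflected point
        else:
            xc = along(worst, centroid, 0.5)     # contraction
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                # shrink towards the best point
                simplex = [along(best, x, 0.5) for x in simplex]
    return min(simplex, key=f)

# Minimize a smooth quadratic with minimum at (1, 2).
xmin = nelder_mead(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, [0.0, 0.0])
print([round(v, 3) for v in xmin])
```

On this smooth, unimodal test function the sketch converges to (1, 2); on harder problems a production implementation (for example SciPy's `minimize(..., method='Nelder-Mead')`) adds the safeguards discussed in this article.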
Finally, the shrink handles the rare case that contracting away from the largest point increases f {\displaystyle f} , something that cannot happen sufficiently close to a non-singular minimum. In that case we contract towards the lowest point in the expectation of finding a simpler landscape. However, Nash notes that finite-precision arithmetic can sometimes fail to actually shrink the simplex, and implemented a check that the size is actually reduced. == Initial simplex == The initial simplex is important. Indeed, a too-small initial simplex can lead to a local search, and consequently the method can get stuck more easily. So this simplex should depend on the nature of the problem. However, the original article suggested a simplex where an initial point is given as x 1 {\displaystyle \mathbf {x} _{1}} , with the others generated with a fixed step along each dimension in turn. Thus the method is sensitive to scaling of the variables that make up x {\displaystyle \mathbf {x} } . == Termination == Criteria are needed to break the iterative cycle. Nelder and Mead used the sample standard deviation of the function values of the current simplex. If these fall below some tolerance, then the cycle is stopped and the lowest point in the simplex returned as a proposed optimum. Note that a very "flat" function may have almost equal function values over a large domain, so that the solution will be sensitive to the tolerance. Nash adds the test for shrinkage as another termination criterion. Note that programs terminate, while iterations may converge. == See also == == References == == Further reading == Avriel, Mordecai (2003). Nonlinear Programming: Analysis and Methods. Dover Publishing. ISBN 978-0-486-43227-4. Coope, I. D.; Price, C. J. (2002). "Positive Bases in Numerical Optimization". Computational Optimization and Applications. 21 (2): 169–176. doi:10.1023/A:1013760716801. S2CID 15947440. Gill, Philip E.; Murray, Walter; Wright, Margaret H. (1981). 
"Methods for Multivariate Non-Smooth Functions". Practical Optimization. New York: Academic Press. pp. 93–96. ISBN 978-0-12-283950-4. Kowalik, J.; Osborne, M. R. (1968). Methods for Unconstrained Optimization Problems. New York: Elsevier. pp. 24–27. ISBN 0-444-00041-0. Swann, W. H. (1972). "Direct Search Methods". In Murray, W. (ed.). Numerical Methods for Unconstrained Optimization. New York: Academic Press. pp. 13–28. ISBN 978-0-12-512250-4. == External links == Nelder–Mead (Downhill Simplex) explanation and visualization with the Rosenbrock banana function John Burkardt: Nelder–Mead code in Matlab - note that a variation of the Nelder–Mead method is also implemented by the Matlab function fminsearch. Nelder-Mead optimization in Python in the SciPy library. nelder-mead - A Python implementation of the Nelder–Mead method NelderMead() - A Go/Golang implementation SOVA 1.0 (freeware) - Simplex Optimization for Various Applications [1] - HillStormer, a practical tool for nonlinear, multivariate and linear constrained Simplex Optimization by Nelder Mead.
Wikipedia/Nelder–Mead_method
Powell's method, strictly Powell's conjugate direction method, is an algorithm proposed by Michael J. D. Powell for finding a local minimum of a function. The function need not be differentiable, and no derivatives are taken. The function must be a real-valued function of a fixed number of real-valued inputs. The caller passes in the initial point. The caller also passes in a set of initial search vectors. Typically N search vectors (say { s 1 , … , s N } {\textstyle \{s_{1},\dots ,s_{N}\}} ) are passed in which are simply the normals aligned to each axis. The method minimises the function by a bi-directional search along each search vector, in turn. The bi-directional line search along each search vector can be done by Golden-section search or Brent's method. Let the minima found during each bi-directional line search be { x 0 + α 1 s 1 , x 0 + ∑ i = 1 2 α i s i , … , x 0 + ∑ i = 1 N α i s i } {\textstyle \{x_{0}+\alpha _{1}s_{1},{x}_{0}+\sum _{i=1}^{2}\alpha _{i}{s}_{i},\dots ,{x}_{0}+\sum _{i=1}^{N}\alpha _{i}{s}_{i}\}} , where x 0 {\textstyle {x}_{0}} is the initial starting point and α i {\textstyle \alpha _{i}} is the scalar determined during bi-directional search along s i {\textstyle {s}_{i}} . The new position ( x 1 {\textstyle x_{1}} ) can then be expressed as a linear combination of the search vectors i.e. x 1 = x 0 + ∑ i = 1 N α i s i {\textstyle x_{1}=x_{0}+\sum _{i=1}^{N}\alpha _{i}s_{i}} . The new displacement vector ( ∑ i = 1 N α i s i {\textstyle \sum _{i=1}^{N}\alpha _{i}s_{i}} ) becomes a new search vector, and is added to the end of the search vector list. Meanwhile, the search vector which contributed most to the new direction, i.e. the one which was most successful ( i d = arg ⁡ max i = 1 N | α i | ‖ s i ‖ {\textstyle i_{d}=\arg \max _{i=1}^{N}|\alpha _{i}|\|s_{i}\|} ), is deleted from the search vector list. 
The new set of N search vectors is { s 1 , … , s i d − 1 , s i d + 1 , … , s N , ∑ i = 1 N α i s i } {\textstyle \{s_{1},\dots ,s_{i_{d}-1},s_{i_{d}+1},\dots ,s_{N},\sum _{i=1}^{N}\alpha _{i}s_{i}\}} . The algorithm iterates an arbitrary number of times until no significant improvement is made. The method is useful for calculating the local minimum of a continuous but complex function, especially one without an underlying mathematical definition, because it is not necessary to take derivatives. The basic algorithm is simple; the complexity is in the linear searches along the search vectors, which can be achieved via Brent's method. == References == Powell, M. J. D. (1964). "An efficient method for finding the minimum of a function of several variables without calculating derivatives". Computer Journal. 7 (2): 155–162. doi:10.1093/comjnl/7.2.155. hdl:10338.dmlcz/103029. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 10.7. Direction Set (Powell's) Methods in Multidimensions". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Brent, Richard P. (1973). "Section 7.3: Powell's algorithm". Algorithms for minimization without derivatives. Englewood Cliffs, N.J.: Prentice-Hall. ISBN 0-486-41998-3.
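The direction-update scheme described above can be given a rough pure-Python sketch (illustrative only: the bidirectional line searches use golden-section search on a fixed bracketing interval, which is an assumption, and the stopping rule is ad hoc):

```python
import math

def golden_section(g, a=-5.0, b=5.0, tol=1e-8):
    """Derivative-free 1-D minimization of g on [a, b] (assumed to
    bracket a unimodal minimum)."""
    inv_phi = (math.sqrt(5) - 1) / 2
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if g(c) < g(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

def powell(f, x0, iters=20):
    """Sketch of Powell's method: line-minimize along each search
    vector in turn, then replace the most successful vector by the
    overall displacement direction and line-minimize along it."""
    n = len(x0)
    dirs = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # axis normals
    x = list(x0)
    for _ in range(iters):
        x_start, alphas = list(x), []
        for s in dirs:
            a = golden_section(lambda t: f([xi + t * si for xi, si in zip(x, s)]))
            x = [xi + a * si for xi, si in zip(x, s)]
            alphas.append(a)
        disp = [xi - x0i for xi, x0i in zip(x, x_start)]   # candidate new direction
        if max(abs(d) for d in disp) < 1e-6:
            break                                          # no significant improvement
        # drop the direction that contributed most, append the displacement
        lengths = [abs(a) * math.sqrt(sum(si * si for si in s))
                   for a, s in zip(alphas, dirs)]
        dirs.pop(lengths.index(max(lengths)))
        dirs.append(disp)
        a = golden_section(lambda t: f([xi + t * di for xi, di in zip(x, disp)]))
        x = [xi + a * di for xi, di in zip(x, disp)]
    return x

# Minimize f(x, y) = (x - 1)^2 + (x - y)^2, whose minimum is at (1, 1).
xmin = powell(lambda x: (x[0] - 1) ** 2 + (x[0] - x[1]) ** 2, [0.0, 0.0])
print([round(v, 4) for v in xmin])
```

A production implementation (for example SciPy's `minimize(..., method='Powell')`) brackets each line search adaptively instead of using a fixed interval.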
Wikipedia/Powell's_method
Augmented Lagrangian methods are a certain class of algorithms for solving constrained optimization problems. They have similarities to penalty methods in that they replace a constrained optimization problem by a series of unconstrained problems and add a penalty term to the objective, but the augmented Lagrangian method adds yet another term designed to mimic a Lagrange multiplier. The augmented Lagrangian is related to, but not identical with, the method of Lagrange multipliers. Viewed differently, the unconstrained objective is the Lagrangian of the constrained problem, with an additional penalty term (the augmentation). The method was originally known as the method of multipliers and was studied in the 1970s and 1980s as a potential alternative to penalty methods. It was first discussed by Magnus Hestenes and then by Michael Powell in 1969. The method was studied by R. Tyrrell Rockafellar in relation to Fenchel duality, particularly in relation to proximal-point methods, Moreau–Yosida regularization, and maximal monotone operators; these methods were used in structural optimization. The method was also studied by Dimitri Bertsekas, notably in his 1982 book, together with extensions involving non-quadratic regularization functions (e.g., entropic regularization). This combined study gives rise to the "exponential method of multipliers" which handles inequality constraints with a twice-differentiable augmented Lagrangian function. Since the 1970s, sequential quadratic programming (SQP) and interior point methods (IPM) have been given more attention, in part because they more easily use sparse matrix subroutines from numerical software libraries, and in part because IPMs possess proven complexity results via the theory of self-concordant functions. The augmented Lagrangian method was rejuvenated by the optimization systems LANCELOT, ALGENCAN and AMPL, which allowed sparse matrix techniques to be used on seemingly dense but "partially-separable" problems. 
The method is still useful for some problems. Around 2007, there was a resurgence of augmented Lagrangian methods in fields such as total variation denoising and compressed sensing. In particular, a variant of the standard augmented Lagrangian method that uses partial updates (similar to the Gauss–Seidel method for solving linear equations) known as the alternating direction method of multipliers or ADMM gained some attention. == General method == Consider solving the following constrained optimization problem: min f ( x ) {\displaystyle \min f(\mathbf {x} )} subject to c i ( x ) = 0 ∀ i ∈ E , {\displaystyle c_{i}(\mathbf {x} )=0~\forall i\in {\mathcal {E}},} where E {\displaystyle {\mathcal {E}}} denotes the indices for equality constraints. This problem can be solved as a series of unconstrained minimization problems. For reference, we first list the kth step of the penalty method approach: min Φ k ( x ) = f ( x ) + μ k ∑ i ∈ E c i ( x ) 2 . {\displaystyle \min \Phi _{k}(\mathbf {x} )=f(\mathbf {x} )+\mu _{k}~\sum _{i\in {\mathcal {E}}}~c_{i}(\mathbf {x} )^{2}.} The penalty method solves this problem, then at the next iteration it re-solves the problem using a larger value of μ k {\displaystyle \mu _{k}} and using the old solution as the initial guess or "warm start". 
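For concreteness, the following Python sketch implements the method of multipliers for a single equality constraint, alternating an inner unconstrained minimization of the augmented objective with the multiplier update λ ← λ + μ c(x). The toy problem, the plain-gradient-descent inner solver and all parameter values are assumptions for illustration only:

```python
def aug_lagrangian(grad_f, c, grad_c, x0, mu=10.0, outer=10, lr=0.05, inner=2000):
    """Method of multipliers for one equality constraint c(x) = 0.
    The inner solver is plain gradient descent on Phi_k (a toy stand-in
    for a real unconstrained optimizer)."""
    x, lam = list(x0), 0.0
    for _ in range(outer):
        for _ in range(inner):
            # gradient of f + (mu/2) c^2 + lam * c
            coeff = mu * c(x) + lam
            g = [gf + coeff * gc for gf, gc in zip(grad_f(x), grad_c(x))]
            x = [xi - lr * gi for xi, gi in zip(x, g)]
        lam += mu * c(x)          # multiplier update
    return x, lam

# hypothetical example: min x1^2 + x2^2  s.t.  x1 + x2 = 1
# constrained minimizer (0.5, 0.5), true Lagrange multiplier -1
grad_f = lambda x: [2.0 * x[0], 2.0 * x[1]]
c      = lambda x: x[0] + x[1] - 1.0
grad_c = lambda x: [1.0, 1.0]
x_opt, lam = aug_lagrangian(grad_f, c, grad_c, [0.0, 0.0])
```

Note that the penalty parameter μ stays fixed throughout: the multiplier estimate λ, not an ever-growing penalty, is what drives the constraint violation to zero.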
The augmented Lagrangian method uses the following unconstrained objective: min Φ k ( x ) = f ( x ) + μ k 2 ∑ i ∈ E c i ( x ) 2 + ∑ i ∈ E λ i c i ( x ) {\displaystyle \min \Phi _{k}(\mathbf {x} )=f(\mathbf {x} )+{\frac {\mu _{k}}{2}}~\sum _{i\in {\mathcal {E}}}~c_{i}(\mathbf {x} )^{2}+\sum _{i\in {\mathcal {E}}}~\lambda _{i}c_{i}(\mathbf {x} )} and after each iteration, in addition to updating μ k {\displaystyle \mu _{k}} , the variable λ {\displaystyle \lambda } is also updated according to the rule λ i ← λ i + μ k c i ( x k ) {\displaystyle \lambda _{i}\leftarrow \lambda _{i}+\mu _{k}c_{i}(\mathbf {x} _{k})} where x k {\displaystyle \mathbf {x} _{k}} is the solution to the unconstrained problem at the kth step (i.e. x k = argmin Φ k ( x ) {\displaystyle \mathbf {x} _{k}={\text{argmin}}\Phi _{k}(\mathbf {x} )} ). The variable λ {\displaystyle \lambda } is an estimate of the Lagrange multiplier, and the accuracy of this estimate improves at every step. The major advantage of the method is that unlike the penalty method, it is not necessary to take μ → ∞ {\displaystyle \mu \rightarrow \infty } in order to solve the original constrained problem. Because of the presence of the Lagrange multiplier term, μ {\displaystyle \mu } can stay much smaller, thus avoiding ill-conditioning. Nevertheless, it is common in practical implementations to project multiplier estimates onto a large bounded set (safeguards), which avoids numerical instabilities and leads to strong theoretical convergence guarantees. The method can be extended to handle inequality constraints. For a discussion of practical improvements, see refs. == Alternating direction method of multipliers == The alternating direction method of multipliers (ADMM) is a variant of the augmented Lagrangian scheme that uses partial updates for the dual variables. This method is often applied to solve problems such as min x f ( x ) + g ( M x ) . 
{\displaystyle \min _{x}f(x)+g(Mx).} This is equivalent to the constrained problem, min x , y f ( x ) + g ( y ) , subject to M x = y . {\displaystyle \min _{x,y}f(x)+g(y),\quad {\text{subject to}}\quad Mx=y.} Though this change may seem trivial, the problem can now be attacked using methods of constrained optimization (in particular, the augmented Lagrangian method), and the objective function is separable in x and y. The dual update requires solving a proximity function in x and y at the same time; the ADMM technique allows this problem to be solved approximately by first solving for x with y fixed and then solving for y with x fixed. Rather than iterate this process until convergence (like the Jacobi method), the ADMM algorithm proceeds directly to updating the dual variable and then repeats the process. This is not equivalent to the exact minimization, but the method still converges to the correct solution under some assumptions. Because it does not exactly or even approximately minimize the augmented Lagrangian before each dual update, the algorithm is distinct from the ordinary augmented Lagrangian method. The ADMM can be viewed as an application of the Douglas-Rachford splitting algorithm, and the Douglas-Rachford algorithm is in turn an instance of the proximal point algorithm; details can be found in ref. There are several modern software packages, including YALL1 (2009), SpaRSA (2009) and SALSA (2009), which solve basis pursuit and variants and use the ADMM. There are also packages which use the ADMM to solve more general problems, some of which can exploit multiple computing cores (e.g., SNAPVX (2015), parADMM (2016)). == Stochastic optimization == Stochastic optimization considers the problem of minimizing a loss function with access to noisy samples of the (gradient of the) function. The goal is to have an estimate of the optimal parameter (minimizer) per new sample. With some modifications, ADMM can be used for stochastic optimization. 
In a stochastic setting, only noisy samples of a gradient are accessible, so an inexact approximation of the Lagrangian is used: L ^ ρ , k = f 1 ( x k ) + ⟨ ∇ f ( x k , ζ k + 1 ) , x ⟩ + g ( y ) − z T ( A x + B y − c ) + ρ 2 ‖ A x + B y − c ‖ 2 + ‖ x − x k ‖ 2 2 η k + 1 , {\displaystyle {\hat {\mathcal {L}}}_{\rho ,k}=f_{1}(x_{k})+\langle \nabla f(x_{k},\zeta _{k+1}),x\rangle +g(y)-z^{T}(Ax+By-c)+{\frac {\rho }{2}}\Vert Ax+By-c\Vert ^{2}+{\frac {\Vert x-x_{k}\Vert ^{2}}{2\eta _{k+1}}},} where η k + 1 {\displaystyle \eta _{k+1}} is a time-varying step size. ADMM has been applied to solve regularized problems, where the function optimization and regularization can be carried out locally and then coordinated globally via constraints. Regularized optimization problems are especially relevant in the high-dimensional regime as regularization is a natural mechanism to overcome ill-posedness and to encourage parsimony in the optimal solution (e.g., sparsity and low rank). ADMM's effectiveness for solving regularized problems may mean it could be useful for solving high-dimensional stochastic optimization problems. == Alternative approaches == Sequential quadratic programming Sequential linear programming Sequential linear-quadratic programming == Software == Open source and non-free/commercial implementations of the augmented Lagrangian method: Accord.NET (C# implementation of augmented Lagrangian optimizer) ALGLIB (C# and C++ implementations of preconditioned augmented Lagrangian solver) PENNON (GPL 3, commercial license available) LANCELOT (free "internal use" license, paid commercial options) MINOS (also uses an augmented Lagrangian method for some types of problems). The code for Apache 2.0 licensed REASON is available online. ALGENCAN (Fortran implementation of augmented Lagrangian method with safeguards). Available online. 
NLOPT (C++ implementation of augmented Lagrangian optimizer, accessible from different programming languages) PyProximal (Python implementation of augmented Lagrangian method). == See also == Barrier function Interior-point method Lagrange multiplier Penalty method == References == == Bibliography == Bertsekas, Dimitri P. (1999), Nonlinear Programming (2nd ed.), Belmont, Mass: Athena Scientific, ISBN 978-1-886529-00-7 Birgin, E. G.; Martínez, J. M. (2014), Practical Augmented Lagrangian Methods for Constrained Optimization, Philadelphia: Society for Industrial and Applied Mathematics, doi:10.1137/1.9781611973365, ISBN 978-1-611973-35-8 Nocedal, Jorge; Wright, Stephen J. (2006), Numerical Optimization (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-30303-1
Wikipedia/Augmented_Lagrangian_method
Subgradient methods are convex optimization methods which use subderivatives. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent when applied even to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of gradient descent. Subgradient methods are slower than Newton's method when applied to minimize twice continuously differentiable convex functions. However, Newton's method fails to converge on problems that have non-differentiable kinks. In recent years, some interior-point methods have been suggested for convex minimization problems, but subgradient projection methods and related bundle methods of descent remain competitive. For convex minimization problems with a very large number of dimensions, subgradient-projection methods are suitable, because they require little storage. Subgradient projection methods are often applied to large-scale problems with decomposition techniques; such decomposition methods often admit a simple distributed implementation. == Classical subgradient rules == Let f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } be a convex function with domain R n . {\displaystyle \mathbb {R} ^{n}.} A classical subgradient method iterates x ( k + 1 ) = x ( k ) − α k g ( k ) {\displaystyle x^{(k+1)}=x^{(k)}-\alpha _{k}g^{(k)}\ } where g ( k ) {\displaystyle g^{(k)}} denotes any subgradient of f {\displaystyle f\ } at x ( k ) , {\displaystyle x^{(k)},\ } and x ( k ) {\displaystyle x^{(k)}} is the k t h {\displaystyle k^{th}} iterate of x . {\displaystyle x.} If f {\displaystyle f\ } is differentiable, then its only subgradient is the gradient vector ∇ f {\displaystyle \nabla f} itself. It may happen that − g ( k ) {\displaystyle -g^{(k)}} is not a descent direction for f {\displaystyle f\ } at x ( k ) . 
{\displaystyle x^{(k)}.} We therefore maintain a list f b e s t {\displaystyle f_{\rm {best}}\ } that keeps track of the lowest objective function value found so far, i.e. f b e s t ( k ) = min { f b e s t ( k − 1 ) , f ( x ( k ) ) } . {\displaystyle f_{\rm {best}}^{(k)}=\min\{f_{\rm {best}}^{(k-1)},f(x^{(k)})\}.} === Step size rules === Many different types of step-size rules are used by subgradient methods. This article notes five classical step-size rules for which convergence proofs are known: Constant step size, α k = α . {\displaystyle \alpha _{k}=\alpha .} Constant step length, α k = γ / ‖ g ( k ) ‖ 2 , {\displaystyle \alpha _{k}=\gamma /\lVert g^{(k)}\rVert _{2},} which gives ‖ x ( k + 1 ) − x ( k ) ‖ 2 = γ . {\displaystyle \lVert x^{(k+1)}-x^{(k)}\rVert _{2}=\gamma .} Square summable but not summable step size, i.e. any step sizes satisfying α k ≥ 0 , ∑ k = 1 ∞ α k 2 < ∞ , ∑ k = 1 ∞ α k = ∞ . {\displaystyle \alpha _{k}\geq 0,\qquad \sum _{k=1}^{\infty }\alpha _{k}^{2}<\infty ,\qquad \sum _{k=1}^{\infty }\alpha _{k}=\infty .} Nonsummable diminishing, i.e. any step sizes satisfying α k ≥ 0 , lim k → ∞ α k = 0 , ∑ k = 1 ∞ α k = ∞ . {\displaystyle \alpha _{k}\geq 0,\qquad \lim _{k\to \infty }\alpha _{k}=0,\qquad \sum _{k=1}^{\infty }\alpha _{k}=\infty .} Nonsummable diminishing step lengths, i.e. α k = γ k / ‖ g ( k ) ‖ 2 , {\displaystyle \alpha _{k}=\gamma _{k}/\lVert g^{(k)}\rVert _{2},} where γ k ≥ 0 , lim k → ∞ γ k = 0 , ∑ k = 1 ∞ γ k = ∞ . {\displaystyle \gamma _{k}\geq 0,\qquad \lim _{k\to \infty }\gamma _{k}=0,\qquad \sum _{k=1}^{\infty }\gamma _{k}=\infty .} For all five rules, the step-sizes are determined "off-line", before the method is iterated; the step-sizes do not depend on preceding iterations. 
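As an illustration, the classical iteration with a square-summable-but-not-summable rule (here α_k = 1/k) can be sketched in Python. The nondifferentiable objective below is a hypothetical example, not from the article:

```python
def subgradient_method(f, subgrad, x0, iters=5000):
    """Classical subgradient iteration x <- x - alpha_k * g, using the
    square-summable-but-not-summable step size alpha_k = 1/k and keeping
    track of the best value found so far."""
    x = list(x0)
    f_best = f(x)
    for k in range(1, iters + 1):
        g = subgrad(x)
        x = [xi - (1.0 / k) * gi for xi, gi in zip(x, g)]
        f_best = min(f_best, f(x))   # -g need not be a descent direction
    return f_best

# hypothetical nondifferentiable objective with minimum 0 at (1, -2)
sign = lambda v: (v > 0) - (v < 0)
f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
subgrad = lambda x: [sign(x[0] - 1.0), sign(x[1] + 2.0)]
f_best = subgradient_method(f, subgrad, [0.0, 0.0])
```

Tracking f_best is essential here: the iterates oscillate around the kink at the minimizer, so the last iterate is not necessarily the best one.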
This "off-line" property of subgradient methods differs from the "on-line" step-size rules used for descent methods for differentiable functions: Many methods for minimizing differentiable functions satisfy Wolfe's sufficient conditions for convergence, where step-sizes typically depend on the current point and the current search-direction. An extensive discussion of stepsize rules for subgradient methods, including incremental versions, is given in the books by Bertsekas and by Bertsekas, Nedic, and Ozdaglar. === Convergence results === For constant step-length and scaled subgradients having Euclidean norm equal to one, the subgradient method converges to an arbitrarily close approximation to the minimum value, that is lim k → ∞ f b e s t ( k ) − f ∗ < ϵ {\displaystyle \lim _{k\to \infty }f_{\rm {best}}^{(k)}-f^{*}<\epsilon } by a result of Shor. These classical subgradient methods have poor performance and are no longer recommended for general use. However, they are still used widely in specialized applications because they are simple and they can be easily adapted to take advantage of the special structure of the problem at hand. == Subgradient-projection and bundle methods == During the 1970s, Claude Lemaréchal and Phil Wolfe proposed "bundle methods" of descent for problems of convex minimization. The meaning of the term "bundle methods" has changed significantly since that time. Modern versions and full convergence analysis were provided by Kiwiel. Contemporary bundle-methods often use "level control" rules for choosing step-sizes, developing techniques from the "subgradient-projection" method of Boris T. Polyak (1969). However, there are problems on which bundle methods offer little advantage over subgradient-projection methods. 
== Constrained optimization == === Projected subgradient === One extension of the subgradient method is the projected subgradient method, which solves the constrained optimization problem minimize f ( x ) {\displaystyle f(x)\ } subject to x ∈ C {\displaystyle x\in {\mathcal {C}}} where C {\displaystyle {\mathcal {C}}} is a convex set. The projected subgradient method uses the iteration x ( k + 1 ) = P ( x ( k ) − α k g ( k ) ) {\displaystyle x^{(k+1)}=P\left(x^{(k)}-\alpha _{k}g^{(k)}\right)} where P {\displaystyle P} is projection on C {\displaystyle {\mathcal {C}}} and g ( k ) {\displaystyle g^{(k)}} is any subgradient of f {\displaystyle f\ } at x ( k ) . {\displaystyle x^{(k)}.} === General constraints === The subgradient method can be extended to solve the inequality constrained problem minimize f 0 ( x ) {\displaystyle f_{0}(x)\ } subject to f i ( x ) ≤ 0 , i = 1 , … , m {\displaystyle f_{i}(x)\leq 0,\quad i=1,\ldots ,m} where f i {\displaystyle f_{i}} are convex. The algorithm takes the same form as the unconstrained case x ( k + 1 ) = x ( k ) − α k g ( k ) {\displaystyle x^{(k+1)}=x^{(k)}-\alpha _{k}g^{(k)}\ } where α k > 0 {\displaystyle \alpha _{k}>0} is a step size, and g ( k ) {\displaystyle g^{(k)}} is a subgradient of the objective or one of the constraint functions at x . {\displaystyle x.\ } Take g ( k ) = { ∂ f 0 ( x ) if f i ( x ) ≤ 0 ∀ i = 1 … m ∂ f j ( x ) for some j such that f j ( x ) > 0 {\displaystyle g^{(k)}={\begin{cases}\partial f_{0}(x)&{\text{ if }}f_{i}(x)\leq 0\;\forall i=1\dots m\\\partial f_{j}(x)&{\text{ for some }}j{\text{ such that }}f_{j}(x)>0\end{cases}}} where ∂ f {\displaystyle \partial f} denotes the subdifferential of f . {\displaystyle f.\ } If the current point is feasible, the algorithm uses an objective subgradient; if the current point is infeasible, the algorithm chooses a subgradient of any violated constraint. 
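A Python sketch of the projected subgradient iteration above, for a hypothetical box constraint C = [1, 2] × [1, 2] whose Euclidean projection is simple coordinate-wise clipping:

```python
def projected_subgradient(subgrad, project, x0, iters=100):
    """Projected subgradient iteration x <- P(x - alpha_k * g),
    with the diminishing step size alpha_k = 1/k."""
    x = project(list(x0))
    for k in range(1, iters + 1):
        g = subgrad(x)
        x = project([xi - (1.0 / k) * gi for xi, gi in zip(x, g)])
    return x

# hypothetical problem: minimize |x1| + |x2| over the box C = [1, 2] x [1, 2];
# the constrained minimizer is the corner (1, 1)
clip = lambda v: min(2.0, max(1.0, v))
project = lambda x: [clip(v) for v in x]
sign = lambda v: (v > 0) - (v < 0)
subgrad = lambda x: [sign(x[0]), sign(x[1])]
x_opt = projected_subgradient(subgrad, project, [2.0, 2.0])
```

For a box, the projection P reduces to clipping each coordinate into its interval; for general convex sets it is the (possibly expensive) nearest-point map onto C.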
== See also == Stochastic gradient descent – Optimization algorithm == References == == Further reading == Bertsekas, Dimitri P. (1999). Nonlinear Programming. Belmont, MA.: Athena Scientific. ISBN 1-886529-00-0. Bertsekas, Dimitri P.; Nedic, Angelia; Ozdaglar, Asuman (2003). Convex Analysis and Optimization (Second ed.). Belmont, MA.: Athena Scientific. ISBN 1-886529-45-0. Bertsekas, Dimitri P. (2015). Convex Optimization Algorithms. Belmont, MA.: Athena Scientific. ISBN 978-1-886529-28-1. Shor, Naum Z. (1985). Minimization Methods for Non-differentiable Functions. Springer-Verlag. ISBN 0-387-12763-1. Ruszczyński, Andrzej (2006). Nonlinear Optimization. Princeton, NJ: Princeton University Press. pp. xii+454. ISBN 978-0691119151. MR 2199043. == External links == EE364A and EE364B, Stanford's convex optimization course sequence.
Wikipedia/Subgradient_method
Dinic's algorithm or Dinitz's algorithm is a strongly polynomial algorithm for computing the maximum flow in a flow network, conceived in 1970 by Israeli (formerly Soviet) computer scientist Yefim Dinitz. The algorithm runs in O ( | V | 2 | E | ) {\displaystyle O(|V|^{2}|E|)} time and is similar to the Edmonds–Karp algorithm, which runs in O ( | V | | E | 2 ) {\displaystyle O(|V||E|^{2})} time, in that it uses shortest augmenting paths. The introduction of the concepts of the level graph and blocking flow enables Dinic's algorithm to achieve its performance. == History == Dinitz invented the algorithm in January 1969, as a master's student in Georgy Adelson-Velsky's group. A few decades later, he would recall: In Adel'son-Vel'sky's Algorithms class, the lecturer had a habit of giving the problem to be discussed at the next meeting as an exercise to students. The DA was invented in response to such an exercise. At that time, the author was not aware of the basic facts regarding [the Ford–Fulkerson algorithm]…. ⋮ Ignorance sometimes has its merits. Very probably, DA would not have been invented then, if the idea of possible saturated edge desaturation had been known to the author. In 1970, Dinitz published a description of the algorithm in Doklady Akademii Nauk SSSR. In 1974, Shimon Even and (his then Ph.D. student) Alon Itai at the Technion in Haifa were very curious and intrigued by Dinitz's algorithm as well as Alexander V. Karzanov's related idea of blocking flow. However, it was hard for them to decipher these two papers, each being limited to four pages to meet the restrictions of the journal Doklady Akademii Nauk SSSR. Even did not give up, and after three days of effort managed to understand both papers except for the layered network maintenance issue. Over the next couple of years, Even gave lectures on "Dinic's algorithm", mispronouncing the name of the author while popularizing it. 
Even and Itai also contributed to this algorithm by combining BFS and DFS, which is how the algorithm is now commonly presented. For about 10 years after the Ford–Fulkerson algorithm was invented, it was unknown whether it could be made to terminate in polynomial time in the general case of irrational edge capacities; consequently, no polynomial-time algorithm for the max flow problem was known in the general case. Dinitz's algorithm and the Edmonds–Karp algorithm (published in 1972) both independently showed that in the Ford–Fulkerson algorithm, if each augmenting path is the shortest one, then the length of the augmenting paths is non-decreasing and the algorithm always terminates. == Definition == Let G = ( ( V , E ) , c , f , s , t ) {\displaystyle G=((V,E),c,f,s,t)} be a network with c ( u , v ) {\displaystyle c(u,v)} and f ( u , v ) {\displaystyle f(u,v)} the capacity and the flow of the edge ( u , v ) {\displaystyle (u,v)} , respectively. The residual capacity is a mapping c f : V × V → R + {\displaystyle c_{f}\colon V\times V\to R^{+}} defined as, if ( u , v ) ∈ E {\displaystyle (u,v)\in E} , c f ( u , v ) = c ( u , v ) − f ( u , v ) {\displaystyle c_{f}(u,v)=c(u,v)-f(u,v)} if ( v , u ) ∈ E {\displaystyle (v,u)\in E} , c f ( u , v ) = f ( v , u ) {\displaystyle c_{f}(u,v)=f(v,u)} c f ( u , v ) = 0 {\displaystyle c_{f}(u,v)=0} otherwise. The residual graph is an unweighted graph G f = ( ( V , E f ) , c f | E f , s , t ) {\displaystyle G_{f}=((V,E_{f}),c_{f}|_{E_{f}},s,t)} , where E f = { ( u , v ) ∈ V × V : c f ( u , v ) > 0 } {\displaystyle E_{f}=\{(u,v)\in V\times V\colon \;c_{f}(u,v)>0\}} . An augmenting path is an s {\displaystyle s} – t {\displaystyle t} path in the residual graph G f {\displaystyle G_{f}} . Define dist ⁡ ( v ) {\displaystyle \operatorname {dist} (v)} to be the length of the shortest path from s {\displaystyle s} to v {\displaystyle v} in G f {\displaystyle G_{f}} . 
Then the level graph of G f {\displaystyle G_{f}} is the graph G L = ( ( V , E L ) , c f | E L , s , t ) {\displaystyle G_{L}=((V,E_{L}),c_{f}|_{E_{L}},s,t)} , where E L = { ( u , v ) ∈ E f : dist ⁡ ( v ) = dist ⁡ ( u ) + 1 } {\displaystyle E_{L}=\{(u,v)\in E_{f}\colon \;\operatorname {dist} (v)=\operatorname {dist} (u)+1\}} . A blocking flow is an s {\displaystyle s} – t {\displaystyle t} flow f ′ {\displaystyle f'} such that the graph G ′ = ( ( V , E L ′ ) , s , t ) {\displaystyle G'=((V,E_{L}'),s,t)} with E L ′ = { ( u , v ) : f ′ ( u , v ) < c f | E L ( u , v ) } {\displaystyle E_{L}'=\{(u,v)\colon \;f'(u,v)<c_{f}|_{E_{L}}(u,v)\}} contains no s {\displaystyle s} – t {\displaystyle t} path. == Algorithm == Dinic's Algorithm Input: A network G = ( ( V , E ) , c , s , t ) {\displaystyle G=((V,E),c,s,t)} . Output: An s {\displaystyle s} – t {\displaystyle t} flow f {\displaystyle f} of maximum value. Set f ( e ) = 0 {\displaystyle f(e)=0} for each e ∈ E {\displaystyle e\in E} . Construct G L {\displaystyle G_{L}} from G f {\displaystyle G_{f}} of G {\displaystyle G} . If dist ⁡ ( t ) = ∞ {\displaystyle \operatorname {dist} (t)=\infty } , stop and output f {\displaystyle f} . Find a blocking flow f ′ {\displaystyle f'} in G L {\displaystyle G_{L}} . Augment flow f {\displaystyle f} by f ′ {\displaystyle f'} and go back to step 2. == Analysis == It can be shown that the number of layers in each blocking flow increases by at least 1 each time and thus there are at most | V | − 1 {\displaystyle |V|-1} blocking flows in the algorithm. For each of them: the level graph G L {\displaystyle G_{L}} can be constructed by breadth-first search in O ( E ) {\displaystyle O(E)} time a blocking flow in the level graph G L {\displaystyle G_{L}} can be found in O ( V E ) {\displaystyle O(VE)} time with total running time O ( E + V E ) = O ( V E ) {\displaystyle O(E+VE)=O(VE)} for each layer. 
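The five steps above can be sketched compactly in Python: a BFS builds the level graph, and a DFS with current-arc pointers (the `it` array) finds a blocking flow by repeatedly pushing along shortest augmenting paths. The example network at the end is an assumption for illustration:

```python
from collections import deque

class Dinic:
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]  # edge record: [to, capacity, index of reverse edge]

    def add_edge(self, u, v, cap):
        self.adj[u].append([v, cap, len(self.adj[v])])
        self.adj[v].append([u, 0, len(self.adj[u]) - 1])   # residual (reverse) edge

    def _bfs(self, s, t):
        """Build the level graph; return True while t is still reachable."""
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in self.adj[u]:
                if cap > 0 and self.level[v] < 0:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] >= 0

    def _dfs(self, u, t, pushed):
        """Push flow along level-increasing edges only."""
        if u == t:
            return pushed
        while self.it[u] < len(self.adj[u]):
            v, cap, rev = self.adj[u][self.it[u]]
            if cap > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(pushed, cap))
                if d > 0:
                    self.adj[u][self.it[u]][1] -= d    # consume forward capacity
                    self.adj[v][rev][1] += d           # open reverse capacity
                    return d
            self.it[u] += 1                            # current-arc: edge exhausted

        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):                 # step 2: construct the level graph
            self.it = [0] * self.n
            while True:                        # step 4: saturate it -> blocking flow
                d = self._dfs(s, t, float('inf'))
                if d == 0:
                    break
                flow += d                      # step 5: augment
        return flow

# hypothetical example network; its maximum flow is 5
dinic = Dinic(4)
for u, v, cap in [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)]:
    dinic.add_edge(u, v, cap)
flow = dinic.max_flow(0, 3)
```

The current-arc pointers are what make each blocking-flow phase run in O(VE) time: an edge is scanned again within a phase only while it can still carry flow.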
As a consequence, the running time of Dinic's algorithm is O ( V 2 E ) {\displaystyle O(V^{2}E)} . Using a data structure called dynamic trees, the running time of finding a blocking flow in each phase can be reduced to O ( E log ⁡ V ) {\displaystyle O(E\log V)} and therefore the running time of Dinic's algorithm can be improved to O ( V E log ⁡ V ) {\displaystyle O(VE\log V)} . === Special cases === In networks with unit capacities, a much stronger time bound holds. Each blocking flow can be found in O ( E ) {\displaystyle O(E)} time, and it can be shown that the number of phases does not exceed O ( E ) {\displaystyle O({\sqrt {E}})} and O ( V 2 / 3 ) {\displaystyle O(V^{2/3})} . Thus the algorithm runs in O ( min { V 2 / 3 , E 1 / 2 } E ) {\displaystyle O(\min\{V^{2/3},E^{1/2}\}E)} time. In networks that arise from the bipartite matching problem, the number of phases is bounded by O ( V ) {\displaystyle O({\sqrt {V}})} , therefore leading to the O ( V E ) {\displaystyle O({\sqrt {V}}E)} time bound. The resulting algorithm is also known as Hopcroft–Karp algorithm. More generally, this bound holds for any unit network — a network in which each vertex, except for source and sink, either has a single entering edge of capacity one, or a single outgoing edge of capacity one, and all other capacities are arbitrary integers. == Example == The following is a simulation of Dinic's algorithm. In the level graph G L {\displaystyle G_{L}} , the vertices with labels in red are the values dist ⁡ ( v ) {\displaystyle \operatorname {dist} (v)} . The paths in blue form a blocking flow. == See also == Ford–Fulkerson algorithm Maximum flow problem == Notes == == References == Dinitz, Yefim (2006). "Dinitz' Algorithm: The Original Version and Even's Version". In Oded Goldreich; Arnold L. Rosenberg; Alan L. Selman (eds.). Theoretical Computer Science: Essays in Memory of Shimon Even. Lecture Notes in Computer Science. Vol. 3895. Springer. pp. 218–240. doi:10.1007/11685654_10. 
ISBN 978-3-540-32880-3. Kadar, Ilan; Albagli, Sivan (18 April 2019). Dinitz's algorithm for finding a maximum flow in a network. Ben-Gurion University. Archived from the original on 22 December 2023. Korte, B. H.; Vygen, Jens (2008). "8.4 Blocking Flows and Fujishige's Algorithm". Combinatorial Optimization: Theory and Algorithms (Algorithms and Combinatorics, 21). Springer Berlin Heidelberg. pp. 174–176. ISBN 978-3-540-71844-4. Tarjan, R. E. (1983). Data structures and network algorithms.
Wikipedia/Dinic's_algorithm
In computer science, Prim's algorithm is a greedy algorithm that finds a minimum spanning tree for a weighted undirected graph. This means it finds a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized. The algorithm operates by building this tree one vertex at a time, from an arbitrary starting vertex, at each step adding the cheapest possible connection from the tree to another vertex. The algorithm was developed in 1930 by Czech mathematician Vojtěch Jarník and later rediscovered and republished by computer scientists Robert C. Prim in 1957 and Edsger W. Dijkstra in 1959. Therefore, it is also sometimes called Jarník's algorithm, the Prim–Jarník algorithm, the Prim–Dijkstra algorithm or the DJP algorithm. Other well-known algorithms for this problem include Kruskal's algorithm and Borůvka's algorithm. These algorithms find the minimum spanning forest in a possibly disconnected graph; in contrast, the most basic form of Prim's algorithm only finds minimum spanning trees in connected graphs. However, by running Prim's algorithm separately for each connected component of the graph, it can also be used to find the minimum spanning forest. In terms of their asymptotic time complexity, these three algorithms are equally fast for sparse graphs, but slower than other more sophisticated algorithms. However, for graphs that are sufficiently dense, Prim's algorithm can be made to run in linear time, meeting or improving the time bounds for other algorithms. == Description == The algorithm may informally be described as performing the following steps: initialize a tree with a single vertex, chosen arbitrarily from the graph; grow the tree by one edge, namely the minimum-weight edge connecting the tree to a vertex not yet in the tree; repeat until all vertices are in the tree. In more detail, it may be implemented following the pseudocode below. 
function Prim(vertices, edges) is
    for each vertex in vertices do
        cheapestCost[vertex] ← ∞
        cheapestEdge[vertex] ← null
    explored ← empty set
    unexplored ← set containing all vertices
    startVertex ← any element of vertices
    cheapestCost[startVertex] ← 0
    while unexplored is not empty do
        // Select vertex in unexplored with minimum cost
        currentVertex ← vertex in unexplored with minimum cheapestCost[vertex]
        unexplored.remove(currentVertex)
        explored.add(currentVertex)
        for each edge (currentVertex, neighbor) in edges do
            if neighbor in unexplored and weight(currentVertex, neighbor) < cheapestCost[neighbor] then
                cheapestCost[neighbor] ← weight(currentVertex, neighbor)
                cheapestEdge[neighbor] ← (currentVertex, neighbor)
    resultEdges ← empty list
    for each vertex in vertices do
        if cheapestEdge[vertex] ≠ null then
            resultEdges.append(cheapestEdge[vertex])
    return resultEdges

As described above, the starting vertex for the algorithm will be chosen arbitrarily, because the first iteration of the main loop of the algorithm will have a set of unexplored vertices whose costs are all equal, and the algorithm will automatically start a new tree when it completes a spanning tree of each connected component of the input graph. The algorithm may be modified to start with any particular vertex s by setting cheapestCost[s] to be a number smaller than the other initial costs (for instance, zero), and it may be modified to only find a single spanning tree rather than an entire spanning forest (matching more closely the informal description) by stopping whenever it encounters another vertex flagged as having no associated edge. Different variations of the algorithm differ from each other in how the set of unexplored vertices is implemented: as a simple linked list or array of vertices, or as a more complicated priority queue data structure. This choice leads to differences in the time complexity of the algorithm. 
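The pseudocode above can be turned into a short Python implementation. This sketch keeps candidate edges in a binary heap with "lazy deletion" (stale entries are simply skipped when popped) rather than performing an explicit decrease-key operation, a common practical simplification; the example graph is hypothetical:

```python
import heapq

def prim_mst(n, adj):
    """Prim's algorithm for a connected undirected graph on vertices 0..n-1.
    adj[u] is a list of (weight, v) pairs. Returns (total weight, MST edges)."""
    visited = [False] * n
    mst_edges, total = [], 0
    heap = [(0, 0, -1)]            # (edge weight, vertex to add, tree endpoint)
    while heap:
        w, u, parent = heapq.heappop(heap)
        if visited[u]:
            continue               # stale entry: u was already added more cheaply
        visited[u] = True
        if parent >= 0:
            mst_edges.append((parent, u))
            total += w
        for weight, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (weight, v, u))
    return total, mst_edges

# hypothetical 4-vertex example; its minimum spanning tree has total weight 6
adj = [[(1, 1), (4, 2)],
       [(1, 0), (2, 2)],
       [(4, 0), (2, 1), (3, 3)],
       [(3, 2)]]
total, edges = prim_mst(4, adj)
```

Since every edge may enter the heap once per direction, this lazy variant runs in O(|E| log |E|) time, corresponding to the edge-heap bound mentioned in this article.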
In general, a priority queue will be quicker at finding the vertex with minimum cost, but will entail more expensive updates when the recorded cheapest cost of a vertex changes. == Time complexity == The time complexity of Prim's algorithm depends on the data structures used for the graph and for ordering the edges by weight, which can be done using a priority queue. The following table shows the typical choices: A simple implementation of Prim's, using an adjacency matrix or an adjacency list graph representation and linearly searching an array of weights to find the minimum weight edge to add, requires O(|V|^2) running time. However, this running time can be greatly improved by using heaps to implement finding minimum weight edges in the algorithm's inner loop. A first improved version uses a heap to store all edges of the input graph, ordered by their weight. This leads to an O(|E| log |E|) worst-case running time. But storing vertices instead of edges can improve it still further. The heap should order the vertices by the smallest edge-weight that connects them to any vertex in the partially constructed minimum spanning tree (MST) (or infinity if no such edge exists). Every time a vertex v is chosen and added to the MST, a decrease-key operation is performed on all vertices w outside the partial MST such that v is connected to w, setting the key to the minimum of its previous value and the edge cost of (v,w). Using a simple binary heap data structure, Prim's algorithm can now be shown to run in time O(|E| log |V|) where |E| is the number of edges and |V| is the number of vertices. Using a more sophisticated Fibonacci heap, this can be brought down to O(|E| + |V| log |V|), which is asymptotically faster when the graph is dense enough that |E| is ω(|V|), and linear time when |E| is at least |V| log |V|. 
For graphs of even greater density (having at least |V|^c edges for some c > 1), Prim's algorithm can be made to run in linear time even more simply, by using a d-ary heap in place of a Fibonacci heap. == Proof of correctness == Let P be a connected, weighted graph. At every iteration of Prim's algorithm, an edge must be found that connects a vertex in a subgraph to a vertex outside the subgraph. Since P is connected, there will always be a path to every vertex. The output Y of Prim's algorithm is a tree, because the edge and vertex added to tree Y are connected. Let Y1 be a minimum spanning tree of graph P. If Y1=Y then Y is a minimum spanning tree. Otherwise, let e be the first edge added during the construction of tree Y that is not in tree Y1, and V be the set of vertices connected by the edges added before edge e. Then one endpoint of edge e is in set V and the other is not. Since tree Y1 is a spanning tree of graph P, there is a path in tree Y1 joining the two endpoints. As one travels along the path, one must encounter an edge f joining a vertex in set V to one that is not in set V. Now, at the iteration when edge e was added to tree Y, edge f could also have been added and it would be added instead of edge e if its weight was less than that of e, and since edge f was not added, we conclude that w ( f ) ≥ w ( e ) . {\displaystyle w(f)\geq w(e).} Let tree Y2 be the graph obtained by removing edge f from tree Y1 and adding edge e. It is easy to show that tree Y2 is connected, has the same number of edges as tree Y1, and the total weight of its edges is not larger than that of tree Y1; therefore it is also a minimum spanning tree of graph P and it contains edge e and all the edges added before it during the construction of set V. Repeating the steps above, we eventually obtain a minimum spanning tree of graph P that is identical to tree Y. This shows Y is a minimum spanning tree. 
In other words, each partial tree built by the algorithm can be extended to a minimum spanning tree of the whole graph, which is the invariant underlying the proof. == Parallel algorithm == The main loop of Prim's algorithm is inherently sequential and thus not parallelizable. However, the inner loop, which determines the next edge of minimum weight that does not form a cycle, can be parallelized by dividing the vertices and edges between the available processors. The following pseudocode demonstrates this. This algorithm can generally be implemented on distributed machines as well as on shared memory machines. The running time is O ( | V | 2 | P | ) + O ( | V | log ⁡ | P | ) {\displaystyle O({\tfrac {|V|^{2}}{|P|}})+O(|V|\log |P|)} , assuming that the reduce and broadcast operations can be performed in O ( log ⁡ | P | ) {\displaystyle O(\log |P|)} . A variant of Prim's algorithm for shared memory machines, in which Prim's sequential algorithm is run in parallel, starting from different vertices, has also been explored. More sophisticated algorithms exist, however, that solve the distributed minimum spanning tree problem more efficiently. == See also == Dijkstra's algorithm, a very similar algorithm for the shortest path problem Greedoids offer a general way to understand the correctness of Prim's algorithm == References == == External links == Prim's Algorithm progress on randomly distributed points Media related to Prim's algorithm at Wikimedia Commons
Wikipedia/Prim's_algorithm
The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph. It is slower than Dijkstra's algorithm for the same problem, but more versatile, as it is capable of handling graphs in which some of the edge weights are negative numbers. The algorithm was first proposed by Alfonso Shimbel (1955), but is instead named after Richard Bellman and Lester Ford Jr., who published it in 1958 and 1956, respectively. Edward F. Moore also published a variation of the algorithm in 1959, and for this reason it is also sometimes called the Bellman–Ford–Moore algorithm. Negative edge weights are found in various applications of graphs; the ability to handle them is what makes this algorithm useful. If a graph contains a "negative cycle" (i.e. a cycle whose edges sum to a negative value) that is reachable from the source, then there is no cheapest path: any path that has a point on the negative cycle can be made cheaper by one more walk around the negative cycle. In such a case, the Bellman–Ford algorithm can detect and report the negative cycle. == Algorithm == Like Dijkstra's algorithm, Bellman–Ford proceeds by relaxation, in which approximations to the correct distance are replaced by better ones until they eventually reach the solution. In both algorithms, the approximate distance to each vertex is always an overestimate of the true distance, and is replaced by the minimum of its old value and the length of a newly found path. However, Dijkstra's algorithm uses a priority queue to greedily select the closest vertex that has not yet been processed, and performs this relaxation process on all of its outgoing edges; by contrast, the Bellman–Ford algorithm simply relaxes all the edges, and does this | V | − 1 {\displaystyle |V|-1} times, where | V | {\displaystyle |V|} is the number of vertices in the graph.
In each of these repetitions, the number of vertices with correctly calculated distances grows, from which it follows that eventually all vertices will have their correct distances. This method allows the Bellman–Ford algorithm to be applied to a wider class of inputs than Dijkstra's algorithm. The intermediate answers depend on the order of edges relaxed, but the final answer remains the same. Bellman–Ford runs in O ( | V | ⋅ | E | ) {\displaystyle O(|V|\cdot |E|)} time, where | V | {\displaystyle |V|} and | E | {\displaystyle |E|} are the number of vertices and edges respectively.

function BellmanFord(list vertices, list edges, vertex source) is
    // This implementation takes in a graph, represented as
    // lists of vertices (represented as integers [0..n-1]) and
    // edges, and fills two arrays (distance and predecessor)
    // holding the shortest path from the source to each vertex

    distance := list of size n
    predecessor := list of size n

    // Step 1: initialize graph
    for each vertex v in vertices do
        distance[v] := inf        // Initialize the distance to all vertices to infinity
        predecessor[v] := null    // And having a null predecessor

    distance[source] := 0         // The distance from the source to itself is zero

    // Step 2: relax edges repeatedly
    repeat |V|−1 times:
        for each edge (u, v) with weight w in edges do
            if distance[u] + w < distance[v] then
                distance[v] := distance[u] + w
                predecessor[v] := u

    // Step 3: check for negative-weight cycles
    for each edge (u, v) with weight w in edges do
        if distance[u] + w < distance[v] then
            predecessor[v] := u
            // A negative cycle exists; find a vertex on the cycle
            visited := list of size n initialized with false
            visited[v] := true
            while not visited[u] do
                visited[u] := true
                u := predecessor[u]
            // u is a vertex in a negative cycle, find the cycle itself
            ncycle := [u]
            v := predecessor[u]
            while v != u do
                ncycle := concatenate([v], ncycle)
                v := predecessor[v]
            error "Graph contains a negative-weight cycle", ncycle

    return distance, predecessor

Simply
put, the algorithm initializes the distance to the source to 0 and all other nodes to infinity. Then for all edges, if the distance to the destination can be shortened by taking the edge, the distance is updated to the new lower value. The core of the algorithm is a loop that scans across all edges on every iteration. For every i ≤ | V | − 1 {\displaystyle i\leq |V|-1} , at the end of the i {\displaystyle i} -th iteration, from any vertex v, following the predecessor trail recorded in predecessor yields a path that has a total weight that is at most distance[v], and further, distance[v] is a lower bound to the length of any path from source to v that uses at most i edges. Since the longest possible path without a cycle can be | V | − 1 {\displaystyle |V|-1} edges, the edges must be scanned | V | − 1 {\displaystyle |V|-1} times to ensure the shortest path has been found for all nodes. A final scan of all the edges is performed and if any distance is updated, then a path of length | V | {\displaystyle |V|} edges has been found, which can only occur if at least one negative cycle exists in the graph. The edge (u, v) that is found in step 3 must be reachable from a negative cycle, but it isn't necessarily part of the cycle itself, which is why it's necessary to follow the path of predecessors backwards until a cycle is detected. The above pseudo-code uses a Boolean array (visited) to find a vertex on the cycle, but any cycle finding algorithm can be used to find a vertex on the cycle. A common improvement when implementing the algorithm is to return early when an iteration of step 2 fails to relax any edges, which implies all shortest paths have been found, and therefore there are no negative cycles. In that case, the complexity of the algorithm is reduced from O ( | V | ⋅ | E | ) {\displaystyle O(|V|\cdot |E|)} to O ( l ⋅ | E | ) {\displaystyle O(l\cdot |E|)} where l {\displaystyle l} is the maximum length of a shortest path in the graph.
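A direct Python rendering of the pseudocode, with the early-exit improvement folded in, might look as follows. This is an illustrative sketch added here, not code from the article; unlike the pseudocode above, it reports a negative cycle by raising an exception rather than reconstructing the cycle itself.

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths on vertices 0..n-1.

    edges: list of directed (u, v, w) triples.
    Returns (distance, predecessor) lists; raises ValueError if a
    negative-weight cycle is reachable from the source.
    """
    INF = float("inf")
    distance = [INF] * n
    predecessor = [None] * n
    distance[source] = 0

    # Step 2: relax all edges |V|-1 times, stopping early when a full
    # pass changes nothing (all shortest paths are already final)
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                predecessor[v] = u
                changed = True
        if not changed:
            break

    # Step 3: one more pass; any further improvement implies a
    # negative-weight cycle reachable from the source
    for u, v, w in edges:
        if distance[u] + w < distance[v]:
            raise ValueError("graph contains a negative-weight cycle")

    return distance, predecessor

dist, pred = bellman_ford(4, [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 2)], 0)
# dist == [0, 4, 1, 3]: the shortest route to vertex 3 uses the negative edge 1 -> 2
```

Note that `inf + w < inf` is false for any finite w, so unreached vertices never trigger a relaxation, matching the pseudocode's behaviour.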
== Proof of correctness == The correctness of the algorithm can be shown by induction: Lemma. After i repetitions of the for loop, if Distance(u) is not infinity, it is equal to the length of some path from s to u; and if there is a path from s to u with at most i edges, then Distance(u) is at most the length of the shortest path from s to u with at most i edges. Proof. For the base case of induction, consider i=0 and the moment before the for loop is executed for the first time. Then, for the source vertex, source.distance = 0, which is correct. For other vertices u, u.distance = infinity, which is also correct because there is no path from source to u with 0 edges. For the inductive case, we first prove the first part. Consider a moment when a vertex's distance is updated by v.distance := u.distance + uv.weight. By inductive assumption, u.distance is the length of some path from source to u. Then u.distance + uv.weight is the length of the path from source to v that follows the path from source to u and then goes to v. For the second part, consider a shortest path P (there may be more than one) from source to v with at most i edges. Let u be the last vertex before v on this path. Then, the part of the path from source to u is a shortest path from source to u with at most i-1 edges, since if it were not, then there must be some strictly shorter path from source to u with at most i-1 edges, and we could then append the edge uv to this path to obtain a path with at most i edges that is strictly shorter than P—a contradiction. By inductive assumption, u.distance after i−1 iterations is at most the length of this path from source to u. Therefore, uv.weight + u.distance is at most the length of P. In the ith iteration, v.distance gets compared with uv.weight + u.distance, and is set equal to it if uv.weight + u.distance is smaller.
Therefore, after i iterations, v.distance is at most the length of P, i.e., the length of the shortest path from source to v that uses at most i edges. If there are no negative-weight cycles, then every shortest path visits each vertex at most once, so at step 3 no further improvements can be made. Conversely, suppose no improvement can be made. Then for any cycle with vertices v[0], ..., v[k−1],

v[i].distance <= v[i-1 (mod k)].distance + v[i-1 (mod k)]v[i].weight

Summing around the cycle, the v[i].distance and v[i−1 (mod k)].distance terms cancel, leaving

0 <= sum from 1 to k of v[i-1 (mod k)]v[i].weight

I.e., every cycle has nonnegative weight. == Finding negative cycles == When the algorithm is used to find shortest paths, the existence of negative cycles is a problem, preventing the algorithm from finding a correct answer. However, since it terminates upon finding a negative cycle, the Bellman–Ford algorithm can be used for applications in which this is the target to be sought – for example in cycle-cancelling techniques in network flow analysis. == Applications in routing == A distributed variant of the Bellman–Ford algorithm is used in distance-vector routing protocols, for example the Routing Information Protocol (RIP). The algorithm is distributed because it involves a number of nodes (routers) within an Autonomous system (AS), a collection of IP networks typically owned by an ISP. It consists of the following steps: Each node calculates the distances between itself and all other nodes within the AS and stores this information as a table. Each node sends its table to all neighboring nodes. When a node receives distance tables from its neighbors, it calculates the shortest routes to all other nodes and updates its own table to reflect any changes. The main disadvantages of the Bellman–Ford algorithm in this setting are as follows: It does not scale well. Changes in network topology are not reflected quickly since updates are spread node-by-node.
Count to infinity: if link or node failures render a node unreachable from some set of other nodes, those nodes may spend forever gradually increasing their estimates of the distance to it, and in the meantime there may be routing loops. == Improvements == The Bellman–Ford algorithm may be improved in practice (although not in the worst case) by the observation that, if an iteration of the main loop of the algorithm terminates without making any changes, the algorithm can be immediately terminated, as subsequent iterations will not make any more changes. With this early termination condition, the main loop may in some cases use many fewer than |V| − 1 iterations, even though the worst case of the algorithm remains unchanged. The following improvements all maintain the O ( | V | ⋅ | E | ) {\displaystyle O(|V|\cdot |E|)} worst-case time complexity. A variation of the Bellman–Ford algorithm described by Moore (1959) reduces the number of relaxation steps that need to be performed within each iteration of the algorithm. If a vertex v has a distance value that has not changed since the last time the edges out of v were relaxed, then there is no need to relax the edges out of v a second time. In this way, as the number of vertices with correct distance values grows, the number whose outgoing edges need to be relaxed in each iteration shrinks, leading to a constant-factor savings in time for dense graphs. This variation can be implemented by keeping a collection of vertices whose outgoing edges need to be relaxed, removing a vertex from this collection when its edges are relaxed, and adding to the collection any vertex whose distance value is changed by a relaxation step. In China, this algorithm was popularized by Fanding Duan, who rediscovered it in 1994, as the "shortest path faster algorithm". Yen (1970) described another improvement to the Bellman–Ford algorithm.
His improvement first assigns some arbitrary linear order on all vertices and then partitions the set of all edges into two subsets. The first subset, Ef, contains all edges (vi, vj) such that i < j; the second, Eb, contains edges (vi, vj) such that i > j. Each vertex is visited in the order v1, v2, ..., v|V|, relaxing each outgoing edge from that vertex in Ef. Each vertex is then visited in the order v|V|, v|V|−1, ..., v1, relaxing each outgoing edge from that vertex in Eb. Each iteration of the main loop of the algorithm, after the first one, adds at least two edges to the set of edges whose relaxed distances match the correct shortest path distances: one from Ef and one from Eb. This modification reduces the worst-case number of iterations of the main loop of the algorithm from |V| − 1 to | V | / 2 {\displaystyle |V|/2} . Another improvement, by Bannister & Eppstein (2012), replaces the arbitrary linear order of the vertices used in Yen's second improvement by a random permutation. This change makes the worst case for Yen's improvement (in which the edges of a shortest path strictly alternate between the two subsets Ef and Eb) very unlikely to happen. With a randomly permuted vertex ordering, the expected number of iterations needed in the main loop is at most | V | / 3 {\displaystyle |V|/3} . Fineman (2024), at Georgetown University, created an improved algorithm that with high probability runs in O ~ ( | V | 8 9 ⋅ | E | ) {\displaystyle {\tilde {O}}(|V|^{\frac {8}{9}}\cdot |E|)} time. Here, the O ~ {\displaystyle {\tilde {O}}} is a variant of big O notation that hides logarithmic factors. == Notes == == References == === Original sources === Shimbel, A. (1955). Structure in communication nets. Proceedings of the Symposium on Information Networks. New York, New York: Polytechnic Press of the Polytechnic Institute of Brooklyn. pp. 199–203. Bellman, Richard (1958). "On a routing problem". Quarterly of Applied Mathematics. 16: 87–90. doi:10.1090/qam/102435. 
MR 0102435. Ford, Lester R. Jr. (August 14, 1956). Network Flow Theory. Paper P-923. Santa Monica, California: RAND Corporation. Moore, Edward F. (1959). The shortest path through a maze. Proc. Internat. Sympos. Switching Theory 1957, Part II. Cambridge, Massachusetts: Harvard Univ. Press. pp. 285–292. MR 0114710. Yen, Jin Y. (1970). "An algorithm for finding shortest routes from all source nodes to a given destination in general networks". Quarterly of Applied Mathematics. 27 (4): 526–530. doi:10.1090/qam/253822. MR 0253822. Bannister, M. J.; Eppstein, D. (2012). "Randomized speedup of the Bellman–Ford algorithm". Analytic Algorithmics and Combinatorics (ANALCO12), Kyoto, Japan. pp. 41–47. arXiv:1111.5414. doi:10.1137/1.9781611973020.6. Fineman, Jeremy T. (2024). "Single-source shortest paths with negative real weights in O ~ ( m n 8 / 9 ) {\displaystyle {\tilde {O}}(mn^{8/9})} time". In Mohar, Bojan; Shinkar, Igor; O'Donnell, Ryan (eds.). Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, Vancouver, BC, Canada, June 24–28, 2024. Association for Computing Machinery. pp. 3–14. arXiv:2311.02520. doi:10.1145/3618260.3649614. === Secondary sources === Ford, L. R. Jr.; Fulkerson, D. R. (1962). "A shortest chain algorithm". Flows in Networks. Princeton University Press. pp. 130–134. Bang-Jensen, Jørgen; Gutin, Gregory (2000). "Section 2.3.4: The Bellman-Ford-Moore algorithm". Digraphs: Theory, Algorithms and Applications (First ed.). Springer. ISBN 978-1-84800-997-4. Schrijver, Alexander (2005). "On the history of combinatorial optimization (till 1960)" (PDF). Handbook of Discrete Optimization. Elsevier: 1–68. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2022). Introduction to Algorithms (Fourth ed.). MIT Press. ISBN 978-0-262-04630-5. Section 22.1: The Bellman–Ford algorithm, pp. 612–616. Problem 22–1, p. 640. Heineman, George T.; Pollice, Gary; Selkow, Stanley (2008).
"Chapter 6: Graph Algorithms". Algorithms in a Nutshell. O'Reilly Media. pp. 160–164. ISBN 978-0-596-51624-6. Kleinberg, Jon; Tardos, Éva (2006). Algorithm Design. New York: Pearson Education, Inc. Sedgewick, Robert (2002). "Section 21.7: Negative Edge Weights". Algorithms in Java (3rd ed.). Addison-Wesley. ISBN 0-201-36121-3. Archived from the original on 2008-05-31. Retrieved 2007-05-28.
Wikipedia/Bellman–Ford_algorithm
Bayesian experimental design provides a general probability-theoretical framework from which other theories on experimental design can be derived. It is based on Bayesian inference to interpret the observations/data acquired during the experiment. This allows accounting both for prior knowledge of the parameters to be determined and for uncertainties in observations. The theory of Bayesian experimental design is to a certain extent based on the theory for making optimal decisions under uncertainty. The aim when designing an experiment is to maximize the expected utility of the experiment outcome. The utility is most commonly defined in terms of a measure of the accuracy of the information provided by the experiment (e.g., the Shannon information or the negative of the variance) but may also involve factors such as the financial cost of performing the experiment. The optimal experiment design thus depends on the particular utility criterion chosen. == Relations to more specialized optimal design theory == === Linear theory === If the model is linear, the prior probability density function (PDF) is homogeneous and observational errors are normally distributed, the theory simplifies to the classical optimal experimental design theory. === Approximate normality === In numerous publications on Bayesian experimental design, it is (often implicitly) assumed that all posterior probabilities will be approximately normal. This allows for the expected utility to be calculated using linear theory, averaging over the space of model parameters. Caution must however be taken when applying this method, since approximate normality of all possible posteriors is difficult to verify, even in cases of normal observational errors and uniform prior probability. === Posterior distribution === In many cases, the posterior distribution is not available in closed form and has to be approximated using numerical methods.
The most common approach is to use Markov chain Monte Carlo methods to generate samples from the posterior, which can then be used to approximate the expected utility. Another approach is to use a variational Bayes approximation of the posterior, which can often be calculated in closed form. This approach has the advantage of being computationally more efficient than Monte Carlo methods, but the disadvantage that the approximation might not be very accurate. Some authors proposed approaches that use the posterior predictive distribution to assess the effect of new measurements on prediction uncertainty, while others suggest maximizing the mutual information between parameters, predictions and potential new experiments. == Mathematical formulation == Given a vector θ {\displaystyle \theta } of parameters to determine, a prior probability p ( θ ) {\displaystyle p(\theta )} over those parameters and a likelihood p ( y ∣ θ , ξ ) {\displaystyle p(y\mid \theta ,\xi )} for making observation y {\displaystyle y} , given parameter values θ {\displaystyle \theta } and an experiment design ξ {\displaystyle \xi } , the posterior probability can be calculated using Bayes' theorem p ( θ ∣ y , ξ ) = p ( y ∣ θ , ξ ) p ( θ ) p ( y ∣ ξ ) , {\displaystyle p(\theta \mid y,\xi )={\frac {p(y\mid \theta ,\xi )p(\theta )}{p(y\mid \xi )}}\,,} where p ( y ∣ ξ ) {\displaystyle p(y\mid \xi )} is the marginal probability density in observation space p ( y ∣ ξ ) = ∫ p ( θ ) p ( y ∣ θ , ξ ) d θ . 
{\displaystyle p(y\mid \xi )=\int p(\theta )p(y\mid \theta ,\xi )\,d\theta \,.} The expected utility of an experiment with design ξ {\displaystyle \xi } can then be defined U ( ξ ) = ∫ p ( y ∣ ξ ) U ( y , ξ ) d y , {\displaystyle U(\xi )=\int p(y\mid \xi )U(y,\xi )\,dy,} where U ( y , ξ ) {\displaystyle U(y,\xi )} is some real-valued functional of the posterior probability p ( θ ∣ y , ξ ) {\displaystyle p(\theta \mid y,\xi )} after making observation y {\displaystyle y} using an experiment design ξ {\displaystyle \xi } . === Gain in Shannon information as utility === Utility may be defined as the prior-posterior gain in Shannon information U ( y , ξ ) = ∫ log ⁡ ( p ( θ ∣ y , ξ ) ) p ( θ | y , ξ ) d θ − ∫ log ⁡ ( p ( θ ) ) p ( θ ) d θ . {\displaystyle U(y,\xi )=\int \log(p(\theta \mid y,\xi ))\,p(\theta |y,\xi )\,d\theta -\int \log(p(\theta ))\,p(\theta )\,d\theta \,.} Another possibility is to define the utility as U ( y , ξ ) = D K L ( p ( θ ∣ y , ξ ) ‖ p ( θ ) ) , {\displaystyle U(y,\xi )=D_{KL}(p(\theta \mid y,\xi )\|p(\theta ))\,,} the Kullback–Leibler divergence of the prior from the posterior distribution. Lindley (1956) noted that the expected utility will then be coordinate-independent and can be written in two forms U ( ξ ) = ∫ ∫ log ⁡ ( p ( θ ∣ y , ξ ) ) p ( θ , y ∣ ξ ) d θ d y − ∫ log ⁡ ( p ( θ ) ) p ( θ ) d θ = ∫ ∫ log ⁡ ( p ( y ∣ θ , ξ ) ) p ( θ , y ∣ ξ ) d y d θ − ∫ log ⁡ ( p ( y ∣ ξ ) ) p ( y ∣ ξ ) d y , {\displaystyle {\begin{alignedat}{2}U(\xi )&=\int \int \log(p(\theta \mid y,\xi ))\,p(\theta ,y\mid \xi )\,d\theta \,dy-\int \log(p(\theta ))\,p(\theta )\,d\theta \\&=\int \int \log(p(y\mid \theta ,\xi ))\,p(\theta ,y\mid \xi )\,dy\,d\theta -\int \log(p(y\mid \xi ))\,p(y\mid \xi )\,dy,\end{alignedat}}\,} of which the latter can be evaluated without the need for evaluating individual posterior probability p ( θ ∣ y , ξ ) {\displaystyle p(\theta \mid y,\xi )} for all possible observations y {\displaystyle y} . 
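The second of Lindley's two forms suggests a nested Monte Carlo estimator: draw pairs (θ, y) from the joint distribution, and estimate the marginal log p(y | ξ) for each drawn y by averaging the likelihood over fresh prior samples. The sketch below (an illustration added here, not from the article; the toy model and sample sizes are assumptions) applies this to a linear-Gaussian model y = ξθ + ε with θ ~ N(0, σ₀²) and ε ~ N(0, σ²), for which the expected gain is known in closed form.

```python
import math
import random

def log_lik(y, theta, xi, sigma=1.0):
    """log p(y | theta, xi) for the toy model y ~ N(xi * theta, sigma^2)."""
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (y - xi * theta) ** 2 / (2 * sigma ** 2))

def eig_nested_mc(xi, n_outer=2000, n_inner=200, sigma0=1.0, sigma=1.0, seed=0):
    """Nested Monte Carlo estimate of U(xi) = E[log p(y|theta,xi)] - E[log p(y|xi)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_outer):
        theta = rng.gauss(0.0, sigma0)           # theta_n ~ p(theta)
        y = xi * theta + rng.gauss(0.0, sigma)   # y_n ~ p(y | theta_n, xi)
        # Inner Monte Carlo estimate of the marginal p(y | xi),
        # averaging the likelihood over fresh prior draws
        marginal = sum(
            math.exp(log_lik(y, rng.gauss(0.0, sigma0), xi, sigma))
            for _ in range(n_inner)
        ) / n_inner
        total += log_lik(y, theta, xi, sigma) - math.log(marginal)
    return total / n_outer

# For this linear-Gaussian model the expected gain equals the mutual
# information I(theta; y) = 0.5 * log(1 + xi^2 * sigma0^2 / sigma^2),
# so the estimate for xi = 2 should be close to 0.5 * log(5).
estimate = eig_nested_mc(2.0)
```

The inner average makes the estimator biased for finite n_inner; the bias shrinks as n_inner grows, which is one reason cheaper bounds and variational surrogates are often preferred in practice.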
It is worth noting that the second term on the second equation line will not depend on the design ξ {\displaystyle \xi } , as long as the observational uncertainty doesn't. On the other hand, the integral of p ( θ ) log ⁡ p ( θ ) {\displaystyle p(\theta )\log p(\theta )} in the first form is constant for all ξ {\displaystyle \xi } , so if the goal is to choose the design with the highest utility, the term need not be computed at all. Several authors have considered numerical techniques for evaluating and optimizing this criterion. Note that U ( ξ ) = I ( θ ; y ) , {\displaystyle U(\xi )=I(\theta ;y)\,,} the expected information gain being exactly the mutual information between the parameter θ and the observation y. An example of Bayesian design for linear dynamical model discrimination is given in Bania (2019). Since I ( θ ; y ) {\displaystyle I(\theta ;y)} was difficult to calculate, its lower bound has been used as a utility function. The lower bound is then maximized under the signal energy constraint. The proposed Bayesian design was also compared with a classical average D-optimal design, and the Bayesian design was shown to be superior to the D-optimal design. The Kelly criterion also describes such a utility function for a gambler seeking to maximize profit, which is used in gambling and information theory; Kelly's situation is identical to the foregoing, with the side information, or "private wire", taking the place of the experiment. == See also == Bayesian optimization Optimal design Active Learning Expected value of sample information == References == == Further reading == DasGupta, A. (1996), "Review of optimal Bayes designs" (PDF), in Ghosh, S.; Rao, C. R. (eds.), Design and Analysis of Experiments, Handbook of Statistics, vol. 13, North-Holland, pp. 1099–1148, ISBN 978-0-444-82061-7 Rainforth, Tom; et al. (2023), Modern Bayesian Experimental Design, arXiv:2302.14545
Wikipedia/Bayesian_experimental_design
In mathematical optimization, the criss-cross algorithm is any of a family of algorithms for linear programming. Variants of the criss-cross algorithm also solve more general problems with linear inequality constraints and nonlinear objective functions; there are criss-cross algorithms for linear-fractional programming problems, quadratic-programming problems, and linear complementarity problems. Like the simplex algorithm of George B. Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. Both algorithms visit all 2^D corners of a (perturbed) cube in dimension D, the Klee–Minty cube (after Victor Klee and George J. Minty), in the worst case. However, when it is started at a random corner, the criss-cross algorithm on average visits only D additional corners. Thus, for the three-dimensional cube, the algorithm visits all 8 corners in the worst case and exactly 3 additional corners on average. == History == The criss-cross algorithm was published independently by Tamás Terlaky and by Zhe-Min Wang; related algorithms appeared in unpublished reports by other authors. == Comparison with the simplex algorithm for linear optimization == In linear programming, the criss-cross algorithm pivots between a sequence of bases but differs from the simplex algorithm. The simplex algorithm first finds a (primal-) feasible basis by solving a "phase-one problem"; in "phase two", the simplex algorithm pivots between a sequence of basic feasible solutions so that the objective function is non-decreasing with each pivot, terminating with an optimal solution (also finally finding a "dual feasible" solution). The criss-cross algorithm is simpler than the simplex algorithm, because the criss-cross algorithm only has one phase. Its pivoting rules are similar to the least-index pivoting rule of Bland. Bland's rule uses only signs of coefficients rather than their (real-number) order when deciding eligible pivots.
Bland's rule selects an entering variable by comparing values of reduced costs, using the real-number ordering of the eligible pivots. Unlike Bland's rule, the criss-cross algorithm is "purely combinatorial", selecting an entering variable and a leaving variable by considering only the signs of coefficients rather than their real-number ordering. The criss-cross algorithm has been applied to furnish constructive proofs of basic results in linear algebra, such as the lemma of Farkas. While most simplex variants are monotonic in the objective (strictly in the non-degenerate case), most variants of the criss-cross algorithm lack a monotone merit function which can be a disadvantage in practice. == Description == The criss-cross algorithm works on a standard pivot tableau (or on-the-fly calculated parts of a tableau, if implemented like the revised simplex method). In a general step, if the tableau is primal or dual infeasible, it selects one of the infeasible rows / columns as the pivot row / column using an index selection rule. An important property is that the selection is made on the union of the infeasible indices and the standard version of the algorithm does not distinguish column and row indices (that is, the column indices basic in the rows). If a row is selected then the algorithm uses the index selection rule to identify a position to a dual type pivot, while if a column is selected then it uses the index selection rule to find a row position and carries out a primal type pivot. == Computational complexity: Worst and average cases == The time complexity of an algorithm counts the number of arithmetic operations sufficient for the algorithm to solve the problem. For example, Gaussian elimination requires on the order of D^3 operations, and so it is said to have polynomial time-complexity, because its complexity is bounded by a cubic polynomial. There are examples of algorithms that do not have polynomial-time complexity.
For example, a generalization of Gaussian elimination called Buchberger's algorithm has for its complexity an exponential function of the problem data (the degree of the polynomials and the number of variables of the multivariate polynomials). Because exponential functions eventually grow much faster than polynomial functions, an exponential complexity implies that an algorithm has slow performance on large problems. Several algorithms for linear programming—Khachiyan's ellipsoidal algorithm, Karmarkar's projective algorithm, and central-path algorithms—have polynomial time-complexity (in the worst case and thus on average). The ellipsoidal and projective algorithms were published before the criss-cross algorithm. However, like the simplex algorithm of Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. Terlaky's criss-cross algorithm visits all the 2^D corners of a (perturbed) cube in dimension D, according to a paper of Roos; Roos's paper modifies the Klee–Minty construction of a cube on which the simplex algorithm takes 2^D steps. Like the simplex algorithm, the criss-cross algorithm visits all 8 corners of the three-dimensional cube in the worst case. When it is initialized at a random corner of the cube, the criss-cross algorithm visits only D additional corners, however, according to a 1994 paper by Fukuda and Namiki. Trivially, the simplex algorithm takes on average D steps for a cube. Like the simplex algorithm, the criss-cross algorithm visits exactly 3 additional corners of the three-dimensional cube on average. == Variants == The criss-cross algorithm has been extended to solve more general problems than linear programming problems.
=== Other optimization problems with linear constraints === There are variants of the criss-cross algorithm for linear programming, for quadratic programming, and for the linear-complementarity problem with "sufficient matrices"; conversely, for linear complementarity problems, the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix. A sufficient matrix is a generalization both of a positive-definite matrix and of a P-matrix, whose principal minors are each positive. The criss-cross algorithm has been adapted also for linear-fractional programming. === Vertex enumeration === The criss-cross algorithm was used in an algorithm for enumerating all the vertices of a polytope, which was published by David Avis and Komei Fukuda in 1992. Avis and Fukuda presented an algorithm which finds the v vertices of a polyhedron defined by a nondegenerate system of n linear inequalities in D dimensions (or, dually, the v facets of the convex hull of n points in D dimensions, where each facet contains exactly D given points) in time O(nDv) and O(nD) space. === Oriented matroids === The criss-cross algorithm is often studied using the theory of oriented matroids (OMs), which is a combinatorial abstraction of linear-optimization theory. Indeed, Bland's pivoting rule was based on his previous papers on oriented-matroid theory. However, Bland's rule exhibits cycling on some oriented-matroid linear-programming problems. The first purely combinatorial algorithm for linear programming was devised by Michael J. Todd. Todd's algorithm was developed not only for linear-programming in the setting of oriented matroids, but also for quadratic-programming problems and linear-complementarity problems. Todd's algorithm is complicated even to state, unfortunately, and its finite-convergence proofs are somewhat complicated. The criss-cross algorithm and its proof of finite termination can be simply stated and readily extend to the setting of oriented matroids.
The algorithm can be further simplified for linear feasibility problems, that is for linear systems with nonnegative variables; these problems can be formulated for oriented matroids. The criss-cross algorithm has been adapted for problems that are more complicated than linear programming: There are oriented-matroid variants also for the quadratic-programming problem and for the linear-complementarity problem. == Summary == The criss-cross algorithm is a simply stated algorithm for linear programming. It was the second fully combinatorial algorithm for linear programming. The partially combinatorial simplex algorithm of Bland cycles on some (nonrealizable) oriented matroids. The first fully combinatorial algorithm was published by Todd, and it is also like the simplex algorithm in that it preserves feasibility after the first feasible basis is generated; however, Todd's rule is complicated. The criss-cross algorithm is not a simplex-like algorithm, because it need not maintain feasibility. The criss-cross algorithm does not have polynomial time-complexity, however. Researchers have extended the criss-cross algorithm to many optimization problems, including linear-fractional programming. The criss-cross algorithm can solve quadratic programming problems and linear complementarity problems, even in the setting of oriented matroids. Even when generalized, the criss-cross algorithm remains simply stated. == See also == Jack Edmonds (pioneer of combinatorial optimization and oriented-matroid theorist; doctoral advisor of Komei Fukuda) == Notes == == References == Avis, David; Fukuda, Komei (December 1992). "A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra". Discrete and Computational Geometry. 8 (ACM Symposium on Computational Geometry (North Conway, NH, 1991) number 1): 295–313. doi:10.1007/BF02293050. MR 1174359. Csizmadia, Zsolt; Illés, Tibor (2006).
"New criss-cross type algorithms for linear complementarity problems with sufficient matrices" (PDF). Optimization Methods and Software. 21 (2): 247–266. doi:10.1080/10556780500095009. MR 2195759. S2CID 24418835. Archived from the original (pdf) on 23 September 2015. Retrieved 30 August 2011. Fukuda, Komei; Namiki, Makoto (March 1994). "On extremal behaviors of Murty's least index method". Mathematical Programming. 64 (1): 365–370. doi:10.1007/BF01582581. MR 1286455. S2CID 21476636. Fukuda, Komei; Terlaky, Tamás (1997). Liebling, Thomas M.; de Werra, Dominique (eds.). "Criss-cross methods: A fresh view on pivot algorithms". Mathematical Programming, Series B. 79 (Papers from the 16th International Symposium on Mathematical Programming held in Lausanne, 1997, number 1–3): 369–395. CiteSeerX 10.1.1.36.9373. doi:10.1007/BF02614325. MR 1464775. S2CID 2794181. Postscript preprint. den Hertog, D.; Roos, C.; Terlaky, T. (1 July 1993). "The linear complementarity problem, sufficient matrices, and the criss-cross method" (PDF). Linear Algebra and Its Applications. 187: 1–14. doi:10.1016/0024-3795(93)90124-7. MR 1221693. Illés, Tibor; Szirmai, Ákos; Terlaky, Tamás (1999). "The finite criss-cross method for hyperbolic programming". European Journal of Operational Research. 114 (1): 198–214. doi:10.1016/S0377-2217(98)00049-6. Zbl 0953.90055. Postscript preprint. Klafszky, Emil; Terlaky, Tamás (June 1991). "The role of pivoting in proving some fundamental theorems of linear algebra". Linear Algebra and Its Applications. 151: 97–118. doi:10.1016/0024-3795(91)90356-2. MR 1102142. Roos, C. (1990). "An exponential example for Terlaky's pivoting rule for the criss-cross simplex method". Mathematical Programming. Series A. 46 (1): 79–84. doi:10.1007/BF01585729. MR 1045573. S2CID 33463483. Terlaky, T. (1985). "A convergent criss-cross method". Optimization: A Journal of Mathematical Programming and Operations Research. 16 (5): 683–690. doi:10.1080/02331938508843067. ISSN 0233-1934. 
MR 0798939. Terlaky, Tamás (1987). "A finite crisscross method for oriented matroids". Journal of Combinatorial Theory. Series B. 42 (3): 319–327. doi:10.1016/0095-8956(87)90049-9. ISSN 0095-8956. MR 0888684. Terlaky, Tamás; Zhang, Shu Zhong (1993) [1991]. "Pivot rules for linear programming: A Survey on recent theoretical developments". Annals of Operations Research. 46–47 (Degeneracy in optimization problems, number 1): 203–233. CiteSeerX 10.1.1.36.7658. doi:10.1007/BF02096264. ISSN 0254-5330. MR 1260019. S2CID 6058077. Wang, Zhe Min (1987). "A finite conformal-elimination free algorithm over oriented matroid programming". Chinese Annals of Mathematics (Shuxue Niankan B Ji). Series B. 8 (1): 120–125. ISSN 0252-9599. MR 0886756. == External links == Komei Fukuda (ETH Zentrum, Zurich) with publications Tamás Terlaky (Lehigh University) with publications Archived 28 September 2011 at the Wayback Machine
Wikipedia/Criss-cross_algorithm
The design of experiments (DOE), also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation. In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or more independent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or more dependent variables, also referred to as "output variables" or "response variables." The experimental design may also identify control variables that must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points (unique combinations of the settings of the independent variables) to be used in the experiment. Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity. 
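As a minimal sketch of how a set of design points can be enumerated for a full factorial design (the factor names and levels here are hypothetical, chosen only for illustration), each design point is one unique combination of factor settings:

```python
from itertools import product

# Hypothetical factors and their levels; a full factorial design uses
# every unique combination of factor settings as a design point.
factors = {
    "temperature": [150, 170, 190],  # three levels
    "pressure": [1.0, 2.0],          # two levels
    "catalyst": ["A", "B"],          # two levels
}
names = list(factors)
design_points = [dict(zip(names, combo)) for combo in product(*factors.values())]
# 3 * 2 * 2 = 12 design points in total.
```

Fractional factorial or optimal designs select a subset of these points when running every combination is too costly.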
Correctly designed experiments advance knowledge in the natural and social sciences and engineering, with design of experiments methodology recognised as a key tool in the successful implementation of a Quality by Design (QbD) framework. Other applications include marketing and policy making. The study of the design of experiments is an important topic in metascience. == History == === Statistical experiments, following Charles S. Peirce === A theory of statistical inference was developed by Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883), two publications that emphasized the importance of randomization-based inference in statistics. ==== Randomized experiments ==== Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights. Peirce's experiment inspired other researchers in psychology and education, who developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s. ==== Optimal designs for regression models ==== Charles S. Peirce also contributed the first English-language publication on an optimal design for regression models in 1876. A pioneering optimal design for polynomial regression was suggested by Gergonne in 1815. In 1918, Kirstine Smith published optimal designs for polynomials of degree six (and less). === Sequences of experiments === The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope of sequential analysis, a field that was pioneered by Abraham Wald in the context of sequential tests of statistical hypotheses. Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs have been surveyed by S. Zacks.
One specific type of sequential design is the "two-armed bandit", generalized to the multi-armed bandit, on which early work was done by Herbert Robbins in 1952. == Fisher's principles == A methodology for designing experiments was proposed by Ronald Fisher, in his innovative books: The Arrangement of Field Experiments (1926) and The Design of Experiments (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the lady tasting tea hypothesis, that a certain lady could distinguish by flavour alone whether the milk or the tea was first placed in the cup. These methods have been broadly adapted in biological, psychological, and agricultural research. Comparison In some fields of study it is not possible to have independent measurements to a traceable metrology standard. Comparisons between treatments are much more valuable and are usually preferable, and often compared against a scientific control or traditional treatment that acts as baseline. Randomization Random assignment is the process of assigning individuals at random to groups or to different groups in an experiment, so that each individual of the population has the same chance of becoming a participant in the study. The random assignment of individuals to groups (or conditions within a group) distinguishes a rigorous, "true" experiment from an observational study or "quasi-experiment". There is an extensive body of mathematical theory that explores the consequences of making the allocation of units to treatments by means of some random mechanism (such as tables of random numbers, or the use of randomization devices such as playing cards or dice). Assigning units to treatments at random tends to mitigate confounding, in which effects due to factors other than the treatment appear to result from the treatment.
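The random-assignment step can be sketched in a few lines (a minimal illustration; the unit labels and group names are hypothetical): shuffle the units, then deal them into the treatment groups so every unit has the same chance of ending up in any group.

```python
import random

def randomly_assign(units, treatments, seed=None):
    """Shuffle the units, then deal them round-robin into treatment
    groups, giving every unit the same chance of any assignment."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    groups = {t: [] for t in treatments}
    for i, unit in enumerate(shuffled):
        groups[treatments[i % len(treatments)]].append(unit)
    return groups

# Twelve hypothetical experimental units split into two groups of six.
groups = randomly_assign(range(12), ["treatment", "control"], seed=42)
```

In practice the random mechanism (here a seeded pseudorandom shuffle) stands in for the tables of random numbers or physical devices mentioned above.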
The risks associated with random allocation (such as having a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can be managed down to an acceptable level by using enough experimental units. However, if the population is divided into several subpopulations that somehow differ, and the research requires each subpopulation to be equal in size, stratified sampling can be used. In that way, the units in each subpopulation are randomized, but not the whole sample. The results of an experiment can be generalized reliably from the experimental units to a larger statistical population of units only if the experimental units are a random sample from the larger population; the probable error of such an extrapolation depends on the sample size, among other things. Statistical replication Measurements are usually subject to variation and measurement uncertainty; thus they are repeated and full experiments are replicated to help identify the sources of variation, to better estimate the true effects of treatments, to further strengthen the experiment's reliability and validity, and to add to the existing knowledge of the topic. However, certain conditions must be met before the replication of the experiment is commenced: the original research question has been published in a peer-reviewed journal or widely cited, the researcher is independent of the original experiment, the researcher must first try to replicate the original findings using the original data, and the write-up should state that the study conducted is a replication study that tried to follow the original study as strictly as possible. Blocking Blocking is the non-random arrangement of experimental units into groups (blocks) consisting of units that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study. 
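Blocking can be sketched in the same spirit (the blocking attribute, unit labels, and treatment names are hypothetical): units are first grouped by a similarity attribute, and treatments are then randomized separately within each block, so each treatment appears equally often in every block.

```python
import random
from collections import defaultdict

def blocked_assignment(units, block_of, treatments, seed=0):
    """Group units into blocks of similar units, then randomize the
    treatment order independently within each block."""
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for u in units:
        blocks[block_of(u)].append(u)
    assignment = {}
    for block_units in blocks.values():
        rng.shuffle(block_units)
        for i, u in enumerate(block_units):
            assignment[u] = treatments[i % len(treatments)]
    return assignment

# Hypothetical field trial: plots blocked by field, two treatments.
plots = [("field1", i) for i in range(4)] + [("field2", i) for i in range(4)]
assignment = blocked_assignment(plots, block_of=lambda p: p[0], treatments=["A", "B"])
```

Because randomization happens within each block, differences between blocks (here, between fields) cannot be confused with treatment effects.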
Orthogonality Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal treatment provides different information from the others. If there are T treatments and T − 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts. Multifactorial experiments Use of multifactorial experiments instead of the one-factor-at-a-time method. These are efficient at evaluating the effects and possible interactions of several factors (independent variables). Analysis of experiment design is built on the foundation of the analysis of variance, a collection of models that partition the observed variance into components, according to what factors the experiment must estimate or test. == Example == This example of a designed experiment is attributed to Harold Hotelling, building on examples from Frank Yates. The experiments designed in this example involve combinatorial designs. Weights of eight objects are measured using a pan balance and a set of standard weights. Each weighing measures the weight difference between objects in the left pan and any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a random error. The average error is zero; the standard deviation of the probability distribution of the errors is the same number σ on different weighings; errors on different weighings are independent. Denote the true weights by θ 1 , … , θ 8 . {\displaystyle \theta _{1},\dots ,\theta _{8}.\,} We consider two different experiments: Weigh each object in one pan, with the other pan empty. Let Xi be the measured weight of the object, for i = 1, ..., 8.
Do the eight weighings according to the following schedule—a weighing matrix: left pan right pan 1st weighing: 1 2 3 4 5 6 7 8 (empty) 2nd: 1 2 3 8 4 5 6 7 3rd: 1 4 5 8 2 3 6 7 4th: 1 6 7 8 2 3 4 5 5th: 2 4 6 8 1 3 5 7 6th: 2 5 7 8 1 3 4 6 7th: 3 4 7 8 1 2 5 6 8th: 3 5 6 8 1 2 4 7 {\displaystyle {\begin{array}{lcc}&{\text{left pan}}&{\text{right pan}}\\\hline {\text{1st weighing:}}&1\ 2\ 3\ 4\ 5\ 6\ 7\ 8&{\text{(empty)}}\\{\text{2nd:}}&1\ 2\ 3\ 8\ &4\ 5\ 6\ 7\\{\text{3rd:}}&1\ 4\ 5\ 8\ &2\ 3\ 6\ 7\\{\text{4th:}}&1\ 6\ 7\ 8\ &2\ 3\ 4\ 5\\{\text{5th:}}&2\ 4\ 6\ 8\ &1\ 3\ 5\ 7\\{\text{6th:}}&2\ 5\ 7\ 8\ &1\ 3\ 4\ 6\\{\text{7th:}}&3\ 4\ 7\ 8\ &1\ 2\ 5\ 6\\{\text{8th:}}&3\ 5\ 6\ 8\ &1\ 2\ 4\ 7\end{array}}} Let Yi be the measured difference for i = 1, ..., 8. Then the estimated value of the weight θ1 is θ ^ 1 = Y 1 + Y 2 + Y 3 + Y 4 − Y 5 − Y 6 − Y 7 − Y 8 8 . {\displaystyle {\widehat {\theta }}_{1}={\frac {Y_{1}+Y_{2}+Y_{3}+Y_{4}-Y_{5}-Y_{6}-Y_{7}-Y_{8}}{8}}.} Similar estimates can be found for the weights of the other items: θ ^ 2 = Y 1 + Y 2 − Y 3 − Y 4 + Y 5 + Y 6 − Y 7 − Y 8 8 . θ ^ 3 = Y 1 + Y 2 − Y 3 − Y 4 − Y 5 − Y 6 + Y 7 + Y 8 8 . θ ^ 4 = Y 1 − Y 2 + Y 3 − Y 4 + Y 5 − Y 6 + Y 7 − Y 8 8 . θ ^ 5 = Y 1 − Y 2 + Y 3 − Y 4 − Y 5 + Y 6 − Y 7 + Y 8 8 . θ ^ 6 = Y 1 − Y 2 − Y 3 + Y 4 + Y 5 − Y 6 − Y 7 + Y 8 8 . θ ^ 7 = Y 1 − Y 2 − Y 3 + Y 4 − Y 5 + Y 6 + Y 7 − Y 8 8 . θ ^ 8 = Y 1 + Y 2 + Y 3 + Y 4 + Y 5 + Y 6 + Y 7 + Y 8 8 . 
{\displaystyle {\begin{aligned}{\widehat {\theta }}_{2}&={\frac {Y_{1}+Y_{2}-Y_{3}-Y_{4}+Y_{5}+Y_{6}-Y_{7}-Y_{8}}{8}}.\\[5pt]{\widehat {\theta }}_{3}&={\frac {Y_{1}+Y_{2}-Y_{3}-Y_{4}-Y_{5}-Y_{6}+Y_{7}+Y_{8}}{8}}.\\[5pt]{\widehat {\theta }}_{4}&={\frac {Y_{1}-Y_{2}+Y_{3}-Y_{4}+Y_{5}-Y_{6}+Y_{7}-Y_{8}}{8}}.\\[5pt]{\widehat {\theta }}_{5}&={\frac {Y_{1}-Y_{2}+Y_{3}-Y_{4}-Y_{5}+Y_{6}-Y_{7}+Y_{8}}{8}}.\\[5pt]{\widehat {\theta }}_{6}&={\frac {Y_{1}-Y_{2}-Y_{3}+Y_{4}+Y_{5}-Y_{6}-Y_{7}+Y_{8}}{8}}.\\[5pt]{\widehat {\theta }}_{7}&={\frac {Y_{1}-Y_{2}-Y_{3}+Y_{4}-Y_{5}+Y_{6}+Y_{7}-Y_{8}}{8}}.\\[5pt]{\widehat {\theta }}_{8}&={\frac {Y_{1}+Y_{2}+Y_{3}+Y_{4}+Y_{5}+Y_{6}+Y_{7}+Y_{8}}{8}}.\end{aligned}}} The question of design of experiments is: which experiment is better? The variance of the estimate X1 of θ1 is σ2 if we use the first experiment. But if we use the second experiment, the variance of the estimate given above is σ2/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously, with the same precision. What the second experiment achieves with eight would require 64 weighings if the items are weighed separately. However, note that the estimates for the items obtained in the second experiment have errors that correlate with each other. Many problems of the design of experiments involve combinatorial designs, as in this example and others. == Avoiding false positives == False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields. Use of double-blind designs can prevent biases potentially leading to false positives in the data collection phase. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of what participants belong to which group. Therefore, the researcher can not affect the participants' response to the intervention. 
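Returning to the weighing example of the previous section, the eight-fold precision gain can be checked mechanically: writing the schedule as a matrix X with entries +1 (object in the left pan) and −1 (object in the right pan), the columns of X are mutually orthogonal, so XᵀX = 8I and each estimate (XᵀY)/8 has variance σ²/8. A sketch of that check in plain Python:

```python
# Weighing schedule from the table above: +1 if the object is in the
# left pan of a weighing, -1 if it is in the right pan.
left_pans = [
    {1, 2, 3, 4, 5, 6, 7, 8},            # 1st weighing (right pan empty)
    {1, 2, 3, 8}, {1, 4, 5, 8}, {1, 6, 7, 8},
    {2, 4, 6, 8}, {2, 5, 7, 8}, {3, 4, 7, 8}, {3, 5, 6, 8},
]
X = [[1 if obj in left else -1 for obj in range(1, 9)] for left in left_pans]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

cols = list(zip(*X))
# Gram matrix X^T X: mutually orthogonal columns of squared length 8
# give 8*I, so the estimate (X^T Y)/8 has variance sigma^2/8 per object.
gram = [[dot(ci, cj) for cj in cols] for ci in cols]
```

The first column, for instance, reproduces the signs (+, +, +, +, −, −, −, −) appearing in the estimate of θ1 given above.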
Experimental designs with undisclosed degrees of freedom are a problem, in that they can lead to conscious or unconscious "p-hacking": trying multiple things until you get the desired result. It typically involves the manipulation – perhaps unconsciously – of the process of statistical analysis and the degrees of freedom until they return a figure below the p<.05 level of statistical significance. P-hacking can be prevented by preregistering studies, in which researchers submit their data-analysis plan to the journal they wish to publish in before they start data collection, so no post hoc data manipulation is possible. Another way to prevent this is to extend the double-blind design to the data-analysis phase, making the study triple-blind: the data are sent to a data analyst unrelated to the research, who scrambles the data so there is no way to know which group participants belong to before outliers are potentially removed. Clear and complete documentation of the experimental methodology is also important in order to support replication of results.
What is the influence of delayed effects of substantive factors on outcomes? How do response shifts affect self-report measures? How feasible is repeated administration of the same measurement instruments to the same units on different occasions, with a post-test and follow-up tests? What about using a proxy pretest? Are there confounding variables? Should the client/patient, researcher or even the analyst of the data be blind to conditions? What is the feasibility of subsequent application of different conditions to the same units? How many of each control and noise factors should be taken into account? The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, which is where their intervention testing the hypothesis is implemented, and a control group, which has all the same elements as the experimental group, without the interventional element. Thus, when everything else except for one intervention is held constant, researchers can conclude with some certainty that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups who have a different disease, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used. == Causal attributions == In the pure experimental design, the independent (predictor) variable is manipulated by the researcher – that is – every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable.
Only when this is done is it possible to conclude with high probability that the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must take care not to make causal claims when their design does not allow for it. For example, in observational designs, participants are not assigned randomly to conditions, and so if there are differences found in outcome variables between conditions, it is likely that there is something other than the differences between the conditions that causes the differences in outcomes, that is – a third variable. The same goes for studies with correlational design. == Statistical control == It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments. To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. A manipulation check is one example of a control check. Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned. One of the most important requirements of experimental research designs is the necessity of eliminating the effects of spurious, intervening, and antecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for.
The same is true for intervening variables (a variable in between the supposed cause (X) and the effect (Y)), and anteceding variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a zero order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time. == Experimental designs after Fisher == Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K. Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett–Burman designs were published in Biometrika in 1946. About the same time, C. R. Rao introduced the concepts of orthogonal arrays as experimental designs. This concept played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to the Indian Statistical Institute in the early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry albeit with some reservations. In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs, which became the major reference work on the design of experiments for statisticians for years afterwards. Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics. As with other branches of statistics, experimental design is pursued using both frequentist and Bayesian approaches: In evaluating statistical procedures like experimental designs, frequentist statistics studies the sampling distribution while Bayesian statistics updates a probability distribution on the parameter space.
Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, R. C. Bose, A. C. Atkinson, R. A. Bailey, D. R. Cox, G. E. P. Box, W. G. Cochran, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. Kiefer, O. Kempthorne, J. A. Nelder, Andrej Pázman, Friedrich Pukelsheim, D. Raghavarao, C. R. Rao, Shrikhande S. S., J. N. Srivastava, William J. Studden, G. Taguchi and H. P. Wynn. The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J.S. Hunter have reached generations of students and practitioners. Furthermore, there is ongoing discussion of experimental design in the context of model building for either static or dynamic models, also known as system identification. == Human participant constraints == Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal constraints are dependent on jurisdiction. Constraints may involve institutional review boards, informed consent and confidentiality affecting both clinical (medical) trials and behavioral and social science experiments. In the field of toxicology, for example, experimentation is performed on laboratory animals with the goal of defining safe exposure limits for humans. Balancing the constraints are views from the medical field. Regarding the randomization of patients, "... if no one knows which therapy is better, there is no ethical imperative to use one therapy or another." (p 380) Regarding experimental design, "...it is clearly not ethical to place subjects at risk to collect data in a poorly designed study when this situation can be easily avoided...".
(p 393) == See also == Adversarial collaboration – Method of research Bayesian experimental design Block design – Structure in combinatorial mathematics Box–Behnken design – Experimental designs for response surface methodology Central composite design – Experimental design in statistical mathematics Clinical trial – Phase of clinical research in medicine Clinical study design – Plan for research in clinical medicine Computer experiment – Experiment used to study computer simulation Control variable – Experimental element which is not changed throughout the experiment Controlling for a variable – Binning data according to measured values of the variable Experimetrics (econometrics-related experiments) Factor analysis – Statistical method Fractional factorial design – Statistical experimental design approach Glossary of experimental design Grey box model – Mathematical data production model with limited structure Industrial engineering – Branch of engineering which deals with the optimization of complex processes or systems Instrument effect Law of large numbers – Averages of repeated trials converge to the expected value Manipulation checks – certain kinds of secondary evaluations of an experiment Multifactor design of experiments software One-factor-at-a-time method – Method of designing experiments Optimal design – Experimental design that is optimal with respect to some statistical criterion Plackett–Burman design – Type of experimental design Probabilistic design – Discipline within engineering design Protocol (natural sciences) – Procedural method for the design and implementation of an experiment Quasi-experimental design – Empirical interventional study Randomized block design – Design of experiments to collect similar contexts together
Randomized controlled trial – Form of scientific experiment Research design – Overall strategy utilized to carry out research Robust parameter design – technique for design of processes and experiments Sample size determination – Statistical considerations on how many observations to make Supersaturated designs – Type of experimental design Royal Commission on Animal Magnetism – 1784 French scientific bodies' investigations involving systematic controlled trials Survey sampling – Statistical selection process System identification – Statistical methods to build mathematical models of dynamical systems from measured data Taguchi methods – Statistical methods to improve the quality of manufactured goods == References == === Sources === == External links == A chapter from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST Box–Behnken designs from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST
Wikipedia/Design_of_experiments
The Ford–Fulkerson method or Ford–Fulkerson algorithm (FFA) is a greedy algorithm that computes the maximum flow in a flow network. It is sometimes called a "method" instead of an "algorithm" as the approach to finding augmenting paths in a residual graph is not fully specified, or it is specified in several implementations with different running times. It was published in 1956 by L. R. Ford Jr. and D. R. Fulkerson. The name "Ford–Fulkerson" is often also used for the Edmonds–Karp algorithm, which is a fully defined implementation of the Ford–Fulkerson method. The idea behind the algorithm is as follows: as long as there is a path from the source (start node) to the sink (end node), with available capacity on all edges in the path, we send flow along one of the paths. Then we find another path, and so on. A path with available capacity is called an augmenting path. == Algorithm == Let G ( V , E ) {\displaystyle G(V,E)} be a graph, and for each edge from u to v, let c ( u , v ) {\displaystyle c(u,v)} be the capacity and f ( u , v ) {\displaystyle f(u,v)} be the flow. We want to find the maximum flow from the source s to the sink t. After every step in the algorithm the following is maintained: the capacity constraint f ( u , v ) ≤ c ( u , v ) {\displaystyle f(u,v)\leq c(u,v)} , skew symmetry f ( u , v ) = − f ( v , u ) {\displaystyle f(u,v)=-f(v,u)} , and flow conservation ∑ w ∈ V f ( u , w ) = 0 {\displaystyle \sum _{w\in V}f(u,w)=0} for all u ≠ s , t {\displaystyle u\neq s,t} . This means that the flow through the network is a legal flow after each round in the algorithm. We define the residual network G f ( V , E f ) {\displaystyle G_{f}(V,E_{f})} to be the network with capacity c f ( u , v ) = c ( u , v ) − f ( u , v ) {\displaystyle c_{f}(u,v)=c(u,v)-f(u,v)} and no flow. Notice that it can happen that a flow from v to u is allowed in the residual network, though disallowed in the original network: if f ( u , v ) > 0 {\displaystyle f(u,v)>0} and c ( v , u ) = 0 {\displaystyle c(v,u)=0} then c f ( v , u ) = c ( v , u ) − f ( v , u ) = f ( u , v ) > 0 {\displaystyle c_{f}(v,u)=c(v,u)-f(v,u)=f(u,v)>0} . The algorithm proceeds as follows: (1) set f ( u , v ) = 0 {\displaystyle f(u,v)=0} for all edges ( u , v ) {\displaystyle (u,v)} ; (2) while there is a path p from s to t in G f {\displaystyle G_{f}} such that c f ( u , v ) > 0 {\displaystyle c_{f}(u,v)>0} for all edges ( u , v ) ∈ p {\displaystyle (u,v)\in p} , find c f ( p ) = min { c f ( u , v ) : ( u , v ) ∈ p } {\displaystyle c_{f}(p)=\min\{c_{f}(u,v):(u,v)\in p\}} and, for each edge ( u , v ) ∈ p {\displaystyle (u,v)\in p} , increase f ( u , v ) {\displaystyle f(u,v)} by c f ( p ) {\displaystyle c_{f}(p)} and decrease f ( v , u ) {\displaystyle f(v,u)} by c f ( p ) {\displaystyle c_{f}(p)} . The path in step 2 can be found with, for example, breadth-first search (BFS) or depth-first search in G f ( V , E f ) {\displaystyle G_{f}(V,E_{f})} .
The former is known as the Edmonds–Karp algorithm. When no more paths in step 2 can be found, s will not be able to reach t in the residual network. If S is the set of nodes reachable by s in the residual network, then the total capacity in the original network of edges from S to the remainder of V is on the one hand equal to the total flow we found from s to t, and on the other hand serves as an upper bound for all such flows. This proves that the flow we found is maximal. See also Max-flow Min-cut theorem. If the graph G ( V , E ) {\displaystyle G(V,E)} has multiple sources and sinks, we act as follows: Suppose that T = { t ∣ t is a sink } {\displaystyle T=\{t\mid t{\text{ is a sink}}\}} and S = { s ∣ s is a source } {\displaystyle S=\{s\mid s{\text{ is a source}}\}} . Add a new source s ∗ {\displaystyle s^{*}} with an edge ( s ∗ , s ) {\displaystyle (s^{*},s)} from s ∗ {\displaystyle s^{*}} to every node s ∈ S {\displaystyle s\in S} , with capacity c ( s ∗ , s ) = d s = ∑ ( s , u ) ∈ E c ( s , u ) {\displaystyle c(s^{*},s)=d_{s}=\sum _{(s,u)\in E}c(s,u)} . And add a new sink t ∗ {\displaystyle t^{*}} with an edge ( t , t ∗ ) {\displaystyle (t,t^{*})} from every node t ∈ T {\displaystyle t\in T} to t ∗ {\displaystyle t^{*}} , with capacity c ( t , t ∗ ) = d t = ∑ ( v , t ) ∈ E c ( v , t ) {\displaystyle c(t,t^{*})=d_{t}=\sum _{(v,t)\in E}c(v,t)} . Then apply the Ford–Fulkerson algorithm. Also, if a node u has capacity constraint d u {\displaystyle d_{u}} , we replace this node with two nodes u i n , u o u t {\displaystyle u_{\mathrm {in} },u_{\mathrm {out} }} , and an edge ( u i n , u o u t ) {\displaystyle (u_{\mathrm {in} },u_{\mathrm {out} })} , with capacity c ( u i n , u o u t ) = d u {\displaystyle c(u_{\mathrm {in} },u_{\mathrm {out} })=d_{u}} . We can then apply the Ford–Fulkerson algorithm. 
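The super-source/super-sink construction above can be written as a small transformation of the capacity table before running any max-flow routine. The nested-dict format and the reserved node names 's*' and 't*' are illustrative assumptions (they must not collide with existing node names):

```python
def add_super_terminals(capacity, sources, sinks):
    """Reduce a multi-source/multi-sink network to a single-source,
    single-sink one: add a super-source 's*' with an edge to every
    source s of capacity d_s (the total capacity leaving s), and a
    super-sink 't*' with an edge from every sink t of capacity d_t
    (the total capacity entering t)."""
    cap = {u: dict(adj) for u, adj in capacity.items()}
    # c(s*, s) = d_s = sum of capacities on edges leaving s
    cap['s*'] = {s: sum(capacity.get(s, {}).values()) for s in sources}
    # c(t, t*) = d_t = sum of capacities on edges entering t
    for t in sinks:
        d_t = sum(adj.get(t, 0) for adj in capacity.values())
        cap.setdefault(t, {})['t*'] = d_t
    return cap, 's*', 't*'
```

The transformed network can then be fed to any single-source max-flow implementation.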
== Complexity == By adding the flow augmenting path to the flow already established in the graph, the maximum flow will be reached when no more flow augmenting paths can be found in the graph. However, there is no certainty that this situation will ever be reached, so the best that can be guaranteed is that the answer will be correct if the algorithm terminates. In the case that the algorithm does not terminate, the flow might not converge towards the maximum flow. However, this situation only occurs with irrational flow values. When the capacities are integers, the runtime of Ford–Fulkerson is bounded by O ( E f ) {\displaystyle O(Ef)} (see big O notation), where E {\displaystyle E} is the number of edges in the graph and f {\displaystyle f} is the maximum flow in the graph. This is because each augmenting path can be found in O ( E ) {\displaystyle O(E)} time and increases the flow by an integer amount of at least 1 {\displaystyle 1} , with the upper bound f {\displaystyle f} . A variation of the Ford–Fulkerson algorithm with guaranteed termination and a runtime independent of the maximum flow value is the Edmonds–Karp algorithm, which runs in O ( V E 2 ) {\displaystyle O(VE^{2})} time. == Integer flow example == The following example shows the first steps of Ford–Fulkerson in a flow network with 4 nodes, source A {\displaystyle A} and sink D {\displaystyle D} . This example shows the worst-case behaviour of the algorithm. In each step, only a flow of 1 {\displaystyle 1} is sent across the network. If breadth-first search were used instead, only two steps would be needed. == Non-terminating example == Consider the flow network shown on the right, with source s {\displaystyle s} , sink t {\displaystyle t} , capacities of edges e 1 = 1 {\displaystyle e_{1}=1} , e 2 = r = ( √5 − 1 ) / 2 {\displaystyle e_{2}=r=({\sqrt {5}}-1)/2} and e 3 = 1 {\displaystyle e_{3}=1} , and the capacity of all other edges some integer M ≥ 2 {\displaystyle M\geq 2} . 
The constant r {\displaystyle r} was chosen so that r 2 = 1 − r {\displaystyle r^{2}=1-r} . We use augmenting paths according to the following table, where p 1 = { s , v 4 , v 3 , v 2 , v 1 , t } {\displaystyle p_{1}=\{s,v_{4},v_{3},v_{2},v_{1},t\}} , p 2 = { s , v 2 , v 3 , v 4 , t } {\displaystyle p_{2}=\{s,v_{2},v_{3},v_{4},t\}} and p 3 = { s , v 1 , v 2 , v 3 , t } {\displaystyle p_{3}=\{s,v_{1},v_{2},v_{3},t\}} . Note that after step 1 as well as after step 5, the residual capacities of edges e 1 {\displaystyle e_{1}} , e 2 {\displaystyle e_{2}} and e 3 {\displaystyle e_{3}} are in the form r n {\displaystyle r^{n}} , r n + 1 {\displaystyle r^{n+1}} and 0 {\displaystyle 0} , respectively, for some n ∈ N {\displaystyle n\in \mathbb {N} } . This means that we can use augmenting paths p 1 {\displaystyle p_{1}} , p 2 {\displaystyle p_{2}} , p 1 {\displaystyle p_{1}} and p 3 {\displaystyle p_{3}} infinitely many times, and the residual capacities of these edges will always be of the same form. Total flow in the network after step 5 is 1 + 2 ( r 1 + r 2 ) {\displaystyle 1+2(r^{1}+r^{2})} . If we continue to use augmenting paths as above, the total flow converges to 1 + 2 ∑ i = 1 ∞ r i = 3 + 2 r {\displaystyle \textstyle 1+2\sum _{i=1}^{\infty }r^{i}=3+2r} . However, note that there is a flow of value 2 M + 1 {\displaystyle 2M+1} , by sending M {\displaystyle M} units of flow along s v 1 t {\displaystyle sv_{1}t} , 1 unit of flow along s v 2 v 3 t {\displaystyle sv_{2}v_{3}t} , and M {\displaystyle M} units of flow along s v 4 t {\displaystyle sv_{4}t} . Therefore, the algorithm never terminates and the flow does not converge to the maximum flow. Another non-terminating example based on the Euclidean algorithm is given by Backman & Huynh (2018), where they also show that the worst-case running time of the Ford–Fulkerson algorithm on a network G ( V , E ) {\displaystyle G(V,E)} in ordinal numbers is ω Θ ( | E | ) {\displaystyle \omega ^{\Theta (|E|)}} . 
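Irrational capacities such as r above are precisely where the unrestricted method can fail to terminate; fixing the path-selection rule to shortest augmenting paths found by breadth-first search (the Edmonds–Karp rule) guarantees termination on any capacities. A minimal Python sketch, again assuming a nested-dict capacity table:

```python
from collections import defaultdict, deque

def edmonds_karp(capacity, s, t):
    """Ford-Fulkerson with BFS path selection (Edmonds-Karp):
    always augment along a shortest path in the residual graph."""
    residual = defaultdict(lambda: defaultdict(int))
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] += c
    flow = 0
    while True:
        # BFS from s, recording each node's predecessor on a shortest path.
        pred = {s: None}
        queue = deque([s])
        while queue and t not in pred:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in pred:
                    pred[v] = u
                    queue.append(v)
        if t not in pred:
            return flow  # no augmenting path left
        # Recover the path and its bottleneck residual capacity.
        path = []
        v = t
        while pred[v] is not None:
            path.append((pred[v], v))
            v = pred[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

Since the number of augmenting iterations is bounded independently of the capacity values, this variant terminates even on the network above.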
== Python implementation of the Edmonds–Karp algorithm == == See also == Berge's theorem Approximate max-flow min-cut theorem Turn restriction routing Dinic's algorithm == Notes == == References == Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Section 26.2: The Ford–Fulkerson method". Introduction to Algorithms (Second ed.). MIT Press and McGraw–Hill. pp. 651–664. ISBN 0-262-03293-7. George T. Heineman; Gary Pollice; Stanley Selkow (2008). "Chapter 8: Network Flow Algorithms". Algorithms in a Nutshell. O'Reilly Media. pp. 226–250. ISBN 978-0-596-51624-6. Jon Kleinberg; Éva Tardos (2006). "Chapter 7: Extensions to the Maximum-Flow Problem". Algorithm Design. Pearson Education. pp. 378–384. ISBN 0-321-29535-8. Samuel Gutekunst (2019). ENGRI 1101. Cornell University. Backman, Spencer; Huynh, Tony (2018). "Transfinite Ford–Fulkerson on a finite network". Computability. 7 (4): 341–347. arXiv:1504.04363. doi:10.3233/COM-180082. S2CID 15497138. == External links == A tutorial explaining the Ford–Fulkerson method to solve the max-flow problem Another Java animation Java Web Start application Media related to Ford-Fulkerson's algorithm at Wikimedia Commons
Wikipedia/Ford–Fulkerson_algorithm
In mathematical optimization, penalty methods are a certain class of algorithms for solving constrained optimization problems. A penalty method replaces a constrained optimization problem by a series of unconstrained problems whose solutions ideally converge to the solution of the original constrained problem. The unconstrained problems are formed by adding a term, called a penalty function, to the objective function that consists of a penalty parameter multiplied by a measure of violation of the constraints. The measure of violation is nonzero when the constraints are violated and is zero in the region where constraints are not violated. == Description == Let us say we are solving the following constrained problem: min x f ( x ) {\displaystyle \min _{x}f(\mathbf {x} )} subject to c i ( x ) ≤ 0 ∀ i ∈ I . {\displaystyle c_{i}(\mathbf {x} )\leq 0~\forall i\in I.} This problem can be solved as a series of unconstrained minimization problems min f p ( x ) := f ( x ) + p ∑ i ∈ I g ( c i ( x ) ) {\displaystyle \min f_{p}(\mathbf {x} ):=f(\mathbf {x} )+p~\sum _{i\in I}~g(c_{i}(\mathbf {x} ))} where g ( c i ( x ) ) = max ( 0 , c i ( x ) ) 2 . {\displaystyle g(c_{i}(\mathbf {x} ))=\max(0,c_{i}(\mathbf {x} ))^{2}.} In the above equations, g ( c i ( x ) ) {\displaystyle g(c_{i}(\mathbf {x} ))} is the exterior penalty function while p {\displaystyle p} is the penalty coefficient. When the penalty coefficient p {\displaystyle p} is 0, fp = f, meaning that we do not take the constraints into account. In each iteration of the method, we increase the penalty coefficient p {\displaystyle p} (e.g. by a factor of 10), solve the unconstrained problem and use the solution as the initial guess for the next iteration. Solutions of the successive unconstrained problems will asymptotically converge to the solution of the original constrained problem. Common penalty functions in constrained optimization are the quadratic penalty function and the deadzone-linear penalty function. 
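The iteration just described (solve the penalized unconstrained problem, increase p, and reuse the solution as the next initial guess) can be illustrated on a toy one-dimensional problem: minimize (x − 2)² subject to x ≤ 1, whose constrained optimum is x = 1. The toy objective, the crude derivative-based inner solver, and the step size lr = 0.5/(1 + p) are all assumptions made here for illustration.

```python
def penalized_objective(x, p):
    """f(x) = (x - 2)^2 subject to c(x) = x - 1 <= 0, with the quadratic
    exterior penalty g(c(x)) = max(0, c(x))^2."""
    return (x - 2.0) ** 2 + p * max(0.0, x - 1.0) ** 2

def minimize_1d(obj, x0, lr, steps=5000, h=1e-6):
    """Crude unconstrained inner solver: gradient descent with a
    central-difference numerical derivative.  Any unconstrained
    optimization method could be substituted here."""
    x = x0
    for _ in range(steps):
        grad = (obj(x + h) - obj(x - h)) / (2.0 * h)
        x -= lr * grad
    return x

x = 2.0  # the unconstrained minimizer of f, used as the first initial guess
for p in [1.0, 10.0, 100.0, 1000.0]:
    # Warm-start each solve from the previous solution, as described above.
    x = minimize_1d(lambda y: penalized_objective(y, p), x, lr=0.5 / (1.0 + p))
```

Each intermediate minimizer x*(p) = (2 + p)/(1 + p) lies slightly outside the feasible region and approaches the constrained optimum 1 from outside as p grows, which is characteristic of exterior penalty methods.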
== Convergence == We first consider the set of global optimizers of the original problem, X* (Thm. 9.2.1). Assume that the objective f has bounded level sets, and that the original problem is feasible. Then: For every penalty coefficient p, the set of global optimizers of the penalized problem, Xp*, is non-empty. For every ε>0, there exists a penalty coefficient p such that the set Xp* is contained in an ε-neighborhood of the set X*. This theorem is helpful mostly when fp is convex, since in this case, we can find the global optimizers of fp. A second theorem considers local optimizers (Thm. 9.2.2). Let x* be a non-degenerate local optimizer of the original problem ("nondegenerate" means that the gradients of the active constraints are linearly independent and the second-order sufficient optimality condition is satisfied). Then, there exists a neighborhood V* of x*, and some p0>0, such that for all p>p0, the penalized objective fp has exactly one critical point in V* (denoted by x*(p)), and x*(p) approaches x* as p→∞. Also, the objective value f(x*(p)) is weakly-increasing with p. == Practical applications == Image compression optimization algorithms can make use of penalty functions for selecting how best to compress zones of colour to single representative values. The penalty method is often used in computational mechanics, especially in the finite element method, to enforce conditions such as contact. The advantage of the penalty method is that, once we have a penalized objective with no constraints, we can use any unconstrained optimization method to solve it. The disadvantage is that, as the penalty coefficient p grows, the unconstrained problem becomes ill-conditioned: the coefficients are very large, and this may cause numeric errors and slow convergence of the unconstrained minimization (Sub. 9.2). == See also == Barrier methods constitute an alternative class of algorithms for constrained optimization. 
These methods also add a penalty-like term to the objective function, but in this case the iterates are forced to remain interior to the feasible domain and the barrier is in place to bias the iterates to remain away from the boundary of the feasible region. In practice they are often more efficient than penalty methods. Augmented Lagrangian methods are alternative penalty methods, which make it possible to obtain high-accuracy solutions without pushing the penalty coefficient to infinity. This makes the unconstrained penalized problems easier to solve. Other nonlinear programming algorithms: Sequential quadratic programming Successive linear programming Sequential linear-quadratic programming Interior point method == References == Smith, Alice E.; Coit, David W. "Penalty functions". Handbook of Evolutionary Computation, Section C 5.2. Oxford University Press and Institute of Physics Publishing, 1996. Coello, A.C. "Theoretical and Numerical Constraint-Handling Techniques Used with Evolutionary Algorithms: A Survey of the State of the Art". Comput. Methods Appl. Mech. Engrg. 191(11–12), 1245–1287. Courant, R. "Variational methods for the solution of problems of equilibrium and vibrations". Bull. Amer. Math. Soc., 49, 1–23, 1943. Yin, Wotao. Optimization Algorithms for constrained optimization. Department of Mathematics, UCLA, 2015.
Wikipedia/Penalty_method
In mathematical optimization, the push–relabel algorithm (alternatively, preflow–push algorithm) is an algorithm for computing maximum flows in a flow network. The name "push–relabel" comes from the two basic operations used in the algorithm. Throughout its execution, the algorithm maintains a "preflow" and gradually converts it into a maximum flow by moving flow locally between neighboring nodes using push operations under the guidance of an admissible network maintained by relabel operations. In comparison, the Ford–Fulkerson algorithm performs global augmentations that send flow following paths from the source all the way to the sink. The push–relabel algorithm is considered one of the most efficient maximum flow algorithms. The generic algorithm has a strongly polynomial O(V 2E) time complexity, which is asymptotically more efficient than the O(VE 2) Edmonds–Karp algorithm. Specific variants of the algorithm achieve even lower time complexities. The variant based on the highest label node selection rule has O(V 2√E) time complexity and is generally regarded as the benchmark for maximum flow algorithms. Subcubic O(VE log(V 2/E)) time complexity can be achieved using dynamic trees, although in practice it is less efficient. The push–relabel algorithm has been extended to compute minimum cost flows. The idea of distance labels has led to a more efficient augmenting path algorithm, which in turn can be incorporated back into the push–relabel algorithm to create a variant with even higher empirical performance. == History == The concept of a preflow was originally designed by Alexander V. Karzanov and was published in 1974 in Soviet Mathematics Doklady 15. This pre-flow algorithm also used a push operation; however, it used distances in the auxiliary network to determine where to push the flow instead of a labeling system. The push–relabel algorithm was designed by Andrew V. Goldberg and Robert Tarjan. 
The algorithm was initially presented in November 1986 in STOC '86: Proceedings of the eighteenth annual ACM symposium on Theory of computing, and then officially in October 1988 as an article in the Journal of the ACM. Both papers detail a generic form of the algorithm terminating in O(V 2E) along with an O(V 3) sequential implementation, an O(VE log(V 2/E)) implementation using dynamic trees, and a parallel/distributed implementation. Goldberg and Tarjan introduced distance labels by incorporating them into the parallel maximum flow algorithm of Yossi Shiloach and Uzi Vishkin. == Concepts == === Definitions and notations === Let: G = (V, E) be a network with capacity function c: V × V → R {\displaystyle \mathbb {R} } ∞, F = (G, c, s, t) a flow network, where s ∈ V and t ∈ V are chosen source and sink vertices respectively, f : V × V → R {\displaystyle \mathbb {R} } denote a pre-flow in F, xf : V → R {\displaystyle \mathbb {R} } denote the excess function with respect to the flow f, defined by xf (u) = Σv ∈ V f (v, u) − Σv ∈ V f (u, v), cf : V × V → R {\displaystyle \mathbb {R} } ∞ denote the residual capacity function with respect to the flow f, defined by cf (e) = c(e) − f (e), Ef ⊂ E being the edges where f < c, and Gf (V, Ef ) denote the residual network of G with respect to the flow f. The push–relabel algorithm uses a nonnegative integer valid labeling function which makes use of distance labels, or heights, on nodes to determine which arcs should be selected for the push operation. This labeling function is denoted by 𝓁 : V → N {\displaystyle \mathbb {N} } . This function must satisfy the following conditions in order to be considered valid: Valid labeling: 𝓁(u) ≤ 𝓁(v) + 1 for all (u, v) ∈ Ef Source condition: 𝓁(s) = | V | Sink conservation: 𝓁(t) = 0 In the algorithm, the label values of s and t are fixed. 𝓁(u) is a lower bound of the unweighted distance from u to t in Gf if t is reachable from u. 
If u has been disconnected from t, then 𝓁(u) − | V | is a lower bound of the unweighted distance from u to s. As a result, if a valid labeling function exists, there are no s-t paths in Gf because no such paths can be longer than | V | − 1. An arc (u, v) ∈ Ef is called admissible if 𝓁(u) = 𝓁(v) + 1. The admissible network G̃f (V, Ẽf ) is composed of the set of arcs e ∈ Ef that are admissible. The admissible network is acyclic. For a fixed flow f, a vertex v ∉ {s, t} is called active if it has positive excess with respect to f, i.e., xf (v) > 0. === Operations === ==== Initialization ==== The algorithm starts by creating a residual graph, initializing the preflow values to zero and performing a set of saturating push operations on residual arcs (s, v) exiting the source, where v ∈ V \ {s}. Similarly, the labels are initialized such that the label at the source is the number of nodes in the graph, 𝓁(s) = | V |, and all other nodes are given a label of zero. Once the initialization is complete the algorithm repeatedly performs either the push or relabel operations against active nodes until no applicable operation can be performed. ==== Push ==== The push operation applies on an admissible out-arc (u, v) of an active node u in Gf. It moves min{xf (u), cf (u,v)} units of flow from u to v.

push(u, v):
    assert xf[u] > 0 and 𝓁[u] == 𝓁[v] + 1
    Δ = min(xf[u], c[u][v] - f[u][v])
    f[u][v] += Δ
    f[v][u] -= Δ
    xf[u] -= Δ
    xf[v] += Δ

A push operation that causes f (u, v) to reach c(u, v) is called a saturating push since it uses up all the available capacity of the residual arc. Otherwise, all of the excess at the node is pushed across the residual arc. This is called an unsaturating or non-saturating push. ==== Relabel ==== The relabel operation applies on an active node u which is neither the source nor the sink without any admissible out-arcs in Gf. It modifies 𝓁(u) to be the minimum value such that an admissible out-arc is created. 
Note that this always increases 𝓁(u) and never creates a steep arc, which is an arc (u, v) such that cf (u, v) > 0, and 𝓁(u) > 𝓁(v) + 1.

relabel(u):
    assert xf[u] > 0 and 𝓁[u] <= 𝓁[v] for all v such that cf[u][v] > 0
    𝓁[u] = 1 + min(𝓁[v] for all v such that cf[u][v] > 0)

==== Effects of push and relabel ==== After a push or relabel operation, 𝓁 remains a valid labeling function with respect to f. For a push operation on an admissible arc (u, v), it may add an arc (v, u) to Ef, where 𝓁(v) = 𝓁(u) − 1 ≤ 𝓁(u) + 1; it may also remove the arc (u, v) from Ef, where it effectively removes the constraint 𝓁(u) ≤ 𝓁(v) + 1. To see that a relabel operation on node u preserves the validity of 𝓁(u), notice that this is trivially guaranteed by definition for the out-arcs of u in Gf. For the in-arcs of u in Gf, the increased 𝓁(u) can only satisfy the constraints less tightly, not violate them. == The generic push–relabel algorithm == The generic push–relabel algorithm is used as a proof of concept only and does not contain implementation details on how to select an active node for the push and relabel operations. This generic version of the algorithm will terminate in O(V 2E). Since 𝓁(s) = | V |, 𝓁(t) = 0, and there are no paths longer than | V | − 1 in Gf, in order for 𝓁(s) to satisfy the valid labeling condition s must be disconnected from t. At initialisation, the algorithm fulfills this requirement by creating a pre-flow f that saturates all out-arcs of s, after which 𝓁(v) = 0 is trivially valid for all v ∈ V \ {s, t}. After initialisation, the algorithm repeatedly executes an applicable push or relabel operation until no such operations apply, at which point the pre-flow has been converted into a maximum flow. 
generic-push-relabel(G, c, s, t):
    create a pre-flow f that saturates all out-arcs of s
    let 𝓁[s] = |V|
    let 𝓁[v] = 0 for all v ∈ V \ {s}
    while there is an applicable push or relabel operation do
        execute the operation

=== Correctness === The algorithm maintains the condition that 𝓁 is a valid labeling during its execution. This can be proven true by examining the effects of the push and relabel operations on the label function 𝓁. The relabel operation sets the label value to the associated minimum plus one, which will always satisfy the 𝓁(u) ≤ 𝓁(v) + 1 constraint. The push operation can send flow from u to v if 𝓁(u) = 𝓁(v) + 1. This may add (v, u) to Gf and may delete (u, v) from Gf . The addition of (v, u) to Gf will not affect the valid labeling since 𝓁(v) = 𝓁(u) − 1. The deletion of (u, v) from Gf removes the corresponding constraint since the valid labeling property 𝓁(u) ≤ 𝓁(v) + 1 only applies to residual arcs in Gf . If a preflow f and a valid labeling 𝓁 for f exist, then there is no augmenting path from s to t in the residual graph Gf . This can be proven by contradiction based on inequalities which arise in the labeling function when supposing that an augmenting path does exist. If the algorithm terminates, then all nodes in V \ {s, t} are not active. This means all v ∈ V \ {s, t} have no excess flow, and with no excess the preflow f obeys the flow conservation constraint and can be considered a normal flow. This flow is the maximum flow according to the max-flow min-cut theorem since there is no augmenting path from s to t. Therefore, the algorithm will return the maximum flow upon termination. === Time complexity === In order to bound the time complexity of the algorithm, we must analyze the number of push and relabel operations which occur within the main loop. The numbers of relabel, saturating push and nonsaturating push operations are analyzed separately. 
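The generic algorithm can be sketched in Python. The representation (dict-of-dicts residual capacities, recomputing the active set each round) is an assumption chosen for clarity rather than speed; any applicable push or relabel may be executed, and the result is the same maximum flow.

```python
from collections import defaultdict

def push_relabel(capacity, s, t):
    """Generic push-relabel: saturate the out-arcs of s, then repeatedly
    push on admissible arcs out of active nodes, relabeling an active
    node whenever it has no admissible out-arc."""
    nodes = set(capacity) | {v for adj in capacity.values() for v in adj}
    n = len(nodes)
    cf = defaultdict(lambda: defaultdict(int))  # residual capacities
    for u in capacity:
        for v, c in capacity[u].items():
            cf[u][v] += c
    label = {v: 0 for v in nodes}
    excess = {v: 0 for v in nodes}
    label[s] = n
    for v, c in list(cf[s].items()):  # initial saturating pushes out of s
        cf[s][v] = 0
        cf[v][s] += c
        excess[v] += c
        excess[s] -= c

    def active():
        return [u for u in nodes if u not in (s, t) and excess[u] > 0]

    act = active()
    while act:
        u = act[0]
        pushed = False
        for v in list(cf[u]):
            if cf[u][v] > 0 and label[u] == label[v] + 1:
                d = min(excess[u], cf[u][v])   # push on admissible arc
                cf[u][v] -= d
                cf[v][u] += d
                excess[u] -= d
                excess[v] += d
                pushed = True
                break
        if not pushed:                          # relabel
            label[u] = 1 + min(label[v] for v in cf[u] if cf[u][v] > 0)
        act = active()
    return excess[t]
```

On termination no node other than s and t carries excess, so the excess accumulated at t is the value of the maximum flow.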
In the algorithm, the relabel operation can be performed at most (2| V | − 1)(| V | − 2) < 2| V |2 times. This is because the labeling 𝓁(u) value for any node u can never decrease, and the maximum label value is at most 2| V | − 1 for all nodes. This means the relabel operation could potentially be performed 2| V | − 1 times for all nodes V \ {s, t} (i.e. | V | − 2). This results in a bound of O(V 2) for the relabel operation. Each saturating push on an admissible arc (u, v) removes the arc from Gf . For the arc to be reinserted into Gf for another saturating push, v must first be relabeled, followed by a push on the arc (v, u), then u must be relabeled. In the process, 𝓁(u) increases by at least two. Therefore, there are O(V) saturating pushes on (u, v), and the total number of saturating pushes is at most 2| V || E |. This results in a time bound of O(VE) for the saturating push operations. Bounding the number of nonsaturating pushes can be achieved via a potential argument. We use the potential function Φ = Σ[u ∈ V ∧ xf (u) > 0] 𝓁(u) (i.e. Φ is the sum of the labels of all active nodes). It is obvious that Φ is 0 initially and stays nonnegative throughout the execution of the algorithm. Both relabels and saturating pushes can increase Φ. However, the value of Φ must be equal to 0 at termination since there cannot be any remaining active nodes at the end of the algorithm's execution. This means that over the execution of the algorithm, the nonsaturating pushes must make up the difference of the relabel and saturating push operations in order for Φ to terminate with a value of 0. The relabel operation can increase Φ by at most (2| V | − 1)(| V | − 2). A saturating push on (u, v) activates v if it was inactive before the push, increasing Φ by at most 2| V | − 1. Hence, the total contribution of all saturating pushes operations to Φ is at most (2| V | − 1)(2| V || E |). 
A nonsaturating push on (u, v) always deactivates u, but it can also activate v as in a saturating push. As a result, it decreases Φ by at least 𝓁(u) − 𝓁(v) = 1. Since relabels and saturating pushes increase Φ, the total number of nonsaturating pushes must make up the difference of (2| V | − 1)(| V | − 2) + (2| V | − 1)(2| V || E |) ≤ 4| V |2| E |. This results in a time bound of O(V 2E) for the nonsaturating push operations. In sum, the algorithm executes O(V 2) relabels, O(VE) saturating pushes and O(V 2E) nonsaturating pushes. Data structures can be designed to pick and execute an applicable operation in O(1) time. Therefore, the time complexity of the algorithm is O(V 2E). === Example === The following is a sample execution of the generic push-relabel algorithm, as defined above, on the following simple network flow graph diagram. In the example, the h and e values denote the label 𝓁 and excess xf , respectively, of the node during the execution of the algorithm. Each residual graph in the example only contains the residual arcs with a capacity larger than zero. Each residual graph may contain multiple iterations of the perform operation loop. The example (but with initial flow of 0) can be run here interactively. == Practical implementations == While the generic push–relabel algorithm has O(V 2E) time complexity, efficient implementations achieve O(V 3) or lower time complexity by enforcing appropriate rules in selecting applicable push and relabel operations. The empirical performance can be further improved by heuristics. === "Current-arc" data structure and discharge operation === The "current-arc" data structure is a mechanism for visiting the in- and out-neighbors of a node in the flow network in a static circular order. If a singly linked list of neighbors is created for a node, the data structure can be as simple as a pointer into the list that steps through the list and rewinds to the head when it runs off the end. 
Based on the "current-arc" data structure, the discharge operation can be defined. A discharge operation applies on an active node and repeatedly pushes flow from the node until it becomes inactive, relabeling it as necessary to create admissible arcs in the process.

discharge(u):
    while xf[u] > 0 do
        if current-arc[u] has run off the end of neighbors[u] then
            relabel(u)
            rewind current-arc[u]
        else
            let (u, v) = current-arc[u]
            if (u, v) is admissible then
                push(u, v)
            else
                let current-arc[u] point to the next neighbor of u

Finding the next admissible edge to push on has O ( 1 ) {\displaystyle O(1)} amortized complexity. The current-arc pointer only moves to the next neighbor when the edge to the current neighbor is saturated or non-admissible, and neither of these two properties can change until the active node u {\displaystyle u} is relabelled. Therefore, when the pointer runs off, there are no admissible unsaturated edges and we have to relabel the active node u {\displaystyle u} , so having moved the pointer O ( V ) {\displaystyle O(V)} times is paid for by the O ( V ) {\displaystyle O(V)} relabel operation. === Active node selection rules === Definition of the discharge operation reduces the push–relabel algorithm to repeatedly selecting an active node to discharge. Depending on the selection rule, the algorithm exhibits different time complexities. For the sake of brevity, we ignore s and t when referring to the nodes in the following discussion. ==== FIFO selection rule ==== The FIFO push–relabel algorithm organizes the active nodes into a queue. The initial active nodes can be inserted in arbitrary order. The algorithm always removes the node at the front of the queue for discharging. Whenever an inactive node becomes active, it is appended to the back of the queue. The algorithm has O(V 3) time complexity. 
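The discharge operation and the FIFO selection rule can be combined into one compact routine. This sketch is illustrative (dict-of-dicts residual graph, Python lists as the static neighbor order for the current-arc pointer), not reference code from the original papers:

```python
from collections import defaultdict, deque

def fifo_push_relabel(capacity, s, t):
    """FIFO push-relabel: dequeue an active node and discharge it,
    i.e. push until it is inactive, relabeling and rewinding its
    current-arc pointer whenever the pointer runs off the end."""
    nodes = set(capacity) | {v for adj in capacity.values() for v in adj}
    n = len(nodes)
    cf = defaultdict(lambda: defaultdict(int))
    for u in capacity:
        for v, c in capacity[u].items():
            cf[u][v] += c
            cf[v][u] += 0            # make sure the reverse arc exists
    neighbors = {u: list(cf[u]) for u in nodes}
    current = {u: 0 for u in nodes}  # current-arc index per node
    label = {v: 0 for v in nodes}
    excess = {v: 0 for v in nodes}
    label[s] = n
    queue = deque()
    for v in neighbors[s]:           # saturate all out-arcs of s
        c = cf[s][v]
        if c > 0:
            cf[s][v] = 0
            cf[v][s] += c
            excess[v] += c
            if v != t:
                queue.append(v)
    while queue:
        u = queue.popleft()
        while excess[u] > 0:         # discharge u
            if current[u] == len(neighbors[u]):
                label[u] = 1 + min(label[v] for v in neighbors[u]
                                   if cf[u][v] > 0)
                current[u] = 0       # relabel and rewind
            else:
                v = neighbors[u][current[u]]
                if cf[u][v] > 0 and label[u] == label[v] + 1:
                    d = min(excess[u], cf[u][v])
                    cf[u][v] -= d
                    cf[v][u] += d
                    excess[u] -= d
                    excess[v] += d
                    if v not in (s, t) and excess[v] == d:
                        queue.append(v)  # v just became active
                else:
                    current[u] += 1
    return excess[t]
```

The current-arc pointer advances only past saturated or non-admissible arcs and is rewound on relabel, matching the amortized argument given for the discharge operation.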
==== Relabel-to-front selection rule ==== The relabel-to-front push–relabel algorithm organizes all nodes into a linked list and maintains the invariant that the list is topologically sorted with respect to the admissible network. The algorithm scans the list from front to back and performs a discharge operation on the current node if it is active. If the node is relabeled, it is moved to the front of the list, and the scan is restarted from the front. The algorithm also has O(V 3) time complexity. ==== Highest label selection rule ==== The highest-label push–relabel algorithm organizes all nodes into buckets indexed by their labels. The algorithm always selects an active node with the largest label to discharge. The algorithm has O(V 2√E) time complexity. If the lowest-label selection rule is used instead, the time complexity becomes O(V 2E). === Implementation techniques === Although in the description of the generic push–relabel algorithm above, 𝓁(u) is set to zero for each node u other than s and t at the beginning, it is preferable to perform a backward breadth-first search from t to compute exact labels. The algorithm is typically separated into two phases. Phase one computes a maximum pre-flow by discharging only active nodes whose labels are below n. Phase two converts the maximum preflow into a maximum flow by returning excess flow that cannot reach t to s. It can be shown that phase two has O(VE) time complexity regardless of the order of push and relabel operations and is therefore dominated by phase one. Alternatively, it can be implemented using flow decomposition. Heuristics are crucial to improving the empirical performance of the algorithm. Two commonly used heuristics are the gap heuristic and the global relabeling heuristic. The gap heuristic detects gaps in the labeling function. 
If there is a label 0 < 𝓁' < | V | for which there is no node u such that 𝓁(u) = 𝓁', then any node u with 𝓁' < 𝓁(u) < | V | has been disconnected from t and can be relabeled to (| V | + 1) immediately. The global relabeling heuristic periodically performs backward breadth-first search from t in Gf to compute the exact labels of the nodes. Both heuristics skip unhelpful relabel operations, which are a bottleneck of the algorithm and contribute to the ineffectiveness of dynamic trees. == Sample implementations == == References ==
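The global relabeling heuristic amounts to a backward breadth-first search from t over the residual arcs, plus a second search from s, offset by |V|, for nodes that can no longer reach t. A sketch, in which the dict-of-dicts residual representation and the sentinel value 2|V| for "not yet labeled" are assumptions made here:

```python
from collections import deque

def global_relabel(cf, nodes, s, t):
    """Recompute exact distance labels: a backward BFS from t following
    residual arcs (u, v) with cf[u][v] > 0 (i.e. a forward BFS over the
    reversed residual graph), then the same from s with offset |V| for
    nodes disconnected from t."""
    n = len(nodes)
    label = {v: 2 * n for v in nodes}  # sentinel: not yet labeled
    for root, offset in ((t, 0), (s, n)):
        if label[root] < 2 * n:
            continue                   # already reached by the first BFS
        label[root] = offset
        queue = deque([root])
        while queue:
            v = queue.popleft()
            for u in nodes:
                if cf.get(u, {}).get(v, 0) > 0 and label[u] == 2 * n:
                    label[u] = label[v] + 1
                    queue.append(u)
    return label
```

Running this periodically replaces many small relabel steps with exact labels, which is the point of the heuristic.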
Wikipedia/Push–relabel_maximum_flow_algorithm
In computer science, the Edmonds–Karp algorithm is an implementation of the Ford–Fulkerson method for computing the maximum flow in a flow network in O ( | V | | E | 2 ) {\displaystyle O(|V||E|^{2})} time. The algorithm was first published by Yefim Dinitz in 1970, and independently published by Jack Edmonds and Richard Karp in 1972. Dinitz's algorithm includes additional techniques that reduce the running time to O ( | V | 2 | E | ) {\displaystyle O(|V|^{2}|E|)} . == Algorithm == The algorithm is identical to the Ford–Fulkerson algorithm, except that the search order when finding the augmenting path is defined. The path found must be a shortest path that has available capacity. This can be found by a breadth-first search, where we apply a weight of 1 to each edge. The running time of O ( | V | | E | 2 ) {\displaystyle O(|V||E|^{2})} is found by showing that each augmenting path can be found in O ( | E | ) {\displaystyle O(|E|)} time, that every time at least one of the E edges becomes saturated (an edge which has the maximum possible flow), that the distance from the saturated edge to the source along the augmenting path must be longer than last time it was saturated, and that the length is at most | V | {\displaystyle |V|} . Another property of this algorithm is that the length of the shortest augmenting path increases monotonically. A proof outline using these properties is as follows: The proof first establishes that the distance of the shortest path from the source node s to any non-sink node v in a residual flow network increases monotonically after each augmenting iteration (Lemma 1, proven below). Then, it shows that each of the | E | {\displaystyle |E|} edges can be critical at most | V | / 2 {\displaystyle {\frac {|V|}{2}}} times for the duration of the algorithm, giving an upper-bound of O ( | V | | E | / 2 ) ∈ O ( | V | | E | ) {\displaystyle O\left({\frac {|V||E|}{2}}\right)\in O(|V||E|)} augmenting iterations. 
Since each iteration takes O(|E|) time (bounded by the time for finding the shortest path using breadth-first search), the total running time of Edmonds–Karp is O(|V| |E|^2), as required. To prove Lemma 1, one can argue by contradiction, assuming that some augmenting iteration causes the shortest-path distance from s to v to decrease. Let f be the flow before such an augmentation and f′ the flow after it. Denote the shortest-path distance from node u to node v in the residual flow network G_f by δ_f(u, v). One derives a contradiction by showing that δ_f(s, v) ≤ δ_{f′}(s, v), meaning that the shortest-path distance between the source node s and the non-sink node v did not in fact decrease.

== Pseudocode ==

algorithm EdmondsKarp is
    input: graph  (graph[v] should be the list of edges coming out of vertex v in the
                   original graph and their corresponding constructed reverse edges
                   which are used for push-back flow.
                   Each edge should have a capacity 'cap', flow, source 's' and sink 't'
                   as parameters, as well as a pointer to the reverse edge 'rev'.)
           s      (Source vertex)
           t      (Sink vertex)
    output: flow  (Value of maximum flow)

    flow := 0   (Initialize flow to zero)
    repeat
        (Run a breadth-first search (bfs) to find the shortest s-t path.
         We use 'pred' to store the edge taken to get to each vertex,
         so we can recover the path afterwards)
        q := queue()
        q.push(s)
        pred := array(graph.length)
        while not empty(q) and pred[t] = null
            cur := q.pop()
            for Edge e in graph[cur] do
                if pred[e.t] = null and e.t ≠ s and e.cap > e.flow then
                    pred[e.t] := e
                    q.push(e.t)

        if not (pred[t] = null) then
            (We found an augmenting path.
             See how much flow we can send)
            df := ∞
            for (e := pred[t]; e ≠ null; e := pred[e.s]) do
                df := min(df, e.cap - e.flow)
            (And update edges by that amount)
            for (e := pred[t]; e ≠ null; e := pred[e.s]) do
                e.flow := e.flow + df
                e.rev.flow := e.rev.flow - df
            flow := flow + df

    until pred[t] = null  (i.e., until no augmenting path was found)
    return flow

== Example ==

Given a network of seven nodes, source A, sink G, and capacities as shown below:

In the pairs f/c written on the edges, f is the current flow and c is the capacity. The residual capacity from u to v is c_f(u, v) = c(u, v) − f(u, v), the total capacity minus the flow that is already used. If the net flow from u to v is negative, it contributes to the residual capacity. Notice how the length of the augmenting path found by the algorithm (in red) never decreases. The paths found are the shortest possible. The flow found is equal to the capacity across the minimum cut in the graph separating the source and the sink. There is only one minimal cut in this graph, partitioning the nodes into the sets {A, B, C, E} and {D, F, G}, with the capacity c(A,D) + c(C,D) + c(E,G) = 3 + 1 + 1 = 5.

== Notes ==

== References ==

Algorithms and Complexity (see pages 63–69). https://web.archive.org/web/20061005083406/http://www.cis.upenn.edu/~wilf/AlgComp3.html
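The pseudocode above can be turned into a short runnable sketch. The following Python version is a minimal illustration only: the adjacency-matrix residual representation and all names are choices made here, not prescribed by the article.

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Maximum s-t flow via shortest augmenting paths found by BFS.

    capacity: n x n matrix of edge capacities; a working copy holds
    the residual capacities, so the input matrix is left untouched."""
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        # Breadth-first search for a shortest augmenting path;
        # pred[v] records the vertex from which v was reached.
        pred = [None] * n
        q = deque([s])
        while q and pred[t] is None:
            u = q.popleft()
            for v in range(n):
                if pred[v] is None and v != s and residual[u][v] > 0:
                    pred[v] = u
                    q.append(v)
        if pred[t] is None:        # no augmenting path left: flow is maximum
            return flow
        # Bottleneck capacity along the path found.
        df = float('inf')
        v = t
        while v != s:
            u = pred[v]
            df = min(df, residual[u][v])
            v = u
        # Push df units of flow, updating forward and reverse residuals.
        v = t
        while v != s:
            u = pred[v]
            residual[u][v] -= df
            residual[v][u] += df
            v = u
        flow += df
```

For instance, on a two-vertex network with a single source-to-sink edge of capacity 7, `edmonds_karp([[0, 7], [0, 0]], 0, 1)` returns 7.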
Wikipedia/Edmonds–Karp_algorithm
In computer science, the Floyd–Warshall algorithm (also known as Floyd's algorithm, the Roy–Warshall algorithm, the Roy–Floyd algorithm, or the WFI algorithm) is an algorithm for finding shortest paths in a directed weighted graph with positive or negative edge weights (but with no negative cycles). A single execution of the algorithm will find the lengths (summed weights) of shortest paths between all pairs of vertices. Although it does not return details of the paths themselves, it is possible to reconstruct the paths with simple modifications to the algorithm. Versions of the algorithm can also be used for finding the transitive closure of a relation R {\displaystyle R} , or (in connection with the Schulze voting system) widest paths between all pairs of vertices in a weighted graph. == History and naming == The Floyd–Warshall algorithm is an example of dynamic programming, and was published in its currently recognized form by Robert Floyd in 1962. However, it is essentially the same as algorithms previously published by Bernard Roy in 1959 and also by Stephen Warshall in 1962 for finding the transitive closure of a graph, and is closely related to Kleene's algorithm (published in 1956) for converting a deterministic finite automaton into a regular expression, with the difference being the use of a min-plus semiring. The modern formulation of the algorithm as three nested for-loops was first described by Peter Ingerman, also in 1962. == Algorithm == The Floyd–Warshall algorithm compares many possible paths through the graph between each pair of vertices. It is guaranteed to find all shortest paths and is able to do this with Θ ( | V | 3 ) {\displaystyle \Theta (|V|^{3})} comparisons in a graph, even though there may be Θ ( | V | 2 ) {\displaystyle \Theta (|V|^{2})} edges in the graph. It does so by incrementally improving an estimate on the shortest path between two vertices, until the estimate is optimal. 
Consider a graph G {\displaystyle G} with vertices V {\displaystyle V} numbered 1 through N {\displaystyle N} . Further consider a function s h o r t e s t P a t h ( i , j , k ) {\displaystyle \mathrm {shortestPath} (i,j,k)} that returns the length of the shortest possible path (if one exists) from i {\displaystyle i} to j {\displaystyle j} using vertices only from the set { 1 , 2 , … , k } {\displaystyle \{1,2,\ldots ,k\}} as intermediate points along the way. Now, given this function, our goal is to find the length of the shortest path from each i {\displaystyle i} to each j {\displaystyle j} using any vertex in { 1 , 2 , … , N } {\displaystyle \{1,2,\ldots ,N\}} . By definition, this is the value s h o r t e s t P a t h ( i , j , N ) {\displaystyle \mathrm {shortestPath} (i,j,N)} , which we will find recursively. Observe that s h o r t e s t P a t h ( i , j , k ) {\displaystyle \mathrm {shortestPath} (i,j,k)} must be less than or equal to s h o r t e s t P a t h ( i , j , k − 1 ) {\displaystyle \mathrm {shortestPath} (i,j,k-1)} : we have more flexibility if we are allowed to use the vertex k {\displaystyle k} . If s h o r t e s t P a t h ( i , j , k ) {\displaystyle \mathrm {shortestPath} (i,j,k)} is in fact less than s h o r t e s t P a t h ( i , j , k − 1 ) {\displaystyle \mathrm {shortestPath} (i,j,k-1)} , then there must be a path from i {\displaystyle i} to j {\displaystyle j} using the vertices { 1 , 2 , … , k } {\displaystyle \{1,2,\ldots ,k\}} that is shorter than any such path that does not use the vertex k {\displaystyle k} . Since there are no negative cycles this path can be decomposed as: (1) a path from i {\displaystyle i} to k {\displaystyle k} that uses the vertices { 1 , 2 , … , k − 1 } {\displaystyle \{1,2,\ldots ,k-1\}} , followed by (2) a path from k {\displaystyle k} to j {\displaystyle j} that uses the vertices { 1 , 2 , … , k − 1 } {\displaystyle \{1,2,\ldots ,k-1\}} . 
And of course, each of these must be a shortest such path (or one of several), otherwise we could further decrease the length. In other words, we have arrived at the recursive formula

shortestPath(i, j, k) = min( shortestPath(i, j, k−1),
                             shortestPath(i, k, k−1) + shortestPath(k, j, k−1) ).

The base case is given by

shortestPath(i, j, 0) = w(i, j),

where w(i, j) denotes the weight of the edge from i to j if one exists and ∞ (infinity) otherwise. These formulas are the heart of the Floyd–Warshall algorithm. The algorithm works by first computing shortestPath(i, j, k) for all (i, j) pairs for k = 0, then k = 1, then k = 2, and so on. This process continues until k = N, at which point we have found the shortest path for all (i, j) pairs using any intermediate vertices. Pseudocode for this basic version follows.

=== Pseudocode ===

let dist be a |V| × |V| array of minimum distances initialized to ∞ (infinity)

for each edge (u, v) do
    dist[u][v] = w(u, v)  // The weight of the edge (u, v)
for each vertex v do
    dist[v][v] = 0
for k from 1 to |V|
    for i from 1 to |V|
        for j from 1 to |V|
            if dist[i][j] > dist[i][k] + dist[k][j]
                dist[i][j] = dist[i][k] + dist[k][j]
            end if

== Example ==

The algorithm above is executed on the graph on the left below:

Prior to the first recursion of the outer loop, labeled k = 0 above, the only known paths correspond to the single edges in the graph.
At k = 1, paths that go through the vertex 1 are found: in particular, the path [2,1,3] is found, replacing the path [2,3] which has fewer edges but is longer (in terms of weight). At k = 2, paths going through the vertices {1,2} are found. The red and blue boxes show how the path [4,2,1,3] is assembled from the two known paths [4,2] and [2,1,3] encountered in previous iterations, with 2 in the intersection. The path [4,2,3] is not considered, because [2,1,3] is the shortest path encountered so far from 2 to 3. At k = 3, paths going through the vertices {1,2,3} are found. Finally, at k = 4, all shortest paths are found. The distance matrix at each iteration of k, with the updated distances in bold, will be: == Behavior with negative cycles == A negative cycle is a cycle whose edges sum to a negative value. There is no shortest path between any pair of vertices i {\displaystyle i} , j {\displaystyle j} which form part of a negative cycle, because path-lengths from i {\displaystyle i} to j {\displaystyle j} can be arbitrarily small (negative). For numerically meaningful output, the Floyd–Warshall algorithm assumes that there are no negative cycles. Nevertheless, if there are negative cycles, the Floyd–Warshall algorithm can be used to detect them. The intuition is as follows: The Floyd–Warshall algorithm iteratively revises path lengths between all pairs of vertices ( i , j ) {\displaystyle (i,j)} , including where i = j {\displaystyle i=j} ; Initially, the length of the path ( i , i ) {\displaystyle (i,i)} is zero; A path [ i , k , … , i ] {\displaystyle [i,k,\ldots ,i]} can only improve upon this if it has length less than zero, i.e. denotes a negative cycle; Thus, after the algorithm, ( i , i ) {\displaystyle (i,i)} will be negative if there exists a negative-length path from i {\displaystyle i} back to i {\displaystyle i} . 
Hence, to detect negative cycles using the Floyd–Warshall algorithm, one can inspect the diagonal of the path matrix, and the presence of a negative number indicates that the graph contains at least one negative cycle. However, when a negative cycle is present, exponentially large numbers, on the order of Ω(6^n · w_max), can appear during the execution of the algorithm, where w_max is the largest absolute value of an edge weight in the graph. To avoid integer overflow problems, one should check for a negative cycle within the innermost for loop of the algorithm.

== Path reconstruction ==

The Floyd–Warshall algorithm typically only provides the lengths of the paths between all pairs of vertices. With simple modifications, it is possible to create a method to reconstruct the actual path between any two endpoint vertices. While one may be inclined to store the actual path from each vertex to each other vertex, this is not necessary, and in fact is very costly in terms of memory. Instead, we can use the shortest-path tree, which can be calculated for each node in Θ(|E|) time using Θ(|V|) memory, and allows us to efficiently reconstruct a directed path between any two connected vertices.
=== Pseudocode ===

The array prev[u][v] holds the penultimate vertex on the path from u to v (except in the case of prev[v][v], where it always contains v even if there is no self-loop on v):

let dist be a |V| × |V| array of minimum distances initialized to ∞ (infinity)
let prev be a |V| × |V| array of vertex indices initialized to null

procedure FloydWarshallWithPathReconstruction() is
    for each edge (u, v) do
        dist[u][v] = w(u, v)  // The weight of the edge (u, v)
        prev[u][v] = u
    for each vertex v do
        dist[v][v] = 0
        prev[v][v] = v
    for k from 1 to |V| do  // standard Floyd-Warshall implementation
        for i from 1 to |V|
            for j from 1 to |V|
                if dist[i][j] > dist[i][k] + dist[k][j] then
                    dist[i][j] = dist[i][k] + dist[k][j]
                    prev[i][j] = prev[k][j]

procedure Path(u, v) is
    if prev[u][v] = null then
        return []
    path = [v]
    while u ≠ v do
        v = prev[u][v]
        path.prepend(v)
    return path

== Time complexity ==

Let n be |V|, the number of vertices. To find all n^2 values of shortestPath(i, j, k) (for all i and j) from those of shortestPath(i, j, k−1) requires Θ(n^2) operations.
Since we begin with s h o r t e s t P a t h ( i , j , 0 ) = e d g e C o s t ( i , j ) {\displaystyle \mathrm {shortestPath} (i,j,0)=\mathrm {edgeCost} (i,j)} and compute the sequence of n {\displaystyle n} matrices s h o r t e s t P a t h ( i , j , 1 ) {\displaystyle \mathrm {shortestPath} (i,j,1)} , s h o r t e s t P a t h ( i , j , 2 ) {\displaystyle \mathrm {shortestPath} (i,j,2)} , … {\displaystyle \ldots } , s h o r t e s t P a t h ( i , j , n ) {\displaystyle \mathrm {shortestPath} (i,j,n)} , each having a cost of Θ ( n 2 ) {\displaystyle \Theta (n^{2})} , the total time complexity of the algorithm is n ⋅ Θ ( n 2 ) = Θ ( n 3 ) {\displaystyle n\cdot \Theta (n^{2})=\Theta (n^{3})} . == Applications and generalizations == The Floyd–Warshall algorithm can be used to solve the following problems, among others: Shortest paths in directed graphs (Floyd's algorithm). Transitive closure of directed graphs (Warshall's algorithm). In Warshall's original formulation of the algorithm, the graph is unweighted and represented by a Boolean adjacency matrix. Then the addition operation is replaced by logical conjunction (AND) and the minimum operation by logical disjunction (OR). Finding a regular expression denoting the regular language accepted by a finite automaton (Kleene's algorithm, a closely related generalization of the Floyd–Warshall algorithm) Inversion of real matrices (Gauss–Jordan algorithm) Optimal routing. In this application one is interested in finding the path with the maximum flow between two vertices. This means that, rather than taking minima as in the pseudocode above, one instead takes maxima. The edge weights represent fixed constraints on flow. Path weights represent bottlenecks; so the addition operation above is replaced by the minimum operation. Fast computation of Pathfinder networks. 
Widest paths/Maximum bandwidth paths Computing canonical form of difference bound matrices (DBMs) Computing the similarity between graphs Transitive closure in AND/OR/threshold graphs. == Implementations == Implementations are available for many programming languages. For C++, in the boost::graph library For C#, at QuikGraph For C#, at QuickGraphPCL (A fork of QuickGraph with better compatibility with projects using Portable Class Libraries.) For Java, in the Apache Commons Graph library For JavaScript, in the Cytoscape library For Julia, in the Graphs.jl package For MATLAB, in the Matlab_bgl package For Perl, in the Graph module For Python, in the SciPy library (module scipy.sparse.csgraph) or NetworkX library For R, in packages e1071 and Rfast For C, a pthreads, parallelized, implementation including a SQLite interface to the data at floydWarshall.h == Comparison with other shortest path algorithms == For graphs with non-negative edge weights, Dijkstra's algorithm can be used to find all shortest paths from a single vertex with running time Θ ( | E | + | V | log ⁡ | V | ) {\displaystyle \Theta (|E|+|V|\log |V|)} . Thus, running Dijkstra starting at each vertex takes time Θ ( | E | | V | + | V | 2 log ⁡ | V | ) {\displaystyle \Theta (|E||V|+|V|^{2}\log |V|)} . Since | E | = O ( | V | 2 ) {\displaystyle |E|=O(|V|^{2})} , this yields a worst-case running time of repeated Dijkstra of O ( | V | 3 ) {\displaystyle O(|V|^{3})} . While this matches the asymptotic worst-case running time of the Floyd-Warshall algorithm, the constants involved matter quite a lot. When a graph is dense (i.e., | E | ≈ | V | 2 {\displaystyle |E|\approx |V|^{2}} ), the Floyd-Warshall algorithm tends to perform better in practice. When the graph is sparse (i.e., | E | {\displaystyle |E|} is significantly smaller than | V | 2 {\displaystyle |V|^{2}} ), Dijkstra tends to dominate. 
For sparse graphs with negative edges but no negative cycles, Johnson's algorithm can be used, with the same asymptotic running time as the repeated Dijkstra approach. There are also known algorithms using fast matrix multiplication to speed up all-pairs shortest path computation in dense graphs, but these typically make extra assumptions on the edge weights (such as requiring them to be small integers). In addition, because of the high constant factors in their running time, they would only provide a speedup over the Floyd–Warshall algorithm for very large graphs. == References == == External links == Interactive animation of the Floyd–Warshall algorithm Interactive animation of the Floyd–Warshall algorithm (Technical University of Munich)
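As a concrete companion to the pseudocode sections above, here is a minimal Python sketch of the algorithm with path reconstruction and the diagonal-based negative-cycle check; the function and variable names are illustrative choices, not part of the article.

```python
INF = float('inf')

def floyd_warshall(n, edges):
    """All-pairs shortest paths on vertices 0..n-1.

    edges: iterable of (u, v, w) directed weighted edges.
    Returns (dist, prev); prev[u][v] is the penultimate vertex on a
    shortest u-v path, enabling path reconstruction."""
    dist = [[INF] * n for _ in range(n)]
    prev = [[None] * n for _ in range(n)]
    for u, v, w in edges:
        dist[u][v] = w
        prev[u][v] = u
    for v in range(n):
        dist[v][v] = 0
        prev[v][v] = v
    for k in range(n):                      # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    prev[i][j] = prev[k][j]
    return dist, prev

def path(prev, u, v):
    """Reconstruct a shortest u-v path from the prev table, or [] if none."""
    if prev[u][v] is None:
        return []
    p = [v]
    while u != v:
        v = prev[u][v]
        p.insert(0, v)
    return p

def has_negative_cycle(dist):
    """A negative diagonal entry signals a negative cycle (see above)."""
    return any(dist[i][i] < 0 for i in range(len(dist)))
```

Floating-point infinity makes the relaxation safe for missing edges, since `INF + w` stays `INF`; with bounded integer types one would instead need the overflow guard discussed in the negative-cycle section.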
Wikipedia/Floyd–Warshall_algorithm
Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries. In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. == Optimization problems == Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete: An optimization problem with discrete variables is known as a discrete optimization, in which an object such as an integer, permutation or graph must be found from a countable set. A problem with continuous variables is known as a continuous optimization, in which optimal arguments from a continuous set must be found. They can include constrained problems and multimodal problems. An optimization problem can be represented in the following way: Given: a function f : A → R {\displaystyle \mathbb {R} } from some set A to the real numbers Sought: an element x0 ∈ A such that f(x0) ≤ f(x) for all x ∈ A ("minimization") or such that f(x0) ≥ f(x) for all x ∈ A ("maximization"). Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below). Many real-world and theoretical problems may be modeled in this general framework. 
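For a finite feasible set, the "Given / Sought" formulation above can be written out directly as exhaustive search. This tiny Python sketch (names are illustrative) also shows how maximization reduces to minimizing −f:

```python
# Minimization over a finite feasible set A, following the formulation above:
# find x0 in A such that f(x0) <= f(x) for all x in A.
def minimize_over(A, f):
    x0 = None
    for x in A:
        if x0 is None or f(x) < f(x0):
            x0 = x
    return x0

# Maximization reduces to minimization of -f.
def maximize_over(A, f):
    return minimize_over(A, lambda x: -f(x))
```

For infinite or continuous feasible sets, exhaustive search is of course impossible, which is exactly what the solution methods surveyed in the rest of the article address.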
Since the following is valid: f ( x 0 ) ≥ f ( x ) ⇔ − f ( x 0 ) ≤ − f ( x ) , {\displaystyle f(\mathbf {x} _{0})\geq f(\mathbf {x} )\Leftrightarrow -f(\mathbf {x} _{0})\leq -f(\mathbf {x} ),} it suffices to solve only minimization problems. However, the opposite perspective of considering only maximization problems would be valid, too. Problems formulated using this technique in the fields of physics may refer to the technique as energy minimization, speaking of the value of the function f as representing the energy of the system being modeled. In machine learning, it is always necessary to continuously evaluate the quality of a data model by using a cost function where a minimum implies a set of possibly optimal parameters with an optimal (lowest) error. Typically, A is some subset of the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set, while the elements of A are called candidate solutions or feasible solutions. The function f is variously called an objective function, criterion function, loss function, cost function (minimization), utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional. A feasible solution that minimizes (or maximizes) the objective function is called an optimal solution. In mathematics, conventional optimization problems are usually stated in terms of minimization. A local minimum x* is defined as an element for which there exists some δ > 0 such that ∀ x ∈ A where ‖ x − x ∗ ‖ ≤ δ , {\displaystyle \forall \mathbf {x} \in A\;{\text{where}}\;\left\Vert \mathbf {x} -\mathbf {x} ^{\ast }\right\Vert \leq \delta ,\,} the expression f(x*) ≤ f(x) holds; that is to say, on some region around x* all of the function values are greater than or equal to the value at that element. Local maxima are defined similarly. 
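The δ-neighborhood condition defining a local minimum can be checked empirically by sampling. The following Python sketch is a heuristic illustration, not a proof, and its names and tolerances are choices made here:

```python
import random

def is_local_min(f, x_star, delta=0.5, samples=1000, seed=0):
    """Empirically test the local-minimum condition: f(x*) <= f(x)
    for sampled x with |x - x*| <= delta. A counterexample disproves
    local minimality; absence of one is only suggestive."""
    rng = random.Random(seed)
    fx = f(x_star)
    for _ in range(samples):
        x = x_star + rng.uniform(-delta, delta)
        if f(x) < fx - 1e-12:      # strictly better nearby point found
            return False
    return True
```

For f(x) = (x² − 1)², the points x = ±1 pass this check while x = 0 (a local maximum) fails it.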
While a local minimum is at least as good as any nearby elements, a global minimum is at least as good as every feasible element. Generally, unless the objective function is convex in a minimization problem, there may be several local minima. In a convex problem, if there is a local minimum that is interior (not on the edge of the set of feasible elements), it is also the global minimum, but a nonconvex problem may have more than one local minimum not all of which need be global minima. A large number of algorithms proposed for solving the nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem. == Notation == Optimization problems are often expressed with special notation. Here are some examples: === Minimum and maximum value of a function === Consider the following notation: min x ∈ R ( x 2 + 1 ) {\displaystyle \min _{x\in \mathbb {R} }\;\left(x^{2}+1\right)} This denotes the minimum value of the objective function x2 + 1, when choosing x from the set of real numbers R {\displaystyle \mathbb {R} } . The minimum value in this case is 1, occurring at x = 0. Similarly, the notation max x ∈ R 2 x {\displaystyle \max _{x\in \mathbb {R} }\;2x} asks for the maximum value of the objective function 2x, where x may be any real number. In this case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined". 
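The notation examples above can be checked numerically. This Python sketch evaluates x² + 1 on a grid, a crude stand-in for minimizing over all of ℝ (the grid bounds and step are our own choice; the exact answer, 1 at x = 0, follows from calculus):

```python
# Grid search for min over x in R of (x^2 + 1), sampled on [-2, 2].
xs = [i / 1000.0 for i in range(-2000, 2001)]
values = [x * x + 1 for x in xs]
min_value = values and min(values)
argmin_x = xs[values.index(min_value)]

# Restricting the grid to x <= -1 previews the constrained
# argmin example in the next section: the minimizer moves to -1.
feasible = [x for x in xs if x <= -1.0]
constrained_argmin = min(feasible, key=lambda x: x * x + 1)
```

The maximization example, max over x in ℝ of 2x, has no finite answer, and a grid search would simply return the largest grid point sampled, which is why unbounded objectives must be recognized analytically.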
=== Optimal input arguments === Consider the following notation: a r g m i n x ∈ ( − ∞ , − 1 ] x 2 + 1 , {\displaystyle {\underset {x\in (-\infty ,-1]}{\operatorname {arg\,min} }}\;x^{2}+1,} or equivalently a r g m i n x x 2 + 1 , subject to: x ∈ ( − ∞ , − 1 ] . {\displaystyle {\underset {x}{\operatorname {arg\,min} }}\;x^{2}+1,\;{\text{subject to:}}\;x\in (-\infty ,-1].} This represents the value (or values) of the argument x in the interval (−∞,−1] that minimizes (or minimize) the objective function x2 + 1 (the actual minimum value of that function is not what the problem asks for). In this case, the answer is x = −1, since x = 0 is infeasible, that is, it does not belong to the feasible set. Similarly, a r g m a x x ∈ [ − 5 , 5 ] , y ∈ R x cos ⁡ y , {\displaystyle {\underset {x\in [-5,5],\;y\in \mathbb {R} }{\operatorname {arg\,max} }}\;x\cos y,} or equivalently a r g m a x x , y x cos ⁡ y , subject to: x ∈ [ − 5 , 5 ] , y ∈ R , {\displaystyle {\underset {x,\;y}{\operatorname {arg\,max} }}\;x\cos y,\;{\text{subject to:}}\;x\in [-5,5],\;y\in \mathbb {R} ,} represents the {x, y} pair (or pairs) that maximizes (or maximize) the value of the objective function x cos y, with the added constraint that x lie in the interval [−5,5] (again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form {5, 2kπ} and {−5, (2k + 1)π}, where k ranges over all integers. Operators arg min and arg max are sometimes also written as argmin and argmax, and stand for argument of the minimum and argument of the maximum. == History == Fermat and Lagrange found calculus-based formulae for identifying optima, while Newton and Gauss proposed iterative methods for moving towards an optimum. The term "linear programming" for certain optimization cases was due to George B. Dantzig, although much of the theory had been introduced by Leonid Kantorovich in 1939. 
(Programming in this context does not refer to computer programming, but comes from the use of program by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig studied at that time.) Dantzig published the Simplex algorithm in 1947, and also John von Neumann and other researchers worked on the theoretical aspects of linear programming (like the theory of duality) around the same time. Other notable researchers in mathematical optimization include the following: == Major subfields == Convex programming studies the case when the objective function is convex (minimization) or concave (maximization) and the constraint set is convex. This can be viewed as a particular case of nonlinear programming or as generalization of linear or convex quadratic programming. Linear programming (LP), a type of convex programming, studies the case in which the objective function f is linear and the constraints are specified using only linear equalities and inequalities. Such a constraint set is called a polyhedron or a polytope if it is bounded. Second-order cone programming (SOCP) is a convex program, and includes certain types of quadratic programs. Semidefinite programming (SDP) is a subfield of convex optimization where the underlying variables are semidefinite matrices. It is a generalization of linear and convex quadratic programming. Conic programming is a general form of convex programming. LP, SOCP and SDP can all be viewed as conic programs with the appropriate type of cone. Geometric programming is a technique whereby objective and inequality constraints expressed as posynomials and equality constraints as monomials can be transformed into a convex program. Integer programming studies linear programs in which some or all variables are constrained to take on integer values. This is not convex, and in general much more difficult than regular linear programming. 
Quadratic programming allows the objective function to have quadratic terms, while the feasible set must be specified with linear equalities and inequalities. For specific forms of the quadratic term, this is a type of convex programming. Fractional programming studies optimization of ratios of two nonlinear functions. The special class of concave fractional programs can be transformed to a convex optimization problem. Nonlinear programming studies the general case in which the objective function or the constraints or both contain nonlinear parts. This may or may not be a convex program. In general, whether the program is convex affects the difficulty of solving it. Stochastic programming studies the case in which some of the constraints or parameters depend on random variables. Robust optimization is, like stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. Robust optimization aims to find solutions that are valid under all possible realizations of the uncertainties defined by an uncertainty set. Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one. Stochastic optimization is used with random (noisy) function measurements or random inputs in the search process. Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions. Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Usually, heuristics do not guarantee that any optimal solution need be found. On the other hand, heuristics are used to find approximate solutions for many complicated optimization problems. Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning). 
Constraint programming is a programming paradigm wherein relations between variables are stated in the form of constraints. Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of particular use in scheduling. Space mapping is a concept for modeling and optimization of an engineering system to high-fidelity (fine) model accuracy exploiting a suitable physically meaningful coarse or surrogate model. In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time): Calculus of variations is concerned with finding the best way to achieve some goal, such as finding a surface whose boundary is a specific curve, but with the least possible area. Optimal control theory is a generalization of the calculus of variations which introduces control policies. Dynamic programming is an approach to solving stochastic optimization problems with randomness and unknown model parameters. It studies the case in which the optimization strategy is based on splitting the problem into smaller subproblems. The equation that describes the relationship between these subproblems is called the Bellman equation. Mathematical programming with equilibrium constraints is where the constraints include variational inequalities or complementarities. === Multi-objective optimization === Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be created. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as the Pareto set. The curve created by plotting weight against stiffness of the best designs is known as the Pareto frontier. 
A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient" or in the Pareto set) if it is not dominated by any other design: If it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal. The choice among "Pareto optimal" solutions to determine the "favorite solution" is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker. Multi-objective optimization problems have been generalized further into vector optimization problems where the (partial) ordering is no longer given by the Pareto ordering. === Multi-modal or global optimization === Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer. Classical optimization techniques due to their iterative approach do not perform satisfactorily when they are used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm. Common approaches to global optimization problems, where multiple local extrema may be present include evolutionary algorithms, Bayesian optimization and simulated annealing. == Classification of critical points and extrema == === Feasibility problem === The satisfiability problem, also called the feasibility problem, is just the problem of finding any feasible solution at all without regard to objective value. 
This can be regarded as the special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal. Many optimization algorithms need to start from a feasible point. One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. Then, minimize that slack variable until the slack is null or negative. === Existence === The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum value. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum. === Necessary conditions for optimality === One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero (see first derivative test). More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions. Optima of equality-constrained problems can be found by the Lagrange multiplier method. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'. === Sufficient conditions for optimality === While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. 
When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called the bordered Hessian in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If a candidate solution satisfies the first-order conditions, then the satisfaction of the second-order conditions as well is sufficient to establish at least local optimality. === Sensitivity and continuity of optima === The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics. The maximum theorem of Claude Berge (1963) describes the continuity of an optimal solution as a function of underlying parameters. === Calculus of optimization === For unconstrained problems with twice-differentiable functions, some critical points can be found by finding the points where the gradient of the objective function is zero (that is, the stationary points). More generally, a zero subgradient certifies that a local minimum has been found for minimization problems with convex functions and other locally Lipschitz functions, such as those arising in loss-function minimization for neural networks. Positive-negative momentum estimation has also been proposed as a way to escape local minima and converge toward the global minimum of the objective function. Further, critical points can be classified using the definiteness of the Hessian matrix: If the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind of saddle point. 
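For a function of two variables, the Hessian classification above reduces to the classical second derivative test on the Hessian determinant and leading entry. A minimal sketch:

```python
def classify_critical_point(fxx, fxy, fyy):
    """Second-derivative test for f(x, y) at a critical point,
    using the 2x2 Hessian [[fxx, fxy], [fxy, fyy]]."""
    det = fxx * fyy - fxy * fxy
    if det > 0 and fxx > 0:
        return "local minimum"    # Hessian positive definite
    if det > 0 and fxx < 0:
        return "local maximum"    # Hessian negative definite
    if det < 0:
        return "saddle point"     # Hessian indefinite
    return "inconclusive"         # semidefinite: the test gives no verdict

# f(x, y) = x^2 + y^2 has Hessian diag(2, 2) at the origin: a minimum.
# f(x, y) = x^2 - y^2 has Hessian diag(2, -2) there: a saddle point.
```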
Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems. When the objective function is a convex function, then any local minimum will also be a global minimum. There exist efficient numerical techniques for minimizing convex functions, such as interior-point methods. === Global convergence === More generally, if the objective function is not a quadratic function, then many optimization methods use additional strategies to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular strategy for ensuring convergence relies on line searches, which optimize a function along one dimension. A second and increasingly popular strategy for ensuring convergence uses trust regions. Both line searches and trust regions are used in modern methods of non-differentiable optimization. Usually, a global optimizer is much slower than advanced local optimizers (such as BFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points. == Computational optimization techniques == To solve problems, researchers may use algorithms that terminate in a finite number of steps, or iterative methods that converge to a solution (on some specified class of problems), or heuristics that may provide approximate solutions to some problems (although their iterates need not converge). 
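The line searches mentioned under Global convergence optimize along a single direction; a common concrete form is Armijo backtracking, sketched below. The test function and constants are illustrative, not a production implementation:

```python
def backtracking_line_search(f, grad, x, d, alpha=1.0, rho=0.5, c=1e-4):
    """Armijo backtracking: shrink the step length until f decreases
    sufficiently along the search direction d."""
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad(x), d))  # directional derivative
    while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

# Minimize f(x) = x1^2 + 4 x2^2 from x = (2, 1) along steepest descent.
f = lambda x: x[0]**2 + 4 * x[1]**2
grad = lambda x: [2 * x[0], 8 * x[1]]
x0 = [2.0, 1.0]
d = [-g for g in grad(x0)]
alpha = backtracking_line_search(f, grad, x0, d)
x1 = [xi + alpha * di for xi, di in zip(x0, d)]   # f(x1) < f(x0)
```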
=== Optimization algorithms === Simplex algorithm of George Dantzig, designed for linear programming Extensions of the simplex algorithm, designed for quadratic programming and for linear-fractional programming Variants of the simplex algorithm that are especially suited for network optimization Combinatorial algorithms Quantum optimization algorithms === Iterative methods === The iterative methods used to solve problems of nonlinear programming differ according to whether they evaluate Hessians, gradients, or only function values. While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase the computational complexity (or computational cost) of each iteration. In some cases, the computational complexity may be excessively high. One major criterion for optimizers is the number of required function evaluations, as these often already represent a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the N variables. The derivatives provide detailed information for such optimizers, but are even harder to calculate; e.g., approximating the gradient takes at least N+1 function evaluations. For approximations of the 2nd derivatives (collected in the Hessian matrix), the number of function evaluations is in the order of N². Newton's method requires the 2nd-order derivatives, so for each iteration, the number of function calls is in the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers usually need more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself. Methods that evaluate Hessians (or approximate Hessians, using finite differences): Newton's method Sequential quadratic programming: A Newton-based method for small-medium scale constrained problems. 
Some versions can handle large-dimensional problems. Interior point methods: This is a large class of methods for constrained optimization, some of which use only (sub)gradient information and others of which require the evaluation of Hessians. Methods that evaluate gradients, or approximate gradients in some way (or even subgradients): Coordinate descent methods: Algorithms which update a single coordinate in each iteration Conjugate gradient methods: Iterative methods for large problems. (In theory, these methods terminate in a finite number of steps with quadratic objective functions, but this finite termination is not observed in practice on finite–precision computers.) Gradient descent (alternatively, "steepest descent" or "steepest ascent"): A (slow) method of historical and theoretical interest, which has had renewed interest for finding approximate solutions of enormous problems. Subgradient methods: An iterative method for large locally Lipschitz functions using generalized gradients. Following Boris T. Polyak, subgradient–projection methods are similar to conjugate–gradient methods. Bundle method of descent: An iterative method for small–medium-sized problems with locally Lipschitz functions, particularly for convex minimization problems (similar to conjugate gradient methods). Ellipsoid method: An iterative method for small problems with quasiconvex objective functions and of great theoretical interest, particularly in establishing the polynomial time complexity of some combinatorial optimization problems. It has similarities with Quasi-Newton methods. Conditional gradient method (Frank–Wolfe) for approximate minimization of specially structured problems with linear constraints, especially with traffic networks. For general unconstrained problems, this method reduces to the gradient method, which is regarded as obsolete (for almost all problems). Quasi-Newton methods: Iterative methods for medium-large problems (e.g. N<1000). 
Simultaneous perturbation stochastic approximation (SPSA) method for stochastic optimization; uses random (efficient) gradient approximation. Methods that evaluate only function values: If a problem is continuously differentiable, then gradients can be approximated using finite differences, in which case a gradient-based method can be used. Interpolation methods Pattern search methods, which have better convergence properties than the Nelder–Mead heuristic (with simplices), which is listed below. Mirror descent === Heuristics === Besides (finitely terminating) algorithms and (convergent) iterative methods, there are heuristics. A heuristic is any algorithm which is not guaranteed (mathematically) to find the solution, but which is nevertheless useful in certain practical situations. List of some well-known heuristics: == Applications == === Mechanics === Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since rigid body dynamics can be viewed as attempting to solve an ordinary differential equation on a constraint manifold; the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, the problem of computing contact forces can be addressed by solving a linear complementarity problem, which can also be viewed as a QP (quadratic programming) problem. Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems. This approach may be applied in cosmology and astrophysics. 
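The remark above (under Iterative methods) that approximating a gradient by forward differences costs at least N+1 function evaluations can be checked directly with a small counter; the quadratic test function is illustrative:

```python
def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient: one evaluation at x plus one per
    perturbed coordinate, i.e. exactly N+1 evaluations in total."""
    calls = [0]
    def fc(z):                    # wrapper that counts evaluations
        calls[0] += 1
        return f(z)
    f0 = fc(x)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        g.append((fc(xp) - f0) / h)
    return g, calls[0]

# f(x) = sum x_i^2 has exact gradient (2 x_1, ..., 2 x_N) for checking.
f = lambda x: sum(xi * xi for xi in x)
g, n_evals = fd_gradient(f, [1.0, 2.0, 3.0])   # N = 3, so n_evals = 4
```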
=== Economics and finance === Economics is closely enough linked to optimization of agents that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means" with alternative uses. Modern optimization theory includes traditional optimization theory but also overlaps with game theory and the study of economic equilibria. The Journal of Economic Literature codes classify mathematical programming, optimization techniques, and related topics under JEL:C61-C63. In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem, are economic optimization problems. Insofar as they behave consistently, consumers are assumed to maximize their utility, while firms are usually assumed to maximize their profit. Also, agents are often modeled as being risk-averse, thereby preferring to avoid risk. Asset prices are also modeled using optimization theory, though the underlying mathematics relies on optimizing stochastic processes rather than on static optimization. International trade theory also uses optimization to explain trade patterns between nations. The optimization of portfolios is an example of multi-objective optimization in economics. Since the 1970s, economists have modeled dynamic decisions over time using control theory. For example, dynamic search models are used to study labor-market behavior. A crucial distinction is between deterministic and stochastic models. Macroeconomists build dynamic stochastic general equilibrium (DSGE) models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments. 
=== Electrical engineering === Some common applications of optimization techniques in electrical engineering include active filter design, stray field reduction in superconducting magnetic energy storage systems, space mapping design of microwave structures, handset antennas, electromagnetics-based design. Electromagnetically validated design optimization of microwave components and antennas has made extensive use of an appropriate physics-based or empirical surrogate model and space mapping methodologies since the discovery of space mapping in 1993. Optimization techniques are also used in power-flow analysis. === Civil engineering === Optimization has been widely used in civil engineering. Construction management and transportation engineering are among the main branches of civil engineering that heavily rely on optimization. The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures, resource leveling, water resource allocation, traffic management and schedule optimization. === Operations research === Another field that uses optimization techniques extensively is operations research. Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research uses stochastic programming to model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization and stochastic optimization methods. === Control engineering === Mathematical optimization is used in much modern controller design. High-level controllers such as model predictive control (MPC) or real-time optimization (RTO) employ mathematical optimization. These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled. 
=== Geophysics === Optimization techniques are regularly used in geophysical parameter estimation problems. Given a set of geophysical measurements, e.g. seismic recordings, it is common to solve for the physical properties and geometrical shapes of the underlying rocks and fluids. The majority of problems in geophysics are nonlinear, with both deterministic and stochastic methods being widely used. === Molecular modeling === Nonlinear optimization methods are widely used in conformational analysis. === Computational systems biology === Optimization techniques are used in many facets of computational systems biology such as model building, optimal experimental design, metabolic engineering, and synthetic biology. Linear programming has been applied to calculate the maximal possible yields of fermentation products, and to infer gene regulatory networks from multiple microarray datasets as well as transcriptional regulatory networks from high-throughput data. Nonlinear programming has been used to analyze energy metabolism and has been applied to metabolic engineering and parameter estimation in biochemical pathways. === Machine learning === == Solvers == == See also == == Notes == == Further reading == Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization. Cambridge: Cambridge University Press. ISBN 0-521-83378-7. Gill, P. E.; Murray, W.; Wright, M. H. (1982). Practical Optimization. London: Academic Press. ISBN 0-12-283952-8. Lee, Jon (2004). A First Course in Combinatorial Optimization. Cambridge University Press. ISBN 0-521-01012-8. Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Berlin: Springer. ISBN 0-387-30303-0. G. L. Nemhauser, A. H. G. Rinnooy Kan and M. J. Todd (eds.): Optimization, Elsevier, (1989). Stanislav Walukiewicz: Integer Programming, Springer, ISBN 978-9048140688, (1990). R. Fletcher: Practical Methods of Optimization, 2nd Ed., Wiley, (2000). Panos M. Pardalos: Approximation and Complexity in Numerical Optimization: Continuous and Discrete Problems, Springer, ISBN 978-1-44194829-8, (2000). Xiaoqi Yang, K. L. Teo, Lou Caccetta (eds.): Optimization Methods and Applications, Springer, ISBN 978-0-79236866-3, (2001). Panos M. Pardalos and Mauricio G. C. Resende (eds.): Handbook of Applied Optimization, Oxford Univ Pr on Demand, ISBN 978-0-19512594-8, (2002). Wil Michiels, Emile Aarts, and Jan Korst: Theoretical Aspects of Local Search, Springer, ISBN 978-3-64207148-5, (2006). Der-San Chen, Robert G. Batson, and Yu Dang: Applied Integer Programming: Modeling and Solution, Wiley, ISBN 978-0-47037306-4, (2010). Mykel J. Kochenderfer and Tim A. Wheeler: Algorithms for Optimization, The MIT Press, ISBN 978-0-26203942-0, (2019). Vladislav Bukshtynov: Optimization: Success in Practice, CRC Press (Taylor & Francis), ISBN 978-1-03222947-8, (2023). Rosario Toscano: Solving Optimization Problems with the Heuristic Kalman Algorithm: New Stochastic Methods, Springer, ISBN 978-3-031-52458-5, (2024). Immanuel M. Bomze, Tibor Csendes, Reiner Horst and Panos M. Pardalos: Developments in Global Optimization, Kluwer Academic, ISBN 978-1-4419-4768-0, (2010). == External links == "Decision Tree for Optimization Software". Links to optimization source codes. "Global optimization". "EE364a: Convex Optimization I". Course from Stanford University. Varoquaux, Gaël. "Mathematical Optimization: Finding Minima of Functions".
Wikipedia/Optimization_algorithm
In mathematical optimization, the cutting-plane method is any of a variety of optimization methods that iteratively refine a feasible set or objective function by means of linear inequalities, termed cuts. Such procedures are commonly used to find integer solutions to mixed integer linear programming (MILP) problems, as well as to solve general, not necessarily differentiable convex optimization problems. The use of cutting planes to solve MILP was introduced by Ralph E. Gomory. Cutting plane methods for MILP work by solving a non-integer linear program, the linear relaxation of the given integer program. The theory of Linear Programming dictates that under mild assumptions (if the linear program has an optimal solution, and if the feasible region does not contain a line), one can always find an extreme point or a corner point that is optimal. The obtained optimum is tested for being an integer solution. If it is not, there is guaranteed to exist a linear inequality that separates the optimum from the convex hull of the true feasible set. Finding such an inequality is the separation problem, and such an inequality is a cut. A cut can be added to the relaxed linear program. Then, the current non-integer solution is no longer feasible to the relaxation. This process is repeated until an optimal integer solution is found. Cutting-plane methods for general convex continuous optimization and variants are known under various names: Kelley's method, Kelley–Cheney–Goldstein method, and bundle methods. They are popularly used for non-differentiable convex minimization, where a convex objective function and its subgradient can be evaluated efficiently but usual gradient methods for differentiable optimization can not be used. This situation is most typical for the concave maximization of Lagrangian dual functions. 
Another common situation is the application of the Dantzig–Wolfe decomposition to a structured optimization problem in which formulations with an exponential number of variables are obtained. Generating these variables on demand by means of delayed column generation is identical to performing a cutting plane on the respective dual problem. == History == Cutting planes were proposed by Ralph Gomory in the 1950s as a method for solving integer programming and mixed-integer programming problems. However, most experts, including Gomory himself, considered them to be impractical due to numerical instability, as well as ineffective because many rounds of cuts were needed to make progress towards the solution. Things turned around when in the mid-1990s Gérard Cornuéjols and co-workers showed them to be very effective in combination with branch-and-bound (a combination called branch-and-cut) and developed ways to overcome numerical instabilities. Nowadays, all commercial MILP solvers use Gomory cuts in one way or another. Gomory cuts are very efficiently generated from a simplex tableau, whereas many other types of cuts are either expensive or even NP-hard to separate. Among other general cuts for MILP, lift-and-project cuts are most notable for dominating Gomory cuts. == Gomory's cut == Let an integer programming problem be formulated (in canonical form) as: Maximize c T x Subject to A x ≤ b , x ≥ 0 , x i all integers . {\displaystyle {\begin{aligned}{\text{Maximize }}&c^{T}x\\{\text{Subject to }}&Ax\leq b,\\&x\geq 0,\,x_{i}{\text{ all integers}}.\end{aligned}}} where A is a matrix and b, c are vectors. The vector x is unknown and is to be found in order to maximize the objective while respecting the linear constraints. === General idea === The method proceeds by first dropping the requirement that the xi be integers and solving the associated relaxed linear programming problem to obtain a basic feasible solution. 
Geometrically, this solution will be a vertex of the convex polytope consisting of all feasible points. If this vertex is not an integer point then the method finds a hyperplane with the vertex on one side and all feasible integer points on the other. This is then added as an additional linear constraint to exclude the vertex found, creating a modified linear program. The new program is then solved and the process is repeated until an integer solution is found. === Step 1: solving the relaxed linear program === Using the simplex method to solve a linear program produces a set of equations of the form x i + ∑ j a ¯ i , j x j = b ¯ i {\displaystyle x_{i}+\sum _{j}{\bar {a}}_{i,j}x_{j}={\bar {b}}_{i}} where xi is a basic variable and the xj's are the nonbasic variables (i.e. the basic solution which is an optimal solution to the relaxed linear program is x i = b ¯ i {\displaystyle x_{i}={\bar {b}}_{i}} and x j = 0 {\displaystyle x_{j}=0} ). We write coefficients b ¯ i {\displaystyle {\bar {b}}_{i}} and a ¯ i , j {\displaystyle {\bar {a}}_{i,j}} with a bar to denote the last tableau produced by the simplex method. These coefficients are different from the coefficients in the matrix A and the vector b. === Step 2: Find a linear constraint === Consider now a basic variable x i {\displaystyle x_{i}} which is not an integer. Rewrite the above equation so that the integer parts are added on the left side and the fractional parts are on the right side: x i + ∑ j ⌊ a ¯ i , j ⌋ x j − ⌊ b ¯ i ⌋ = b ¯ i − ⌊ b ¯ i ⌋ − ∑ j ( a ¯ i , j − ⌊ a ¯ i , j ⌋ ) x j . 
{\displaystyle x_{i}+\sum _{j}\lfloor {\bar {a}}_{i,j}\rfloor x_{j}-\lfloor {\bar {b}}_{i}\rfloor ={\bar {b}}_{i}-\lfloor {\bar {b}}_{i}\rfloor -\sum _{j}({\bar {a}}_{i,j}-\lfloor {\bar {a}}_{i,j}\rfloor )x_{j}.} For any integer point in the feasible region, the left side is an integer since all the terms x i {\displaystyle x_{i}} , x j {\displaystyle x_{j}} , ⌊ a ¯ i , j ⌋ {\displaystyle \lfloor {\bar {a}}_{i,j}\rfloor } , ⌊ b ¯ i ⌋ {\displaystyle \lfloor {\bar {b}}_{i}\rfloor } are integers. The right side of this equation is strictly less than 1: indeed, b ¯ i − ⌊ b ¯ i ⌋ {\displaystyle {\bar {b}}_{i}-\lfloor {\bar {b}}_{i}\rfloor } is strictly less than 1 while − ∑ j ( a ¯ i , j − ⌊ a ¯ i , j ⌋ ) x j {\displaystyle -\sum _{j}({\bar {a}}_{i,j}-\lfloor {\bar {a}}_{i,j}\rfloor )x_{j}} is non-positive (the fractional parts and the x j {\displaystyle x_{j}} are non-negative). Since the common value is an integer strictly less than 1, it must be less than or equal to 0. So the inequality b ¯ i − ⌊ b ¯ i ⌋ − ∑ j ( a ¯ i , j − ⌊ a ¯ i , j ⌋ ) x j ≤ 0 {\displaystyle {\bar {b}}_{i}-\lfloor {\bar {b}}_{i}\rfloor -\sum _{j}({\bar {a}}_{i,j}-\lfloor {\bar {a}}_{i,j}\rfloor )x_{j}\leq 0} must hold for any integer point in the feasible region. Furthermore, non-basic variables are equal to zero in any basic solution, and if xi is not an integer for the basic solution x, b ¯ i − ⌊ b ¯ i ⌋ − ∑ j ( a ¯ i , j − ⌊ a ¯ i , j ⌋ ) x j = b ¯ i − ⌊ b ¯ i ⌋ > 0. {\displaystyle {\bar {b}}_{i}-\lfloor {\bar {b}}_{i}\rfloor -\sum _{j}({\bar {a}}_{i,j}-\lfloor {\bar {a}}_{i,j}\rfloor )x_{j}={\bar {b}}_{i}-\lfloor {\bar {b}}_{i}\rfloor >0.} === Conclusion === So the inequality above excludes the basic feasible solution and thus is a cut with the desired properties. 
More precisely, b ¯ i − ⌊ b ¯ i ⌋ − ∑ j ( a ¯ i , j − ⌊ a ¯ i , j ⌋ ) x j {\displaystyle {\bar {b}}_{i}-\lfloor {\bar {b}}_{i}\rfloor -\sum _{j}({\bar {a}}_{i,j}-\lfloor {\bar {a}}_{i,j}\rfloor )x_{j}} is non-positive for any integer point in the feasible region, and strictly positive for the basic feasible (non-integer) solution of the relaxed linear program. Introducing a new slack variable xk for this inequality, a new constraint is added to the linear program, namely x k + ∑ j ( ⌊ a ¯ i , j ⌋ − a ¯ i , j ) x j = ⌊ b ¯ i ⌋ − b ¯ i , x k ≥ 0 , x k an integer . {\displaystyle x_{k}+\sum _{j}(\lfloor {\bar {a}}_{i,j}\rfloor -{\bar {a}}_{i,j})x_{j}=\lfloor {\bar {b}}_{i}\rfloor -{\bar {b}}_{i},\,x_{k}\geq 0,\,x_{k}{\mbox{ an integer}}.} == Convex optimization == Cutting plane methods are also applicable in nonlinear programming. The underlying principle is to approximate the feasible region of a nonlinear (convex) program by a finite set of closed half spaces and to solve a sequence of approximating linear programs. == See also == Benders' decomposition Branch and cut Branch and bound Column generation Dantzig–Wolfe decomposition == References == Marchand, Hugues; Martin, Alexander; Weismantel, Robert; Wolsey, Laurence (2002). "Cutting planes in integer and mixed integer programming". Discrete Applied Mathematics. 123 (1–3): 387–446. doi:10.1016/s0166-218x(01)00348-1. Avriel, Mordecai (2003). Nonlinear Programming: Analysis and Methods. Dover Publications. ISBN 0-486-43227-0. Cornuéjols, Gérard (2008). "Valid Inequalities for Mixed Integer Linear Programs". Mathematical Programming Ser. B, 112: 3–44. [1] Cornuéjols, Gérard (2007). "Revival of the Gomory Cuts in the 1990s". Annals of Operations Research, Vol. 149, pp. 63–66. [2] == External links == "Integer Programming" Section 9.8 Applied Mathematical Programming Chapter 9 Integer Programming (full text). Bradley, Hax, and Magnanti (Addison-Wesley, 1977)
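The Gomory cut derived in the section above is mechanical to compute from a tableau row: keep the fractional parts of the coefficients and of the right-hand side. A small sketch; the tableau row used here is invented for illustration:

```python
from math import floor

def gomory_cut(row_coeffs, rhs):
    """Given a simplex-tableau row  x_i + sum_j a_ij x_j = b_i  with
    fractional b_i, return the fractional-part cut
        sum_j frac(a_ij) x_j >= frac(b_i),
    which every integer-feasible point satisfies but the current basic
    solution (all nonbasic x_j = 0) violates.  Note floor() rounds toward
    negative infinity, so frac() is non-negative even for a_ij < 0."""
    frac = lambda v: v - floor(v)
    cut_lhs = {j: frac(a) for j, a in row_coeffs.items() if frac(a) > 0}
    cut_rhs = frac(rhs)
    return cut_lhs, cut_rhs

# Illustrative tableau row: x1 + (3/4) x3 - (1/4) x4 = 11/4.
lhs, rhs = gomory_cut({3: 0.75, 4: -0.25}, 2.75)
# Cut: 0.75 x3 + 0.75 x4 >= 0.75, since frac(-0.25) = 0.75.
```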
Wikipedia/Cutting-plane_method
In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is positive-semidefinite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems. The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization. It is commonly attributed to Magnus Hestenes and Eduard Stiefel, who programmed it on the Z4, and extensively researched it. The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear optimization problems. == Description of the problem addressed by conjugate gradients == Suppose we want to solve the system of linear equations A x = b {\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} } for the vector x {\displaystyle \mathbf {x} } , where the known n × n {\displaystyle n\times n} matrix A {\displaystyle \mathbf {A} } is symmetric (i.e., A T = A {\displaystyle \mathbf {A} ^{\mathsf {T}}=\mathbf {A} } ), positive-definite (i.e. x T A x > 0 {\displaystyle \mathbf {x} ^{\mathsf {T}}\mathbf {Ax} >0} for all non-zero vectors x {\displaystyle \mathbf {x} } in R n {\displaystyle \mathbb {R} ^{n}} ), and real, and b {\displaystyle \mathbf {b} } is known as well. We denote the unique solution of this system by x ∗ {\displaystyle \mathbf {x} _{*}} . == Derivation as a direct method == The conjugate gradient method can be derived from several different perspectives, including specialization of the conjugate direction method for optimization, and variation of the Arnoldi/Lanczos iteration for eigenvalue problems. 
Despite differences in their approaches, these derivations share a common topic—proving the orthogonality of the residuals and conjugacy of the search directions. These two properties are crucial to developing the well-known succinct formulation of the method. We say that two non-zero vectors u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } are conjugate (with respect to A {\displaystyle \mathbf {A} } ) if u T A v = 0. {\displaystyle \mathbf {u} ^{\mathsf {T}}\mathbf {A} \mathbf {v} =0.} Since A {\displaystyle \mathbf {A} } is symmetric and positive-definite, the left-hand side defines an inner product u T A v = ⟨ u , v ⟩ A := ⟨ A u , v ⟩ = ⟨ u , A T v ⟩ = ⟨ u , A v ⟩ . {\displaystyle \mathbf {u} ^{\mathsf {T}}\mathbf {A} \mathbf {v} =\langle \mathbf {u} ,\mathbf {v} \rangle _{\mathbf {A} }:=\langle \mathbf {A} \mathbf {u} ,\mathbf {v} \rangle =\langle \mathbf {u} ,\mathbf {A} ^{\mathsf {T}}\mathbf {v} \rangle =\langle \mathbf {u} ,\mathbf {A} \mathbf {v} \rangle .} Two vectors are conjugate if and only if they are orthogonal with respect to this inner product. Being conjugate is a symmetric relation: if u {\displaystyle \mathbf {u} } is conjugate to v {\displaystyle \mathbf {v} } , then v {\displaystyle \mathbf {v} } is conjugate to u {\displaystyle \mathbf {u} } . Suppose that P = { p 1 , … , p n } {\displaystyle P=\{\mathbf {p} _{1},\dots ,\mathbf {p} _{n}\}} is a set of n {\displaystyle n} mutually conjugate vectors with respect to A {\displaystyle \mathbf {A} } , i.e. p i T A p j = 0 {\displaystyle \mathbf {p} _{i}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{j}=0} for all i ≠ j {\displaystyle i\neq j} . Then P {\displaystyle P} forms a basis for R n {\displaystyle \mathbb {R} ^{n}} , and we may express the solution x ∗ {\displaystyle \mathbf {x} _{*}} of A x = b {\displaystyle \mathbf {Ax} =\mathbf {b} } in this basis: x ∗ = ∑ i = 1 n α i p i ⇒ A x ∗ = ∑ i = 1 n α i A p i . 
{\displaystyle \mathbf {x} _{*}=\sum _{i=1}^{n}\alpha _{i}\mathbf {p} _{i}\Rightarrow \mathbf {A} \mathbf {x} _{*}=\sum _{i=1}^{n}\alpha _{i}\mathbf {A} \mathbf {p} _{i}.} Left-multiplying the problem A x = b {\displaystyle \mathbf {Ax} =\mathbf {b} } with the vector p k T {\displaystyle \mathbf {p} _{k}^{\mathsf {T}}} yields p k T b = p k T A x ∗ = ∑ i = 1 n α i p k T A p i = ∑ i = 1 n α i ⟨ p k , p i ⟩ A = α k ⟨ p k , p k ⟩ A {\displaystyle \mathbf {p} _{k}^{\mathsf {T}}\mathbf {b} =\mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {x} _{*}=\sum _{i=1}^{n}\alpha _{i}\mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{i}=\sum _{i=1}^{n}\alpha _{i}\left\langle \mathbf {p} _{k},\mathbf {p} _{i}\right\rangle _{\mathbf {A} }=\alpha _{k}\left\langle \mathbf {p} _{k},\mathbf {p} _{k}\right\rangle _{\mathbf {A} }} and so α k = ⟨ p k , b ⟩ ⟨ p k , p k ⟩ A . {\displaystyle \alpha _{k}={\frac {\langle \mathbf {p} _{k},\mathbf {b} \rangle }{\langle \mathbf {p} _{k},\mathbf {p} _{k}\rangle _{\mathbf {A} }}}.} This gives the following method for solving the equation A x = b {\displaystyle \mathbf {Ax} =\mathbf {b} } : find a sequence of n {\displaystyle n} conjugate directions, and then compute the coefficients α k {\displaystyle \alpha _{k}} . == As an iterative method == If we choose the conjugate vectors p k {\displaystyle \mathbf {p} _{k}} carefully, then we may not need all of them to obtain a good approximation to the solution x ∗ {\displaystyle \mathbf {x} _{*}} . So, we want to regard the conjugate gradient method as an iterative method. This also allows us to approximately solve systems where n {\displaystyle n} is so large that the direct method would take too much time. 
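The direct method described above can be sketched numerically: construct A-conjugate directions with a Gram–Schmidt step in the A-inner product, then accumulate the solution from the coefficients α_k. A hypothetical 2×2 system, using NumPy:

```python
import numpy as np

# Symmetric positive-definite test system A x = b (invented for illustration).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

p1 = np.array([1.0, 0.0])
# Make p2 A-conjugate to p1 via one Gram-Schmidt step in the A-inner product.
v = np.array([0.0, 1.0])
p2 = v - (p1 @ A @ v) / (p1 @ A @ p1) * p1      # now p1^T A p2 = 0

# x* = sum_k alpha_k p_k with alpha_k = <p_k, b> / <p_k, p_k>_A.
x = np.zeros(2)
for p in (p1, p2):
    alpha = (p @ b) / (p @ A @ p)
    x = x + alpha * p
```

Because the directions are mutually conjugate, the two coefficients can be computed independently, which is the point of the derivation above.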
We denote the initial guess for x ∗ {\displaystyle \mathbf {x} _{*}} by x 0 {\displaystyle \mathbf {x} _{0}} (we can assume without loss of generality that x 0 = 0 {\displaystyle \mathbf {x} _{0}=\mathbf {0} } , otherwise consider the system A z = b − A x 0 {\displaystyle \mathbf {Az} =\mathbf {b} -\mathbf {Ax} _{0}} instead). Starting with x 0 {\displaystyle \mathbf {x} _{0}} we search for the solution and in each iteration we need a metric to tell us whether we are closer to the solution x ∗ {\displaystyle \mathbf {x} _{*}} (that is unknown to us). This metric comes from the fact that the solution x ∗ {\displaystyle \mathbf {x} _{*}} is also the unique minimizer of the following quadratic function f ( x ) = 1 2 x T A x − x T b , x ∈ R n . {\displaystyle f(\mathbf {x} )={\tfrac {1}{2}}\mathbf {x} ^{\mathsf {T}}\mathbf {A} \mathbf {x} -\mathbf {x} ^{\mathsf {T}}\mathbf {b} ,\qquad \mathbf {x} \in \mathbf {R} ^{n}\,.} The existence of a unique minimizer is apparent as its Hessian matrix of second derivatives is symmetric positive-definite H ( f ( x ) ) = A , {\displaystyle \mathbf {H} (f(\mathbf {x} ))=\mathbf {A} \,,} and that the minimizer (use D f ( x ) = 0 {\displaystyle Df(\mathbf {x} )=0} ) solves the initial problem follows from its first derivative ∇ f ( x ) = A x − b . {\displaystyle \nabla f(\mathbf {x} )=\mathbf {A} \mathbf {x} -\mathbf {b} \,.} This suggests taking the first basis vector p 0 {\displaystyle \mathbf {p} _{0}} to be the negative of the gradient of f {\displaystyle f} at x = x 0 {\displaystyle \mathbf {x} =\mathbf {x} _{0}} . The gradient of f {\displaystyle f} equals A x − b {\displaystyle \mathbf {Ax} -\mathbf {b} } . Starting with an initial guess x 0 {\displaystyle \mathbf {x} _{0}} , this means we take p 0 = b − A x 0 {\displaystyle \mathbf {p} _{0}=\mathbf {b} -\mathbf {Ax} _{0}} . The other vectors in the basis will be conjugate to the gradient, hence the name conjugate gradient method. 
Note that p 0 {\displaystyle \mathbf {p} _{0}} is also the residual provided by this initial step of the algorithm. Let r k {\displaystyle \mathbf {r} _{k}} be the residual at the k {\displaystyle k} th step: r k = b − A x k . {\displaystyle \mathbf {r} _{k}=\mathbf {b} -\mathbf {Ax} _{k}.} As observed above, r k {\displaystyle \mathbf {r} _{k}} is the negative gradient of f {\displaystyle f} at x k {\displaystyle \mathbf {x} _{k}} , so the gradient descent method would require moving in the direction r k {\displaystyle \mathbf {r} _{k}} . Here, however, we insist that the directions p k {\displaystyle \mathbf {p} _{k}} must be conjugate to each other. A practical way to enforce this is to require that the next search direction be built out of the current residual and all previous search directions. The conjugation constraint is an orthonormality-type constraint, and hence the algorithm can be viewed as an example of Gram–Schmidt orthonormalization. This gives the following expression: p k = r k − ∑ i < k r k T A p i p i T A p i p i {\displaystyle \mathbf {p} _{k}=\mathbf {r} _{k}-\sum _{i<k}{\frac {\mathbf {r} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{i}}{\mathbf {p} _{i}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{i}}}\mathbf {p} _{i}} (see the picture at the top of the article for the effect of the conjugacy constraint on convergence). Following this direction, the next optimal location is given by x k + 1 = x k + α k p k {\displaystyle \mathbf {x} _{k+1}=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}} with α k = p k T ( b − A x k ) p k T A p k = p k T r k p k T A p k , {\displaystyle \alpha _{k}={\frac {\mathbf {p} _{k}^{\mathsf {T}}(\mathbf {b} -\mathbf {Ax} _{k})}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}}}={\frac {\mathbf {p} _{k}^{\mathsf {T}}\mathbf {r} _{k}}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}}},} where the last equality follows from the definition of r k {\displaystyle \mathbf {r} _{k}} .
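The Gram–Schmidt-style conjugation step can be sketched in Python: the new direction is the current residual minus its A-projections onto all previous directions. The helper names below are mine; the data reuses the article's 2×2 example.

```python
# Sketch of the conjugation step p_k = r_k - sum_i (r_k^T A p_i / p_i^T A p_i) p_i.
# Data reuses the article's 2x2 example; function names are illustrative.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_direction(A, r, directions):
    p = list(r)
    for q in directions:
        Aq = matvec(A, q)
        coeff = dot(r, Aq) / dot(q, Aq)      # A-projection coefficient
        p = [pi - coeff * qi for pi, qi in zip(p, q)]
    return p

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = [2.0, 1.0]

r0 = [bi - Axi for bi, Axi in zip(b, matvec(A, x))]   # residual = [-8, -3]
p0 = conjugate_direction(A, r0, [])                    # first direction = residual

alpha0 = dot(p0, r0) / dot(p0, matvec(A, p0))
r1 = [ri - alpha0 * Api for ri, Api in zip(r0, matvec(A, p0))]
p1 = conjugate_direction(A, r1, [p0])

print(dot(p1, matvec(A, p0)))   # ~0: p1 is A-conjugate to p0
```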
The expression for α k {\displaystyle \alpha _{k}} can be derived by substituting the expression for x k + 1 {\displaystyle \mathbf {x} _{k+1}} into f {\displaystyle f} and minimizing with respect to α k {\displaystyle \alpha _{k}} : f ( x k + 1 ) = f ( x k + α k p k ) =: g ( α k ) g ′ ( α k ) = ! 0 ⇒ α k = p k T ( b − A x k ) p k T A p k . {\displaystyle {\begin{aligned}f(\mathbf {x} _{k+1})&=f(\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k})=:g(\alpha _{k})\\g'(\alpha _{k})&{\overset {!}{=}}0\quad \Rightarrow \quad \alpha _{k}={\frac {\mathbf {p} _{k}^{\mathsf {T}}(\mathbf {b} -\mathbf {Ax} _{k})}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}}}\,.\end{aligned}}} === The resulting algorithm === The above algorithm gives the most straightforward explanation of the conjugate gradient method. Seemingly, the algorithm as stated requires storage of all previous search directions and residual vectors, as well as many matrix–vector multiplications, and thus can be computationally expensive. However, a closer analysis of the algorithm shows that r i {\displaystyle \mathbf {r} _{i}} is orthogonal to r j {\displaystyle \mathbf {r} _{j}} , i.e. r i T r j = 0 {\displaystyle \mathbf {r} _{i}^{\mathsf {T}}\mathbf {r} _{j}=0} , for i ≠ j {\displaystyle i\neq j} . And p i {\displaystyle \mathbf {p} _{i}} is A {\displaystyle \mathbf {A} } -orthogonal to p j {\displaystyle \mathbf {p} _{j}} , i.e. p i T A p j = 0 {\displaystyle \mathbf {p} _{i}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{j}=0} , for i ≠ j {\displaystyle i\neq j} . This can be interpreted as follows: as the algorithm progresses, p i {\displaystyle \mathbf {p} _{i}} and r i {\displaystyle \mathbf {r} _{i}} span the same Krylov subspace, where r i {\displaystyle \mathbf {r} _{i}} form the orthogonal basis with respect to the standard inner product, and p i {\displaystyle \mathbf {p} _{i}} form the orthogonal basis with respect to the inner product induced by A {\displaystyle \mathbf {A} } .
Therefore, x k {\displaystyle \mathbf {x} _{k}} can be regarded as the projection of x {\displaystyle \mathbf {x} } on the Krylov subspace. That is, if the CG method starts with x 0 = 0 {\displaystyle \mathbf {x} _{0}=0} , then x k = a r g m i n y ∈ R n { ( x − y ) ⊤ A ( x − y ) : y ∈ span ⁡ { b , A b , … , A k − 1 b } } {\displaystyle x_{k}=\mathrm {argmin} _{y\in \mathbb {R} ^{n}}{\left\{(x-y)^{\top }A(x-y):y\in \operatorname {span} \left\{b,Ab,\ldots ,A^{k-1}b\right\}\right\}}} The algorithm is detailed below for solving A x = b {\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} } where A {\displaystyle \mathbf {A} } is a real, symmetric, positive-definite matrix. The input vector x 0 {\displaystyle \mathbf {x} _{0}} can be an approximate initial solution or 0 {\displaystyle \mathbf {0} } . It is a different formulation of the exact procedure described above. r 0 := b − A x 0 if r 0 is sufficiently small, then return x 0 as the result p 0 := r 0 k := 0 repeat α k := r k T r k p k T A p k x k + 1 := x k + α k p k r k + 1 := r k − α k A p k if r k + 1 is sufficiently small, then exit loop β k := r k + 1 T r k + 1 r k T r k p k + 1 := r k + 1 + β k p k k := k + 1 end repeat return x k + 1 as the result {\displaystyle {\begin{aligned}&\mathbf {r} _{0}:=\mathbf {b} -\mathbf {Ax} _{0}\\&{\hbox{if }}\mathbf {r} _{0}{\text{ is sufficiently small, then return }}\mathbf {x} _{0}{\text{ as the result}}\\&\mathbf {p} _{0}:=\mathbf {r} _{0}\\&k:=0\\&{\text{repeat}}\\&\qquad \alpha _{k}:={\frac {\mathbf {r} _{k}^{\mathsf {T}}\mathbf {r} _{k}}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {Ap} _{k}}}\\&\qquad \mathbf {x} _{k+1}:=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}\\&\qquad \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}\\&\qquad {\hbox{if }}\mathbf {r} _{k+1}{\text{ is sufficiently small, then exit loop}}\\&\qquad \beta _{k}:={\frac {\mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {r} _{k+1}}{\mathbf {r} _{k}^{\mathsf {T}}\mathbf {r} _{k}}}\\&\qquad \mathbf {p} 
_{k+1}:=\mathbf {r} _{k+1}+\beta _{k}\mathbf {p} _{k}\\&\qquad k:=k+1\\&{\text{end repeat}}\\&{\text{return }}\mathbf {x} _{k+1}{\text{ as the result}}\end{aligned}}} This is the most commonly used algorithm. The same formula for β k {\displaystyle \beta _{k}} is also used in the Fletcher–Reeves nonlinear conjugate gradient method. ==== Restarts ==== We note that x 1 {\displaystyle \mathbf {x} _{1}} is computed by the gradient descent method applied to x 0 {\displaystyle \mathbf {x} _{0}} . Setting β k = 0 {\displaystyle \beta _{k}=0} similarly makes x k + 1 {\displaystyle \mathbf {x} _{k+1}} the result of a gradient descent step from x k {\displaystyle \mathbf {x} _{k}} , and can therefore be used as a simple implementation of a restart of the conjugate gradient iterations. Restarts can slow down convergence, but may improve stability if the conjugate gradient method misbehaves, e.g., due to round-off error. ==== Explicit residual calculation ==== The formulas x k + 1 := x k + α k p k {\displaystyle \mathbf {x} _{k+1}:=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}} and r k := b − A x k {\displaystyle \mathbf {r} _{k}:=\mathbf {b} -\mathbf {Ax} _{k}} , which both hold in exact arithmetic, make the formulas r k + 1 := r k − α k A p k {\displaystyle \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}} and r k + 1 := b − A x k + 1 {\displaystyle \mathbf {r} _{k+1}:=\mathbf {b} -\mathbf {Ax} _{k+1}} mathematically equivalent. The former is used in the algorithm to avoid an extra multiplication by A {\displaystyle \mathbf {A} } , since the vector A p k {\displaystyle \mathbf {Ap} _{k}} is already computed to evaluate α k {\displaystyle \alpha _{k}} . The latter may be more accurate: it substitutes the explicit calculation r k + 1 := b − A x k + 1 {\displaystyle \mathbf {r} _{k+1}:=\mathbf {b} -\mathbf {Ax} _{k+1}} for the implicit one by the recursion, which is subject to round-off error accumulation, and is thus recommended for occasional evaluation.
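The pseudocode above transcribes almost line for line into Python. The sketch below is dense and pure-Python for clarity; the tolerance default and the iteration cap of n steps are assumptions, not part of the pseudocode.

```python
# Direct transcription of the CG pseudocode (a sketch; tolerance and
# iteration cap are assumed defaults).

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    n = len(b)
    max_iter = n if max_iter is None else max_iter

    def matvec(v):
        return [sum(a * w for a, w in zip(row, v)) for row in A]

    def dot(u, v):
        return sum(p * q for p, q in zip(u, v))

    x = list(x0)
    r = [bi - Axi for bi, Axi in zip(b, matvec(x))]   # r0 := b - A x0
    p = list(r)                                        # p0 := r0
    rs_old = dot(r, r)
    for _ in range(max_iter):
        if rs_old ** 0.5 < tol:                        # r sufficiently small
            break
        Ap = matvec(p)
        alpha = rs_old / dot(p, Ap)                    # alpha_k
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = dot(r, r)
        beta = rs_new / rs_old                         # beta_k
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [2.0, 1.0])
print(x)   # converges to [1/11, 7/11] in two iterations
```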
The norm of the residual is typically used as a stopping criterion. The norm of the explicit residual r k + 1 := b − A x k + 1 {\displaystyle \mathbf {r} _{k+1}:=\mathbf {b} -\mathbf {Ax} _{k+1}} provides a guaranteed level of accuracy both in exact arithmetic and in the presence of rounding errors, where convergence naturally stagnates. In contrast, the implicit residual r k + 1 := r k − α k A p k {\displaystyle \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}} is known to keep getting smaller in amplitude well below the level of rounding errors and thus cannot be used to determine the stagnation of convergence. ==== Computation of alpha and beta ==== In the algorithm, α k {\displaystyle \alpha _{k}} is chosen such that r k + 1 {\displaystyle \mathbf {r} _{k+1}} is orthogonal to r k {\displaystyle \mathbf {r} _{k}} . The denominator is simplified from α k = r k T r k r k T A p k = r k T r k p k T A p k {\displaystyle \alpha _{k}={\frac {\mathbf {r} _{k}^{\mathsf {T}}\mathbf {r} _{k}}{\mathbf {r} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}}}={\frac {\mathbf {r} _{k}^{\mathsf {T}}\mathbf {r} _{k}}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {Ap} _{k}}}} since r k = p k − β k − 1 p k − 1 {\displaystyle \mathbf {r} _{k}=\mathbf {p} _{k}-\beta _{k-1}\mathbf {p} _{k-1}} and p k − 1 {\displaystyle \mathbf {p} _{k-1}} is conjugate to p k {\displaystyle \mathbf {p} _{k}} . The β k {\displaystyle \beta _{k}} is chosen such that p k + 1 {\displaystyle \mathbf {p} _{k+1}} is conjugate to p k {\displaystyle \mathbf {p} _{k}} .
Initially, β k {\displaystyle \beta _{k}} is β k = − r k + 1 T A p k p k T A p k {\displaystyle \beta _{k}=-{\frac {\mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}}}} using r k + 1 = r k − α k A p k {\displaystyle \mathbf {r} _{k+1}=\mathbf {r} _{k}-\alpha _{k}\mathbf {A} \mathbf {p} _{k}} and equivalently A p k = 1 α k ( r k − r k + 1 ) , {\displaystyle \mathbf {A} \mathbf {p} _{k}={\frac {1}{\alpha _{k}}}(\mathbf {r} _{k}-\mathbf {r} _{k+1}),} the numerator of β k {\displaystyle \beta _{k}} is rewritten as r k + 1 T A p k = 1 α k r k + 1 T ( r k − r k + 1 ) = − 1 α k r k + 1 T r k + 1 {\displaystyle \mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}={\frac {1}{\alpha _{k}}}\mathbf {r} _{k+1}^{\mathsf {T}}(\mathbf {r} _{k}-\mathbf {r} _{k+1})=-{\frac {1}{\alpha _{k}}}\mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {r} _{k+1}} because r k + 1 {\displaystyle \mathbf {r} _{k+1}} and r k {\displaystyle \mathbf {r} _{k}} are orthogonal by design. The denominator is rewritten as p k T A p k = ( r k + β k − 1 p k − 1 ) T A p k = 1 α k r k T ( r k − r k + 1 ) = 1 α k r k T r k {\displaystyle \mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}=(\mathbf {r} _{k}+\beta _{k-1}\mathbf {p} _{k-1})^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}={\frac {1}{\alpha _{k}}}\mathbf {r} _{k}^{\mathsf {T}}(\mathbf {r} _{k}-\mathbf {r} _{k+1})={\frac {1}{\alpha _{k}}}\mathbf {r} _{k}^{\mathsf {T}}\mathbf {r} _{k}} using that the search directions p k {\displaystyle \mathbf {p} _{k}} are conjugated and again that the residuals are orthogonal. This gives the β {\displaystyle \beta } in the algorithm after cancelling α k {\displaystyle \alpha _{k}} . 
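This simplification can be spot-checked numerically. The sketch below runs one step on the same 2×2 system as the numerical example that follows and compares the original projection formula for β with the simplified ratio of residual norms.

```python
# Numerical check that the simplified beta equals the original
# projection formula, using one CG step on the article's 2x2 example.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[4.0, 1.0], [1.0, 3.0]]
r0 = [-8.0, -3.0]           # residual at x0 = [2, 1]
p0 = list(r0)

Ap0 = matvec(A, p0)
alpha0 = dot(r0, r0) / dot(p0, Ap0)
r1 = [ri - alpha0 * Api for ri, Api in zip(r0, Ap0)]

beta_original = -dot(r1, Ap0) / dot(p0, Ap0)    # -r1^T A p0 / p0^T A p0
beta_simplified = dot(r1, r1) / dot(r0, r0)     # r1^T r1 / r0^T r0

print(beta_original, beta_simplified)           # both ~0.0088
```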
==== Example code in Julia ==== ==== Example code in MATLAB ==== === Numerical example === Consider the linear system Ax = b given by A x = [ 4 1 1 3 ] [ x 1 x 2 ] = [ 1 2 ] . {\displaystyle \mathbf {A} \mathbf {x} ={\begin{bmatrix}4&1\\1&3\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}={\begin{bmatrix}1\\2\end{bmatrix}}.} We will perform two steps of the conjugate gradient method beginning with the initial guess x 0 = [ 2 1 ] {\displaystyle \mathbf {x} _{0}={\begin{bmatrix}2\\1\end{bmatrix}}} in order to find an approximate solution to the system. ==== Solution ==== For reference, the exact solution is x = [ 1 11 7 11 ] ≈ [ 0.0909 0.6364 ] {\displaystyle \mathbf {x} ={\begin{bmatrix}{\frac {1}{11}}\\{\frac {7}{11}}\end{bmatrix}}\approx {\begin{bmatrix}0.0909\\0.6364\end{bmatrix}}} Our first step is to calculate the residual vector r0 associated with x0. This residual is computed from the formula r0 = b - Ax0, and in our case is equal to r 0 = [ 1 2 ] − [ 4 1 1 3 ] [ 2 1 ] = [ − 8 − 3 ] = p 0 . {\displaystyle \mathbf {r} _{0}={\begin{bmatrix}1\\2\end{bmatrix}}-{\begin{bmatrix}4&1\\1&3\end{bmatrix}}{\begin{bmatrix}2\\1\end{bmatrix}}={\begin{bmatrix}-8\\-3\end{bmatrix}}=\mathbf {p} _{0}.} Since this is the first iteration, we will use the residual vector r0 as our initial search direction p0; the method of selecting pk will change in further iterations.
We now compute the scalar α0 using the relationship α 0 = r 0 T r 0 p 0 T A p 0 = [ − 8 − 3 ] [ − 8 − 3 ] [ − 8 − 3 ] [ 4 1 1 3 ] [ − 8 − 3 ] = 73 331 ≈ 0.2205 {\displaystyle \alpha _{0}={\frac {\mathbf {r} _{0}^{\mathsf {T}}\mathbf {r} _{0}}{\mathbf {p} _{0}^{\mathsf {T}}\mathbf {Ap} _{0}}}={\frac {{\begin{bmatrix}-8&-3\end{bmatrix}}{\begin{bmatrix}-8\\-3\end{bmatrix}}}{{\begin{bmatrix}-8&-3\end{bmatrix}}{\begin{bmatrix}4&1\\1&3\end{bmatrix}}{\begin{bmatrix}-8\\-3\end{bmatrix}}}}={\frac {73}{331}}\approx 0.2205} We can now compute x1 using the formula x 1 = x 0 + α 0 p 0 = [ 2 1 ] + 73 331 [ − 8 − 3 ] ≈ [ 0.2356 0.3384 ] . {\displaystyle \mathbf {x} _{1}=\mathbf {x} _{0}+\alpha _{0}\mathbf {p} _{0}={\begin{bmatrix}2\\1\end{bmatrix}}+{\frac {73}{331}}{\begin{bmatrix}-8\\-3\end{bmatrix}}\approx {\begin{bmatrix}0.2356\\0.3384\end{bmatrix}}.} This result completes the first iteration, the result being an "improved" approximate solution to the system, x1. We may now move on and compute the next residual vector r1 using the formula r 1 = r 0 − α 0 A p 0 = [ − 8 − 3 ] − 73 331 [ 4 1 1 3 ] [ − 8 − 3 ] ≈ [ − 0.2810 0.7492 ] . {\displaystyle \mathbf {r} _{1}=\mathbf {r} _{0}-\alpha _{0}\mathbf {A} \mathbf {p} _{0}={\begin{bmatrix}-8\\-3\end{bmatrix}}-{\frac {73}{331}}{\begin{bmatrix}4&1\\1&3\end{bmatrix}}{\begin{bmatrix}-8\\-3\end{bmatrix}}\approx {\begin{bmatrix}-0.2810\\0.7492\end{bmatrix}}.} Our next step in the process is to compute the scalar β0 that will eventually be used to determine the next search direction p1. β 0 = r 1 T r 1 r 0 T r 0 ≈ [ − 0.2810 0.7492 ] [ − 0.2810 0.7492 ] [ − 8 − 3 ] [ − 8 − 3 ] = 0.0088. 
{\displaystyle \beta _{0}={\frac {\mathbf {r} _{1}^{\mathsf {T}}\mathbf {r} _{1}}{\mathbf {r} _{0}^{\mathsf {T}}\mathbf {r} _{0}}}\approx {\frac {{\begin{bmatrix}-0.2810&0.7492\end{bmatrix}}{\begin{bmatrix}-0.2810\\0.7492\end{bmatrix}}}{{\begin{bmatrix}-8&-3\end{bmatrix}}{\begin{bmatrix}-8\\-3\end{bmatrix}}}}=0.0088.} Now, using this scalar β0, we can compute the next search direction p1 using the relationship p 1 = r 1 + β 0 p 0 ≈ [ − 0.2810 0.7492 ] + 0.0088 [ − 8 − 3 ] = [ − 0.3511 0.7229 ] . {\displaystyle \mathbf {p} _{1}=\mathbf {r} _{1}+\beta _{0}\mathbf {p} _{0}\approx {\begin{bmatrix}-0.2810\\0.7492\end{bmatrix}}+0.0088{\begin{bmatrix}-8\\-3\end{bmatrix}}={\begin{bmatrix}-0.3511\\0.7229\end{bmatrix}}.} We now compute the scalar α1 using our newly acquired p1 using the same method as that used for α0. α 1 = r 1 T r 1 p 1 T A p 1 ≈ [ − 0.2810 0.7492 ] [ − 0.2810 0.7492 ] [ − 0.3511 0.7229 ] [ 4 1 1 3 ] [ − 0.3511 0.7229 ] = 0.4122. {\displaystyle \alpha _{1}={\frac {\mathbf {r} _{1}^{\mathsf {T}}\mathbf {r} _{1}}{\mathbf {p} _{1}^{\mathsf {T}}\mathbf {Ap} _{1}}}\approx {\frac {{\begin{bmatrix}-0.2810&0.7492\end{bmatrix}}{\begin{bmatrix}-0.2810\\0.7492\end{bmatrix}}}{{\begin{bmatrix}-0.3511&0.7229\end{bmatrix}}{\begin{bmatrix}4&1\\1&3\end{bmatrix}}{\begin{bmatrix}-0.3511\\0.7229\end{bmatrix}}}}=0.4122.} Finally, we find x2 using the same method as that used to find x1. x 2 = x 1 + α 1 p 1 ≈ [ 0.2356 0.3384 ] + 0.4122 [ − 0.3511 0.7229 ] = [ 0.0909 0.6364 ] . {\displaystyle \mathbf {x} _{2}=\mathbf {x} _{1}+\alpha _{1}\mathbf {p} _{1}\approx {\begin{bmatrix}0.2356\\0.3384\end{bmatrix}}+0.4122{\begin{bmatrix}-0.3511\\0.7229\end{bmatrix}}={\begin{bmatrix}0.0909\\0.6364\end{bmatrix}}.} The result, x2, is a "better" approximation to the system's solution than x1 and x0. 
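The two worked iterations can be reproduced programmatically; the Python sketch below mirrors the hand computation above, step by step.

```python
# Reproducing the two worked CG iterations; the intermediate scalars
# should match the hand computation in the text.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x0 = [2.0, 1.0]

r0 = [bi - Axi for bi, Axi in zip(b, matvec(A, x0))]   # [-8, -3]
p0 = list(r0)
alpha0 = dot(r0, r0) / dot(p0, matvec(A, p0))          # 73/331 ~ 0.2205
x1 = [xi + alpha0 * pi for xi, pi in zip(x0, p0)]
r1 = [ri - alpha0 * Api for ri, Api in zip(r0, matvec(A, p0))]

beta0 = dot(r1, r1) / dot(r0, r0)                      # ~0.0088
p1 = [ri + beta0 * pi for ri, pi in zip(r1, p0)]
alpha1 = dot(r1, r1) / dot(p1, matvec(A, p1))          # ~0.4122
x2 = [xi + alpha1 * pi for xi, pi in zip(x1, p1)]

print(x2)   # ~[0.0909, 0.6364], the exact solution [1/11, 7/11]
```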
If exact arithmetic were to be used in this example instead of limited-precision arithmetic, then the exact solution would theoretically have been reached after n = 2 iterations (n being the order of the system). == Finite termination property == Under exact arithmetic, the number of iterations required is no more than the order of the matrix. This behavior is known as the finite termination property of the conjugate gradient method: the method reaches the exact solution of a linear system in a finite number of steps (at most the dimension of the system) when exact arithmetic is used. This property arises from the fact that, at each iteration, the method generates a residual vector that is orthogonal to all previous residuals. These residuals form a mutually orthogonal set, and in an n {\displaystyle n} -dimensional space it is impossible to construct more than n {\displaystyle n} mutually orthogonal nonzero vectors. Therefore, once a zero residual appears, the method has reached the solution and terminates; this ensures that the conjugate gradient method converges in at most n {\displaystyle n} steps. To demonstrate this, consider the system: A = [ 3 − 2 − 2 4 ] , b = [ 1 1 ] {\displaystyle A={\begin{bmatrix}3&-2\\-2&4\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}1\\1\end{bmatrix}}} We start from an initial guess x 0 = [ 1 2 ] {\displaystyle \mathbf {x} _{0}={\begin{bmatrix}1\\2\end{bmatrix}}} . Since A {\displaystyle A} is symmetric positive-definite and the system is 2-dimensional, the conjugate gradient method should find the exact solution in no more than 2 steps. Running the conjugate gradient iteration on this system (for example, in MATLAB) confirms that the method reaches the exact solution [ 3 / 4 5 / 8 ] {\displaystyle {\begin{bmatrix}3/4\\5/8\end{bmatrix}}} after two iterations, consistent with the theoretical prediction. This example illustrates how the conjugate gradient method behaves as a direct method under idealized conditions.
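The demonstration described above can be sketched in Python (standing in for a MATLAB listing, which is not reproduced here). Two iterations on this 2×2 system reach the exact solution A⁻¹b = [3/4, 5/8].

```python
# Finite-termination demonstration: two CG iterations on the 2x2
# system above reach the exact solution (a Python sketch).

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[3.0, -2.0], [-2.0, 4.0]]
b = [1.0, 1.0]
x = [1.0, 2.0]                     # initial guess

r = [bi - Axi for bi, Axi in zip(b, matvec(A, x))]
p = list(r)
for k in range(2):                 # at most n = 2 steps by finite termination
    Ap = matvec(A, p)
    alpha = dot(r, r) / dot(p, Ap)
    x = [xi + alpha * pi for xi, pi in zip(x, p)]
    r_new = [ri - alpha * Api for ri, Api in zip(r, Ap)]
    beta = dot(r_new, r_new) / dot(r, r)
    p = [ri + beta * pi for ri, pi in zip(r_new, p)]
    r = r_new

print(x)   # ~[0.75, 0.625]
```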
=== Application to sparse systems === The finite termination property also has practical implications in solving large sparse systems, which frequently arise in scientific and engineering applications. For instance, discretizing the two-dimensional Laplace equation ∇ 2 u = 0 {\displaystyle \nabla ^{2}u=0} using finite differences on a uniform grid leads to a sparse linear system A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } , where A {\displaystyle A} is symmetric and positive definite. Using a 5 × 5 {\displaystyle 5\times 5} interior grid yields a 25 × 25 {\displaystyle 25\times 25} system, and the coefficient matrix A {\displaystyle A} has a five-point stencil pattern. Each row of A {\displaystyle A} contains at most five nonzero entries corresponding to the central point and its immediate neighbors. For example, the matrix generated from such a grid may look like: A = [ 4 − 1 0 ⋯ − 1 0 ⋯ − 1 4 − 1 ⋯ 0 0 ⋯ 0 − 1 4 − 1 0 0 ⋯ ⋮ ⋮ ⋱ ⋱ ⋱ ⋱ ⋮ − 1 0 ⋯ − 1 4 − 1 ⋯ 0 0 ⋯ 0 − 1 4 ⋯ ⋮ ⋮ ⋯ ⋯ ⋯ ⋯ ⋱ ] {\displaystyle A={\begin{bmatrix}4&-1&0&\cdots &-1&0&\cdots \\-1&4&-1&\cdots &0&0&\cdots \\0&-1&4&-1&0&0&\cdots \\\vdots &\vdots &\ddots &\ddots &\ddots &\ddots &\vdots \\-1&0&\cdots &-1&4&-1&\cdots \\0&0&\cdots &0&-1&4&\cdots \\\vdots &\vdots &\cdots &\cdots &\cdots &\cdots &\ddots \end{bmatrix}}} Although the system dimension is 25, the conjugate gradient method is theoretically guaranteed to terminate in at most 25 iterations under exact arithmetic. In practice, convergence often occurs in far fewer steps due to the matrix's spectral properties. This efficiency makes CGM particularly attractive for solving large-scale systems arising from partial differential equations, such as those found in heat conduction, fluid dynamics, and electrostatics.
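This setup can be sketched matrix-free: the stencil action replaces an explicit sparse matrix, so A is never stored. The right-hand side below (all ones) is an arbitrary illustrative choice.

```python
# CG on the 5-point Laplacian of a 5x5 interior grid (n = 25),
# applied matrix-free. The right-hand side is illustrative.

N = 5                                   # interior grid is N x N

def stencil_matvec(v):
    """Apply the 5-point stencil: 4 on the diagonal, -1 to each neighbor."""
    out = [0.0] * (N * N)
    for i in range(N):
        for j in range(N):
            k = i * N + j
            s = 4.0 * v[k]
            if i > 0:     s -= v[k - N]
            if i < N - 1: s -= v[k + N]
            if j > 0:     s -= v[k - 1]
            if j < N - 1: s -= v[k + 1]
            out[k] = s
    return out

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

b = [1.0] * (N * N)
x = [0.0] * (N * N)
r = list(b)                             # r0 = b since x0 = 0
p = list(r)
iterations = 0
while dot(r, r) ** 0.5 > 1e-10 and iterations < N * N:
    Ap = stencil_matvec(p)
    rs_old = dot(r, r)
    alpha = rs_old / dot(p, Ap)
    x = [xi + alpha * pi for xi, pi in zip(x, p)]
    r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
    beta = dot(r, r) / rs_old
    p = [ri + beta * pi for ri, pi in zip(r, p)]
    iterations += 1

print(iterations)   # far fewer than 25 in practice
```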
== Convergence properties == The conjugate gradient method can theoretically be viewed as a direct method, as in the absence of round-off error it produces the exact solution after a finite number of iterations, which is not larger than the size of the matrix. In practice, the exact solution is never obtained since the conjugate gradient method is unstable with respect to even small perturbations, e.g., most directions are not in practice conjugate, due to a degenerative nature of generating the Krylov subspaces. As an iterative method, the conjugate gradient method monotonically (in the energy norm) improves approximations x k {\displaystyle \mathbf {x} _{k}} to the exact solution and may reach the required tolerance after a relatively small (compared to the problem size) number of iterations. The improvement is typically linear and its speed is determined by the condition number κ ( A ) {\displaystyle \kappa (A)} of the system matrix A {\displaystyle A} : the larger κ ( A ) {\displaystyle \kappa (A)} is, the slower the improvement. However, an interesting case appears when the eigenvalues are spaced logarithmically for a large symmetric matrix. For example, let A = Q D Q T {\displaystyle A=QDQ^{T}} where Q {\displaystyle Q} is a random orthogonal matrix and D {\displaystyle D} is a diagonal matrix with eigenvalues ranging from λ n = 1 {\displaystyle \lambda _{n}=1} to λ 1 = 10 6 {\displaystyle \lambda _{1}=10^{6}} , spaced logarithmically. Despite the finite termination property of CGM, where the exact solution should theoretically be reached in at most n {\displaystyle n} steps, the method may exhibit stagnation in convergence. In such a scenario, even after many more iterations—e.g., ten times the matrix size—the error may only decrease modestly (e.g., to 10 − 5 {\displaystyle 10^{-5}} ). Moreover, the iterative error may oscillate significantly, making it unreliable as a stopping condition. 
This poor convergence is not explained by the condition number alone (e.g., κ 2 ( A ) = 10 6 {\displaystyle \kappa _{2}(A)=10^{6}} ), but rather by the eigenvalue distribution itself. When the eigenvalues are more evenly spaced or randomly distributed, such convergence issues are typically absent, highlighting that CGM performance depends not only on κ ( A ) {\displaystyle \kappa (A)} but also on how the eigenvalues are distributed. If κ ( A ) {\displaystyle \kappa (A)} is large, preconditioning is commonly used to replace the original system A x − b = 0 {\displaystyle \mathbf {Ax} -\mathbf {b} =0} with M − 1 ( A x − b ) = 0 {\displaystyle \mathbf {M} ^{-1}(\mathbf {Ax} -\mathbf {b} )=0} such that κ ( M − 1 A ) {\displaystyle \kappa (\mathbf {M} ^{-1}\mathbf {A} )} is smaller than κ ( A ) {\displaystyle \kappa (\mathbf {A} )} , see below. === Convergence theorem === Define a subset of polynomials as Π k ∗ := { p ∈ Π k : p ( 0 ) = 1 } , {\displaystyle \Pi _{k}^{*}:=\left\lbrace \ p\in \Pi _{k}\ :\ p(0)=1\ \right\rbrace \,,} where Π k {\displaystyle \Pi _{k}} is the set of polynomials of maximal degree k {\displaystyle k} . Let ( x k ) k {\displaystyle \left(\mathbf {x} _{k}\right)_{k}} be the iterative approximations of the exact solution x ∗ {\displaystyle \mathbf {x} _{*}} , and define the errors as e k := x k − x ∗ {\displaystyle \mathbf {e} _{k}:=\mathbf {x} _{k}-\mathbf {x} _{*}} . 
Now, the rate of convergence can be approximated as ‖ e k ‖ A = min p ∈ Π k ∗ ‖ p ( A ) e 0 ‖ A ≤ min p ∈ Π k ∗ max λ ∈ σ ( A ) | p ( λ ) | ‖ e 0 ‖ A ≤ 2 ( κ ( A ) − 1 κ ( A ) + 1 ) k ‖ e 0 ‖ A ≤ 2 exp ⁡ ( − 2 k κ ( A ) ) ‖ e 0 ‖ A , {\displaystyle {\begin{aligned}\left\|\mathbf {e} _{k}\right\|_{\mathbf {A} }&=\min _{p\in \Pi _{k}^{*}}\left\|p(\mathbf {A} )\mathbf {e} _{0}\right\|_{\mathbf {A} }\\&\leq \min _{p\in \Pi _{k}^{*}}\,\max _{\lambda \in \sigma (\mathbf {A} )}|p(\lambda )|\ \left\|\mathbf {e} _{0}\right\|_{\mathbf {A} }\\&\leq 2\left({\frac {{\sqrt {\kappa (\mathbf {A} )}}-1}{{\sqrt {\kappa (\mathbf {A} )}}+1}}\right)^{k}\ \left\|\mathbf {e} _{0}\right\|_{\mathbf {A} }\\&\leq 2\exp \left({\frac {-2k}{\sqrt {\kappa (\mathbf {A} )}}}\right)\ \left\|\mathbf {e} _{0}\right\|_{\mathbf {A} }\,,\end{aligned}}} where σ ( A ) {\displaystyle \sigma (\mathbf {A} )} denotes the spectrum, and κ ( A ) {\displaystyle \kappa (\mathbf {A} )} denotes the condition number. This shows that k = 1 2 κ ( A ) log ⁡ ( ‖ e 0 ‖ A ε − 1 ) {\displaystyle k={\tfrac {1}{2}}{\sqrt {\kappa (\mathbf {A} )}}\log \left(\left\|\mathbf {e} _{0}\right\|_{\mathbf {A} }\varepsilon ^{-1}\right)} iterations suffice to reduce the error to 2 ε {\displaystyle 2\varepsilon } for any ε > 0 {\displaystyle \varepsilon >0} . Note the important limit as κ ( A ) {\displaystyle \kappa (\mathbf {A} )} tends to ∞ {\displaystyle \infty } : κ ( A ) − 1 κ ( A ) + 1 ≈ 1 − 2 κ ( A ) for κ ( A ) ≫ 1 . {\displaystyle {\frac {{\sqrt {\kappa (\mathbf {A} )}}-1}{{\sqrt {\kappa (\mathbf {A} )}}+1}}\approx 1-{\frac {2}{\sqrt {\kappa (\mathbf {A} )}}}\quad {\text{for}}\quad \kappa (\mathbf {A} )\gg 1\,.} This limit shows a faster convergence rate compared to the iterative methods of Jacobi or Gauss–Seidel, which scale as ≈ 1 − 2 κ ( A ) {\displaystyle \approx 1-{\frac {2}{\kappa (\mathbf {A} )}}} .
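The quantities in this bound are easy to evaluate numerically. The sketch below uses illustrative values κ = 10⁶ and ε = 10⁻⁸ (and normalizes ‖e₀‖_A to 1); both are assumptions, not values from the text.

```python
# Evaluating the convergence bound: the per-step error reduction factor
# and the predicted iteration count k = (1/2) sqrt(kappa) log(|e0|_A / eps).
# kappa and eps below are illustrative assumptions; |e0|_A is set to 1.

import math

kappa = 1.0e6
eps = 1.0e-8

rate = (math.sqrt(kappa) - 1) / (math.sqrt(kappa) + 1)   # per-step factor
approx = 1 - 2 / math.sqrt(kappa)                        # large-kappa approximation

k = 0.5 * math.sqrt(kappa) * math.log(1.0 / eps)
k_iterations = math.ceil(k)

print(rate, approx, k_iterations)
```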
No round-off error is assumed in the convergence theorem, but the convergence bound is commonly valid in practice as theoretically explained by Anne Greenbaum. === Practical convergence === If initialized randomly, the first stage of iterations is often the fastest, as the error is eliminated within the Krylov subspace that initially reflects a smaller effective condition number. The second stage of convergence is typically well defined by the theoretical convergence bound with κ ( A ) {\textstyle {\sqrt {\kappa (\mathbf {A} )}}} , but may be super-linear, depending on a distribution of the spectrum of the matrix A {\displaystyle A} and the spectral distribution of the error. In the last stage, the smallest attainable accuracy is reached and the convergence stalls or the method may even start diverging. In typical scientific computing applications in double-precision floating-point format for matrices of large sizes, the conjugate gradient method uses a stopping criterion with a tolerance that terminates the iterations during the first or second stage. == The preconditioned conjugate gradient method == In most cases, preconditioning is necessary to ensure fast convergence of the conjugate gradient method. If M − 1 {\displaystyle \mathbf {M} ^{-1}} is symmetric positive-definite and M − 1 A {\displaystyle \mathbf {M} ^{-1}\mathbf {A} } has a better condition number than A , {\displaystyle \mathbf {A} ,} a preconditioned conjugate gradient method can be used. 
It takes the following form: r 0 := b − A x 0 {\displaystyle \mathbf {r} _{0}:=\mathbf {b} -\mathbf {Ax} _{0}} Solve: M z 0 := r 0 {\displaystyle {\textrm {Solve:}}\mathbf {M} \mathbf {z} _{0}:=\mathbf {r} _{0}} p 0 := z 0 {\displaystyle \mathbf {p} _{0}:=\mathbf {z} _{0}} k := 0 {\displaystyle k:=0\,} repeat α k := r k T z k p k T A p k {\displaystyle \alpha _{k}:={\frac {\mathbf {r} _{k}^{\mathsf {T}}\mathbf {z} _{k}}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {Ap} _{k}}}} x k + 1 := x k + α k p k {\displaystyle \mathbf {x} _{k+1}:=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}} r k + 1 := r k − α k A p k {\displaystyle \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}} if rk+1 is sufficiently small then exit loop end if S o l v e M z k + 1 := r k + 1 {\displaystyle \mathrm {Solve} \ \mathbf {M} \mathbf {z} _{k+1}:=\mathbf {r} _{k+1}} β k := r k + 1 T z k + 1 r k T z k {\displaystyle \beta _{k}:={\frac {\mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {z} _{k+1}}{\mathbf {r} _{k}^{\mathsf {T}}\mathbf {z} _{k}}}} p k + 1 := z k + 1 + β k p k {\displaystyle \mathbf {p} _{k+1}:=\mathbf {z} _{k+1}+\beta _{k}\mathbf {p} _{k}} k := k + 1 {\displaystyle k:=k+1\,} end repeat The result is xk+1 The above formulation is equivalent to applying the regular conjugate gradient method to the preconditioned system E − 1 A ( E − 1 ) T x ^ = E − 1 b {\displaystyle \mathbf {E} ^{-1}\mathbf {A} (\mathbf {E} ^{-1})^{\mathsf {T}}\mathbf {\hat {x}} =\mathbf {E} ^{-1}\mathbf {b} } where E E T = M , x ^ = E T x . {\displaystyle \mathbf {EE} ^{\mathsf {T}}=\mathbf {M} ,\qquad \mathbf {\hat {x}} =\mathbf {E} ^{\mathsf {T}}\mathbf {x} .} The Cholesky decomposition of the preconditioner must be used to keep the symmetry (and positive definiteness) of the system. However, this decomposition does not need to be computed, and it is sufficient to know M − 1 {\displaystyle \mathbf {M} ^{-1}} . 
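The preconditioned iteration above transcribes directly into Python when the preconditioner solve M z = r is passed as a callback. The Jacobi (diagonal-scaling) preconditioner below is a deliberately simple illustrative choice, not a recommendation.

```python
# Preconditioned CG (a sketch). solve_M(r) must return z with M z = r;
# here a Jacobi preconditioner M = diag(A) is used for illustration.

def pcg(A, b, x0, solve_M, tol=1e-10, max_iter=100):
    def matvec(v):
        return [sum(a * w for a, w in zip(row, v)) for row in A]

    def dot(u, v):
        return sum(p * q for p, q in zip(u, v))

    x = list(x0)
    r = [bi - Axi for bi, Axi in zip(b, matvec(x))]
    z = solve_M(r)                       # Solve M z0 = r0
    p = list(z)
    rz_old = dot(r, z)
    for _ in range(max_iter):
        if dot(r, r) ** 0.5 < tol:
            break
        Ap = matvec(p)
        alpha = rz_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        z = solve_M(r)                   # Solve M z_{k+1} = r_{k+1}
        rz_new = dot(r, z)
        beta = rz_new / rz_old
        p = [zi + beta * pi for zi, pi in zip(z, p)]
        rz_old = rz_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]   # M = diag(A)

x = pcg(A, b, [0.0, 0.0], jacobi)
print(x)   # ~[1/11, 7/11]
```

With the identity as preconditioner (solve_M returning r unchanged), the loop reduces to the ordinary conjugate gradient method.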
It can be shown that E − 1 A ( E − 1 ) T {\displaystyle \mathbf {E} ^{-1}\mathbf {A} (\mathbf {E} ^{-1})^{\mathsf {T}}} has the same spectrum as M − 1 A {\displaystyle \mathbf {M} ^{-1}\mathbf {A} } . The preconditioner matrix M has to be symmetric positive-definite and fixed, i.e., cannot change from iteration to iteration. If any of these assumptions on the preconditioner is violated, the behavior of the preconditioned conjugate gradient method may become unpredictable. An example of a commonly used preconditioner is the incomplete Cholesky factorization. === Using the preconditioner in practice === It is important to keep in mind that we do not want to invert the matrix M {\displaystyle \mathbf {M} } explicitly in order to get M − 1 {\displaystyle \mathbf {M} ^{-1}} for use in the process, since inverting M {\displaystyle \mathbf {M} } would take more time and computational resources than running the conjugate gradient iteration itself. As an example, let's say that we are using a preconditioner coming from incomplete Cholesky factorization.
The resulting matrix is the lower triangular matrix L {\displaystyle \mathbf {L} } , and the preconditioner matrix is: M = L L T {\displaystyle \mathbf {M} =\mathbf {LL} ^{\mathsf {T}}} Then we have to solve: M z = r {\displaystyle \mathbf {Mz} =\mathbf {r} } z = M − 1 r {\displaystyle \mathbf {z} =\mathbf {M} ^{-1}\mathbf {r} } But: M − 1 = ( L − 1 ) T L − 1 {\displaystyle \mathbf {M} ^{-1}=(\mathbf {L} ^{-1})^{\mathsf {T}}\mathbf {L} ^{-1}} Then: z = ( L − 1 ) T L − 1 r {\displaystyle \mathbf {z} =(\mathbf {L} ^{-1})^{\mathsf {T}}\mathbf {L} ^{-1}\mathbf {r} } Let's take an intermediate vector a {\displaystyle \mathbf {a} } : a = L − 1 r {\displaystyle \mathbf {a} =\mathbf {L} ^{-1}\mathbf {r} } r = L a {\displaystyle \mathbf {r} =\mathbf {L} \mathbf {a} } Since r {\displaystyle \mathbf {r} } and L {\displaystyle \mathbf {L} } are known, and L {\displaystyle \mathbf {L} } is lower triangular, solving for a {\displaystyle \mathbf {a} } is easy and computationally cheap by using forward substitution. Then, we substitute a {\displaystyle \mathbf {a} } in the original equation: z = ( L − 1 ) T a {\displaystyle \mathbf {z} =(\mathbf {L} ^{-1})^{\mathsf {T}}\mathbf {a} } a = L T z {\displaystyle \mathbf {a} =\mathbf {L} ^{\mathsf {T}}\mathbf {z} } Since a {\displaystyle \mathbf {a} } and L T {\displaystyle \mathbf {L} ^{\mathsf {T}}} are known, and L T {\displaystyle \mathbf {L} ^{\mathsf {T}}} is upper triangular, solving for z {\displaystyle \mathbf {z} } is easy and computationally cheap by using backward substitution. Using this method, there is no need to invert M {\displaystyle \mathbf {M} } or L {\displaystyle \mathbf {L} } explicitly at all, and we still obtain z {\displaystyle \mathbf {z} } . == The flexible preconditioned conjugate gradient method == In numerically challenging applications, sophisticated preconditioners are used, which may lead to variable preconditioning, changing between iterations.
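The two triangular solves described above can be sketched directly. The lower-triangular factor L below is an arbitrary illustrative choice, not an incomplete Cholesky factor of any particular matrix; the final check verifies that M z = L Lᵀ z reproduces r.

```python
# z = M^{-1} r with M = L L^T, via forward then backward substitution.
# L below is an arbitrary illustrative lower-triangular factor.

def forward_substitution(L, r):
    n = len(r)
    a = [0.0] * n
    for i in range(n):                  # solve L a = r, top to bottom
        a[i] = (r[i] - sum(L[i][j] * a[j] for j in range(i))) / L[i][i]
    return a

def backward_substitution(U, a):
    n = len(a)
    z = [0.0] * n
    for i in reversed(range(n)):        # solve U z = a, bottom to top
        z[i] = (a[i] - sum(U[i][j] * z[j] for j in range(i + 1, n))) / U[i][i]
    return z

L = [[2.0, 0.0], [1.0, 3.0]]
Lt = [[L[j][i] for j in range(2)] for i in range(2)]
r = [4.0, 10.0]

a = forward_substitution(L, r)          # L a = r
z = backward_substitution(Lt, a)        # L^T z = a

# Check: M z = L (L^T z) should reproduce r
LTz = [sum(Lt[i][j] * z[j] for j in range(2)) for i in range(2)]
Mz = [sum(L[i][j] * LTz[j] for j in range(2)) for i in range(2)]
print(Mz)   # ~[4.0, 10.0]
```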
Even if the preconditioner is symmetric positive-definite on every iteration, the fact that it may change makes the arguments above invalid, and in practical tests leads to a significant slowdown of the convergence of the algorithm presented above. Using the Polak–Ribière formula β k := r k + 1 T ( z k + 1 − z k ) r k T z k {\displaystyle \beta _{k}:={\frac {\mathbf {r} _{k+1}^{\mathsf {T}}\left(\mathbf {z} _{k+1}-\mathbf {z} _{k}\right)}{\mathbf {r} _{k}^{\mathsf {T}}\mathbf {z} _{k}}}} instead of the Fletcher–Reeves formula β k := r k + 1 T z k + 1 r k T z k {\displaystyle \beta _{k}:={\frac {\mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {z} _{k+1}}{\mathbf {r} _{k}^{\mathsf {T}}\mathbf {z} _{k}}}} may dramatically improve the convergence in this case. This version of the preconditioned conjugate gradient method can be called flexible, as it allows for variable preconditioning. The flexible version is also shown to be robust even if the preconditioner is not symmetric positive definite (SPD). The implementation of the flexible version requires storing an extra vector. For a fixed SPD preconditioner, r k + 1 T z k = 0 , {\displaystyle \mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {z} _{k}=0,} so both formulas for βk are equivalent in exact arithmetic, i.e., without the round-off error. The mathematical explanation of the better convergence behavior of the method with the Polak–Ribière formula is that the method is locally optimal in this case, in particular, it does not converge slower than the locally optimal steepest descent method. == Vs. the locally optimal steepest descent method == In both the original and the preconditioned conjugate gradient methods one only needs to set β k := 0 {\displaystyle \beta _{k}:=0} in order to turn them into the locally optimal steepest descent method, using a line search. With this substitution, vectors p are always the same as vectors z, so there is no need to store vectors p.
Thus, every iteration of these steepest descent methods is a bit cheaper compared to that for the conjugate gradient methods. However, the latter converge faster, unless a (highly) variable and/or non-SPD preconditioner is used, see above. == Conjugate gradient method as optimal feedback controller for double integrator == The conjugate gradient method can also be derived using optimal control theory. In this approach, the conjugate gradient method falls out as an optimal feedback controller, u = k ( x , v ) := − γ a ∇ f ( x ) − γ b v {\displaystyle u=k(x,v):=-\gamma _{a}\nabla f(x)-\gamma _{b}v} for the double integrator system, x ˙ = v , v ˙ = u {\displaystyle {\dot {x}}=v,\quad {\dot {v}}=u} The quantities γ a {\displaystyle \gamma _{a}} and γ b {\displaystyle \gamma _{b}} are variable feedback gains. == Conjugate gradient on the normal equations == The conjugate gradient method can be applied to an arbitrary n-by-m matrix by applying it to the normal equations, with matrix A^TA and right-hand side vector A^Tb, since A^TA is a symmetric positive-semidefinite matrix for any A. The result is conjugate gradient on the normal equations (CGN or CGNR): A^TAx = A^Tb As an iterative method, it is not necessary to form A^TA explicitly in memory but only to perform the matrix–vector and transpose matrix–vector multiplications. Therefore, CGNR is particularly useful when A is a sparse matrix since these operations are usually extremely efficient. However, the downside of forming the normal equations is that the condition number κ(A^TA) is equal to κ(A)^2 and so the rate of convergence of CGNR may be slow and the quality of the approximate solution may be sensitive to roundoff errors. Finding a good preconditioner is often an important part of using the CGNR method. Several algorithms have been proposed (e.g., CGLS, LSQR). The LSQR algorithm purportedly has the best numerical stability when A is ill-conditioned, i.e., A has a large condition number.
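A sketch of CGNR under these conventions, assuming NumPy (the name `cgnr` and the CGLS-style recurrence are choices made for this example). Note that A^TA never appears explicitly: each iteration performs one product with A and one with A^T.

```python
import numpy as np

def cgnr(A, b, tol=1e-12, maxiter=500):
    """CG applied to A^T A x = A^T b without ever forming A^T A:
    each iteration needs one product with A and one with A^T."""
    x = np.zeros(A.shape[1])
    r = b - A @ x          # residual of the original system
    s = A.T @ r            # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(maxiter):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Overdetermined made-up system: CGNR returns the least-squares solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
x = cgnr(A, b)
```

For a sparse A, the two matrix–vector products dominate the cost, which is exactly why the method is attractive in that setting.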
== Conjugate gradient method for complex Hermitian matrices == The conjugate gradient method with a trivial modification is extendable to solving, given complex-valued matrix A and vector b, the system of linear equations A x = b {\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} } for the complex-valued vector x, where A is Hermitian (i.e., A' = A) and positive-definite matrix, and the symbol ' denotes the conjugate transpose. The trivial modification is simply substituting the conjugate transpose for the real transpose everywhere. == Advantages and disadvantages == The advantages and disadvantages of the conjugate gradient methods are summarized in the lecture notes by Nemirovsky and Ben-Tal (Sec. 7.3). === A pathological example === Let t ∈ ( 0 , 1 ) {\textstyle t\in (0,1)} , and define W = [ t t t 1 + t t t 1 + t t t ⋱ ⋱ ⋱ t t 1 + t ] , b = [ 1 0 ⋮ 0 ] {\displaystyle W={\begin{bmatrix}t&{\sqrt {t}}&&&&\\{\sqrt {t}}&1+t&{\sqrt {t}}&&&\\&{\sqrt {t}}&1+t&{\sqrt {t}}&&\\&&{\sqrt {t}}&\ddots &\ddots &\\&&&\ddots &&\\&&&&&{\sqrt {t}}\\&&&&{\sqrt {t}}&1+t\end{bmatrix}},\quad b={\begin{bmatrix}1\\0\\\vdots \\0\end{bmatrix}}} Since W {\displaystyle W} is invertible, there exists a unique solution to W x = b {\textstyle Wx=b} . Solving it by the conjugate gradient method gives rather bad convergence: ‖ b − W x k ‖ 2 = ( 1 / t ) k , ‖ b − W x n ‖ 2 = 0 {\displaystyle \|b-Wx_{k}\|^{2}=(1/t)^{k},\quad \|b-Wx_{n}\|^{2}=0} In words, during the CG process, the error grows exponentially, until it suddenly becomes zero as the unique solution is found. == See also == == References == == Further reading == Atkinson, Kendell A. (1988). "Section 8.9". An introduction to numerical analysis (2nd ed.). John Wiley and Sons. ISBN 978-0-471-50023-0. Avriel, Mordecai (2003). Nonlinear Programming: Analysis and Methods. Dover Publishing. ISBN 978-0-486-43227-4. Golub, Gene H.; Van Loan, Charles F. (2013). "Chapter 11". Matrix Computations (4th ed.).
Johns Hopkins University Press. ISBN 978-1-4214-0794-4. Saad, Yousef (2003-04-01). "Chapter 6". Iterative methods for sparse linear systems (2nd ed.). SIAM. ISBN 978-0-89871-534-7. Meurant, Gérard (2023). "Detection and correction of silent errors in the conjugate gradient algorithm". Numerical Algorithms. 92: 869–891. https://doi.org/10.1007/s11075-022-01380-1. Meurant, Gerard; Tichy, Petr (2024). Error Norm Estimation in the Conjugate Gradient Algorithm. SIAM. ISBN 978-1-61197-785-1. == External links == "Conjugate gradients, method of", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Conjugate_gradient_method
In mathematics and computing, the Levenberg–Marquardt algorithm (LMA or just LM), also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The LMA interpolates between the Gauss–Newton algorithm (GNA) and the method of gradient descent. The LMA is more robust than the GNA, which means that in many cases it finds a solution even if it starts very far off the final minimum. For well-behaved functions and reasonable starting parameters, the LMA tends to be slower than the GNA. LMA can also be viewed as Gauss–Newton using a trust region approach. The algorithm was first published in 1944 by Kenneth Levenberg, while working at the Frankford Army Arsenal. It was rediscovered in 1963 by Donald Marquardt, who worked as a statistician at DuPont, and independently by Girard, Wynne and Morrison. The LMA is used in many software applications for solving generic curve-fitting problems. By using the Gauss–Newton algorithm it often converges faster than first-order methods. However, like other iterative optimization algorithms, the LMA finds only a local minimum, which is not necessarily the global minimum. 
== The problem == The primary application of the Levenberg–Marquardt algorithm is in the least-squares curve fitting problem: given a set of m {\displaystyle m} empirical pairs ( x i , y i ) {\displaystyle \left(x_{i},y_{i}\right)} of independent and dependent variables, find the parameters ⁠ β {\displaystyle {\boldsymbol {\beta }}} ⁠ of the model curve f ( x , β ) {\displaystyle f\left(x,{\boldsymbol {\beta }}\right)} so that the sum of the squares of the deviations S ( β ) {\displaystyle S\left({\boldsymbol {\beta }}\right)} is minimized: β ^ ∈ argmin β ⁡ S ( β ) ≡ argmin β ⁡ ∑ i = 1 m [ y i − f ( x i , β ) ] 2 , {\displaystyle {\hat {\boldsymbol {\beta }}}\in \operatorname {argmin} \limits _{\boldsymbol {\beta }}S\left({\boldsymbol {\beta }}\right)\equiv \operatorname {argmin} \limits _{\boldsymbol {\beta }}\sum _{i=1}^{m}\left[y_{i}-f\left(x_{i},{\boldsymbol {\beta }}\right)\right]^{2},} which is assumed to be non-empty. == The solution == Like other numeric minimization algorithms, the Levenberg–Marquardt algorithm is an iterative procedure. To start a minimization, the user has to provide an initial guess for the parameter vector ⁠ β {\displaystyle {\boldsymbol {\beta }}} ⁠. In cases with only one minimum, an uninformed standard guess like β T = ( 1 , 1 , … , 1 ) {\displaystyle {\boldsymbol {\beta }}^{\text{T}}={\begin{pmatrix}1,\ 1,\ \dots ,\ 1\end{pmatrix}}} will work fine; in cases with multiple minima, the algorithm converges to the global minimum only if the initial guess is already somewhat close to the final solution. In each iteration step, the parameter vector ⁠ β {\displaystyle {\boldsymbol {\beta }}} ⁠ is replaced by a new estimate ⁠ β + δ {\displaystyle {\boldsymbol {\beta }}+{\boldsymbol {\delta }}} ⁠. 
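The objective S(β) above can be written directly as a short function (assuming NumPy; the rational model and the data points are hypothetical, chosen only to illustrate the notation):

```python
import numpy as np

def sum_of_squares(f, beta, x, y):
    """S(beta) = sum_i [y_i - f(x_i, beta)]^2 for a vectorized model f."""
    return np.sum((y - f(x, beta)) ** 2)

# Hypothetical model f(x, beta) = beta_0 * x / (beta_1 + x) and data
# generated from it, so S vanishes exactly at the true parameters:
f = lambda x, beta: beta[0] * x / (beta[1] + x)
x = np.array([1.0, 2.0, 4.0])
beta_true = np.array([2.0, 1.0])
y = f(x, beta_true)
```

The algorithm below repeatedly evaluates this quantity to decide whether a proposed step β + δ is an improvement.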
To determine ⁠ δ {\displaystyle {\boldsymbol {\delta }}} ⁠, the function f ( x i , β + δ ) {\displaystyle f\left(x_{i},{\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)} is approximated by its linearization: f ( x i , β + δ ) ≈ f ( x i , β ) + J i δ , {\displaystyle f\left(x_{i},{\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)\approx f\left(x_{i},{\boldsymbol {\beta }}\right)+\mathbf {J} _{i}{\boldsymbol {\delta }},} where J i = ∂ f ( x i , β ) ∂ β {\displaystyle \mathbf {J} _{i}={\frac {\partial f\left(x_{i},{\boldsymbol {\beta }}\right)}{\partial {\boldsymbol {\beta }}}}} is the gradient (row-vector in this case) of ⁠ f {\displaystyle f} ⁠ with respect to ⁠ β {\displaystyle {\boldsymbol {\beta }}} ⁠. The sum S ( β ) {\displaystyle S\left({\boldsymbol {\beta }}\right)} of square deviations has its minimum at a zero gradient with respect to ⁠ β {\displaystyle {\boldsymbol {\beta }}} ⁠. The above first-order approximation of f ( x i , β + δ ) {\displaystyle f\left(x_{i},{\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)} gives S ( β + δ ) ≈ ∑ i = 1 m [ y i − f ( x i , β ) − J i δ ] 2 , {\displaystyle S\left({\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)\approx \sum _{i=1}^{m}\left[y_{i}-f\left(x_{i},{\boldsymbol {\beta }}\right)-\mathbf {J} _{i}{\boldsymbol {\delta }}\right]^{2},} or in vector notation, S ( β + δ ) ≈ ‖ y − f ( β ) − J δ ‖ 2 = [ y − f ( β ) − J δ ] T [ y − f ( β ) − J δ ] = [ y − f ( β ) ] T [ y − f ( β ) ] − [ y − f ( β ) ] T J δ − ( J δ ) T [ y − f ( β ) ] + δ T J T J δ = [ y − f ( β ) ] T [ y − f ( β ) ] − 2 [ y − f ( β ) ] T J δ + δ T J T J δ . 
{\displaystyle {\begin{aligned}S\left({\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)&\approx \left\|\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)-\mathbf {J} {\boldsymbol {\delta }}\right\|^{2}\\&=\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)-\mathbf {J} {\boldsymbol {\delta }}\right]^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)-\mathbf {J} {\boldsymbol {\delta }}\right]\\&=\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]-\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]^{\mathrm {T} }\mathbf {J} {\boldsymbol {\delta }}-\left(\mathbf {J} {\boldsymbol {\delta }}\right)^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]+{\boldsymbol {\delta }}^{\mathrm {T} }\mathbf {J} ^{\mathrm {T} }\mathbf {J} {\boldsymbol {\delta }}\\&=\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]-2\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]^{\mathrm {T} }\mathbf {J} {\boldsymbol {\delta }}+{\boldsymbol {\delta }}^{\mathrm {T} }\mathbf {J} ^{\mathrm {T} }\mathbf {J} {\boldsymbol {\delta }}.\end{aligned}}} Taking the derivative of this approximation of S ( β + δ ) {\displaystyle S\left({\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)} with respect to ⁠ δ {\displaystyle {\boldsymbol {\delta }}} ⁠ and setting the result to zero gives ( J T J ) δ = J T [ y − f ( β ) ] , {\displaystyle \left(\mathbf {J} ^{\mathrm {T} }\mathbf {J} \right){\boldsymbol {\delta }}=\mathbf {J} ^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right],} where J {\displaystyle \mathbf {J} } is the Jacobian matrix, whose ⁠ i {\displaystyle i} ⁠-th row equals J i {\displaystyle \mathbf {J} _{i}} , and 
where f ( β ) {\displaystyle \mathbf {f} \left({\boldsymbol {\beta }}\right)} and y {\displaystyle \mathbf {y} } are vectors with ⁠ i {\displaystyle i} ⁠-th component f ( x i , β ) {\displaystyle f\left(x_{i},{\boldsymbol {\beta }}\right)} and y i {\displaystyle y_{i}} respectively. Solving this equation for ⁠ δ {\displaystyle {\boldsymbol {\delta }}} ⁠ gives the step of the Gauss–Newton method. The Jacobian matrix as defined above is not (in general) a square matrix, but a rectangular matrix of size m × n {\displaystyle m\times n} , where n {\displaystyle n} is the number of parameters (size of the vector β {\displaystyle {\boldsymbol {\beta }}} ). The matrix multiplication ( J T J ) {\displaystyle \left(\mathbf {J} ^{\mathrm {T} }\mathbf {J} \right)} yields the required n × n {\displaystyle n\times n} square matrix and the matrix-vector product on the right hand side yields a vector of size n {\displaystyle n} . The result is a set of n {\displaystyle n} linear equations, which can be solved for ⁠ δ {\displaystyle {\boldsymbol {\delta }}} ⁠. Levenberg's contribution is to replace this equation by a "damped version": ( J T J + λ I ) δ = J T [ y − f ( β ) ] , {\displaystyle \left(\mathbf {J} ^{\mathrm {T} }\mathbf {J} +\lambda \mathbf {I} \right){\boldsymbol {\delta }}=\mathbf {J} ^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right],} where ⁠ I {\displaystyle \mathbf {I} } ⁠ is the identity matrix, giving as the increment ⁠ δ {\displaystyle {\boldsymbol {\delta }}} ⁠ to the estimated parameter vector ⁠ β {\displaystyle {\boldsymbol {\beta }}} ⁠. The (non-negative) damping factor ⁠ λ {\displaystyle \lambda } ⁠ is adjusted at each iteration.
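Levenberg's damped step can be sketched in a few lines (assuming NumPy; `lm_step` is a name chosen for this example, and a dense solve is used for simplicity):

```python
import numpy as np

def lm_step(J, r, lam):
    """Solve the damped normal equations (J^T J + lam*I) delta = J^T r,
    where r = y - f(beta) is the current residual vector."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)

# Made-up Jacobian and residual; with lam = 0 this would reduce to the
# undamped Gauss-Newton step:
rng = np.random.default_rng(0)
J = rng.standard_normal((10, 3))
r = rng.standard_normal(10)
delta = lm_step(J, r, lam=1.0)
```
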
If reduction of ⁠ S {\displaystyle S} ⁠ is rapid, a smaller value can be used, bringing the algorithm closer to the Gauss–Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, ⁠ λ {\displaystyle \lambda } ⁠ can be increased, giving a step closer to the gradient-descent direction. Note that the gradient of ⁠ S {\displaystyle S} ⁠ with respect to ⁠ β {\displaystyle {\boldsymbol {\beta }}} ⁠ equals − 2 ( J T [ y − f ( β ) ] ) T {\displaystyle -2\left(\mathbf {J} ^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]\right)^{\mathrm {T} }} . Therefore, for large values of ⁠ λ {\displaystyle \lambda } ⁠, the step will be taken approximately in the direction opposite to the gradient. If either the length of the calculated step ⁠ δ {\displaystyle {\boldsymbol {\delta }}} ⁠ or the reduction of sum of squares from the latest parameter vector ⁠ β + δ {\displaystyle {\boldsymbol {\beta }}+{\boldsymbol {\delta }}} ⁠ falls below predefined limits, iteration stops, and the last parameter vector ⁠ β {\displaystyle {\boldsymbol {\beta }}} ⁠ is considered to be the solution. When the damping factor ⁠ λ {\displaystyle \lambda } ⁠ is large relative to ‖ J T J ‖ {\displaystyle \|\mathbf {J} ^{\mathrm {T} }\mathbf {J} \|} , inverting J T J + λ I {\displaystyle \mathbf {J} ^{\mathrm {T} }\mathbf {J} +\lambda \mathbf {I} } is not necessary, as the update is well-approximated by the small gradient step λ − 1 J T [ y − f ( β ) ] {\displaystyle \lambda ^{-1}\mathbf {J} ^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]} . To make the solution scale-invariant, Marquardt's algorithm solved a modified problem with each component of the gradient scaled according to the curvature. This provides larger movement along the directions where the gradient is smaller, which avoids slow convergence in the direction of small gradient.
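The claim above, that for large λ the damped update collapses to the small gradient step λ⁻¹Jᵀ[y − f(β)], can be checked numerically on made-up data:

```python
import numpy as np

# Made-up Jacobian and residual vector:
rng = np.random.default_rng(1)
J = rng.standard_normal((8, 3))
r = rng.standard_normal(8)
lam = 1e6   # large relative to ||J^T J||

delta_exact = np.linalg.solve(J.T @ J + lam * np.eye(3), J.T @ r)
delta_grad = (J.T @ r) / lam          # small gradient step
```

The relative deviation between the two steps is of order ‖JᵀJ‖/λ, so it shrinks as the damping grows.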
Fletcher in his 1971 paper A modified Marquardt subroutine for non-linear least squares simplified the form, replacing the identity matrix ⁠ I {\displaystyle \mathbf {I} } ⁠ with the diagonal matrix consisting of the diagonal elements of ⁠ J T J {\displaystyle \mathbf {J} ^{\text{T}}\mathbf {J} } ⁠: [ J T J + λ diag ⁡ ( J T J ) ] δ = J T [ y − f ( β ) ] . {\displaystyle \left[\mathbf {J} ^{\mathrm {T} }\mathbf {J} +\lambda \operatorname {diag} \left(\mathbf {J} ^{\mathrm {T} }\mathbf {J} \right)\right]{\boldsymbol {\delta }}=\mathbf {J} ^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right].} A similar damping factor appears in Tikhonov regularization, which is used to solve linear ill-posed problems, as well as in ridge regression, an estimation technique in statistics. === Choice of damping parameter === Various more or less heuristic arguments have been put forward for the best choice for the damping parameter ⁠ λ {\displaystyle \lambda } ⁠. Theoretical arguments exist showing why some of these choices guarantee local convergence of the algorithm; however, these choices can make the global convergence of the algorithm suffer from the undesirable properties of steepest descent, in particular, very slow convergence close to the optimum. The absolute values of any choice depend on how well-scaled the initial problem is. Marquardt recommended starting with a value ⁠ λ 0 {\displaystyle \lambda _{0}} ⁠ and a factor ⁠ ν > 1 {\displaystyle \nu >1} ⁠. Initially, set λ = λ 0 {\displaystyle \lambda =\lambda _{0}} and compute the residual sum of squares S ( β ) {\displaystyle S\left({\boldsymbol {\beta }}\right)} after one step from the starting point, first with the damping factor λ = λ 0 {\displaystyle \lambda =\lambda _{0}} and secondly with ⁠ λ 0 / ν {\displaystyle \lambda _{0}/\nu } ⁠.
If both of these are worse than the initial point, then the damping is increased by successive multiplication by ⁠ ν {\displaystyle \nu } ⁠ until a better point is found with a new damping factor of ⁠ λ 0 ν k {\displaystyle \lambda _{0}\nu ^{k}} ⁠ for some ⁠ k {\displaystyle k} ⁠. If use of the damping factor ⁠ λ / ν {\displaystyle \lambda /\nu } ⁠ results in a reduction in squared residual, then this is taken as the new value of ⁠ λ {\displaystyle \lambda } ⁠ (and the new optimum location is taken as that obtained with this damping factor) and the process continues; if using ⁠ λ / ν {\displaystyle \lambda /\nu } ⁠ resulted in a worse residual, but using ⁠ λ {\displaystyle \lambda } ⁠ resulted in a better residual, then ⁠ λ {\displaystyle \lambda } ⁠ is left unchanged and the new optimum is taken as the value obtained with ⁠ λ {\displaystyle \lambda } ⁠ as damping factor. An effective strategy for the control of the damping parameter, called delayed gratification, consists of increasing the parameter by a small amount for each uphill step, and decreasing by a large amount for each downhill step. The idea behind this strategy is to avoid moving downhill too fast in the beginning of the optimization, which would restrict the steps available in future iterations and thereby slow down convergence. An increase by a factor of 2 and a decrease by a factor of 3 have been shown to be effective in most cases, while for large problems more extreme values can work better, with an increase by a factor of 1.5 and a decrease by a factor of 5.
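Putting the pieces together, the following is a minimal LM loop with an accept/reject damping schedule in the spirit described above (assuming NumPy; the model, data, update factors, and function names are choices made for this sketch, not a reference implementation):

```python
import numpy as np

def levenberg_marquardt(f, jac, y, beta0, lam0=1e-2, up=2.0, down=3.0,
                        maxiter=100, tol=1e-10):
    """Minimal LM loop: accept a step if it decreases S, then relax the
    damping by `down`; otherwise reject it and increase damping by `up`
    (the 2-and-3 schedule mentioned above)."""
    beta = np.asarray(beta0, dtype=float)
    lam = lam0
    S = np.sum((y - f(beta)) ** 2)
    for _ in range(maxiter):
        r = y - f(beta)
        J = jac(beta)
        delta = np.linalg.solve(J.T @ J + lam * np.eye(beta.size), J.T @ r)
        S_new = np.sum((y - f(beta + delta)) ** 2)
        if S_new < S:                    # downhill step: accept, damp less
            beta, S = beta + delta, S_new
            lam /= down
            if np.linalg.norm(delta) < tol:
                break
        else:                            # uphill step: reject, damp more
            lam *= up
    return beta

# Fit the hypothetical model y = b0 * exp(-b1 * x) to noiseless data:
x = np.linspace(0.0, 2.0, 20)
beta_true = np.array([2.0, 1.3])
model = lambda beta: beta[0] * np.exp(-beta[1] * x)
jacobian = lambda beta: np.column_stack(
    [np.exp(-beta[1] * x), -beta[0] * x * np.exp(-beta[1] * x)])
y = model(beta_true)
beta_hat = levenberg_marquardt(model, jacobian, y, beta0=[1.0, 1.0])
```

A production implementation would also use Fletcher's diagonal scaling and more careful stopping tests; this sketch only illustrates the accept/reject control of λ.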
=== Geodesic acceleration === When interpreting the Levenberg–Marquardt step as the velocity v k {\displaystyle {\boldsymbol {v}}_{k}} along a geodesic path in the parameter space, it is possible to improve the method by adding a second order term that accounts for the acceleration a k {\displaystyle {\boldsymbol {a}}_{k}} along the geodesic v k + 1 2 a k {\displaystyle {\boldsymbol {v}}_{k}+{\frac {1}{2}}{\boldsymbol {a}}_{k}} where a k {\displaystyle {\boldsymbol {a}}_{k}} is the solution of J k a k = − f v v . {\displaystyle {\boldsymbol {J}}_{k}{\boldsymbol {a}}_{k}=-f_{vv}.} Since this geodesic acceleration term depends only on the directional derivative f v v = ∑ μ ν v μ v ν ∂ μ ∂ ν f ( x ) {\displaystyle f_{vv}=\sum _{\mu \nu }v_{\mu }v_{\nu }\partial _{\mu }\partial _{\nu }f({\boldsymbol {x}})} along the direction of the velocity v {\displaystyle {\boldsymbol {v}}} , it does not require computing the full second order derivative matrix, requiring only a small overhead in terms of computing cost. Since the second order derivative can be a fairly complex expression, it can be convenient to replace it with a finite difference approximation f v v i ≈ f i ( x + h δ ) − 2 f i ( x ) + f i ( x − h δ ) h 2 = 2 h ( f i ( x + h δ ) − f i ( x ) h − J i δ ) {\displaystyle {\begin{aligned}f_{vv}^{i}&\approx {\frac {f_{i}({\boldsymbol {x}}+h{\boldsymbol {\delta }})-2f_{i}({\boldsymbol {x}})+f_{i}({\boldsymbol {x}}-h{\boldsymbol {\delta }})}{h^{2}}}\\&={\frac {2}{h}}\left({\frac {f_{i}({\boldsymbol {x}}+h{\boldsymbol {\delta }})-f_{i}({\boldsymbol {x}})}{h}}-{\boldsymbol {J}}_{i}{\boldsymbol {\delta }}\right)\end{aligned}}} where f ( x ) {\displaystyle f({\boldsymbol {x}})} and J {\displaystyle {\boldsymbol {J}}} have already been computed by the algorithm, therefore requiring only one additional function evaluation to compute f ( x + h δ ) {\displaystyle f({\boldsymbol {x}}+h{\boldsymbol {\delta }})} . 
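The finite-difference formula for f_vv can be sketched and checked against a case where it is exact up to rounding, namely a residual function with constant Hessians (assuming NumPy; the example functions are made up):

```python
import numpy as np

def fvv_finite_difference(f, J, x, delta, h=0.1):
    """Approximate the directional second derivative f_vv along delta
    using only one extra evaluation f(x + h*delta); f(x) and the
    Jacobian J at x are already available inside an LM iteration."""
    return (2.0 / h) * ((f(x + h * delta) - f(x)) / h - J @ delta)

# Quadratic residuals have constant Hessians, so the formula is exact:
f = lambda x: np.array([x[0] ** 2 + x[1] ** 2, x[0] * x[1]])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [x[1], x[0]]])
x0 = np.array([1.0, 2.0])
v = np.array([0.3, -0.4])
fvv = fvv_finite_difference(f, jac(x0), x0, v)
# hand-computed value: [2*(v0^2 + v1^2), 2*v0*v1] = [0.5, -0.24]
```
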
The choice of the finite difference step h {\displaystyle h} can affect the stability of the algorithm, and a value of around 0.1 is usually reasonable in general. Since the acceleration may point in opposite direction to the velocity, to prevent it from stalling the method in case the damping is too small, an additional criterion on the acceleration is added in order to accept a step, requiring that 2 ‖ a k ‖ ‖ v k ‖ ≤ α {\displaystyle {\frac {2\left\|{\boldsymbol {a}}_{k}\right\|}{\left\|{\boldsymbol {v}}_{k}\right\|}}\leq \alpha } where α {\displaystyle \alpha } is usually fixed to a value less than 1, with smaller values for harder problems. The addition of a geodesic acceleration term can allow a significant increase in convergence speed and it is especially useful when the algorithm is moving through narrow canyons in the landscape of the objective function, where the allowed steps are smaller and the higher accuracy due to the second order term gives significant improvements. == Example == In this example we try to fit the function y = a cos ⁡ ( b X ) + b sin ⁡ ( a X ) {\displaystyle y=a\cos \left(bX\right)+b\sin \left(aX\right)} using the Levenberg–Marquardt algorithm implemented in GNU Octave as the leasqr function. The graphs show progressively better fitting for the parameters a = 100 {\displaystyle a=100} , b = 102 {\displaystyle b=102} used in the initial curve. Only when the parameters in the last graph are chosen closest to the original do the curves fit exactly. This equation is an example of very sensitive initial conditions for the Levenberg–Marquardt algorithm. One reason for this sensitivity is the existence of multiple minima: the function cos ⁡ ( β x ) {\displaystyle \cos \left(\beta x\right)} has minima at parameter value β ^ {\displaystyle {\hat {\beta }}} and β ^ + 2 n π {\displaystyle {\hat {\beta }}+2n\pi } .
== See also == Trust region Nelder–Mead method Variants of the Levenberg–Marquardt algorithm have also been used for solving nonlinear systems of equations. == References == == Further reading == == External links == Detailed description of the algorithm can be found in Numerical Recipes in C, Chapter 15.5: Nonlinear models C. T. Kelley, Iterative Methods for Optimization, SIAM Frontiers in Applied Mathematics, no 18, 1999, ISBN 0-89871-433-8. Online copy History of the algorithm in SIAM news A tutorial by Ananth Ranganathan K. Madsen, H. B. Nielsen, O. Tingleff, Methods for Non-Linear Least Squares Problems (nonlinear least-squares tutorial; L-M code: analytic Jacobian secant) T. Strutz: Data Fitting and Uncertainty (A practical introduction to weighted least squares and beyond). 2nd edition, Springer Vieweg, 2016, ISBN 978-3-658-11455-8. H. P. Gavin, The Levenberg-Marquardt method for nonlinear least-squares curve-fitting problems (MATLAB implementation included)
Wikipedia/Levenberg–Marquardt_algorithm
In mathematical optimization, the active-set method is an algorithm used to identify the active constraints in a set of inequality constraints. The active constraints are then expressed as equality constraints, thereby transforming an inequality-constrained problem into a simpler equality-constrained subproblem. An optimization problem is defined using an objective function to minimize or maximize, and a set of constraints g 1 ( x ) ≥ 0 , … , g k ( x ) ≥ 0 {\displaystyle g_{1}(x)\geq 0,\dots ,g_{k}(x)\geq 0} that define the feasible region, that is, the set of all points x among which the optimal solution is sought. Given a point x 0 {\displaystyle x_{0}} in the feasible region, a constraint g i ( x ) ≥ 0 {\displaystyle g_{i}(x)\geq 0} is called active at x 0 {\displaystyle x_{0}} if g i ( x 0 ) = 0 {\displaystyle g_{i}(x_{0})=0} , and inactive at x 0 {\displaystyle x_{0}} if g i ( x 0 ) > 0. {\displaystyle g_{i}(x_{0})>0.} Equality constraints are always active. The active set at x 0 {\displaystyle x_{0}} is made up of those constraints g i ( x 0 ) {\displaystyle g_{i}(x_{0})} that are active at the current point (Nocedal & Wright 2006, p. 308). The active set is particularly important in optimization theory, as it determines which constraints will influence the final result of optimization. For example, in solving the linear programming problem, the active set gives the hyperplanes that intersect at the solution point. In quadratic programming, as the solution is not necessarily on one of the edges of the bounding polygon, an estimation of the active set gives us a subset of inequalities to watch while searching the solution, which reduces the complexity of the search.
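As a concrete illustration of these ideas, the following is a minimal active-set sketch for the special case min ½xᵀQx + cᵀx subject to x ≥ 0 with Q symmetric positive-definite (assuming NumPy). The restriction to simple nonnegativity bounds and the function name are choices made for this example; here the gradient components at the active bounds play the role of the Lagrange multipliers.

```python
import numpy as np

def active_set_qp(Q, c, tol=1e-10, maxiter=100):
    """Active-set sketch for: min 1/2 x^T Q x + c^T x  s.t.  x >= 0,
    with Q symmetric positive-definite.  The working set W holds the
    bounds x_i >= 0 currently treated as equalities (x_i fixed at 0)."""
    n = len(c)
    x = np.zeros(n)
    W = set(range(n))                    # x = 0 is a feasible vertex
    for _ in range(maxiter):
        g = Q @ x + c                    # gradient; its entries on W act
                                         # as the bounds' multipliers
        if all(g[i] >= -tol for i in W):
            return x                     # all multipliers >= 0: optimal
        W.remove(min(W, key=lambda i: g[i]))   # drop most negative one
        while True:                      # equality-constrained subproblem
            F = [i for i in range(n) if i not in W]
            xt = np.zeros(n)
            xt[F] = np.linalg.solve(Q[np.ix_(F, F)], -c[F])
            blocking = [i for i in F if xt[i] < -tol]
            if not blocking:
                x = xt
                break
            # move toward xt only up to the nearest violated bound,
            # which is then added back to the working set
            k = min(blocking, key=lambda i: x[i] / (x[i] - xt[i]))
            x = x + (x[k] / (x[k] - xt[k])) * (xt - x)
            x[k] = 0.0
            W.add(k)
    return x

# The unconstrained minimizer here is [1, -1], which is infeasible,
# so the bound on x_1 stays active and the solution is [1, 0]:
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, 2.0])
x = active_set_qp(Q, c)
```

Each outer pass drops one constraint with a negative multiplier, and the inner loop adds back any bound that blocks the step, which is exactly the pattern described in the next section.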
== Active-set methods == In general an active-set algorithm has the following structure:

Find a feasible starting point
repeat until "optimal enough"
    solve the equality problem defined by the active set (approximately)
    compute the Lagrange multipliers of the active set
    remove a subset of the constraints with negative Lagrange multipliers
    search for infeasible constraints among the inactive constraints and add them to the problem
end repeat

The motivation for this is that near the optimum usually only a small number of all constraints are binding, and the solve step usually takes superlinear time in the number of constraints. Thus, repeatedly solving a sequence of equality-constrained problems, dropping constraints which are not violated but stand in the way of improvement (negative Lagrange multipliers) and adding those constraints which the current solution violates, can converge to the true solution. The optimum of the previous problem can often provide an initial guess in case the equality-constrained problem solver needs an initial value. Methods that can be described as active-set methods include:

Successive linear programming (SLP)
Sequential quadratic programming (SQP)
Sequential linear-quadratic programming (SLQP)
Reduced gradient method (RG)
Generalized reduced gradient method (GRG)

== Performance == Consider the problem of linearly constrained convex quadratic programming. Under reasonable assumptions (the problem is feasible, the system of constraints is regular at every point, and the quadratic objective is strongly convex), the active-set method terminates after finitely many steps, and yields a global solution to the problem. Theoretically, the active-set method may perform a number of iterations exponential in the number m of constraints, like the simplex method. However, its practical behaviour is typically much better (Sec. 9.1). == References == == Bibliography == Murty, K. G. (1988). Linear complementarity, linear and nonlinear programming.
Sigma Series in Applied Mathematics. Vol. 3. Berlin: Heldermann Verlag. pp. xlviii+629 pp. ISBN 3-88538-403-5. MR 0949214. Archived from the original on 2010-04-01. Retrieved 2010-04-03. Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-30303-1.
Wikipedia/Active-set_method
In mathematical optimization, Lemke's algorithm is a procedure for solving linear complementarity problems, and more generally mixed linear complementarity problems. It is named after Carlton E. Lemke. Lemke's algorithm is of pivoting or basis-exchange type. Similar algorithms can compute Nash equilibria for two-person matrix and bimatrix games. == References == Cottle, Richard W.; Pang, Jong-Shi; Stone, Richard E. (1992). The linear complementarity problem. Computer Science and Scientific Computing. Boston, MA: Academic Press, Inc. pp. xxiv+762 pp. ISBN 0-12-192350-9. MR 1150683. Murty, K. G. (1988). Linear complementarity, linear and nonlinear programming. Sigma Series in Applied Mathematics. Vol. 3. Berlin: Heldermann Verlag. pp. xlviii+629 pp. ISBN 3-88538-403-5. Archived from the original on 2010-04-01. (Available for download at the website of Professor Katta G. Murty.) MR 0949214 == External links == OMatrix manual on Lemke Chris Hecker's GDC presentation on MLCPs and Lemke Linear Complementarity and Mathematical (Non-linear) Programming Siconos/Numerics open-source GPL implementation in C of Lemke's algorithm and other methods to solve LCPs and MLCPs
Wikipedia/Lemke's_algorithm
The Frank–Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization. Also known as the conditional gradient method, reduced gradient algorithm and the convex combination algorithm, the method was originally proposed by Marguerite Frank and Philip Wolfe in 1956. In each iteration, the Frank–Wolfe algorithm considers a linear approximation of the objective function, and moves towards a minimizer of this linear function (taken over the same domain). == Problem statement == Suppose D {\displaystyle {\mathcal {D}}} is a compact convex set in a vector space and f : D → R {\displaystyle f\colon {\mathcal {D}}\to \mathbb {R} } is a convex, differentiable real-valued function. The Frank–Wolfe algorithm solves the optimization problem Minimize f ( x ) {\displaystyle f(\mathbf {x} )} subject to x ∈ D {\displaystyle \mathbf {x} \in {\mathcal {D}}} . == Algorithm == Initialization: Let k ← 0 {\displaystyle k\leftarrow 0} , and let x 0 {\displaystyle \mathbf {x} _{0}\!} be any point in D {\displaystyle {\mathcal {D}}} . Step 1. Direction-finding subproblem: Find s k {\displaystyle \mathbf {s} _{k}} solving Minimize s T ∇ f ( x k ) {\displaystyle \mathbf {s} ^{T}\nabla f(\mathbf {x} _{k})} Subject to s ∈ D {\displaystyle \mathbf {s} \in {\mathcal {D}}} (Interpretation: Minimize the linear approximation of the problem given by the first-order Taylor approximation of f {\displaystyle f} around x k {\displaystyle \mathbf {x} _{k}\!} constrained to stay within D {\displaystyle {\mathcal {D}}} .) Step 2. Step size determination: Set α ← 2 k + 2 {\displaystyle \alpha \leftarrow {\frac {2}{k+2}}} , or alternatively find α {\displaystyle \alpha } that minimizes f ( x k + α ( s k − x k ) ) {\displaystyle f(\mathbf {x} _{k}+\alpha (\mathbf {s} _{k}-\mathbf {x} _{k}))} subject to 0 ≤ α ≤ 1 {\displaystyle 0\leq \alpha \leq 1} . Step 3. 
Update: Let x k + 1 ← x k + α ( s k − x k ) {\displaystyle \mathbf {x} _{k+1}\leftarrow \mathbf {x} _{k}+\alpha (\mathbf {s} _{k}-\mathbf {x} _{k})} , let k ← k + 1 {\displaystyle k\leftarrow k+1} and go to Step 1. == Properties == While competing methods such as gradient descent for constrained optimization require a projection step back to the feasible set in each iteration, the Frank–Wolfe algorithm only needs the solution of a linear problem over the same set in each iteration, and automatically stays in the feasible set. The convergence of the Frank–Wolfe algorithm is sublinear in general: the error in the objective function relative to the optimum is O ( 1 / k ) {\displaystyle O(1/k)} after k iterations, so long as the gradient is Lipschitz continuous with respect to some norm. The same convergence rate can also be shown if the sub-problems are only solved approximately. The iterations of the algorithm can always be represented as a sparse convex combination of the extreme points of the feasible set, which has contributed to the popularity of the algorithm for sparse greedy optimization in machine learning and signal processing problems, as well as, for example, the optimization of minimum-cost flows in transportation networks. If the feasible set is given by a set of linear constraints, then the subproblem to be solved in each iteration becomes a linear program. While the worst-case convergence rate of O ( 1 / k ) {\displaystyle O(1/k)} cannot be improved in general, faster convergence can be obtained for special problem classes, such as some strongly convex problems.
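The three steps above can be sketched in a few lines of NumPy. The instance below (a quadratic minimized over the probability simplex) and all names are illustrative, not part of the original presentation; for the simplex, the direction-finding subproblem is solved at a vertex, so the linear minimization oracle is a single line.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    """Frank-Wolfe with the standard step size alpha = 2/(k+2)."""
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        s = lmo(g)               # Step 1: direction-finding subproblem
        gap = (x - s) @ g        # Frank-Wolfe gap; upper-bounds f(x) - f(x*)
        if gap < 1e-8:           # usable as a stopping criterion
            break
        alpha = 2.0 / (k + 2)    # Step 2: step size
        x = x + alpha * (s - x)  # Step 3: update; a convex combination, so x stays feasible
    return x

# Toy instance: minimize ||x - b||^2 over the probability simplex. The
# linear subproblem over the simplex is minimized at the vertex e_i with
# i = argmin_i g_i.
b = np.array([0.1, 0.5, 0.4])
f = lambda x: np.sum((x - b) ** 2)
grad = lambda x: 2.0 * (x - b)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]

x = frank_wolfe(grad, lmo, x0=np.array([1.0, 0.0, 0.0]))
```

Note that no projection is ever needed: every iterate is a convex combination of simplex vertices and therefore feasible by construction.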
== Lower bounds on the solution value, and primal-dual analysis == Since f {\displaystyle f} is convex, for any two points x , y ∈ D {\displaystyle \mathbf {x} ,\mathbf {y} \in {\mathcal {D}}} we have: f ( y ) ≥ f ( x ) + ( y − x ) T ∇ f ( x ) {\displaystyle f(\mathbf {y} )\geq f(\mathbf {x} )+(\mathbf {y} -\mathbf {x} )^{T}\nabla f(\mathbf {x} )} This also holds for the (unknown) optimal solution x ∗ {\displaystyle \mathbf {x} ^{*}} . That is, f ( x ∗ ) ≥ f ( x ) + ( x ∗ − x ) T ∇ f ( x ) {\displaystyle f(\mathbf {x} ^{*})\geq f(\mathbf {x} )+(\mathbf {x} ^{*}-\mathbf {x} )^{T}\nabla f(\mathbf {x} )} . The best lower bound with respect to a given point x {\displaystyle \mathbf {x} } is given by f ( x ∗ ) ≥ f ( x ) + ( x ∗ − x ) T ∇ f ( x ) ≥ min y ∈ D { f ( x ) + ( y − x ) T ∇ f ( x ) } = f ( x ) − x T ∇ f ( x ) + min y ∈ D y T ∇ f ( x ) {\displaystyle {\begin{aligned}f(\mathbf {x} ^{*})&\geq f(\mathbf {x} )+(\mathbf {x} ^{*}-\mathbf {x} )^{T}\nabla f(\mathbf {x} )\\&\geq \min _{\mathbf {y} \in D}\left\{f(\mathbf {x} )+(\mathbf {y} -\mathbf {x} )^{T}\nabla f(\mathbf {x} )\right\}\\&=f(\mathbf {x} )-\mathbf {x} ^{T}\nabla f(\mathbf {x} )+\min _{\mathbf {y} \in D}\mathbf {y} ^{T}\nabla f(\mathbf {x} )\end{aligned}}} The latter optimization problem is solved in every iteration of the Frank–Wolfe algorithm, therefore the solution s k {\displaystyle \mathbf {s} _{k}} of the direction-finding subproblem of the k {\displaystyle k} -th iteration can be used to determine increasing lower bounds l k {\displaystyle l_{k}} during each iteration by setting l 0 = − ∞ {\displaystyle l_{0}=-\infty } and l k := max ( l k − 1 , f ( x k ) + ( s k − x k ) T ∇ f ( x k ) ) {\displaystyle l_{k}:=\max(l_{k-1},f(\mathbf {x} _{k})+(\mathbf {s} _{k}-\mathbf {x} _{k})^{T}\nabla f(\mathbf {x} _{k}))} Such lower bounds on the unknown optimal value are important in practice because they can be used as a stopping criterion, and give an efficient certificate of the approximation quality in every 
iteration, since always l k ≤ f ( x ∗ ) ≤ f ( x k ) {\displaystyle l_{k}\leq f(\mathbf {x} ^{*})\leq f(\mathbf {x} _{k})} . It has been shown that the corresponding duality gap, that is, the difference between f ( x k ) {\displaystyle f(\mathbf {x} _{k})} and the lower bound l k {\displaystyle l_{k}} , decreases with the same convergence rate, i.e. f ( x k ) − l k = O ( 1 / k ) . {\displaystyle f(\mathbf {x} _{k})-l_{k}=O(1/k).} == Notes == == Bibliography == Jaggi, Martin (2013). "Revisiting Frank–Wolfe: Projection-Free Sparse Convex Optimization". Journal of Machine Learning Research: Workshop and Conference Proceedings. 28 (1): 427–435. (Overview paper) The Frank–Wolfe algorithm description Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-30303-1. == External links == https://conditional-gradients.org/: a survey of Frank–Wolfe algorithms. Marguerite Frank giving a personal account of the history of the algorithm == See also == Proximal gradient methods
Wikipedia/Frank–Wolfe_algorithm
In statistics, a generalized linear model (GLM) is a flexible generalization of ordinary linear regression. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value. Generalized linear models were formulated by John Nelder and Robert Wedderburn as a way of unifying various other statistical models, including linear regression, logistic regression and Poisson regression. They proposed an iteratively reweighted least squares method for maximum likelihood estimation (MLE) of the model parameters. MLE remains popular and is the default method on many statistical computing packages. Other approaches, including Bayesian regression and least squares fitting to variance stabilized responses, have been developed. == Intuition == Ordinary linear regression predicts the expected value of a given unknown quantity (the response variable, a random variable) as a linear combination of a set of observed values (predictors). This implies that a constant change in a predictor leads to a constant change in the response variable (i.e. a linear-response model). This is appropriate when the response variable can vary, to a good approximation, indefinitely in either direction, or more generally for any quantity that only varies by a relatively small amount compared to the variation in the predictive variables, e.g. human heights. However, these assumptions are inappropriate for some types of response variables. For example, in cases where the response variable is expected to be always positive and varying over a wide range, constant input changes lead to geometrically (i.e. exponentially) varying, rather than constantly varying, output changes. 
As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10 degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over differently-sized beaches. More specifically, the problem is that if the model is used to predict the new attendance with a temperature drop of 10 degrees for a beach that regularly receives 50 beachgoers, it would predict an impossible attendance value of −950. Logically, a more realistic model would instead predict a constant relative (multiplicative) change in beach attendance (e.g. an increase of 10 degrees leads to a doubling in beach attendance, and a drop of 10 degrees leads to a halving in attendance). Such a model is termed an exponential-response model (or log-linear model, since the logarithm of the response is predicted to vary linearly). Similarly, a model that predicts a probability of making a yes/no choice (a Bernoulli variable) is even less suitable as a linear-response model, since probabilities are bounded on both ends (they must be between 0 and 1). Imagine, for example, a model that predicts the likelihood of a given person going to the beach as a function of temperature. A reasonable model might predict, for example, that a change of 10 degrees makes a person two times more or less likely to go to the beach. But what does "twice as likely" mean in terms of a probability? It cannot literally mean to double the probability value (e.g. 50% becomes 100%, 75% becomes 150%, etc.). Rather, it is the odds that are doubling: from 2:1 odds, to 4:1 odds, to 8:1 odds, etc. Such a model is a log-odds or logistic model.
Generalized linear models cover all these situations by allowing for response variables that have arbitrary distributions (rather than simply normal distributions), and for an arbitrary function of the response variable (the link function) to vary linearly with the predictors (rather than assuming that the response itself must vary linearly). For example, the case above of predicted number of beach attendees would typically be modeled with a Poisson distribution and a log link, while the case of predicted probability of beach attendance would typically be modelled with a Bernoulli distribution (or binomial distribution, depending on exactly how the problem is phrased) and a log-odds (or logit) link function. == Overview == In a generalized linear model (GLM), each outcome Y of the dependent variables is assumed to be generated from a particular distribution in an exponential family, a large class of probability distributions that includes the normal, binomial, Poisson and gamma distributions, among others. The conditional mean μ of the distribution depends on the independent variables X through: E ⁡ ( Y ∣ X ) = μ = g − 1 ( X β ) , {\displaystyle \operatorname {E} (\mathbf {Y} \mid \mathbf {X} )={\boldsymbol {\mu }}=g^{-1}(\mathbf {X} {\boldsymbol {\beta }}),} where E(Y | X) is the expected value of Y conditional on X; Xβ is the linear predictor, a linear combination of unknown parameters β; g is the link function. In this framework, the variance is typically a function, V, of the mean: Var ⁡ ( Y ∣ X ) = V ⁡ ( g − 1 ( X β ) ) . {\displaystyle \operatorname {Var} (\mathbf {Y} \mid \mathbf {X} )=\operatorname {V} (g^{-1}(\mathbf {X} {\boldsymbol {\beta }})).} It is convenient if V follows from an exponential family of distributions, but it may simply be that the variance is a function of the predicted value. The unknown parameters, β, are typically estimated with maximum likelihood, maximum quasi-likelihood, or Bayesian techniques. 
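The two ingredients of the overview, a mean given by the inverse link applied to the linear predictor and a variance that is a function of that mean, can be illustrated with a small simulation. The Poisson/log-link setup, the coefficient values, and all names below are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # design matrix
beta = np.array([0.5, 0.3])                            # illustrative coefficients

eta = X @ beta        # linear predictor X beta
mu = np.exp(eta)      # log link: E(Y | X) = g^{-1}(X beta) = exp(X beta)
y = rng.poisson(mu)   # response drawn from the assumed Poisson distribution

# For the Poisson the variance is a function of the mean, V(mu) = mu, so
# among observations whose predicted mean is nearly constant the sample
# variance of y should track the sample mean.
band = (mu > 1.4) & (mu < 1.6)
```

Within the narrow band of predicted means, the sample mean and sample variance of the simulated responses both come out near 1.5, reflecting V(μ) = μ.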
== Model components == The GLM consists of three elements: 1. A particular distribution for modeling Y {\displaystyle Y} from among those which are considered exponential families of probability distributions, 2. A linear predictor η = X β {\displaystyle \eta =X\beta } , and 3. A link function g {\displaystyle g} such that E ⁡ ( Y ∣ X ) = μ = g − 1 ( η ) {\displaystyle \operatorname {E} (Y\mid X)=\mu =g^{-1}(\eta )} . === Probability distribution === An overdispersed exponential family of distributions is a generalization of an exponential family and the exponential dispersion model of distributions and includes those families of probability distributions, parameterized by θ {\displaystyle {\boldsymbol {\theta }}} and τ {\displaystyle \tau } , whose density functions f (or probability mass function, for the case of a discrete distribution) can be expressed in the form f Y ( y ∣ θ , τ ) = h ( y , τ ) exp ⁡ ( b ( θ ) T T ( y ) − A ( θ ) d ( τ ) ) . {\displaystyle f_{Y}(\mathbf {y} \mid {\boldsymbol {\theta }},\tau )=h(\mathbf {y} ,\tau )\exp \left({\frac {\mathbf {b} ({\boldsymbol {\theta }})^{\rm {T}}\mathbf {T} (\mathbf {y} )-A({\boldsymbol {\theta }})}{d(\tau )}}\right).\,\!} The dispersion parameter, τ {\displaystyle \tau } , typically is known and is usually related to the variance of the distribution. The functions h ( y , τ ) {\displaystyle h(\mathbf {y} ,\tau )} , b ( θ ) {\displaystyle \mathbf {b} ({\boldsymbol {\theta }})} , T ( y ) {\displaystyle \mathbf {T} (\mathbf {y} )} , A ( θ ) {\displaystyle A({\boldsymbol {\theta }})} , and d ( τ ) {\displaystyle d(\tau )} are known. Many common distributions are in this family, including the normal, exponential, gamma, Poisson, Bernoulli, and (for fixed number of trials) binomial, multinomial, and negative binomial. 
For scalar y {\displaystyle \mathbf {y} } and θ {\displaystyle {\boldsymbol {\theta }}} (denoted y {\displaystyle y} and θ {\displaystyle \theta } in this case), this reduces to f Y ( y ∣ θ , τ ) = h ( y , τ ) exp ⁡ ( b ( θ ) T ( y ) − A ( θ ) d ( τ ) ) . {\displaystyle f_{Y}(y\mid \theta ,\tau )=h(y,\tau )\exp \left({\frac {b(\theta )T(y)-A(\theta )}{d(\tau )}}\right).\,\!} θ {\displaystyle {\boldsymbol {\theta }}} is related to the mean of the distribution. If b ( θ ) {\displaystyle \mathbf {b} ({\boldsymbol {\theta }})} is the identity function, then the distribution is said to be in canonical form (or natural form). Note that any distribution can be converted to canonical form by rewriting θ {\displaystyle {\boldsymbol {\theta }}} as θ ′ {\displaystyle {\boldsymbol {\theta }}'} and then applying the transformation θ = b ( θ ′ ) {\displaystyle {\boldsymbol {\theta }}=\mathbf {b} ({\boldsymbol {\theta }}')} . It is always possible to convert A ( θ ) {\displaystyle A({\boldsymbol {\theta }})} in terms of the new parametrization, even if b ( θ ′ ) {\displaystyle \mathbf {b} ({\boldsymbol {\theta }}')} is not a one-to-one function; see comments in the page on exponential families. If, in addition, T ( y ) {\displaystyle \mathbf {T} (\mathbf {y} )} and b ( θ ) {\displaystyle \mathbf {b} ({\boldsymbol {\theta }})} are the identity, then θ {\displaystyle {\boldsymbol {\theta }}} is called the canonical parameter (or natural parameter) and is related to the mean through μ = E ⁡ ( y ) = ∇ θ A ( θ ) . {\displaystyle {\boldsymbol {\mu }}=\operatorname {E} (\mathbf {y} )=\nabla _{\boldsymbol {\theta }}A({\boldsymbol {\theta }}).\,\!} For scalar y {\displaystyle \mathbf {y} } and θ {\displaystyle {\boldsymbol {\theta }}} , this reduces to μ = E ⁡ ( y ) = A ′ ( θ ) . {\displaystyle \mu =\operatorname {E} (y)=A'(\theta ).} Under this scenario, the variance of the distribution can be shown to be Var ⁡ ( y ) = ∇ θ 2 A ( θ ) d ( τ ) . 
{\displaystyle \operatorname {Var} (\mathbf {y} )=\nabla _{\boldsymbol {\theta }}^{2}A({\boldsymbol {\theta }})d(\tau ).\,\!} For scalar y {\displaystyle \mathbf {y} } and θ {\displaystyle {\boldsymbol {\theta }}} , this reduces to Var ⁡ ( y ) = A ″ ( θ ) d ( τ ) . {\displaystyle \operatorname {Var} (y)=A''(\theta )d(\tau ).\,\!} === Linear predictor === The linear predictor is the quantity which incorporates the information about the independent variables into the model. The symbol η (Greek "eta") denotes a linear predictor. It is related to the expected value of the data through the link function. η is expressed as linear combinations (thus, "linear") of unknown parameters β. The coefficients of the linear combination are represented as the matrix of independent variables X. η can thus be expressed as η = X β . {\displaystyle \eta =\mathbf {X} {\boldsymbol {\beta }}.\,} === Link function === The link function provides the relationship between the linear predictor and the mean of the distribution function. There are many commonly used link functions, and their choice is informed by several considerations. There is always a well-defined canonical link function which is derived from the exponential of the response's density function. However, in some cases it makes sense to try to match the domain of the link function to the range of the distribution function's mean, or use a non-canonical link function for algorithmic purposes, for example Bayesian probit regression. When using a distribution function with a canonical parameter θ , {\displaystyle \theta ,} the canonical link function is the function that expresses θ {\displaystyle \theta } in terms of μ , {\displaystyle \mu ,} i.e. θ = g ( μ ) . 
{\displaystyle \theta =g(\mu ).} For the most common distributions, the mean μ {\displaystyle \mu } is one of the parameters in the standard form of the distribution's density function, and then g ( μ ) {\displaystyle g(\mu )} is the function as defined above that maps the density function into its canonical form. When using the canonical link function, g ( μ ) = θ = X β , {\displaystyle g(\mu )=\theta =\mathbf {X} {\boldsymbol {\beta }},} which allows X T Y {\displaystyle \mathbf {X} ^{\rm {T}}\mathbf {Y} } to be a sufficient statistic for β {\displaystyle {\boldsymbol {\beta }}} . Following is a table of several exponential-family distributions in common use and the data they are typically used for, along with the canonical link functions and their inverses (sometimes referred to as the mean function, as done here). In the cases of the exponential and gamma distributions, the domain of the canonical link function is not the same as the permitted range of the mean. In particular, the linear predictor may be positive, which would give an impossible negative mean. When maximizing the likelihood, precautions must be taken to avoid this. An alternative is to use a noncanonical link function. In the case of the Bernoulli, binomial, categorical and multinomial distributions, the support of the distributions is not the same type of data as the parameter being predicted. In all of these cases, the predicted parameter is one or more probabilities, i.e. real numbers in the range [ 0 , 1 ] {\displaystyle [0,1]} . The resulting model is known as logistic regression (or multinomial logistic regression in the case that K-way rather than binary values are being predicted). For the Bernoulli and binomial distributions, the parameter is a single probability, indicating the likelihood of occurrence of a single event. 
The Bernoulli still satisfies the basic condition of the generalized linear model in that, even though a single outcome will always be either 0 or 1, the expected value will nonetheless be a real-valued probability, i.e. the probability of occurrence of a "yes" (or 1) outcome. Similarly, in a binomial distribution, the expected value is Np, i.e. the expected proportion of "yes" outcomes will be the probability to be predicted. For categorical and multinomial distributions, the parameter to be predicted is a K-vector of probabilities, with the further restriction that all probabilities must add up to 1. Each probability indicates the likelihood of occurrence of one of the K possible values. For the multinomial distribution, and for the vector form of the categorical distribution, the expected values of the elements of the vector can be related to the predicted probabilities similarly to the binomial and Bernoulli distributions. == Fitting == === Maximum likelihood === The maximum likelihood estimates can be found using an iteratively reweighted least squares algorithm or Newton's method with updates of the form: β ( t + 1 ) = β ( t ) + J − 1 ( β ( t ) ) u ( β ( t ) ) , {\displaystyle {\boldsymbol {\beta }}^{(t+1)}={\boldsymbol {\beta }}^{(t)}+{\mathcal {J}}^{-1}({\boldsymbol {\beta }}^{(t)})u({\boldsymbol {\beta }}^{(t)}),} where J ( β ( t ) ) {\displaystyle {\mathcal {J}}({\boldsymbol {\beta }}^{(t)})} is the observed information matrix (the negative of the Hessian matrix) and u ( β ( t ) ) {\displaystyle u({\boldsymbol {\beta }}^{(t)})} is the score function; or Fisher's scoring method: β ( t + 1 ) = β ( t ) + I − 1 ( β ( t ) ) u ( β ( t ) ) , {\displaystyle {\boldsymbol {\beta }}^{(t+1)}={\boldsymbol {\beta }}^{(t)}+{\mathcal {I}}^{-1}({\boldsymbol {\beta }}^{(t)})u({\boldsymbol {\beta }}^{(t)}),} where I ( β ( t ) ) {\displaystyle {\mathcal {I}}({\boldsymbol {\beta }}^{(t)})} is the Fisher information matrix.
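The Fisher scoring update can be sketched for logistic regression, where the working weights are p(1 − p). The data, coefficient values, and function names below are illustrative; this is a minimal sketch, not a production fitting routine.

```python
import numpy as np

def irls_logistic(X, y, iters=25):
    """Fisher scoring for logistic regression:
    beta <- beta + I^{-1}(beta) u(beta)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))  # mu_i = g^{-1}(x_i^T beta)
        W = p * (1.0 - p)                    # working weights (variance function)
        u = X.T @ (y - p)                    # score function u(beta)
        I = X.T @ (W[:, None] * X)           # Fisher information matrix
        beta = beta + np.linalg.solve(I, u)  # scoring update
    return beta

# Simulated data with known, illustrative coefficients.
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-0.5, 1.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

beta_hat = irls_logistic(X, y)  # close to true_beta for this sample size
```

At convergence the score X^T(y − p) is numerically zero, which is the maximum-likelihood first-order condition.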
Note that if the canonical link function is used, then they are the same. === Bayesian methods === In general, the posterior distribution cannot be found in closed form and so must be approximated, usually using Laplace approximations or some type of Markov chain Monte Carlo method such as Gibbs sampling. == Examples == === General linear models === A possible point of confusion has to do with the distinction between generalized linear models and general linear models, two broad statistical models. Co-originator John Nelder has expressed regret over this terminology. The general linear model may be viewed as a special case of the generalized linear model with identity link and responses normally distributed. As most exact results of interest are obtained only for the general linear model, the general linear model has undergone a somewhat longer historical development. Results for the generalized linear model with non-identity link are asymptotic (tending to work well with large samples). === Linear regression === A simple, very important example of a generalized linear model (also an example of a general linear model) is linear regression. In linear regression, the use of the least-squares estimator is justified by the Gauss–Markov theorem, which does not assume that the distribution is normal. From the perspective of generalized linear models, however, it is useful to suppose that the distribution function is the normal distribution with constant variance and the link function is the identity, which is the canonical link if the variance is known. Under these assumptions, the least-squares estimator is obtained as the maximum-likelihood parameter estimate. For the normal distribution, the generalized linear model has a closed form expression for the maximum-likelihood estimates, which is convenient. Most other GLMs lack closed form estimates. 
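The normal special case can be checked in a few lines (the data and coefficient values are illustrative): under the normal distribution with identity link, the maximum-likelihood estimate is the closed-form solution of the normal equations, i.e. ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
beta = np.array([2.0, -1.0])               # illustrative true coefficients
y = X @ beta + 0.5 * rng.normal(size=200)  # normal errors, identity link

# ML estimate under the normal GLM = ordinary least squares, available in
# closed form from the normal equations (X^T X) beta_hat = X^T y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
```

No iterative scheme is needed here, in contrast to most other GLMs.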
=== Binary data === When the response data, Y, are binary (taking on only values 0 and 1), the distribution function is generally chosen to be the Bernoulli distribution and the interpretation of μi is then the probability, p, of Yi taking on the value one. There are several popular link functions for binomial data. ==== Logit link function ==== The most typical link function is the canonical logit link: g ( p ) = logit ⁡ p = ln ⁡ ( p 1 − p ) . {\displaystyle g(p)=\operatorname {logit} p=\ln \left({p \over 1-p}\right).} GLMs with this setup are logistic regression models (or logit models). ==== Probit link function as popular choice of inverse cumulative distribution function ==== Alternatively, the inverse of any continuous cumulative distribution function (CDF) can be used for the link since the CDF's range is [ 0 , 1 ] {\displaystyle [0,1]} , the range of the binomial mean. The normal CDF Φ {\displaystyle \Phi } is a popular choice and yields the probit model. Its link is g ( p ) = Φ − 1 ( p ) . {\displaystyle g(p)=\Phi ^{-1}(p).\,\!} The reason for the use of the probit model is that a constant scaling of the input variable to a normal CDF (which can be absorbed through equivalent scaling of all of the parameters) yields a function that is practically identical to the logit function, but probit models are more tractable in some situations than logit models. (In a Bayesian setting in which normally distributed prior distributions are placed on the parameters, the relationship between the normal priors and the normal CDF link function means that a probit model can be computed using Gibbs sampling, while a logit model generally cannot.) ==== Complementary log-log (cloglog) ==== The complementary log-log function may also be used: g ( p ) = log ⁡ ( − log ⁡ ( 1 − p ) ) . {\displaystyle g(p)=\log(-\log(1-p)).} This link function is asymmetric and will often produce different results from the logit and probit link functions.
The cloglog model corresponds to applications where we observe either zero events (e.g., defects) or one or more, where the number of events is assumed to follow the Poisson distribution. The Poisson assumption means that Pr ( 0 ) = exp ⁡ ( − μ ) , {\displaystyle \Pr(0)=\exp(-\mu ),} where μ is a positive number denoting the expected number of events. If p represents the proportion of observations with at least one event, its complement 1 − p = Pr ( 0 ) = exp ⁡ ( − μ ) , {\displaystyle 1-p=\Pr(0)=\exp(-\mu ),} and then − log ⁡ ( 1 − p ) = μ . {\displaystyle -\log(1-p)=\mu .} A linear model requires the response variable to take values over the entire real line. Since μ must be positive, we can enforce that by taking the logarithm, and letting log(μ) be a linear model. This produces the "cloglog" transformation log ⁡ ( − log ⁡ ( 1 − p ) ) = log ⁡ ( μ ) . {\displaystyle \log(-\log(1-p))=\log(\mu ).} ==== Identity link ==== The identity link g(p) = p is also sometimes used for binomial data to yield a linear probability model. However, the identity link can predict nonsense "probabilities" less than zero or greater than one. This can be avoided by using a transformation like cloglog, probit or logit (or any inverse cumulative distribution function). A primary merit of the identity link is that it can be estimated using ordinary linear algebra; the other standard link functions are approximately linear near p = 0.5, where they closely match the identity link. ==== Variance function ==== The variance function for "quasibinomial" data is: Var ⁡ ( Y i ) = τ μ i ( 1 − μ i ) {\displaystyle \operatorname {Var} (Y_{i})=\tau \mu _{i}(1-\mu _{i})\,\!} where the dispersion parameter τ is exactly 1 for the binomial distribution. Indeed, the standard binomial likelihood omits τ. When it is present, the model is called "quasibinomial", and the modified likelihood is called a quasi-likelihood, since it is not generally the likelihood corresponding to any real family of probability distributions.
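The variance function with τ = 1 is easy to check by simulation, assuming Bernoulli data with a known success probability (the value of p below is illustrative): the sample variance should match μ(1 − μ).

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3
y = (rng.random(200_000) < p).astype(float)  # Bernoulli(p) draws

# Binomial variance function with dispersion tau = 1: Var(Y) = mu (1 - mu).
sample_mean = y.mean()           # estimates mu = 0.3
sample_var = y.var()             # estimates mu (1 - mu) = 0.21
```

A sample variance systematically larger than μ(1 − μ) would instead indicate τ > 1, the overdispersed case discussed next.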
If τ exceeds 1, the model is said to exhibit overdispersion. === Multinomial regression === The binomial case may be easily extended to allow for a multinomial distribution as the response (also, a Generalized Linear Model for counts, with a constrained total). There are two ways in which this is usually done: ==== Ordered response ==== If the response variable is ordinal, then one may fit a model function of the form: g ( μ m ) = η m = β 0 + X 1 β 1 + ⋯ + X p β p + γ 2 + ⋯ + γ m = η 1 + γ 2 + ⋯ + γ m where μ m = P ⁡ ( Y ≤ m ) . {\displaystyle g(\mu _{m})=\eta _{m}=\beta _{0}+X_{1}\beta _{1}+\cdots +X_{p}\beta _{p}+\gamma _{2}+\cdots +\gamma _{m}=\eta _{1}+\gamma _{2}+\cdots +\gamma _{m}{\text{ where }}\mu _{m}=\operatorname {P} (Y\leq m).\,} for m > 2. Different links g lead to ordinal regression models like proportional odds models or ordered probit models. ==== Unordered response ==== If the response variable is a nominal measurement, or the data do not satisfy the assumptions of an ordered model, one may fit a model of the following form: g ( μ m ) = η m = β m , 0 + X 1 β m , 1 + ⋯ + X p β m , p where μ m = P ( Y = m ∣ Y ∈ { 1 , m } ) . {\displaystyle g(\mu _{m})=\eta _{m}=\beta _{m,0}+X_{1}\beta _{m,1}+\cdots +X_{p}\beta _{m,p}{\text{ where }}\mu _{m}=\mathrm {P} (Y=m\mid Y\in \{1,m\}).\,} for m > 2. Different links g lead to multinomial logit or multinomial probit models. These are more general than the ordered response models, and more parameters are estimated. === Count data === Another example of generalized linear models includes Poisson regression which models count data using the Poisson distribution. The link is typically the logarithm, the canonical link. The variance function is proportional to the mean var ⁡ ( Y i ) = τ μ i , {\displaystyle \operatorname {var} (Y_{i})=\tau \mu _{i},\,} where the dispersion parameter τ is typically fixed at exactly one. 
When it is not, the resulting quasi-likelihood model is often described as Poisson with overdispersion or quasi-Poisson. == Extensions == === Correlated or clustered data === The standard GLM assumes that the observations are uncorrelated. Extensions have been developed to allow for correlation between observations, as occurs for example in longitudinal studies and clustered designs: Generalized estimating equations (GEEs) allow for the correlation between observations without the use of an explicit probability model for the origin of the correlations, so there is no explicit likelihood. They are suitable when the random effects and their variances are not of inherent interest, as they allow for the correlation without explaining its origin. The focus is on estimating the average response over the population ("population-averaged" effects) rather than the regression parameters that would enable prediction of the effect of changing one or more components of X on a given individual. GEEs are usually used in conjunction with Huber–White standard errors. Generalized linear mixed models (GLMMs) are an extension to GLMs that includes random effects in the linear predictor, giving an explicit probability model that explains the origin of the correlations. The resulting "subject-specific" parameter estimates are suitable when the focus is on estimating the effect of changing one or more components of X on a given individual. GLMMs are also referred to as multilevel models and as mixed models. In general, fitting GLMMs is more computationally complex and intensive than fitting GEEs.
=== Generalized additive models === Generalized additive models (GAMs) are another extension to GLMs in which the linear predictor η is not restricted to be linear in the covariates X but is the sum of smoothing functions applied to the xis: η = β 0 + f 1 ( x 1 ) + f 2 ( x 2 ) + ⋯ {\displaystyle \eta =\beta _{0}+f_{1}(x_{1})+f_{2}(x_{2})+\cdots \,\!} The smoothing functions fi are estimated from the data. In general this requires a large number of data points and is computationally intensive. == See also == Response modeling methodology Comparison of general and generalized linear models – Statistical linear model Fractional model Generalized linear array model – model used for analyzing data sets with array structures GLIM (software) – statistical software program for fitting generalized linear models Quasi-variance Natural exponential family – class of probability distributions that is a special case of an exponential family Tweedie distribution – Family of probability distributions Variance functions – Smooth function in statistics Vector generalized linear model (VGLM) Generalized estimating equation == References == === Citations === === Bibliography === == Further reading == Dunn, P.K.; Smyth, G.K. (2018). Generalized Linear Models With Examples in R. New York: Springer. doi:10.1007/978-1-4419-0118-7. ISBN 978-1-4419-0118-7. Dobson, A.J.; Barnett, A.G. (2008). Introduction to Generalized Linear Models (3rd ed.). Boca Raton, FL: Chapman and Hall/CRC. ISBN 978-1-58488-165-0. Hardin, James; Hilbe, Joseph (2007). Generalized Linear Models and Extensions (2nd ed.). College Station: Stata Press.
ISBN 978-1-59718-014-6. == External links == Media related to Generalized linear models at Wikimedia Commons
Wikipedia/Generalized_linear_model
The general linear model or general multivariate regression model is a compact way of simultaneously writing several multiple linear regression models. In that sense it is not a separate statistical linear model. The various multiple linear regression models may be compactly written as Y = X B + U , {\displaystyle \mathbf {Y} =\mathbf {X} \mathbf {B} +\mathbf {U} ,} where Y is a matrix with series of multivariate measurements (each column being a set of measurements on one of the dependent variables), X is a matrix of observations on independent variables that might be a design matrix (each column being a set of observations on one of the independent variables), B is a matrix containing parameters that are usually to be estimated and U is a matrix containing errors (noise). The errors are usually assumed to be uncorrelated across measurements, and follow a multivariate normal distribution. If the errors do not follow a multivariate normal distribution, generalized linear models may be used to relax assumptions about Y and U. The general linear model (GLM) encompasses several statistical models, including ANOVA, ANCOVA, MANOVA, MANCOVA, and ordinary linear regression. Within this framework, both the t-test and the F-test can be applied. The general linear model is a generalization of multiple linear regression to the case of more than one dependent variable. If Y, B, and U were column vectors, the matrix equation above would represent multiple linear regression. Hypothesis tests with the general linear model can be made in two ways: multivariate or as several independent univariate tests. In multivariate tests the columns of Y are tested together, whereas in univariate tests the columns of Y are tested independently, i.e., as multiple univariate tests with the same design matrix.
== Comparison to multiple linear regression == Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is Y i = β 0 + β 1 X i 1 + β 2 X i 2 + … + β p X i p + ϵ i {\displaystyle Y_{i}=\beta _{0}+\beta _{1}X_{i1}+\beta _{2}X_{i2}+\ldots +\beta _{p}X_{ip}+\epsilon _{i}} or more compactly Y i = β 0 + ∑ k = 1 p β k X i k + ϵ i {\displaystyle Y_{i}=\beta _{0}+\sum \limits _{k=1}^{p}{\beta _{k}X_{ik}}+\epsilon _{i}} for each observation i = 1, ... , n. In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Yi is the ith observation of the dependent variable, and Xik is the ith observation of the kth independent variable, k = 1, 2, ..., p. The values βk represent parameters to be estimated, and εi is the ith independent, identically distributed normal error. In the more general multivariate linear regression, there is one equation of the above form for each of m > 1 dependent variables that share the same set of explanatory variables and hence are estimated simultaneously with each other: Y i j = β 0 j + β 1 j X i 1 + β 2 j X i 2 + … + β p j X i p + ϵ i j {\displaystyle Y_{ij}=\beta _{0j}+\beta _{1j}X_{i1}+\beta _{2j}X_{i2}+\ldots +\beta _{pj}X_{ip}+\epsilon _{ij}} or more compactly Y i j = β 0 j + ∑ k = 1 p β k j X i k + ϵ i j {\displaystyle Y_{ij}=\beta _{0j}+\sum \limits _{k=1}^{p}{\beta _{kj}X_{ik}}+\epsilon _{ij}} for all observations indexed as i = 1, ... , n and for all dependent variables indexed as j = 1, ... , m. Note that, since each dependent variable has its own set of regression parameters to be fitted, from a computational point of view the general multivariate regression is simply a sequence of standard multiple linear regressions using the same explanatory variables.
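The closing observation above, that general multivariate regression amounts to a sequence of standard multiple linear regressions sharing one design matrix, can be checked numerically. The following sketch (a hypothetical NumPy illustration with invented simulated data) estimates B once for the whole response matrix Y and once column-by-column:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, m = 50, 3, 2                       # observations, predictors, responses
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # design with intercept
B_true = rng.normal(size=(p + 1, m))
Y = X @ B_true + 0.1 * rng.normal(size=(n, m))              # Y = XB + U

# One multivariate least-squares fit for the whole matrix Y
B_multi = np.linalg.lstsq(X, Y, rcond=None)[0]

# m separate univariate fits, one per column of Y, same design matrix X
B_cols = np.column_stack(
    [np.linalg.lstsq(X, Y[:, j], rcond=None)[0] for j in range(m)]
)

print(np.allclose(B_multi, B_cols))      # the two estimates coincide
```

The equality holds because the least-squares objective separates across the columns of Y when the errors are treated column-by-column.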
== Comparison to generalized linear model == The general linear model and the generalized linear model (GLM) are two commonly used families of statistical methods to relate some number of continuous and/or categorical predictors to a single outcome variable. The main difference between the two approaches is that the general linear model strictly assumes that the residuals will follow a conditionally normal distribution, while the GLM loosens this assumption and allows for a variety of other distributions from the exponential family for the residuals. The general linear model is a special case of the GLM in which the distribution of the residuals follows a conditionally normal distribution. The distribution of the residuals largely depends on the type and distribution of the outcome variable; different types of outcome variables lead to the variety of models within the GLM family. Commonly used models in the GLM family include binary logistic regression for binary or dichotomous outcomes, Poisson regression for count outcomes, and linear regression for continuous, normally distributed outcomes. This means that GLM may be spoken of as a general family of statistical models or as specific models for specific outcome types. == Applications == An application of the general linear model appears in the analysis of multiple brain scans in scientific experiments where Y contains data from brain scanners, X contains experimental design variables and confounds. It is usually tested in a univariate way (usually referred to as mass-univariate in this setting) and is often referred to as statistical parametric mapping. == See also == Bayesian multivariate linear regression F-test t-test == Notes == == References ==
Wikipedia/General_linear_model
In mathematical optimization, the revised simplex method is a variant of George Dantzig's simplex method for linear programming. The revised simplex method is mathematically equivalent to the standard simplex method but differs in implementation. Instead of maintaining a tableau which explicitly represents the constraints adjusted to a set of basic variables, it maintains a representation of a basis of the matrix representing the constraints. The matrix-oriented approach allows for greater computational efficiency by enabling sparse matrix operations. == Problem formulation == For the rest of the discussion, it is assumed that a linear programming problem has been converted into the following standard form: minimize c T x subject to A x = b , x ≥ 0 {\displaystyle {\begin{array}{rl}{\text{minimize}}&{\boldsymbol {c}}^{\mathrm {T} }{\boldsymbol {x}}\\{\text{subject to}}&{\boldsymbol {Ax}}={\boldsymbol {b}},{\boldsymbol {x}}\geq {\boldsymbol {0}}\end{array}}} where A ∈ ℝm×n. Without loss of generality, it is assumed that the constraint matrix A has full row rank and that the problem is feasible, i.e., there is at least one x ≥ 0 such that Ax = b. If A is rank-deficient, either there are redundant constraints, or the problem is infeasible. Both situations can be handled by a presolve step. == Algorithmic description == === Optimality conditions === For linear programming, the Karush–Kuhn–Tucker conditions are both necessary and sufficient for optimality. 
The KKT conditions of a linear programming problem in the standard form are A x = b , A T λ + s = c , x ≥ 0 , s ≥ 0 , s T x = 0 {\displaystyle {\begin{aligned}{\boldsymbol {Ax}}&={\boldsymbol {b}},\\{\boldsymbol {A}}^{\mathrm {T} }{\boldsymbol {\lambda }}+{\boldsymbol {s}}&={\boldsymbol {c}},\\{\boldsymbol {x}}&\geq {\boldsymbol {0}},\\{\boldsymbol {s}}&\geq {\boldsymbol {0}},\\{\boldsymbol {s}}^{\mathrm {T} }{\boldsymbol {x}}&=0\end{aligned}}} where λ and s are the Lagrange multipliers associated with the constraints Ax = b and x ≥ 0, respectively. The last condition, which is equivalent to sixi = 0 for all 1 ≤ i ≤ n, is called the complementary slackness condition. By what is sometimes known as the fundamental theorem of linear programming, a vertex x of the feasible polytope can be identified with a basis B of A chosen from the latter's columns. Since A has full rank, B is nonsingular. Without loss of generality, assume that A = [B N]. Then x is given by x = [ x B x N ] = [ B − 1 b 0 ] {\displaystyle {\boldsymbol {x}}={\begin{bmatrix}{\boldsymbol {x_{B}}}\\{\boldsymbol {x_{N}}}\end{bmatrix}}={\begin{bmatrix}{\boldsymbol {B}}^{-1}{\boldsymbol {b}}\\{\boldsymbol {0}}\end{bmatrix}}} where xB ≥ 0. Partition c and s accordingly into c = [ c B c N ] , s = [ s B s N ] . {\displaystyle {\begin{aligned}{\boldsymbol {c}}&={\begin{bmatrix}{\boldsymbol {c_{B}}}\\{\boldsymbol {c_{N}}}\end{bmatrix}},\\{\boldsymbol {s}}&={\begin{bmatrix}{\boldsymbol {s_{B}}}\\{\boldsymbol {s_{N}}}\end{bmatrix}}.\end{aligned}}} To satisfy the complementary slackness condition, let sB = 0. It follows that B T λ = c B , N T λ + s N = c N , {\displaystyle {\begin{aligned}{\boldsymbol {B}}^{\mathrm {T} }{\boldsymbol {\lambda }}&={\boldsymbol {c_{B}}},\\{\boldsymbol {N}}^{\mathrm {T} }{\boldsymbol {\lambda }}+{\boldsymbol {s_{N}}}&={\boldsymbol {c_{N}}},\end{aligned}}} which implies that λ = ( B T ) − 1 c B , s N = c N − N T λ .
{\displaystyle {\begin{aligned}{\boldsymbol {\lambda }}&=({\boldsymbol {B}}^{\mathrm {T} })^{-1}{\boldsymbol {c_{B}}},\\{\boldsymbol {s_{N}}}&={\boldsymbol {c_{N}}}-{\boldsymbol {N}}^{\mathrm {T} }{\boldsymbol {\lambda }}.\end{aligned}}} If sN ≥ 0 at this point, the KKT conditions are satisfied, and thus x is optimal. === Pivot operation === If the KKT conditions are violated, a pivot operation consisting of introducing a column of N into the basis at the expense of an existing column in B is performed. In the absence of degeneracy, a pivot operation always results in a strict decrease in cTx. Therefore, if the problem is bounded, the revised simplex method must terminate at an optimal vertex after repeated pivot operations because there are only a finite number of vertices. Select an index m < q ≤ n such that sq < 0 as the entering index. The corresponding column of A, Aq, will be moved into the basis, and xq will be allowed to increase from zero. It can be shown that ∂ ( c T x ) ∂ x q = s q , {\displaystyle {\frac {\partial ({\boldsymbol {c}}^{\mathrm {T} }{\boldsymbol {x}})}{\partial x_{q}}}=s_{q},} i.e., every unit increase in xq results in a decrease by −sq in cTx. Since B x B + A q x q = b , {\displaystyle {\boldsymbol {Bx_{B}}}+{\boldsymbol {A}}_{q}x_{q}={\boldsymbol {b}},} xB must be correspondingly decreased by ΔxB = B−1Aqxq subject to xB − ΔxB ≥ 0. Let d = B−1Aq. If d ≤ 0, no matter how much xq is increased, xB − ΔxB will stay nonnegative. Hence, cTx can be arbitrarily decreased, and thus the problem is unbounded. Otherwise, select an index p = argmin1≤i≤m {xi/di | di > 0} as the leaving index. This choice effectively increases xq from zero until xp is reduced to zero while maintaining feasibility. The pivot operation concludes with replacing Ap with Aq in the basis. == Numerical example == Consider a linear program where c = [ − 2 − 3 − 4 0 0 ] T , A = [ 3 2 1 1 0 2 5 3 0 1 ] , b = [ 10 15 ] . 
{\displaystyle {\begin{aligned}{\boldsymbol {c}}&={\begin{bmatrix}-2&-3&-4&0&0\end{bmatrix}}^{\mathrm {T} },\\{\boldsymbol {A}}&={\begin{bmatrix}3&2&1&1&0\\2&5&3&0&1\end{bmatrix}},\\{\boldsymbol {b}}&={\begin{bmatrix}10\\15\end{bmatrix}}.\end{aligned}}} Let B = [ A 4 A 5 ] , N = [ A 1 A 2 A 3 ] {\displaystyle {\begin{aligned}{\boldsymbol {B}}&={\begin{bmatrix}{\boldsymbol {A}}_{4}&{\boldsymbol {A}}_{5}\end{bmatrix}},\\{\boldsymbol {N}}&={\begin{bmatrix}{\boldsymbol {A}}_{1}&{\boldsymbol {A}}_{2}&{\boldsymbol {A}}_{3}\end{bmatrix}}\end{aligned}}} initially, which corresponds to a feasible vertex x = [0 0 0 10 15]T. At this moment, λ = [ 0 0 ] T , s N = [ − 2 − 3 − 4 ] T . {\displaystyle {\begin{aligned}{\boldsymbol {\lambda }}&={\begin{bmatrix}0&0\end{bmatrix}}^{\mathrm {T} },\\{\boldsymbol {s_{N}}}&={\begin{bmatrix}-2&-3&-4\end{bmatrix}}^{\mathrm {T} }.\end{aligned}}} Choose q = 3 as the entering index. Then d = [1 3]T, which means a unit increase in x3 results in x4 and x5 being decreased by 1 and 3, respectively. Therefore, x3 is increased to 5, at which point x5 is reduced to zero, and p = 5 becomes the leaving index. After the pivot operation, B = [ A 3 A 4 ] , N = [ A 1 A 2 A 5 ] . {\displaystyle {\begin{aligned}{\boldsymbol {B}}&={\begin{bmatrix}{\boldsymbol {A}}_{3}&{\boldsymbol {A}}_{4}\end{bmatrix}},\\{\boldsymbol {N}}&={\begin{bmatrix}{\boldsymbol {A}}_{1}&{\boldsymbol {A}}_{2}&{\boldsymbol {A}}_{5}\end{bmatrix}}.\end{aligned}}} Correspondingly, x = [ 0 0 5 5 0 ] T , λ = [ 0 − 4 / 3 ] T , s N = [ 2 / 3 11 / 3 4 / 3 ] T . {\displaystyle {\begin{aligned}{\boldsymbol {x}}&={\begin{bmatrix}0&0&5&5&0\end{bmatrix}}^{\mathrm {T} },\\{\boldsymbol {\lambda }}&={\begin{bmatrix}0&-4/3\end{bmatrix}}^{\mathrm {T} },\\{\boldsymbol {s_{N}}}&={\begin{bmatrix}2/3&11/3&4/3\end{bmatrix}}^{\mathrm {T} }.\end{aligned}}} A positive sN indicates that x is now optimal. 
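The first pivot of this worked example can be reproduced with a few lines of linear algebra. The sketch below (a NumPy illustration of one iteration, not a full simplex implementation) computes λ, the reduced costs sN, the direction d, and the leaving index for the initial basis {A4, A5}:

```python
import numpy as np

c = np.array([-2., -3., -4., 0., 0.])
A = np.array([[3., 2., 1., 1., 0.],
              [2., 5., 3., 0., 1.]])
b = np.array([10., 15.])

basis = [3, 4]                                # columns A4, A5 (0-indexed)
B = A[:, basis]
lam = np.linalg.solve(B.T, c[basis])          # solve B' lambda = c_B
nonbasis = [j for j in range(5) if j not in basis]
s_N = c[nonbasis] - A[:, nonbasis].T @ lam    # reduced costs: [-2, -3, -4]

q = 2                                         # entering column A3 (s_q = -4 < 0)
d = np.linalg.solve(B, A[:, q])               # d = B^{-1} A_q = [1, 3]
x_B = np.linalg.solve(B, b)                   # current basic solution [10, 15]
ratios = np.where(d > 0, x_B / np.where(d > 0, d, 1.0), np.inf)
p_leave = basis[int(np.argmin(ratios))]       # min ratio 15/3 = 5 -> A5 leaves
print(s_N, p_leave)
```

The ratio test picks min{10/1, 15/3} = 5, so x3 rises to 5, x5 drops to zero, and A5 leaves the basis, matching the example.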
== Practical issues == === Degeneracy === Because the revised simplex method is mathematically equivalent to the simplex method, it also suffers from degeneracy, where a pivot operation does not result in a decrease in cTx, and a chain of pivot operations causes the basis to cycle. A perturbation or lexicographic strategy can be used to prevent cycling and guarantee termination. === Basis representation === Two types of linear systems involving B are present in the revised simplex method: B z = y , B T z = y . {\displaystyle {\begin{aligned}{\boldsymbol {Bz}}&={\boldsymbol {y}},\\{\boldsymbol {B}}^{\mathrm {T} }{\boldsymbol {z}}&={\boldsymbol {y}}.\end{aligned}}} Instead of refactorizing B, usually an LU factorization is directly updated after each pivot operation, for which purpose there exist several strategies such as the Forrest−Tomlin and Bartels−Golub methods. However, the amount of data representing the updates as well as numerical errors builds up over time and makes periodic refactorization necessary. == Notes and references == === Notes === === References === === Bibliography ===
Wikipedia/Revised_simplex_method
The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph. It is slower than Dijkstra's algorithm for the same problem, but more versatile, as it is capable of handling graphs in which some of the edge weights are negative numbers. The algorithm was first proposed by Alfonso Shimbel (1955), but is instead named after Richard Bellman and Lester Ford Jr., who published it in 1958 and 1956, respectively. Edward F. Moore also published a variation of the algorithm in 1959, and for this reason it is also sometimes called the Bellman–Ford–Moore algorithm. Negative edge weights are found in various applications of graphs, which is why this algorithm is useful. If a graph contains a "negative cycle" (i.e. a cycle whose edges sum to a negative value) that is reachable from the source, then there is no cheapest path: any path that has a point on the negative cycle can be made cheaper by one more walk around the negative cycle. In such a case, the Bellman–Ford algorithm can detect and report the negative cycle. == Algorithm == Like Dijkstra's algorithm, Bellman–Ford proceeds by relaxation, in which approximations to the correct distance are replaced by better ones until they eventually reach the solution. In both algorithms, the approximate distance to each vertex is always an overestimate of the true distance, and is replaced by the minimum of its old value and the length of a newly found path. However, Dijkstra's algorithm uses a priority queue to greedily select the closest vertex that has not yet been processed, and performs this relaxation process on all of its outgoing edges; by contrast, the Bellman–Ford algorithm simply relaxes all the edges, and does this | V | − 1 {\displaystyle |V|-1} times, where | V | {\displaystyle |V|} is the number of vertices in the graph.
In each of these repetitions, the number of vertices with correctly calculated distances grows, from which it follows that eventually all vertices will have their correct distances. This method allows the Bellman–Ford algorithm to be applied to a wider class of inputs than Dijkstra's algorithm. The intermediate answers depend on the order of edges relaxed, but the final answer remains the same. Bellman–Ford runs in O ( | V | ⋅ | E | ) {\displaystyle O(|V|\cdot |E|)} time, where | V | {\displaystyle |V|} and | E | {\displaystyle |E|} are the number of vertices and edges respectively.

function BellmanFord(list vertices, list edges, vertex source) is
    // This implementation takes in a graph, represented as
    // lists of vertices (represented as integers [0..n-1]) and
    // edges, and fills two arrays (distance and predecessor)
    // holding the shortest path from the source to each vertex

    distance := list of size n
    predecessor := list of size n

    // Step 1: initialize graph
    for each vertex v in vertices do
        distance[v] := inf         // Initialize the distance to all vertices to infinity
        predecessor[v] := null     // And having a null predecessor
    distance[source] := 0          // The distance from the source to itself is zero

    // Step 2: relax edges repeatedly
    repeat |V|−1 times:
        for each edge (u, v) with weight w in edges do
            if distance[u] + w < distance[v] then
                distance[v] := distance[u] + w
                predecessor[v] := u

    // Step 3: check for negative-weight cycles
    for each edge (u, v) with weight w in edges do
        if distance[u] + w < distance[v] then
            predecessor[v] := u
            // A negative cycle exists; find a vertex on the cycle
            visited := list of size n initialized with false
            visited[v] := true
            while not visited[u] do
                visited[u] := true
                u := predecessor[u]
            // u is a vertex in a negative cycle, find the cycle itself
            ncycle := [u]
            v := predecessor[u]
            while v != u do
                ncycle := concatenate([v], ncycle)
                v := predecessor[v]
            error "Graph contains a negative-weight cycle", ncycle

    return distance, predecessor

Simply put, the algorithm initializes the distance to the source to 0 and all other nodes to infinity. Then for all edges, if the distance to the destination can be shortened by taking the edge, the distance is updated to the new lower value. The core of the algorithm is a loop that scans across all edges on every iteration. For every i ≤ | V | − 1 {\displaystyle i\leq |V|-1} , at the end of the i {\displaystyle i} -th iteration, from any vertex v, following the predecessor trail recorded in predecessor yields a path from the source to v whose total weight is at most distance[v]; further, distance[v] is at most the length of any path from source to v that uses at most i edges. Since the longest possible path without a cycle can have | V | − 1 {\displaystyle |V|-1} edges, the edges must be scanned | V | − 1 {\displaystyle |V|-1} times to ensure the shortest path has been found for all nodes. A final scan of all the edges is performed and if any distance is updated, then a path of length | V | {\displaystyle |V|} edges has been found, which can only occur if at least one negative cycle exists in the graph. The edge (u, v) that is found in step 3 must be reachable from a negative cycle, but it isn't necessarily part of the cycle itself, which is why it's necessary to follow the path of predecessors backwards until a cycle is detected. The above pseudo-code uses a Boolean array (visited) to find a vertex on the cycle, but any cycle finding algorithm can be used to find a vertex on the cycle. A common improvement when implementing the algorithm is to return early when an iteration of step 2 fails to relax any edges, which implies all shortest paths have been found, and therefore there are no negative cycles. In that case, the complexity of the algorithm is reduced from O ( | V | ⋅ | E | ) {\displaystyle O(|V|\cdot |E|)} to O ( l ⋅ | E | ) {\displaystyle O(l\cdot |E|)} where l {\displaystyle l} is the maximum length of a shortest path in the graph.
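Rendered in Python, the pseudocode above might look as follows (a minimal sketch; the small graph used at the end is an invented example):

```python
import math

def bellman_ford(n, edges, source):
    """n vertices 0..n-1; edges is a list of (u, v, w) triples. Returns
    (distance, predecessor), or raises ValueError naming a negative cycle."""
    dist = [math.inf] * n
    pred = [None] * n
    dist[source] = 0.0

    # Step 2: relax all edges |V|-1 times
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u

    # Step 3: one more scan detects negative cycles
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            pred[v] = u
            visited = [False] * n          # walk predecessors until one repeats
            visited[v] = True
            while not visited[u]:
                visited[u] = True
                u = pred[u]
            cycle = [u]                    # u lies on a negative cycle
            x = pred[u]
            while x != u:
                cycle.insert(0, x)
                x = pred[x]
            raise ValueError(f"negative-weight cycle: {cycle}")
    return dist, pred

edges = [(0, 1, 4), (0, 2, 2), (2, 1, -1), (1, 3, 3)]
dist, pred = bellman_ford(4, edges, 0)
print(dist)        # [0.0, 1.0, 2.0, 4.0]
```

On this graph the shorter route to vertex 1 goes through the weight −1 edge, which Dijkstra's greedy selection could miss but Bellman–Ford handles.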
== Proof of correctness == The correctness of the algorithm can be shown by induction: Lemma. After i repetitions of for loop, if Distance(u) is not infinity, it is equal to the length of some path from s to u; and if there is a path from s to u with at most i edges, then Distance(u) is at most the length of the shortest path from s to u with at most i edges. Proof. For the base case of induction, consider i=0 and the moment before for loop is executed for the first time. Then, for the source vertex, source.distance = 0, which is correct. For other vertices u, u.distance = infinity, which is also correct because there is no path from source to u with 0 edges. For the inductive case, we first prove the first part. Consider a moment when a vertex's distance is updated by v.distance := u.distance + uv.weight. By inductive assumption, u.distance is the length of some path from source to u. Then u.distance + uv.weight is the length of the path from source to v that follows the path from source to u and then goes to v. For the second part, consider a shortest path P (there may be more than one) from source to v with at most i edges. Let u be the last vertex before v on this path. Then, the part of the path from source to u is a shortest path from source to u with at most i-1 edges, since if it were not, then there must be some strictly shorter path from source to u with at most i-1 edges, and we could then append the edge uv to this path to obtain a path with at most i edges that is strictly shorter than P—a contradiction. By inductive assumption, u.distance after i−1 iterations is at most the length of this path from source to u. Therefore, uv.weight + u.distance is at most the length of P. In the ith iteration, v.distance gets compared with uv.weight + u.distance, and is set equal to it if uv.weight + u.distance is smaller. 
Therefore, after i iterations, v.distance is at most the length of P, i.e., the length of the shortest path from source to v that uses at most i edges. If there are no negative-weight cycles, then every shortest path visits each vertex at most once, so at step 3 no further improvements can be made. Conversely, suppose no improvement can be made. Then for any cycle with vertices v[0], ..., v[k−1], v[i].distance <= v[i-1 (mod k)].distance + v[i-1 (mod k)]v[i].weight Summing around the cycle, the v[i].distance and v[i−1 (mod k)].distance terms cancel, leaving 0 <= sum from 1 to k of v[i-1 (mod k)]v[i].weight I.e., every cycle has nonnegative weight. == Finding negative cycles == When the algorithm is used to find shortest paths, the existence of negative cycles is a problem, preventing the algorithm from finding a correct answer. However, since it terminates upon finding a negative cycle, the Bellman–Ford algorithm can be used for applications in which this is the target to be sought – for example in cycle-cancelling techniques in network flow analysis. == Applications in routing == A distributed variant of the Bellman–Ford algorithm is used in distance-vector routing protocols, for example the Routing Information Protocol (RIP). The algorithm is distributed because it involves a number of nodes (routers) within an Autonomous system (AS), a collection of IP networks typically owned by an ISP. It consists of the following steps: Each node calculates the distances between itself and all other nodes within the AS and stores this information as a table. Each node sends its table to all neighboring nodes. When a node receives distance tables from its neighbors, it calculates the shortest routes to all other nodes and updates its own table to reflect any changes. The main disadvantages of the Bellman–Ford algorithm in this setting are as follows: It does not scale well. Changes in network topology are not reflected quickly since updates are spread node-by-node. 
Count to infinity: if link or node failures render a node unreachable from some set of other nodes, those nodes may spend forever gradually increasing their estimates of the distance to it, and in the meantime there may be routing loops. == Improvements == The Bellman–Ford algorithm may be improved in practice (although not in the worst case) by the observation that, if an iteration of the main loop of the algorithm terminates without making any changes, the algorithm can be immediately terminated, as subsequent iterations will not make any more changes. With this early termination condition, the main loop may in some cases use many fewer than |V| − 1 iterations, even though the worst case of the algorithm remains unchanged. The following improvements all maintain the O ( | V | ⋅ | E | ) {\displaystyle O(|V|\cdot |E|)} worst-case time complexity. A variation of the Bellman–Ford algorithm described by Moore (1959) reduces the number of relaxation steps that need to be performed within each iteration of the algorithm. If a vertex v has a distance value that has not changed since the last time the edges out of v were relaxed, then there is no need to relax the edges out of v a second time. In this way, as the number of vertices with correct distance values grows, the number whose outgoing edges need to be relaxed in each iteration shrinks, leading to a constant-factor savings in time for dense graphs. This variation can be implemented by keeping a collection of vertices whose outgoing edges need to be relaxed, removing a vertex from this collection when its edges are relaxed, and adding to the collection any vertex whose distance value is changed by a relaxation step. In China, this algorithm was popularized by Fanding Duan, who rediscovered it in 1994, as the "shortest path faster algorithm". Yen (1970) described another improvement to the Bellman–Ford algorithm.
His improvement first assigns some arbitrary linear order on all vertices and then partitions the set of all edges into two subsets. The first subset, Ef, contains all edges (vi, vj) such that i < j; the second, Eb, contains edges (vi, vj) such that i > j. Each vertex is visited in the order v1, v2, ..., v|V|, relaxing each outgoing edge from that vertex in Ef. Each vertex is then visited in the order v|V|, v|V|−1, ..., v1, relaxing each outgoing edge from that vertex in Eb. Each iteration of the main loop of the algorithm, after the first one, adds at least two edges to the set of edges whose relaxed distances match the correct shortest path distances: one from Ef and one from Eb. This modification reduces the worst-case number of iterations of the main loop of the algorithm from |V| − 1 to | V | / 2 {\displaystyle |V|/2} . Another improvement, by Bannister & Eppstein (2012), replaces the arbitrary linear order of the vertices used in Yen's second improvement by a random permutation. This change makes the worst case for Yen's improvement (in which the edges of a shortest path strictly alternate between the two subsets Ef and Eb) very unlikely to happen. With a randomly permuted vertex ordering, the expected number of iterations needed in the main loop is at most | V | / 3 {\displaystyle |V|/3} . Fineman (2024), at Georgetown University, created an improved algorithm that with high probability runs in O ~ ( | V | 8 9 ⋅ | E | ) {\displaystyle {\tilde {O}}(|V|^{\frac {8}{9}}\cdot |E|)} time. Here, the O ~ {\displaystyle {\tilde {O}}} is a variant of big O notation that hides logarithmic factors. == Notes == == References == === Original sources === Shimbel, A. (1955). Structure in communication nets. Proceedings of the Symposium on Information Networks. New York, New York: Polytechnic Press of the Polytechnic Institute of Brooklyn. pp. 199–203. Bellman, Richard (1958). "On a routing problem". Quarterly of Applied Mathematics. 16: 87–90. doi:10.1090/qam/102435. 
MR 0102435. Ford, Lester R. Jr. (August 14, 1956). Network Flow Theory. Paper P-923. Santa Monica, California: RAND Corporation. Moore, Edward F. (1959). The shortest path through a maze. Proc. Internat. Sympos. Switching Theory 1957, Part II. Cambridge, Massachusetts: Harvard Univ. Press. pp. 285–292. MR 0114710. Yen, Jin Y. (1970). "An algorithm for finding shortest routes from all source nodes to a given destination in general networks". Quarterly of Applied Mathematics. 27 (4): 526–530. doi:10.1090/qam/253822. MR 0253822. Bannister, M. J.; Eppstein, D. (2012). "Randomized speedup of the Bellman–Ford algorithm". Analytic Algorithmics and Combinatorics (ANALCO12), Kyoto, Japan. pp. 41–47. arXiv:1111.5414. doi:10.1137/1.9781611973020.6. Fineman, Jeremy T. (2024). "Single-source shortest paths with negative real weights in O ~ ( m n 8 / 9 ) {\displaystyle {\tilde {O}}(mn^{8/9})} time". In Mohar, Bojan; Shinkar, Igor; O'Donnell, Ryan (eds.). Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, Vancouver, BC, Canada, June 24–28, 2024. Association for Computing Machinery. pp. 3–14. arXiv:2311.02520. doi:10.1145/3618260.3649614. === Secondary sources === Ford, L. R. Jr.; Fulkerson, D. R. (1962). "A shortest chain algorithm". Flows in Networks. Princeton University Press. pp. 130–134. Bang-Jensen, Jørgen; Gutin, Gregory (2000). "Section 2.3.4: The Bellman-Ford-Moore algorithm". Digraphs: Theory, Algorithms and Applications (First ed.). Springer. ISBN 978-1-84800-997-4. Schrijver, Alexander (2005). "On the history of combinatorial optimization (till 1960)" (PDF). Handbook of Discrete Optimization. Elsevier: 1–68. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. Introduction to Algorithms. MIT Press and McGraw-Hill., Fourth Edition. MIT Press, 2022. ISBN 978-0-262-04630-5. Section 22.1: The Bellman–Ford algorithm, pp. 612–616. Problem 22–1, p. 640. Heineman, George T.; Pollice, Gary; Selkow, Stanley (2008). 
"Chapter 6: Graph Algorithms". Algorithms in a Nutshell. O'Reilly Media. pp. 160–164. ISBN 978-0-596-51624-6. Kleinberg, Jon; Tardos, Éva (2006). Algorithm Design. New York: Pearson Education, Inc. Sedgewick, Robert (2002). "Section 21.7: Negative Edge Weights". Algorithms in Java (3rd ed.). Addison-Wesley. ISBN 0-201-36121-3. Archived from the original on 2008-05-31. Retrieved 2007-05-28.
Wikipedia/Shortest_Path_Faster_Algorithm
The Berndt–Hall–Hall–Hausman (BHHH) algorithm is a numerical optimization algorithm similar to the Newton–Raphson algorithm, but it replaces the observed negative Hessian matrix with the outer product of the gradient. This approximation is based on the information matrix equality and therefore only valid while maximizing a likelihood function. The BHHH algorithm is named after the four originators: Ernst R. Berndt, Bronwyn Hall, Robert Hall, and Jerry Hausman. == Usage == If a nonlinear model is fitted to the data, one often needs to estimate coefficients through optimization. A number of optimization algorithms have the following general structure. Suppose that the function to be optimized is Q(β). Then the algorithms are iterative, defining a sequence of approximations, βk given by β k + 1 = β k − λ k A k ∂ Q ∂ β ( β k ) , {\displaystyle \beta _{k+1}=\beta _{k}-\lambda _{k}A_{k}{\frac {\partial Q}{\partial \beta }}(\beta _{k}),} where β k {\displaystyle \beta _{k}} is the parameter estimate at step k, and λ k {\displaystyle \lambda _{k}} is a parameter (called step size) which partly determines the particular algorithm. For the BHHH algorithm λk is determined by calculations within a given iterative step, involving a line-search until a point βk+1 is found satisfying certain criteria. In addition, for the BHHH algorithm, Q has the form Q = ∑ i = 1 N Q i {\displaystyle Q=\sum _{i=1}^{N}Q_{i}} and A is calculated using A k = [ ∑ i = 1 N ∂ ln ⁡ Q i ∂ β ( β k ) ∂ ln ⁡ Q i ∂ β ( β k ) ′ ] − 1 . {\displaystyle A_{k}=\left[\sum _{i=1}^{N}{\frac {\partial \ln Q_{i}}{\partial \beta }}(\beta _{k}){\frac {\partial \ln Q_{i}}{\partial \beta }}(\beta _{k})'\right]^{-1}.} In other cases, e.g. Newton–Raphson, A k {\displaystyle A_{k}} can have other forms. The BHHH algorithm has the advantage that, if certain conditions apply, convergence of the iterative procedure is guaranteed.
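As a concrete illustration, the sketch below applies the BHHH update to a logistic-regression log-likelihood, where each Qi is the log-likelihood contribution of observation i and Ak is the inverse of the summed outer products of the per-observation scores. The simulated data, the fixed step size λk = 1, and the stopping rule are all invented for the example; a production implementation would add the line search described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, -1.0])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

beta = np.zeros(2)
for _ in range(50):
    p = 1 / (1 + np.exp(-X @ beta))
    scores = (y - p)[:, None] * X          # per-observation score vectors
    g = scores.sum(axis=0)                 # gradient of the log-likelihood
    A = np.linalg.inv(scores.T @ scores)   # OPG approximation to the inverse Hessian
    step = A @ g
    beta = beta + step                     # lambda_k = 1, no line search
    if np.linalg.norm(step) < 1e-8:
        break

print(np.round(beta, 2))                   # close to beta_true
```

Because the outer product of the scores is always positive semi-definite, the step is an ascent direction for the likelihood without ever computing second derivatives.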
== See also == Davidon–Fletcher–Powell (DFP) algorithm Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm == References ==
Wikipedia/Berndt–Hall–Hall–Hausman_algorithm
In mathematical optimization and computer science, a heuristic (from Greek εὑρίσκω "I find, discover") is a technique designed to solve a problem more quickly when classic methods are too slow for finding an exact or approximate solution, or when classic methods fail to find any exact solution in a search space. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut. A heuristic function, also simply called a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution. == Definition and motivation == The objective of a heuristic is to produce a solution in a reasonable time frame that is good enough for solving the problem at hand. This solution may not be the best of all the solutions to this problem, or it may simply approximate the exact solution. But it is still valuable because finding it does not require a prohibitively long time. Heuristics may produce results by themselves, or they may be used in conjunction with optimization algorithms to improve their efficiency (e.g., they may be used to generate good seed values). Results about NP-hardness in theoretical computer science make heuristics the only viable option for a variety of complex optimization problems that need to be routinely solved in real-world applications. Heuristics underlie the whole field of artificial intelligence and the computer simulation of thinking, as they may be used in situations where there are no known algorithms. == Examples == === Simpler problem === One way of achieving the computational performance gain expected of a heuristic consists of solving a simpler problem whose solution is also a solution to the initial problem.
=== Travelling salesman problem === An example of approximation is described by Jon Bentley for solving the travelling salesman problem (TSP): "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" so as to select the order to draw using a pen plotter. TSP is known to be NP-hard, so even a moderate-size problem is difficult to solve optimally. Instead, the greedy algorithm can be used to give a good but not optimal solution (it is an approximation to the optimal answer) in a reasonably short amount of time. The greedy algorithm heuristic says to pick whatever is currently the best next step regardless of whether that prevents (or even makes impossible) good steps later. It is a heuristic in the sense that practice indicates it is a good enough solution, while theory indicates that there are better solutions (and even indicates how much better, in some cases). === Search === Another example of a heuristic making an algorithm faster occurs in certain search problems. Initially, the heuristic tries every possibility at each step, like the full-space search algorithm. But it can stop the search at any time if the current possibility is already worse than the best solution already found. In such search problems, a heuristic can be used to try good choices first so that bad paths can be eliminated early (see alpha–beta pruning). In the case of best-first search algorithms, such as A* search, the heuristic improves the algorithm's convergence while maintaining its correctness as long as the heuristic is admissible. === Newell and Simon: heuristic search hypothesis === In their Turing Award acceptance speech, Allen Newell and Herbert A. Simon discuss the heuristic search hypothesis: a physical symbol system will repeatedly generate and modify known symbol structures until the created structure matches the solution structure.
Each following step depends upon the step before it, thus the heuristic search learns what avenues to pursue and which ones to disregard by measuring how close the current step is to the solution. Therefore, some possibilities will never be generated as they are measured to be less likely to complete the solution. A heuristic method can accomplish its task by using search trees. However, instead of generating all possible solution branches, a heuristic selects branches more likely to produce outcomes than other branches. It is selective at each decision point, picking branches that are more likely to produce solutions. === Antivirus software === Antivirus software often uses heuristic rules for detecting viruses and other forms of malware. Heuristic scanning looks for code and/or behavioral patterns common to a class or family of viruses, with different sets of rules for different viruses. If a file or executing process is found to contain matching code patterns and/or to be performing that set of activities, then the scanner infers that the file is infected. The most advanced part of behavior-based heuristic scanning is that it can work against highly randomized self-modifying/mutating (polymorphic) viruses that cannot be easily detected by simpler string scanning methods. Heuristic scanning has the potential to detect future viruses without requiring the virus to be first detected somewhere else, submitted to the virus scanner developer, analyzed, and a detection update for the scanner provided to the scanner's users. == Pitfalls == Some heuristics have a strong underlying theory; they are either derived in a top-down manner from the theory or are arrived at based on either experimental or real world data. Others are just rules of thumb based on real-world observation or experience without even a glimpse of theory. The latter are exposed to a larger number of pitfalls. 
When a heuristic is reused in various contexts because it has been seen to "work" in one context, without having been mathematically proven to meet a given set of requirements, it is possible that the current data set does not necessarily represent future data sets (see: overfitting) and that purported "solutions" turn out to be akin to noise. Statistical analysis can be conducted when employing heuristics to estimate the probability of incorrect outcomes. To use a heuristic for solving a search problem or a knapsack problem, it is necessary to check that the heuristic is admissible. Given a heuristic function h ( v i , v g ) {\displaystyle h(v_{i},v_{g})} meant to approximate the true optimal distance d ⋆ ( v i , v g ) {\displaystyle d^{\star }(v_{i},v_{g})} to the goal node v g {\displaystyle v_{g}} in a directed graph G {\displaystyle G} containing n {\displaystyle n} total nodes or vertices labeled v 0 , v 1 , ⋯ , v n {\displaystyle v_{0},v_{1},\cdots ,v_{n}} , "admissible" means roughly that the heuristic underestimates the cost to the goal or formally that h ( v i , v g ) ≤ d ⋆ ( v i , v g ) {\displaystyle h(v_{i},v_{g})\leq d^{\star }(v_{i},v_{g})} for all ( v i , v g ) {\displaystyle (v_{i},v_{g})} where i , g ∈ [ 0 , 1 , . . . , n ] {\displaystyle {i,g}\in [0,1,...,n]} . If a heuristic is not admissible, it may never find the goal, either by ending up in a dead end of graph G {\displaystyle G} or by skipping back and forth between two nodes v i {\displaystyle v_{i}} and v j {\displaystyle v_{j}} where i , j ≠ g {\displaystyle {i,j}\neq g} . == Etymology == The word "heuristic" came into usage in the early 19th century. It is formed irregularly from the Greek word heuriskein, meaning "to find". == See also == Constructive heuristic Metaheuristic: Methods for controlling and tuning basic heuristic algorithms, usually with usage of memory and learning. 
Matheuristics: Optimization algorithms made by the interoperation of metaheuristics and mathematical programming (MP) techniques. Reactive search optimization: Methods using online machine learning principles for self-tuning of heuristics. == References ==
Wikipedia/Heuristic_algorithm
Kruskal's algorithm finds a minimum spanning forest of an undirected edge-weighted graph. If the graph is connected, it finds a minimum spanning tree. It is a greedy algorithm that in each step adds to the forest the lowest-weight edge that will not form a cycle. The key steps of the algorithm are sorting and the use of a disjoint-set data structure to detect cycles. Its running time is dominated by the time to sort all of the graph edges by their weight. A minimum spanning tree of a connected weighted graph is a connected subgraph, without cycles, for which the sum of the weights of all the edges in the subgraph is minimal. For a disconnected graph, a minimum spanning forest is composed of a minimum spanning tree for each connected component. This algorithm was first published by Joseph Kruskal in 1956, and was rediscovered soon afterward by Loberman & Weinberger (1957). Other algorithms for this problem include Prim's algorithm, Borůvka's algorithm, and the reverse-delete algorithm. == Algorithm == The algorithm performs the following steps: Create a forest (a set of trees) initially consisting of a separate single-vertex tree for each vertex in the input graph. Sort the graph edges by weight. Loop through the edges of the graph, in ascending sorted order by their weight. For each edge: Test whether adding the edge to the current forest would create a cycle. If not, add the edge to the forest, combining two trees into a single tree. At the termination of the algorithm, the forest forms a minimum spanning forest of the graph. If the graph is connected, the forest has a single component and forms a minimum spanning tree. == Pseudocode == The following code is implemented with a disjoint-set data structure. It represents the forest F as a set of undirected edges, and uses the disjoint-set data structure to efficiently determine whether two vertices are part of the same tree. 
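The same procedure in runnable form: a Python sketch paralleling the pseudocode below, using a disjoint-set structure with path compression and union by rank. The edge representation as (weight, u, v) tuples over vertices 0..n-1 is an assumption of the sketch:

```python
def kruskal(n, edges):
    """Kruskal's minimum spanning forest for vertices 0..n-1.
    `edges` is a list of (weight, u, v) tuples; returns the chosen edges."""
    parent = list(range(n))
    rank = [0] * n

    def find(x):                        # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                    # UNION by rank; False if same tree
        ra, rb = find(a), find(b)
        if ra == rb:
            return False                # edge would create a cycle
        if rank[ra] < rank[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        if rank[ra] == rank[rb]:
            rank[ra] += 1
        return True

    forest = []
    for w, u, v in sorted(edges):       # edges in ascending weight order
        if union(u, v):                 # adding the edge merges two trees
            forest.append((w, u, v))
    return forest
```

Sorting dominates the running time, exactly as the complexity analysis below describes; the union-find operations contribute only the near-constant inverse-Ackermann factor.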
function Kruskal(Graph G) is
    F := ∅
    for each v in G.Vertices do
        MAKE-SET(v)
    for each {u, v} in G.Edges ordered by increasing weight({u, v}) do
        if FIND-SET(u) ≠ FIND-SET(v) then
            F := F ∪ { {u, v} }
            UNION(FIND-SET(u), FIND-SET(v))
    return F
== Complexity == For a graph with E edges and V vertices, Kruskal's algorithm can be shown to run in O(E log E) time, with simple data structures. This time bound is often written instead as O(E log V), which is equivalent for graphs with no isolated vertices, because for these graphs V/2 ≤ E < V² and the logarithms of V and E are within a constant factor of each other. To achieve this bound, first sort the edges by weight using a comparison sort in O(E log E) time. Once sorted, it is possible to loop through the edges in sorted order in constant time per edge. Next, use a disjoint-set data structure, with a set of vertices for each component, to keep track of which vertices are in which components. Creating this structure, with a separate set for each vertex, takes V operations and O(V) time. The final iteration through all edges performs two find operations and possibly one union operation per edge. These operations take amortized O(α(V)) time per operation, giving worst-case total time O(E α(V)) for this loop, where α is the extremely slowly growing inverse Ackermann function. This part of the time bound is much smaller than the time for the sorting step, so the total time for the algorithm can be simplified to the time for the sorting step. In cases where the edges are already sorted, or where they have small enough integer weight to allow integer sorting algorithms such as counting sort or radix sort to sort them in linear time, the disjoint set operations are the slowest remaining part of the algorithm and the total time is O(E α(V)). == Example == == Proof of correctness == The proof consists of two parts. First, it is proved that the algorithm produces a spanning tree.
Second, it is proved that the constructed spanning tree is of minimal weight. === Spanning tree === Let G {\displaystyle G} be a connected, weighted graph and let Y {\displaystyle Y} be the subgraph of G {\displaystyle G} produced by the algorithm. Y {\displaystyle Y} cannot have a cycle, as by definition an edge is not added if it results in a cycle. Y {\displaystyle Y} cannot be disconnected, since the first encountered edge that joins two components of Y {\displaystyle Y} would have been added by the algorithm. Thus, Y {\displaystyle Y} is a spanning tree of G {\displaystyle G} . === Minimality === We show that the following proposition P is true by induction: If F is the set of edges chosen at any stage of the algorithm, then there is some minimum spanning tree that contains F and none of the edges rejected by the algorithm. Clearly P is true at the beginning, when F is empty: any minimum spanning tree will do, and there exists one because a weighted connected graph always has a minimum spanning tree. Now assume P is true for some non-final edge set F and let T be a minimum spanning tree that contains F. If the next chosen edge e is also in T, then P is true for F + e. Otherwise, if e is not in T then T + e has a cycle C. The cycle C contains edges which do not belong to F + e, since e does not form a cycle when added to F but does in T. Let f be an edge which is in C but not in F + e. Note that f also belongs to T, since f belongs to T + e but not F + e. By P, f has not been considered by the algorithm. f must therefore have a weight at least as large as e. Then T − f + e is a tree, and it has the same or less weight as T. However since T is a minimum spanning tree then T − f + e has the same weight as T, otherwise we get a contradiction and T would not be a minimum spanning tree. So T − f + e is a minimum spanning tree containing F + e and again P holds. 
Therefore, by the principle of induction, P holds when F has become a spanning tree, which is only possible if F is a minimum spanning tree itself. == Parallel algorithm == Kruskal's algorithm is inherently sequential and hard to parallelize. It is, however, possible to perform the initial sorting of the edges in parallel or, alternatively, to use a parallel implementation of a binary heap to extract the minimum-weight edge in every iteration. As parallel sorting is possible in time O ( n ) {\displaystyle O(n)} on O ( log ⁡ n ) {\displaystyle O(\log n)} processors, the runtime of Kruskal's algorithm can be reduced to O(E α(V)), where α again is the inverse of the single-valued Ackermann function. A variant of Kruskal's algorithm, named Filter-Kruskal, has been described by Osipov et al. and is better suited for parallelization. The basic idea behind Filter-Kruskal is to partition the edges in a similar way to quicksort and filter out edges that connect vertices of the same tree to reduce the cost of sorting. The following pseudocode demonstrates this.
function filter_kruskal(G) is
    if |G.E| < kruskal_threshold:
        return kruskal(G)
    pivot = choose_random(G.E)
    E≤, E> = partition(G.E, pivot)
    A = filter_kruskal(E≤)
    E> = filter(E>)
    A = A ∪ filter_kruskal(E>)
    return A

function partition(E, pivot) is
    E≤ = ∅, E> = ∅
    foreach (u, v) in E do
        if weight(u, v) ≤ pivot then
            E≤ = E≤ ∪ {(u, v)}
        else
            E> = E> ∪ {(u, v)}
    return E≤, E>

function filter(E) is
    Ef = ∅
    foreach (u, v) in E do
        if find_set(u) ≠ find_set(v) then
            Ef = Ef ∪ {(u, v)}
    return Ef
Filter-Kruskal lends itself better to parallelization as sorting, filtering, and partitioning can easily be performed in parallel by distributing the edges between the processors. Finally, other variants of a parallel implementation of Kruskal's algorithm have been explored.
Examples include a scheme that uses helper threads to remove edges that are definitely not part of the MST in the background, and a variant which runs the sequential algorithm on p subgraphs, then merges those subgraphs until only one, the final MST, remains. == See also == Prim's algorithm Dijkstra's algorithm Borůvka's algorithm Reverse-delete algorithm Single-linkage clustering Greedy geometric spanner == References == Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 23.2: The algorithms of Kruskal and Prim, pp. 567–574. Michael T. Goodrich and Roberto Tamassia. Data Structures and Algorithms in Java, Fourth Edition. John Wiley & Sons, Inc., 2006. ISBN 0-471-73884-0. Section 13.7.1: Kruskal's Algorithm, pp. 632.. == External links == Data for the article's example. Gephi Plugin For Calculating a Minimum Spanning Tree source code. Kruskal's Algorithm with example and program in c++ Kruskal's Algorithm code in C++ as applied to random numbers Kruskal's Algorithm code in Python with explanation
Wikipedia/Kruskal's_algorithm
In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems. Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information. It does so by gradually improving an approximation to the Hessian matrix of the loss function, obtained only from gradient evaluations (or approximate gradient evaluations) via a generalized secant method. Since the updates of the BFGS curvature matrix do not require matrix inversion, its computational complexity is only O ( n 2 ) {\displaystyle {\mathcal {O}}(n^{2})} , compared to O ( n 3 ) {\displaystyle {\mathcal {O}}(n^{3})} in Newton's method. Also in common use is L-BFGS, which is a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The BFGS-B variant handles simple box constraints. The BFGS matrix also admits a compact representation, which makes it better suited for large constrained problems. The algorithm is named after Charles George Broyden, Roger Fletcher, Donald Goldfarb and David Shanno. == Rationale == The optimization problem is to minimize f ( x ) {\displaystyle f(\mathbf {x} )} , where x {\displaystyle \mathbf {x} } is a vector in R n {\displaystyle \mathbb {R} ^{n}} , and f {\displaystyle f} is a differentiable scalar function. There are no constraints on the values that x {\displaystyle \mathbf {x} } can take. The algorithm begins at an initial estimate x 0 {\displaystyle \mathbf {x} _{0}} for the optimal value and proceeds iteratively to get a better estimate at each stage. 
The search direction pk at stage k is given by the solution of the analogue of the Newton equation: B k p k = − ∇ f ( x k ) , {\displaystyle B_{k}\mathbf {p} _{k}=-\nabla f(\mathbf {x} _{k}),} where B k {\displaystyle B_{k}} is an approximation to the Hessian matrix at x k {\displaystyle \mathbf {x} _{k}} , which is updated iteratively at each stage, and ∇ f ( x k ) {\displaystyle \nabla f(\mathbf {x} _{k})} is the gradient of the function evaluated at xk. A line search in the direction pk is then used to find the next point xk+1 by minimizing f ( x k + γ p k ) {\displaystyle f(\mathbf {x} _{k}+\gamma \mathbf {p} _{k})} over the scalar γ > 0. {\displaystyle \gamma >0.} The quasi-Newton condition imposed on the update of B k {\displaystyle B_{k}} is B k + 1 ( x k + 1 − x k ) = ∇ f ( x k + 1 ) − ∇ f ( x k ) . {\displaystyle B_{k+1}(\mathbf {x} _{k+1}-\mathbf {x} _{k})=\nabla f(\mathbf {x} _{k+1})-\nabla f(\mathbf {x} _{k}).} Let y k = ∇ f ( x k + 1 ) − ∇ f ( x k ) {\displaystyle \mathbf {y} _{k}=\nabla f(\mathbf {x} _{k+1})-\nabla f(\mathbf {x} _{k})} and s k = x k + 1 − x k {\displaystyle \mathbf {s} _{k}=\mathbf {x} _{k+1}-\mathbf {x} _{k}} , then B k + 1 {\displaystyle B_{k+1}} satisfies B k + 1 s k = y k {\displaystyle B_{k+1}\mathbf {s} _{k}=\mathbf {y} _{k}} , which is the secant equation. The curvature condition s k ⊤ y k > 0 {\displaystyle \mathbf {s} _{k}^{\top }\mathbf {y} _{k}>0} should be satisfied for B k + 1 {\displaystyle B_{k+1}} to be positive definite, which can be verified by pre-multiplying the secant equation with s k T {\displaystyle \mathbf {s} _{k}^{T}} . If the function is not strongly convex, then the condition has to be enforced explicitly e.g. by finding a point xk+1 satisfying the Wolfe conditions, which entail the curvature condition, using line search. 
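Putting the rationale together (direction from the quasi-Newton equation, a line search, then a secant-based update), the method can be sketched compactly in Python with NumPy. This sketch maintains the inverse approximation H_k = B_k⁻¹ directly, uses a simple backtracking (Armijo) line search rather than a full Wolfe search, and skips the update whenever the curvature condition sᵀy > 0 fails; these simplifications are assumptions of the sketch, not part of the method as stated above:

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-8, max_iter=200):
    """Minimize f by BFGS, maintaining H_k ≈ (Hessian)^{-1} directly."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                        # H_0 = I: first step is gradient descent
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:     # gradient-norm stopping rule
            break
        p = -H @ g                       # direction: p_k = -H_k ∇f(x_k)
        alpha = 1.0
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5                 # backtracking (Armijo) line search
        s = alpha * p                    # s_k = α_k p_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                    # y_k = ∇f(x_{k+1}) - ∇f(x_k)
        sy = s @ y
        if sy > 1e-12:                   # curvature condition s_k^T y_k > 0
            rho = 1.0 / sy
            I = np.eye(n)
            # inverse-Hessian (Sherman-Morrison) form of the BFGS update
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```

Guarding the update with the curvature test keeps H symmetric positive definite, so every p_k remains a descent direction and the backtracking loop terminates.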
Instead of requiring the full Hessian matrix at the point x k + 1 {\displaystyle \mathbf {x} _{k+1}} to be computed as B k + 1 {\displaystyle B_{k+1}} , the approximate Hessian at stage k is updated by the addition of two matrices: B k + 1 = B k + U k + V k . {\displaystyle B_{k+1}=B_{k}+U_{k}+V_{k}.} Both U k {\displaystyle U_{k}} and V k {\displaystyle V_{k}} are symmetric rank-one matrices, but their sum is a rank-two update matrix. The BFGS and DFP updating matrices both differ from their predecessor by a rank-two matrix. Another, simpler rank-one method is the symmetric rank-one method, which does not guarantee positive definiteness. In order to maintain the symmetry and positive definiteness of B k + 1 {\displaystyle B_{k+1}} , the update form can be chosen as B k + 1 = B k + α u u ⊤ + β v v ⊤ {\displaystyle B_{k+1}=B_{k}+\alpha \mathbf {u} \mathbf {u} ^{\top }+\beta \mathbf {v} \mathbf {v} ^{\top }} . Imposing the secant condition, B k + 1 s k = y k {\displaystyle B_{k+1}\mathbf {s} _{k}=\mathbf {y} _{k}} . Choosing u = y k {\displaystyle \mathbf {u} =\mathbf {y} _{k}} and v = B k s k {\displaystyle \mathbf {v} =B_{k}\mathbf {s} _{k}} , we can obtain: α = 1 y k T s k , {\displaystyle \alpha ={\frac {1}{\mathbf {y} _{k}^{T}\mathbf {s} _{k}}},} β = − 1 s k T B k s k . {\displaystyle \beta =-{\frac {1}{\mathbf {s} _{k}^{T}B_{k}\mathbf {s} _{k}}}.} Finally, we substitute α {\displaystyle \alpha } and β {\displaystyle \beta } into B k + 1 = B k + α u u ⊤ + β v v ⊤ {\displaystyle B_{k+1}=B_{k}+\alpha \mathbf {u} \mathbf {u} ^{\top }+\beta \mathbf {v} \mathbf {v} ^{\top }} and get the update equation of B k + 1 {\displaystyle B_{k+1}} : B k + 1 = B k + y k y k T y k T s k − B k s k s k T B k T s k T B k s k .
{\displaystyle B_{k+1}=B_{k}+{\frac {\mathbf {y} _{k}\mathbf {y} _{k}^{\mathrm {T} }}{\mathbf {y} _{k}^{\mathrm {T} }\mathbf {s} _{k}}}-{\frac {B_{k}\mathbf {s} _{k}\mathbf {s} _{k}^{\mathrm {T} }B_{k}^{\mathrm {T} }}{\mathbf {s} _{k}^{\mathrm {T} }B_{k}\mathbf {s} _{k}}}.} == Algorithm == Consider the following unconstrained optimization problem minimize x ∈ R n f ( x ) , {\displaystyle {\begin{aligned}{\underset {\mathbf {x} \in \mathbb {R} ^{n}}{\text{minimize}}}\quad &f(\mathbf {x} ),\end{aligned}}} where f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is a nonlinear objective function. From an initial guess x 0 ∈ R n {\displaystyle \mathbf {x} _{0}\in \mathbb {R} ^{n}} and an initial guess of the Hessian matrix B 0 ∈ R n × n {\displaystyle B_{0}\in \mathbb {R} ^{n\times n}} the following steps are repeated as x k {\displaystyle \mathbf {x} _{k}} converges to the solution: Obtain a direction p k {\displaystyle \mathbf {p} _{k}} by solving B k p k = − ∇ f ( x k ) {\displaystyle B_{k}\mathbf {p} _{k}=-\nabla f(\mathbf {x} _{k})} . Perform a one-dimensional optimization (line search) to find an acceptable stepsize α k {\displaystyle \alpha _{k}} in the direction found in the first step. If an exact line search is performed, then α k = arg ⁡ min f ( x k + α p k ) {\displaystyle \alpha _{k}=\arg \min f(\mathbf {x} _{k}+\alpha \mathbf {p} _{k})} . In practice, an inexact line search usually suffices, with an acceptable α k {\displaystyle \alpha _{k}} satisfying Wolfe conditions. Set s k = α k p k {\displaystyle \mathbf {s} _{k}=\alpha _{k}\mathbf {p} _{k}} and update x k + 1 = x k + s k {\displaystyle \mathbf {x} _{k+1}=\mathbf {x} _{k}+\mathbf {s} _{k}} . y k = ∇ f ( x k + 1 ) − ∇ f ( x k ) {\displaystyle \mathbf {y} _{k}={\nabla f(\mathbf {x} _{k+1})-\nabla f(\mathbf {x} _{k})}} . 
B k + 1 = B k + y k y k T y k T s k − B k s k s k T B k T s k T B k s k {\displaystyle B_{k+1}=B_{k}+{\frac {\mathbf {y} _{k}\mathbf {y} _{k}^{\mathrm {T} }}{\mathbf {y} _{k}^{\mathrm {T} }\mathbf {s} _{k}}}-{\frac {B_{k}\mathbf {s} _{k}\mathbf {s} _{k}^{\mathrm {T} }B_{k}^{\mathrm {T} }}{\mathbf {s} _{k}^{\mathrm {T} }B_{k}\mathbf {s} _{k}}}} . Convergence can be determined by observing the norm of the gradient; given some ϵ > 0 {\displaystyle \epsilon >0} , one may stop the algorithm when | | ∇ f ( x k ) | | ≤ ϵ . {\displaystyle ||\nabla f(\mathbf {x} _{k})||\leq \epsilon .} If B 0 {\displaystyle B_{0}} is initialized with B 0 = I {\displaystyle B_{0}=I} , the first step will be equivalent to a gradient descent, but further steps are more and more refined by B k {\displaystyle B_{k}} , the approximation to the Hessian. The first step of the algorithm is carried out using the inverse of the matrix B k {\displaystyle B_{k}} , which can be obtained efficiently by applying the Sherman–Morrison formula to the step 5 of the algorithm, giving B k + 1 − 1 = ( I − s k y k T y k T s k ) B k − 1 ( I − y k s k T y k T s k ) + s k s k T y k T s k . {\displaystyle B_{k+1}^{-1}=\left(I-{\frac {\mathbf {s} _{k}\mathbf {y} _{k}^{T}}{\mathbf {y} _{k}^{T}\mathbf {s} _{k}}}\right)B_{k}^{-1}\left(I-{\frac {\mathbf {y} _{k}\mathbf {s} _{k}^{T}}{\mathbf {y} _{k}^{T}\mathbf {s} _{k}}}\right)+{\frac {\mathbf {s} _{k}\mathbf {s} _{k}^{T}}{\mathbf {y} _{k}^{T}\mathbf {s} _{k}}}.} This can be computed efficiently without temporary matrices, recognizing that B k − 1 {\displaystyle B_{k}^{-1}} is symmetric, and that y k T B k − 1 y k {\displaystyle \mathbf {y} _{k}^{\mathrm {T} }B_{k}^{-1}\mathbf {y} _{k}} and s k T y k {\displaystyle \mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k}} are scalars, using an expansion such as B k + 1 − 1 = B k − 1 + ( s k T y k + y k T B k − 1 y k ) ( s k s k T ) ( s k T y k ) 2 − B k − 1 y k s k T + s k y k T B k − 1 s k T y k . 
{\displaystyle B_{k+1}^{-1}=B_{k}^{-1}+{\frac {(\mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k}+\mathbf {y} _{k}^{\mathrm {T} }B_{k}^{-1}\mathbf {y} _{k})(\mathbf {s} _{k}\mathbf {s} _{k}^{\mathrm {T} })}{(\mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k})^{2}}}-{\frac {B_{k}^{-1}\mathbf {y} _{k}\mathbf {s} _{k}^{\mathrm {T} }+\mathbf {s} _{k}\mathbf {y} _{k}^{\mathrm {T} }B_{k}^{-1}}{\mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k}}}.} Therefore, in order to avoid any matrix inversion, the inverse of the Hessian can be approximated instead of the Hessian itself: H k = def B k − 1 . {\displaystyle H_{k}{\overset {\operatorname {def} }{=}}B_{k}^{-1}.} From an initial guess x 0 {\displaystyle \mathbf {x} _{0}} and an approximate inverted Hessian matrix H 0 {\displaystyle H_{0}} the following steps are repeated as x k {\displaystyle \mathbf {x} _{k}} converges to the solution: Obtain a direction p k {\displaystyle \mathbf {p} _{k}} by solving p k = − H k ∇ f ( x k ) {\displaystyle \mathbf {p} _{k}=-H_{k}\nabla f(\mathbf {x} _{k})} . Perform a one-dimensional optimization (line search) to find an acceptable stepsize α k {\displaystyle \alpha _{k}} in the direction found in the first step. If an exact line search is performed, then α k = arg ⁡ min f ( x k + α p k ) {\displaystyle \alpha _{k}=\arg \min f(\mathbf {x} _{k}+\alpha \mathbf {p} _{k})} . In practice, an inexact line search usually suffices, with an acceptable α k {\displaystyle \alpha _{k}} satisfying Wolfe conditions. Set s k = α k p k {\displaystyle \mathbf {s} _{k}=\alpha _{k}\mathbf {p} _{k}} and update x k + 1 = x k + s k {\displaystyle \mathbf {x} _{k+1}=\mathbf {x} _{k}+\mathbf {s} _{k}} . y k = ∇ f ( x k + 1 ) − ∇ f ( x k ) {\displaystyle \mathbf {y} _{k}={\nabla f(\mathbf {x} _{k+1})-\nabla f(\mathbf {x} _{k})}} . 
H k + 1 = H k + ( s k T y k + y k T H k y k ) ( s k s k T ) ( s k T y k ) 2 − H k y k s k T + s k y k T H k s k T y k {\displaystyle H_{k+1}=H_{k}+{\frac {(\mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k}+\mathbf {y} _{k}^{\mathrm {T} }H_{k}\mathbf {y} _{k})(\mathbf {s} _{k}\mathbf {s} _{k}^{\mathrm {T} })}{(\mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k})^{2}}}-{\frac {H_{k}\mathbf {y} _{k}\mathbf {s} _{k}^{\mathrm {T} }+\mathbf {s} _{k}\mathbf {y} _{k}^{\mathrm {T} }H_{k}}{\mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k}}}} . In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix. == Further developments == The BFGS update formula heavily relies on the curvature s k ⊤ y k {\displaystyle \mathbf {s} _{k}^{\top }\mathbf {y} _{k}} being strictly positive and bounded away from zero. This condition is satisfied when we perform a line search with Wolfe conditions on a convex target. However, some real-life applications (like Sequential Quadratic Programming methods) routinely produce negative or nearly zero curvatures. This can occur when optimizing a nonconvex target or when employing a trust-region approach instead of a line search. It is also possible to produce spurious values due to noise in the target. In such cases, one of the so-called damped BFGS updates can be used, which modify s k {\displaystyle \mathbf {s} _{k}} and/or y k {\displaystyle \mathbf {y} _{k}} in order to obtain a more robust update. == Notable implementations == Notable open source implementations are: ALGLIB implements BFGS and its limited-memory version in C++ and C# GNU Octave uses a form of BFGS in its fsolve function, with trust region extensions.
The GSL implements BFGS as gsl_multimin_fdfminimizer_vector_bfgs2. In R, the BFGS algorithm (and the L-BFGS-B version that allows box constraints) is implemented as an option of the base function optim(). In SciPy, the scipy.optimize.fmin_bfgs function implements BFGS. It is also possible to run BFGS using any of the L-BFGS algorithms by setting the parameter L to a very large number. It is also one of the default methods used when running scipy.optimize.minimize with no constraints. In Julia, the Optim.jl package implements BFGS and L-BFGS as a solver option to the optimize() function (among other options). Notable proprietary implementations include: The large scale nonlinear optimization software Artelys Knitro implements, among others, both BFGS and L-BFGS algorithms. In the MATLAB Optimization Toolbox, the fminunc function uses BFGS with cubic line search when the problem size is set to "medium scale." Mathematica includes BFGS. LS-DYNA also uses BFGS to solve implicit Problems. == See also == == References == == Further reading == Avriel, Mordecai (2003), Nonlinear Programming: Analysis and Methods, Dover Publishing, ISBN 978-0-486-43227-4 Bonnans, J. Frédéric; Gilbert, J. Charles; Lemaréchal, Claude; Sagastizábal, Claudia A. (2006), "Newtonian Methods", Numerical Optimization: Theoretical and Practical Aspects (Second ed.), Berlin: Springer, pp. 51–66, ISBN 3-540-35445-X Fletcher, Roger (1987), Practical Methods of Optimization (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-91547-8 Luenberger, David G.; Ye, Yinyu (2008), Linear and nonlinear programming, International Series in Operations Research & Management Science, vol. 116 (Third ed.), New York: Springer, pp. xiv+546, ISBN 978-0-387-74502-2, MR 2423726 Kelley, C. T. (1999), Iterative Methods for Optimization, Philadelphia: Society for Industrial and Applied Mathematics, pp. 71–86, ISBN 0-89871-433-8
Wikipedia/Broyden–Fletcher–Goldfarb–Shanno_algorithm