String functions are used in computer programming languages to manipulate a string or to query information about a string (some do both). Most programming languages that have a string datatype provide some string functions, although there may be other low-level ways within each language to handle strings directly. In object-oriented languages, string functions are often implemented as properties and methods of string objects. In functional and list-based languages a string is represented as a list (of character codes), so all list-manipulation procedures could be considered string functions. However, such languages may implement a subset of explicit string-specific functions as well.

For functions that manipulate strings, modern object-oriented languages, like C# and Java, have immutable strings and return a copy (in newly allocated dynamic memory), while others, like C, manipulate the original string unless the programmer copies data to a new string. See for example Concatenation below.

The most basic example of a string function is the length(string) function, which returns the length of a string; e.g. length("hello world") would return 11. Other languages may have string functions with similar or exactly the same syntax, parameters, or outcomes. For example, in many languages the length function is represented as len(string). The list of common functions below aims to help limit this confusion.

== Common string functions (multi language reference) ==

String functions common to many languages are listed below, including the different names used. The list aims to help programmers find the equivalent function in a language. Note that string concatenation and regular expressions are handled in separate pages. Statements in guillemets (« … ») are optional.
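As a concrete illustration (in Python, which spells the function len rather than length):

```python
# The length function counts the characters in a string; Python calls
# it len(), while other languages use length(), len(string), etc.
s = "hello world"
print(len(s))  # 11
```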
=== CharAt ===
# Example in ALGOL 68 #
"Hello, World"[2]; // 'e'

=== Compare (integer result) ===

=== Compare (relational operator-based, Boolean result) ===

=== Concatenation ===

=== Contains ===
¢ Example in ALGOL 68 ¢
string in string("e", loc int, "Hello mate"); ¢ returns true ¢
string in string("z", loc int, "word"); ¢ returns false ¢

=== Equality ===
Tests if two strings are equal. See also #Compare (integer result) and #Compare (relational operator-based, Boolean result). Note that doing equality checks via a generic Compare with integer result is not only confusing for the programmer but is often a significantly more expensive operation; this is especially true when using "C-strings".

=== Find ===
Examples exist in Common Lisp, C#, Raku, Scheme, Visual Basic, and Smalltalk.

=== Find character ===
Given a set of characters, SCAN returns the position of the first character found, while VERIFY returns the position of the first character that does not belong to the set.

=== Format ===

=== Inequality ===
Tests if two strings are not equal. See also #Equality.

=== index ===
See #Find.

=== indexof ===
See #Find.

=== instr ===
See #Find.

=== instrrev ===
See #rfind.

=== join ===

=== lastindexof ===
See #rfind.

=== left ===

=== len ===
See #length.

=== length ===

=== locate ===
See #Find.

=== Lowercase ===

=== mid ===
See #substring.

=== partition ===

=== replace ===

=== reverse ===

=== rfind ===

=== right ===

=== rpartition ===

=== slice ===
See #substring.

=== split ===

=== sprintf ===
See #Format.

=== strip ===
See #trim.

=== strcmp ===
See #Compare (integer result).

=== substring ===

=== Uppercase ===

=== trim ===
trim or strip is used to remove whitespace from the beginning, the end, or both ends of a string.

Other languages

In languages without a built-in trim function, it is usually simple to create a custom function which accomplishes the same task.
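The remark about cost under Equality can be illustrated with a Python sketch (function names are ours, purely illustrative; real implementations differ): an equality test can return as soon as the lengths differ, while a generic three-way compare must scan for the first differing character.

```python
def str_equal(a, b):
    # Equality can return False immediately when the lengths differ.
    if len(a) != len(b):
        return False
    return all(x == y for x, y in zip(a, b))

def str_compare(a, b):
    # A three-way compare (like C's strcmp) must walk the strings to
    # the first differing character before it can answer.
    for x, y in zip(a, b):
        if x != y:
            return -1 if x < y else 1
    # Equal prefixes: order by length.
    return (len(a) > len(b)) - (len(a) < len(b))

print(str_equal("hello", "help"))    # False (length mismatch, no scan)
print(str_compare("hello", "help"))  # -1 ('l' sorts before 'p')
```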
==== APL ====
APL can use regular expressions directly. Alternatively, a functional approach combines Boolean masks that filter away leading and trailing spaces; or one can reverse the string and remove leading spaces, twice.

==== AWK ====
In AWK, one can use regular expressions to trim.

==== C/C++ ====
There is no standard trim function in C or C++. Most of the available string libraries for C contain code which implements trimming, or functions that significantly ease an efficient implementation. The function has also often been called EatWhitespace in some non-standard C libraries. In C, programmers often combine an ltrim and an rtrim to implement trim. The open-source C++ library Boost has several trim variants, including a standard one; with Boost's function named simply trim, the input sequence is modified in place and no result is returned. Another open-source C++ library, Qt, also has several trim variants, including a standard one. The Linux kernel has included a strip function, strstrip(), since 2.6.18-rc1, which trims the string "in place". Since 2.6.33-rc1, the kernel uses strim() instead of strstrip() to avoid false warnings.

==== Haskell ====
A trim algorithm in Haskell may be interpreted as follows: f drops the preceding whitespace and reverses the string; f is then applied again to its own output. Note that the type signature (the second line) is optional.

==== J ====
The trim algorithm in J is a functional description: filter (#~) for non-space characters (' '&~:) between leading (+./\) and (*.) trailing (+./\.) spaces.

==== JavaScript ====
There is a built-in trim function in JavaScript 1.8.1 (Firefox 3.5 and later) and in the ECMAScript 5 standard. In earlier versions it can be added to the String object's prototype.

==== Perl ====
Perl 5 has no built-in trim function. However, the functionality is commonly achieved using regular expressions; such substitutions modify the value of the original variable $string.
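The ltrim/rtrim combination described for C, as well as character-set trimming (as in Tcl) and whitespace normalization (as in XSLT's normalize-space), can be sketched in Python. This is illustrative only: the function names are ours, Python's built-in str.strip already performs trimming, and the whitespace set below is an assumption.

```python
WHITESPACE = " \t\n\r"

def ltrim(s, chars=WHITESPACE):
    """Drop leading characters belonging to the given set."""
    i = 0
    while i < len(s) and s[i] in chars:
        i += 1
    return s[i:]

def rtrim(s, chars=WHITESPACE):
    """Drop trailing characters belonging to the given set."""
    j = len(s)
    while j > 0 and s[j - 1] in chars:
        j -= 1
    return s[:j]

def trim(s, chars=WHITESPACE):
    """Trim both ends: an ltrim combined with an rtrim."""
    return ltrim(rtrim(s, chars), chars)

def normalize_space(s):
    """XSLT-style: trim, then collapse each whitespace run to one space."""
    return " ".join(s.split())

print(trim("  hello  "))             # "hello"
print(trim("aristotle", "aeiou"))    # "ristotl" (Tcl-style set trimming)
print(normalize_space(" a  b\n c ")) # "a b c"
```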
Also available for Perl is StripLTSpace in String::Strip from CPAN. There are, however, two functions that are commonly used to strip whitespace from the end of strings, chomp and chop: chop removes the last character from a string and returns it; chomp removes the trailing newline character(s) from a string if present (what constitutes a newline is $INPUT_RECORD_SEPARATOR dependent). In Raku, a sister language of Perl, strings have a trim method.

==== Tcl ====
The Tcl string command has three relevant subcommands: trim, trimright, and trimleft. For each of those commands, an additional argument may be specified: a string that represents a set of characters to remove; the default is whitespace (space, tab, newline, carriage return). This makes it possible, for example, to trim vowels rather than whitespace.

==== XSLT ====
XSLT includes the function normalize-space(string), which strips leading and trailing whitespace and replaces any whitespace sequence (including line breaks) with a single space. XSLT 2.0 includes regular expressions, providing another mechanism to perform string trimming. Another XSLT technique for trimming is to use the XPath 2.0 substring() function.

== References ==
Wikipedia/Comparison_of_programming_languages_(string_functions)
In mathematics, the inverse limit (also called the projective limit) is a construction that allows one to "glue together" several related objects, the precise gluing process being specified by morphisms between the objects. Thus, inverse limits can be defined in any category, although their existence depends on the category considered. They are a special case of the concept of limit in category theory. By working in the dual category, that is, by reversing the arrows, an inverse limit becomes a direct limit or inductive limit, and a limit becomes a colimit.

== Formal definition ==

=== Algebraic objects ===

We start with the definition of an inverse system (or projective system) of groups and homomorphisms. Let $(I, \leq)$ be a directed poset (not all authors require $I$ to be directed). Let $(A_i)_{i \in I}$ be a family of groups and suppose we have a family of homomorphisms $f_{ij} : A_j \to A_i$ for all $i \leq j$ (note the order), with the following properties:
$f_{ii}$ is the identity on $A_i$,
$f_{ik} = f_{ij} \circ f_{jk}$ for all $i \leq j \leq k$.
Then the pair $((A_i)_{i \in I}, (f_{ij})_{i \leq j \in I})$ is called an inverse system of groups and morphisms over $I$, and the morphisms $f_{ij}$ are called the transition morphisms of the system. We define the inverse limit of the inverse system $((A_i)_{i \in I}, (f_{ij})_{i \leq j \in I})$ as a particular subgroup of the direct product of the $A_i$'s:
$$A = \varprojlim_{i \in I} A_i = \left\{ \vec{a} \in \prod_{i \in I} A_i \;\middle|\; a_i = f_{ij}(a_j) \text{ for all } i \leq j \text{ in } I \right\}.$$
The inverse limit $A$ comes equipped with natural projections $\pi_i : A \to A_i$ which pick out the $i$th component of the direct product for each $i \in I$. The inverse limit and the natural projections satisfy a universal property described in the next section. This same construction may be carried out if the $A_i$'s are sets, semigroups, topological spaces, rings, modules (over a fixed ring), algebras (over a fixed ring), etc., and the homomorphisms are morphisms in the corresponding category. The inverse limit will also belong to that category.

=== General definition ===

The inverse limit can be defined abstractly in an arbitrary category by means of a universal property. Let $(X_i, f_{ij})$ be an inverse system of objects and morphisms in a category $C$ (same definition as above). The inverse limit of this system is an object $X$ in $C$ together with morphisms $\pi_i : X \to X_i$ (called projections) satisfying $\pi_i = f_{ij} \circ \pi_j$ for all $i \leq j$. The pair $(X, \pi_i)$ must be universal in the sense that for any other such pair $(Y, \psi_i)$ there exists a unique morphism $u : Y \to X$ such that the diagram commutes for all $i \leq j$. The inverse limit is often denoted $X = \varprojlim X_i$, with the inverse system $(X_i, f_{ij})$ and the canonical projections $\pi_i$ being understood. In some categories, the inverse limit of certain inverse systems does not exist. If it does, however, it is unique in a strong sense: given any two inverse limits $X$ and $X'$ of an inverse system, there exists a unique isomorphism $X' \to X$ commuting with the projection maps.
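In the category of sets the unique mediating morphism can be written down explicitly; a sketch (not in the original text) of how the universal property is verified there:

```latex
% With X = \varprojlim X_i \subseteq \prod_i X_i and a compatible family
% \psi_i : Y \to X_i (i.e. \psi_i = f_{ij} \circ \psi_j for i \leq j), define
u : Y \to X, \qquad u(y) = (\psi_i(y))_{i \in I}.
% Compatibility of the \psi_i guarantees that u(y) satisfies the defining
% condition a_i = f_{ij}(a_j), so u(y) lies in X; and the requirement
% \pi_i \circ u = \psi_i forces this formula, which gives uniqueness.
```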
Inverse systems and inverse limits in a category $C$ admit an alternative description in terms of functors. Any partially ordered set $I$ can be considered as a small category where the morphisms consist of arrows $i \to j$ if and only if $i \leq j$. An inverse system is then just a contravariant functor $I \to C$. Let $C^{I^{\mathrm{op}}}$ be the category of these functors (with natural transformations as morphisms). An object $X$ of $C$ can be considered a trivial inverse system, where all objects are equal to $X$ and all arrows are the identity of $X$. This defines a "trivial functor" from $C$ to $C^{I^{\mathrm{op}}}$. The inverse limit, if it exists, is defined as a right adjoint of this trivial functor.

== Examples ==

The ring of p-adic integers is the inverse limit of the rings $\mathbb{Z}/p^n\mathbb{Z}$ (see modular arithmetic), with the index set being the natural numbers with the usual order and the morphisms being "take remainder". That is, one considers sequences of integers $(n_1, n_2, \dots)$ such that each element of the sequence "projects" down to the previous ones, namely that $n_i \equiv n_j \bmod p^i$ whenever $i < j$. The natural topology on the p-adic integers is the one implied here, namely the product topology with cylinder sets as the open sets.

The p-adic solenoid is the inverse limit of the topological groups $\mathbb{R}/p^n\mathbb{Z}$, with the index set being the natural numbers with the usual order and the morphisms being "take remainder". That is, one considers sequences of real numbers $(x_1, x_2, \dots)$ such that each element of the sequence "projects" down to the previous ones, namely that $x_i \equiv x_j \bmod p^i$ whenever $i < j$. Its elements are exactly of the form $n + r$, where $n$ is a p-adic integer and $r \in [0, 1)$ is the "remainder".

The ring $R[[t]]$ of formal power series over a commutative ring $R$ can be thought of as the inverse limit of the rings $R[t]/t^n R[t]$, indexed by the natural numbers as usually ordered, with the morphisms from $R[t]/t^{n+j} R[t]$ to $R[t]/t^n R[t]$ given by the natural projection.

Pro-finite groups are defined as inverse limits of (discrete) finite groups.

Let the index set $I$ of an inverse system $(X_i, f_{ij})$ have a greatest element $m$. Then the natural projection $\pi_m : X \to X_m$ is an isomorphism.

In the category of sets, every inverse system has an inverse limit, which can be constructed in an elementary manner as a subset of the product of the sets forming the inverse system. The inverse limit of any inverse system of non-empty finite sets is non-empty. This is a generalization of Kőnig's lemma in graph theory and may be proved with Tychonoff's theorem, viewing the finite sets as compact discrete spaces and then applying the finite intersection property characterization of compactness.

In the category of topological spaces, every inverse system has an inverse limit. It is constructed by placing the initial topology on the underlying set-theoretic inverse limit. This is known as the limit topology. The set of infinite strings is the inverse limit of the set of finite strings and is thus endowed with the limit topology. As the original spaces are discrete, the limit space is totally disconnected. This is one way of realizing the p-adic numbers and the Cantor set (as infinite strings).
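The "take remainder" transition maps and the compatibility condition can be checked concretely. A minimal Python sketch (names ours): reducing a fixed integer mod successive powers of p produces a compatible sequence, i.e. a point of the inverse limit of the rings Z/p^nZ.

```python
p = 3

def transition(n_j, j, i):
    """Transition map Z/p^j -> Z/p^i (for i <= j): take the remainder mod p^i."""
    assert i <= j
    return n_j % p**i

# An element of the inverse limit is a sequence (n_1, n_2, ...) with
# n_i = transition(n_j, j, i) whenever i <= j.  Truncations of an
# ordinary integer give such a sequence:
x = 46
seq = [x % p**n for n in range(1, 6)]
print(seq)  # [1, 1, 19, 46, 46]

# Verify the compatibility condition for every pair i <= j:
assert all(seq[i - 1] == transition(seq[j - 1], j, i)
           for j in range(1, 6) for i in range(1, j + 1))
```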
== Derived functors of the inverse limit ==

For an abelian category $C$, the inverse limit functor $\varprojlim : C^I \to C$ is left exact. If $I$ is ordered (not simply partially ordered) and countable, and $C$ is the category Ab of abelian groups, the Mittag-Leffler condition is a condition on the transition morphisms $f_{ij}$ that ensures the exactness of $\varprojlim$. Specifically, Eilenberg constructed a functor $\varprojlim^1 : \operatorname{Ab}^I \to \operatorname{Ab}$ (pronounced "lim one") such that if $(A_i, f_{ij})$, $(B_i, g_{ij})$, and $(C_i, h_{ij})$ are three inverse systems of abelian groups, and
$$0 \to A_i \to B_i \to C_i \to 0$$
is a short exact sequence of inverse systems, then
$$0 \to \varprojlim A_i \to \varprojlim B_i \to \varprojlim C_i \to \varprojlim{}^1 A_i$$
is an exact sequence in Ab.

=== Mittag-Leffler condition ===

If the ranges of the morphisms of an inverse system of abelian groups $(A_i, f_{ij})$ are stationary, that is, for every $k$ there exists $j \geq k$ such that for all $i \geq j$:
$$f_{kj}(A_j) = f_{ki}(A_i),$$
one says that the system satisfies the Mittag-Leffler condition. The name "Mittag-Leffler" for this condition was given by Bourbaki in their chapter on uniform structures, for a similar result about inverse limits of complete Hausdorff uniform spaces. Mittag-Leffler used a similar argument in the proof of Mittag-Leffler's theorem.

The following situations are examples where the Mittag-Leffler condition is satisfied:
a system in which the morphisms $f_{ij}$ are surjective;
a system of finite-dimensional vector spaces or finite abelian groups or modules of finite length or Artinian modules.
An example where $\varprojlim^1$ is non-zero is obtained by taking $I$ to be the non-negative integers, letting $A_i = p^i\mathbf{Z}$, $B_i = \mathbf{Z}$, and $C_i = B_i/A_i = \mathbf{Z}/p^i\mathbf{Z}$. Then
$$\varprojlim{}^1 A_i = \mathbf{Z}_p/\mathbf{Z},$$
where $\mathbf{Z}_p$ denotes the p-adic integers.

=== Further results ===

More generally, if $C$ is an arbitrary abelian category that has enough injectives, then so does $C^I$, and the right derived functors of the inverse limit functor can thus be defined. The nth right derived functor is denoted $R^n \varprojlim : C^I \to C$. In the case where $C$ satisfies Grothendieck's axiom (AB4*), Jan-Erik Roos generalized the functor $\varprojlim^1$ on $\operatorname{Ab}^I$ to a series of functors $\varprojlim^n$ such that $\varprojlim^n \cong R^n \varprojlim$. It was thought for almost 40 years that Roos had proved (in Sur les foncteurs dérivés de lim. Applications.) that $\varprojlim^1 A_i = 0$ for $(A_i, f_{ij})$ an inverse system with surjective transition morphisms and $I$ the set of non-negative integers (such inverse systems are often called "Mittag-Leffler sequences"). However, in 2002, Amnon Neeman and Pierre Deligne constructed an example of such a system in a category satisfying (AB4) (in addition to (AB4*)) with $\varprojlim^1 A_i \neq 0$. Roos has since shown (in "Derived functors of inverse limits revisited") that his result is correct if $C$ has a set of generators (in addition to satisfying (AB3) and (AB4*)). Barry Mitchell has shown (in "The cohomological dimension of a directed set") that if $I$ has cardinality $\aleph_d$ (the dth infinite cardinal), then $R^n \varprojlim$ is zero for all $n \geq d + 2$.
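A sketch of the computation behind this example (not spelled out in the original text): for countable systems of abelian groups, the four-term exact sequence above extends to a six-term sequence ending in $\varprojlim^1 B_i \to \varprojlim^1 C_i \to 0$, and this pins down $\varprojlim^1 A_i$.

```latex
% For the inclusions A_i = p^i\mathbf{Z} \subset B_i = \mathbf{Z},
% with quotients C_i = \mathbf{Z}/p^i\mathbf{Z}:
\varprojlim A_i = \bigcap_i p^i\mathbf{Z} = 0, \qquad
\varprojlim B_i = \mathbf{Z}, \qquad
\varprojlim C_i = \mathbf{Z}_p .
% The constant system (B_i) has identity (hence surjective) transition
% maps, so it satisfies Mittag-Leffler and \varprojlim^1 B_i = 0.
% Exactness then leaves
0 \to \mathbf{Z} \to \mathbf{Z}_p \to \varprojlim{}^1 A_i \to 0,
% whence \varprojlim^1 A_i \cong \mathbf{Z}_p/\mathbf{Z}.
```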
This applies to the $I$-indexed diagrams in the category of $R$-modules, with $R$ a commutative ring; it is not necessarily true in an arbitrary abelian category (see Roos' "Derived functors of inverse limits revisited" for examples of abelian categories in which $\varprojlim^n$, on diagrams indexed by a countable set, is nonzero for $n > 1$).

== Related concepts and generalizations ==

The categorical dual of an inverse limit is a direct limit (or inductive limit). More general concepts are the limits and colimits of category theory. The terminology is somewhat confusing: inverse limits are a class of limits, while direct limits are a class of colimits.

== Notes ==

== References ==

Bourbaki, Nicolas (1989), Algebra I, Springer, ISBN 978-3-540-64243-5, OCLC 40551484
Bourbaki, Nicolas (1989), General topology: Chapters 1–4, Springer, ISBN 978-3-540-64241-1, OCLC 40551485
Mac Lane, Saunders (September 1998), Categories for the Working Mathematician (2nd ed.), Springer, ISBN 0-387-98403-8
Mitchell, Barry (1972), "Rings with several objects", Advances in Mathematics, 8: 1–161, doi:10.1016/0001-8708(72)90002-3, MR 0294454
Neeman, Amnon (2002), "A counterexample to a 1961 "theorem" in homological algebra (with appendix by Pierre Deligne)", Inventiones Mathematicae, 148 (2): 397–420, doi:10.1007/s002220100197, MR 1906154
Roos, Jan-Erik (1961), "Sur les foncteurs dérivés de lim. Applications", C. R. Acad. Sci. Paris, 252: 3702–3704, MR 0132091
Roos, Jan-Erik (2006), "Derived functors of inverse limits revisited", J. London Math. Soc., Series 2, 73 (1): 65–83, doi:10.1112/S0024610705022416, MR 2197371
Weibel, Charles A. (1994). An introduction to homological algebra (Section 3.5). Cambridge Studies in Advanced Mathematics, Vol. 38. Cambridge University Press. ISBN 978-0-521-55987-4. MR 1269324. OCLC 36131259.
Wikipedia/Limit_topology
In a written language, a logogram (from Ancient Greek logos 'word', and gramma 'that which is drawn or written'), also logograph or lexigraph, is a written character that represents a semantic component of a language, such as a word or morpheme. Chinese characters, as used in Chinese as well as other languages, are logograms, as are Egyptian hieroglyphs and characters in cuneiform script. A writing system that primarily uses logograms is called a logography. Non-logographic writing systems, such as alphabets and syllabaries, are phonemic: their individual symbols represent sounds directly and lack any inherent meaning. However, all known logographies have some phonetic component, generally based on the rebus principle, and the addition of a phonetic component to pure ideographs is considered to be a key innovation in enabling the writing system to adequately encode human language.

== Types of logographic systems ==

Some of the earliest recorded writing systems are logographic; the first historical civilizations of Mesopotamia, Egypt, China and Mesoamerica all used some form of logographic writing. All logographic scripts ever used for natural languages rely on the rebus principle to extend a relatively limited set of logograms: a subset of characters is used for their phonetic values, either consonantal or syllabic. The term logosyllabary is used to emphasize the partially phonetic nature of these scripts when the phonetic domain is the syllable. In Ancient Egyptian hieroglyphs, Ch'olti', and Chinese, there has been the additional development of determinatives, which are combined with logograms to narrow down their possible meaning. In Chinese, they are fused with logographic elements used phonetically; such "radical and phonetic" characters make up the bulk of the script. Ancient Egyptian and Chinese relegated the active use of rebus to the spelling of foreign and dialectal words.
=== Logoconsonantal ===

Logoconsonantal scripts have graphemes that may be extended phonetically according to the consonants of the words they represent, ignoring the vowels. For example, Egyptian was used to write both sȝ 'duck' and sȝ 'son', though it is likely that these words were not pronounced the same except for their consonants. The primary examples of logoconsonantal scripts are Egyptian hieroglyphs, hieratic, and demotic: Ancient Egyptian.

=== Logosyllabic ===

Logosyllabic scripts have graphemes which represent morphemes, often polysyllabic morphemes, but when extended phonetically represent single syllables. They include cuneiform, Anatolian hieroglyphs, Cretan hieroglyphs, Linear A and Linear B, Chinese characters, Maya script, Aztec script, Mixtec script, and the first five phases of the Bamum script.

=== Others ===

A peculiar system of logograms developed within the Pahlavi scripts (developed from the abjad of Aramaic) used to write Middle Persian during much of the Sassanid period; the logograms were composed of letters that spelled out the word in Aramaic but were pronounced as in Persian (for instance, the combination m-l-k would be pronounced "shah"). These logograms, called hozwārishn (a form of heterograms), were dispensed with altogether after the Arab conquest of Persia and the adoption of a variant of the Arabic alphabet.

== Semantic and phonetic dimensions ==

All historical logographic systems include a phonetic dimension, as it is impractical to have a separate basic character for every word or morpheme in a language. In some cases, such as cuneiform as it was used for Akkadian, the vast majority of glyphs are used for their sound values rather than logographically. Many logographic systems also have a semantic/ideographic component (see ideogram), called "determinatives" in the case of Egyptian and "radicals" in the case of Chinese.
Typical Egyptian usage was to augment a logogram, which may potentially represent several words with different pronunciations, with a determinative to narrow down the meaning, and a phonetic component to specify the pronunciation. In the case of Chinese, the vast majority of characters are a fixed combination of a radical that indicates the character's nominal category, plus a phonetic to give an idea of the pronunciation. The Mayan system used logograms with phonetic complements like the Egyptian, while lacking ideographic components.

== Universal logograms ==

Not all logograms are associated with one specific language, and some are not associated with any language at all. The ampersand is a logogram in the Latin script, a combination of the letters "e" and "t": in Latin, et means "and". The ampersand is still used to represent this word today across a variety of languages, standing for the morphemes "and", "y", or "en" for speakers of English, Spanish, or Dutch, respectively. Outside of any one script stands Unicode, a compilation of characters of various meanings whose stated intention is to include every character from every language. It is the generally accepted standard for computer character encoding, but others, like ASCII and Baudot, exist and serve various purposes in digital communication. Many logograms in these databases are ubiquitous and are used on the Internet by users worldwide.

== Chinese characters ==

Chinese scholars have traditionally classified the Chinese characters (hànzì) into six types by etymology. The first two types are "single-body", meaning that the character was created independently of other characters. "Single-body" pictograms and ideograms make up only a small proportion of Chinese logograms. More productive for the Chinese script were the two "compound" methods, i.e. the character was created from assembling different characters.
Despite being called "compounds", these logograms are still single characters, and are written to take up the same amount of space as any other logogram. The first type, and the type most often associated with Chinese writing, is pictograms, which are pictorial representations of the morpheme represented, e.g. 山 for 'mountain'. The second type is ideograms, which attempt to visualize abstract concepts, such as 上 'up' and 下 'down'. Also considered ideograms are pictograms with an ideographic indicator; for instance, 刀 is a pictogram meaning 'knife', while 刃 is an ideogram meaning 'blade'. The third type is radical–radical compounds, in which each element of the character (called a radical) hints at the meaning. For example, 休 'rest' is composed of the characters for 'person' (人) and 'tree' (木), with the intended idea of someone leaning against a tree, i.e. resting. The fourth type is radical–phonetic compounds, in which one component (the radical) indicates the general meaning of the character, and the other (the phonetic) hints at the pronunciation. An example is 樑 (liáng), where the phonetic 梁 liáng indicates the pronunciation of the character and the radical 木 ('wood') indicates its meaning of 'supporting beam'. Characters of this type constitute around 90% of Chinese logograms. The final two types are methods in the usage of characters rather than in their formation. Changed-annotation characters are characters which were originally the same character but have bifurcated through orthographic and often semantic drift. For instance, 樂 / 乐 can mean both 'music' (yuè) and 'pleasure' (lè). Improvisational characters (lit. 'improvised-borrowed-words') come into use when a native spoken word has no corresponding character, and hence another character with the same or a similar sound (and often a close meaning) is "borrowed"; occasionally, the new meaning can supplant the old meaning.
For example, 自 used to be a pictographic word meaning 'nose', but was borrowed to mean 'self', and is now used almost exclusively to mean the latter; the original meaning survives only in stock phrases and more archaic compounds. Because of their derivational process, the entire set of Japanese kana can be considered to be of this type of character, hence the name kana (lit. 'borrowed names'). Example: Japanese 仮名; 仮 is a simplified form of Chinese 假 used in Korea and Japan, and 假借 is the Chinese name for this type of characters. The most productive method of Chinese writing, the radical-phonetic, was made possible by ignoring certain distinctions in the phonetic system of syllables. In Old Chinese, post-final ending consonants /s/ and /ʔ/ were typically ignored; these developed into tones in Middle Chinese, which were likewise ignored when new characters were created. Also ignored were differences in aspiration (between aspirated vs. unaspirated obstruents, and voiced vs. unvoiced sonorants); the Old Chinese difference between type-A and type-B syllables (often described as presence vs. absence of palatalization or pharyngealization); and sometimes, voicing of initial obstruents and/or the presence of a medial /r/ after the initial consonant. In earlier times, greater phonetic freedom was generally allowed. During Middle Chinese times, newly created characters tended to match pronunciation exactly, other than the tone – often by using as the phonetic component a character that itself is a radical-phonetic compound. Due to the long period of language evolution, such component "hints" within characters as provided by the radical-phonetic compounds are sometimes useless and may be misleading in modern usage. As an example, based on 每 'each', pronounced měi in Standard Mandarin, are the characters 侮 'to humiliate', 悔 'to regret', and 海 'sea', pronounced respectively wǔ, huǐ, and hǎi in Mandarin. 
Three of these characters were pronounced very similarly in Old Chinese – /mˤəʔ/ (每), /m̥ˤəʔ/ (悔), and /m̥ˤəʔ/ (海) according to a recent reconstruction by William H. Baxter and Laurent Sagart – but sound changes in the intervening 3,000 years or so (including two different dialectal developments, in the case of the last two characters) have resulted in radically different pronunciations. === Chinese characters used in Japanese and Korean === Within the context of the Chinese language, Chinese characters (known as hanzi) by and large represent words and morphemes rather than pure ideas; however, the adoption of Chinese characters by the Japanese and Korean languages (where they are known as kanji and hanja, respectively) has resulted in some complications to this picture. Many Chinese words, composed of Chinese morphemes, were borrowed into Japanese and Korean together with their character representations; in this case, the morphemes and characters were borrowed together. In other cases, however, characters were borrowed to represent native Japanese and Korean morphemes, on the basis of meaning alone. As a result, a single character can end up representing multiple morphemes of similar meaning but with different origins across several languages. Because of this, kanji and hanja are sometimes described as morphographic writing systems. === Differences in processing of logographic and phonologic writing systems === Because much research on language processing has centered on English and other alphabetically written languages, many theories of language processing have stressed the role of phonology in producing speech. Contrasting logographically coded languages, where a single character is represented phonetically and ideographically, with phonetically/phonemically spelled languages has yielded insights into how different languages rely on different processing mechanisms.
Studies on the processing of logographically coded languages have, amongst other things, looked at neurobiological differences in processing, with one area of particular interest being hemispheric lateralization. Since logographically coded languages are more closely associated with images than alphabetically coded languages, several researchers have hypothesized that right-side activation should be more prominent in logographically coded languages. Although some studies have yielded results consistent with this hypothesis, there are too many contrasting results to make any final conclusions about the role of hemispheric lateralization in orthographically versus phonetically coded languages. Another topic that has been given some attention is differences in processing of homophones. Verdonschot et al. examined differences in the time it took to read a homophone out loud when a picture that was either related or unrelated to a homophonic character was presented before the character. Both Japanese and Chinese homophones were examined. Whereas word production of alphabetically coded languages (such as English) has shown a relatively robust immunity to the effect of context stimuli, Verdonschot et al. found that Japanese homophones seem particularly sensitive to these types of effects. Specifically, reaction times were shorter when participants were presented with a phonologically related picture before being asked to read a target character out loud. An example of a phonologically related stimulus from the study would be for instance when participants were presented with a picture of an elephant, which is pronounced zou in Japanese, before being presented with the Chinese character 造, which is also read zou. No effect of phonologically related context pictures was found for the reaction times for reading Chinese words.
A comparison of the (partially) logographically coded languages Japanese and Chinese is interesting because whereas the Japanese language consists of more than 60% homographic heterophones (characters that can be read two or more different ways), most Chinese characters only have one reading. Because both languages are logographically coded, the difference in latency in reading aloud Japanese and Chinese due to context effects cannot be ascribed to the logographic nature of the writing systems. Instead, the authors hypothesize that the difference in latency times is due to additional processing costs in Japanese, where the reader cannot rely solely on a direct orthography-to-phonology route, but information on a lexical-syntactical level must also be accessed in order to choose the correct pronunciation. This hypothesis is confirmed by studies finding that Japanese Alzheimer's disease patients whose comprehension of characters had deteriorated still could read the words out loud with no particular difficulty. Studies contrasting the processing of English and Chinese homophones in lexical decision tasks have found an advantage for homophone processing in Chinese, and a disadvantage for processing homophones in English. The processing disadvantage in English is usually described in terms of the relative lack of homophones in the English language. When a homophonic word is encountered, the phonological representation of that word is first activated. However, since this is an ambiguous stimulus, a matching at the orthographic/lexical ("mental dictionary") level is necessary before the stimulus can be disambiguated, and the correct pronunciation can be chosen. 
In contrast, in a language (such as Chinese) where many characters with the same reading exist, it is hypothesized that the person reading the character will be more familiar with homophones, and that this familiarity will aid the processing of the character, and the subsequent selection of the correct pronunciation, leading to shorter reaction times when attending to the stimulus. In an attempt to better understand homophony effects on processing, Hino et al. conducted a series of experiments using Japanese as their target language. While controlling for familiarity, they found a processing advantage for homophones over non-homophones in Japanese, similar to what had previously been found in Chinese. The researchers also tested whether orthographically similar homophones would yield a disadvantage in processing, as has been the case with English homophones, but found no evidence for this. It is evident that there is a difference in how homophones are processed in logographically coded and alphabetically coded languages, but whether the advantage for processing of homophones in the logographically coded languages Japanese and Chinese (i.e. their writing systems) is due to the logographic nature of the scripts, or if it merely reflects an advantage for languages with more homophones regardless of script nature, remains to be seen. == Advantages and disadvantages == === Separating writing and pronunciation === The main difference between logograms and other writing systems is that the graphemes are not linked directly to their pronunciation. An advantage of this separation is that understanding of the pronunciation or language of the writer is unnecessary, e.g. 1 is understood regardless of whether it be called one, ichi or wāḥid by its reader. Likewise, people speaking different varieties of Chinese may not understand each other in speaking, but may do so to a significant extent in writing even if they do not write in Standard Chinese.
Therefore, in China, Vietnam, Korea, and Japan before modern times, written communication (筆談) in Classical Chinese was the norm of East Asian international trade and diplomacy. This separation, however, also has the great disadvantage of requiring the memorization of the logograms when learning to read and write, separately from the pronunciation. Though not an inherent feature of logograms, due to its unique history of development Japanese has the added complication that almost every logogram has more than one pronunciation. Conversely, a phonetic character set is written precisely as it is spoken, but with the disadvantage that slight pronunciation differences introduce ambiguities. Many alphabetic systems such as those of Greek, Latin, Italian, Spanish, and Finnish make the practical compromise of standardizing how words are written while maintaining a nearly one-to-one relation between characters and sounds. Orthographies in some other languages, such as English, French, Thai and Tibetan, are more complicated than that; character combinations are often pronounced in multiple ways, usually depending on their history. Hangul, the Korean language's writing system, is an example of an alphabetic script that was designed to replace the logogrammatic hanja in order to increase literacy. The latter is now rarely used, but retains some currency in South Korea, sometimes in combination with hangul. According to government-commissioned research, the most commonly used 3,500 characters listed in the People's Republic of China's "Chart of Common Characters of Modern Chinese" (现代汉语常用字表, Xiàndài Hànyǔ Chángyòngzì Biǎo) cover 99.48% of a two-million-word sample.
As for the case of traditional Chinese characters, 4,808 characters are listed in the "Chart of Standard Forms of Common National Characters" (常用國字標準字體表) by the Ministry of Education of the Republic of China, while 4,759 are listed in the "List of Graphemes of Commonly-Used Chinese Characters" (常用字字形表) by the Education and Manpower Bureau of Hong Kong, both of which are intended to be taught during elementary and junior secondary education. Education after elementary school introduces fewer new characters than new words, which are mostly combinations of two or more already learned characters. === Characters in information technology === Entering complex characters can be cumbersome on electronic devices due to a practical limitation in the number of input keys. There exist various input methods for entering logograms, either by breaking them up into their constituent parts such as with the Cangjie and Wubi methods of typing Chinese, or using phonetic systems such as Bopomofo or Pinyin where the word is entered as pronounced and then selected from a list of logograms matching it. While the former method is (linearly) faster, it is more difficult to learn. With the Chinese alphabet system, however, the strokes forming the logogram are typed as they are normally written, and the corresponding logogram is then entered. Also due to the number of glyphs, in programming and computing in general, more memory is needed to store each grapheme, as the character set is larger. As a comparison, ISO 8859 requires only one byte for each grapheme, while the Basic Multilingual Plane encoded in UTF-8 requires up to three bytes. On the other hand, English words, for example, average five characters and a space per word and thus need six bytes for every word. Since many logograms contain more than one grapheme, it is not clear which is more memory-efficient.
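The byte-count comparison can be made concrete with a short sketch (Python shown purely for illustration; the sample words "休息" and "hello " are arbitrary examples of my own choosing):

```python
# A rough byte-count comparison of the encodings discussed above.
chinese = "休息"    # two BMP CJK characters ('rest')
english = "hello "  # five letters plus the trailing space

# ISO 8859-1: one byte per character (Latin text only).
print(len(english.encode("iso-8859-1")))  # 6

# UTF-8: BMP CJK characters take three bytes each.
print(len(chinese.encode("utf-8")))       # 6

# UTF-16: the same characters take two bytes each.
print(len(chinese.encode("utf-16-le")))   # 4
```

As the output shows, the two-character Chinese word and the six-character English word can occupy the same six bytes in their respective typical encodings, which is why the comparison in the text is inconclusive.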
Variable-width encodings allow a unified character encoding standard such as Unicode to use only the bytes necessary to represent a character, reducing the overhead that results from merging large character sets with smaller ones. == See also == Dongba symbols Emoji Logo Symbol Syllabogram Wingdings Rebus, the use of pictures to represent words or parts of words Sitelen Pona, a constructed logography
Wikipedia/Logograph
In mathematics and computer science, an algorithm is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning). In contrast, a heuristic is an approach to solving problems without well-defined correct or optimal results. For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics as there is no truly "correct" recommendation. As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. == Etymology == Around 825 AD, Persian scientist and polymath Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic"). In the early 12th century, Latin translations of these texts involving the Hindu–Arabic numeral system and arithmetic appeared, for example Liber Alghoarismi de practica arismetrice, attributed to John of Seville, and Liber Algorismi de numero Indorum, attributed to Adelard of Bath.
Here, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name; the text starts with the phrase Dixit Algorismi, or "Thus spoke Al-Khwarizmi". The word algorism in English came to mean the use of place-value notation in calculations; it occurs in the Ancrene Wisse from circa 1225. By the time Geoffrey Chaucer wrote The Canterbury Tales in the late 14th century, he used a variant of the same word in describing augrym stones, stones used for place-value calculation. In the 15th century, under the influence of the Greek word ἀριθμός (arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus. By 1596, this form of the word was used in English, as algorithm, by Thomas Hood. == Definition == One informal definition is "a set of rules that precisely defines a sequence of operations", which would include all computer programs (including programs that do not perform numeric calculations), and any prescribed bureaucratic procedure or cook-book recipe. In general, a program is an algorithm only if it stops eventually—even though infinite loops may sometimes prove desirable. Boolos and Jeffrey (1974, 1999) define an algorithm to be an explicit set of instructions for determining an output, that can be followed by a computing machine or a human who could only carry out specific elementary operations on symbols. Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain performing arithmetic or an insect looking for food), in an electrical circuit, or a mechanical device. == History == === Ancient algorithms === Step-by-step procedures for solving mathematical problems have been recorded since antiquity.
These include Babylonian mathematics (around 2500 BC), Egyptian mathematics (around 1550 BC), Indian mathematics (around 800 BC and later), the Ifa Oracle (around 500 BC), Greek mathematics (around 240 BC), Chinese mathematics (around 200 BC and later), and Arabic mathematics (around 800 AD). The earliest evidence of algorithms is found in ancient Mesopotamian mathematics. A Sumerian clay tablet found in Shuruppak near Baghdad and dated to c. 2500 BC describes the earliest division algorithm. During the Hammurabi dynasty c. 1800 – c. 1600 BC, Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events. Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus c. 1550 BC. Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus, and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC). Examples of ancient Indian mathematics included the Shulba Sutras, the Kerala School, and the Brāhmasphuṭasiddhānta. The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm. === Computers === ==== Weight-driven clocks ==== Bolter credits the invention of the weight-driven clock as "the key invention [of Europe in the Middle Ages]," specifically the verge escapement mechanism producing the tick and tock of a mechanical clock.
"The accurate automatic machine" led immediately to "mechanical automata" in the 13th century and "computational machines"—the difference and analytical engines of Charles Babbage and Ada Lovelace in the mid-19th century. Lovelace designed the first algorithm intended for processing on a computer, Babbage's analytical engine, which is the first device considered a real Turing-complete computer instead of just a calculator. Although the full implementation of Babbage's second device was not realized for decades after her lifetime, Lovelace has been called "history's first programmer". ==== Electromechanical relay ==== Bell and Newell (1971) write that the Jacquard loom, a precursor to Hollerith cards (punch cards), and "telephone switching technologies" led to the development of the first computers. By the mid-19th century, the telegraph, the precursor of the telephone, was in use throughout the world. By the late 19th century, the ticker tape (c. 1870s) was in use, as were Hollerith cards (c. 1890). Then came the teleprinter (c. 1910) with its punched-paper use of Baudot code on tape. Telephone-switching networks of electromechanical relays were invented in 1835. These led to the invention of the digital adding device by George Stibitz in 1937. While working in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device". === Formalization === In 1928, a partial formalization of the modern concept of algorithms began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert. Later formalizations were framed as attempts to define "effective calculability" or "effective method". 
Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. == Representations == Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts, and control tables are structured expressions of algorithms that avoid common ambiguities of natural language. Programming languages are primarily for expressing algorithms in a computer-executable form but are also used to define or document algorithms. === Turing machines === There are many possible representations and Turing machine programs can be expressed as a sequence of machine tables (see finite-state machine, state-transition table, and control table for more), as flowcharts and drakon-charts (see state diagram for more), as a form of rudimentary machine code or assembly code called "sets of quadruples", and more. Algorithm representations can also be classified into three accepted levels of Turing machine description: high-level description, implementation description, and formal description. A high-level description describes the qualities of the algorithm itself, ignoring how it is implemented on the Turing machine. An implementation description describes the general manner in which the machine moves its head and stores data to carry out the algorithm, but does not give exact states. In the most detail, a formal description gives the exact state table and list of transitions of the Turing machine. 
=== Flowchart representation === The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). It has four primary symbols: arrows showing program flow, rectangles (SEQUENCE, GOTO), diamonds (IF-THEN-ELSE), and dots (OR-tie). Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure. == Algorithmic analysis == It is often important to know how much time, storage, or other cost an algorithm may require. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm that adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. The algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. If the space required to store the input numbers is not counted, it has a space requirement of O(1), otherwise O(n) is required. Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted lists or arrays. === Formal versus empirical === The analysis and study of algorithms is a discipline of computer science. Algorithms are often studied abstractly, without referencing any specific programming language or implementation. Algorithm analysis resembles other mathematical disciplines as it focuses on the algorithm's properties, not implementation. Pseudocode is typical for analysis as it is a simple and general representation.
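The summation example above can be sketched in a few lines (Python shown for illustration): the algorithm keeps only a running total and its position in the input, so its auxiliary space is O(1) while its running time grows linearly with n.

```python
def sum_list(numbers):
    """Add up a list of n numbers in O(n) time and O(1) extra space."""
    total = 0          # the sum of all elements seen so far
    for x in numbers:  # the current position advances through the input
        total += x
    return total

print(sum_list([3, 1, 4, 1, 5]))  # 14
```

Note that the O(1) space bound counts only the two working values (`total` and the loop position), not the storage for the input list itself, matching the distinction drawn in the text.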
Most algorithms are implemented on particular hardware/software platforms and their algorithmic efficiency is tested using real code. The efficiency of a particular algorithm may be insignificant for many "one-off" problems but it may be critical for algorithms designed for fast interactive, commercial, or long-life scientific usage. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign. Empirical testing is useful for uncovering unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization. Empirical tests cannot replace formal analysis, though, and are non-trivial to perform fairly. === Execution efficiency === To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power. === Best case and worst case === The best case of an algorithm refers to the scenario or input for which the algorithm or data structure takes the least time and resources to complete its tasks. The worst case of an algorithm is the case that causes the algorithm or data structure to consume the maximum period of time and computational resources. == Design == Algorithm design is a method or mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operations research.
Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; the big O notation is used to describe e.g., an algorithm's run-time growth as the size of its input increases. === Structured programming === Per the Church–Turing thesis, any algorithm can be computed by any Turing-complete model. Turing completeness only requires four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. However, Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm–Jacopini canonical structures: SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction. == Legal status == By themselves, algorithms are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), so algorithms are not patentable (as in Gottschalk v. Benson). However, practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is controversial, and there are criticized patents involving algorithms, especially data compression algorithms, such as Unisys's LZW patent.
Additionally, some cryptographic algorithms have export restrictions (see export of cryptography). == Classification == === By implementation === Recursion A recursive algorithm invokes itself repeatedly until meeting a termination condition and is a common functional programming method. Iterative algorithms use repetitions such as loops or data structures like stacks to solve problems. Problems may be suited for one implementation or the other. The Tower of Hanoi is a puzzle commonly solved using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa. Serial, parallel or distributed Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time on serial computers. Serial algorithms are designed for these environments, unlike parallel or distributed algorithms. Parallel algorithms take advantage of computer architectures where multiple processors can work on a problem at the same time. Distributed algorithms use multiple machines connected via a computer network. Parallel and distributed algorithms divide the problem into subproblems and collect the results back together. Resource consumption in these algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable, but some problems have no parallel algorithms and are called inherently serial problems. Deterministic or non-deterministic Deterministic algorithms solve the problem with exact decisions at every step; whereas non-deterministic algorithms solve problems via guessing. Guesses are typically made more accurate through the use of heuristics. 
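The recursive/iterative equivalence noted above can be sketched with the Tower of Hanoi (Python shown for illustration; the function names are my own). The recursive version mirrors the puzzle's structure directly, while the iterative version replaces the call stack with an explicit stack and produces the identical move sequence.

```python
def hanoi_recursive(n, src, aux, dst, moves):
    """Move n disks from src to dst; record each move in `moves`."""
    if n == 0:
        return
    hanoi_recursive(n - 1, src, dst, aux, moves)  # clear the way
    moves.append((src, dst))                      # move the largest disk
    hanoi_recursive(n - 1, aux, src, dst, moves)  # restack on top of it

def hanoi_iterative(n, src, aux, dst):
    """Same algorithm, with an explicit stack instead of recursion."""
    moves = []
    stack = [("solve", n, src, aux, dst)]
    while stack:
        op, *args = stack.pop()
        if op == "move":
            moves.append(tuple(args))
        else:
            n, src, aux, dst = args
            if n > 0:
                # Pushed in reverse so subproblems pop in the same
                # order the recursive version executes them.
                stack.append(("solve", n - 1, aux, src, dst))
                stack.append(("move", src, dst))
                stack.append(("solve", n - 1, src, dst, aux))
    return moves

recursive_moves = []
hanoi_recursive(3, "A", "B", "C", recursive_moves)
print(recursive_moves == hanoi_iterative(3, "A", "B", "C"))  # True
print(len(recursive_moves))  # 7 moves, i.e. 2**3 - 1
```

The explicit-stack rewrite is the standard mechanical transformation between the two forms, illustrating why every recursive version has an (often less transparent) iterative equivalent.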
Exact or approximate While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Such algorithms have practical value for many hard problems. An example is the knapsack problem, where there is a set of items and the goal is to pack the knapsack so as to get the maximum total value. Each item has some weight and some value. The total weight that can be carried is no more than some fixed number X. So, the solution must consider the weights of items as well as their value. Quantum algorithm Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms that seem inherently quantum or use some essential feature of quantum computing such as quantum superposition or quantum entanglement. === By design paradigm === Another way of classifying algorithms is by their design methodology or paradigm. Some common paradigms are: Brute-force or exhaustive search Brute force is a problem-solving method of systematically trying every possible option until the optimal solution is found. This approach can be very time-consuming, testing every possible combination of variables. It is often used when other methods are unavailable or too complex. Brute force can solve a variety of problems, including finding the shortest path between two points and cracking passwords. Divide and conquer A divide-and-conquer algorithm repeatedly reduces a problem to one or more smaller instances of itself (usually recursively) until the instances are small enough to solve easily. Merge sorting is an example of divide and conquer, where an unordered list is repeatedly split into smaller lists, which are sorted in the same way and then merged. A simpler variant of divide and conquer, called prune and search or decrease and conquer, solves one smaller instance of itself and does not require a merge step. 
An example of a prune and search algorithm is the binary search algorithm. Search and enumeration Many problems (such as playing chess) can be modelled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration, and backtracking. Randomized algorithm Such algorithms make some choices randomly (or pseudo-randomly). They find approximate solutions when finding exact solutions may be impractical (see heuristic method below). For some problems, the fastest approximations must involve some randomness. Whether randomized algorithms with polynomial time complexity can be the fastest algorithm for some problems is an open question known as the P versus NP problem. There are two large classes of such algorithms: Monte Carlo algorithms return a correct answer with high probability. For example, RP is the subclass of these that run in polynomial time. Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bounded, e.g. ZPP. Reduction of complexity This technique transforms difficult problems into better-known problems solvable with (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithms. For example, one selection algorithm finds the median of an unsorted list by first sorting the list (the expensive portion), and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer. Backtracking In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution. 
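The backtracking approach just described, building partial solutions incrementally and abandoning them as soon as they cannot lead to a valid full solution, can be sketched with the classic n-queens puzzle (a standard textbook illustration, not drawn from this article; the function names are our own):

```python
def n_queens(n):
    """Count and collect placements of n non-attacking queens by backtracking.

    A partial solution is a list of column positions, one per placed row;
    it is abandoned as soon as a newly placed queen attacks an earlier one.
    """
    solutions = []

    def extend(cols):
        row = len(cols)
        if row == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            # Prune: reject any column that shares a column or diagonal
            # with a queen already placed in an earlier row.
            if any(c == col or abs(c - col) == row - r
                   for r, c in enumerate(cols)):
                continue
            cols.append(col)
            extend(cols)
            cols.pop()  # abandon this extension and try the next column

    extend([])
    return solutions
```

The `cols.pop()` step is the "abandonment": the algorithm retreats to the previous partial solution instead of enumerating all n^n placements by brute force.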
=== Optimization problems === For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following: Linear programming When searching for optimal solutions to a linear function bound by linear equality and inequality constraints, the constraints can be used directly to produce optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem also requires that any of the unknowns be integers, then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem. Dynamic programming When a problem shows optimal substructure—meaning the optimal solution can be constructed from optimal solutions to subproblems—and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions. For example, in the Floyd–Warshall algorithm, the shortest path between a start and a goal vertex in a weighted graph can be found using the shortest paths to the goal from all adjacent vertices. Dynamic programming and memoization go together. Unlike divide and conquer, dynamic programming subproblems often overlap. The difference between dynamic programming and simple recursion is the caching or memoization of recursive calls. 
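The memoization just mentioned can be sketched in a few lines with the standard Fibonacci example (illustrative only; Python's `functools.lru_cache` is used here as the cache, which is one of several equivalent ways to memoize):

```python
from functools import lru_cache

def fib_plain(n):
    """Naive recursion: overlapping subproblems are recomputed,
    giving exponentially many calls."""
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Same recursion with cached calls: each subproblem is solved
    once, giving only O(n) distinct calls."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
```

The two functions compute identical values, but `fib_memo(200)` returns instantly while `fib_plain(200)` is infeasible, illustrating the exponential-to-polynomial reduction that caching provides when subproblems overlap.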
When subproblems are independent and do not repeat, memoization does not help; hence dynamic programming is not applicable to all complex problems. Using memoization, dynamic programming reduces the complexity of many problems from exponential to polynomial. The greedy method Greedy algorithms, similarly to dynamic programming, work by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution and improve it by making small modifications. For some problems, they always find the optimal solution but for others they may stop at local optima. The most popular use of greedy algorithms is finding minimum spanning trees of graphs without negative cycles. Huffman coding and the algorithms of Kruskal, Prim, and Sollin are greedy algorithms that can solve this optimization problem. The heuristic method In optimization problems, heuristic algorithms find solutions close to the optimal solution when finding the optimal solution is impractical. These algorithms get closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. They can ideally find a solution very close to the optimal solution in a relatively short time. These algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm. == Examples == One of the simplest algorithms finds the largest number in a list of numbers of random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be described in plain English as: High-level description: If a set of numbers is empty, then there is no highest number. Assume the first number in the set is the largest. 
For each remaining number in the set: if this number is greater than the current largest, it becomes the new largest. When there are no unchecked numbers left in the set, consider the current largest number to be the largest in the set. (Quasi-)formal description: Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code: == See also == == Notes == == Bibliography == == Further reading == == External links == "Algorithm". Encyclopedia of Mathematics. EMS Press. 2001 [1994]. Weisstein, Eric W. "Algorithm". MathWorld. Dictionary of Algorithms and Data Structures – National Institute of Standards and Technology Algorithm repositories The Stony Brook Algorithm Repository – State University of New York at Stony Brook Collected Algorithms of the ACM – Associations for Computing Machinery The Stanford GraphBase Archived December 6, 2015, at the Wayback Machine – Stanford University
Wikipedia/algorithm
In computer science, empirical algorithmics (or experimental algorithmics) is the practice of using empirical methods to study the behavior of algorithms. The practice combines algorithm development and experimentation: algorithms are not just designed, but also implemented and tested in a variety of situations. In this process, an initial design of an algorithm is analyzed so that the algorithm may be developed in a stepwise manner. == Overview == Methods from empirical algorithmics complement theoretical methods for the analysis of algorithms. Through the principled application of empirical methods, particularly from statistics, it is often possible to obtain insights into the behavior of algorithms such as high-performance heuristic algorithms for hard combinatorial problems that are (currently) inaccessible to theoretical analysis. Empirical methods can also be used to achieve substantial improvements in algorithmic efficiency. American computer scientist Catherine McGeoch identifies two main branches of empirical algorithmics: the first (known as empirical analysis) deals with the analysis and characterization of the behavior of algorithms, and the second (known as algorithm design or algorithm engineering) is focused on empirical methods for improving the performance of algorithms. The former often relies on techniques and tools from statistics, while the latter is based on approaches from statistics, machine learning and optimization. Dynamic analysis tools, typically performance profilers, are commonly used when applying empirical methods for the selection and refinement of algorithms of various types for use in various contexts. Research in empirical algorithmics is published in several journals, including the ACM Journal on Experimental Algorithmics (JEA) and the Journal of Artificial Intelligence Research (JAIR). Besides Catherine McGeoch, well-known researchers in empirical algorithmics include Bernard Moret, Giuseppe F. Italiano, Holger H. 
Hoos, David S. Johnson, and Roberto Battiti. == Performance profiling in the design of complex algorithms == In the absence of empirical algorithmics, analyzing the complexity of an algorithm can involve various theoretical methods applicable to various situations in which the algorithm may be used. Memory and cache considerations are often significant factors to be considered in the theoretical choice of a complex algorithm, or the approach to its optimization, for a given purpose. Performance profiling is a dynamic program analysis technique typically used for finding and analyzing bottlenecks in an entire application's code or for analyzing an entire application to identify poorly performing code. A profiler can reveal the code most relevant to an application's performance issues. A profiler may help to determine when to choose one algorithm over another in a particular situation. When an individual algorithm is profiled, as with complexity analysis, memory and cache considerations are often more significant than instruction counts or clock cycles; however, the profiler's findings can be considered in light of how the algorithm accesses data rather than the number of instructions it uses. Profiling may provide intuitive insight into an algorithm's behavior by revealing performance findings as a visual representation. Performance profiling has been applied, for example, during the development of algorithms for matching wildcards. Early algorithms for matching wildcards, such as Rich Salz' wildmat algorithm, typically relied on recursion, a technique criticized on grounds of performance. The Krauss matching wildcards algorithm was developed based on an attempt to formulate a non-recursive alternative using test cases followed by optimizations suggested via performance profiling, resulting in a new algorithmic strategy conceived in light of the profiling along with other considerations. 
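The profiling workflow described above, running candidate implementations under a profiler and letting the measurements guide algorithm selection, can be sketched with Python's built-in cProfile module. The two candidate functions here are hypothetical stand-ins for the competing algorithms, not code from any of the cited work:

```python
import cProfile
import io
import pstats

def candidate_a(data):
    """Quadratic-time duplicate detection (stand-in for a naive algorithm)."""
    return [x for i, x in enumerate(data) if x in data[:i]]

def candidate_b(data):
    """Linear-time equivalent using a set (stand-in for a refined algorithm)."""
    seen, dups = set(), []
    for x in data:
        if x in seen:
            dups.append(x)
        seen.add(x)
    return dups

def profile(func, data):
    """Run func on data under the profiler and return the formatted
    statistics, sorted by cumulative time."""
    pr = cProfile.Profile()
    pr.enable()
    func(data)
    pr.disable()
    buf = io.StringIO()
    pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
    return buf.getvalue()

data = list(range(2000)) * 2
print(profile(candidate_a, data))  # the report shows where the time goes
print(profile(candidate_b, data))
```

Comparing the two reports on the same input is the empirical step: the profiler's call counts and timings, rather than a purely theoretical analysis, indicate which candidate to prefer in this context.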
Profilers that collect data at the level of basic blocks or that rely on hardware assistance provide results that can be accurate enough to assist software developers in optimizing algorithms for a particular computer or situation. Performance profiling can aid developer understanding of the characteristics of complex algorithms applied in complex situations, such as coevolutionary algorithms applied to arbitrary test-based problems, and may help lead to design improvements. == See also == Algorithm engineering Analysis of algorithms Profiling (computer programming) Performance tuning Software development == References ==
Wikipedia/Experimental_algorithmics
The Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) is a collaboration between Rutgers University, Princeton University, and the research firms AT&T, Bell Labs, Applied Communication Sciences, and NEC. It was founded in 1989 with money from the National Science Foundation. Its offices are located on the Rutgers campus, and 250 members from the six institutions form its permanent members. DIMACS is devoted to both theoretical development and practical applications of discrete mathematics and theoretical computer science. It engages in a wide variety of evangelism including encouraging, inspiring, and facilitating researchers in these subject areas, and sponsoring conferences and workshops. Fundamental research in discrete mathematics has applications in diverse fields including cryptology, engineering, networking, and management decision support. Past directors have included Fred S. Roberts, Daniel Gorenstein, András Hajnal, and Rebecca N. Wright. == The DIMACS challenges == DIMACS sponsors implementation challenges to determine practical algorithm performance on problems of interest. There have been twelve DIMACS challenges so far. 1990−1991: Network flows and matching 1992: NP-hard problems: Max Clique, Graph Coloring, and SAT 1993−1994: Parallel algorithms for combinatorial problems 1994−1995: Computational biology: fragment assembly and genome rearrangement 1995−1996: Priority queues, dictionaries, and multidimensional point sets 1998: Near-neighbor searches 2000: Semidefinite and related optimization problems 2001: The traveling salesman problem 2005: The shortest-path problem 2011−2012: Graph partitioning and graph clustering 2013−2014: Steiner tree problems 2020−2021: Vehicle routing problems == References == == External links == DIMACS Website
Wikipedia/Center_for_Discrete_Mathematics_and_Theoretical_Computer_Science
The Library of Efficient Data types and Algorithms (LEDA) is a proprietarily-licensed software library providing C++ implementations of a broad variety of algorithms for graph theory and computational geometry. It was originally developed by the Max Planck Institute for Informatics Saarbrücken. From 2001 to 2022 LEDA was further developed and commercially distributed by Algorithmic Solutions Software GmbH. == Technical details == === Data types === ==== Numerical representations ==== LEDA provides four additional numerical representations alongside those built into C++: integer, rational, bigfloat, and real. LEDA's integer type offers an improvement over the built-in int datatype by eliminating the problem of overflow, at the cost of unbounded memory usage for increasingly large numbers. It follows that LEDA's rational type has the same resistance to overflow because it is based directly on the mathematical definition of rational as the quotient of two integers. The bigfloat type improves on the C++ floating-point types by allowing the significand (also commonly called the mantissa) to be set to an arbitrary level of precision instead of following the IEEE standard. LEDA's real type allows for precise representations of real numbers, and can be used to compute the sign of a radical expression. === Error checking === LEDA makes use of certifying algorithms to demonstrate that the results of a function are mathematically correct. In addition to the input and output of a function, LEDA computes a third "witness" value which can be used as an input to checker programs to validate the output of the function. LEDA's checker programs were developed in Simpl, an imperative programming language, and validated using Isabelle/HOL, a software tool for checking the correctness of mathematical proofs. The nature of a witness value often depends on the type of mathematical calculation being performed. 
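The witness-and-checker pattern described above can be illustrated outside of LEDA with a classic small example (our own sketch in Python, not LEDA code, assuming positive integer inputs): the extended Euclidean algorithm returns Bézout coefficients as a witness alongside the gcd, and a tiny checker validates the claimed result without re-deriving it:

```python
def certified_gcd(a, b):
    """Return (g, x, y) where g = gcd(a, b) and the witness
    coefficients satisfy a*x + b*y = g (extended Euclid)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

def check_gcd(a, b, g, x, y):
    """Checker: trusts no internals of certified_gcd, only verifies
    the certificate. g divides both inputs, and a*x + b*y = g rules
    out any larger common divisor."""
    return g > 0 and a % g == 0 and b % g == 0 and a * x + b * y == g
```

As with LEDA's checkers, a developer only needs to trust the few lines of `check_gcd` to be confident in the output, not the full algorithm that produced it.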
For LEDA's planarity testing function, if the graph is planar, a combinatorial embedding is produced as a witness; if not, a Kuratowski subgraph is returned. These values can then be passed directly to checker functions to confirm their validity. A developer only needs to understand the inner workings of these checker functions to be confident that the result is correct, which greatly reduces the learning curve compared to gaining a full understanding of LEDA's planarity testing algorithm. == Use cases == LEDA is useful in the field of computational geometry due to its support for exact representations of real numbers via the leda_real datatype. This provides an advantage in accuracy over floating-point arithmetic. For example, calculations involving radicals are considerably more accurate when computed using leda_real. Algorithms such as parametric search, a technique for solving a subset of optimization problems, and others under the real RAM model of computation rely upon real number parameters to produce accurate results. == Alternatives == Boost and LEMON are potential alternative libraries, with some benefits in efficiency due to different implementations of fundamental algorithms and data structures. However, neither employs a similar set of correctness checks, particularly when dealing with floating-point arithmetic. == References == == External links == Official website
Wikipedia/Library_of_Efficient_Data_types_and_Algorithms
In quantum field theory, the Wightman distributions can be analytically continued to analytic functions in Euclidean space with the domain restricted to ordered n-tuples in ℝ^d that are pairwise distinct. These functions are called the Schwinger functions (named after Julian Schwinger) and they are real-analytic, symmetric under the permutation of arguments (antisymmetric for fermionic fields), Euclidean covariant and satisfy a property known as reflection positivity. Properties of Schwinger functions are known as Osterwalder–Schrader axioms (named after Konrad Osterwalder and Robert Schrader). Schwinger functions are also referred to as Euclidean correlation functions. == Osterwalder–Schrader axioms == Here we describe the Osterwalder–Schrader (OS) axioms for a Euclidean quantum field theory of a Hermitian scalar field φ(x), x ∈ ℝ^d. Note that a typical quantum field theory will contain infinitely many local operators, including also composite operators, and their correlators should also satisfy OS axioms similar to the ones described below. The Schwinger functions of φ are denoted as S_n(x_1, …, x_n) ≡ ⟨φ(x_1) φ(x_2) … φ(x_n)⟩, x_k ∈ ℝ^d. The OS axioms are numbered (E0)–(E4) and have the following meaning: (E0) Temperedness (E1) Euclidean covariance (E2) Positivity (E3) Symmetry (E4) Cluster property === Temperedness === The temperedness axiom (E0) says that Schwinger functions are tempered distributions away from coincident points. This means that they can be integrated against Schwartz test functions which vanish with all their derivatives at configurations where two or more points coincide. 
It can be shown from this axiom and other OS axioms (but not the linear growth condition) that Schwinger functions are in fact real-analytic away from coincident points. === Euclidean covariance === The Euclidean covariance axiom (E1) says that Schwinger functions transform covariantly under rotations and translations, namely: S_n(x_1, …, x_n) = S_n(Rx_1 + b, …, Rx_n + b) for an arbitrary rotation matrix R ∈ SO(d) and an arbitrary translation vector b ∈ ℝ^d. OS axioms can be formulated for Schwinger functions of fields transforming in arbitrary representations of the rotation group. === Symmetry === The symmetry axiom (E3) says that Schwinger functions are invariant under permutations of points: S_n(x_1, …, x_n) = S_n(x_π(1), …, x_π(n)), where π is an arbitrary permutation of {1, …, n}. Schwinger functions of fermionic fields are instead antisymmetric; for them this equation would have a ± sign equal to the signature of the permutation. === Cluster property === The cluster property (E4) says that the Schwinger function S_{p+q} reduces to the product S_p S_q if two groups of points are separated from each other by a large constant translation: lim_{b→∞} S_{p+q}(x_1, …, x_p, x_{p+1}+b, …, x_{p+q}+b) = S_p(x_1, …, x_p) S_q(x_{p+1}, …, x_{p+q}). The limit is understood in the sense of distributions. 
There is also a technical assumption that the two groups of points lie on two sides of the x^0 = 0 hyperplane, while the vector b is parallel to it: x_1^0, …, x_p^0 > 0, x_{p+1}^0, …, x_{p+q}^0 < 0, b^0 = 0. === Reflection positivity === The positivity axiom (E2) asserts the following property, called (Osterwalder–Schrader) reflection positivity. Pick any arbitrary coordinate τ and pick a test function f_N with N points as its arguments. Assume f_N has its support in the "time-ordered" subset of N points with 0 < τ_1 < ... < τ_N. Choose one such f_N for each positive N, with the f's being zero for all N larger than some integer M. Given a point x, let x^θ be the reflected point about the τ = 0 hyperplane. Then, ∑_{m,n} ∫ d^d x_1 ⋯ d^d x_m d^d y_1 ⋯ d^d y_n S_{m+n}(x_1, …, x_m, y_1, …, y_n) f_m(x_1^θ, …, x_m^θ)* f_n(y_1, …, y_n) ≥ 0, where * represents complex conjugation. Sometimes in the theoretical physics literature reflection positivity is stated as the requirement that the Schwinger function of arbitrary even order should be non-negative if points are inserted symmetrically with respect to the τ = 0 hyperplane: S_{2n}(x_1, …, x_n, x_n^θ, …, x_1^θ) ≥ 0. This property indeed follows from reflection positivity but it is weaker than full reflection positivity. ==== Intuitive understanding ==== One way of (formally) constructing Schwinger functions which satisfy the above properties is through the Euclidean path integral. 
In particular, Euclidean path integrals (formally) satisfy reflection positivity. Let F be any polynomial functional of the field φ which only depends upon the value of φ(x) for those points x whose τ coordinates are nonnegative. Then ∫ Dφ F[φ(x)] F[φ(x^θ)]* e^{−S[φ]} = ∫ Dφ_0 ∫_{φ_+(τ=0)=φ_0} Dφ_+ F[φ_+] e^{−S_+[φ_+]} ∫_{φ_−(τ=0)=φ_0} Dφ_− F[(φ_−)^θ]* e^{−S_−[φ_−]}. Since the action S is real and can be split into S_+, which only depends on φ on the positive half-space (φ_+), and S_−, which only depends upon φ on the negative half-space (φ_−), and if S also happens to be invariant under the combined action of taking a reflection and complex conjugating all the fields, then the previous quantity has to be nonnegative. == Osterwalder–Schrader theorem == The Osterwalder–Schrader theorem states that Euclidean Schwinger functions which satisfy the above axioms (E0)–(E4) and an additional property (E0') called the linear growth condition can be analytically continued to Lorentzian Wightman distributions which satisfy the Wightman axioms and thus define a quantum field theory. 
=== Linear growth condition === This condition, called (E0'), asserts that when the Schwinger function of order n is paired with an arbitrary Schwartz test function f which vanishes at coincident points, we have the following bound: |S_n(f)| ≤ σ_n |f|_{C·n}, where C ∈ ℕ is an integer constant, |f|_{C·n} is the Schwartz-space seminorm of order N = C·n, i.e. |f|_N = sup_{|α| ≤ N, x ∈ ℝ^d} |(1 + |x|)^N D^α f(x)|, and σ_n is a sequence of constants of factorial growth, i.e. σ_n ≤ A(n!)^B with some constants A, B. The linear growth condition is subtle as it has to be satisfied for all Schwinger functions simultaneously. It also has not been derived from the Wightman axioms, so that the system of OS axioms (E0)–(E4) plus the linear growth condition (E0') appears to be stronger than the Wightman axioms. === History === At first, Osterwalder and Schrader claimed a stronger theorem that the axioms (E0)–(E4) by themselves imply the Wightman axioms; however, their proof contained an error which could not be corrected without adding extra assumptions. Two years later they published a new theorem, with the linear growth condition added as an assumption, and a correct proof. The new proof is based on a complicated inductive argument (proposed also by Vladimir Glaser), by which the region of analyticity of Schwinger functions is gradually extended towards the Minkowski space, and Wightman distributions are recovered as a limit. The linear growth condition (E0') is crucially used to show that the limit exists and is a tempered distribution. 
Osterwalder's and Schrader's paper also contains another theorem replacing (E0') by yet another assumption, called (E0)ˇ. This other theorem is rarely used, since (E0)ˇ is hard to check in practice. == Other axioms for Schwinger functions == === Axioms by Glimm and Jaffe === An alternative approach to the axiomatization of Euclidean correlators is described by Glimm and Jaffe in their book. In this approach one assumes that one is given a measure dμ on the space of distributions φ ∈ D′(ℝ^d). One then considers a generating functional S(f) = ∫ e^{φ(f)} dμ, f ∈ D(ℝ^d), which is assumed to satisfy properties OS0-OS4: (OS0) Analyticity. This asserts that z = (z_1, …, z_n) ↦ S(∑_{i=1}^{n} z_i f_i) is an entire analytic function of z ∈ ℂ^n for any collection of n compactly supported test functions f_i ∈ D(ℝ^d). Intuitively, this means that the measure dμ decays faster than any exponential. (OS1) Regularity. This demands a growth bound for S(f) in terms of f, such as |S(f)| ≤ exp(C ∫ d^d x |f(x)|). See Glimm and Jaffe for the precise condition. (OS2) Euclidean invariance. This says that the functional S(f) is invariant under Euclidean transformations f(x) ↦ f(Rx + b). (OS3) Reflection positivity. Take a finite sequence of test functions f_i ∈ D(ℝ^d) which are all supported in the upper half-space, i.e. 
at x^0 > 0. Denote by θf_i(x) = f_i(θx), where θ is the reflection operation defined above. This axiom says that the matrix M_{ij} = S(f_i + θf_j) has to be positive semidefinite. (OS4) Ergodicity. The time translation semigroup acts ergodically on the measure space (D′(ℝ^d), dμ). See Glimm and Jaffe for the precise condition. ==== Relation to Osterwalder–Schrader axioms ==== Although the above axioms were named by Glimm and Jaffe (OS0)–(OS4) in honor of Osterwalder and Schrader, they are not equivalent to the Osterwalder–Schrader axioms. Given (OS0)–(OS4), one can define Schwinger functions of φ as moments of the measure dμ, and show that these moments satisfy the Osterwalder–Schrader axioms (E0)–(E4) and also the linear growth condition (E0'). Then one can appeal to the Osterwalder–Schrader theorem to show that the Wightman functions are tempered distributions. Alternatively, and much easier, one can derive the Wightman axioms directly from (OS0)–(OS4). Note however that the full quantum field theory will contain infinitely many other local operators apart from φ, such as φ^2, φ^4 and other composite operators built from φ and its derivatives. It is not easy to extract these Schwinger functions from the measure dμ and show that they satisfy OS axioms, as should be the case. To summarize, the axioms called (OS0)–(OS4) by Glimm and Jaffe are stronger than the OS axioms as far as the correlators of the field φ are concerned, but weaker than the full set of OS axioms since they don't say much about correlators of composite operators. 
=== Nelson's axioms === These axioms were proposed by Edward Nelson. See also their description in the book of Barry Simon. Like in the above axioms by Glimm and Jaffe, one assumes that the field φ ∈ D′(ℝ^d) is a random distribution with a measure dμ. This measure is sufficiently regular so that the field φ has the regularity of a Sobolev space of negative derivative order. The crucial feature of these axioms is to consider the field restricted to a surface. One of the axioms is the Markov property, which formalizes the intuitive notion that the state of the field inside a closed surface depends only on the state of the field on the surface. == See also == Wick rotation Axiomatic quantum field theory Wightman axioms == References ==
Wikipedia/Schwinger_function
A galactic algorithm is an algorithm with record-breaking theoretical (asymptotic) performance, but which is not used due to practical constraints. Typical reasons are that the performance gains only appear for problems that are so large they never occur, or the algorithm's complexity outweighs a relatively small gain in performance. Galactic algorithms were so named by Richard Lipton and Ken Regan, because they will never be used on any data sets on Earth. == Possible use cases == Even if they are never used in practice, galactic algorithms may still contribute to computer science: An algorithm, even if impractical, may show new techniques that may eventually be used to create practical algorithms. See, for example, communication channel capacity, below. Available computational power may catch up to the crossover point, so that a previously impractical algorithm becomes practical. See, for example, Low-density parity-check codes, below. An impractical algorithm can still demonstrate that conjectured bounds can be achieved, or that proposed bounds are wrong, and hence advance the theory of algorithms (see, for example, Reingold's algorithm for connectivity in undirected graphs). As Lipton states:This alone could be important and often is a great reason for finding such algorithms. For example, if tomorrow there were a discovery that showed there is a factoring algorithm with a huge but provably polynomial time bound, that would change our beliefs about factoring. The algorithm might never be used, but would certainly shape the future research into factoring. Similarly, a hypothetical algorithm for the Boolean satisfiability problem with a large but polynomial time bound, such as Θ ( n 2 100 ) {\displaystyle \Theta {\bigl (}n^{2^{100}}{\bigr )}} , although unusable in practice, would settle the P versus NP problem, considered the most important open problem in computer science and one of the Millennium Prize Problems. 
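The crossover idea can be made concrete with a toy calculation. The sketch below uses invented constants (nothing here models a real algorithm): it finds the input size at which a hypothetical 10^6 · n log n routine finally overtakes a plain n^2 one.

```python
import math

def crossover(c_fast=1e6):
    """Smallest n at which a hypothetical c_fast * n * log2(n) algorithm
    beats a plain n**2 one.  With a large constant, the asymptotically
    'better' algorithm only wins on enormous inputs."""
    # Double n until the fast algorithm is actually faster.
    n = 2
    while c_fast * n * math.log2(n) >= n * n:
        n *= 2
    # Binary search between n // 2 and n for the exact crossover point.
    lo, hi = n // 2, n
    while lo < hi:
        mid = (lo + hi) // 2
        if c_fast * mid * math.log2(mid) < mid * mid:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

With a constant of 10^6 the "faster" routine loses on every input below roughly 2.5 × 10^7; a galactic algorithm pushes this crossover beyond any input that could ever arise in practice.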
== Examples == === Integer multiplication === An example of a galactic algorithm is the fastest known way to multiply two numbers, which is based on a 1729-dimensional Fourier transform. It needs O ( n log ⁡ n ) {\displaystyle O(n\log n)} bit operations, but as the constants hidden by the big O notation are large, it is never used in practice. However, it also shows why galactic algorithms may still be useful. The authors state: "we are hopeful that with further refinements, the algorithm might become practical for numbers with merely billions or trillions of digits." === Primality testing === The AKS primality test is galactic. It is the most theoretically sound of any known algorithm that can take an arbitrary number and tell if it is prime. In particular, it is provably polynomial-time, deterministic, and unconditionally correct. All other known algorithms fall short on at least one of these criteria, but the shortcomings are minor and the calculations are much faster, so they are used instead. ECPP in practice runs much faster than AKS, but it has never been proven to be polynomial time. The Miller–Rabin test is also much faster than AKS, but produces only a probabilistic result. However the probability of error can be driven down to arbitrarily small values (say < 10 − 100 {\displaystyle <10^{-100}} ), good enough for practical purposes. There is also a deterministic version of the Miller-Rabin test, which runs in polynomial time over all inputs, but its correctness depends on the generalized Riemann hypothesis (which is widely believed, but not proven). The existence of these (much) faster alternatives means AKS is not used in practice. === Matrix multiplication === The first improvement over brute-force matrix multiplication (which needs O ( n 3 ) {\displaystyle O(n^{3})} multiplications) was the Strassen algorithm: a recursive algorithm that needs O ( n 2.807 ) {\displaystyle O(n^{2.807})} multiplications. 
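The seven-multiplication recursion at the heart of Strassen's algorithm can be sketched as follows (an illustrative version restricted to power-of-two sizes; production implementations switch to the classical method below a size cutoff):

```python
def strassen(A, B):
    """Multiply two n x n matrices (n a power of two, lists of lists)
    using 7 recursive products per level instead of 8."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):  # split M into four h x h blocks
        return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
                [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    a, b, c, d = quad(A)
    e, f, g, i = quad(B)
    m1 = strassen(add(a, d), add(e, i))   # (A11+A22)(B11+B22)
    m2 = strassen(add(c, d), e)           # (A21+A22)B11
    m3 = strassen(a, sub(f, i))           # A11(B12-B22)
    m4 = strassen(d, sub(g, e))           # A22(B21-B11)
    m5 = strassen(add(a, b), i)           # (A11+A12)B22
    m6 = strassen(sub(c, a), add(e, f))   # (A21-A11)(B11+B12)
    m7 = strassen(sub(b, d), add(g, i))   # (A12-A22)(B21+B22)
    top = [r1 + r2 for r1, r2 in zip(add(sub(add(m1, m4), m5), m7),
                                     add(m3, m5))]
    bot = [r1 + r2 for r1, r2 in zip(add(m2, m4),
                                     add(sub(add(m1, m3), m2), m6))]
    return top + bot
```

The recurrence T(n) = 7 T(n/2) + O(n^2) gives the O(n^log2 7) ≈ O(n^2.807) bound quoted above.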
This algorithm is not galactic and is used in practice. Further extensions of this, using sophisticated group theory, are the Coppersmith–Winograd algorithm and its slightly better successors, needing O ( n 2.373 ) {\displaystyle O(n^{2.373})} multiplications. These are galactic – "We nevertheless stress that such improvements are only of theoretical interest, since the huge constants involved in the complexity of fast matrix multiplication usually make these algorithms impractical." === Communication channel capacity === Claude Shannon showed a simple but asymptotically optimal code that can reach the theoretical capacity of a communication channel. It requires assigning a random code word to every possible n {\displaystyle n} -bit message, then decoding by finding the closest code word. If n {\displaystyle n} is chosen large enough, this beats any existing code and can get arbitrarily close to the capacity of the channel. Unfortunately, any n {\displaystyle n} big enough to beat existing codes is also completely impractical. These codes, though never used, inspired decades of research into more practical algorithms that today can achieve rates arbitrarily close to channel capacity. === Sub-graphs === The problem of deciding whether a graph G {\displaystyle G} contains H {\displaystyle H} as a minor is NP-complete in general, but where H {\displaystyle H} is fixed, it can be solved in polynomial time. The running time for testing whether H {\displaystyle H} is a minor of G {\displaystyle G} in this case is O ( n 2 ) {\displaystyle O(n^{2})} , where n {\displaystyle n} is the number of vertices in G {\displaystyle G} and the big O notation hides a constant that depends superexponentially on H {\displaystyle H} . The constant is greater than 2 ↑↑ ( 2 ↑↑ ( 2 ↑↑ ( h / 2 ) ) ) {\displaystyle 2\uparrow \uparrow (2\uparrow \uparrow (2\uparrow \uparrow (h/2)))} in Knuth's up-arrow notation, where h {\displaystyle h} is the number of vertices in H {\displaystyle H} . 
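Knuth's up-arrow notation has a short recursive definition, sketched below; even tiny arguments explode, which is why such constants cannot be written out in ordinary notation.

```python
def up(a, b, n):
    """Knuth's up-arrow a ↑^n b: one arrow is ordinary exponentiation,
    and each extra arrow iterates the previous operation, with
    a ↑^n 1 = a as the base case."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up(a, up(a, b - 1, n), n - 1)
```

Here up(2, 4, 2) = 2↑↑4 is already 65536, and evaluating up(2, 4, 3) = 2↑↑↑4 would require a power tower of 65536 twos; the minor-testing constant above is larger still.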
Even the case of h = 4 {\displaystyle h=4} cannot be reasonably computed as the constant is greater than 2 pentated by 4, or 2 tetrated by 65536, that is, 2 ↑↑↑ 4 = 65536 2 = 2 2 ⋅ ⋅ 2 ⏟ 65536 {\displaystyle 2\uparrow \uparrow \uparrow 4={}^{65536}2=\underbrace {2^{2^{\cdot ^{\cdot ^{2}}}}} _{65536}} . === Cryptographic breaks === In cryptography jargon, a "break" is any attack faster in expectation than brute force – i.e., performing one trial decryption for each possible key. For many cryptographic systems, breaks are known, but are still practically infeasible with current technology. One example is the best attack known against 128-bit AES, which takes only 2 126 {\displaystyle 2^{126}} operations. Despite being impractical, theoretical breaks can provide insight into vulnerability patterns, and sometimes lead to discovery of exploitable breaks. === Traveling salesman problem === For several decades, the best known approximation to the traveling salesman problem in a metric space was the very simple Christofides algorithm which produced a path at most 50% longer than the optimum. (Many other algorithms could usually do much better, but could not provably do so.) In 2020, a newer and much more complex algorithm was discovered that can beat this by 10 − 34 {\displaystyle 10^{-34}} percent. Although no one will ever switch to this algorithm for its very slight worst-case improvement, it is still considered important because "this minuscule improvement breaks through both a theoretical logjam and a psychological one". === Hutter search === A single algorithm, "Hutter search", can solve any well-defined problem in an asymptotically optimal time, barring some caveats. It works by searching through all possible algorithms (by runtime), while simultaneously searching through all possible proofs (by length of proof), looking for a proof of correctness for each algorithm. 
Since the proof of correctness is of finite size, it "only" adds a constant and does not affect the asymptotic runtime. However, this constant is so big that the algorithm is entirely impractical. For example, if the shortest proof of correctness of a given algorithm is 1000 bits long, the search will examine at least 2999 other potential proofs first. Hutter search is related to Solomonoff induction, which is a formalization of Bayesian inference. All computable theories (as implemented by programs) which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories. Again, the search over all possible explanations makes this procedure galactic. === Optimization === Simulated annealing, when used with a logarithmic cooling schedule, has been proven to find the global optimum of any optimization problem. However, such a cooling schedule results in entirely impractical runtimes, and is never used. However, knowing this ideal algorithm exists has led to practical variants that are able to find very good (though not provably optimal) solutions to complex optimization problems. === Minimum spanning trees === The expected linear time MST algorithm is able to discover the minimum spanning tree of a graph in O ( m + n ) {\displaystyle O(m+n)} , where m {\displaystyle m} is the number of edges and n {\displaystyle n} is the number of nodes of the graph. However, the constant factor that is hidden by the Big O notation is huge enough to make the algorithm impractical. An implementation is publicly available and given the experimentally estimated implementation constants, it would only be faster than Borůvka's algorithm for graphs in which m + n > 9 ⋅ 10 151 {\displaystyle m+n>9\cdot 10^{151}} . === Hash tables === Researchers have found an algorithm that achieves the provably best-possible asymptotic performance in terms of time-space tradeoff. 
But it remains purely theoretical: "Despite the new hash table’s unprecedented efficiency, no one is likely to try building it anytime soon. It’s just too complicated to construct." and "in practice, constants really matter. In the real world, a factor of 10 is a game ender." === Connectivity in undirected graphs === Connectivity in undirected graphs (also known as USTCON, for Undirected Source-Target CONnectivity) is the problem of deciding if a path exists between two nodes in an undirected graph, or in other words, if they are in the same connected component. If one is allowed to use O ( N ) {\displaystyle O({\text{N}})} space, polynomial time solutions such as Dijkstra's algorithm have been known and used for decades. But for many years it was unknown if this could be done deterministically in O ( log N ) {\displaystyle O({\text{log N}})} space (class L), though it was known to be possible with randomized algorithms (class RL). In 2004, a breakthrough paper by Omer Reingold showed that USTCON is in fact in L. However, despite the asymptotically better space requirement, this algorithm is galactic. The constant hidden by the O ( log N ) {\displaystyle O({\text{log N}})} is so big that in any practical case it uses far more memory than the well-known O ( N ) {\displaystyle O({\text{N}})} algorithms, plus it is exceedingly slow. So despite being a landmark in theory (more than 1000 citations as of 2025) it is never used in practice. === Low-density parity-check codes === Low-density parity-check codes, also known as LDPC or Gallager codes, are an example of an algorithm that was galactic when first developed, but became practical as computation improved. They were originally conceived by Robert G. Gallager in his doctoral dissertation at the Massachusetts Institute of Technology in 1960.
Although their performance was much better than that of other codes of the time, reaching the Gilbert–Varshamov bound for linear codes, the codes were largely ignored because their iterative decoding algorithm was prohibitively expensive for the hardware available. Renewed interest in LDPC codes emerged following the invention of the closely related turbo codes (1993), whose similarly iterative decoding algorithm outperformed other codes used at that time. LDPC codes were subsequently rediscovered in 1996, and they are now used in many applications. == References ==
Wikipedia/Galactic_algorithm
In computer science (particularly algorithmics), a polynomial-time approximation scheme (PTAS) is a type of approximation algorithm for optimization problems (most often, NP-hard optimization problems). A PTAS is an algorithm which takes an instance of an optimization problem and a parameter ε > 0 and produces a solution that is within a factor 1 + ε of being optimal (or 1 − ε for maximization problems). For example, for the Euclidean traveling salesman problem, a PTAS would produce a tour with length at most (1 + ε)L, with L being the length of the shortest tour. The running time of a PTAS is required to be polynomial in the problem size for every fixed ε, but can be different for different ε. Thus an algorithm running in time O(n^(1/ε)) or even O(n^(exp(1/ε))) counts as a PTAS. == Variants == === Deterministic === A practical problem with PTAS algorithms is that the exponent of the polynomial could increase dramatically as ε shrinks, for example if the runtime is O(n^((1/ε)!)). One way of addressing this is to define the efficient polynomial-time approximation scheme or EPTAS, in which the running time is required to be O(n^c) for a constant c independent of ε. This ensures that an increase in problem size has the same relative effect on runtime regardless of what ε is being used; however, the constant under the big-O can still depend on ε arbitrarily. In other words, an EPTAS runs in FPT time where the parameter is ε. Even more restrictive, and useful in practice, is the fully polynomial-time approximation scheme or FPTAS, which requires the algorithm to be polynomial in both the problem size n and 1/ε. Unless P = NP, it holds that FPTAS ⊊ PTAS ⊊ APX. Consequently, under this assumption, APX-hard problems do not have PTASs. Another deterministic variant of the PTAS is the quasi-polynomial-time approximation scheme or QPTAS. A QPTAS has time complexity n^(polylog(n)) for each fixed ε > 0.
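The practical gap between these variants is easiest to see by plugging sample values into schematic runtimes (all constants are invented; the expressions only mirror the shape of the definitions above):

```python
# Schematic runtimes for n = 100 and eps = 0.1 (illustrative only).
n, eps = 100, 0.1

ptas = n ** (1 / eps)            # PTAS:  the exponent itself grows as eps shrinks
eptas = 2 ** (1 / eps) * n ** 2  # EPTAS: an f(1/eps) factor times a fixed power of n
fptas = (1 / eps) * n ** 2       # FPTAS: polynomial in both n and 1/eps
```

Here the PTAS-style bound is 100^10 = 10^20 steps, while the EPTAS- and FPTAS-style bounds stay near 10^7 and 10^5.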
Furthermore, a PTAS can run in FPT time for some parameterization of the problem, which leads to a parameterized approximation scheme. === Randomized === Some problems which do not have a PTAS may admit a randomized algorithm with similar properties, a polynomial-time randomized approximation scheme or PRAS. A PRAS is an algorithm which takes an instance of an optimization or counting problem and a parameter ε > 0 and, in polynomial time, produces a solution that has a high probability of being within a factor ε of optimal. Conventionally, "high probability" means probability greater than 3/4, though as with most probabilistic complexity classes the definition is robust to variations in this exact value (the bare minimum requirement is generally greater than 1/2). Like a PTAS, a PRAS must have running time polynomial in n, but not necessarily in ε; with further restrictions on the running time in ε, one can define an efficient polynomial-time randomized approximation scheme or EPRAS similar to the EPTAS, and a fully polynomial-time randomized approximation scheme or FPRAS similar to the FPTAS. == As a complexity class == The term PTAS may also be used to refer to the class of optimization problems that have a PTAS. PTAS is a subset of APX, and unless P = NP, it is a strict subset. Membership in PTAS can be shown using a PTAS reduction, L-reduction, or P-reduction, all of which preserve PTAS membership, and these may also be used to demonstrate PTAS-completeness. On the other hand, showing non-membership in PTAS (namely, the nonexistence of a PTAS), may be done by showing that the problem is APX-hard, after which the existence of a PTAS would show P = NP. APX-hardness is commonly shown via PTAS reduction or AP-reduction. == See also == Parameterized approximation scheme, an approximation scheme that runs in FPT time == References == == External links == Complexity Zoo: PTAS, EPTAS. 
Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski, and Gerhard Woeginger, A compendium of NP optimization problems – list which NP optimization problems have PTAS.
Wikipedia/Polynomial-time_approximation_scheme
In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. This appears to have been first demonstrated in Gurevich, Stockmeyer & Vishkin (1984). The first systematic work on parameterized complexity was done by Downey & Fellows (1999). Under the assumption that P ≠ NP, there exist many natural problems that require super-polynomial running time when complexity is measured in terms of the input size only but that are computable in a time that is polynomial in the input size and exponential or worse in a parameter k. Hence, if k is fixed at a small value and the growth of the function over k is relatively small then such problems can still be considered "tractable" despite their traditional classification as "intractable". The existence of efficient, exact, and deterministic solving algorithms for NP-complete, or otherwise NP-hard, problems is considered unlikely, if input parameters are not fixed; all known solving algorithms for these problems require time that is exponential (so in particular super-polynomial) in the total size of the input. However, some problems can be solved by algorithms that are exponential only in the size of a fixed parameter while polynomial in the size of the input. Such an algorithm is called a fixed-parameter tractable (FPT) algorithm, because the problem can be solved efficiently (i.e., in polynomial time) for constant values of the fixed parameter. Problems in which some parameter k is fixed are called parameterized problems. 
A parameterized problem that allows for such an FPT algorithm is said to be a fixed-parameter tractable problem and belongs to the class FPT, and the early name of the theory of parameterized complexity was fixed-parameter tractability. Many problems have the following form: given an object x and a nonnegative integer k, does x have some property that depends on k? For instance, for the vertex cover problem, the parameter can be the number of vertices in the cover. In many applications, for example when modelling error correction, one can assume the parameter to be "small" compared to the total input size. Then it is challenging to find an algorithm that is exponential only in k, and not in the input size. In this way, parameterized complexity can be seen as two-dimensional complexity theory. This concept is formalized as follows: A parameterized problem is a language L ⊆ Σ ∗ × N {\displaystyle L\subseteq \Sigma ^{*}\times \mathbb {N} } , where Σ {\displaystyle \Sigma } is a finite alphabet. The second component is called the parameter of the problem. A parameterized problem L is fixed-parameter tractable if the question " ( x , k ) ∈ L {\displaystyle (x,k)\in L} ?" can be decided in running time f ( k ) ⋅ | x | O ( 1 ) {\displaystyle f(k)\cdot |x|^{O(1)}} , where f is an arbitrary function depending only on k. The corresponding complexity class is called FPT. For example, there is an algorithm that solves the vertex cover problem in O ( k n + 1.274 k ) {\displaystyle O(kn+1.274^{k})} time, where n is the number of vertices and k is the size of the vertex cover. This means that vertex cover is fixed-parameter tractable with the size of the solution as the parameter. == Complexity classes == === FPT === FPT contains the fixed parameter tractable problems, which are those that can be solved in time f ( k ) ⋅ | x | O ( 1 ) {\displaystyle f(k)\cdot {|x|}^{O(1)}} for some computable function f. 
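A minimal example of such an algorithm is the classical 2^k branching for vertex cover, sketched below (illustrative; the O(kn + 1.274^k) bound quoted above comes from far more refined branching rules):

```python
def vertex_cover(edges, k):
    """Search for a vertex cover of size <= k by bounded branching:
    some endpoint of the first uncovered edge must be in the cover,
    so try both.  Returns a cover as a set, or None if none exists."""
    if not edges:
        return set()   # every edge is covered
    if k == 0:
        return None    # edges remain but no budget left
    u, v = edges[0]
    for w in (u, v):
        # Keep only the edges not covered by w and recurse with budget k - 1.
        rest = [e for e in edges if w not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None
```

Each edge forces one of its two endpoints into the cover, so the search tree has depth at most k and at most 2^k leaves, giving O(2^k · m) time: exponential only in the parameter.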
Typically, this function is thought of as single exponential, such as 2 O ( k ) {\displaystyle 2^{O(k)}} , but the definition admits functions that grow even faster. This is essential for a large part of the early history of this class. The crucial part of the definition is to exclude functions of the form f ( n , k ) {\displaystyle f(n,k)} , such as k n {\displaystyle k^{n}} . The class FPL (fixed parameter linear) is the class of problems solvable in time f ( k ) ⋅ | x | {\displaystyle f(k)\cdot |x|} for some computable function f. FPL is thus a subclass of FPT. An example is the Boolean satisfiability problem, parameterised by the number of variables. A given formula of size m with k variables can be checked by brute force in time O ( 2 k m ) {\displaystyle O(2^{k}m)} . A vertex cover of size k in a graph of order n can be found in time O ( 2 k n ) {\displaystyle O(2^{k}n)} , so the vertex cover problem is also in FPL. An example of a problem that is thought not to be in FPT is graph coloring parameterised by the number of colors. It is known that 3-coloring is NP-hard, and an algorithm for graph k-coloring in time f ( k ) n O ( 1 ) {\displaystyle f(k)n^{O(1)}} for k = 3 {\displaystyle k=3} would run in polynomial time in the size of the input. Thus, if graph coloring parameterised by the number of colors were in FPT, then P = NP. There are a number of alternative definitions of FPT. For example, the running-time requirement can be replaced by f ( k ) + | x | O ( 1 ) {\displaystyle f(k)+|x|^{O(1)}} . Also, a parameterised problem is in FPT if it has a so-called kernel. Kernelization is a preprocessing technique that reduces the original instance to its "hard kernel", a possibly much smaller instance that is equivalent to the original instance but has a size that is bounded by a function in the parameter. FPT is closed under a parameterised notion of reductions called fpt-reductions. 
Such reductions transform an instance ( x , k ) {\displaystyle (x,k)} of some problem into an equivalent instance ( x ′ , k ′ ) {\displaystyle (x',k')} of another problem (with k ′ ≤ g ( k ) {\displaystyle k'\leq g(k)} ) and can be computed in time f ( k ) ⋅ p ( | x | ) {\displaystyle f(k)\cdot p(|x|)} where p {\displaystyle p} is a polynomial. Obviously, FPT contains all polynomial-time computable problems. Moreover, it contains all optimisation problems in NP that allow an efficient polynomial-time approximation scheme (EPTAS). === W hierarchy === The W hierarchy is a collection of computational complexity classes. A parameterized problem is in the class W[i], if every instance ( x , k ) {\displaystyle (x,k)} can be transformed (in fpt-time) to a combinatorial circuit that has weft at most i, such that ( x , k ) ∈ L {\displaystyle (x,k)\in L} if and only if there is a satisfying assignment to the inputs that assigns 1 to exactly k inputs. The weft is the largest number of logical units with fan-in greater than two on any path from an input to the output. The total number of logical units on the paths (known as depth) must be limited by a constant that holds for all instances of the problem. Note that F P T = W [ 0 ] {\displaystyle {\mathsf {FPT}}=W[0]} and W [ i ] ⊆ W [ j ] {\displaystyle W[i]\subseteq W[j]} for all i ≤ j {\displaystyle i\leq j} . The classes in the W hierarchy are also closed under fpt-reduction. A complete problem for W[i] is Weighted i-Normalized Satisfiability: given a Boolean formula written as an AND of ORs of ANDs of ... of possibly negated variables, with i + 1 {\displaystyle i+1} layers of ANDs or ORs (and i alternations between AND and OR), can it be satisfied by setting exactly k variables to 1? Many natural computational problems occupy the lower levels, W[1] and W[2]. 
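For instance, deciding whether a graph contains a clique of size k, a W[1]-complete problem listed below, is solvable by checking all C(n, k) = n^O(k) vertex subsets, but no f(k) · n^O(1) algorithm is known. A brute-force sketch (the adjacency-dict interface, mapping each vertex to the set of its neighbours, is chosen for the sketch):

```python
from itertools import combinations

def has_clique(adj, k):
    """Decide whether the graph has a clique of size k by testing all
    C(n, k) vertex subsets -- an n^O(k) 'XP' algorithm, not an FPT one."""
    vertices = list(adj)
    for subset in combinations(vertices, k):
        # A subset is a clique iff every pair of its vertices is adjacent.
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            return True
    return False
```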
==== W[1] ==== Examples of W[1]-complete problems include deciding if a given graph contains a clique of size k deciding if a given graph contains an independent set of size k deciding if a given nondeterministic single-tape Turing machine accepts within k steps ("short Turing machine acceptance" problem). This also applies to nondeterministic Turing machines with f(k) tapes and even f(k) of f(k)-dimensional tapes, but even with this extension, the restriction to f(k) tape alphabet size is fixed-parameter tractable. Crucially, the branching of the Turing machine at each step is allowed to depend on n, the size of the input. In this way, the Turing machine may explore nO(k) computation paths. ==== W[2] ==== Examples of W[2]-complete problems include deciding if a given graph contains a dominating set of size k deciding if a given nondeterministic multi-tape Turing machine accepts within k steps ("short multi-tape Turing machine acceptance" problem). Crucially, the branching is allowed to depend on n (like the W[1] variant), as is the number of tapes. An alternate W[2]-complete formulation allows only single-tape Turing machines, but the alphabet size may depend on n. ==== W[t] ==== W [ t ] {\displaystyle W[t]} can be defined using the family of Weighted Weft-t-Depth-d SAT problems for d ≥ t {\displaystyle d\geq t} : W [ t , d ] {\displaystyle W[t,d]} is the class of parameterized problems that fpt-reduce to this problem, and W [ t ] = ⋃ d ≥ t W [ t , d ] {\displaystyle W[t]=\bigcup _{d\geq t}W[t,d]} . Here, Weighted Weft-t-Depth-d SAT is the following problem: Input: A Boolean formula of depth at most d and weft at most t, and a number k. The depth is the maximal number of gates on any path from the root to a leaf, and the weft is the maximal number of gates of fan-in at least three on any path from the root to a leaf. Question: Does the formula have a satisfying assignment of Hamming weight exactly k? 
It can be shown that for t ≥ 2 {\displaystyle t\geq 2} the problem Weighted t-Normalize SAT is complete for W [ t ] {\displaystyle W[t]} under fpt-reductions. Here, Weighted t-Normalize SAT is the following problem: Input: A Boolean formula of depth at most t with an AND-gate on top, and a number k. Question: Does the formula have a satisfying assignment of Hamming weight exactly k? ==== W[P] ==== W[P] is the class of problems that can be decided by a nondeterministic h ( k ) ⋅ | x | O ( 1 ) {\displaystyle h(k)\cdot {|x|}^{O(1)}} -time Turing machine that makes at most O ( f ( k ) ⋅ log ⁡ n ) {\displaystyle O(f(k)\cdot \log n)} nondeterministic choices in the computation on ( x , k ) {\displaystyle (x,k)} (a k-restricted Turing machine; Flum & Grohe 2006). It is known that FPT is contained in W[P], and the inclusion is believed to be strict. However, resolving this issue would imply a solution to the P versus NP problem. Other connections to unparameterised computational complexity are that FPT equals W[P] if and only if circuit satisfiability can be decided in time exp ⁡ ( o ( n ) ) m O ( 1 ) {\displaystyle \exp(o(n))m^{O(1)}} , or if and only if there is a computable, nondecreasing, unbounded function f such that all languages recognised by a nondeterministic polynomial-time Turing machine using f ( n ) log ⁡ n {\displaystyle f(n)\log n} nondeterministic choices are in P. W[P] can be loosely thought of as the class of problems where we have a set S of n items, and we want to find a subset T ⊂ S {\displaystyle T\subset S} of size k such that a certain property holds. We can encode a choice as a list of k integers, stored in binary. Since the highest any of these numbers can be is n, ⌈ log 2 ⁡ n ⌉ {\displaystyle \lceil \log _{2}n\rceil } bits are needed for each number. Therefore k ⋅ ⌈ log 2 ⁡ n ⌉ {\displaystyle k\cdot \lceil \log _{2}n\rceil } total bits are needed to encode a choice.
Therefore we can select a subset T ⊂ S {\displaystyle T\subset S} with O ( k ⋅ log ⁡ n ) {\displaystyle O(k\cdot \log n)} nondeterministic choices. === XP === XP is the class of parameterized problems that can be solved in time n f ( k ) {\displaystyle n^{f(k)}} for some computable function f. These problems are called slicewise polynomial, in the sense that each "slice" of fixed k has a polynomial algorithm, although possibly with a different exponent for each k. Compare this with FPT, which merely allows a different constant prefactor for each value of k. XP contains FPT, and it is known that this containment is strict by diagonalization. === para-NP === para-NP is the class of parameterized problems that can be solved by a nondeterministic algorithm in time f ( k ) ⋅ | x | O ( 1 ) {\displaystyle f(k)\cdot |x|^{O(1)}} for some computable function f. It is known that FPT = para-NP {\displaystyle {\textsf {FPT}}={\textsf {para-NP}}} if and only if P = NP {\displaystyle {\textsf {P}}={\textsf {NP}}} . A problem is para-NP-hard if it is NP {\displaystyle {\textsf {NP}}} -hard already for a constant value of the parameter. That is, there is a "slice" of fixed k that is NP {\displaystyle {\textsf {NP}}} -hard. A parameterized problem that is para-NP {\displaystyle {\textsf {para-NP}}} -hard cannot belong to the class XP {\displaystyle {\textsf {XP}}} , unless P = NP {\displaystyle {\textsf {P}}={\textsf {NP}}} . A classic example of a para-NP {\displaystyle {\textsf {para-NP}}} -hard parameterized problem is graph coloring, parameterized by the number k of colors, which is already NP {\displaystyle {\textsf {NP}}} -hard for k = 3 {\displaystyle k=3} (see Graph coloring#Computational complexity). === A hierarchy === The A hierarchy is a collection of computational complexity classes similar to the W hierarchy. However, while the W hierarchy is a hierarchy contained in NP, the A hierarchy more closely mimics the polynomial-time hierarchy from classical complexity. 
It is known that A[1] = W[1] holds. == See also == Parameterized approximation algorithm, an algorithm that runs in FPT time and approximates the solution of an optimization problem. == Notes == == References == Chen, Jianer; Kanj, Iyad A.; Xia, Ge (2006). "Improved Parameterized Upper Bounds for Vertex Cover". Mathematical Foundations of Computer Science. Vol. 4162. Berlin, Heidelberg: Springer. pp. 238–249. CiteSeerX 10.1.1.432.831. doi:10.1007/11821069_21. ISBN 978-3-540-37791-7. Cygan, Marek; Fomin, Fedor V.; Kowalik, Lukasz; Lokshtanov, Daniel; Marx, Daniel; Pilipczuk, Marcin; Pilipczuk, Michal; Saurabh, Saket (2015). Parameterized Algorithms. Springer. p. 555. ISBN 978-3-319-21274-6. Downey, Rod G.; Fellows, Michael R. (1999). Parameterized Complexity. Springer. ISBN 978-0-387-94883-6. Flum, Jörg; Grohe, Martin (2006). Parameterized Complexity Theory. Springer. ISBN 978-3-540-29952-3. Fomin, Fedor V.; Lokshtanov, Daniel; Saurabh, Saket; Zehavi, Meirav (2019). Kernelization: Theory of Parameterized Preprocessing. Cambridge University Press. p. 528. doi:10.1017/9781107415157. ISBN 978-1107057760. S2CID 263888582. Gurevich, Yuri; Stockmeyer, Larry; Vishkin, Uzi (1984). "Solving NP-hard problems on graphs that are almost trees and an application to facility location problems". Journal of the ACM. pp. 459–473. Niedermeier, Rolf (2006). Invitation to Fixed-Parameter Algorithms. Oxford University Press. ISBN 978-0-19-856607-6. Archived from the original on 2008-09-24. Grohe, Martin (1999). "Descriptive and Parameterized Complexity". Computer Science Logic. Lecture Notes in Computer Science. Vol. 1683. Springer Berlin Heidelberg. pp. 14–31. CiteSeerX 10.1.1.25.9250. doi:10.1007/3-540-48168-0_3. ISBN 978-3-540-66536-6. The Computer Journal. Volume 51, Numbers 1 and 3 (2008). Special Double Issue on Parameterized Complexity with 15 survey articles, book review, and a Foreword by Guest Editors R. Downey, M. Fellows and M. Langston.
== External links == Wiki on parameterized complexity Compendium of Parameterized Problems
Wikipedia/Fixed-parameter_algorithm
In computational complexity theory, the class APX (an abbreviation of "approximable") is the set of NP optimization problems that allow polynomial-time approximation algorithms with approximation ratio bounded by a constant (or constant-factor approximation algorithms for short). In simple terms, problems in this class have efficient algorithms that can find an answer within some fixed multiplicative factor of the optimal answer. An approximation algorithm is called an f ( n ) {\displaystyle f(n)} -approximation algorithm for input size n {\displaystyle n} if it can be proven that the solution that the algorithm finds is at most a multiplicative factor of f ( n ) {\displaystyle f(n)} times worse than the optimal solution. Here, f ( n ) {\displaystyle f(n)} is called the approximation ratio. Problems in APX are those with algorithms for which the approximation ratio f ( n ) {\displaystyle f(n)} is a constant c {\displaystyle c} . The approximation ratio is conventionally stated greater than 1. In the case of minimization problems, f ( n ) {\displaystyle f(n)} is the found solution's score divided by the optimum solution's score, while for maximization problems the reverse is the case. For maximization problems, where an inferior solution has a smaller score, f ( n ) {\displaystyle f(n)} is sometimes stated as less than 1; in such cases, the reciprocal of f ( n ) {\displaystyle f(n)} is the ratio of the score of the found solution to the score of the optimum solution. A problem is said to have a polynomial-time approximation scheme (PTAS) if for every multiplicative factor of the optimum worse than 1 there is a polynomial-time algorithm to solve the problem to within that factor. Unless P = NP there exist problems that are in APX but without a PTAS, so the class of problems with a PTAS is strictly contained in APX. One example of a problem with a PTAS is the knapsack problem. 
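Knapsack in fact admits the stronger FPTAS. A minimal sketch of the classical profit-scaling scheme follows (the function name and the list-of-(profit, weight) interface are illustrative, and each item is assumed to fit in the knapsack by itself):

```python
def knapsack_fptas(items, capacity, eps):
    """(1 - eps)-approximation for 0/1 knapsack via profit scaling.
    `items` is a list of (profit, weight) pairs.  Profits are scaled
    down by K = eps * p_max / n, then an exact dynamic program over
    scaled profit values runs in time polynomial in n and 1/eps.
    Returns a lower bound on the achievable profit, at least
    (1 - eps) times the optimum."""
    n = len(items)
    p_max = max(p for p, w in items)
    K = eps * p_max / n
    scaled = [int(p // K) for p, w in items]
    # dp[v] = minimum weight needed to reach scaled profit exactly v.
    max_v = sum(scaled)
    INF = float("inf")
    dp = [0] + [INF] * max_v
    for (p, w), sp in zip(items, scaled):
        for v in range(max_v, sp - 1, -1):
            if dp[v - sp] + w < dp[v]:
                dp[v] = dp[v - sp] + w
    best = max(v for v in range(max_v + 1) if dp[v] <= capacity)
    return best * K
```

Rounding each profit down to a multiple of K loses at most K per item, hence at most eps · p_max ≤ eps · OPT in total, which is where the (1 − eps) guarantee comes from.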
== APX-hardness and APX-completeness == A problem is said to be APX-hard if there is a PTAS reduction from every problem in APX to that problem, and to be APX-complete if the problem is APX-hard and also in APX. As a consequence of P ≠ NP ⇒ PTAS ≠ APX, if P ≠ NP is assumed, no APX-hard problem has a PTAS. In practice, reducing one problem to another to demonstrate APX-completeness is often done using other reduction schemes, such as L-reductions, which imply PTAS reductions. === Examples === One of the simplest APX-complete problems is MAX-3SAT-3, a variation of the Boolean satisfiability problem. In this problem, we have a Boolean formula in conjunctive normal form where each variable appears at most 3 times, and we wish to know the maximum number of clauses that can be simultaneously satisfied by a single assignment of true/false values to the variables. Other APX-complete problems include:
Max independent set in bounded-degree graphs (here, the approximation ratio depends on the maximum degree of the graph, but is constant if the maximum degree is fixed).
Min vertex cover. The complement of any maximal independent set must be a vertex cover.
Min dominating set in bounded-degree graphs.
The travelling salesman problem when the distances in the graph satisfy the conditions of a metric. TSP is NPO-complete in the general case.
The token reconfiguration problem, via L-reduction from set cover.
== Related complexity classes == === PTAS === PTAS (polynomial time approximation scheme) consists of problems that can be approximated to within any constant factor greater than 1 in time polynomial in the input size, where the degree of the polynomial may depend on the chosen factor. This class is a subset of APX. === APX-intermediate === Unless P = NP, there exist problems in APX that are neither in PTAS nor APX-complete. Such problems can be thought of as having a hardness between PTAS problems and APX-complete problems, and may be called APX-intermediate. 
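The constant-factor approximation behind the Min Vertex Cover example listed above fits in a few lines: greedily build a maximal matching and take both endpoints of every matched edge. Any cover must contain at least one endpoint of each matched edge, so the result is at most twice the optimum. A minimal sketch:

```python
def vertex_cover_2approx(edges):
    """2-approximation for minimum vertex cover.

    Greedily builds a maximal matching and takes both endpoints of
    every matched edge. Since any vertex cover must contain at least
    one endpoint of each matched edge, the returned cover has at most
    twice the optimal size.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge is still uncovered
            cover.update((u, v))               # match it; take both ends
    return cover
```

The APX-hardness results mean that, unless P = NP, no polynomial-time algorithm can improve this factor to an arbitrarily small constant above 1.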
The bin packing problem is thought to be APX-intermediate. Despite not having a known PTAS, the bin packing problem has several "asymptotic PTAS" algorithms, which behave like a PTAS when the optimum solution is large, so intuitively it may be easier than problems that are APX-hard. One other example of a potentially APX-intermediate problem is min edge coloring. === f(n)-APX === One can also define a family of complexity classes f ( n ) {\displaystyle f(n)} -APX, where f ( n ) {\displaystyle f(n)} -APX contains problems with a polynomial time approximation algorithm with a O ( f ( n ) ) {\displaystyle O(f(n))} approximation ratio. One can analogously define f ( n ) {\displaystyle f(n)} -APX-complete classes; some such classes contain well-known optimization problems. Log-APX-completeness and poly-APX-completeness are defined in terms of AP-reductions rather than PTAS-reductions; this is because PTAS-reductions are not strong enough to preserve membership in Log-APX and Poly-APX, even though they suffice for APX. Log-APX-complete, consisting of the hardest problems that can be approximated efficiently to within a factor logarithmic in the input size, includes min dominating set when degree is unbounded. Poly-APX-complete, consisting of the hardest problems that can be approximated efficiently to within a factor polynomial in the input size, includes max independent set in the general case. There also exist problems that are exp-APX-complete, where the approximation ratio is exponential in the input size. This may occur when the approximation is dependent on the value of numbers within the problem instance; these numbers may be expressed in space logarithmic in their value, hence the exponential factor. 
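The bin packing problem discussed above also illustrates why asymptotic guarantees are natural. The sketch below is not one of the asymptotic PTAS algorithms from the text, but the standard first-fit decreasing baseline, whose number of bins is bounded by roughly 11/9 times the optimum plus an additive constant, so its ratio also improves as the optimum grows:

```python
def first_fit_decreasing(sizes, capacity=1.0):
    """First-fit decreasing heuristic for bin packing.

    Sort items by decreasing size and put each item into the first
    bin with enough remaining room, opening a new bin if none fits.
    A simple baseline, not the asymptotic PTAS discussed above.
    """
    bins = []
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= capacity + 1e-9:  # tolerance for float sizes
                b.append(s)
                break
        else:
            bins.append([s])  # no open bin fits: open a new one
    return bins
```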
== See also == Approximation-preserving reduction Complexity class Approximation algorithm Max/min CSP/Ones classification theorems - a set of theorems that enable mechanical classification of problems about Boolean relations into approximability complexity classes MaxSNP - a closely related subclass == References == Complexity Zoo: APX C. Papadimitriou and M. Yannakakis. Optimization, approximation and complexity classes. Journal of Computer and System Sciences, 43:425–440, 1991. Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski and Gerhard Woeginger. Maximum Satisfiability Archived 2007-04-13 at the Wayback Machine. A compendium of NP optimization problems Archived 2007-04-05 at the Wayback Machine.
Wikipedia/Constant-factor_approximation_algorithm
In computational complexity theory, the unique games conjecture (often referred to as UGC) is a conjecture made by Subhash Khot in 2002. The conjecture postulates that the problem of determining the approximate value of a certain type of game, known as a unique game, has NP-hard computational complexity. It has broad applications in the theory of hardness of approximation. If the unique games conjecture is true and P ≠ NP, then for many important problems it is not only impossible to get an exact solution in polynomial time (as postulated by the P versus NP problem), but also impossible to get a good polynomial-time approximation. The problems for which such an inapproximability result would hold include constraint satisfaction problems, which crop up in a wide variety of disciplines. The conjecture is unusual in that the academic world seems about evenly divided on whether it is true or not. == Formulations == The unique games conjecture can be stated in a number of equivalent ways. === Unique label cover === The following formulation of the unique games conjecture is often used in hardness of approximation. The conjecture postulates the NP-hardness of the following promise problem known as label cover with unique constraints. An assignment to a label cover instance gives each vertex of the underlying graph G a value in the set [k] = {1, 2, ..., k}, often called “colours.” For each edge, the colours on the two vertices are restricted to some particular ordered pairs. The constraints are unique in the sense that, for each edge, no two of the allowed ordered pairs agree on the colour of either endpoint; each allowed colour of one endpoint therefore determines exactly one allowed colour of the other. This means that an instance of label cover with unique constraints over an alphabet of size k can be represented as a directed graph together with a collection of permutations πe: [k] → [k], one for each edge e of the graph. Such instances are strongly constrained in the sense that the colour of a vertex uniquely determines the colours of its neighbours, and hence the colours of its entire connected component. 
Thus, if the input instance admits a valid assignment, then such an assignment can be found efficiently by iterating over all colours of a single node. In particular, the problem of deciding if a given instance admits a satisfying assignment can be solved in polynomial time. The value of a unique label cover instance is the fraction of constraints that can be satisfied by any assignment. For satisfiable instances, this value is 1 and is easy to find. On the other hand, it seems to be very difficult to determine the value of an unsatisfiable game, even approximately. The unique games conjecture formalises this difficulty. More formally, the (c, s)-gap label-cover problem with unique constraints is the following promise problem (Lyes, Lno): Lyes = {G: Some assignment satisfies at least a c-fraction of constraints in G} Lno = {G: Every assignment satisfies at most an s-fraction of constraints in G} where G is an instance of the label cover problem with unique constraints. The unique games conjecture states that for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the (1 − δ, ε)-gap label-cover problem with unique constraints over alphabet of size k is NP-hard. === Maximizing Linear Equations Modulo k === Consider the following system of linear equations over the integers modulo k: a 1 x 1 ≡ b 1 ⋅ x 2 + c 1 ( mod k ) , a 2 x 2 ≡ b 2 ⋅ x 5 + c 2 ( mod k ) , ⋮ a m x 1 ≡ b m ⋅ x 7 + c m ( mod k ) . {\displaystyle {\begin{aligned}a_{1}x_{1}&\equiv b_{1}\cdot x_{2}+c_{1}{\pmod {k}},\\a_{2}x_{2}&\equiv b_{2}\cdot x_{5}+c_{2}{\pmod {k}},\\&{}\ \ \vdots \\a_{m}x_{1}&\equiv b_{m}\cdot x_{7}+c_{m}{\pmod {k}}.\end{aligned}}} When each equation involves exactly two variables, this is an instance of the label cover problem with unique constraints; such instances are known as instances of the Max2Lin(k) problem. 
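The propagation argument for satisfiable instances described above translates directly into an algorithm: fix a colour for one vertex, propagate it through the permutation constraints, and retry with the next colour on a contradiction. The sketch below uses a hypothetical encoding invented for this illustration (each constraint stored as a permutation per directed edge) and assumes the graph is connected:

```python
from collections import deque

def solve_satisfiable_unique_game(n, edges, k):
    """Decide full satisfiability of a unique-games instance by
    propagation, as described above.

    `edges` maps a pair (u, v) to a permutation pi (list of length k)
    encoding the constraint colour[v] == pi[colour[u]]. This encoding
    is an assumption of the sketch. Assumes a connected graph on
    vertices 0..n-1. Returns a satisfying colouring, or None.
    """
    adj = {u: [] for u in range(n)}
    for (u, v), pi in edges.items():
        inv = [0] * k
        for a, b in enumerate(pi):
            inv[b] = a
        adj[u].append((v, pi))
        adj[v].append((u, inv))  # traversing the edge backwards uses pi^-1
    for start_colour in range(k):  # try every colour of a single vertex
        colour = {0: start_colour}
        queue, ok = deque([0]), True
        while queue and ok:
            u = queue.popleft()
            for v, pi in adj[u]:
                forced = pi[colour[u]]  # u's colour forces v's colour
                if v not in colour:
                    colour[v] = forced
                    queue.append(v)
                elif colour[v] != forced:
                    ok = False  # contradiction: this start colour fails
                    break
        if ok and len(colour) == n:
            return colour
    return None
```

This is exactly why the promise problem in the conjecture contrasts nearly-satisfiable instances with highly unsatisfiable ones: the perfectly satisfiable case is easy.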
It is not immediately obvious that the inapproximability of Max2Lin(k) is equivalent to the UGC, but this is in fact the case, by a reduction. Namely, the UGC is equivalent to: for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the (1 − δ, ε)-gap Max2Lin(k) problem is NP-hard. === Connection with computational topology === It has been argued that the UGC is essentially a question of computational topology, involving local-global principles (the latter are also evident in the proof of the 2-2 Games Conjecture, see below). Linial observed that unique label cover is an instance of the Maximum Section of a Covering Graph problem (covering graphs is the terminology from topology; in the context of unique games these are often referred to as graph lifts). To date, all known problems whose inapproximability is equivalent to the UGC are instances of this problem, including Unique Label Cover and Max2Lin(k). When the latter two problems are viewed as instances of Max Section of a Covering Graph, the reduction between them preserves the structure of the graph covering spaces, so not only the problems, but the reduction between them has a natural topological interpretation. Grochow and Tucker-Foltz exhibited a third computational topology problem whose inapproximability is equivalent to the UGC: 1-Cohomology Localization on Triangulations of 2-Manifolds. === Two-prover proof systems === A unique game is a special case of a two-prover one-round (2P1R) game. A two-prover one-round game has two players (also known as provers) and a referee. The referee sends each player a question drawn from a known probability distribution, and the players each have to send an answer. The answers come from a set of fixed size. The game is specified by a predicate that depends on the questions sent to the players and the answers provided by them. 
The players may decide on a strategy beforehand, although they cannot communicate with each other during the game. The players win if the predicate is satisfied by their questions and their answers. A two-prover one-round game is called a unique game if for every question and every answer by the first player, there is exactly one answer by the second player that results in a win for the players, and vice versa. The value of a game is the maximum winning probability for the players over all strategies. The unique games conjecture states that for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the following promise problem (Lyes, Lno) is NP-hard: Lyes = {G: the value of G is at least 1 − δ} Lno = {G: the value of G is at most ε} where G is a unique game whose answers come from a set of size k. === Probabilistically checkable proofs === Alternatively, the unique games conjecture postulates the existence of a certain type of probabilistically checkable proof for problems in NP. A unique game can be viewed as a special kind of nonadaptive probabilistically checkable proof with query complexity 2, where for each pair of possible queries of the verifier and each possible answer to the first query, there is exactly one possible answer to the second query that makes the verifier accept, and vice versa. The unique games conjecture states that for every sufficiently small pair of constants ε , δ > 0 {\displaystyle \varepsilon ,\delta >0} there is a constant K {\displaystyle K} such that every problem in NP has a probabilistically checkable proof over an alphabet of size K {\displaystyle K} with completeness 1 − δ {\displaystyle 1-\delta } , soundness ε {\displaystyle \varepsilon } , and randomness complexity O ( log ⁡ n ) {\displaystyle O(\log n)} which is a unique game. == Relevance == Some very natural, intrinsically interesting statements about things like voting and foams just popped out of studying the UGC.... 
Even if the UGC turns out to be false, it has inspired a lot of interesting math research. The unique games conjecture was introduced by Subhash Khot in 2002 in order to make progress on certain questions in the theory of hardness of approximation. The truth of the unique games conjecture would imply the optimality of many known approximation algorithms (assuming P ≠ NP). For example, the approximation ratio achieved by the algorithm of Goemans and Williamson for approximating the maximum cut in a graph is optimal to within any additive constant assuming the unique games conjecture and P ≠ NP. A list of results that the unique games conjecture is known to imply is shown in the adjacent table together with the corresponding best results for the weaker assumption P ≠ NP. A constant of c + ε {\displaystyle c+\varepsilon } or c − ε {\displaystyle c-\varepsilon } means that the result holds for every constant (with respect to the problem size) strictly greater than or less than c {\displaystyle c} , respectively. == Discussion and alternatives == Currently, there is no consensus regarding the truth of the unique games conjecture. Certain stronger forms of the conjecture have been disproved. A different form of the conjecture postulates that distinguishing the case when the value of a unique game is at least 1 − δ {\displaystyle 1-\delta } from the case when the value is at most ε {\displaystyle \varepsilon } is impossible for polynomial-time algorithms (but perhaps not NP-hard). This form of the conjecture would still be useful for applications in hardness of approximation. The constant δ > 0 {\displaystyle \delta >0} in the above formulations of the conjecture is necessary unless P = NP. If the uniqueness requirement is removed the corresponding statement is known to be true by the parallel repetition theorem, even when δ = 0 {\displaystyle \delta =0} . 
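For contrast with the Goemans-Williamson guarantee mentioned above (which requires solving a semidefinite program), the trivial baseline for maximum cut is easy to sketch: place each vertex greedily on the side that cuts more of its already-placed neighbours. Every edge is considered once, when its second endpoint is placed, and at least half of each vertex's placed-neighbour edges are cut, so the result cuts at least half of all edges and hence achieves ratio 1/2. This is a generic illustration, not the algorithm whose optimality the UGC would imply:

```python
def greedy_max_cut(n, edges):
    """Greedy 1/2-approximation for Max-Cut on vertices 0..n-1.

    Places each vertex on the side that cuts more of its
    already-placed neighbours, which cuts at least half of all
    edges. (Goemans-Williamson achieves ~0.878 via semidefinite
    programming, which this sketch deliberately avoids.)
    """
    side = {}
    for v in range(n):
        # Edges to placed neighbours that would be cut on each side.
        cut_if_0 = sum(1 for a, b in edges
                       if (a == v and side.get(b) == 1) or (b == v and side.get(a) == 1))
        cut_if_1 = sum(1 for a, b in edges
                       if (a == v and side.get(b) == 0) or (b == v and side.get(a) == 0))
        side[v] = 0 if cut_if_0 >= cut_if_1 else 1
    cut = sum(1 for a, b in edges if side[a] != side[b])
    return cut, side
```

Under the UGC, the interesting content is the other direction: no polynomial-time algorithm can beat the Goemans-Williamson constant, so the gap between this naive 1/2 and ~0.878 cannot be closed arbitrarily.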
== Results == Marek Karpinski and Warren Schudy have constructed linear time approximation schemes for dense instances of the unique games problem. In 2008, Prasad Raghavendra showed that if the unique games conjecture is true, then for every constraint satisfaction problem the best approximation ratio is given by a certain simple semidefinite programming instance, which can in particular be solved in polynomial time. In 2010, Prasad Raghavendra and David Steurer defined the gap-small-set expansion problem, and conjectured that it is NP-hard. The resulting small set expansion hypothesis implies the unique games conjecture. It has also been used to prove strong hardness of approximation results for finding complete bipartite subgraphs. In 2010, Sanjeev Arora, Boaz Barak and David Steurer found a subexponential time approximation algorithm for the unique games problem. A key ingredient in their result was the spectral algorithm of Alexandra Kolla (see also the earlier manuscript of A. Kolla and Madhur Tulsiani). The latter also re-proved that unique games on expander graphs could be solved in polynomial time, and was one of the first graph algorithms to take advantage of the full spectrum of a graph rather than just its first two eigenvalues. In 2012, it was shown that distinguishing instances with value at most 3/8 + δ {\displaystyle {\tfrac {3}{8}}+\delta } from instances with value at least 1/2 {\displaystyle {\tfrac {1}{2}}} is NP-hard. In 2018, after a series of papers, a weaker version of the conjecture, called the 2-2 games conjecture, was proven. In a certain sense, this proves "a half" of the original conjecture. This also improves the best known gap for unique label cover: it is NP-hard to distinguish instances with value at most δ {\displaystyle \delta } from instances with value at least 1/2 {\displaystyle {\tfrac {1}{2}}} . == References == == Further reading == Khot, Subhash (2010), "On the Unique Games Conjecture", Proc. 
25th IEEE Conference on Computational Complexity (PDF), pp. 99–121, doi:10.1109/CCC.2010.19.
Wikipedia/Unique_games_conjecture
In computability theory and computational complexity theory, especially the study of approximation algorithms, an approximation-preserving reduction is an algorithm for transforming one optimization problem into another problem, such that the distance of solutions from optimal is preserved to some degree. Approximation-preserving reductions are a subset of more general reductions in complexity theory; the difference is that approximation-preserving reductions usually make statements on approximation problems or optimization problems, as opposed to decision problems. Intuitively, problem A is reducible to problem B via an approximation-preserving reduction if, given an instance of problem A and a (possibly approximate) solver for problem B, one can convert the instance of problem A into an instance of problem B, apply the solver for problem B, and recover a solution for problem A that also has some guarantee of approximation. == Definition == Unlike reductions on decision problems, an approximation-preserving reduction must preserve more than the truth of the problem instances when reducing from one problem to another. It must also maintain some guarantee on the relationship between the cost of the solution to the cost of the optimum in both problems. To formalize: Let A and B be optimization problems. Let x be an instance of problem A, with optimal solution OPT ( x ) {\displaystyle {\text{OPT}}(x)} . Let c A ( x , y ) {\displaystyle c_{A}(x,y)} denote the cost of a solution y to an instance x of problem A. This is also the metric used to determine which solution is considered optimal. An approximation-preserving reduction is a pair of functions ( f , g ) {\displaystyle (f,g)} (which often must be computable in polynomial time), such that: f maps an instance x of A to an instance x ′ {\displaystyle x'} of B. g maps a solution y ′ {\displaystyle y'} of B to a solution y of A. 
g preserves some guarantee of the solution's performance, or approximation ratio, defined as R A ( x , y ) = max ( c A ( x , OPT ( x ) ) c A ( x , y ) , c A ( x , y ) c A ( x , OPT ( x ) ) ) {\displaystyle R_{A}(x,y)=\max \left({\frac {c_{A}(x,{\text{OPT}}(x))}{c_{A}(x,y)}},{\frac {c_{A}(x,y)}{c_{A}(x,{\text{OPT}}(x))}}\right)} . == Types == There are many different types of approximation-preserving reductions, all of which have a different guarantee (the third point in the definition above). However, unlike with other reductions, approximation-preserving reductions often overlap in what properties they demonstrate on optimization problems (e.g. complexity class membership or completeness, or inapproximability). The different types of reductions are used instead as varying reduction techniques, in that the applicable reduction which is most easily adapted to the problem is used. Not all types of approximation-preserving reductions can be used to show membership in all approximability complexity classes, the most notable of which are PTAS and APX. A reduction below preserves membership in a complexity class C if, given a problem A that reduces to problem B via the reduction scheme, and B is in C, then A is in C as well. Some reductions shown below only preserve membership in APX or PTAS, but not the other. Because of this, careful choice must be made when selecting an approximation-preserving reductions, especially for the purpose of proving completeness of a problem within a complexity class. Crescenzi suggests that the three most ideal styles of reduction, for both ease of use and proving power, are PTAS reduction, AP reduction, and L-reduction. The reduction descriptions that follow are from Crescenzi's survey of approximation-preserving reductions. === Strict reduction === Strict reduction is the simplest type of approximation-preserving reduction. 
In a strict reduction, the approximation ratio of a solution y' to an instance x' of a problem B must be at most as good as the approximation ratio of the corresponding solution y to instance x of problem A. In other words: R A ( x , y ) ≤ R B ( x ′ , y ′ ) {\displaystyle R_{A}(x,y)\leq R_{B}(x',y')} for x ′ = f ( x ) , y = g ( y ′ ) {\displaystyle x'=f(x),y=g(y')} . Strict reduction is the most straightforward: if a strict reduction from problem A to problem B exists, then problem A can always be approximated to at least as good a ratio as problem B. Strict reduction preserves membership in both PTAS and APX. There exists a similar concept of an S-reduction, for which c A ( x , y ) = c B ( x ′ , y ′ ) {\displaystyle c_{A}(x,y)=c_{B}(x',y')} , and the optima of the two corresponding instances must have the same cost as well. S-reduction is a very special case of strict reduction, and is even more constraining. In effect, the two problems A and B must be in near-perfect correspondence with each other. The existence of an S-reduction implies not only the existence of a strict reduction but every other reduction listed here. === L-reduction === L-reductions preserve membership in PTAS as well as APX (but only for minimization problems in the case of the latter). As a result, they cannot be used in general to prove completeness results about APX, Log-APX, or Poly-APX, but nevertheless they are valued for their natural formulation and ease of use in proofs. === PTAS-reduction === PTAS-reduction is another commonly used reduction scheme. Though it preserves membership in PTAS, it does not do so for APX. Nevertheless, APX-completeness is defined in terms of PTAS reductions. PTAS-reductions are a generalization of P-reductions, shown below, with the only difference being that the function g is allowed to depend on the approximation ratio r. 
=== A-reduction and P-reduction === A-reduction and P-reduction are similar reduction schemes that can be used to show membership in APX and PTAS respectively. Both introduce a new function c, defined on numbers greater than 1, which must be computable. In an A-reduction, we have that R B ( x ′ , y ′ ) ≤ r → R A ( x , y ) ≤ c ( r ) {\displaystyle R_{B}(x',y')\leq r\rightarrow R_{A}(x,y)\leq c(r)} . In a P-reduction, we have that R B ( x ′ , y ′ ) ≤ c ( r ) → R A ( x , y ) ≤ r {\displaystyle R_{B}(x',y')\leq c(r)\rightarrow R_{A}(x,y)\leq r} . The existence of a P-reduction implies the existence of a PTAS-reduction. === E-reduction === E-reduction, which is a generalization of strict reduction but implies both A-reduction and P-reduction, is an example of a less restrictive reduction style that preserves membership not only in PTAS and APX, but also the larger classes Log-APX and Poly-APX. E-reduction introduces two new parameters, a polynomial p and a constant β {\displaystyle \beta } . Its definition is as follows. In an E-reduction, we have that for some polynomial p and constant β {\displaystyle \beta } , c B ( OPT B ( x ′ ) ) ≤ p ( | x | ) c A ( OPT A ( x ) ) {\displaystyle c_{B}({\text{OPT}}_{B}(x'))\leq p(|x|)c_{A}({\text{OPT}}_{A}(x))} , where | x | {\displaystyle |x|} denotes the size of the problem instance's description. For any solution y ′ {\displaystyle y'} to B, we have R A ( x , y ) ≤ 1 + β ⋅ ( R B ( x ′ , y ′ ) − 1 ) {\displaystyle R_{A}(x,y)\leq 1+\beta \cdot (R_{B}(x',y')-1)} . To obtain an A-reduction from an E-reduction, let c ( r ) = 1 + β ⋅ ( r − 1 ) {\displaystyle c(r)=1+\beta \cdot (r-1)} , and to obtain a P-reduction from an E-reduction, let c ( r ) = 1 + ( r − 1 ) / β {\displaystyle c(r)=1+(r-1)/\beta } . === AP-reduction === AP-reductions are used to define completeness in the classes Log-APX and Poly-APX. They are a special case of PTAS reduction, meeting the following restrictions. 
In an AP-reduction, we have that for some constant α {\displaystyle \alpha } , R B ( x ′ , y ′ ) ≤ r → R A ( x , y ) ≤ 1 + α ⋅ ( r − 1 ) {\displaystyle R_{B}(x',y')\leq r\rightarrow R_{A}(x,y)\leq 1+\alpha \cdot (r-1)} with the additional generalization that the function g is allowed to depend on the approximation ratio r, as in PTAS-reduction. AP-reduction is also a generalization of E-reduction. An additional restriction actually needs to be imposed for AP-reduction to preserve Log-APX and Poly-APX membership, as E-reduction does: for fixed problem size, the computation time of f, g must be non-increasing as the approximation ratio increases. === Gap reduction === A gap reduction is a type of reduction that, while useful in proving some inapproximability results, does not resemble the other reductions shown here. Gap reductions deal with optimization problems within a decision problem container, generated by changing the problem goal to distinguishing between the optimal solution and solutions some multiplicative factor worse than the optimum. == See also == Reduction (complexity) PTAS reduction L-reduction Approximation algorithm == References ==
Wikipedia/Approximation-preserving_reduction
In computer science, hardness of approximation is a field that studies the algorithmic complexity of finding near-optimal solutions to optimization problems. == Scope == Hardness of approximation complements the study of approximation algorithms by proving, for certain problems, a limit on the factors with which their solution can be efficiently approximated. Typically such limits show a factor of approximation beyond which a problem becomes NP-hard, implying that finding a polynomial time approximation for the problem is impossible unless NP=P. Some hardness of approximation results, however, are based on other hypotheses, a notable one among which is the unique games conjecture. == History == Since the early 1970s it was known that many optimization problems could not be solved in polynomial time unless P = NP, but in many of these problems the optimal solution could be efficiently approximated to a certain degree. In the 1970s, Teofilo F. Gonzalez and Sartaj Sahni began the study of hardness of approximation, by showing that certain optimization problems were NP-hard even to approximate to within a given approximation ratio. That is, for these problems, there is a threshold such that any polynomial-time approximation with approximation ratio beyond this threshold could be used to solve NP-complete problems in polynomial time. In the early 1990s, with the development of PCP theory, it became clear that many more approximation problems were hard to approximate, and that (unless P = NP) many known approximation algorithms achieved the best possible approximation ratio. Hardness of approximation theory deals with studying the approximation threshold of such problems. == Examples == For an example of an NP-hard optimization problem that is hard to approximate, see set cover. 
== See also == PCP theorem == References == == Further reading == Trevisan, Luca (July 27, 2004), Inapproximability of Combinatorial Optimization Problems (PDF), arXiv:cs/0409043, Bibcode:2004cs........9043T == External links == CSE 533: The PCP Theorem and Hardness of Approximation, Autumn 2005, syllabus from the University of Washington, Venkatesan Guruswami and Ryan O'Donnell
Wikipedia/Hardness_of_approximation
A parameterized approximation algorithm is a type of algorithm that aims to find approximate solutions to NP-hard optimization problems in polynomial time in the input size and a function of a specific parameter. These algorithms are designed to combine the best aspects of both traditional approximation algorithms and fixed-parameter tractability. In traditional approximation algorithms, the goal is to find solutions that are at most a certain factor α away from the optimal solution, known as an α-approximation, in polynomial time. On the other hand, parameterized algorithms are designed to find exact solutions to problems, but with the constraint that the running time of the algorithm is polynomial in the input size and a function of a specific parameter k. The parameter describes some property of the input and is small in typical applications. The problem is said to be fixed-parameter tractable (FPT) if there is an algorithm that can find the optimum solution in f ( k ) n O ( 1 ) {\displaystyle f(k)n^{O(1)}} time, where f ( k ) {\displaystyle f(k)} is a function independent of the input size n. A parameterized approximation algorithm aims to find a balance between these two approaches by finding approximate solutions in FPT time: the algorithm computes an α-approximation in f ( k ) n O ( 1 ) {\displaystyle f(k)n^{O(1)}} time, where f ( k ) {\displaystyle f(k)} is a function independent of the input size n. This approach aims to overcome the limitations of both traditional approaches by having stronger guarantees on the solution quality compared to traditional approximations while still having efficient running times as in FPT algorithms. An overview of the research area studying parameterized approximation algorithms can be found in the survey of Marx and the more recent survey by Feldmann et al. 
== Obtainable approximation ratios == The full potential of parameterized approximation algorithms is utilized when a given optimization problem is shown to admit an α-approximation algorithm running in f ( k ) n O ( 1 ) {\displaystyle f(k)n^{O(1)}} time, while in contrast the problem neither has a polynomial-time α-approximation algorithm (under some complexity assumption, e.g., P ≠ N P {\displaystyle {\mathsf {P}}\neq {\mathsf {NP}}} ), nor an FPT algorithm for the given parameter k (i.e., it is at least W[1]-hard). For example, some problems that are APX-hard and W[1]-hard admit a parameterized approximation scheme (PAS), i.e., for any ε > 0 {\displaystyle \varepsilon >0} a ( 1 + ε ) {\displaystyle (1+\varepsilon )} -approximation can be computed in f ( k , ε ) n g ( ε ) {\displaystyle f(k,\varepsilon )n^{g(\varepsilon )}} time for some functions f and g. This then circumvents the lower bounds in terms of polynomial-time approximation and fixed-parameter tractability. A PAS is similar in spirit to a polynomial-time approximation scheme (PTAS) but additionally exploits a given parameter k. Since the degree of the polynomial in the runtime of a PAS depends on a function g ( ε ) {\displaystyle g(\varepsilon )} , the value of ε {\displaystyle \varepsilon } is assumed to be arbitrary but constant in order for the PAS to run in FPT time. If this assumption is unsatisfying, ε {\displaystyle \varepsilon } is treated as a parameter as well to obtain an efficient parameterized approximation scheme (EPAS), which for any ε > 0 {\displaystyle \varepsilon >0} computes a ( 1 + ε ) {\displaystyle (1+\varepsilon )} -approximation in f ( k , ε ) n O ( 1 ) {\displaystyle f(k,\varepsilon )n^{O(1)}} time for some function f. This is similar in spirit to an efficient polynomial-time approximation scheme (EPTAS). 
=== k-Cut === The k-cut problem has no polynomial-time ( 2 − ε ) {\displaystyle (2-\varepsilon )} -approximation algorithm for any ε > 0 {\displaystyle \varepsilon >0} , assuming P ≠ N P {\displaystyle {\mathsf {P}}\neq {\mathsf {NP}}} and the small set expansion hypothesis. It is also W[1]-hard parameterized by the number k of required components. However an EPAS exists, which computes a ( 1 + ε ) {\displaystyle (1+\varepsilon )} -approximation in ( k / ε ) O ( k ) n O ( 1 ) {\displaystyle (k/\varepsilon )^{O(k)}n^{O(1)}} time. === Travelling Salesman === The Travelling Salesman problem is APX-hard and paraNP-hard parameterized by the doubling dimension (as it is NP-hard in the Euclidean plane). However, an EPAS exists parameterized by the doubling dimension, and even for the more general highway dimension parameter. === Steiner Tree === The Steiner Tree problem is FPT parameterized by the number of terminals. However, for the "dual" parameter consisting of the number k of non-terminals contained in the optimum solution, the problem is W[2]-hard (due to a folklore reduction from the Dominating Set problem). Steiner Tree is also known to be APX-hard. However, there is an EPAS computing a ( 1 + ε ) {\displaystyle (1+\varepsilon )} -approximation in 2 O ( k 2 / ε 4 ) n O ( 1 ) {\displaystyle 2^{O(k^{2}/\varepsilon ^{4})}n^{O(1)}} time. The more general Steiner Forest problem is NP-hard on graphs of treewidth 3. However, on graphs of treewidth t an EPAS can compute a ( 1 + ε ) {\displaystyle (1+\varepsilon )} -approximation in 2 O ( t 2 ε log ⁡ t ε ) n O ( 1 ) {\displaystyle 2^{O({\frac {t^{2}}{\varepsilon }}\log {\frac {t}{\varepsilon }})}n^{O(1)}} time. 
=== Strongly-Connected Steiner Subgraph === It is known that the Strongly Connected Steiner Subgraph problem is W[1]-hard parameterized by the number k of terminals, and also does not admit an O ( log 2 − ε ⁡ n ) {\displaystyle O(\log ^{2-\varepsilon }n)} -approximation in polynomial time (under standard complexity assumptions). However a 2-approximation can be computed in 3 k n O ( 1 ) {\displaystyle 3^{k}n^{O(1)}} time. Furthermore, this is best possible, since no ( 2 − ε ) {\displaystyle (2-\varepsilon )} -approximation can be computed in f ( k ) n O ( 1 ) {\displaystyle f(k)n^{O(1)}} time for any function f, under Gap-ETH. === k-Median and k-Means === For the well-studied metric clustering problems of k-median and k-means parameterized by the number k of centers, it is known that no ( 1 + 2 / e − ε ) {\displaystyle (1+2/e-\varepsilon )} -approximation for k-Median and no ( 1 + 8 / e − ε ) {\displaystyle (1+8/e-\varepsilon )} -approximation for k-Means can be computed in f ( k ) n O ( 1 ) {\displaystyle f(k)n^{O(1)}} time for any function f, under Gap-ETH. Matching parameterized approximation algorithms exist, but it is not known whether matching approximations can be computed in polynomial time. Clustering is often considered in settings of low dimensional data, and thus a practically relevant parameterization is by the dimension of the underlying metric. In the Euclidean space, the k-Median and k-Means problems admit an EPAS parameterized by the dimension d, and also an EPAS parameterized by k. The former was generalized to an EPAS for the parameterization by the doubling dimension. For the loosely related highway dimension parameter, only an approximation scheme with XP runtime is known to date. === k-Center === For the metric k-center problem a 2-approximation can be computed in polynomial time. 
However, when parameterizing by either the number k of centers, the doubling dimension (in fact the dimension of a Manhattan metric), or the highway dimension, no parameterized ( 2 − ε ) {\displaystyle (2-\varepsilon )} -approximation algorithm exists, under standard complexity assumptions. Furthermore, the k-Center problem is W[1]-hard even on planar graphs when simultaneously parameterizing it by the number k of centers, the doubling dimension, the highway dimension, and the pathwidth. However, when combining k with the doubling dimension an EPAS exists, and the same is true when combining k with the highway dimension. For the more general version with vertex capacities, an EPAS exists for the parameterization by k and the doubling dimension, but not when using k and the highway dimension as the parameter. Regarding the pathwidth, k-Center admits an EPAS even for the more general treewidth parameter, and also for cliquewidth. === Densest Subgraph === An optimization variant of the k-Clique problem is the Densest k-Subgraph problem (which is a 2-ary Constraint Satisfaction problem), where the task is to find a subgraph on k vertices with the maximum number of edges. It is not hard to obtain a ( k − 1 ) {\displaystyle (k-1)} -approximation by just picking a matching of size k / 2 {\displaystyle k/2} in the given input graph, since the maximum number of edges on k vertices is always at most ( k 2 ) = k ( k − 1 ) / 2 {\displaystyle {k \choose 2}=k(k-1)/2} . This is also asymptotically optimal, since under Gap-ETH no k 1 − o ( 1 ) {\displaystyle k^{1-o(1)}} -approximation can be computed in FPT time parameterized by k. === Dominating Set === For the Dominating set problem it is W[1]-hard to compute any g ( k ) {\displaystyle g(k)} -approximation in f ( k ) n O ( 1 ) {\displaystyle f(k)n^{O(1)}} time for any functions g and f.
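The matching-based (k − 1)-approximation for Densest k-Subgraph described above is straightforward to implement: greedily collect k/2 disjoint edges and output their endpoints, which induce at least k/2 edges while any k vertices span at most k(k − 1)/2. A minimal sketch, assuming the input graph contains a matching of size k/2 (the example graph is made up):

```python
def matching_k_subgraph(edges, k):
    """Pick a greedy matching of k // 2 disjoint edges; their endpoints
    induce at least k // 2 edges, giving a (k - 1)-approximation."""
    matching, vertices = [], set()
    for u, v in edges:
        if len(matching) == k // 2:
            break
        if u not in vertices and v not in vertices:
            matching.append((u, v))
            vertices.update((u, v))
    return vertices, matching

# Hypothetical example graph on 6 vertices.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5)]
vertices, matching = matching_k_subgraph(edges, 4)
```

Here the greedy matching {(0,1), (2,3)} yields the vertex set {0, 1, 2, 3}, which in fact induces four edges, already matching the optimum on this instance.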
== Approximate kernelization == Kernelization is a technique used in fixed-parameter tractability to pre-process an instance of an NP-hard problem in order to remove "easy parts" and reveal the NP-hard core of the instance. A kernelization algorithm takes an instance I and a parameter k, and returns a new instance I ′ {\displaystyle I'} with parameter k ′ {\displaystyle k'} such that the size of I ′ {\displaystyle I'} and k ′ {\displaystyle k'} is bounded as a function of the input parameter k, and the algorithm runs in polynomial time. An α-approximate kernelization algorithm is a variation of this technique that is used in parameterized approximation algorithms. It returns a kernel I ′ {\displaystyle I'} such that any β-approximation in I ′ {\displaystyle I'} can be converted into an αβ-approximation to the input instance I in polynomial time. This notion was introduced by Lokshtanov et al., but there are other related notions in the literature such as Turing kernels and α-fidelity kernelization. As for regular (non-approximate) kernels, a problem admits an α-approximate kernelization algorithm if and only if it has a parameterized α-approximation algorithm. The proof of this fact is very similar to the one for regular kernels. However the guaranteed approximate kernel might be of exponential size (or worse) in the input parameter. Hence it becomes interesting to find problems that admit polynomial sized approximate kernels. Furthermore, a polynomial-sized approximate kernelization scheme (PSAKS) is an α-approximate kernelization algorithm that computes a polynomial-sized kernel and for which α can be set to 1 + ε {\displaystyle 1+\varepsilon } for any ε > 0 {\displaystyle \varepsilon >0} . For example, while the Connected Vertex Cover problem is FPT parameterized by the solution size, it does not admit a (regular) polynomial sized kernel (unless NP ⊆ coNP/poly {\displaystyle {\textsf {NP}}\subseteq {\textsf {coNP/poly}}} ), but a PSAKS exists. 
Similarly, the Steiner Tree problem is FPT parameterized by the number of terminals and does not admit a polynomial sized kernel (unless NP ⊆ coNP/poly {\displaystyle {\textsf {NP}}\subseteq {\textsf {coNP/poly}}} ), but a PSAKS exists. When parameterizing Steiner Tree by the number of non-terminals in the optimum solution, the problem is W[2]-hard (and thus admits no exact kernel at all, unless FPT=W[2]), but still admits a PSAKS. == Talks on parameterized approximations == Daniel Lokshtanov: A Parameterized Approximation Scheme for k-Min Cut Tuukka Korhonen: Single-Exponential Time 2-Approximation Algorithm for Treewidth Karthik C. S.: Recent Hardness of Approximation results in Parameterized Complexity Ariel Kulik: Two-variable Recurrence Relations with Application to Parameterized Approximations Meirav Zehavi: FPT Approximation Vincent Cohen-Addad: On the Parameterized Complexity of Various Clustering Problems Fahad Panolan: Parameterized Approximation for Independent Set of Rectangles Andreas Emil Feldmann: Approximate Kernelization Schemes for Steiner Networks == References ==
Wikipedia/Parameterized_approximation_algorithm
In mathematics, a super vector space is a Z 2 {\displaystyle \mathbb {Z} _{2}} -graded vector space, that is, a vector space over a field K {\displaystyle \mathbb {K} } with a given decomposition into subspaces of grade 0 {\displaystyle 0} and grade 1 {\displaystyle 1} . The study of super vector spaces and their generalizations is sometimes called super linear algebra. These objects find their principal application in theoretical physics where they are used to describe the various algebraic aspects of supersymmetry. == Definitions == A super vector space is a Z 2 {\displaystyle \mathbb {Z} _{2}} -graded vector space with decomposition V = V 0 ⊕ V 1 , 0 , 1 ∈ Z 2 = Z / 2 Z . {\displaystyle V=V_{0}\oplus V_{1},\quad 0,1\in \mathbb {Z} _{2}=\mathbb {Z} /2\mathbb {Z} .} Vectors that are elements of either V 0 {\displaystyle V_{0}} or V 1 {\displaystyle V_{1}} are said to be homogeneous. The parity of a nonzero homogeneous element, denoted by | x | {\displaystyle |x|} , is 0 {\displaystyle 0} or 1 {\displaystyle 1} according to whether it is in V 0 {\displaystyle V_{0}} or V 1 {\displaystyle V_{1}} , | x | = { 0 x ∈ V 0 1 x ∈ V 1 {\displaystyle |x|={\begin{cases}0&x\in V_{0}\\1&x\in V_{1}\end{cases}}} Vectors of parity 0 {\displaystyle 0} are called even and those of parity 1 {\displaystyle 1} are called odd. In theoretical physics, the even elements are sometimes called Bose elements or bosonic, and the odd elements Fermi elements or fermionic. Definitions for super vector spaces are often given only in terms of homogeneous elements and then extended to nonhomogeneous elements by linearity. If V {\displaystyle V} is finite-dimensional and the dimensions of V 0 {\displaystyle V_{0}} and V 1 {\displaystyle V_{1}} are p {\displaystyle p} and q {\displaystyle q} respectively, then V {\displaystyle V} is said to have dimension p | q {\displaystyle p|q} .
The standard super coordinate space, denoted K p | q {\displaystyle \mathbb {K} ^{p|q}} , is the ordinary coordinate space K p + q {\displaystyle \mathbb {K} ^{p+q}} where the even subspace is spanned by the first p {\displaystyle p} coordinate basis vectors and the odd space is spanned by the last q {\displaystyle q} . A homogeneous subspace of a super vector space is a linear subspace that is spanned by homogeneous elements. Homogeneous subspaces are super vector spaces in their own right (with the obvious grading). For any super vector space V {\displaystyle V} , one can define the parity reversed space Π V {\displaystyle \Pi V} to be the super vector space with the even and odd subspaces interchanged. That is, ( Π V ) 0 = V 1 , ( Π V ) 1 = V 0 . {\displaystyle {\begin{aligned}(\Pi V)_{0}&=V_{1},\\(\Pi V)_{1}&=V_{0}.\end{aligned}}} == Linear transformations == A homomorphism, a morphism in the category of super vector spaces, from one super vector space to another is a grade-preserving linear transformation. A linear transformation f : V → W {\displaystyle f:V\rightarrow W} between super vector spaces is grade preserving if f ( V i ) ⊂ W i , i = 0 , 1. {\displaystyle f(V_{i})\subset W_{i},\quad i=0,1.} That is, it maps the even elements of V {\displaystyle V} to even elements of W {\displaystyle W} and odd elements of V {\displaystyle V} to odd elements of W {\displaystyle W} . An isomorphism of super vector spaces is a bijective homomorphism. The set of all homomorphisms V → W {\displaystyle V\rightarrow W} is denoted H o m ( V , W ) {\displaystyle \mathrm {Hom} (V,W)} . Every linear transformation, not necessarily grade-preserving, from one super vector space to another can be written uniquely as the sum of a grade-preserving transformation and a grade-reversing one—that is, a transformation f : V → W {\displaystyle f:V\rightarrow W} such that f ( V i ) ⊂ W 1 − i , i = 0 , 1. 
{\displaystyle f(V_{i})\subset W_{1-i},\quad i=0,1.} Declaring the grade-preserving transformations to be even and the grade-reversing ones to be odd gives the space of all linear transformations from V {\displaystyle V} to W {\displaystyle W} , denoted H o m ( V , W ) {\displaystyle \mathbf {Hom} (V,W)} and called internal H o m {\displaystyle \mathrm {Hom} } , the structure of a super vector space. In particular, ( H o m ( V , W ) ) 0 = H o m ( V , W ) . {\displaystyle \left(\mathbf {Hom} (V,W)\right)_{0}=\mathrm {Hom} (V,W).} A grade-reversing transformation from V {\displaystyle V} to W {\displaystyle W} can be regarded as a homomorphism from V {\displaystyle V} to the parity reversed space Π W {\displaystyle \Pi W} , so that H o m ( V , W ) = H o m ( V , W ) ⊕ H o m ( V , Π W ) = H o m ( V , W ) ⊕ H o m ( Π V , W ) . {\displaystyle \mathbf {Hom} (V,W)=\mathrm {Hom} (V,W)\oplus \mathrm {Hom} (V,\Pi W)=\mathrm {Hom} (V,W)\oplus \mathrm {Hom} (\Pi V,W).} == Operations on super vector spaces == The usual algebraic constructions for ordinary vector spaces have their counterpart in the super vector space setting. === Dual space === The dual space V ∗ {\displaystyle V^{*}} of a super vector space V {\displaystyle V} can be regarded as a super vector space by taking the even functionals to be those that vanish on V 1 {\displaystyle V_{1}} and the odd functionals to be those that vanish on V 0 {\displaystyle V_{0}} . Equivalently, one can define V ∗ {\displaystyle V^{*}} to be the space of linear maps from V {\displaystyle V} to K 1 | 0 {\displaystyle \mathbb {K} ^{1|0}} (the base field K {\displaystyle \mathbb {K} } thought of as a purely even super vector space) with the gradation given in the previous section. === Direct sum === Direct sums of super vector spaces are constructed as in the ungraded case with the grading given by ( V ⊕ W ) 0 = V 0 ⊕ W 0 , {\displaystyle (V\oplus W)_{0}=V_{0}\oplus W_{0},} ( V ⊕ W ) 1 = V 1 ⊕ W 1 . 
{\displaystyle (V\oplus W)_{1}=V_{1}\oplus W_{1}.} === Tensor product === One can also construct tensor products of super vector spaces. Here the additive structure of Z 2 {\displaystyle \mathbb {Z} _{2}} comes into play. The underlying space is as in the ungraded case with the grading given by ( V ⊗ W ) i = ⨁ j + k = i V j ⊗ W k , {\displaystyle (V\otimes W)_{i}=\bigoplus _{j+k=i}V_{j}\otimes W_{k},} where the indices are in Z 2 {\displaystyle \mathbb {Z} _{2}} . Specifically, one has ( V ⊗ W ) 0 = ( V 0 ⊗ W 0 ) ⊕ ( V 1 ⊗ W 1 ) , {\displaystyle (V\otimes W)_{0}=(V_{0}\otimes W_{0})\oplus (V_{1}\otimes W_{1}),} ( V ⊗ W ) 1 = ( V 0 ⊗ W 1 ) ⊕ ( V 1 ⊗ W 0 ) . {\displaystyle (V\otimes W)_{1}=(V_{0}\otimes W_{1})\oplus (V_{1}\otimes W_{0}).} == Supermodules == Just as one may generalize vector spaces over a field to modules over a commutative ring, one may generalize super vector spaces over a field to supermodules over a supercommutative algebra (or ring). A common construction when working with super vector spaces is to enlarge the field of scalars to a supercommutative Grassmann algebra. Given a field K {\displaystyle \mathbb {K} } let R = K [ θ 1 , ⋯ , θ N ] {\displaystyle R=\mathbb {K} [\theta _{1},\cdots ,\theta _{N}]} denote the Grassmann algebra generated by N {\displaystyle N} anticommuting odd elements θ i {\displaystyle \theta _{i}} . Any super vector space V {\displaystyle V} over K {\displaystyle \mathbb {K} } can be embedded in a module over R {\displaystyle R} by considering the (graded) tensor product K [ θ 1 , ⋯ , θ N ] ⊗ V . {\displaystyle \mathbb {K} [\theta _{1},\cdots ,\theta _{N}]\otimes V.} == The category of super vector spaces == The category of super vector spaces, denoted by K − S V e c t {\displaystyle \mathbb {K} -\mathrm {SVect} } , is the category whose objects are super vector spaces (over a fixed field K {\displaystyle \mathbb {K} } ) and whose morphisms are even linear transformations (i.e. the grade preserving ones).
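On the level of graded dimensions, the constructions above combine as (p|q) ⊕ (r|s) = (p+r | q+s) and, by the Z2 grading of the tensor product, (p|q) ⊗ (r|s) = (pr+qs | ps+qr). A minimal sketch that tracks only the pair of graded dimensions, not the spaces themselves:

```python
def direct_sum(a, b):
    """(p|q) ⊕ (r|s) = (p + r | q + s)."""
    (p, q), (r, s) = a, b
    return (p + r, q + s)

def tensor(a, b):
    """(p|q) ⊗ (r|s): even part V0⊗W0 ⊕ V1⊗W1, odd part V0⊗W1 ⊕ V1⊗W0."""
    (p, q), (r, s) = a, b
    return (p * r + q * s, p * s + q * r)

def parity_reverse(a):
    """Π swaps the even and odd subspaces."""
    p, q = a
    return (q, p)

V, W = (2, 1), (1, 3)        # dimensions of K^{2|1} and K^{1|3}
```

Total dimensions behave as expected: tensor((2, 1), (1, 3)) gives (5, 7), and 5 + 7 = (2 + 1)(1 + 3), as for ordinary tensor products of vector spaces.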
The categorical approach to super linear algebra is to first formulate definitions and theorems regarding ordinary (ungraded) algebraic objects in the language of category theory and then transfer these directly to the category of super vector spaces. This leads to a treatment of "superobjects" such as superalgebras, Lie superalgebras, supergroups, etc. that is completely analogous to their ungraded counterparts. The category K − S V e c t {\displaystyle \mathbb {K} -\mathrm {SVect} } is a monoidal category with the super tensor product as the monoidal product and the purely even super vector space K 1 | 0 {\displaystyle \mathbb {K} ^{1|0}} as the unit object. The involutive braiding operator τ V , W : V ⊗ W → W ⊗ V , {\displaystyle \tau _{V,W}:V\otimes W\rightarrow W\otimes V,} given by τ V , W ( x ⊗ y ) = ( − 1 ) | x | | y | y ⊗ x {\displaystyle \tau _{V,W}(x\otimes y)=(-1)^{|x||y|}y\otimes x} on homogeneous elements, turns K − S V e c t {\displaystyle \mathbb {K} -\mathrm {SVect} } into a symmetric monoidal category. This commutativity isomorphism encodes the "rule of signs" that is essential to super linear algebra. It effectively says that a minus sign is picked up whenever two odd elements are interchanged. One need not worry about signs in the categorical setting as long as the above operator is used wherever appropriate. K − S V e c t {\displaystyle \mathbb {K} -\mathrm {SVect} } is also a closed monoidal category with the internal Hom object, H o m ( V , W ) {\displaystyle \mathbf {Hom} (V,W)} , given by the super vector space of all linear maps from V {\displaystyle V} to W {\displaystyle W} . The ordinary H o m {\displaystyle \mathrm {Hom} } set H o m ( V , W ) {\displaystyle \mathrm {Hom} (V,W)} is the even subspace therein: H o m ( V , W ) = H o m ( V , W ) 0 . 
{\displaystyle \mathrm {Hom} (V,W)=\mathbf {Hom} (V,W)_{0}.} The fact that K − S V e c t {\displaystyle \mathbb {K} -\mathrm {SVect} } is closed means that the functor − ⊗ V {\displaystyle -\otimes V} is left adjoint to the functor H o m ( V , − ) {\displaystyle \mathbf {Hom} (V,-)} , giving a natural bijection H o m ( U ⊗ V , W ) ≅ H o m ( U , H o m ( V , W ) ) . {\displaystyle \mathrm {Hom} (U\otimes V,W)\cong \mathrm {Hom} (U,\mathbf {Hom} (V,W)).} == Superalgebra == A superalgebra over K {\displaystyle \mathbb {K} } can be described as a super vector space A {\displaystyle {\mathcal {A}}} with a multiplication map μ : A ⊗ A → A , {\displaystyle \mu :{\mathcal {A}}\otimes {\mathcal {A}}\to {\mathcal {A}},} that is a super vector space homomorphism. This is equivalent to demanding | a b | = | a | + | b | , a , b ∈ A {\displaystyle |ab|=|a|+|b|,\quad a,b\in {\mathcal {A}}} Associativity and the existence of an identity can be expressed with the usual commutative diagrams, so that a unital associative superalgebra over K {\displaystyle \mathbb {K} } is a monoid in the category K − S V e c t {\displaystyle \mathbb {K} -\mathrm {SVect} } . == Notes == == References == Deligne, P.; Morgan, J. W. (1999). "Notes on Supersymmetry (following Joseph Bernstein)". Quantum Fields and Strings: A Course for Mathematicians. Vol. 1. American Mathematical Society. pp. 41–97. ISBN 0-8218-2012-5 – via IAS. Varadarajan, V. S. (2004). Supersymmetry for Mathematicians: An Introduction. Courant Lecture Notes in Mathematics. Vol. 11. American Mathematical Society. ISBN 978-0-8218-3574-6.
Wikipedia/Super_linear_algebra
In mathematics, a supercommutative (associative) algebra is a superalgebra (i.e. a Z2-graded algebra) such that for any two homogeneous elements x, y we have y x = ( − 1 ) | x | | y | x y , {\displaystyle yx=(-1)^{|x||y|}xy,} where |x| denotes the grade of the element and is 0 or 1 (in Z2) according to whether the grade is even or odd, respectively. Equivalently, it is a superalgebra where the supercommutator [ x , y ] = x y − ( − 1 ) | x | | y | y x {\displaystyle [x,y]=xy-(-1)^{|x||y|}yx} always vanishes. Algebraic structures which supercommute in the above sense are sometimes referred to as skew-commutative associative algebras to emphasize the anti-commutation, or, to emphasize the grading, graded-commutative or, if the supercommutativity is understood, simply commutative. Any commutative algebra is a supercommutative algebra if given the trivial gradation (i.e. all elements are even). Grassmann algebras (also known as exterior algebras) are the most common examples of nontrivial supercommutative algebras. The supercenter of any superalgebra is the set of elements that supercommute with all elements, and is a supercommutative algebra. The even subalgebra of a supercommutative algebra is always a commutative algebra. That is, even elements always commute. Odd elements, on the other hand, always anticommute. That is, x y + y x = 0 {\displaystyle xy+yx=0\,} for odd x and y. In particular, the square of any odd element x vanishes whenever 2 is invertible: x 2 = 0. {\displaystyle x^{2}=0.} Thus a commutative superalgebra (with 2 invertible and nonzero degree one component) always contains nilpotent elements. A Z-graded anticommutative algebra with the property that x2 = 0 for every element x of odd grade (irrespective of whether 2 is invertible) is called an alternating algebra. == See also == Graded-commutative ring Lie superalgebra == References ==
Wikipedia/Supercommutative_algebra
In algebra, the center of a ring R is the subring consisting of the elements x such that xy = yx for all elements y in R. It is a commutative ring and is denoted as Z(R); 'Z' stands for the German word Zentrum, meaning "center". If R is a ring, then R is an associative algebra over its center. Conversely, if R is an associative algebra over a commutative subring S, then S is a subring of the center of R, and if S happens to be the center of R, then the algebra R is called a central algebra. == Examples == The center of a commutative ring R is R itself. The center of a skew-field is a field. The center of the (full) matrix ring with entries in a commutative ring R consists of R-scalar multiples of the identity matrix. Let F be a field extension of a field k, and R an algebra over k. Then Z(R ⊗k F) = Z(R) ⊗k F. The center of the universal enveloping algebra of a Lie algebra plays an important role in the representation theory of Lie algebras. For example, a Casimir element is an element of such a center that is used to analyze Lie algebra representations. See also: Harish-Chandra isomorphism. The center of a simple algebra is a field. == See also == Center of a group Central simple algebra Morita equivalence == Notes == == References ==
Wikipedia/Center_of_an_algebra
In computer science, graph traversal (also known as graph search) refers to the process of visiting (checking and/or updating) each vertex in a graph. Such traversals are classified by the order in which the vertices are visited. Tree traversal is a special case of graph traversal. == Redundancy == Unlike tree traversal, graph traversal may require that some vertices be visited more than once, since it is not necessarily known before transitioning to a vertex that it has already been explored. As graphs become more dense, this redundancy becomes more prevalent, causing computation time to increase; as graphs become more sparse, the opposite holds true. Thus, it is usually necessary to remember which vertices have already been explored by the algorithm, so that vertices are revisited as infrequently as possible (or in the worst case, to prevent the traversal from continuing indefinitely). This may be accomplished by associating each vertex of the graph with a "color" or "visitation" state during the traversal, which is then checked and updated as the algorithm visits each vertex. If the vertex has already been visited, it is ignored and the path is pursued no further; otherwise, the algorithm checks/updates the vertex and continues down its current path. Several special cases of graphs imply the visitation of other vertices in their structure, and thus do not require that visitation be explicitly recorded during the traversal. An important example of this is a tree: during a traversal it may be assumed that all "ancestor" vertices of the current vertex (and others depending on the algorithm) have already been visited. Both the depth-first and breadth-first graph searches are adaptations of tree-based algorithms, distinguished primarily by the lack of a structurally determined "root" vertex and the addition of a data structure to record the traversal's visitation state. == Graph traversal algorithms == Note. 
— If each vertex in a graph is to be traversed by a tree-based algorithm (such as DFS or BFS), then the algorithm must be called at least once for each connected component of the graph. This is easily accomplished by iterating through all the vertices of the graph, performing the algorithm on each vertex that is still unvisited when examined. === Depth-first search === A depth-first search (DFS) is an algorithm for traversing a finite graph. DFS visits the child vertices before visiting the sibling vertices; that is, it traverses the depth of any particular path before exploring its breadth. A stack (often the program's call stack via recursion) is generally used when implementing the algorithm. The algorithm begins with a chosen "root" vertex; it then iteratively transitions from the current vertex to an adjacent, unvisited vertex, until it can no longer find an unexplored vertex to transition to from its current location. The algorithm then backtracks along previously visited vertices, until it finds a vertex connected to yet more uncharted territory. It will then proceed down the new path as it had before, backtracking as it encounters dead-ends, and ending only when the algorithm has backtracked past the original "root" vertex from the very first step. DFS is the basis for many graph-related algorithms, including topological sorts and planarity testing. ==== Pseudocode ====
Input: A graph G and a vertex v of G.
Output: A labeling of the edges in the connected component of v as discovery edges and back edges.

procedure DFS(G, v) is
    label v as explored
    for all edges e in G.incidentEdges(v) do
        if edge e is unexplored then
            w ← G.adjacentVertex(v, e)
            if vertex w is unexplored then
                label e as a discovery edge
                recursively call DFS(G, w)
            else
                label e as a back edge

=== Breadth-first search === A breadth-first search (BFS) is another technique for traversing a finite graph.
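The DFS pseudocode above can be turned into a short runnable sketch; here the graph is an undirected adjacency list and each edge is keyed as an unordered pair so it is labeled only once (the helper names are illustrative, not from any particular library):

```python
def dfs(graph, v, explored=None, labels=None):
    """Label each edge of v's connected component as a discovery or back edge."""
    if explored is None:
        explored, labels = set(), {}
    explored.add(v)                       # label v as explored
    for w in graph[v]:
        edge = frozenset((v, w))          # undirected edge, order-independent
        if edge not in labels:            # edge e is unexplored
            if w not in explored:
                labels[edge] = "discovery"
                dfs(graph, w, explored, labels)
            else:
                labels[edge] = "back"
    return labels

# A 4-cycle: three discovery edges and exactly one back edge.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
labels = dfs(graph, 0)
```

Replacing the recursion (the implicit stack) with an explicit queue of vertices yields the breadth-first variant discussed next.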
BFS visits the sibling vertices before visiting the child vertices, and a queue is used in the search process. This algorithm is often used to find the shortest path from one vertex to another. ==== Pseudocode ====
Input: A graph G and a vertex v of G.
Output: The closest vertex to v satisfying some conditions, or null if no such vertex exists.

procedure BFS(G, v) is
    create a queue Q
    enqueue v onto Q
    mark v
    while Q is not empty do
        w ← Q.dequeue()
        if w is what we are looking for then
            return w
        for all edges e in G.adjacentEdges(w) do
            x ← G.adjacentVertex(w, e)
            if x is not marked then
                mark x
                enqueue x onto Q
    return null

== Applications == Breadth-first search can be used to solve many problems in graph theory, for example: finding all vertices within one connected component; Cheney's algorithm; finding the shortest path between two vertices; testing a graph for bipartiteness; Cuthill–McKee algorithm mesh numbering; Ford–Fulkerson algorithm for computing the maximum flow in a flow network; serialization/deserialization of a binary tree vs serialization in sorted order (allows the tree to be re-constructed in an efficient manner); maze generation algorithms; flood fill algorithm for marking contiguous regions of a two dimensional image or n-dimensional array; analysis of networks and relationships. == Graph exploration == The problem of graph exploration can be seen as a variant of graph traversal. It is an online problem, meaning that the information about the graph is only revealed during the runtime of the algorithm. A common model is as follows: given a connected graph G = (V, E) with non-negative edge weights. The algorithm starts at some vertex, and knows all incident outgoing edges and the vertices at the end of these edges—but not more. When a new vertex is visited, then again all incident outgoing edges and the vertices at the end are known.
The goal is to visit all n vertices and return to the starting vertex, but the sum of the weights of the tour should be as small as possible. The problem can also be understood as a specific version of the travelling salesman problem, where the salesman has to discover the graph on the go. For general graphs, the best known algorithms for both undirected and directed graphs are simple greedy algorithms: In the undirected case, the greedy tour is at most O(ln n)-times longer than an optimal tour. The best lower bound known for any deterministic online algorithm is 10/3. Unit weight undirected graphs can be explored with a competitive ratio of 2 − ε, which is already a tight bound on tadpole graphs. In the directed case, the greedy tour is at most (n − 1)-times longer than an optimal tour. This matches the lower bound of n − 1. An analogous competitive lower bound of Ω(n) also holds for randomized algorithms that know the coordinates of each node in a geometric embedding. If instead of visiting all nodes just a single "treasure" node has to be found, the competitive bounds are Θ(n^2) on unit weight directed graphs, for both deterministic and randomized algorithms. == Universal traversal sequences == A universal traversal sequence is a sequence of instructions comprising a graph traversal for any regular graph with a set number of vertices and for any starting vertex. A probabilistic proof was used by Aleliunas et al. to show that there exists a universal traversal sequence with number of instructions proportional to O(n^5) for any regular graph with n vertices. The steps specified in the sequence are relative to the current node, not absolute. For example, if the current node is v_j, and v_j has d neighbors, then the traversal sequence will specify the next node to visit, v_{j+1}, as the ith neighbor of v_j, where 1 ≤ i ≤ d. == See also == External memory graph traversal == References ==
Wikipedia/Graph_search
In mathematics, a submodular set function (also known as a submodular function) is a set function that, informally, describes the relationship between a set of inputs and an output, where adding more of one input has a decreasing additional benefit (diminishing returns). This natural diminishing returns property makes submodular functions suitable for many applications, including approximation algorithms, game theory (as functions modeling user preferences) and electrical networks. Recently, submodular functions have also found utility in several real world problems in machine learning and artificial intelligence, including automatic summarization, multi-document summarization, feature selection, active learning, sensor placement, image collection summarization and many other domains. == Definition == If Ω {\displaystyle \Omega } is a finite set, a submodular function is a set function f : 2 Ω → R {\displaystyle f:2^{\Omega }\rightarrow \mathbb {R} } , where 2 Ω {\displaystyle 2^{\Omega }} denotes the power set of Ω {\displaystyle \Omega } , which satisfies one of the following equivalent conditions. For every X , Y ⊆ Ω {\displaystyle X,Y\subseteq \Omega } with X ⊆ Y {\displaystyle X\subseteq Y} and every x ∈ Ω ∖ Y {\displaystyle x\in \Omega \setminus Y} we have that f ( X ∪ { x } ) − f ( X ) ≥ f ( Y ∪ { x } ) − f ( Y ) {\displaystyle f(X\cup \{x\})-f(X)\geq f(Y\cup \{x\})-f(Y)} . For every S , T ⊆ Ω {\displaystyle S,T\subseteq \Omega } we have that f ( S ) + f ( T ) ≥ f ( S ∪ T ) + f ( S ∩ T ) {\displaystyle f(S)+f(T)\geq f(S\cup T)+f(S\cap T)} . For every X ⊆ Ω {\displaystyle X\subseteq \Omega } and x 1 , x 2 ∈ Ω ∖ X {\displaystyle x_{1},x_{2}\in \Omega \backslash X} such that x 1 ≠ x 2 {\displaystyle x_{1}\neq x_{2}} we have that f ( X ∪ { x 1 } ) + f ( X ∪ { x 2 } ) ≥ f ( X ∪ { x 1 , x 2 } ) + f ( X ) {\displaystyle f(X\cup \{x_{1}\})+f(X\cup \{x_{2}\})\geq f(X\cup \{x_{1},x_{2}\})+f(X)} .
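On a small ground set, the second condition above can be checked exhaustively. For instance, any concave function of the cardinality, such as f(S) = √|S|, is submodular, while a convex one such as f(S) = |S|² is not. A minimal brute-force sketch:

```python
from itertools import combinations
from math import sqrt

def subsets(ground):
    """All subsets of the ground set, as frozensets."""
    return [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]

def is_submodular(f, ground):
    """Check f(S) + f(T) >= f(S ∪ T) + f(S ∩ T) for all pairs S, T
    (with a small tolerance for floating-point equality cases)."""
    subs = subsets(ground)
    return all(f(S) + f(T) >= f(S | T) + f(S & T) - 1e-9
               for S in subs for T in subs)

ground = frozenset(range(4))
ok = is_submodular(lambda S: sqrt(len(S)), ground)    # concave of |S|
bad = is_submodular(lambda S: len(S) ** 2, ground)    # convex of |S|
```

The convex example fails already for two disjoint singletons: 1 + 1 < 4 + 0. This exhaustive check runs in time exponential in |Ω|, so it is a sanity check for tiny instances, not a practical test.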
A nonnegative submodular function is also a subadditive function, but a subadditive function need not be submodular. If Ω {\displaystyle \Omega } is not assumed finite, then the above conditions are not equivalent. In particular a function f {\displaystyle f} defined by f ( S ) = 1 {\displaystyle f(S)=1} if S {\displaystyle S} is finite and f ( S ) = 0 {\displaystyle f(S)=0} if S {\displaystyle S} is infinite satisfies the first condition above, but the second condition fails when S {\displaystyle S} and T {\displaystyle T} are infinite sets with finite intersection. == Types and examples of submodular functions == === Monotone === A set function f {\displaystyle f} is monotone if for every T ⊆ S {\displaystyle T\subseteq S} we have that f ( T ) ≤ f ( S ) {\displaystyle f(T)\leq f(S)} . Examples of monotone submodular functions include: Linear (Modular) functions Any function of the form f ( S ) = ∑ i ∈ S w i {\displaystyle f(S)=\sum _{i\in S}w_{i}} is called a linear function. Additionally if ∀ i , w i ≥ 0 {\displaystyle \forall i,w_{i}\geq 0} then f is monotone. Budget-additive functions Any function of the form f ( S ) = min { B , ∑ i ∈ S w i } {\displaystyle f(S)=\min \left\{B,~\sum _{i\in S}w_{i}\right\}} for each w i ≥ 0 {\displaystyle w_{i}\geq 0} and B ≥ 0 {\displaystyle B\geq 0} is called budget additive. Coverage functions Let Ω = { E 1 , E 2 , … , E n } {\displaystyle \Omega =\{E_{1},E_{2},\ldots ,E_{n}\}} be a collection of subsets of some ground set Ω ′ {\displaystyle \Omega '} . The function f ( S ) = | ⋃ E i ∈ S E i | {\displaystyle f(S)=\left|\bigcup _{E_{i}\in S}E_{i}\right|} for S ⊆ Ω {\displaystyle S\subseteq \Omega } is called a coverage function. This can be generalized by adding non-negative weights to the elements. Entropy Let Ω = { X 1 , X 2 , … , X n } {\displaystyle \Omega =\{X_{1},X_{2},\ldots ,X_{n}\}} be a set of random variables. 
Then for any S ⊆ Ω {\displaystyle S\subseteq \Omega } we have that H ( S ) {\displaystyle H(S)} is a submodular function, where H ( S ) {\displaystyle H(S)} is the entropy of the set of random variables S {\displaystyle S} , a fact known as Shannon's inequality. Further inequalities for the entropy function are known to hold, see entropic vector. Matroid rank functions Let Ω = { e 1 , e 2 , … , e n } {\displaystyle \Omega =\{e_{1},e_{2},\dots ,e_{n}\}} be the ground set on which a matroid is defined. Then the rank function of the matroid is a submodular function. === Non-monotone === A submodular function that is not monotone is called non-monotone. In particular, a function is called non-monotone if it has the property that adding more elements to a set can decrease the value of the function. More formally, the function f {\displaystyle f} is non-monotone if there are sets S , T {\displaystyle S,T} in its domain s.t. S ⊂ T {\displaystyle S\subset T} and f ( S ) > f ( T ) {\displaystyle f(S)>f(T)} . ==== Symmetric ==== A non-monotone submodular function f {\displaystyle f} is called symmetric if for every S ⊆ Ω {\displaystyle S\subseteq \Omega } we have that f ( S ) = f ( Ω − S ) {\displaystyle f(S)=f(\Omega -S)} . Examples of symmetric non-monotone submodular functions include: Graph cuts Let Ω = { v 1 , v 2 , … , v n } {\displaystyle \Omega =\{v_{1},v_{2},\dots ,v_{n}\}} be the vertices of a graph. For any set of vertices S ⊆ Ω {\displaystyle S\subseteq \Omega } let f ( S ) {\displaystyle f(S)} denote the number of edges e = ( u , v ) {\displaystyle e=(u,v)} such that u ∈ S {\displaystyle u\in S} and v ∈ Ω − S {\displaystyle v\in \Omega -S} . This can be generalized by adding non-negative weights to the edges. Mutual information Let Ω = { X 1 , X 2 , … , X n } {\displaystyle \Omega =\{X_{1},X_{2},\ldots ,X_{n}\}} be a set of random variables. 
Then for any S ⊆ Ω {\displaystyle S\subseteq \Omega } we have that f ( S ) = I ( S ; Ω − S ) {\displaystyle f(S)=I(S;\Omega -S)} is a submodular function, where I ( S ; Ω − S ) {\displaystyle I(S;\Omega -S)} is the mutual information. ==== Asymmetric ==== A non-monotone submodular function which is not symmetric is called asymmetric. Directed cuts Let Ω = { v 1 , v 2 , … , v n } {\displaystyle \Omega =\{v_{1},v_{2},\dots ,v_{n}\}} be the vertices of a directed graph. For any set of vertices S ⊆ Ω {\displaystyle S\subseteq \Omega } let f ( S ) {\displaystyle f(S)} denote the number of edges e = ( u , v ) {\displaystyle e=(u,v)} such that u ∈ S {\displaystyle u\in S} and v ∈ Ω − S {\displaystyle v\in \Omega -S} . This can be generalized by adding non-negative weights to the directed edges. == Continuous extensions of submodular set functions == Often, given a submodular set function that describes the values of various sets, we need to compute the values of fractional sets. For example: we know that the value of receiving house A and house B is V, and we want to know the value of receiving 40% of house A and 60% of house B. To this end, we need a continuous extension of the submodular set function. Formally, a set function f : 2 Ω → R {\displaystyle f:2^{\Omega }\rightarrow \mathbb {R} } with | Ω | = n {\displaystyle |\Omega |=n} can be represented as a function on { 0 , 1 } n {\displaystyle \{0,1\}^{n}} , by associating each S ⊆ Ω {\displaystyle S\subseteq \Omega } with a binary vector x S ∈ { 0 , 1 } n {\displaystyle x^{S}\in \{0,1\}^{n}} such that x i S = 1 {\displaystyle x_{i}^{S}=1} when i ∈ S {\displaystyle i\in S} , and x i S = 0 {\displaystyle x_{i}^{S}=0} otherwise. A continuous extension of f {\displaystyle f} is a continuous function F : [ 0 , 1 ] n → R {\displaystyle F:[0,1]^{n}\rightarrow \mathbb {R} } , that matches the value of f {\displaystyle f} on x ∈ { 0 , 1 } n {\displaystyle x\in \{0,1\}^{n}} , i.e. 
F ( x S ) = f ( S ) {\displaystyle F(x^{S})=f(S)} . Several kinds of continuous extensions of submodular functions are commonly used, which are described below. === Lovász extension === This extension is named after mathematician László Lovász. Consider any vector x = { x 1 , x 2 , … , x n } {\displaystyle \mathbf {x} =\{x_{1},x_{2},\dots ,x_{n}\}} such that each 0 ≤ x i ≤ 1 {\displaystyle 0\leq x_{i}\leq 1} . Then the Lovász extension is defined as f L ( x ) = E ( f ( { i | x i ≥ λ } ) ) {\displaystyle f^{L}(\mathbf {x} )=\mathbb {E} (f(\{i|x_{i}\geq \lambda \}))} where the expectation is over λ {\displaystyle \lambda } chosen from the uniform distribution on the interval [ 0 , 1 ] {\displaystyle [0,1]} . The Lovász extension is a convex function if and only if f {\displaystyle f} is a submodular function. === Multilinear extension === Consider any vector x = { x 1 , x 2 , … , x n } {\displaystyle \mathbf {x} =\{x_{1},x_{2},\ldots ,x_{n}\}} such that each 0 ≤ x i ≤ 1 {\displaystyle 0\leq x_{i}\leq 1} . Then the multilinear extension is defined as F ( x ) = ∑ S ⊆ Ω f ( S ) ∏ i ∈ S x i ∏ i ∉ S ( 1 − x i ) {\displaystyle F(\mathbf {x} )=\sum _{S\subseteq \Omega }f(S)\prod _{i\in S}x_{i}\prod _{i\notin S}(1-x_{i})} . Intuitively, xi represents the probability that item i is chosen for the set. For every set S, the two inner products represent the probability that the chosen set is exactly S. Therefore, the sum represents the expected value of f for the set formed by choosing each item i at random with probability xi, independently of the other items. === Convex closure === Consider any vector x = { x 1 , x 2 , … , x n } {\displaystyle \mathbf {x} =\{x_{1},x_{2},\dots ,x_{n}\}} such that each 0 ≤ x i ≤ 1 {\displaystyle 0\leq x_{i}\leq 1} . 
Then the convex closure is defined as f − ( x ) = min ( ∑ S α S f ( S ) : ∑ S α S 1 S = x , ∑ S α S = 1 , α S ≥ 0 ) {\displaystyle f^{-}(\mathbf {x} )=\min \left(\sum _{S}\alpha _{S}f(S):\sum _{S}\alpha _{S}1_{S}=\mathbf {x} ,\sum _{S}\alpha _{S}=1,\alpha _{S}\geq 0\right)} . The convex closure of any set function is convex over [ 0 , 1 ] n {\displaystyle [0,1]^{n}} . === Concave closure === Consider any vector x = { x 1 , x 2 , … , x n } {\displaystyle \mathbf {x} =\{x_{1},x_{2},\dots ,x_{n}\}} such that each 0 ≤ x i ≤ 1 {\displaystyle 0\leq x_{i}\leq 1} . Then the concave closure is defined as f + ( x ) = max ( ∑ S α S f ( S ) : ∑ S α S 1 S = x , ∑ S α S = 1 , α S ≥ 0 ) {\displaystyle f^{+}(\mathbf {x} )=\max \left(\sum _{S}\alpha _{S}f(S):\sum _{S}\alpha _{S}1_{S}=\mathbf {x} ,\sum _{S}\alpha _{S}=1,\alpha _{S}\geq 0\right)} . === Relations between continuous extensions === For the extensions discussed above, it can be shown that f + ( x ) ≥ F ( x ) ≥ f − ( x ) = f L ( x ) {\displaystyle f^{+}(\mathbf {x} )\geq F(\mathbf {x} )\geq f^{-}(\mathbf {x} )=f^{L}(\mathbf {x} )} when f {\displaystyle f} is submodular. == Properties == The class of submodular functions is closed under non-negative linear combinations. Consider any submodular function f 1 , f 2 , … , f k {\displaystyle f_{1},f_{2},\ldots ,f_{k}} and non-negative numbers α 1 , α 2 , … , α k {\displaystyle \alpha _{1},\alpha _{2},\ldots ,\alpha _{k}} . Then the function g {\displaystyle g} defined by g ( S ) = ∑ i = 1 k α i f i ( S ) {\displaystyle g(S)=\sum _{i=1}^{k}\alpha _{i}f_{i}(S)} is submodular. For any submodular function f {\displaystyle f} , the function defined by g ( S ) = f ( Ω ∖ S ) {\displaystyle g(S)=f(\Omega \setminus S)} is submodular. The function g ( S ) = min ( f ( S ) , c ) {\displaystyle g(S)=\min(f(S),c)} , where c {\displaystyle c} is a real number, is submodular whenever f {\displaystyle f} is monotone submodular. 
More generally, g ( S ) = h ( f ( S ) ) {\displaystyle g(S)=h(f(S))} is submodular, for any non decreasing concave function h {\displaystyle h} . Consider a random process where a set T {\displaystyle T} is chosen with each element in Ω {\displaystyle \Omega } being included in T {\displaystyle T} independently with probability p {\displaystyle p} . Then the following inequality is true E [ f ( T ) ] ≥ p f ( Ω ) + ( 1 − p ) f ( ∅ ) {\displaystyle \mathbb {E} [f(T)]\geq pf(\Omega )+(1-p)f(\varnothing )} where ∅ {\displaystyle \varnothing } is the empty set. More generally consider the following random process where a set S {\displaystyle S} is constructed as follows. For each of 1 ≤ i ≤ l , A i ⊆ Ω {\displaystyle 1\leq i\leq l,A_{i}\subseteq \Omega } construct S i {\displaystyle S_{i}} by including each element in A i {\displaystyle A_{i}} independently into S i {\displaystyle S_{i}} with probability p i {\displaystyle p_{i}} . Furthermore let S = ∪ i = 1 l S i {\displaystyle S=\cup _{i=1}^{l}S_{i}} . Then the following inequality is true E [ f ( S ) ] ≥ ∑ R ⊆ [ l ] Π i ∈ R p i Π i ∉ R ( 1 − p i ) f ( ∪ i ∈ R A i ) {\displaystyle \mathbb {E} [f(S)]\geq \sum _{R\subseteq [l]}\Pi _{i\in R}p_{i}\Pi _{i\notin R}(1-p_{i})f(\cup _{i\in R}A_{i})} . == Optimization problems == Submodular functions have properties which are very similar to convex and concave functions. For this reason, an optimization problem which concerns optimizing a convex or concave function can also be described as the problem of maximizing or minimizing a submodular function subject to some constraints. === Submodular set function minimization === The hardness of minimizing a submodular set function depends on constraints imposed on the problem. The unconstrained problem of minimizing a submodular function is computable in polynomial time, and even in strongly-polynomial time. Computing the minimum cut in a graph is a special case of this minimization problem. 
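Although unconstrained submodular minimization is solvable in polynomial time, the minimum-cut special case mentioned above can be illustrated on a tiny graph by exhaustive search over proper, nonempty vertex subsets. This is purely illustrative (the real algorithms are far more sophisticated); all names are ours.

```python
from itertools import combinations

def cut_value(edges, S):
    """The (symmetric submodular) cut function: number of edges with
    exactly one endpoint in S."""
    return sum((u in S) != (v in S) for u, v in edges)

def min_cut_bruteforce(vertices, edges):
    """Exhaustively minimize the cut function over proper, nonempty
    vertex subsets -- illustration only."""
    verts = list(vertices)
    best, best_S = float("inf"), None
    for r in range(1, len(verts)):
        for combo in combinations(verts, r):
            S = set(combo)
            value = cut_value(edges, S)
            if value < best:
                best, best_S = value, S
    return best, best_S
```

For two triangles joined by a single bridge edge, the search finds a minimum cut of value 1, separating the two triangles.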
The problem of minimizing a submodular function with a cardinality lower bound is NP-hard, with polynomial factor lower bounds on the approximation factor. === Submodular set function maximization === Unlike the case of minimization, maximizing a generic submodular function is NP-hard even in the unconstrained setting. Thus, most of the works in this field are concerned with polynomial-time approximation algorithms, including greedy algorithms or local search algorithms. The problem of maximizing a non-negative submodular function admits a 1/2 approximation algorithm. Computing the maximum cut of a graph is a special case of this problem. The problem of maximizing a monotone submodular function subject to a cardinality constraint admits a 1 − 1 / e {\displaystyle 1-1/e} approximation algorithm. The maximum coverage problem is a special case of this problem. The problem of maximizing a monotone submodular function subject to a matroid constraint (which subsumes the case above) also admits a 1 − 1 / e {\displaystyle 1-1/e} approximation algorithm. Many of these algorithms can be unified within a semi-differential based framework of algorithms. === Related optimization problems === Apart from submodular minimization and maximization, there are several other natural optimization problems related to submodular functions. Minimizing the difference between two submodular functions is not only NP hard, but also inapproximable. Minimization/maximization of a submodular function subject to a submodular level set constraint (also known as submodular optimization subject to submodular cover or submodular knapsack constraint) admits bounded approximation guarantees. Partitioning data based on a submodular function to maximize the average welfare is known as the submodular welfare problem, which also admits bounded approximation guarantees (see welfare maximization). 
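The greedy algorithm achieving the 1 − 1/e guarantee for maximizing a monotone submodular function under a cardinality constraint can be sketched as follows; names are illustrative, and the example instance is maximum coverage.

```python
def greedy_max(f, ground, k):
    """Greedy maximization of a monotone submodular f subject to
    |S| <= k; achieves a (1 - 1/e)-approximation."""
    S = frozenset()
    for _ in range(k):
        rest = ground - S
        if not rest:
            break
        gains = {x: f(S | {x}) - f(S) for x in rest}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break                     # no remaining element helps
        S |= {best}
    return S

# Example: maximum coverage with k = 2
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}
cover = lambda S: len(set().union(*[sets[i] for i in S]))
print(cover(greedy_max(cover, set(sets), 2)))   # 5
```

Here the greedy rule picks "a" (gain 3) and then "c" (gain 2), covering all five elements, which happens to be optimal on this instance.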
== Applications == Submodular functions naturally occur in several real world applications, in economics, game theory, machine learning and computer vision. Owing to the diminishing returns property, submodular functions naturally model costs of items, since there is often a larger discount, with an increase in the items one buys. Submodular functions model notions of complexity, similarity and cooperation when they appear in minimization problems. In maximization problems, on the other hand, they model notions of diversity, information and coverage. == See also == Supermodular function Matroid, Polymatroid Utility functions on indivisible goods == Citations == == References == Schrijver, Alexander (2003), Combinatorial Optimization, Springer, ISBN 3-540-44389-4 Lee, Jon (2004), A First Course in Combinatorial Optimization, Cambridge University Press, ISBN 0-521-01012-8 Fujishige, Satoru (2005), Submodular Functions and Optimization, Elsevier, ISBN 0-444-52086-4 Narayanan, H. (1997), Submodular Functions and Electrical Networks, Elsevier, ISBN 0-444-82523-1 Oxley, James G. (1992), Matroid theory, Oxford Science Publications, Oxford: Oxford University Press, ISBN 0-19-853563-5, Zbl 0784.05002 == External links == http://www.cs.berkeley.edu/~stefje/references.html has a longer bibliography http://submodularity.org/ includes further material on the subject
Wikipedia/Submodular_set_function
In graph theory, graph coloring is a methodical assignment of labels traditionally called "colors" to elements of a graph. The assignment is subject to certain constraints, such as that no two adjacent elements have the same color. Graph coloring is a special case of graph labeling. In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices are of the same color; this is called a vertex coloring. Similarly, an edge coloring assigns a color to each edge so that no two adjacent edges are of the same color, and a face coloring of a planar graph assigns a color to each face (or region) so that no two faces that share a boundary have the same color. Vertex coloring is often used to introduce graph coloring problems, since other coloring problems can be transformed into a vertex coloring instance. For example, an edge coloring of a graph is just a vertex coloring of its line graph, and a face coloring of a plane graph is just a vertex coloring of its dual. However, non-vertex coloring problems are often stated and studied as-is. This is partly pedagogical, and partly because some problems are best studied in their non-vertex form, as in the case of edge coloring. The convention of using colors originates from coloring the countries in a political map, where each face is literally colored. This was generalized to coloring the faces of a graph embedded in the plane. By planar duality it became coloring the vertices, and in this form it generalizes to all graphs. In mathematical and computer representations, it is typical to use the first few positive or non-negative integers as the "colors". In general, one can use any finite set as the "color set". The nature of the coloring problem depends on the number of colors but not on what they are. Graph coloring enjoys many practical applications as well as theoretical challenges. 
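The reduction from edge coloring to vertex coloring of the line graph can be made concrete; the representations below (edges as pairs, adjacency as a set of pairs) are our own.

```python
from itertools import combinations

def line_graph(edges):
    """Adjacency of L(G): two edges of G are adjacent in L(G) exactly
    when they share an endpoint."""
    return {(e, f) for e, f in combinations(edges, 2) if set(e) & set(f)}

def is_proper(adjacency, coloring):
    """A coloring is proper when no adjacent pair shares a color."""
    return all(coloring[u] != coloring[v] for u, v in adjacency)

# A proper edge coloring of a triangle is exactly a proper vertex
# coloring of its line graph (which is again a triangle):
triangle = [("a", "b"), ("b", "c"), ("c", "a")]
L = line_graph(triangle)
print(is_proper(L, {("a", "b"): 1, ("b", "c"): 2, ("c", "a"): 3}))   # True
```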
Beside the classical types of problems, different limitations can also be set on the graph, or on the way a color is assigned, or even on the color itself. It has even reached popularity with the general public in the form of the popular number puzzle Sudoku. Graph coloring is still a very active field of research. Note: Many terms used in this article are defined in Glossary of graph theory. == History == The first results about graph coloring deal almost exclusively with planar graphs in the form of map coloring. While trying to color a map of the counties of England, Francis Guthrie postulated the four color conjecture, noting that four colors were sufficient to color the map so that no regions sharing a common border received the same color. Guthrie's brother passed on the question to his mathematics teacher Augustus De Morgan at University College, who mentioned it in a letter to William Hamilton in 1852. Arthur Cayley raised the problem at a meeting of the London Mathematical Society in 1879. The same year, Alfred Kempe published a paper that claimed to establish the result, and for a decade the four color problem was considered solved. For his accomplishment Kempe was elected a Fellow of the Royal Society and later President of the London Mathematical Society. In 1890, Percy John Heawood pointed out that Kempe's argument was wrong. However, in that paper he proved the five color theorem, saying that every planar map can be colored with no more than five colors, using ideas of Kempe. In the following century, a vast amount of work was done and theories were developed to reduce the number of colors to four, until the four color theorem was finally proved in 1976 by Kenneth Appel and Wolfgang Haken. The proof went back to the ideas of Heawood and Kempe and largely disregarded the intervening developments. The proof of the four color theorem is noteworthy, aside from its solution of a century-old problem, for being the first major computer-aided proof. 
In 1912, George David Birkhoff introduced the chromatic polynomial to study the coloring problem, which was generalised to the Tutte polynomial by W. T. Tutte, both of which are important invariants in algebraic graph theory. Kempe had already drawn attention to the general, non-planar case in 1879, and many results on generalisations of planar graph coloring to surfaces of higher order followed in the early 20th century. In 1960, Claude Berge formulated another conjecture about graph coloring, the strong perfect graph conjecture, originally motivated by an information-theoretic concept called the zero-error capacity of a graph introduced by Shannon. The conjecture remained unresolved for 40 years, until it was established as the celebrated strong perfect graph theorem by Chudnovsky, Robertson, Seymour, and Thomas in 2002. Graph coloring has been studied as an algorithmic problem since the early 1970s: the chromatic number problem (see section § Vertex coloring below) is one of Karp's 21 NP-complete problems from 1972, and at approximately the same time various exponential-time algorithms were developed based on backtracking and on the deletion-contraction recurrence of Zykov (1949). One of the major applications of graph coloring, register allocation in compilers, was introduced in 1981. == Definition and terminology == === Vertex coloring === When used without any qualification, a coloring of a graph almost always refers to a proper vertex coloring, namely a labeling of the graph's vertices with colors such that no two vertices sharing the same edge have the same color. Since a vertex with a loop (i.e. a connection directly back to itself) could never be properly colored, it is understood that graphs in this context are loopless. The terminology of using colors for vertex labels goes back to map coloring. 
Labels like red and blue are only used when the number of colors is small, and normally it is understood that the labels are drawn from the integers {1, 2, 3, ...}. A coloring using at most k colors is called a (proper) k-coloring. The smallest number of colors needed to color a graph G is called its chromatic number, and is often denoted χ(G). Sometimes γ(G) is used, since χ(G) is also used to denote the Euler characteristic of a graph. A graph that can be assigned a (proper) k-coloring is k-colorable, and it is k-chromatic if its chromatic number is exactly k. A subset of vertices assigned to the same color is called a color class; every such class forms an independent set. Thus, a k-coloring is the same as a partition of the vertex set into k independent sets, and the terms k-partite and k-colorable have the same meaning. === Chromatic polynomial === The chromatic polynomial counts the number of ways a graph can be colored using some of a given number of colors. For example, using three colors, the graph in the adjacent image can be colored in 12 ways. With only two colors, it cannot be colored at all. With four colors, it can be colored in 24 + 4 × 12 = 72 ways: using all four colors, there are 4! = 24 valid colorings (every assignment of four colors to any 4-vertex graph is a proper coloring); and for every choice of three of the four colors, there are 12 valid 3-colorings. So, for the graph in the example, a table of the number of valid colorings would start like this: The chromatic polynomial is a function P(G, t) that counts the number of t-colorings of G. As the name indicates, for a given G the function is indeed a polynomial in t. For the example graph, P(G, t) = t(t − 1)2(t − 2), and indeed P(G, 4) = 72. The chromatic polynomial includes more information about the colorability of G than does the chromatic number. Indeed, χ is the smallest positive integer that is not a zero of the chromatic polynomial χ(G) = min{k : P(G, k) > 0}. 
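The values above can be reproduced by brute force. One graph whose chromatic polynomial is t(t − 1)²(t − 2) is a triangle with a pendant vertex; whether this matches the article's image is not specified here, but the polynomial identity is easy to check.

```python
from itertools import product

def count_colorings(n_vertices, edges, t):
    """Count proper t-colorings by brute force -- this is P(G, t)."""
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in product(range(t), repeat=n_vertices))

# A triangle with a pendant vertex: P(G, t) = t(t-1)^2(t-2)
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
for t in (2, 3, 4):
    assert count_colorings(4, edges, t) == t * (t - 1) ** 2 * (t - 2)
print(count_colorings(4, edges, 4))   # 72
```

This confirms the figures in the text: 0 colorings with two colors, 12 with three, and 72 with four.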
=== Edge coloring === An edge coloring of a graph is a proper coloring of the edges, meaning an assignment of colors to edges so that no vertex is incident to two edges of the same color. An edge coloring with k colors is called a k-edge-coloring and is equivalent to the problem of partitioning the edge set into k matchings. The smallest number of colors needed for an edge coloring of a graph G is the chromatic index, or edge chromatic number, χ′(G). A Tait coloring is a 3-edge coloring of a cubic graph. The four color theorem is equivalent to the assertion that every planar cubic bridgeless graph admits a Tait coloring. === Total coloring === Total coloring is a type of coloring on the vertices and edges of a graph. When used without any qualification, a total coloring is always assumed to be proper in the sense that no adjacent vertices, no adjacent edges, and no edge and its end-vertices are assigned the same color. The total chromatic number χ″(G) of a graph G is the fewest colors needed in any total coloring of G. === Face coloring === For a graph with a strong embedding on a surface, the face coloring is the dual of the vertex coloring problem. === Tutte's flow theory === For a graph G with a strong embedding on an orientable surface, William T. Tutte discovered that if the graph is k-face-colorable then G admits a nowhere-zero k-flow. The equivalence holds if the surface is the sphere. === Unlabeled coloring === An unlabeled coloring of a graph is an orbit of a coloring under the action of the automorphism group of the graph. The colors remain labeled; it is the graph that is unlabeled. There is an analogue of the chromatic polynomial which counts the number of unlabeled colorings of a graph from a given finite color set. If we interpret a coloring of a graph on d vertices as a vector in Z d {\displaystyle \mathbb {Z} ^{d}} , the action of an automorphism is a permutation of the coefficients in the coloring vector. 
== Properties == === Upper bounds on the chromatic number === Assigning distinct colors to distinct vertices always yields a proper coloring, so 1 ≤ χ ( G ) ≤ n . {\displaystyle 1\leq \chi (G)\leq n.} The only graphs that can be 1-colored are edgeless graphs. A complete graph K n {\displaystyle K_{n}} of n vertices requires χ ( K n ) = n {\displaystyle \chi (K_{n})=n} colors. In an optimal coloring there must be at least one of the graph's m edges between every pair of color classes, so χ ( G ) ( χ ( G ) − 1 ) ≤ 2 m . {\displaystyle \chi (G)(\chi (G)-1)\leq 2m.} More generally a family F {\displaystyle {\mathcal {F}}} of graphs is χ-bounded if there is some function c {\displaystyle c} such that the graphs G {\displaystyle G} in F {\displaystyle {\mathcal {F}}} can be colored with at most c ( ω ( G ) ) {\displaystyle c(\omega (G))} colors, where ω ( G ) {\displaystyle \omega (G)} is the clique number of G {\displaystyle G} . For the family of perfect graphs this function is c ( ω ( G ) ) = ω ( G ) {\displaystyle c(\omega (G))=\omega (G)} . The 2-colorable graphs are exactly the bipartite graphs, including trees and forests. By the four color theorem, every planar graph can be 4-colored. A greedy coloring shows that every graph can be colored with one more color than the maximum vertex degree, χ ( G ) ≤ Δ ( G ) + 1. {\displaystyle \chi (G)\leq \Delta (G)+1.} Complete graphs have χ ( G ) = n {\displaystyle \chi (G)=n} and Δ ( G ) = n − 1 {\displaystyle \Delta (G)=n-1} , and odd cycles have χ ( G ) = 3 {\displaystyle \chi (G)=3} and Δ ( G ) = 2 {\displaystyle \Delta (G)=2} , so for these graphs this bound is best possible. In all other cases, the bound can be slightly improved; Brooks' theorem states that χ ( G ) ≤ Δ ( G ) {\displaystyle \chi (G)\leq \Delta (G)} for a connected, simple graph G, unless G is a complete graph or an odd cycle. 
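The greedy coloring behind the Δ(G) + 1 bound can be sketched as follows (function names are ours). On the odd cycle C5 it uses exactly Δ + 1 = 3 colors, matching the tight case noted above.

```python
def greedy_coloring(adj):
    """Color vertices in the given order with the smallest color not
    used by an already-colored neighbor; uses at most Delta + 1 colors."""
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# The 5-cycle: Delta = 2, and greedy needs 3 colors (chi(C5) = 3).
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = greedy_coloring(c5)
print(max(coloring.values()) + 1)   # 3
```

Note that the number of colors greedy actually uses depends on the vertex order; the Δ + 1 bound holds for every order.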
=== Lower bounds on the chromatic number === Several lower bounds for the chromatic bounds have been discovered over the years: If G contains a clique of size k, then at least k colors are needed to color that clique; in other words, the chromatic number is at least the clique number: χ ( G ) ≥ ω ( G ) . {\displaystyle \chi (G)\geq \omega (G).} For perfect graphs this bound is tight. Finding cliques is known as the clique problem. Hoffman's bound: Let W {\displaystyle W} be a real symmetric matrix such that W i , j = 0 {\displaystyle W_{i,j}=0} whenever ( i , j ) {\displaystyle (i,j)} is not an edge in G {\displaystyle G} . Define χ W ( G ) = 1 − λ max ( W ) λ min ( W ) {\displaystyle \chi _{W}(G)=1-{\tfrac {\lambda _{\max }(W)}{\lambda _{\min }(W)}}} , where λ max ( W ) , λ min ( W ) {\displaystyle \lambda _{\max }(W),\lambda _{\min }(W)} are the largest and smallest eigenvalues of W {\displaystyle W} . Define χ H ( G ) = max W χ W ( G ) {\textstyle \chi _{H}(G)=\max _{W}\chi _{W}(G)} , with W {\displaystyle W} as above. Then: χ H ( G ) ≤ χ ( G ) . {\displaystyle \chi _{H}(G)\leq \chi (G).} Vector chromatic number: Let W {\displaystyle W} be a positive semi-definite matrix such that W i , j ≤ − 1 k − 1 {\displaystyle W_{i,j}\leq -{\tfrac {1}{k-1}}} whenever ( i , j ) {\displaystyle (i,j)} is an edge in G {\displaystyle G} . Define χ V ( G ) {\displaystyle \chi _{V}(G)} to be the least k for which such a matrix W {\displaystyle W} exists. Then χ V ( G ) ≤ χ ( G ) . {\displaystyle \chi _{V}(G)\leq \chi (G).} Lovász number: The Lovász number of a complementary graph is also a lower bound on the chromatic number: ϑ ( G ¯ ) ≤ χ ( G ) . {\displaystyle \vartheta ({\bar {G}})\leq \chi (G).} Fractional chromatic number: The fractional chromatic number of a graph is a lower bound on the chromatic number as well: χ f ( G ) ≤ χ ( G ) . {\displaystyle \chi _{f}(G)\leq \chi (G).} These bounds are ordered as follows: χ H ( G ) ≤ χ V ( G ) ≤ ϑ ( G ¯ ) ≤ χ f ( G ) ≤ χ ( G ) . 
{\displaystyle \chi _{H}(G)\leq \chi _{V}(G)\leq \vartheta ({\bar {G}})\leq \chi _{f}(G)\leq \chi (G).} === Graphs with high chromatic number === Graphs with large cliques have a high chromatic number, but the converse does not hold. The Grötzsch graph is an example of a 4-chromatic graph without a triangle, and the example can be generalized to the Mycielskians. Theorem (William T. Tutte 1947, Alexander Zykov 1949, Jan Mycielski 1955): There exist triangle-free graphs with arbitrarily high chromatic number. To prove this, Mycielski and Zykov each gave a construction of an inductively defined family of triangle-free graphs with arbitrarily large chromatic number. Burling (1965) constructed axis-aligned boxes in R 3 {\displaystyle \mathbb {R} ^{3}} whose intersection graph is triangle-free and requires arbitrarily many colors to be properly colored. This family of graphs is known as the Burling graphs. The same class of graphs is used by Pawlik et al. (2014) to construct a family of line segments in the plane whose intersection graph is triangle-free yet has arbitrarily large chromatic number. Hence, this implies that axis-aligned boxes in R 3 {\displaystyle \mathbb {R} ^{3}} as well as line segments in R 2 {\displaystyle \mathbb {R} ^{2}} are not χ-bounded. From Brooks's theorem, graphs with high chromatic number must have high maximum degree. But colorability is not an entirely local phenomenon: A graph with high girth looks locally like a tree, because all cycles are long, but its chromatic number need not be 2: Theorem (Erdős): There exist graphs of arbitrarily high girth and chromatic number. === Bounds on the chromatic index === An edge coloring of G is a vertex coloring of its line graph L ( G ) {\displaystyle L(G)} , and vice versa. Thus, χ ′ ( G ) = χ ( L ( G ) ) . 
{\displaystyle \chi '(G)=\chi (L(G)).} There is a strong relationship between edge colorability and the graph's maximum degree Δ ( G ) {\displaystyle \Delta (G)} . Since all edges incident to the same vertex need their own color, we have χ ′ ( G ) ≥ Δ ( G ) . {\displaystyle \chi '(G)\geq \Delta (G).} Moreover, Kőnig's theorem: χ ′ ( G ) = Δ ( G ) {\displaystyle \chi '(G)=\Delta (G)} if G is bipartite. In general, the relationship is even stronger than what Brooks's theorem gives for vertex coloring: Vizing's Theorem: A graph of maximal degree Δ {\displaystyle \Delta } has edge-chromatic number Δ {\displaystyle \Delta } or Δ + 1 {\displaystyle \Delta +1} . === Other properties === A graph has a k-coloring if and only if it has an acyclic orientation for which the longest path has length at most k; this is the Gallai–Hasse–Roy–Vitaver theorem (Nešetřil & Ossona de Mendez 2012). For planar graphs, vertex colorings are essentially dual to nowhere-zero flows. About infinite graphs, much less is known. The following are two of the few results about infinite graph coloring: If all finite subgraphs of an infinite graph G are k-colorable, then so is G, under the assumption of the axiom of choice. This is the de Bruijn–Erdős theorem of de Bruijn & Erdős (1951). If a graph admits a full n-coloring for every n ≥ n0, it admits an infinite full coloring (Fawcett 1978). === Open problems === As stated above, ω ( G ) ≤ χ ( G ) ≤ Δ ( G ) + 1. {\displaystyle \omega (G)\leq \chi (G)\leq \Delta (G)+1.} A conjecture of Reed from 1998 is that the value is essentially closer to the lower bound, χ ( G ) ≤ ⌈ ω ( G ) + Δ ( G ) + 1 2 ⌉ . {\displaystyle \chi (G)\leq \left\lceil {\frac {\omega (G)+\Delta (G)+1}{2}}\right\rceil .} The chromatic number of the plane, where two points are adjacent if they have unit distance, is unknown, although it is one of 5, 6, or 7. 
Other open problems concerning the chromatic number of graphs include the Hadwiger conjecture stating that every graph with chromatic number k has a complete graph on k vertices as a minor, the Erdős–Faber–Lovász conjecture bounding the chromatic number of unions of complete graphs that have at most one vertex in common to each pair, and the Albertson conjecture that among k-chromatic graphs the complete graphs are the ones with smallest crossing number. When Birkhoff and Lewis introduced the chromatic polynomial in their attack on the four-color theorem, they conjectured that for planar graphs G, the polynomial P ( G , t ) {\displaystyle P(G,t)} has no zeros in the region [ 4 , ∞ ) {\displaystyle [4,\infty )} . Although it is known that such a chromatic polynomial has no zeros in the region [ 5 , ∞ ) {\displaystyle [5,\infty )} and that P ( G , 4 ) ≠ 0 {\displaystyle P(G,4)\neq 0} , their conjecture is still unresolved. It also remains an unsolved problem to characterize graphs which have the same chromatic polynomial and to determine which polynomials are chromatic. == Algorithms == === Polynomial time === Determining if a graph can be colored with 2 colors is equivalent to determining whether or not the graph is bipartite, and thus computable in linear time using breadth-first search or depth-first search. More generally, the chromatic number and a corresponding coloring of perfect graphs can be computed in polynomial time using semidefinite programming. Closed formulas for chromatic polynomials are known for many classes of graphs, such as forests, chordal graphs, cycles, wheels, and ladders, so these can be evaluated in polynomial time. If the graph is planar and has low branch-width (or is nonplanar but with a known branch-decomposition), then it can be solved in polynomial time using dynamic programming. In general, the time required is polynomial in the graph size, but exponential in the branch-width. 
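The linear-time 2-colorability test mentioned above can be sketched with breadth-first search; names are illustrative.

```python
from collections import deque

def two_color(adj):
    """Try to 2-color the graph by breadth-first search; returns a
    color dict, or None when some component contains an odd cycle
    (i.e. the graph is not bipartite)."""
    color = {}
    for s in adj:
        if s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None          # odd cycle found
    return color
```

An even cycle is 2-colored successfully, while an odd cycle is rejected.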
=== Exact algorithms === Brute-force search for a k-coloring considers each of the k n {\displaystyle k^{n}} assignments of k colors to n vertices and checks for each if it is legal. To compute the chromatic number and the chromatic polynomial, this procedure is used for every k = 1 , … , n − 1 {\displaystyle k=1,\ldots ,n-1} , impractical for all but the smallest input graphs. Using dynamic programming and a bound on the number of maximal independent sets, k-colorability can be decided in time and space O ( 2.4423 n ) {\displaystyle O(2.4423^{n})} . Using the principle of inclusion–exclusion and Yates's algorithm for the fast zeta transform, k-colorability can be decided in time O ( 2 n n ) {\displaystyle O(2^{n}n)} for any k. Faster algorithms are known for 3- and 4-colorability, which can be decided in time O ( 1.3289 n ) {\displaystyle O(1.3289^{n})} and O ( 1.7272 n ) {\displaystyle O(1.7272^{n})} , respectively. Exponentially faster algorithms are also known for 5- and 6-colorability, as well as for restricted families of graphs, including sparse graphs. === Contraction === The contraction G / u v {\displaystyle G/uv} of a graph G is the graph obtained by identifying the vertices u and v, and removing any edges between them. The remaining edges originally incident to u or v are now incident to their identification (i.e., the new fused node uv). This operation plays a major role in the analysis of graph coloring. The chromatic number satisfies the recurrence relation: χ ( G ) = min { χ ( G + u v ) , χ ( G / u v ) } {\displaystyle \chi (G)={\text{min}}\{\chi (G+uv),\chi (G/uv)\}} due to Zykov (1949), where u and v are non-adjacent vertices, and G + u v {\displaystyle G+uv} is the graph with the edge uv added. Several algorithms are based on evaluating this recurrence and the resulting computation tree is sometimes called a Zykov tree. The running time is based on a heuristic for choosing the vertices u and v. 
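The brute-force procedure described above can be sketched directly. The function below is a minimal illustration (our own naming, not an optimized solver): it tries k = 1, 2, … and enumerates all k^n color assignments until a legal one is found, so it is only usable for very small graphs.

```python
from itertools import product

def chromatic_number(n, edges):
    """Brute-force chromatic number of a graph on vertices 0..n-1.

    Tries k = 1, 2, ... and checks each of the k**n assignments of
    k colors to n vertices for legality, as described in the text.
    """
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            # A coloring is legal if no edge joins two same-colored vertices.
            if all(coloring[u] != coloring[v] for u, v in edges):
                return k
    return n  # n colors always suffice

# A triangle and the odd cycle C5 both need 3 colors.
triangle = [(0, 1), (1, 2), (0, 2)]
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
```

The k^n growth is visible immediately: already at n = 20 the inner loop is infeasible, which is why the dynamic-programming and inclusion–exclusion methods above matter.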
The chromatic polynomial satisfies the following recurrence relation P ( G − u v , k ) = P ( G / u v , k ) + P ( G , k ) {\displaystyle P(G-uv,k)=P(G/uv,k)+P(G,k)} where u and v are adjacent vertices, and G − u v {\displaystyle G-uv} is the graph with the edge uv removed. P ( G − u v , k ) {\displaystyle P(G-uv,k)} represents the number of possible proper colorings of the graph, where u and v may have the same or different colors. Then the proper colorings arise from two different graphs. To explain, if the vertices u and v have different colors, then we might as well consider a graph where u and v are adjacent. If u and v have the same colors, we might as well consider a graph where u and v are contracted. Tutte's curiosity about which other graph properties satisfied this recurrence led him to discover a bivariate generalization of the chromatic polynomial, the Tutte polynomial. These expressions give rise to a recursive procedure called the deletion–contraction algorithm, which forms the basis of many algorithms for graph coloring. The running time satisfies the same recurrence relation as the Fibonacci numbers, so in the worst case the algorithm runs in time within a polynomial factor of ( 1 + 5 2 ) n + m = O ( 1.6180 n + m ) {\displaystyle \left({\tfrac {1+{\sqrt {5}}}{2}}\right)^{n+m}=O(1.6180^{n+m})} for n vertices and m edges. The analysis can be improved to within a polynomial factor of the number t ( G ) {\displaystyle t(G)} of spanning trees of the input graph. In practice, branch and bound strategies and graph isomorphism rejection are employed to avoid some recursive calls. The running time depends on the heuristic used to pick the vertex pair.
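The deletion–contraction recurrence translates directly into a recursive procedure. The sketch below (illustrative only; names are ours) rearranges the recurrence above as P(G, k) = P(G − uv, k) − P(G/uv, k), with the edgeless graph as base case P = k^n, and therefore runs in the exponential time just discussed.

```python
def chrom_poly(vertices, edges, k):
    """Evaluate the chromatic polynomial P(G, k) by deletion-contraction.

    vertices: a set of vertex labels; edges: a set of frozenset({u, v}) pairs.
    Rearranging the recurrence in the text: P(G,k) = P(G-uv,k) - P(G/uv,k).
    """
    if not edges:
        return k ** len(vertices)          # base case: k free choices per vertex
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}                  # G - uv: drop the edge
    merged = set()                         # G / uv: fuse v into u
    for f in deleted:
        a, b = (u if x == v else x for x in f)
        if a != b:                         # discard the self-loop created by fusing
            merged.add(frozenset((a, b)))
    return (chrom_poly(vertices, deleted, k)
            - chrom_poly(vertices - {v}, merged, k))

# A triangle has P(K3, k) = k(k-1)(k-2): no proper 2-coloring, six 3-colorings.
k3 = {frozenset(p) for p in [(0, 1), (1, 2), (0, 2)]}
```

Memoizing on isomorphism classes, or branch-and-bound pruning as mentioned above, is what practical implementations add on top of this bare recursion.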
=== Greedy coloring === The greedy algorithm considers the vertices in a specific order v 1 {\displaystyle v_{1}} , ..., v n {\displaystyle v_{n}} and assigns to v i {\displaystyle v_{i}} the smallest available color not used by v i {\displaystyle v_{i}} 's neighbours among v 1 {\displaystyle v_{1}} , ..., v i − 1 {\displaystyle v_{i-1}} , adding a fresh color if needed. The quality of the resulting coloring depends on the chosen ordering. There exists an ordering that leads to a greedy coloring with the optimal number of χ ( G ) {\displaystyle \chi (G)} colors. On the other hand, greedy colorings can be arbitrarily bad; for example, the crown graph on n vertices can be 2-colored, but has an ordering that leads to a greedy coloring with n / 2 {\displaystyle n/2} colors. For chordal graphs, and for special cases of chordal graphs such as interval graphs and indifference graphs, the greedy coloring algorithm can be used to find optimal colorings in polynomial time, by choosing the vertex ordering to be the reverse of a perfect elimination ordering for the graph. The perfectly orderable graphs generalize this property, but it is NP-hard to find a perfect ordering of these graphs. If the vertices are ordered according to their degrees, the resulting greedy coloring uses at most max i min { d ( x i ) + 1 , i } {\displaystyle {\text{max}}_{i}{\text{ min}}\{d(x_{i})+1,i\}} colors, at most one more than the graph's maximum degree. This heuristic is sometimes called the Welsh–Powell algorithm. Another heuristic due to Brélaz establishes the ordering dynamically while the algorithm proceeds, choosing next the vertex adjacent to the largest number of different colors. Many other graph coloring heuristics are similarly based on greedy coloring for a specific static or dynamic strategy of ordering the vertices, these algorithms are sometimes called sequential coloring algorithms. 
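A minimal greedy coloring sketch (our own illustration) makes the order-dependence concrete: on the six-vertex crown graph described above, one ordering yields 2 colors and another yields n/2 = 3.

```python
def greedy_coloring(adj, order):
    """Color vertices in the given order with the smallest available color.

    adj maps each vertex to the set of its neighbours.
    """
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:            # smallest color not used by colored neighbours
            c += 1
        color[v] = c
    return color

# Crown graph on 6 vertices: ('a', i) adjacent to ('b', j) whenever i != j.
adj = {('a', i): {('b', j) for j in range(3) if j != i} for i in range(3)}
adj.update({('b', i): {('a', j) for j in range(3) if j != i} for i in range(3)})

good = greedy_coloring(adj, [('a', 0), ('a', 1), ('a', 2),
                             ('b', 0), ('b', 1), ('b', 2)])   # one side first: 2 colors
bad = greedy_coloring(adj, [('a', 0), ('b', 0), ('a', 1),
                            ('b', 1), ('a', 2), ('b', 2)])    # alternating: 3 colors
```

Both results are proper colorings; only the ordering differs, which is exactly why heuristics such as Welsh–Powell and Brélaz's dynamic ordering concentrate on choosing the order well.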
The maximum (worst) number of colors that can be obtained by the greedy algorithm, by using a vertex ordering chosen to maximize this number, is called the Grundy number of a graph. === Heuristic algorithms === Two well-known polynomial-time heuristics for graph colouring are the DSatur and recursive largest first (RLF) algorithms. Similarly to the greedy colouring algorithm, DSatur colours the vertices of a graph one after another, expending a previously unused colour when needed. Once a new vertex has been coloured, the algorithm determines which of the remaining uncoloured vertices has the highest number of different colours in its neighbourhood and colours this vertex next. This is defined as the degree of saturation of a given vertex. The recursive largest first algorithm operates in a different fashion by constructing each color class one at a time. It does this by identifying a maximal independent set of vertices in the graph using specialised heuristic rules. It then assigns these vertices to the same color and removes them from the graph. These actions are repeated on the remaining subgraph until no vertices remain. The worst-case complexity of DSatur is O ( n 2 ) {\displaystyle O(n^{2})} , where n {\displaystyle n} is the number of vertices in the graph. The algorithm can also be implemented using a binary heap to store saturation degrees, operating in O ( ( n + m ) log ⁡ n ) {\displaystyle O((n+m)\log n)} where m {\displaystyle m} is the number of edges in the graph. This produces much faster runs with sparse graphs. The overall complexity of RLF is slightly higher than DSatur at O ( m n ) {\displaystyle O(mn)} . DSatur and RLF are exact for bipartite, cycle, and wheel graphs. === Parallel and distributed algorithms === It is known that a χ-chromatic graph can be c-colored in the deterministic LOCAL model, in O ( n 1 / α ) {\displaystyle O(n^{1/\alpha })} . 
rounds, with α = ⌊ c − 1 χ − 1 ⌋ {\displaystyle \alpha =\left\lfloor {\frac {c-1}{\chi -1}}\right\rfloor } . A matching lower bound of Ω ( n 1 / α ) {\displaystyle \Omega (n^{1/\alpha })} rounds is also known. This lower bound holds even if quantum computers that can exchange quantum information, possibly with a pre-shared entangled state, are allowed. In the field of distributed algorithms, graph coloring is closely related to the problem of symmetry breaking. The current state-of-the-art randomized algorithms are faster for sufficiently large maximum degree Δ than deterministic algorithms. The fastest randomized algorithms employ the multi-trials technique by Schneider and Wattenhofer. In a symmetric graph, a deterministic distributed algorithm cannot find a proper vertex coloring. Some auxiliary information is needed in order to break symmetry. A standard assumption is that initially each node has a unique identifier, for example, from the set {1, 2, ..., n}. Put otherwise, we assume that we are given an n-coloring. The challenge is to reduce the number of colors from n to, e.g., Δ + 1. The more colors are employed, e.g. O(Δ) instead of Δ + 1, the fewer communication rounds are required. A straightforward distributed version of the greedy algorithm for (Δ + 1)-coloring requires Θ(n) communication rounds in the worst case – information may need to be propagated from one side of the network to another side. The simplest interesting case is an n-cycle. Richard Cole and Uzi Vishkin show that there is a distributed algorithm that reduces the number of colors from n to O(log n) in one synchronous communication step. By iterating the same procedure, it is possible to obtain a 3-coloring of an n-cycle in O(log* n) communication steps (assuming that we have unique node identifiers). The function log*, iterated logarithm, is an extremely slowly growing function, "almost constant". 
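The Cole–Vishkin color-reduction step can be sketched as a synchronous simulation on a directed cycle (a simplified illustration; termination details vary between presentations): each node compares its color with its successor's, finds the lowest bit position where they differ, and takes twice that position plus its own bit value as its new color, shrinking the palette from n to O(log n) in one round.

```python
def cole_vishkin_step(colors):
    """One synchronous Cole-Vishkin round on a directed n-cycle.

    colors[i] is node i's current color; node i's successor is node (i+1) % n.
    If the input coloring is proper, the output coloring is proper too, and
    the number of bits per color drops roughly logarithmically per round.
    """
    n = len(colors)
    out = []
    for i in range(n):
        diff = colors[i] ^ colors[(i + 1) % n]
        idx = (diff & -diff).bit_length() - 1   # lowest differing bit position
        out.append(2 * idx + ((colors[i] >> idx) & 1))
    return out

# Start from unique identifiers 0..15 (a trivial proper 16-coloring) and iterate:
colors = list(range(16))
for _ in range(3):
    colors = cole_vishkin_step(colors)
# After a few rounds only a constant-size palette remains.
```

Two adjacent nodes can never map to the same new color: equal outputs would require the same index i and the same bit value at i, yet i is by definition a position where their old colors differ.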
Hence the result by Cole and Vishkin raised the question of whether there is a constant-time distributed algorithm for 3-coloring an n-cycle. Linial (1992) showed that this is not possible: any deterministic distributed algorithm requires Ω(log* n) communication steps to reduce an n-coloring to a 3-coloring in an n-cycle. The technique by Cole and Vishkin can be applied in arbitrary bounded-degree graphs as well; the running time is poly(Δ) + O(log* n). The technique was extended to unit disk graphs by Schneider and Wattenhofer. The fastest deterministic algorithms for (Δ + 1)-coloring for small Δ are due to Leonid Barenboim, Michael Elkin and Fabian Kuhn. The algorithm by Barenboim et al. runs in time O(Δ) + log*(n)/2, which is optimal in terms of n since the constant factor 1/2 cannot be improved due to Linial's lower bound. Panconesi & Srinivasan (1996) use network decompositions to compute a Δ+1 coloring in time 2 O ( log ⁡ n ) {\displaystyle 2^{O\left({\sqrt {\log n}}\right)}} . The problem of edge coloring has also been studied in the distributed model. Panconesi & Rizzi (2001) achieve a (2Δ − 1)-coloring in O(Δ + log* n) time in this model. The lower bound for distributed vertex coloring due to Linial (1992) applies to the distributed edge coloring problem as well. === Decentralized algorithms === Decentralized algorithms are ones where no message passing is allowed (in contrast to distributed algorithms where local message passing takes places), and efficient decentralized algorithms exist that will color a graph if a proper coloring exists. These assume that a vertex is able to sense whether any of its neighbors are using the same color as the vertex i.e., whether a local conflict exists. This is a mild assumption in many applications e.g. in wireless channel allocation it is usually reasonable to assume that a station will be able to detect whether other interfering transmitters are using the same channel (e.g. by measuring the SINR). 
This sensing information is sufficient to allow algorithms based on learning automata to find a proper graph coloring with probability one. === Computational complexity === Graph coloring is computationally hard. It is NP-complete to decide if a given graph admits a k-coloring for a given k except for the cases k ∈ {0,1,2}. In particular, it is NP-hard to compute the chromatic number. The 3-coloring problem remains NP-complete even on 4-regular planar graphs. On graphs with maximal degree 3 or less, however, Brooks' theorem implies that the 3-coloring problem can be solved in linear time. Further, for every k > 3, a k-coloring of a planar graph exists by the four color theorem, and it is possible to find such a coloring in polynomial time. However, finding the lexicographically smallest 4-coloring of a planar graph is NP-complete. The best known approximation algorithm computes a coloring of size at most within a factor O(n (log log n)^2 (log n)^{-3}) of the chromatic number. For all ε > 0, approximating the chromatic number within n^{1−ε} is NP-hard. It is also NP-hard to color a 3-colorable graph with 5 colors, a 4-colorable graph with 7 colors, and a k-colorable graph with ( k ⌊ k / 2 ⌋ ) − 1 {\displaystyle \textstyle {\binom {k}{\lfloor k/2\rfloor }}-1} colors for k ≥ 5. Computing the coefficients of the chromatic polynomial is ♯P-hard. In fact, even computing the value of χ ( G , k ) {\displaystyle \chi (G,k)} is ♯P-hard at any rational point k except for k = 1 and k = 2. There is no FPRAS for evaluating the chromatic polynomial at any rational point k ≥ 1.5 except for k = 2 unless NP = RP. For edge coloring, the proof of Vizing's result gives an algorithm that uses at most Δ+1 colors. However, deciding between the two candidate values for the edge chromatic number is NP-complete.
In terms of approximation algorithms, Vizing's algorithm shows that the edge chromatic number can be approximated to within 4/3, and the hardness result shows that no (4/3 − ε)-algorithm exists for any ε > 0 unless P = NP. These are among the oldest results in the literature of approximation algorithms, even though neither paper makes explicit use of that notion. == Applications == === Scheduling === Vertex coloring models a number of scheduling problems. In the cleanest form, a given set of jobs needs to be assigned to time slots, each job requiring one such slot. Jobs can be scheduled in any order, but pairs of jobs may be in conflict in the sense that they may not be assigned to the same time slot, for example because they both rely on a shared resource. The corresponding graph contains a vertex for every job and an edge for every conflicting pair of jobs. The chromatic number of the graph is exactly the minimum makespan, the optimal time to finish all jobs without conflicts. Details of the scheduling problem define the structure of the graph. For example, when assigning aircraft to flights, the resulting conflict graph is an interval graph, so the coloring problem can be solved efficiently. In bandwidth allocation to radio stations, the resulting conflict graph is a unit disk graph, so the coloring problem is 3-approximable. === Register allocation === A compiler is a computer program that translates one computer language into another. To improve the execution time of the resulting code, one of the techniques of compiler optimization is register allocation, where the most frequently used values of the compiled program are kept in the fast processor registers. Ideally, values are assigned to registers so that they can all reside in the registers when they are used. The textbook approach to this problem is to model it as a graph coloring problem.
The compiler constructs an interference graph, where vertices are variables and an edge connects two vertices if they are needed at the same time. If the graph can be colored with k colors then any set of variables needed at the same time can be stored in at most k registers. === Other applications === The problem of coloring a graph arises in many practical areas such as sports scheduling, designing seating plans, exam timetabling, the scheduling of taxis, and solving Sudoku puzzles. == Other colorings == === Ramsey theory === An important class of improper coloring problems is studied in Ramsey theory, where the graph's edges are assigned to colors, and there is no restriction on the colors of incident edges. A simple example is the theorem on friends and strangers, which states that in any coloring of the edges of K 6 {\displaystyle K_{6}} , the complete graph of six vertices, there will be a monochromatic triangle; often illustrated by saying that any group of six people either has three mutual strangers or three mutual acquaintances. Ramsey theory is concerned with generalisations of this idea to seek regularity amid disorder, finding general conditions for the existence of monochromatic subgraphs with given structure. === Modular Coloring === Modular coloring is a type of graph coloring in which the color of each vertex is the sum of the colors of its adjacent vertices. Let k ≥ 2 be a number of colors where Z k {\displaystyle \mathbb {Z} _{k}} is the set of integers modulo k consisting of the elements (or colors) 0,1,2, ..., k-2, k-1. First, we color each vertex in G using the elements of Z k {\displaystyle \mathbb {Z} _{k}} , allowing two adjacent vertices to be assigned the same color. In other words, we want c to be a coloring such that c: V(G) → Z k {\displaystyle \mathbb {Z} _{k}} where adjacent vertices can be assigned the same color. For each vertex v in G, the color sum of v, σ(v), is the sum of all of the adjacent vertices to v mod k. 
The color sum of v is denoted by σ(v) = ∑u ∈ N(v) c(u) where u is an arbitrary vertex in the neighborhood of v, N(v). We then color each vertex with the new coloring determined by the sum of the adjacent vertices. The graph G has a modular k-coloring if, for every pair of adjacent vertices a,b, σ(a) ≠ σ(b). The modular chromatic number of G, mc(G), is the minimum value of k such that there exists a modular k-coloring of G. For example, let there be a vertex v adjacent to vertices with the assigned colors 0, 1, 1, and 3 mod 4 (k=4). The color sum would be σ(v) = 0 + 1 + 1 + 3 mod 4 = 5 mod 4 = 1. This would be the new color of vertex v. We would repeat this process for every vertex in G. If two adjacent vertices have equal color sums, G does not have a modulo 4 coloring. If none of the adjacent vertices have equal color sums, G has a modulo 4 coloring. === Other colorings === Coloring can also be considered for signed graphs and gain graphs. == See also == Critical graph Graph coloring game Graph homomorphism Hajós construction Mathematics of Sudoku Multipartite graph Uniquely colorable graph == Notes == == References == == External links == GCol An open-source python library for graph coloring. High-Performance Graph Colouring Algorithms Suite of 8 different algorithms (implemented in C++) used in the book A Guide to Graph Colouring: Algorithms and Applications (Springer International Publishers, 2015). CoLoRaTiOn by Jim Andrews and Mike Fellows is a graph coloring puzzle Links to Graph Coloring source codes Archived 2008-07-04 at the Wayback Machine Code for efficiently computing Tutte, Chromatic and Flow Polynomials Archived 2008-04-16 at the Wayback Machine by Gary Haggard, David J. Pearce and Gordon Royle A graph coloring Web App by Jose Antonio Martin H.
Wikipedia/Graph_coloring_problem
Geographic routing (also called georouting or position-based routing) is a routing principle that relies on geographic position information. It is mainly proposed for wireless networks and based on the idea that the source sends a message to the geographic location of the destination instead of using the network address. In the area of packet radio networks, the idea of using position information for routing was first proposed in the 1980s for interconnection networks. Geographic routing requires that each node can determine its own location and that the source is aware of the location of the destination. With this information, a message can be routed to the destination without knowledge of the network topology or a prior route discovery. == Approaches == There are various approaches, such as single-path, multi-path and flooding-based strategies (see for a survey). Most single-path strategies rely on two techniques: greedy forwarding and face routing. Greedy forwarding tries to bring the message closer to the destination in each step using only local information. Thus, each node forwards the message to the neighbor that is most suitable from a local point of view. The most suitable neighbor can be the one who minimizes the distance to the destination in each step (Greedy). Alternatively, one can consider another notion of progress, namely the projected distance on the source-destination-line (MFR, NFP), or the minimum angle between neighbor and destination (Compass Routing). Not all of these strategies are loop-free, i.e. a message can circulate among nodes in a certain constellation. It is known that the basic greedy strategy and MFR are loop free, while NFP and Compass Routing are not. Greedy forwarding can lead into a dead end, where there is no neighbor closer to the destination. Then, face routing helps to recover from that situation and find a path to another node, where greedy forwarding can be resumed. 
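The basic greedy strategy can be sketched as follows (a toy simulation with made-up coordinates and neighbor lists; real protocols operate on radio neighborhoods): each node forwards to whichever neighbor is closest to the destination, and reports a dead end when no neighbor makes progress.

```python
import math

def greedy_forward(pos, neighbors, src, dst):
    """Greedy geographic forwarding: always move to the neighbor nearest dst.

    pos maps node -> (x, y); neighbors maps node -> iterable of nodes.
    Returns (path, delivered); delivered is False on a dead end (local minimum).
    """
    dist = lambda a: math.dist(pos[a], pos[dst])
    path, cur = [src], src
    while cur != dst:
        nxt = min(neighbors[cur], key=dist, default=None)
        if nxt is None or dist(nxt) >= dist(cur):
            return path, False          # no neighbor is closer: dead end
        cur = nxt
        path.append(cur)
    return path, True

# A 4-node chain where greedy forwarding succeeds:
pos = {'s': (0, 0), 'a': (1, 0), 'b': (2, 0), 't': (3, 0)}
nbr = {'s': ['a'], 'a': ['s', 'b'], 'b': ['a', 't'], 't': ['b']}
```

The `False` branch is precisely the situation that face routing is designed to recover from.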
A recovery strategy such as face routing is necessary to assure that a message can be delivered to the destination. The combination of greedy forwarding and face routing was first proposed in 1999 under the name GFG (Greedy-Face-Greedy). It guarantees delivery in the so-called unit disk graph network model. Various variants, which were proposed later, also for non-unit disk graphs, are based on the principles of GFG. Face routing depends on a planar subgraph in general; however distributed planarization is difficult for real wireless sensor networks and does not scale well to 3D environments. == Greedy embedding == Although originally developed as a routing scheme that uses the physical positions of each node, geographic routing algorithms have also been applied to networks in which each node is associated with a point in a virtual space, unrelated to its physical position. The process of finding a set of virtual positions for the nodes of a network such that geographic routing using these positions is guaranteed to succeed is called greedy embedding. == See also == List of ad hoc routing protocols Backpressure Routing == References ==
Wikipedia/Geographic_routing
Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978. The algorithm is simple to implement and has the potential for very high throughput in hardware implementations. It is the algorithm of the Unix file compression utility compress and is used in the GIF image format. == Algorithm == The scenario described by Welch's 1984 paper encodes sequences of 8-bit data as fixed-length 12-bit codes. The codes from 0 to 255 represent 1-character sequences consisting of the corresponding 8-bit character, and the codes 256 through 4095 are created in a dictionary for sequences encountered in the data as it is encoded. At each stage in compression, input bytes are gathered into a sequence until the next character would make a sequence with no code yet in the dictionary. The code for the sequence (without that character) is added to the output, and a new code (for the sequence with that character) is added to the dictionary. The idea was quickly adapted to other situations. In an image based on a color table, for example, the natural character alphabet is the set of color table indexes, and in the 1980s, many images had small color tables (on the order of 16 colors). For such a reduced alphabet, the full 12-bit codes yielded poor compression unless the image was large, so the idea of a variable-width code was introduced: codes typically start one bit wider than the symbols being encoded, and as each code size is used up, the code width increases by 1 bit, up to some prescribed maximum (typically 12 bits). When the maximum code value is reached, encoding proceeds using the existing table, but new codes are not generated for addition to the table. 
Further refinements include reserving a code to indicate that the code table should be cleared and restored to its initial state (a "clear code", typically the first value immediately after the values for the individual alphabet characters), and a code to indicate the end of data (a "stop code", typically one greater than the clear code). The clear code lets the table be reinitialized after it fills up, which lets the encoding adapt to changing patterns in the input data. Smart encoders can monitor the compression efficiency and clear the table whenever the existing table no longer matches the input well. Since codes are added in a manner determined by the data, the decoder mimics building the table as it sees the resulting codes. It is critical that the encoder and decoder agree on the variety of LZW used: the size of the alphabet, the maximum table size (and code width), whether variable-width encoding is used, initial code size, and whether to use the clear and stop codes (and what values they have). Most formats that employ LZW build this information into the format specification or provide explicit fields for them in a compression header for the data. === Encoding === A high-level view of the encoding algorithm is shown here: Initialize the dictionary to contain all strings of length one. Find the longest string W in the dictionary that matches the current input. Emit the dictionary index for W to output and remove W from the input. Add W followed by the next symbol in the input to the dictionary. Go to Step 2. A dictionary is initialized to contain the single-character strings corresponding to all the possible input characters (and nothing else except the clear and stop codes if they're being used). The algorithm works by scanning through the input string for successively longer substrings until it finds one that is not in the dictionary. 
When such a string is found, the index for the string without the last character (i.e., the longest substring that is in the dictionary) is retrieved from the dictionary and sent to output, and the new string (including the last character) is added to the dictionary with the next available code. The last input character is then used as the next starting point to scan for substrings. In this way, successively longer strings are registered in the dictionary and available for subsequent encoding as single output values. The algorithm works best on data with repeated patterns, so the initial parts of a message see little compression. As the message grows, however, the compression ratio tends asymptotically to the maximum (i.e., the compression factor or ratio improves on an increasing curve, and not linearly, approaching a theoretical maximum inside a limited time period rather than over infinite time). === Decoding === A high-level view of the decoding algorithm is shown here: Initialize the dictionary to contain all strings of length one. Read the next encoded symbol: Is it encoded in the dictionary? Yes: Emit the corresponding string W to output. Concatenate the previous string emitted to output with the first symbol of W. Add this to the dictionary. No: Concatenate the previous string emitted to output with its first symbol. Call this string V. Add V to the dictionary and emit V to output. Repeat Step 2 until end of input string The decoding algorithm works by reading a value from the encoded input and outputting the corresponding string from the dictionary. However, the full dictionary is not needed, only the initial dictionary that contains single-character strings (and that is usually hard coded in the program, instead of sent with the encoded data). 
Instead, the full dictionary is rebuilt during the decoding process the following way: after decoding a value and outputting a string, the decoder concatenates it with the first character of the next decoded string (or the first character of the current string, if the next one can't be decoded; since if the next value is unknown, then it must be the value added to the dictionary in this iteration, and so its first character is the same as the first character of the current string), and updates the dictionary with the new string. The decoder then proceeds to the next input (which was already read in the previous iteration) and processes it as before, and so on until it has exhausted the input stream. === Variable-width codes === If variable-width codes are being used, the encoder and decoder must be careful to change the width at the same points in the encoded data so they don't disagree on boundaries between individual codes in the stream. In the standard version, the encoder increases the width from p to p + 1 when a sequence ω + s is encountered that is not in the table (so that a code must be added for it) but the next available code in the table is 2^p (the first code requiring p + 1 bits). The encoder emits the code for ω at width p (since that code does not require p + 1 bits), and then increases the code width so that the next code emitted is p + 1 bits wide. The decoder is always one code behind the encoder in building the table, so when it sees the code for ω, it generates an entry for code 2^p − 1. Since this is the point where the encoder increases the code width, the decoder must increase the width here as well—at the point where it generates the largest code that fits in p bits. Unfortunately, some early implementations of the encoding algorithm increase the code width and then emit ω at the new width instead of the old width, so that to the decoder it looks like the width changes one code too early.
This is called "early change"; it caused so much confusion that Adobe now allows both versions in PDF files, but includes an explicit flag in the header of each LZW-compressed stream to indicate whether early change is being used. Of the graphics file formats that support LZW compression, TIFF uses early change, while GIF and most others don't. When the table is cleared in response to a clear code, both encoder and decoder change the code width after the clear code back to the initial code width, starting with the code immediately following the clear code. === Packing order === Since the codes emitted typically do not fall on byte boundaries, the encoder and decoder must agree on how codes are packed into bytes. The two common methods are LSB-first ("least significant bit first") and MSB-first ("most significant bit first"). In LSB-first packing, the first code is aligned so that the least significant bit of the code falls in the least significant bit of the first stream byte, and if the code has more than 8 bits, the high-order bits left over are aligned with the least significant bits of the next byte; further codes are packed with LSB going into the least significant bit not yet used in the current stream byte, proceeding into further bytes as necessary. MSB-first packing aligns the first code so that its most significant bit falls in the MSB of the first stream byte, with overflow aligned with the MSB of the next byte; further codes are written with MSB going into the most significant bit not yet used in the current stream byte. GIF files use LSB-first packing order. TIFF files and PDF files use MSB-first packing order. == Example == The following example illustrates the LZW algorithm in action, showing the status of the output and the dictionary at every stage, both in encoding and decoding the data. This example has been constructed to give reasonable compression on a very short message. 
In real text data, repetition is generally less pronounced, so longer input streams are typically necessary before the compression builds up efficiency. The plaintext to be encoded (from an alphabet using only the capital letters) is: TOBEORNOTTOBEORTOBEORNOT# There are 26 symbols in the plaintext alphabet (the capital letters A through Z). # is used to represent a stop code: a code outside the plaintext alphabet that triggers special handling. We arbitrarily assign these the values 1 through 26 for the letters, and 0 for the stop code '#'. (Most flavors of LZW would put the stop code after the data alphabet, but nothing in the basic algorithm requires that. The encoder and decoder only have to agree what value it has.) A computer renders these as strings of bits. Five-bit codes are needed to give sufficient combinations to encompass this set of 27 values. The dictionary is initialized with these 27 values. As the dictionary grows, the codes must grow in width to accommodate the additional entries. A 5-bit code gives 2^5 = 32 possible combinations of bits, so when the 33rd dictionary word is created, the algorithm must switch at that point from 5-bit strings to 6-bit strings (for all code values, including those previously output with only five bits). Note that since the all-zero code 00000 is used, and is labeled "0", the 33rd dictionary entry is labeled 32. (Previously generated output is not affected by the code-width change, but once a 6-bit value is generated in the dictionary, it could conceivably be the next code emitted, so the width for subsequent output shifts to 6 bits to accommodate that.) The initial dictionary, then, consists of the following entries: === Encoding === Buffer input characters in a sequence ω until ω + next character is not in the dictionary. Emit the code for ω, and add ω + next character to the dictionary. Start buffering again with the next character. (The string to be encoded is "TOBEORNOTTOBEORTOBEORNOT#".)
Unencoded length = 25 symbols × 5 bits/symbol = 125 bits Encoded length = (6 codes × 5 bits/code) + (11 codes × 6 bits/code) = 96 bits. Using LZW has saved 29 bits out of 125, reducing the message by more than 23%. If the message were longer, then the dictionary words would begin to represent longer and longer sections of text, sending repeated words very compactly. === Decoding === To decode an LZW-compressed archive, one needs to know in advance the initial dictionary used, but additional entries can be reconstructed as they are always simply concatenations of previous entries. At each stage, the decoder receives a code X; it looks X up in the table and outputs the sequence χ it codes, and it conjectures χ + ? as the entry the encoder just added – because the encoder emitted X for χ precisely because χ + ? was not in the table, and the encoder goes ahead and adds it. But what is the missing letter? It is the first letter in the sequence coded by the next code Z that the decoder receives. So the decoder looks up Z, decodes it into the sequence ω and takes the first letter z and tacks it onto the end of χ as the next dictionary entry. This works as long as the codes received are in the decoder's dictionary, so that they can be decoded into sequences. What happens if the decoder receives a code Z that is not yet in its dictionary? Since the decoder is always just one code behind the encoder, Z can be in the encoder's dictionary only if the encoder just generated it, when emitting the previous code X for χ. Thus Z codes some ω that is χ + ?, and the decoder can determine the unknown character as follows: The decoder sees X and then Z, where X codes the sequence χ and Z codes some unknown sequence ω. The decoder knows that the encoder just added Z as a code for χ + some unknown character c, so ω = χ + c. Since c is the first character in the input stream after χ, and since ω is the string appearing immediately after χ, c must be the first character of the sequence ω. 
Since χ is an initial substring of ω, c must also be the first character of χ. So even though the Z code is not in the table, the decoder is able to infer the unknown sequence and adds χ + (the first character of χ) to the table as the value of Z. This situation occurs whenever the encoder encounters input of the form cScSc, where c is a single character, S is a string and cS is already in the dictionary, but cSc is not. The encoder emits the code for cS, putting a new code for cSc into the dictionary. Next it sees cSc in the input (starting at the second c of cScSc) and emits the new code it just inserted. The argument above shows that whenever the decoder receives a code not in its dictionary, the situation must look like this. Although input of form cScSc might seem unlikely, this pattern is fairly common when the input stream is characterized by significant repetition. In particular, long strings of a single character (which are common in the kinds of images LZW is often used to encode) repeatedly generate patterns of this sort. == Further coding == The simple scheme described above focuses on the LZW algorithm itself. Many applications apply further encoding to the sequence of output symbols. Some package the coded stream as printable characters using some form of binary-to-text encoding; this increases the encoded length and decreases the compression rate. Conversely, increased compression can often be achieved with an adaptive entropy encoder. Such a coder estimates the probability distribution for the value of the next symbol, based on the observed frequencies of values so far. A standard entropy encoding such as Huffman coding or arithmetic coding then uses shorter codes for values with higher probabilities. == Uses == LZW compression became the first widely used universal data compression method on computers. A large English text file can typically be compressed via LZW to about half its original size. 
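The decoding procedure, including the one-code-ahead special case just described, can be sketched in Python (an illustration that consumes the integer code sequence rather than a packed bit stream, using the same initial dictionary as the running example):

```python
def lzw_decode(codes):
    # Initial dictionary as in the example: 0 -> '#', 1..26 -> 'A'..'Z'.
    table = {0: '#'}
    table.update({i + 1: chr(ord('A') + i) for i in range(26)})
    next_code = len(table)                  # first free code: 27
    prev = table[codes[0]]                  # the first code is always known
    out = [prev]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        else:
            entry = prev + prev[0]          # the cScSc case: code not yet in table
        table[next_code] = prev + entry[0]  # the entry the encoder just added
        next_code += 1
        out.append(entry)
        prev = entry
    return ''.join(out)
```

As a check of the special case: the input "ABABABA" encodes to the codes 1, 2, 27, 29, where 29 is emitted immediately after the encoder defines it, so the decoder must reconstruct it from the previous entry plus its own first character.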
LZW was used in the public-domain program compress, which became a more or less standard utility in Unix systems around 1986. It has since disappeared from many distributions, both because it infringed the LZW patent and because gzip produced better compression ratios using the LZ77-based DEFLATE algorithm, but as of 2008 at least FreeBSD includes both compress and uncompress as a part of the distribution. Several other popular compression utilities also used LZW or closely related methods. LZW became very widely used when it became part of the GIF image format in 1987. It may also (optionally) be used in TIFF and PDF files. (Although LZW is available in Adobe Acrobat software, Acrobat by default uses DEFLATE for most text and color-table-based image data in PDF files.) == Patents == Various patents have been issued in the United States and other countries for LZW and similar algorithms. LZ78 was covered by U.S. patent 4,464,650 by Lempel, Ziv, Cohn, and Eastman, assigned to Sperry Corporation, later Unisys Corporation, filed on August 10, 1981. Two US patents were issued for the LZW algorithm: U.S. patent 4,814,746 by Victor S. Miller and Mark N. Wegman and assigned to IBM, originally filed on June 1, 1983, and U.S. patent 4,558,302 by Welch, assigned to Sperry Corporation, later Unisys Corporation, filed on June 20, 1983. In addition to the above patents, Welch's 1983 patent also includes citations to several other patents that influenced it, including two 1980 Japanese patents (JP9343880A and JP17790880A) from NEC's Jun Kanatsu, U.S. patent 4,021,782 (1974) from John S. Hoerning, U.S. patent 4,366,551 (1977) from Klaus E. Holtz, and a 1981 German patent (DE19813118676) from Karl Eckhart Heinz. In 1993–94, and again in 1999, Unisys Corporation received widespread condemnation when it attempted to enforce licensing fees for LZW in GIF images. 
The 1993–1994 Unisys-CompuServe controversy (CompuServe being the creator of the GIF format) prompted a Usenet comp.graphics discussion Thoughts on a GIF-replacement file format, which in turn fostered an email exchange that eventually culminated in the creation of the patent-unencumbered Portable Network Graphics (PNG) file format in 1995. Unisys's US patent on the LZW algorithm expired on June 20, 2003, 20 years after it had been filed. Patents that had been filed in the United Kingdom, France, Germany, Italy, Japan and Canada all expired in 2004, likewise 20 years after they had been filed. == Variants == LZMW (1985, by V. Miller, M. Wegman) – Searches input for the longest string already in the dictionary (the "current" match); adds the concatenation of the previous match with the current match to the dictionary. (Dictionary entries thus grow more rapidly; but this scheme is much more complicated to implement.) Miller and Wegman also suggest deleting low-frequency entries from the dictionary when the dictionary fills up. LZAP (1988, by James Storer) – modification of LZMW: instead of adding just the concatenation of the previous match with the current match to the dictionary, add the concatenations of the previous match with each initial substring of the current match ("AP" stands for "all prefixes"). For example, if the previous match is "wiki" and current match is "pedia", then the LZAP encoder adds 5 new sequences to the dictionary: "wikip", "wikipe", "wikiped", "wikipedi", and "wikipedia", where the LZMW encoder adds only the one sequence "wikipedia". This eliminates some of the complexity of LZMW, at the price of adding more dictionary entries. LZWL is a syllable-based variant of LZW. 
== See also == LZ77 and LZ78 LZMA Lempel–Ziv–Storer–Szymanski LZJB Context tree weighting Discrete cosine transform (DCT), a lossy compression algorithm used in JPEG and MPEG coding standards == References == == External links == Rosettacode wiki, algorithm in various languages U.S. patent 4,558,302, Terry A. Welch, High speed data compression and decompression apparatus and method SharpLZW – C# open source implementation MIT OpenCourseWare: Lecture including LZW algorithm Mark Nelson, LZW Data Compression in Dr. Dobb's Journal (October 1, 1989) Shrink, Reduce, and Implode: The Legacy Zip Compression Methods explains LZW and how it was used in PKZIP
Wikipedia/Lempel-Ziv-Welch_algorithm
In decision tree learning, ID3 (Iterative Dichotomiser 3) is an algorithm invented by Ross Quinlan used to generate a decision tree from a dataset. ID3 is the precursor to the C4.5 algorithm, and is typically used in the machine learning and natural language processing domains. == Algorithm == The ID3 algorithm begins with the original set S as the root node. On each iteration of the algorithm, it iterates through every unused attribute of the set S and calculates the entropy H(S) or the information gain IG(S) of that attribute. It then selects the attribute which has the smallest entropy (or largest information gain) value. The set S is then split or partitioned by the selected attribute to produce subsets of the data. (For example, a node can be split into child nodes based upon the subsets of the population whose ages are less than 50, between 50 and 100, and greater than 100.) The algorithm continues to recurse on each subset, considering only attributes never selected before. Recursion on a subset may stop in one of these cases: every element in the subset belongs to the same class; in which case the node is turned into a leaf node and labelled with the class of the examples. there are no more attributes to be selected, but the examples still do not belong to the same class. In this case, the node is made a leaf node and labelled with the most common class of the examples in the subset. there are no examples in the subset, which happens when no example in the parent set was found to match a specific value of the selected attribute. An example could be the absence of a person among the population with age over 100 years. Then a leaf node is created and labelled with the most common class of the examples in the parent node's set.
Throughout the algorithm, the decision tree is constructed with each non-terminal node (internal node) representing the selected attribute on which the data was split, and terminal nodes (leaf nodes) representing the class label of the final subset of this branch. === Summary === Calculate the entropy of every attribute a of the data set S. Partition ("split") the set S into subsets using the attribute for which the resulting entropy after splitting is minimized; or, equivalently, information gain is maximum. Make a decision tree node containing that attribute. Recurse on subsets using the remaining attributes. === Properties === ID3 does not guarantee an optimal solution. It can converge upon local optima. It uses a greedy strategy by selecting the locally best attribute to split the dataset on each iteration. The algorithm's optimality can be improved by using backtracking during the search for the optimal decision tree at the cost of possibly taking longer. ID3 can overfit the training data. To avoid overfitting, smaller decision trees should be preferred over larger ones. This algorithm usually produces small trees, but it does not always produce the smallest possible decision tree. ID3 is harder to use on continuous data than on factored data (factored data has a discrete number of possible values, thus reducing the possible branch points). If the values of any given attribute are continuous, then there are many more places to split the data on this attribute, and searching for the best value to split by can be time-consuming. === Usage === The ID3 algorithm is used by training on a data set S to produce a decision tree which is stored in memory. At runtime, this decision tree is used to classify new test cases (feature vectors) by traversing the decision tree using the features of the datum to arrive at a leaf node.
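The summary above can be sketched in Python. This is an illustration only; the toy dataset, the attribute names (outlook, windy) and the class attribute (play) are invented for the example, and the empty-subset stopping case is omitted for brevity:

```python
import math
from collections import Counter

def entropy(rows, target):
    """H(S) = sum over classes x of -p(x) * log2 p(x)."""
    n = len(rows)
    counts = Counter(r[target] for r in rows)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def info_gain(rows, attr, target):
    """IG(S, A): entropy before the split minus weighted entropy after it."""
    n = len(rows)
    rem = sum(len(sub) / n * entropy(sub, target)
              for sub in [[r for r in rows if r[attr] == v]
                          for v in {r[attr] for r in rows}])
    return entropy(rows, target) - rem

def id3(rows, attrs, target):
    classes = {r[target] for r in rows}
    if len(classes) == 1:                   # pure subset: leaf node
        return classes.pop()
    if not attrs:                           # nothing left to split on: majority leaf
        return Counter(r[target] for r in rows).most_common(1)[0][0]
    best = max(attrs, key=lambda a: info_gain(rows, a, target))
    return {best: {v: id3([r for r in rows if r[best] == v],
                          attrs - {best}, target)
                   for v in {r[best] for r in rows}}}

def classify(tree, example):
    while isinstance(tree, dict):           # walk down to a leaf
        attr = next(iter(tree))
        tree = tree[attr][example[attr]]
    return tree

rows = [
    {"outlook": "sunny", "windy": "no",  "play": "yes"},
    {"outlook": "sunny", "windy": "yes", "play": "no"},
    {"outlook": "rain",  "windy": "no",  "play": "yes"},
    {"outlook": "rain",  "windy": "yes", "play": "no"},
]
tree = id3(rows, {"outlook", "windy"}, "play")  # splits on "windy" (gain 1.0 bit)
```

On this toy data the windy attribute alone determines the class, so its information gain is a full bit while outlook gains nothing, and the greedy split picks it immediately.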
== The ID3 metrics == === Entropy === Entropy H(S) is a measure of the amount of uncertainty in the (data) set S (i.e. entropy characterizes the (data) set S). H(S) = ∑_{x∈X} −p(x) log₂ p(x) where: S – the current dataset for which entropy is being calculated (this changes at each step of the ID3 algorithm, either to a subset of the previous set in the case of splitting on an attribute, or to a "sibling" partition of the parent in case the recursion terminated previously); X – the set of classes in S; p(x) – the proportion of the number of elements in class x to the number of elements in set S. When H(S) = 0, the set S is perfectly classified (i.e. all elements in S are of the same class). In ID3, entropy is calculated for each remaining attribute. The attribute with the smallest entropy is used to split the set S on this iteration. Entropy in information theory measures how much information is expected to be gained upon measuring a random variable; as such, it can also be used to quantify the amount to which the distribution of the quantity's values is unknown. A constant quantity has zero entropy, as its distribution is perfectly known. In contrast, a uniformly distributed random variable (discretely or continuously uniform) maximizes entropy. Therefore, the greater the entropy at a node, the less information is known about the classification of data at this stage of the tree; and therefore, the greater the potential to improve the classification here. As such, ID3 is a greedy heuristic performing a best-first search for locally optimal entropy values. Its accuracy can be improved by preprocessing the data.
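As a worked illustration (with hypothetical counts, not from the article): a set S containing 9 positive and 5 negative examples has

```latex
\mathrm{H}(S) = -\tfrac{9}{14}\log_2\tfrac{9}{14} - \tfrac{5}{14}\log_2\tfrac{5}{14} \approx 0.940\ \text{bits}
```

which is close to the 1-bit maximum for two classes; a 14/0 split would instead give H(S) = 0, the perfectly classified case.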
=== Information gain === Information gain IG(A) is the measure of the difference in entropy from before to after the set S is split on an attribute A. In other words, how much uncertainty in S was reduced after splitting set S on attribute A. IG(S, A) = H(S) − ∑_{t∈T} p(t) H(t) = H(S) − H(S|A) where: H(S) – entropy of set S; T – the subsets created from splitting set S by attribute A, such that S = ⋃_{t∈T} t; p(t) – the proportion of the number of elements in t to the number of elements in set S; H(t) – entropy of subset t. In ID3, information gain can be calculated (instead of entropy) for each remaining attribute. The attribute with the largest information gain is used to split the set S on this iteration. == See also == Classification and regression tree (CART) C4.5 algorithm Decision tree learning Decision tree model == References == == Further reading == Mitchell, Tom Michael (1997). Machine Learning. New York, NY: McGraw-Hill. pp. 55–58. ISBN 0070428077. OCLC 36417892. Grzymala-Busse, Jerzy W. (February 1993). "Selected Algorithms of Machine Learning from Examples" (PDF). Fundamenta Informaticae. 18 (2): 193–207 – via ResearchGate. == External links == Seminars – http://www2.cs.uregina.ca/ Description and examples – http://www.cise.ufl.edu/ Description and examples – http://www.cis.temple.edu/ Decision Trees and Political Party Classification
Wikipedia/ID3_algorithm
In the physical sciences, the term spectrum was introduced first into optics by Isaac Newton in the 17th century, referring to the range of colors observed when white light was dispersed through a prism. Soon the term referred to a plot of light intensity or power as a function of frequency or wavelength, also known as a spectral density plot. Later it expanded to apply to other waves, such as sound waves and sea waves that could also be measured as a function of frequency (e.g., noise spectrum, sea wave spectrum). It has also been expanded to more abstract "signals", whose power spectrum can be analyzed and processed. The term now applies to any signal that can be measured or decomposed along a continuous variable, such as energy in electron spectroscopy or mass-to-charge ratio in mass spectrometry. Spectrum is also used to refer to a graphical representation of the signal as a function of the dependent variable. == Etymology == == Electromagnetic spectrum == Electromagnetic spectrum refers to the full range of all frequencies of electromagnetic radiation and also to the characteristic distribution of electromagnetic radiation emitted or absorbed by that particular object. Devices used to measure an electromagnetic spectrum are called spectrographs or spectrometers. The visible spectrum is the part of the electromagnetic spectrum that can be seen by the human eye. The wavelength of visible light ranges from 390 to 700 nm. The absorption spectrum of a chemical element or chemical compound is the spectrum of frequencies or wavelengths of incident radiation that are absorbed by the compound due to electron transitions from a lower to a higher energy state. The emission spectrum refers to the spectrum of radiation emitted by the compound due to electron transitions from a higher to a lower energy state. Light from many different sources contains various colors, each with its own brightness or intensity.
A rainbow, or prism, sends these component colors in different directions, making them individually visible at different angles. A graph of the intensity plotted against the frequency (showing the brightness of each color) is the frequency spectrum of the light. When all the visible frequencies are present equally, the perceived color of the light is white, and the spectrum is a flat line. Therefore, flat-line spectra in general are often referred to as white, whether they represent light or another type of wave phenomenon (sound, for example, or vibration in a structure). In radio and telecommunications, the frequency spectrum can be shared among many different broadcasters. The radio spectrum is the part of the electromagnetic spectrum corresponding to frequencies below 300 GHz, which corresponds to wavelengths longer than about 1 mm. The microwave spectrum corresponds to frequencies between 300 MHz (0.3 GHz) and 300 GHz and wavelengths between one meter and one millimeter. Each broadcast radio and TV station transmits a wave on an assigned frequency range, called a channel. When many broadcasters are present, the radio spectrum consists of the sum of all the individual channels, each carrying separate information, spread across a wide frequency spectrum. Any particular radio receiver will detect a single function of amplitude (voltage) vs. time. The radio then uses a tuned circuit or tuner to select a single channel or frequency band and demodulate or decode the information from that broadcaster. If we made a graph of the strength of each channel vs. the frequency of the tuner, it would be the frequency spectrum of the antenna signal. In astronomical spectroscopy, the strength, shape, and position of absorption and emission lines, as well as the overall spectral energy distribution of the continuum, reveal many properties of astronomical objects. Stellar classification is the categorisation of stars based on their characteristic electromagnetic spectra.
The spectral flux density is used to represent the spectrum of a light-source, such as a star. In radiometry and colorimetry (or color science more generally), the spectral power distribution (SPD) of a light source is a measure of the power contributed by each frequency or color in a light source. The light spectrum is usually measured at points (often 31) along the visible spectrum, in wavelength space instead of frequency space, which makes it not strictly a spectral density. Some spectrophotometers can measure in increments as fine as one to two nanometers, and even higher-resolution devices, with resolutions below 0.5 nm, have been reported. These values are used to calculate other specifications and are then plotted to show the spectral attributes of the source. This can be helpful in analyzing the color characteristics of a particular source. == Mass spectrum == A plot of ion abundance as a function of mass-to-charge ratio is called a mass spectrum. It can be produced by a mass spectrometer instrument. The mass spectrum can be used to determine the quantity and mass of atoms and molecules. Tandem mass spectrometry is used to determine molecular structure. == Energy spectrum == In physics, the energy spectrum of a particle is the number of particles or intensity of a particle beam as a function of particle energy. Examples of techniques that produce an energy spectrum are alpha-particle spectroscopy, electron energy loss spectroscopy, and mass-analyzed ion-kinetic-energy spectrometry. == Displacement == Oscillatory displacements, including vibrations, can also be characterized spectrally. For water waves, see wave spectrum and tide spectrum. Sound and non-audible acoustic waves can also be characterized in terms of their spectral density, for example, timbre and musical acoustics. === Acoustical measurement === In acoustics, a spectrogram is a visual representation of the frequency spectrum of sound as a function of time or another variable.
A source of sound can have many different frequencies mixed. A musical tone's timbre is characterized by its harmonic spectrum. Sound in our environment that we refer to as noise includes many different frequencies. When a sound signal contains a mixture of all audible frequencies, distributed equally over the audio spectrum, it is called white noise. The spectrum analyzer is an instrument which can be used to convert the sound wave of the musical note into a visual display of the constituent frequencies. This visual display is referred to as an acoustic spectrogram. Software-based audio spectrum analyzers are available at low cost, providing easy access not only to industry professionals, but also to academics, students and the hobbyist. The acoustic spectrogram generated by the spectrum analyzer provides an acoustic signature of the musical note. In addition to revealing the fundamental frequency and its overtones, the spectrogram is also useful for analysis of the temporal attack, decay, sustain, and release of the musical note.
As in that classical example, the term is most often used when the range of values of a physical quantity may have both a continuous and a discrete part, whether at the same time or in different situations. In quantum systems, continuous spectra (as in bremsstrahlung and thermal radiation) are usually associated with free particles, such as atoms in a gas, electrons in an electron beam, or conduction band electrons in a metal. In particular, the position and momentum of a free particle have continuous spectra, but when the particle is confined to a limited space its spectra become discrete. Often a continuous spectrum may be just a convenient model for a discrete spectrum whose values are too close to be distinguished, as in the phonons in a crystal. The continuous and discrete spectra of physical systems can be modeled in functional analysis as different parts in the decomposition of the spectrum of a linear operator acting on a function space, such as the Hamiltonian operator. The classical example of a discrete spectrum (for which the term was first used) is the characteristic set of discrete spectral lines seen in the emission spectrum and absorption spectrum of isolated atoms of a chemical element, which only absorb and emit light at particular wavelengths. The technique of spectroscopy is based on this phenomenon. Discrete spectra are seen in many other phenomena, such as vibrating strings, microwaves in a metal cavity, sound waves in a pulsating star, and resonances in high-energy particle physics. The general phenomenon of discrete spectra in physical systems can be mathematically modeled with tools of functional analysis, specifically by the decomposition of the spectrum of a linear operator acting on a functional space. === In classical mechanics === In classical mechanics, discrete spectra are often associated with waves and oscillations in a bounded object or domain.
Mathematically they can be identified with the eigenvalues of differential operators that describe the evolution of some continuous variable (such as strain or pressure) as a function of time and/or space. Discrete spectra are also produced by some non-linear oscillators where the relevant quantity has a non-sinusoidal waveform. Notable examples are the sound produced by the vocal cords of mammals and by the stridulation organs of crickets, whose spectrum shows a series of strong lines at frequencies that are integer multiples (harmonics) of the oscillation frequency. A related phenomenon is the appearance of strong harmonics when a sinusoidal signal (which has the ultimate "discrete spectrum", consisting of a single spectral line) is modified by a non-linear filter; for example, when a pure tone is played through an overloaded amplifier, or when an intense monochromatic laser beam goes through a non-linear medium. In the latter case, if two arbitrary sinusoidal signals with frequencies f and g are processed together, the output signal will generally have spectral lines at frequencies |mf + ng|, where m and n are any integers.
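This effect can be checked numerically. The sketch below (an illustrative pure-Python DFT, not taken from the text) feeds the sum of two sinusoids at f = 5 and g = 9 cycles per frame through a square-law non-linearity; the output spectrum has lines at |f − g|, f + g, 2f and 2g (plus a DC term), but none at the original f and g:

```python
import math

def dft_magnitudes(samples):
    """Magnitudes of the first half of the DFT, computed directly."""
    n = len(samples)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(samples))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

n, f, g = 64, 5, 9
x = [math.sin(2 * math.pi * f * i / n) + math.sin(2 * math.pi * g * i / n)
     for i in range(n)]
y = [v * v for v in x]            # square-law non-linearity
mags = dft_magnitudes(y)
```

The identity sin(a)·sin(b) = ½[cos(a − b) − cos(a + b)] explains the result: squaring the sum leaves only the difference, sum, and double-frequency cosines, which is the |mf + ng| pattern for small m and n.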
== See also == Spectrum (disambiguation) § Physics == References ==
Wikipedia/Energy_spectrum
A method in object-oriented programming (OOP) is a procedure associated with an object, and generally also a message. An object consists of state data and behavior; these compose an interface, which specifies how the object may be used. A method is a behavior of an object parametrized by a user. Data is represented as properties of the object, and behaviors are represented as methods. For example, a Window object could have methods such as open and close, while its state (whether it is open or closed at any given point in time) would be a property. In class-based programming, methods are defined within a class, and objects are instances of a given class. One of the most important capabilities that a method provides is method overriding: the same name (e.g., area) can be used for multiple different kinds of classes. This allows the sending objects to invoke behaviors and to delegate the implementation of those behaviors to the receiving object. For example, an object can send an area message to another object and the appropriate formula is invoked whether the receiving object is a rectangle, circle, triangle, etc. Methods also provide the interface that other classes use to access and modify the properties of an object; this is known as encapsulation. Encapsulation and overriding are the two primary distinguishing features between methods and procedure calls. == Overriding and overloading == Method overriding and overloading are two of the most significant ways that a method differs from a conventional procedure or function call. Overriding refers to a subclass redefining the implementation of a method of its superclass. For example, findArea may be a method defined on a shape class; subclasses such as triangle and circle would each define the appropriate formula to calculate their area.
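The area example can be sketched as follows (written here in Python for brevity; the class names mirror the hypothetical shapes above):

```python
import math

class Shape:
    def area(self):
        raise NotImplementedError    # subclasses must override this

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):                  # overrides Shape.area
        return self.w * self.h

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):                  # same message, different implementation
        return math.pi * self.r ** 2

# The sender dispatches the same "area" message without knowing the concrete type.
areas = [s.area() for s in [Rectangle(3, 4), Circle(1)]]
```

Each receiver supplies its own formula, so the sending code never branches on the object's type.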
The idea is to look at objects as "black boxes" so that changes to the internals of the object can be made with minimal impact on the other objects that use it. This is known as encapsulation and is meant to make code easier to maintain and re-use. Method overloading, on the other hand, refers to differentiating the code used to handle a message based on the parameters of the method. If one views the receiving object as the first parameter in any method then overriding is just a special case of overloading where the selection is based only on the first argument. == Accessor, mutator and manager methods == Accessor methods are used to read the data values of an object. Mutator methods are used to modify the data of an object. Manager methods are used to initialize and destroy objects of a class, e.g. constructors and destructors. These methods provide an abstraction layer that facilitates encapsulation and modularity. For example, if a bank-account class provides a getBalance() accessor method to retrieve the current balance (rather than directly accessing the balance data fields), then later revisions of the same code can implement a more complex mechanism for balance retrieval (e.g., a database fetch), without the dependent code needing to be changed. The concepts of encapsulation and modularity are not unique to object-oriented programming. Indeed, in many ways the object-oriented approach is simply the logical extension of previous paradigms such as abstract data types and structured programming. === Constructors === A constructor is a method that is called at the beginning of an object's lifetime to create and initialize the object, a process called construction (or instantiation). Initialization may include an acquisition of resources. Constructors may have parameters but usually do not return values in most languages.
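The bank-account illustration can be sketched as follows (in Python; the deposit mutator and the underscore field names are invented for the example):

```python
class BankAccount:
    def __init__(self, owner, balance=0):   # constructor: creates and initializes
        self._owner = owner
        self._balance = balance

    def get_balance(self):                   # accessor: reads state
        # A later revision could fetch the balance from a database here
        # without any dependent code changing.
        return self._balance

    def deposit(self, amount):               # mutator: modifies state
        if amount <= 0:
            raise ValueError("amount must be positive")
        self._balance += amount

acct = BankAccount("Alice", 100)
acct.deposit(50)
```

Callers only ever go through the accessor and mutator, so the representation of the balance stays a private implementation detail.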
=== Destructor === A destructor is a method that is called automatically at the end of an object's lifetime, a process called destruction. In most languages, destructors accept no arguments and return no value. Destructors can be implemented so as to perform cleanup chores and other tasks at object destruction. ==== Finalizers ==== In garbage-collected languages, such as Java, C#, and Python, destructors are known as finalizers. They have a similar purpose and function to destructors, but because of the differences between languages that utilize garbage collection and languages with manual memory management, the sequence in which they are called is different. == Abstract methods == An abstract method is one with only a signature and no implementation body. It is often used to specify that a subclass must provide an implementation of the method, as in an abstract class. Abstract methods are used to specify interfaces in some programming languages. === Reabstraction === If a subclass provides an implementation for an abstract method, another subclass can make it abstract again. This is called reabstraction. In practice, this is rarely used. ==== Example ==== In C#, a virtual method can be overridden with an abstract method. (This also applies to Java, where all non-private methods are virtual.) Interfaces' default methods can also be reabstracted, requiring subclasses to implement them. (This also applies to Java.) == Class methods == Class methods are methods that are called on a class rather than an instance. They are typically used as part of an object meta-model: that is, for each class, an instance of the class object is created in the meta-model. Meta-model protocols allow classes to be created and deleted.
In this sense, they provide the same functionality as constructors and destructors described above. But in some languages, such as the Common Lisp Object System (CLOS), the meta-model allows the developer to dynamically alter the object model at run time: e.g., to create new classes, redefine the class hierarchy, modify properties, etc. == Special methods == Special methods are very language-specific and a language may support none, some, or all of the special methods defined here. A language's compiler may automatically generate default special methods or a programmer may be allowed to optionally define special methods. Most special methods cannot be directly called, but rather the compiler generates code to call them at appropriate times. === Static methods === Static methods are meant to be relevant to all the instances of a class rather than to any specific instance. They are similar to static variables in that sense. An example would be a static method to sum the values of all the variables of every instance of a class. For example, if there were a Product class it might have a static method to compute the average price of all products. A static method can be invoked even if no instances of the class exist yet. Static methods are called "static" because they are resolved at compile time based on the class they are called on, and not dynamically, as is the case with instance methods, which are resolved polymorphically based on the runtime type of the object. ==== Examples ==== ===== In Java ===== In Java, a commonly used static method is: Math.max(double a, double b) This static method has no owning object and does not run on an instance. It receives all information from its arguments. === Copy-assignment operators === Copy-assignment operators define actions to be performed by the compiler when a class object is assigned to a class object of the same type.
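A minimal C++ sketch of such a copy-assignment operator, using a hypothetical Buffer class that owns a heap allocation (an illustration, not code from the article):

```cpp
#include <cstring>

// Hypothetical class owning a heap resource; the copy-assignment operator
// defines what happens when one Buffer is assigned to another of the
// same type.
class Buffer {
    char* data;
    std::size_t size;
public:
    explicit Buffer(const char* s) : size(std::strlen(s) + 1) {
        data = new char[size];
        std::memcpy(data, s, size);
    }

    Buffer(const Buffer& other) : size(other.size) {  // copy constructor
        data = new char[size];
        std::memcpy(data, other.data, size);
    }

    // Copy-assignment operator: copy the new resource, free the old one.
    Buffer& operator=(const Buffer& other) {
        if (this != &other) {             // guard against self-assignment
            char* fresh = new char[other.size];
            std::memcpy(fresh, other.data, other.size);
            delete[] data;
            data = fresh;
            size = other.size;
        }
        return *this;
    }

    ~Buffer() { delete[] data; }          // destructor releases the resource

    const char* contents() const { return data; }
};
```

Because the class manages its own memory, the compiler-generated member-wise copy would be wrong here, which is exactly the situation where a user-defined copy-assignment operator is needed.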
=== Operator methods === Operator methods define or redefine operator symbols and define the operations to be performed with the symbol and the associated method parameters. C++ example: == Member functions in C++ == Some procedural languages were extended with object-oriented capabilities to leverage the large skill sets and legacy code for those languages but still provide the benefits of object-oriented development. Perhaps the best-known example is C++, an object-oriented extension of the C programming language. Due to the design requirements to add the object-oriented paradigm onto an existing procedural language, message passing in C++ has some unique capabilities and terminologies. For example, in C++ a method is known as a member function. C++ also has the concept of virtual functions, which are member functions that can be overridden in derived classes and allow for dynamic dispatch. === Virtual functions === Virtual functions are the means by which a C++ class can achieve polymorphic behavior. Non-virtual member functions, or regular methods, are those that do not participate in polymorphism. C++ example: == See also == Property (programming) Remote method invocation Subroutine, also called subprogram, routine, procedure or function == Notes == == References ==
Wikipedia/Instance_method
In object-oriented computer programming, an extension method is a method added to an object after the original object was compiled. The modified object is often a class, a prototype, or a type. Extension methods are features of some object-oriented programming languages. There is no syntactic difference between calling an extension method and calling a method declared in the type definition. Not all languages implement extension methods in an equally safe manner, however. For instance, languages such as C#, Java (via Manifold, Lombok, or Fluent), and Kotlin don't alter the extended class in any way, because doing so may break class hierarchies and interfere with virtual method dispatching. Instead, these languages strictly implement extension methods statically and use static dispatching to invoke them. == Support in programming languages == Extension methods are features of numerous languages including C#, Java (via Manifold, Lombok, or Fluent), Gosu, JavaScript, Oxygene, Ruby, Smalltalk, Kotlin, Dart, Visual Basic.NET, and Xojo. In dynamic languages like Python, the concept of an extension method is unnecessary because classes (excluding built-in classes) can be extended without any special syntax (an approach known as "monkey-patching", employed in libraries such as gevent). In VB.NET and Oxygene, they are recognized by the presence of the "extension" keyword or attribute. In Xojo, the "Extends" keyword is used with global methods. In C#, they are implemented as static methods in static classes, with the first argument being of the extended class and preceded by the "this" keyword. In Java, extension methods are added via Manifold, a jar file added to the project's classpath. Similar to C#, a Java extension method is declared static in an @Extension class where the first argument has the same type as the extended class and is annotated with @This.
Alternatively, the Fluent plugin allows calling any static method as an extension method without using annotations, as long as the method signature matches. In Smalltalk, any code can add a method to any class at any time, by sending a method creation message (such as methodsFor:) to the class the user wants to extend. The Smalltalk method category is conventionally named after the package that provides the extension, surrounded by asterisks. For example, when Etoys application code extends classes in the core library, the added methods are put in the *etoys* category. In Ruby, as in Smalltalk, there is no special language feature for extension, as Ruby allows classes to be re-opened at any time with the class keyword to add new methods. The Ruby community often describes an extension method as a kind of monkey patch. There is also a newer feature for adding safe, locally scoped extensions to objects, called Refinements, but it is less widely used. In Swift, the extension keyword marks a class-like construct that allows the addition of methods, constructors, and fields to an existing class, including the ability to make the existing class implement a new interface/protocol. == Extension methods as enabling feature == Besides allowing code written by others to be extended, as described below, extension methods enable patterns that are useful in their own right as well. The predominant reason why extension methods were introduced was Language Integrated Query (LINQ). Compiler support for extension methods allows deep integration of LINQ with old code just the same as with new code, as well as support for query syntax, which for the moment is unique to the primary Microsoft .NET languages.
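The Ruby class re-opening described above can be sketched as follows; the palindrome? method is a hypothetical illustration, not from the article:

```ruby
# Ruby has no dedicated extension-method syntax: re-opening a class with
# the `class` keyword adds methods to it, even to core classes like String.
class String
  # A hypothetical added method: does the string read the same forwards
  # and backwards, ignoring case and non-alphanumeric characters?
  def palindrome?
    cleaned = downcase.gsub(/[^a-z0-9]/, "")
    cleaned == cleaned.reverse
  end
end
```

After this runs, `"Racecar".palindrome?` is available on every String in the program, which is why the community describes the technique as monkey patching and why Refinements exist to scope such changes locally.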
=== Centralize common behavior === However, extension methods allow features to be implemented once in ways that enable reuse without the need for inheritance or the overhead of virtual method invocations, and without requiring implementors of an interface to implement either trivial or woefully complex functionality. A particularly useful scenario is when the feature operates on an interface for which there is no concrete implementation, or for which a useful implementation is not provided by the class library author, as is often the case in libraries that provide developers a plugin architecture or similar functionality. Consider the following code and suppose it is the only code contained in a class library. Nevertheless, every implementor of the ILogger interface will gain the ability to write a formatted string, just by including a using MyCoolLogger statement, without having to implement it themselves and without being required to subclass a class-library-provided implementation of ILogger. === Better loose coupling === Extension methods allow users of class libraries to refrain from ever declaring an argument, variable, or anything else with a type that comes from that library. Construction and conversion of the types used in the class library can be implemented as extension methods. After carefully implementing the conversions and factories, switching from one class library to another can be made as easy as changing the using statement that makes the extension methods available for the compiler to bind to. === Fluent application programmer's interfaces === Extension methods have special use in implementing so-called fluent interfaces. An example is Microsoft's Entity Framework configuration API, which allows one, for example, to write code that resembles regular English as closely as practical.
One could argue this is equally possible without extension methods, but in practice extension methods provide a superior experience because fewer constraints are placed on the class hierarchy to make it work, and read, as desired. The following example uses Entity Framework and configures the TodoList class to be stored in the database table Lists and defines a primary and a foreign key. The code should be understood more or less as: "A TodoList has key TodoListID, its entity set name is Lists, and it has many TodoItems, each of which has a required TodoList". === Productivity === Consider, for example, IEnumerable and note its simplicity: there is just one method, yet it is more or less the basis of LINQ. There are many implementations of this interface in Microsoft .NET. Nevertheless, it would have been burdensome to require each of these implementations to implement the whole series of methods defined in the System.Linq namespace to operate on IEnumerables, even though Microsoft has all the source code. Worse, this would have required everybody besides Microsoft who considered using IEnumerable to implement all those methods as well, which would have been counterproductive given the widespread use of this very common interface. Instead, by implementing the one method of this interface, LINQ can be used more or less immediately, especially since in most cases IEnumerable's GetEnumerator method simply delegates to the GetEnumerator implementation of a private collection, list, or array.
=== Performance === That said, additional implementations of a feature provided by an extension method can be added to improve performance, or to deal with differently implemented interfaces, such as providing the compiler with an implementation of IEnumerable specifically for arrays (in System.SZArrayHelper), which it will automatically choose for extension method calls on array-typed references, since their argument (this T[] value) is more specific than that of the extension method with the same name that operates on instances of the IEnumerable interface (this IEnumerable value). === Alleviating the need for a common base class === With generic classes, extension methods allow implementation of behavior that is available for all instantiations of the generic type without requiring them to derive from a common base class, and without restricting the type parameters to a specific inheritance branch. This is a big win: without extension methods, these situations require a non-generic base class just to implement the shared feature, which then forces the generic subclass to perform boxing and/or casts whenever the type used is one of the type arguments. === Conservative use === A note of caution applies when preferring extension methods over other means of achieving reuse and proper object-oriented design.
Extension methods might 'clutter' the automatic completion features of code editors, such as Visual Studio's IntelliSense. They should therefore either live in their own namespace, so the developer can selectively import them, or be defined on a type that is specific enough for the method to appear in IntelliSense only when really relevant. They can also be hard to find when the developer expects them but they are missing from IntelliSense due to an absent using statement, since the developer may associate the method not with the class that defines it, or the namespace in which it lives, but with the type it extends and the namespace that type lives in. == The problem == In programming, situations arise where it is necessary to add functionality to an existing class, for instance by adding a new method. Normally the programmer would modify the existing class's source code, but this forces the programmer to recompile all binaries with these new changes and requires that the programmer be able to modify the class, which is not always possible, for example when using classes from a third-party assembly. This is typically worked around in one of three ways, all of which are somewhat limited and unintuitive: Inherit the class and then implement the functionality in an instance method in the derived class. Implement the functionality in a static method added to a helper class. Use aggregation instead of inheritance. == Current C# solutions == The first option is in principle easier, but it is unfortunately limited by the fact that many classes restrict inheritance of certain members or forbid it completely. This includes sealed classes and the different primitive data types in C# such as int, float and string.
The second option, on the other hand, does not share these restrictions, but it may be less intuitive as it requires a reference to a separate class instead of using the methods of the class in question directly. As an example, consider the need to extend the string class with a new reverse method whose return value is a string with the characters in reversed order. Because the string class is a sealed type, the method would typically be added to a new utility class in a manner similar to the following: This may, however, become increasingly difficult to navigate as the library of utility methods and classes increases, particularly for newcomers. The location is also less intuitive because, unlike most string methods, it would not be a member of the string class, but would reside in a completely different class altogether. A better syntax would therefore be the following: == Current VB.NET solutions == In most ways, the VB.NET solution is similar to the C# solution above. However, VB.NET has a unique advantage in that it allows members to be passed in to the extension by reference (C# only allows passing by value), allowing for the following: Because Visual Basic allows the source object to be passed in by reference, it is possible to make changes to the source object directly, without the need to create another variable. It is also more intuitive, as it works in a fashion consistent with existing methods of classes. == Extension methods == The new language feature of extension methods in C# 3.0, however, makes the latter code possible. This approach requires a static class and a static method, as follows. In the definition, the modifier 'this' before the first argument specifies that it's an extension method (in this case on the type 'string'). In a call, the first argument is not 'passed in' because it is already known as the 'calling' object (the object before the dot).
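A sketch of the static class / 'this' modifier approach described above, applied to the article's string-reversal example (class and method names here are illustrative):

```csharp
using System;

// The shape the article describes: a static class holding a static method
// whose first parameter carries the 'this' modifier, which turns it into
// an extension method on string.
public static class StringExtensions
{
    public static string Reverse(this string input)
    {
        char[] chars = input.ToCharArray();
        Array.Reverse(chars);          // reverse the characters in place
        return new string(chars);
    }
}
```

With this in scope, `"hello".Reverse()` compiles to the same call as `StringExtensions.Reverse("hello")`; the infix form is the "better syntax" the article contrasts with the helper-class call.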
The major difference between calling extension methods and calling static helper methods is that static methods are called in prefix notation, whereas extension methods are called in infix notation. The latter leads to more readable code when the result of one operation is used for another operation. With static methods With extension methods == Naming conflicts in extension methods and instance methods == In C# 3.0, both an instance method and an extension method with the same signature can exist for a class. In such a scenario, the instance method is preferred over the extension method. Neither the compiler nor the Microsoft Visual Studio IDE warns about the naming conflict. Consider this C# class, where the GetAlphabet() method is invoked on an instance of this class: Result of invoking GetAlphabet() on an instance of AlphabetMaker if only the extension method exists: ABC Result if both the instance method and the extension method exist: abc == See also == UFCS, a way to use free functions as extension methods provided in the D programming language Type classes Anonymous types Lambda expressions Expression trees Runtime alteration Duck typing == References == == External links == Open source collection of C# extension methods libraries. Now archived at Codeplex Extension method in C# Extension methods C# Extension Methods. A collection. extensionmethod.net Large database with C#, Visual Basic, F# and Javascript extension methods Explanation and code example Defining your own functions in jQuery Uniform function call syntax Extension methods in C# Extension Methods in Java with Manifold Extension Methods in Java with Lombok Extension Methods in Java with Fluent Extension functions in Kotlin
Wikipedia/Extension_method
Clinical governance is a systematic approach to maintaining and improving the quality of patient care within the National Health Service (NHS) and private sector health care. Clinical governance became important in health care after the Bristol heart scandal in 1995, during which an anaesthetist, Dr Stephen Bolsin, exposed the high mortality rate for paediatric cardiac surgery at the Bristol Royal Infirmary. It was originally elaborated within the United Kingdom National Health Service (NHS), and its most widely cited formal definition describes it as: A framework through which NHS organisations are accountable for continually improving the quality of their services and safeguarding high standards of care by creating an environment in which excellence in clinical care will flourish. This definition is intended to embody three key attributes: recognisably high standards of care, transparent responsibility and accountability for those standards, and a constant dynamic of improvement. The concept has some parallels with the more widely known corporate governance, in that it addresses those structures, systems and processes that assure the quality, accountability and proper management of an organisation's operation and delivery of service. However clinical governance applies only to health and social care organisations, and only those aspects of such organisations that relate to the delivery of care to patients and their carers; it is not concerned with the other business processes of the organisation except insofar as they affect the delivery of care. The concept of "integrated governance" has emerged to refer jointly to the corporate governance and clinical governance duties of healthcare organisations. Prior to 1999, the principal statutory responsibilities of UK NHS Trust Boards were to ensure proper financial management of the organisation and an acceptable level of patient safety. Trust Boards had no statutory duty to ensure a particular level of quality. 
Maintaining and improving the quality of care was understood to be the responsibility of the relevant clinical professions. In 1999, Trust Boards assumed a legal responsibility for quality of care that is equal in measure to their other statutory duties. Clinical governance is the mechanism by which that responsibility is discharged. "Clinical governance" does not mandate any particular structure, system or process for maintaining and improving the quality of care, except that designated responsibility for clinical governance must exist at Trust Board level, and that each Trust must prepare an Annual Review of Clinical Governance to report on quality of care and its maintenance. Beyond that, the Trust and its various clinical departments are obliged to interpret the principle of clinical governance into locally appropriate structures, processes, roles and responsibilities. == Elements == Clinical governance is composed of at least the following elements: Education and Training Clinical audit Clinical effectiveness Research and development Openness Risk management Information Management === Education and training === It is no longer considered acceptable for any clinician to abstain from continuing education after qualification – too much of what is learned during training becomes quickly outdated. In NHS Trusts, the continuing professional development (CPD) of clinicians has been the responsibility of the Trust and it has also been the professional duty of clinicians to remain up-to-date. === Clinical audit === Clinical audit is the review of clinical performance, the refining of clinical practice as a result and the measurement of performance against agreed standards – a cyclical process of improving the quality of clinical care. In one form or another, audit has been part of good clinical practice for generations. 
Whilst audit has been a requirement of NHS Trust employees, in primary care clinical audit has only been encouraged, where audit time has had to compete with other priorities. === Clinical effectiveness === Clinical effectiveness is a measure of the extent to which a particular intervention works. The measure on its own is useful, but decisions are enhanced by considering additional factors, such as whether the intervention is appropriate and whether it represents value for money. In the modern health service, clinical practice needs to be refined in the light of emerging evidence of effectiveness but also has to consider aspects of efficiency and safety from the perspective of the individual patient and carers in the wider community. === Research and development === Good professional practice is to seek change in the light of evidence-led research. The time lag for introducing such change can be substantial; reducing this lag and the associated morbidity requires emphasis not only on carrying out research but also on implementing it efficiently. Techniques such as critical appraisal of the literature, project management and the development of guidelines, protocols and implementation strategies are all tools for promoting the implementation of research practice. === Openness === Poor performance and poor practice can too often thrive behind closed doors. Processes which are open to public scrutiny, while respecting individual patient and practitioner confidentiality, and which can be justified openly, are an essential part of quality assurance. Open proceedings and discussion about clinical governance issues should be a feature of the framework. Any organisation providing high quality care has to show that it is meeting the needs of the population it serves. Health needs assessment and understanding the problems and aspirations of the community require cooperation between NHS organisations and public health departments.
Legislation also contributes to this. The system of clinical governance brings together all the elements which seek to promote quality of care. === Risk management === Risk management involves consideration of the following components: Risks to patients: compliance with statutory regulations can help to minimise risks to patients. In addition, patient risks can be minimised by ensuring that systems are regularly reviewed and questioned – for example, by critical event audit and learning from complaints. Medical ethical standards are also a key factor in maintaining patient and public safety and well-being. Risks to practitioners: ensuring that healthcare professionals are immunised against infectious diseases, working in a safe environment (e.g. safety in acute mental health units, promoting an anti-harassment culture) and are kept up-to-date on important parts of quality assurance. Furthermore, keeping healthcare professionals up to date with guidelines such as fire safety, basic life support (BLS) and local trust updates is also important; these updates can be annual or more frequent depending on risk stratification. Risks to the organisation: poor quality is a threat to any organisation. In addition to reducing risks to patients and practitioners, organisations need to reduce their own risks by ensuring high quality employment practice (including locum procedures and reviews of individual and team performance), a safe environment (including estates and privacy), and well designed policies on public involvement. Balancing these risk components may be an ideal that is difficult to achieve in practice. Recent research by Fischer and colleagues at the University of Oxford finds that tensions between 'first order' risks (based on clinical care) and 'second order' risks (based on organisational reputation) can produce unintended contradictions and conflict, and may even precipitate organisational crisis.
=== Information management === Information management in health concerns patient records (demographic, socioeconomic, and clinical information): the proper collection, management and use of information within healthcare systems will determine the system's effectiveness in detecting health problems, defining priorities, identifying innovative solutions and allocating resources to improve health outcomes. == Application in the field == If clinical governance is to truly function effectively as a systematic approach to maintaining and improving the quality of patient care within a health system, it requires advocates. It also requires systems and people to be in place to promote and develop it. The system has found supporters outside of the UK. The not-for-profit UK hospital accreditation group the Trent Accreditation Scheme bases its system upon NHS clinical governance, and applies it to hospitals in Hong Kong and Malta. In the Spanish National Health Service, several initiatives have also been implemented, such as those in Andalucía and Asturias. == Notes == == References == G. Scally and L. J. Donaldson, "Clinical governance and the drive for quality improvement in the new NHS in England", BMJ (4 July 1998): 61–65. N. Starey, "What is clinical governance?", Evidence-based medicine, Hayward Medical Communications. == External links == NHS Clinical Governance Support Team (archived) Primary Care Training Centre Stephen Bolsin
Wikipedia/Clinical_governance
Nautilus is an American popular science magazine featuring journalism, essays, graphic narratives, fiction, and criticism. It covers most areas of science, and related topics in philosophy, technology, and history. Nautilus is published six times annually, with some of the print issues focusing on a selected theme, which also appear on its website. Issue themes have included human uniqueness, time, uncertainty, genius, mergers & acquisitions, creativity, consciousness, and reality, among many others. == Reception == In Nautilus' launch year (2013), it was cited as one of Library Journal's Ten Best New Magazines Launched; was named one of the World's Best-Designed news sites by the Society for News Design; received an honorary mention as one of RealClearScience's top science news sites; and received three awards from FOLIO: magazine, including Best Consumer Website and Best Full Issue. In 2014, the magazine won a Webby Award for best science website and was nominated for two others; had two stories selected to be included in 2014 edition of The Best American Science and Nature Writing; won a FOLIO award for Best Standalone Digital Consumer Magazine; and was nominated for two Webby Awards. In 2015, Nautilus won two National Magazine Awards (aka "Ellies"), for General Excellence (Literature, Science and Politics Magazines) and Best Website. It is the only magazine in the history of the award to have won multiple Ellies in its first year of eligibility. It also had one story included in the 2015 edition of The Best American Science and Nature Writing, and another story won a AAAS Kavli Science Journalism Award. RealClearScience again named it a top-10 science website. In 2016, Nautilus had one story included in the 2016 edition of The Best American Science and Nature Writing; won an American Society of Magazine Editor's Award for Best Style and Design of a cover; and was nominated for a Webby Award. 
In 2017, Nautilus had three stories selected for inclusion in the 2017 edition of The Best American Science and Nature Writing; one piece won a AAAS Kavli Science Journalism Award; another piece won a Solar Physics Division Popular Media Award from the American Astronomical Society; and was a Webby Award Nominee for Best Editorial Writing. More than a dozen Nautilus illustrations have been recognized by American Illustration, Spectrum, and the Society of Illustrators. == Contributors == Since the magazine's launch in April 2013, contributors have included scientists Peter Douglas Ward, Caleb Scharf, Gary Marcus, Robert Sapolsky, David Deutsch, Lisa Kaltenegger, Sabine Hossenfelder, Steven Pinker, Jim Davies, Laura Mersini-Houghton, Ian Tattersall, Max Tegmark, Julian Barbour, Stephen Hsu, Martin Rees, Helen Fisher, and Leonard Mlodinow; and writer/journalists Christian H. Cooper, Ayaan Hirsi Ali, Amir Aczel, Nicholas Carr, Carl Zimmer, B. J. Novak, Philip Ball, Kitty Ferguson, Jill Neimark, Robert Zubrin, Alan Lightman, Tom Vanderbilt, and George Musser. Cormac McCarthy made his non-fiction writing debut in Nautilus on 20 April 2017 with an article entitled, "The Kekulé Problem." == Name == The word "nautilus" has a number of meanings that are referred to in the title of the magazine. "'The nautilus is so steeped in math and myth and story, from Verne to the Golden Mean to the spectacular sea creature itself,' [Nautilus publisher John] Steele said, 'that it seemed a fitting namesake for the idea of connecting and illuminating science.'" == Controversy == On 13 December 2017, twenty of Nautilus' freelance writers published "An Open Letter from Freelancers at Nautilus Magazine" in the National Writers Union, alleging that the company was in arrears to them for $50,000 for unpaid work. They announced that ten of them had joined the NWU in order "to pursue a group non-payment grievance with legal action if necessary". 
On 15 December 2017, the Nautilus publisher, John Steele, published a reply explaining the magazine's financial situation and taking responsibility for the late payments. On 1 February 2018, the National Writers Union announced it had reached a settlement with Steele. On 7 November 2019, the National Writers Union announced in a letter that NautilusThink, and its parent NautilusNext, still owed $186,000 to former contributors. On 20 November 2019, NautilusNext chief executive Nicholas White told Columbia Journalism Review that the magazine was committed not to take any profit until the writers it owed were paid back in full. "That commitment was made long before the National Writers Union issued a press release about the acquisition on November 7th," White said. "We did it because it was the right thing to do, and the right way to set a new course for the magazine's future." == Partnerships == On 20 March 2018, Nautilus announced a marketing partnership with Kalmbach Media, publisher of Discover and Astronomy magazines. At the time of the partnership, the three magazines had a combined reach of 10 million users. == See also == Aeon (magazine) Quanta Magazine == References == == External links == Official website
Wikipedia/Nautilus_(science_magazine)
Large language models have been used by officials and politicians in a wide variety of ways. == Overview == The Conversation described ChatGPT as a uniquely terrible tool for government ministers. Google released certain details of the usage of Gemini by the governments of Iran, China, Russia and North Korea. == Details by country == === Australia === The Australian Government has not issued a comprehensive directive on generative AI usage, leaving decisions to individual departments. In 2023, the Department of Home Affairs allowed ChatGPT use in limited circumstances, while the Australian Federal Police blocked it. ==== States and territories ==== Every state and territory, except South Australia, restricted the use of ChatGPT in public schools. In 2024, NSWEduChat was rolled out to replace ChatGPT. === Austria === In 2024, Austria used a chatbot based on ChatGPT to answer questions from welfare recipients. ==== Vienna ==== The Viennese government used ChatGPT to write an anthem for the city-state. === Brazil === City lawmakers in Porto Alegre enacted an ordinance which was largely written using ChatGPT. === Germany === As of 2024, use of ChatGPT varied considerably between different federal ministries. ==== Schleswig-Holstein ==== The digitalisation minister, Dirk Schrödter, announced that the government of Schleswig-Holstein would use ChatGPT in its administration. === India === In 2025, the Ministry of Finance banned its employees from using ChatGPT and DeepSeek on government devices. === Israel === In February 2023, the president of Israel, Isaac Herzog, delivered a speech that had been partially written using ChatGPT. In March 2025, reporting by +972 Magazine revealed the development of a large language model by Unit 8200, an intelligence unit of the Israel Defense Forces. === Japan === In January 2025, the Japanese Government launched a large language model tool to help doctors in diagnosing patients.
=== Korea === In February 2025, the Korean government announced a plan to develop a Korean LLM with an investment of approximately ₩1 trillion. === New Zealand === In October 2024, the New Zealand Government launched its GovGPT pilot. === Poland === In February 2025, the Polish government announced the launch of PLLuM, the Polish Large Language Model, designed to specialise in content in the Polish language. === United Kingdom === In March 2025, the New Scientist revealed it had obtained science minister Peter Kyle's ChatGPT prompts. The topics of Kyle's prompts included policy advice, which podcasts to appear on, and the definitions of various scientific terms. Kyle's use of ChatGPT was defended by Sam Sharps of the Tony Blair Institute. In 2024, the Government of the United Kingdom launched Gov.uk Chat to provide guidance on business rules and support. In 2025, the UK Government started to develop Humphrey, named after the character in Yes Minister, as a large language model tool for civil servants, and the Cabinet Office expanded trials of its Redbox Copilot project. ==== Scotland ==== Whistleblowers have alleged that civil servants have written government policy papers with the assistance of ChatGPT. ==== Wales ==== In 2023, MS Tom Giffard delivered a speech which had been written almost entirely using ChatGPT. === United States === In 2025, OpenAI released ChatGPT Gov, a version of ChatGPT designed for federal government agencies. According to reporting by The Verge, tariffs in the second Trump administration may have been assigned based on a formula written using ChatGPT. ==== California ==== In May 2024, Californian state agencies started to develop generative AI tools to solve common operational challenges. ==== New York ==== State lawmakers in New York passed legislation preventing agencies of the state government from replacing human workers with artificial intelligence. == References ==
Wikipedia/Large_language_models_in_government
Multistakeholder governance is a practice of governance that brings multiple stakeholders together to participate in dialogue, decision making, and implementation of responses to jointly perceived problems. The principle behind such a structure is that if enough input is provided by multiple types of actors involved in a question, the eventual consensual decision gains more legitimacy, and can be more effectively implemented than a traditional state-based response. While the evolution of multistakeholder governance is occurring principally at the international level, public-private partnerships (PPPs) are domestic analogues. 'Stakeholders' refers to a collection of actors from different social, political, and economic spheres working intentionally together to govern a physical, social, economic, or policy area. The range of actors can include multinational corporations, national enterprises, governments, civil society bodies, academic experts, community leaders, religious figures, media personalities and other institutional groups. At a minimum, a multistakeholder group must have two or more actors from different social, political, or economic groups; if not, the group is a trade association (all business groups), a multilateral body (all governments), a professional body (all scholars), etc. Almost all multistakeholder bodies have at least one multinational corporation or business-affiliated body and at least one civil society organization or alliance of civil society organizations as key members. Alternative terminologies for multistakeholder governance include multi-stakeholder initiatives (MSIs), Multi-StakeHolder (MSH), multi-stakeholder processes (MSPs), public-private partnerships (PPPs), transnational multistakeholder partnerships (transnational MSPs), informal governance arrangements, and non-state regulation.
The key term 'multistakeholder' (or 'multistakeholderism') is increasingly spelled without a hyphen to maintain consistency with its predecessor 'multilateralism' and to associate this new form of governance with one of the key actors involved that is also generally spelled without a hyphen: 'multinationals'. 'Multistakeholderism' is similarly used in parallel to bilateralism and regionalism. As an evolving global governance form, only a limited number of organizations and institutions are involved in multistakeholderism. In a number of arenas, opposing forces are actively challenging the legitimacy, accountability, and effectiveness of these experimental changes in global governance. == Contemporary history and theory == Stakeholder management theory, stakeholder project management theory, and stakeholder government agency theory have all contributed to the intellectual foundation for multistakeholder governance. The history and theory of multistakeholder governance, however, departs from these models in four ways. The earlier theories describe how a central institution (be it a business, a project, or a government agency) should engage more formally with related institutions (be it other organizations, institutions, or communities). In multistakeholder governance, the central element of a multistakeholder undertaking is a public concern (e.g. protection of the climate, management of the internet, or the use of natural resources), not a pre-existing organization. Second, the earlier theories aimed to strengthen a pre-existing institution. In multistakeholder governance, multistakeholder groups can strengthen associated institutions but they can also marginalize institutions or functions of existing governance bodies (e.g. governmental regulatory authorities, the UN system).
As earlier theories were concerned with improving the operations of corporations and project management, they did not need to address the public governance consequences of multistakeholder decision-making. They also provide little or no guidance to autonomous multistakeholder groups on their internal rules of governance, as the pre-existing institution had its own functioning decision-making system. As multistakeholderism is an evolving system of governance, a good deal of its theoretical underpinning is a combination of formal theoretical writing and theory derived from practice. === World Economic Forum's Global Redesign Initiative === The most extensive theoretical writing and most detailed practical proposals come from the World Economic Forum's Global Redesign Initiative (GRI). Its 2010 600-page report, "Everybody's Business: Strengthening International Cooperation in a More Interdependent World", was a comprehensive proposal for re-designing global governance. The report sought to change in fundamental ways the global governance system built since World War II. The report, authored by the leadership of the World Economic Forum, including Klaus Schwab, is a series of broad policy papers on multistakeholder governance and a broad array of theme-specific policy options. These policy and thematic program recommendations were designed to display the new governance structure's ability to respond to a range of global crises. These global policy areas include investment flows; educational systems; systemic financial risk; philanthropy and social investing; emerging multinationals; fragile states; social entrepreneurship; energy security; international security cooperation; mining and metals; the future of government; ocean governance; and ethical values.
What sets the World Economic Forum's proposal apart is that it was developed as a cooperative effort involving 750 experts from the international business, governmental, and academic communities working in sixty separate task forces for one and a half years (2009/2010). WEF also had over forty years' experience convening leading stakeholders from the political, economic, cultural, civil society, religious, and other communities to discuss the way forward in global affairs. As the three co-chairs observed in their introduction to the GRI report: "The time has come for a new stakeholder paradigm of international governance analogous to that embodied in the stakeholder theory of corporate governance on which the World Economic Forum itself was founded." === Intergovernmental bodies in the UN system === The United Nations effort to develop multistakeholder governance is widely regarded to have started with the 1992 U.N. Conference on Environment and Development (more commonly known as the Rio Conference). There, governments created nine major non-state groups which could be part of the official intergovernmental process. Ten years later in Johannesburg, the follow-up conference created a new multistakeholder implementation process, officially called "type II conference outcomes", where transnational corporations, NGOs, and governments pledged to work together to implement a specific section of the conference report. A separate government effort to define multistakeholder governance has been a series of United Nations General Assembly resolutions on 'partnerships'. The earliest resolution (2002) drew "the attention of Member States to multi-stakeholder initiatives, in particular, the Global Compact Initiative of the Secretary-General, the Global Alliance for Vaccines and Immunizations, the multi-stakeholder dialogue process of the Commission on Sustainable Development and the Information and Communication Technologies Task Force".
Over the next 17 years until 2019, the governments at the United Nations continued to evolve their understanding of multistakeholder governance by adopting eight other related resolutions. In the most recent partnership resolution, from 2019, governments identified a number of principles that should define a multistakeholder partnership, stressing "common purpose, transparency, bestowing no unfair advantages upon any partner of the United Nations, mutual benefit and mutual respect, accountability, respect for the modalities of the United Nations, striving for balanced representation of relevant partners from developed and developing countries and countries with economies in transition, and not compromising the independence and neutrality of the United Nations system in general and the agencies in particular". In the same resolution, governments further defined these voluntary partnerships as "collaborative relationships between various parties, both public and non-public, in which all participants agree to work together to achieve a common purpose or undertake a specific task and, as mutually agreed, to share risks and responsibilities, resources and benefits". === Civil society organizations within the UN system === Civil society organizations have had a series of parallel but distinct exchanges on the theory and practice of multistakeholder governance. Two elements of the definition of multistakeholder governance that are not central to the intergovernmental debate are (1) the connection between democracy and multistakeholder governance and (2) the assessment of the efficiency and effectiveness of multistakeholder projects.
In 2019, Felix Dodds, a founder of the Stakeholder Forum, argued that "involving stakeholders in the decision-making process makes them more likely to partner with each other and with governments at all levels to help deliver on the commitments associated with [intergovernmentally adopted] agreements". In this perspective, the evolution of multistakeholder governance marks a positive transformation from representative democracy to stakeholder-based participatory democracy. A 2019 report on multistakeholderism from the Transnational Institute (TNI) in Amsterdam takes a different perspective. It considers that democracy is at great risk from multistakeholder governance. TNI sees the lack of a legitimate public selection process for 'stakeholders'; the inherent power imbalance between categories of 'stakeholders', particularly transnational corporations and community groups; and the intrusion of business interests in formal international decision-making as counter to the development of a globally representative democratic system. Gleckman, an associate of TNI and a senior fellow at the Center for Governance and Sustainability, UMass Boston, advances other arguments on the inherently un-democratic character of multistakeholder governance. === International commissions === The 1991-1994 Commission on Global Governance, the 2003-2007 Helsinki Process on Globalisation and Democracy, and the 1998-2001 World Commission on Dams each addressed the evolution of the concept of multistakeholderism as a force in global governance. For example, the World Commission on Dams (WCD) was established in 1998 as a global multistakeholder body by the World Bank and the World Conservation Union (IUCN) in response to growing opposition to large dam projects.
The twelve Commission members came from a variety of backgrounds, representing a broad spectrum of interests in large dams – including governments and nongovernmental organizations (NGOs), dam operators and grassroots people's movements, corporations and academics, industry associations and consultants. In WCD's final report from 2000, the chair, Professor Kader Asmal, described the Commissioners' views about multistakeholder governance this way: "We are a Commission to heal the deep and self-inflicted wounds torn open wherever and whenever far too few determine for far too many how best to develop or use water and energy resources. That is often the nature of power, and the motivation of those who question it. Most recently governments, industry and aid agencies have been challenged around the world for deciding the destiny of millions without including the poor, or even popular majorities of countries they believe to be helping. To confer legitimacy on such epochal decisions, real development must be people centred, while respecting the role of the state as mediating, and often representing, their interests...we do not endorse globalisation as led from above by a few men. We do endorse globalisation as led from below by all, a new approach to global water policy and development". === Key parties in internet governance === The role of multistakeholder processes in internet governance dominated the 2003-2005 World Summit on the Information Society (WSIS). However, the summit failed to address the digital divide to the satisfaction of developing countries. The final outcome of the Summit, the Tunis Agenda (2005), enshrined a particular type of multistakeholder model for Internet governance, in which, at the urging of the United States, the key function of administration and management of naming and addressing was delegated to the private sector, the Internet Corporation for Assigned Names and Numbers (ICANN).
This US policy of using multistakeholder processes in effect to favor privatization of functions which had traditionally been performed by government agencies was well expressed in a 2015 statement by Julie Napier Zoller, a senior official in the US Department of State's Bureau of Economic and Business Affairs. She argued that "Every meeting that is enriched by multistakeholder participation serves as an example and a precedent that opens doors for multistakeholder participation in future meetings and fora." == Definition of a 'stakeholder' == There are generally accepted definitions of 'stakeholder' in management theory and generally accepted processes for selecting 'stakeholders' in project management theory. However, there is no commonly accepted definition of 'stakeholder' and no generally recognized process to designate 'stakeholders' in multistakeholder governance. In a democracy, there is only one elemental category for public decision-making: the 'citizen'. Unlike the concept of 'citizen' in democratic governance theory, the concept of 'stakeholder' in multistakeholder governance theory and practice remains unsettled and ambiguous. In multistakeholder governance, there are three tiers of 'stakeholder' definitions: (1) the definition of the 'stakeholder category' (e.g. business); (2) the definition or the specification for selecting organizations or institutions within a 'stakeholder category' (e.g. micro-enterprises or women-owned businesses); and (3) the definition or the specification for selecting an individual person to represent a designated organization or institution within a stakeholder category (e.g. the CEO, the external affairs officer, or a professional staff member). In practice, it is not uncommon for the founders of a multistakeholder group to select a key individual to be a member of the group and then retroactively classify that individual and/or the individual's organization into an appropriate definitional category.
=== Multiple definitions of categories of stakeholders within the UN system === At the United Nations Rio conference in 1992, governments formally accepted nine Major Groups as 'stakeholder' categories. The designated Major Groups were Women, Children and Youth, Indigenous Peoples, Non-Governmental Organizations, Local Authorities, Workers and Trade Unions, Business and Industry, Scientific and Technological Community, and Farmers. Two decades later, the importance of effectively engaging these nine sectors of society was reaffirmed by the Rio+20 Conference. However, that conference added other stakeholders, including local communities, volunteer groups and foundations, migrants and families, as well as older persons and persons with disabilities. Subsequently, governments also added as stakeholders private philanthropic organizations, educational and academic entities and other stakeholders active in areas related to sustainable development. The 'Major Groups' designation is now cited as 'Major Groups and Other Stakeholders'. The International Labour Organization (ILO)'s governance system functions with just three constituencies: 'workers', 'business', and 'government'. In this tri-partite arrangement, workers and business are on the same footing as governments. The Committee on World Food Security (CFS) has different main categories: 'Members', 'Participants' and 'Observers'. The CFS sees itself as "the foremost inclusive international and intergovernmental platform for all stakeholders to work together to ensure food security and nutrition for all". Their 'Participants' category however includes a wide variety of social actors: (a) UN agencies and bodies, (b) civil society and non-governmental organizations and their networks, (c) international agricultural research systems, (d) international and regional financial institutions, (e) representatives of private sector associations and (f) private philanthropic foundations.
=== Multiple definitions of categories of stakeholders outside the UN system (selected examples) === Unlike the multiple definitions inside the UN system, the definitions of stakeholder categories for autonomous multistakeholder groups are generally versions of "interest-based" definitions. The International Organization for Standardization (ISO) defines a stakeholder as an individual or group "that has an interest in any decision or activity of an organization" (ISO 26000). Hemmati, a co-founder of the MSP Institute, a multistakeholder support organization, defines stakeholders as "those who have an interest in a particular decision, either as individuals or representatives of a group. This includes people who influence a decision, or can influence it, as well as those affected by it." The trade association of international environmental and social standard-setting bodies, ISEAL, defines stakeholder groups as those "that are likely to have an interest in the standard or that are likely to be affected by its implementation, and provides them with mechanisms for participation that are appropriate and accessible." === Multiple definitions used to select organizations within individual stakeholder categories === There is also no consistent definition or selection process to define the individual organization(s) that may "represent" a given category of stakeholders in a given multistakeholder group. For example, the 'government' category can involve government offices at the national, regional, county/provincial and municipal levels, regional inter-governmental organizations (e.g. European Commission, Organization of American States), intergovernmental secretariats (e.g. FAO, WHO) or include members of parliaments, regulatory bodies, technical experts in specific government departments and courts.
The 'civil society' category could similarly involve non-state organizations at the international, regional and national levels, social movements, religious bodies, professional associations, development organizations, humanitarian groups or environmental NGOs. The 'business' stakeholder category could mean multinational corporations, medium-sized national enterprises, small and micro local businesses, business trade associations at the international, national, or local level, businesses from developing countries, minority-owned businesses, women-owned enterprises or green global businesses. When 'academics' are a stakeholder category, the category members could be social scientists, physicists, philosophers, environmental experts, professors of religion, lawyers, university administrators, or a professional association affiliated with scholarly work. === Inclusive vs exclusive multistakeholder initiatives === At the G7 summit (Cornwall, UK, 11-13 June 2021), G7 leaders highlighted the importance of standards in line with their values and affirmed their support for "industry-led, inclusive multi-stakeholder approaches to standards setting". An 'inclusive' multi-stakeholder approach calls for the use of common standards and encourages collaboration with the International Organization for Standardization (ISO). ISO standards are voluntary and consensus-based, and therefore inclusive, developed using the core WTO Technical Barriers to Trade principles of transparency, openness, impartiality and consensus, effectiveness and relevance, coherence, and addressing the concerns of developing countries. By contrast, in an 'exclusive' multi-stakeholder approach, multinational corporations in the private sector create multi-stakeholder initiatives that adopt non-consensus private standards and in which the corporations hold majority voting rights, failing to meet the WTO principles described above.
Exclusive multi-stakeholder initiatives that adopt private standards are discussed in a report from the Institute for Multi-Stakeholder Initiative Integrity (MSI Integrity). Another example of an exclusive multi-stakeholder initiative adopting private standards is the Global Food Safety Initiative, which defines its own benchmarking requirements and thereby controls the minimum requirements in the schemes it recognizes. The difference between international standards and private standards is explained in a publication from ISO. === Selection of representatives === Each organization designated to "represent" a stakeholder category can use its own method to select an individual to participate in a stakeholder group. Having an individual from a given organization participate in the leadership of a multistakeholder group does not necessarily mean that the sponsoring organization (be it a business, civil society organization or a government) is itself on board. The participation of any given individual may only mean that a particular office or department has chosen to work with that multistakeholder group. The individual involved may have been granted permission to liaise with a given multistakeholder group, provided leave to participate in their personal, professional capacity, or formally designated to represent a specific organization. This ambiguity between the commitment of the institution as a whole and the participation of a representative of a specific office or agency can affect a number of different roles inside and outside the multistakeholder group. The multistakeholder group may well appreciate being able to assert publicly that x governments or y transnational corporations are part of the multistakeholder group in order to garner greater political-economic recognition.
Internally, the other participants may believe that the institutional capacities and financial resources of the parent organization may be available to meet the goals of the multistakeholder group. === Uniquely governance issues in the use of the term 'stakeholder' === There is no ongoing international effort to standardize the core multistakeholder governance concept of 'stakeholder', nor any international effort to standardize the procedure for designating an organization or an individual within any given stakeholder category. Unlike the use of 'stakeholder' in management theory and project management theory, there are a number of demographic, political, and social factors that can impact the use of the 'stakeholder' concept in governance. Among the identified issues are (a) the difficulty of balancing gender, class, ethnicity, and geographic representation in any given multistakeholder group; (b) the potential conflicts of interest between 'business' stakeholders and their commercial markets; (c) the asymmetric power of different categories of stakeholders and different organizations representing stakeholder categories within a multistakeholder group; and (d) the lack of a review structure or judicial mechanism to appeal the selection of stakeholder categories, stakeholder organizations within a category, or the selection of the person to represent a stakeholder organization. == Types of groups == Multistakeholder governance arrangements are being used - or are proposed to be used - to address a wide range of global, regional, and national challenges.
These governance challenges, often ones that have a significant political, economic, or security impact, can be categorized as follows: (1) those involving the formulation of public policies with minimal or marginal government participation; (2) those involved in setting market-governing standards that were previously a state function; and (3) those involved in implementing large-scale projects, often large-scale infrastructure projects, with government participation. === Policy-oriented groups === Policy-oriented multistakeholder governance groups are used to address an international policy issue. These groups tend to arise when global actors believe a policy intervention is necessary but governments or intergovernmental organizations are unwilling or unable to resolve a policy matter. Most multistakeholder governance groups meet independently of multilateral organizations, while some may include the multilateral system for their endorsement or support. Examples of policy-oriented multistakeholder governance groups: World Economic Forum's Global Futures Councils; World Commission on Dams; Kimberley Process Certification Scheme; Renewable Energy Policy Network for the 21st Century; Global Partnership for Oceans. === Product, finance and process-oriented groups === Product, finance and process-oriented multistakeholder groups are organizations that set standards for internationally traded products and processes and/or provide financing with a multistakeholder board. For products, the goal is to facilitate ethical, environmental, and development-friendly products that are desired by consumers and beneficial for producers, manufacturers and retailers. Processes refer to new, rapidly evolving, complex and high-impact technologies on the international market that lack domestic standards or regulatory oversight. The multistakeholder groups determine how the processes can best function internationally between competing commercial interests.
These groups work with social justice civil society organizations, academic and government bodies to resolve conflicts and plan a path forward. Unlike traditional philanthropic organizations, finance-oriented multistakeholder groups operate with a governing body that explicitly designates individuals to "represent" the views of specific stakeholder categories. Examples of product-oriented multistakeholder groups: Aquaculture Stewardship Council (ASC); Better Cotton Initiative (BCI); Forest Stewardship Council (FSC); Global Coffee Platform (GCP); GoodWeave; Marine Stewardship Council (MSC); Roundtable on Sustainable Biomaterials (RSB); Roundtable on Sustainable Palm Oil (RSPO); Initiative for Responsible Mining Assurance. Examples of process-oriented multistakeholder groups: Carnegie Climate Geoengineering Governance Initiative; Consumer Goods Forum (CGF); Fairtrade International (FLO); FramingNano Project; Global Food Safety Initiative (GFSI); Global Partnership for Business and Biodiversity; ICANN; Internet Governance Forum (IGF). Examples of finance-oriented multistakeholder groups: GAVI, The Vaccine Alliance; CGIAR (formerly the Consultative Group for International Agricultural Research). === Project-oriented groups === Project-oriented multistakeholder groups accomplish global or national tasks that governments or the multilateral system are unable to accomplish. Global project-oriented groups accomplish governance goals implemented by the multilateral system. National project-oriented groups address a public need that the relevant government is not able to fulfill. These may operate on the local, state, or national level. Project-oriented multistakeholder groups are frequently called public-private partnerships (PPPs).
Examples of global project-oriented groups: Alliance for Water Stewardship; Roll Back Malaria; Accord on Fire and Building Safety in Bangladesh; Global Alliance for Improved Nutrition; The Global Polio Eradication Initiative; The Global Fund to Fight AIDS, Tuberculosis and Malaria; Sustainable Energy for All. Examples of where national project-oriented groups may act: recreation areas; transportation infrastructure; high-speed internet infrastructure; supply of municipal drinking water. == Relationship with == === Multilateral system === Different parts of the multilateral system are involved in different ways with all three types of multistakeholder groups. These include multistakeholder bodies which are called for by an intergovernmental body (e.g. goal 17 of the SDGs); multistakeholder bodies organized by and legally dependent on the secretariat of the UN system itself (e.g. Global Compact); multistakeholder bodies which offer to financially support certain UN goals and projects; UN-affiliated project development organizations which regard multistakeholder implementation as more effective and efficient than state or UN system implementation; non-UN-sponsored multistakeholder bodies which formally align themselves with the UN system (e.g. WEF strategic partnership); and non-UN-sponsored multistakeholder bodies where UN system staff are allowed to serve in their personal, professional capacities. On the other hand, some multistakeholder bodies are intentionally independent of the UN system. This form of disengagement from the UN system was formulated by the Global Redesign Initiative as 'plurilateral, often multi-stakeholder, coalitions of the willing and able' to work outside the intergovernmental framework.
Examples of this practice are multistakeholder bodies which explicitly seek autonomy from legally binding state regulations and the soft law of the intergovernmental system (e.g. internet governance); standard-setting multistakeholder bodies which perceive that the UN system has failed to address their concerns and consequently elect to operate without UN system engagement; and international multistakeholder funding sources which opt to be independent of the relevant intergovernmental process (e.g. GAVI). Finally, some multistakeholder bodies want to disengage from the UN system in their day-to-day activities but seek UN intergovernmental endorsement of the outcome of the autonomous arrangements (e.g. Kimberley Process Certification Scheme). i. Multilateral institutions' views of multistakeholder processes and governance As an evolving global governance system, different parts of the UN system describe the importance of multistakeholderism in different ways. For example, the World Bank notes that multistakeholder initiatives bring together government, civil society, and the private sector to address complex development challenges that no one party alone has the capacity, resources, and know-how to address as effectively; the Asian Development Bank asserts that multistakeholder groups allow communities to articulate their needs, help shape change processes and mobilize broad support for difficult reforms; the Global Compact believes that by convening committed companies with relevant experts and stakeholders, the UN can provide a collaborative space to generate and implement advanced corporate sustainability practice and inspire widespread uptake of sustainability solutions among businesses around the world; and the SDGs' partnership goal (Goal 17) seeks to use multistakeholder partnerships to mobilize and share knowledge, expertise, technology and financial resources to implement the SDG program. ii.
Public policy concerns raised about multistakeholder engagement with the multilateral system Some governments, civil society organizations, and the international media have challenged the legitimacy and appropriateness of multistakeholder engagement with the multilateral system and have raised concerns that the integrity and legitimacy of the UN is endangered by multistakeholderism. They have contested a strategic partnership agreement between the office of the UN Secretary-General and the World Economic Forum; the planned hosting of international conferences that bypass the traditional intergovernmental preparatory process in favor of one centered on multistakeholder engagement with the UN system secretariat (e.g. the proposed World Food Summit); the shift from bottom-up development to top-down multistakeholder-led development; the offer of free staff from the World Economic Forum to the Executive Director of a UN system treaty body; and the process of large international multistakeholder bodies setting global policy goals through their philanthropy. === Transnational corporations and industry-related organizations === Most transnational corporations (TNCs) and business-related organizations are not involved with multistakeholder groups. However, the business sector and large TNCs are often seen as essential participants in any multistakeholder undertaking. Some of these firms see long-term benefits from multistakeholderism. For some, multistakeholder governance bodies are the preferred alternative to state oversight or intergovernmentally drafted soft law. For firms in sectors with a high negative profile, multistakeholder bodies can be useful instruments to identify solutions to complex difficulties or to re-establish public credibility for their firm or sector.
For other firms, multistakeholder groups provide an institutional entry into global governance structures or an institutional arrangement outside of the UN system to lead in defining international policies and programs (e.g. WEF's Shaping the Future Councils). For other firms, the benefits are more short-term. The short-term benefits include working to shape the technical specification for a niche international market; creating public acceptability and expectations for new markets; and managing the public perceptions of their firm. By far, however, the greatest number of TNCs that engage with multistakeholderism are those that participate in project-focused public-private partnerships (PPPs) at the national and international levels. These TNCs and related national enterprises can use the PPP form both to address state failures to meet a given social, economic, or environmental need and to gain state approval for the privatization of a given sector or region of an economy. These shifts in the role of the private sector alter long-standing public-private distinctions and, as such, have implications for global and national democratic decision-making. Public–private partnerships have positioned corporations as a leading voice on decisions where public governance authorities have become dependent on private sector funding. Lobbying has influenced trade agreements for food systems, creating barriers to competition. === Civil society organizations / NGOs / social movements === One of the drivers for the creation of civil society organizations (CSOs), non-governmental organizations, or social movements is to be autonomous from governments and commercial interests.
With the advent of multistakeholder governance, some institutions have intentionally shifted from this autonomous position in order to further specific institutional goals; others have joined multistakeholder groups, particularly PPPs, out of anxiety about being cut off from crucial decisions, while the majority of these organizations remain autonomous of governments and commercial interests and unconnected with multistakeholder groups. In the first case, some CSOs have been founders of international standard-setting bodies in partnership with sector-specific TNCs and national enterprises; have joined high-level multistakeholder policy groups; have participated in multistakeholder groups convened to implement UN system goals (e.g. SDG goal 17); and have joined international monitoring multistakeholder initiatives. In the second case, CSOs which have been confronted with the creation of a powerful PPP feel that non-participation would leave them at a severe local disadvantage; other CSOs would prefer that a government or the UN system address a given topic but see no other way to set standards for that sector (e.g. Global Coffee Platform). In the third case, CSOs, NGOs, and social movements have taken active steps to dissuade governments, TNCs, and other CSOs, NGOs, and social movements from participating in multistakeholder groups; some of these organizations have appealed to the UN Secretary-General to withdraw from partnerships with multistakeholder bodies. === Governments, particularly policy making bodies, regulatory agencies, and infrastructure offices === Some governments engage with multistakeholderism to develop public policies or to avoid developing public policies.
These governments, or more precisely parts of governments, have supported multistakeholder groups that address complex public policy issues, have chosen to address sensitive intergovernmental issues without the involvement of the UN system, and have chosen to address para-military issues without the involvement of the UN system (e.g. International Code of Conduct for Private Security Service Providers). Governments are not uniform in their use of multistakeholder bodies for policy making. In several cases, some governments use multistakeholderism as a public policy mechanism. On that same public policy issue, other governments oppose the use of multistakeholderism, preferring instead to consider an issue through multilateral or bilateral arrangements. The two clearest examples are internet governance and private international standard-setting bodies which operate without developing-country participation (UNCTAD's Forum on Sustainability Standards). In the case of internet governance, the major private actors in this area seek to have little or no engagement with governments. Governments all have product standard-setting regulatory institutions. Multistakeholderism presents an opportunity to have an alternative arrangement that shifts the process of formulating and monitoring standards to a multistakeholder body and shifts the standards from obligatory to voluntary. Examples of this use of multistakeholder groups by governments include opting to follow the advice of expert-based multistakeholder groups rather than establish separate expert government-based organizations, welcoming efforts to have multistakeholder standards set by TNCs and civil society to avoid conflicts with home-country TNCs and other businesses (e.g. Accord on Fire and Building Safety in Bangladesh), and supporting voluntary private standard-setting for un- and under-governed spaces (e.g. oceans). Many of these cases represent an indirect privatization of public services and goods.
Other governments or parts of government actively participate in project-based public-private partnerships. In PPPs, governments agree to grant de jure or de facto governance over a natural resource (e.g. access to public water) or the area around an infrastructure project to a given multistakeholder group. The degree of control explicitly or implicitly transferred to the PPP, and the extent to which the initial expectations for operations and prices are not met, has become a contentious governance issue. === Academia and professional associations === While over 250 academics assisted the WEF in developing its Global Redesign Initiative, most members of the academic community and most professional associations are not involved with multistakeholder groups. Those academics that are involved in multistakeholder groups tend to participate in policy making multistakeholder groups or in the development of international product and process standard setting. Some university-based experts join business-oriented multistakeholder bodies in a similar manner to joining the corporate boards of individual firms. However, unlike providing their expertise to a business as consultants or board members, scholars on the board of a multistakeholder governance organization, particularly one that sets international product or process standards, have moved from an advisor and investor role to one that is functionally similar to that of a state regulatory official. In some cases, university faculty are recruited by major firms or governments to create an academic-business-governmental organization to open new markets for that business or those in its sector. In other cases, multistakeholder groups and universities co-host multistakeholder events and research projects.
== See also == Civil society Multi-stakeholder cooperatives Internet governance Internet multistakeholder governance == References == == Further reading == Marcus Kummer, "Multistakeholder Cooperation: Reflections on the emergence of a new phraseology in international cooperation", Internet Society Blog, 2013. Michael Gurstein, "Multistakeholderism vs. Democracy: My Adventures in 'Stakeholderland'", Wordpress, 2013. Adam, Lishan, Tina James, and Munyua Wanjira. 2007. "Frequently asked questions about multi-stakeholder partnerships in ICTs for development: A guide for national ICT policy animators." Melville, South Africa: Association for Progressive Communications. Alliance for Affordable Internet. n.d. "Members." Accessed 15 March 2018. Asmal, Kader. 2001. "Introduction: World Commission on Dams Report, Dams and Development." American University International Law Review 16, no. 6: 1411-1433. Avant, Deborah D., Martha Finnemore, and Susan K. Sell. 2010. "Who governs the globe?" Cambridge: Cambridge University Press. Bernstein, Steven, and Benjamin Cashore. 2007. "Can non-state global governance be legitimate? An analytical framework." Regulation & Governance 1, no. 4: 347-371. Brinkerhoff, Derick W., and Jennifer M. Brinkerhoff. 2011. "Public-private partnerships: Perspectives on purposes, publicness, and good governance." Public Administration and Development 31, no. 1: 2-14. Cutler, A. Claire, Virginia Haufler, and Tony Porter. 1999. "The Contours and Significance of Private Authority in International Affairs" in Cutler, A. Claire, Virginia Haufler, and Tony Porter (Eds.) Private Authority and International Affairs. Albany, NY: SUNY Press, 333–76. Dingwerth, Klaus and Philipp Pattberg. 2009. "World Politics and Organizational Fields: The Case of Transnational Sustainability Governance." European Journal of International Relations 15, no. 4: 707–744. Dingwerth, Klaus. 2007.
"The New Transnationalism: Transnational Governance and Democratic Legitimacy". Basingstoke: Palgrave Macmillan. Gasser, Urs, Ryan Budish and Sarah West. 2015. "Multistakeholder as Governance Groups: Observations from Case Studies." Cambridge, Massachusetts: Berkman Klein Center, Harvard University. Gleckman, Harris. 2012. "Readers Guide: Global Redesign Initiative." Boston: Center for Governance and Sustainability at the University of Massachusetts Boston. Gleckman, Harris. 2018. "Multistakeholder Governance and Democracy: A Global Challenge". London: Routledge. Hemmati, Minu, Felix Dodds, Jasmin Enayati, and Jan McHarry. 2012. "Multi-stakeholder Processes for Governance and Sustainability: Beyond Deadlock and Conflict". London: Earthscan. Hohnen, Paul. 2001. "Multistakeholder Processes: Why, and Where Next?" Presentation at the UNED Forum Workshop, New York City, 28 April 2001. ICANN. 2012. "Governance Guidelines". Last modified 18 October 2012. Martens, Jens. 2007. "Multistakeholder Partnerships – Future Models of Multilateralism". Global Policy Forum, January 2007. McKeon, Nora. 2005. "Food Security Governance: Empowering Communities, Regulating Corporations." London: Routledge. MSI Integrity. 2020. "Not Fit-for-Purpose: The Grand Experiment of Multi-Stakeholder Initiatives in Corporate Accountability, Human Rights and Global Governance." San Francisco: Institute for Multi-Stakeholder Initiative Integrity. MSI Integrity. 2015. "Protecting the Cornerstone: Assessing the Governance of Extractive Industries Transparency Initiative Multi-Stakeholder Groups." San Francisco: Institute for Multi-Stakeholder Initiative Integrity. Nelson, Jane and Beth Jenkins. 2016. "Tackling Global Challenges: Lessons in System Leadership from the World Economic Forum's New Vision for Agriculture Initiative." Cambridge, Massachusetts: CSR Initiative, Harvard Kennedy School. Pattberg, Philipp. 2012. "Public-private Partnerships for Sustainable Development: Emergence, Influence and Legitimacy."
Cheltenham, UK: Edward Elgar Publishing. Potts, Andy. 2016. "Internet Governance: We the Networks." The Economist, 5 March 2016. Raymond, Mark, and Laura DeNardis. 2015. "Multistakeholderism: anatomy of an inchoate global institution." International Theory 7, no. 3: 572-616. Schwab, Klaus. 2009. "World Economic Forum. A Partner in Shaping History: The First 40 Years." Davos: The World Economic Forum. The Commission on Global Governance. 1995. "Our Global Neighbourhood". Oxford: Oxford University Press. UN. 2002. Towards Global Partnerships, GA Agenda Item 39, UN GA 56th session, UN Doc A/56/76 (Distributed 24 January 2002). UN. 2008. Towards global partnerships, GA Res 62/211, UN GA 62nd session, UN Doc A/RES/62/211 (Distributed 11 March 2008, Adopted 19 December 2007). UN. 2008. Towards Global Partnerships: on the report of the Second Committee (A/62/426), GA Agenda Item 61, UN GA 62nd session, UN Doc Res A/RES/62/211 (Distributed 11 March 2008, Adopted 19 December 2007). UN. 2013. UN-Business Partnerships: A Handbook, New York: United Nations Global Compact and Global Public Policy Institute. UN. 2015a. Towards global partnerships: a principle-based approach to enhanced cooperation between the United Nations and all relevant partners, GA Agenda Item 27, UN GA 70th session, UN Doc A/RES/70/224 (Distributed 23 February 2016, Adopted 22 December 2015). UNECE. 2008. "Guidebook on Promoting Good Governance in Public-Private Partnerships." New York and Geneva: United Nations Economic Commission for Europe. US Congress. 2012. H.Con.Res.127 Expressing the sense of Congress regarding actions to preserve and advance the multistakeholder governance model under which the Internet has thrived. 112th Congress. Washington: 30 May 2012. Utting, Peter. 2002. "Regulating Business via Multistakeholder Initiatives: A Preliminary Assessment." In Jenkins, Rhys, Peter Utting, and Renato Alva Pino (eds.), Voluntary Approaches to Corporate Responsibility: Readings and a Resource Guide.
Geneva: United Nations Non-Governmental Liaison Service (NGLS) and United Nations Research Institute for Social Development (UNRISD). WEF. 2010. "Everyone's Business: Strengthening International Cooperation in a More Interdependent World: Report of the Global Redesign Initiative." Geneva: World Economic Forum. WEF. n.d.b. Global Futures Council on the Future of International Governance, Public-Private Cooperation & Sustainable Development. Accessed 4 April 2018. WHO. 2016. Framework of engagement with non-State actors. 69th World Health Assembly, Agenda item 11.3, WHO Doc. WHA69.10 (Distributed 28 May 2016). Zoller, Julie, "Advancing the Multistakeholder Approach in the Multilateral Context", speech at the Marvin Center at George Washington University, 16 July 2015, Washington DC.
Wikipedia/Multistakeholder_governance_model
Network governance is "interfirm coordination that is characterized by organic or informal social system, in contrast to bureaucratic structures within firms and formal relationships between them. The concepts of privatization, public private partnership, and contracting are defined in this context." Network governance constitutes a "distinct form of coordinating economic activity" (Powell, 1990:301) which contrasts and competes with markets and hierarchies. == Definition == Network governance involves a select, persistent, and structured set of autonomous firms (as well as nonprofit agencies) engaged in creating products or services based on implicit and open-ended contracts that serve to adapt to environmental contingencies and to coordinate and safeguard exchanges. These contracts are socially—not legally—binding. As such, governance networks distinguish themselves from the hierarchical control of the state and the competitive regulation of the market in at least three ways: In terms of the relationship between the actors, governance networks can be described as a pluricentric system as opposed to a unicentric system. Governance networks involve a large number of interdependent actors who interact with each other in order to produce an outcome. In terms of decision-making, governance networks are based on negotiation rationality as opposed to the substantial rationality that governs state rule and the procedural rationality that governs market competition. Compliance is ensured through trust and political obligation which, over time, become sustained by self-constituted rules and norms. As a concept, network governance explains increased efficiency and reduced agency problems for organizations existing in highly turbulent environments.
On the one hand, efficiency is enhanced through distributed knowledge acquisition and decentralised problem-solving; on the other, effectiveness is improved through the emergence of collective solutions to global problems in different self-regulated sectors of activity. Due to the rapid pace of modern society and competitive pressures from globalization, transnational network governance has gained prominence. Network governance first depends on comprehension of the short- and long-term global business risks. It is based on the definition of key IT objectives and their influence on the network. It includes the negotiation of satisfaction criteria for the business lines and integrates processes for the measurement and improvement of global efficiency and end-user satisfaction. Beyond that, it allows the constitution and piloting of internal teams and external partners, as well as the setting up of a control system that validates the performance of the whole. Finally, it ensures permanent communication at all the various management levels. In the public sector, network governance is not universally accepted as a positive development by all public administration scholars. Some doubt its ability to adequately perform as a democratic governance structure, while others view it as a phenomenon that promotes efficient and effective delivery of public goods and services. Examining managed networks in health care, Ferlie and colleagues suggest that networks may be the 'least bad' form of governance for addressing wicked problems, such as providing health care for the increasing number of older people. == Types == Provan and Kenis categorize network governance forms along two different dimensions: Network governance may or may not be brokered. They call a network whose organizations interact with every other organization to govern the network in a decentralized way "shared governance".
At the other extreme, a network may be highly brokered via centralized network brokers, with only a few, limited direct organization-to-organization interactions. Networks may also be participant-governed or externally governed. === Participant-governed networks === In participant governance, a network is governed by its members themselves. Provan and Kenis call such networks, which involve most or all network members interacting on a relatively equal basis in the process of governance, "shared participant governance". === Lead organization-governed networks === More centralized networks may be governed by and through a lead organization that is a network member. == Historical and modern examples of network governance == From the 10th to 13th centuries, merchants in Cairo began forming a network of merchants that reported to each other the intentions of, and information on, agents working for them, and collectively inflicted sanctions on agents that performed poorly. This led to a hub of trading in Cairo and Aden, which made information on market conditions and on the reputation of various agents easier to access for the good of the whole. By the 12th century, Venice provided its merchants with an improved flow of information regarding the market conditions they faced, as well as information on the practices of individual agents. This recording of information helped merchants make more informed business decisions. The formation of the English and Dutch East India Companies created a cooperation between merchants and companies to better regulate, and inform others of, the reputations of trading actors in London, Amsterdam, and ports in East Africa and Arabia. This was a collective movement by governments and companies to raise capital for both the country and businesses. These examples show how network governance worked in eras of increased trade and cooperation between merchants and nations from the 10th century to the 17th century.
Ron Harris, in his article "Reputation at the Birth of Corporate Governance", writes: "The questions of who had a good reputation and who had a bad one, whom one could trust and entrust money to, were unaltered, but the relationships to which they applied changed, as did the institutions that provided answers to these questions." Amber Alert – In 1996 the Amber Alert system was established in the United States after nine-year-old Amber Hagerman was kidnapped and murdered in Arlington, Texas. Media networks, in collaboration with law enforcement, joined a grassroots movement to spread the cause in establishing a network to aid in broadcasting alerts in an effort to prevent future crimes. This movement has grown to include all fifty states, and spread alerts across state lines. The Amber Alert system has since been widely accepted as the first-response program for missing persons nationwide. Homeland Security Fusion Centers – After the September 11th attacks, the United States endeavoured to improve the coordination between national and local organizations concerned with security. The Department of Homeland Security and a Director of National Intelligence were implemented at the federal level in response to this problem. Soon after, states began creating their own networks to share information pertinent to homeland security. As a result, fusion centers have popped up in almost every state, as well as many regions. These fusion centers provide a hub for law enforcement agencies to collaborate on national security measures in an effort to promote transparency across agencies, whether it be at the state, local or federal level. == Importance of governmental relations == Relationships among governing positions and governing institutions are absolutely vital for the success of the internal workings of the governments that aid the public. 
While federal, state, and local governments differ in their policies, they all work in concert so that the foundations of government function efficiently. "Checks and balances" is a phrase often invoked when referring to intergovernmental relations. All participating parties of the government must adhere to specific guidelines in order to cultivate a fair and even playing field that is both beneficial and just to the population it affects. A primary principle in governmental relationships is the balance of power between the parties. The federal government has a large amount of control in terms of national security, national finances, and foreign affairs. However, in order to balance that control, state-level governments have a significant voice in intrastate politics. Specific examples of state-level policies include topics such as state highways, borderlines, and state parks. This allows states to retain flexibility while adhering to national policy. Creating relationships among different levels of government and government agencies is, however, a complicated and often grueling process, and many agencies make deals or compromises in order to further benefit both institutions. For example, a state may fund a county in order to improve the county roads because they could be a direct reflection of the state. Agencies and state-level, local-level, and federal-level governments must work together in order to prosper and create policies or laws that are beneficial to both the agencies and the public. == Role in environmental governance == In the wake of apparent failures to govern complex environmental problems by the central state, "new" modes of governance have been proposed in recent years. Network governance is the mode most commonly associated with the concept of governance, in which autonomous stakeholders work together to achieve common goals.
The emergence of network governance can be characterised by an attempt to take into account the increasing importance of non-governmental organizations (NGOs), the private sector, scientific networks, and international institutions in the performance of various functions of governance. Embedding interventions to make society better and to transform conflicts within "relational webs" can ensure better coordination with existing initiatives and institutions and greater local acceptance and buy-in, which makes the intervention more sustainable. Prominent examples of such networks that have been instrumental in forming successful working arrangements are the World Commission on Dams, the Global Environment Facility and the flexible mechanisms of the Kyoto Protocol. Another ongoing effort is the United Nations Global Compact, which combines multiple stakeholders in a trilateral construction including representatives from governments, the private sector and the NGO community. One main reason for the proliferation of network approaches in environmental governance is their potential to integrate and make available different sources of knowledge and competences and to encourage individual and collective learning. Currently, environmental governance faces various challenges that are characterised by the complexities and uncertainties inherent in environmental and sustainability problems. Network governance can provide a means to address these governance problems by institutionalising learning on facts and deliberation on value judgements. For example, in the realm of global chemical safety, transnational networks have formed around initiatives by international organisations and successfully developed rules for addressing global chemical issues, many of which have been implemented in national legislation.
Most notably, these transnational networks made it possible to avoid the institutional apathy that is typically found in political settings with many actors of conflicting interests, especially on a global level. Through the integration of actors from different sectors, governance networks are able to provide an innovative environment of learning, paving the way for adaptive and effective governance. One particular form of network important to governance problems is the epistemic community, in which actors share the same basic causal beliefs and normative values. Although participation in these epistemic communities requires an interest in the problem at stake, the actors involved do not necessarily share the same interest. In general, the interests are interdependent but can also be different or sometimes contesting, stressing the need for consensus building and the development of cognitive commodities. The main argument in the literature for the advantage of network governance over traditional command-and-control regulation or, alternatively, recourse to market regulation, is its capacity to deal with situations of intrinsic uncertainty and decision-making under bounded rationality. This is typically the case in the field of global environmental governance, where one has to deal with complex and interrelated problems. In these situations, network institutions can create a synergy between different competences and sources of knowledge, allowing actors to deal with complex and interlinked problems. == Enhancement of corporate social responsibility == As increasing amounts of scientific data validate concerns about the deterioration of our environment, the role of non-governmental organizations (NGOs) in network governance is being utilized in ever-increasing ways to halt or at least slow this deterioration. One of the ways they are accomplishing this is by directing their activities to focus on improving corporate social responsibility (CSR).
As a concept, CSR has existed since the first business was formed in civilization. The Genevan philosopher Rousseau described it as the "social contract" between business and society. As theories about CSR have evolved in keeping with their times, today it is increasingly associated with sustainable practices and development, meaning that businesses have a "moral responsibility" to conduct their operations in an ecologically sustainable manner. It is no longer acceptable for corporations just to grow "the bottom line" and increase profits for their shareholders. Businesses remain free to pursue profits but are increasingly obligated to minimize their negative impact on the environment. Network governance, in the form of NGOs, is effectively bringing to light "bad practices" by corporations, as well as highlighting those actively working to reduce their carbon footprints. Private governance networks such as CSRHUB and the Carbon Disclosure Project (CDP) are entities that hold corporations accountable for their level of corporate social responsibility. Founded to accelerate solutions to climate change and water management, the CDP discloses information and data on water management, greenhouse gas emissions, and climate change strategies for over 3,000 companies worldwide. It is the only global climate change reporting system and encourages corporations to engage in "best practices" regarding environmental impact by making their formerly private or unknown environmental impact information available to anyone, including the general public. This information can be used (by a variety of entities) to make consumer purchase and investment decisions, formulate governmental as well as corporate policy, educate people, develop less harmful business methods for corporations, and formulate action plans by environmental advocacy groups, to name a few.
Lord Adair Turner, Chairman of the UK Financial Services Authority, explains how network governance enhances CSR: "The first step towards managing carbon emissions is to measure them because in business what gets measured gets managed. The Carbon Disclosure Project has played a crucial role in encouraging companies to take the first steps in that measurement and management path". Leading European business schools joined with more than sixty multinationals to launch the Academy of Business in Society, the mission of which is to push CSR to the forefront of business practice. Their main activities in pursuing this goal are: 1) developing 'best-in-class' training practices and learning resources for businesses and corporate academies, 2) including the changing role of business in society in business education and 3) creating a global research bank on the role of business in society and delivering interdisciplinary research on CSR. This is an example of network governance using education to improve corporate social responsibility. The use of network organization in today's society is a valid means of moving forward in preserving the environment. == See also == Cyber manufacturing Multi-level governance Netocracy Network society Network economy Policy network analysis Social peer-to-peer processes Sharing economy == References == Are You a Theory X or a Theory Y Leader? (1999, 20 July). Retrieved 9 March 2016, from http://www.inplantgraphics.com/article/are-you-theory-x-theory-y-leader/ Grossman, S. A., & Holzer, M. (n.d.). Partnership governance in public management: A public solutions handbook. Gordon, C. E. (n.d.). Behavioural approaches to corporate governance. Bakvis, H., & Jarvis, M. D. (2012). From new public management to new political governance: Essays in honour of Peter C. Aucoin. Montreal: Published for the School of Public Administration at the University of Victoria by McGill-Queen's University Press. What is Network Governance? (n.d.).
Retrieved 9 March 2016, from http://environmentalpolicy.ucdavis.edu/node/378 Van Alstyne, Marshall (June 1997). "The state of network organization: a survey in three frameworks". Journal of Organizational Computing and Electronic Commerce. 7 (2–3): 83–151. CiteSeerX 10.1.1.67.6033. doi:10.1080/10919392.1997.9681069.Full text. IFCS Intergovernmental Forum on Chemical Safety. World Health Organization 2011.
Wikipedia/Network_governance
The Financial Crimes Enforcement Network (FinCEN) is a bureau within the United States Department of the Treasury that collects and analyzes information about financial transactions to combat domestic and international money laundering, terrorist financing, and other financial crimes. == Mission == FinCEN's stated mission is to "safeguard the financial system from illicit activity, counter money laundering and the financing of terrorism, and promote national security through strategic use of financial authorities and the collection, analysis, and dissemination of financial intelligence." FinCEN serves as the U.S. Financial Intelligence Unit (FIU) and is one of 147 FIUs making up the Egmont Group of Financial Intelligence Units. FinCEN's self-described motto is "follow the money." It is a network bringing people and information together by coordinating information sharing with law enforcement agencies, regulators and other partners in the financial industry. == History == FinCEN was established by Treasury Order 105-08 on April 25, 1990. In May 1994, its mission expanded to include regulatory responsibilities. In October 1994, Treasury's Office of Financial Enforcement merged with FinCEN. On September 26, 2002, after passage of Title III of the PATRIOT Act, Treasury Order 180-01 designated FinCEN as an official bureau within the Department of the Treasury. Since 1995, FinCEN has employed the FinCEN Artificial Intelligence System (FAIS). In September 2012, FinCEN's information technology system, the FinCEN Portal and Query System, was migrated along with 11 years of data into FinCEN Query, a search engine similar to Google. It is a "one stop shop" accessible via the FinCEN Portal, allowing broad searches across more fields than before and returning more results. Since September 2012, FinCEN has generated four new reports: the Suspicious Activity Report (SAR), the Currency Transaction Report (CTR), the Designation of Exempt Person (DOEP), and the Registered Money Service Business (RMSB).
== Organization == As of November 2013, FinCEN employed approximately 340 people, mostly intelligence professionals with expertise in the financial industry, illicit finance, financial intelligence, the AML/CFT (anti-money laundering / combating the financing of terrorism) regulatory regime, computer technology, and enforcement. The majority of the staff are permanent FinCEN personnel, with about 20 long-term detailees assigned from 13 different regulatory and law enforcement agencies. FinCEN shares information with dozens of intelligence agencies, including the Bureau of Alcohol, Tobacco, and Firearms; the Drug Enforcement Administration; the Federal Bureau of Investigation; the U.S. Secret Service; the Internal Revenue Service; the Customs Service; and the U.S. Postal Inspection Service. === FinCEN directors === Brian M. Bruh (1990–1993) Stanley E. Morris (1994–1998) James F. Sloan (April 1999 – October 2003) William J. Fox (December 2003 – February 2006) Robert W. Werner (March 2006 – December 2006) James H. Freis, Jr. (March 2007 – August 2012) Jennifer Shasky Calvery (September 2012 – May 2016) Jamal El-Hindi (Acting, June 2016 – November 2017) Kenneth Blanco (November 2017 – April 2021) Michael Mosier (Acting, April 2021 – August 2021) Himamauli Das (Acting, August 2021 – September 2023) Andrea Gacki (July 2023 – present) == 314 program == The 2001 USA PATRIOT Act required the Secretary of the Treasury to create a secure network for the transmission of information to enforce the relevant regulations. FinCEN's regulations under Section 314(a) enable federal law enforcement agencies, through FinCEN, to reach out to more than 45,000 points of contact at more than 27,000 financial institutions to locate accounts and transactions of persons that may be involved in terrorist financing and/or money laundering. A web interface allows the person(s) designated in §314(a)(3)(A) to register and transmit information to FinCEN.
The partnership between the financial community and law enforcement allows disparate bits of information to be identified, centralized, and rapidly evaluated. == Hawala == In 2003, FinCEN disseminated information on "informal value transfer systems" (IVTS), including hawala, a network of people receiving money for the purpose of making the funds payable to a third party in another geographic location, generally taking place outside of the conventional banking system through non-bank financial institutions or other business entities whose primary business activity may not be the transmission of money. On September 1, 2010, FinCEN issued guidance on IVTS referencing United States v. Banki and hawala. == Office of Special Investigations == The Enforcement Division is structured into three offices: Compliance and Enforcement, Special Measures, and Special Investigations. The Office of Special Investigations is responsible for investigating unauthorized BSA disclosures and providing criminal investigatory expertise and support to the rest of the division. The Office is staffed by FinCEN's special agents. == Virtual currencies == In July 2011, FinCEN added "other value that substitutes for currency" to its definition of money services businesses in preparation for adapting the respective rule to virtual currencies. On March 18, 2013, FinCEN issued guidance regarding virtual currencies, according to which exchangers and administrators, but not users, of convertible virtual currency are considered money transmitters and must comply with rules to prevent money laundering/terrorist financing ("AML/CFT") and other forms of financial crime through record-keeping, reporting and registering with FinCEN. Jennifer Shasky Calvery, director of FinCEN, said, "Virtual currencies are subject to the same rules as other currencies. … Basic money services business rules apply here."
At a November 2013 Senate hearing, Calvery stated, "It is in the best interest of virtual currency providers to comply with these regulations for a number of reasons. First is the idea of corporate responsibility," contrasting this with Bitcoin's self-conception as a peer-to-peer system that bypasses corporate financial institutions. She stated that FinCEN collaborates with the Federal Financial Institutions Examination Council, with a congressionally chartered forum called the "Bank Secrecy Act (BSA) Advisory Group" and its BSA Working Group to review and discuss new regulations and guidance, with the FBI-led "Virtual Currency Emerging Threats Working Group" (VCET) formed in early 2012, the FDIC-led "Cyber Fraud Working Group", the Terrorist Financing & Financial Crimes-led "Treasury Cyber Working Group", and with a community of other financial intelligence units. According to the Department of Justice, VCET members represent the FBI, the Drug Enforcement Administration, multiple U.S. Attorney's Offices, and the Criminal Division's Asset Forfeiture and Money Laundering Section and Computer Crime and Intellectual Property Section. In 2021, amendments to the Bank Secrecy Act and the federal AML/CTF framework officially incorporated existing FinCEN guidelines on digital assets. The legislation was updated to encompass "value that substitutes for currency," reinforcing FinCEN's authority over digital assets. As a result, exchanges dealing in these assets were required to register with FinCEN and adhere to specific reporting and recordkeeping obligations for transactions involving certain types of digital assets. In 2021, FinCEN received 1,137,451 Suspicious Activity Reports (SARs) from both traditional financial institutions and cryptocurrency trading entities. Within this total, there were reports of 7,914 suspicious cyber events and 284,989 potential money laundering activities.
== Beneficial Ownership Information Reports == FinCEN is the regulatory agency tasked with overseeing the Beneficial Ownership Information Reporting (BOIR) system in the U.S. This responsibility was established under the Corporate Transparency Act (CTA), which mandates that certain business entities must disclose information about their beneficial owners to FinCEN. The CTA aims to enhance transparency and combat financial crimes by preventing the use of anonymous shell companies for illicit purposes. On December 3, 2024, the U.S. District Court for the Eastern District of Texas issued a preliminary injunction against nationwide implementation of the CTA, citing concerns about its constitutionality and impact on small businesses. Treasury filed a notice of appeal on December 5, 2024. FinCEN administers the BOIR system to collect and maintain accurate records of beneficial ownership information. This information includes details such as the names, addresses, dates of birth, and identification numbers of individuals who ultimately own or control companies. By centralizing this data, FinCEN supports law enforcement efforts to investigate and prosecute financial crimes, ensuring greater accountability and integrity within the corporate sector. == Controversies == In 2009, the GAO found "opportunities" to improve "interagency and state examination coordination", noting that the federal banking regulators had issued an interagency examination manual, that the SEC, the CFTC, and their respective self-regulatory organizations had developed Bank Secrecy Act (BSA) examination modules, and that FinCEN and the IRS, which examine nonbank financial institutions, had issued an examination manual for money services businesses. Therefore, multiple regulators examine compliance with the BSA across industries, and for some larger holding companies even within the same institution.
Regulators need to promote greater consistency, coordination and information-sharing, reduce unnecessary regulatory burden, and identify concerns across industries. FinCEN estimated that, after 2012, it would have data access agreements with 80 percent of the state agencies that conduct BSA examinations. Since FinCEN's inception in 1990, the Electronic Frontier Foundation in San Francisco has debated its benefits compared to its threat to privacy. FinCEN does not disclose how many Suspicious Activity Reports result in investigations, indictments or convictions, and no studies exist to tally how many reports are filed on innocent people. FinCEN and money laundering laws have been criticized for being expensive and relatively ineffective while violating Fourth Amendment rights, as an investigator may use FinCEN's database to investigate people instead of crimes. It has also been alleged that FinCEN's regulations against structuring are enforced unfairly and arbitrarily; for example, it was reported in 2012 that small businesses selling at farmers' markets have been targeted, while politically connected people like Eliot Spitzer were not prosecuted. Spitzer's reasons for structuring were described as "innocent". In February 2019, it was reported that Mary Daly, the oldest daughter of United States Attorney General William Barr, was to leave her position at the United States Deputy Attorney General's office for a FinCEN position. In September 2020, findings based on a set of 2,657 documents, including 2,121 suspicious activity reports (SARs), leaked from FinCEN were published as the FinCEN Files. The leaked documents showed that although both FinCEN and the banks that filed SARs knew about billions of dollars in dirty money being moved through the banks, both did very little to prevent the transactions.
In the first episode of the 2017 Netflix show Ozark, FinCEN is mentioned as one of the agencies (along with the DEA, ATF, and FBI) active in monitoring cartel activity in Chicago. == See also == Casino regulations under the Bank Secrecy Act Currency transaction report FINTRAC – Canada's equivalent to FinCEN Timeline of post-election transition following Russian interference in the 2016 United States elections Timeline of investigations into Trump and Russia (January–June 2017) Timeline of investigations into Trump and Russia (July–December 2018) Title 31 of the Code of Federal Regulations List of financial regulatory authorities by jurisdiction == References == == External links == Official website FinCEN in the Federal Register
Wikipedia/Financial_Crimes_Enforcement_Network
In 2020, Ofqual, the regulator of qualifications, exams and tests in England, produced a grades standardisation algorithm to combat grade inflation and moderate the teacher-predicted grades for A level and GCSE qualifications in that year, after examinations were cancelled as part of the response to the COVID-19 pandemic. == History == In late March 2020, Gavin Williamson, the secretary of state for education in Boris Johnson's Conservative government, instructed the head of Ofqual, Sally Collier, to "ensure, as far as is possible, that qualification standards are maintained and the distribution of grades follows a similar profile to that in previous years". On 31 March, he issued a direction under the Apprenticeships, Skills, Children and Learning Act 2009. Then, in August, 82% of A-level grades were computed using an algorithm devised by Ofqual. More than 4.6 million GCSEs in England – about 97% of the total – were assigned solely by the algorithm. Teacher rankings were taken into consideration, but not the teacher-assessed grades submitted by schools and colleges. On 25 August, Collier, who oversaw the development of the algorithm, resigned from the post of chief regulator of Ofqual following mounting pressure. === Vocational qualifications === The algorithm was not applied to vocational and technical qualifications (VTQs), such as BTECs, which are assessed on coursework or as short modules are completed, and in some cases adapted assessments were held. Nevertheless, because of the high level of grade inflation resulting from Ofqual's decision not to apply the algorithm to A levels and GCSEs, Pearson Edexcel, the BTEC examiner, decided to cancel the release of BTEC results on 19 August, the day before they were due to be released, to allow them to be re-moderated in line with Ofqual's grade inflation. == The algorithm == Ofqual's Direct Centre Performance model is based on the record of each centre (school or college) in the subject being assessed.
Details of the algorithm were not released until after the results of its first use in August 2020, and then only in part. Schools were asked not only to make a fair and objective judgement of the grade they believed each student would have achieved, but also to rank the students within each grade. This was because the statistical standardisation process required more granular information than the grade alone. Some examining boards issued guidance on the process of forming the judgement to be used within centres where several teachers taught a subject. This was to be submitted by 29 May 2020. For A-level students, their school had already included a predicted grade as part of the UCAS university application reference. This was submitted by 15 January (15 October 2019 for Oxbridge and medicine) and had been shared with the students. This UCAS predicted grade is not the same as the Ofqual predicted grade. The normal way to test a predictive algorithm is to run it against the previous year's data: this was not possible as the teacher rank order was not collected in previous years. Instead, tests used the rank order that had emerged from the 2019 final results. == Effects of the algorithm == The A-level grades were announced in England, Wales and Northern Ireland on 13 August 2020. Nearly 36% were lower than the teachers' assessments (the centre assessment grades, or CAGs) and 3% were down two grades. == Side-effects of the algorithm == Students at small schools or taking minority subjects, such as those offered at small private schools (which are also more likely to have fewer students even in popular subjects), could see their grades come out higher than their teacher predictions, especially when falling into the small class/minority interest bracket. Such students traditionally have a narrower range of marks, the weaker students having been invited to leave.
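The two ingredients described above, the centre's historical grade distribution and the teacher rank order, suggest a simple quota-filling scheme. The sketch below is a deliberately simplified illustration of that idea only, not Ofqual's actual Direct Centre Performance model (which was only partially disclosed and involved further adjustments, for example for prior attainment and small cohorts); all student names and the distribution used here are hypothetical.

```python
def standardise(rank_order, historical_distribution):
    """Assign grades so their shares match the centre's historical distribution.

    rank_order: list of students, best first (the teacher ranking).
    historical_distribution: dict mapping grade -> fraction of the cohort,
        listed best grade first, fractions summing to 1.
    """
    n = len(rank_order)
    results = {}
    start = 0
    cumulative = 0.0
    for grade, fraction in historical_distribution.items():
        cumulative += fraction
        end = round(cumulative * n)  # quota boundary for this grade
        for student in rank_order[start:end]:
            results[student] = grade
        start = end
    return results

# Hypothetical centre: 10 students; historical results 20% A, 50% B, 30% C.
ranks = [f"student{i}" for i in range(1, 11)]
hist = {"A": 0.2, "B": 0.5, "C": 0.3}
grades = standardise(ranks, hist)
print(grades["student1"])   # A
print(grades["student10"])  # C
```

Note that in such a scheme the individual teacher-assessed grade never enters: only the rank order and the centre's history matter, which is why cohorts unlike their predecessors were graded poorly.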
Students at large state schools, sixth-form colleges and FE colleges with open access policies, which have historically educated BAME students or vulnerable students, saw their results plummet in order to fit the historic distribution curve. Students found the system unfair, and pressure was applied on Williamson to explain the results and to reverse his decision to use the algorithm that he had commissioned and Ofqual had implemented. On 12 August Williamson announced a 'triple lock' that would let students appeal their result using an undefined valid mock result. But on 15 August, the advice was published with eight conditions, which differed from the minister's statement. Hours after the announcement, Ofqual suspended the system. On 17 August, Ofqual accepted that students should be awarded the CAG grade instead of the grade predicted by the algorithm. UCAS said on 19 August that 15,000 pupils had been rejected by their first-choice university on the algorithm-generated grades. After the Ofqual decision to use unmoderated teacher predictions, many affected students had the grades to meet their offer, and reapplied. 90% of them said they aimed to study at top-tier universities. The effect was that top-tier universities appeared to have a capacity problem. The Royal Statistical Society said they had offered to help with the construction of the algorithm, but withdrew that offer when they saw the nature of the non-disclosure agreement they would have been required to sign. Ofqual was not prepared to discuss the agreement and delayed replying by 55 days.
The objectives require that the grading system gives a reliable indication of the knowledge, skills and understanding of the student, and that it allows for reliable comparisons to be made with students taking exams graded by other boards and to be made with students who took comparable exams in previous years. The Labour Party suggested that the process was unlawful in that the students were given no appeal mechanism, stating: "There will be a mass of discriminatory impacts by operating the process on the basis of reflecting the previous years' results from their institutions", and "It is bound to disadvantage a whole range of groups with protected characteristics, in breach of a range of anti-discrimination legislation." == See also == 2020 United Kingdom school exam grading controversy == References == == External links == Requirements for the calculation of results in summer 2020 – Ofqual, 7 July 2020, updated 20 August Student guide to post-16 qualification results: summer 2020 – Ofqual, 27 July 2020, updated 19 August Taking exams during the coronavirus (COVID-19) outbreak – guidance from the Department for Education, published 20 March 2020, updated 27 August Higher Education Policy Institute algorithm discussion, May 2020 Education Committee Oral evidence: The Impact of Covid-19 on education and children’s services, HC 254 Wednesday 2 September 2020
Wikipedia/Ofqual_exam_results_algorithm
The Bertelsmann Transformation Index (BTI) is a measure of the development status and governance of political and economic transformation processes in developing and transition countries around the world. The BTI has been published biennially by the Bertelsmann Stiftung since 2005, most recently in 2022 on 137 countries. The index measures and compares the quality of government action in a ranking list based on self-recorded data and analyzes successes and setbacks on the path to constitutional democracy and a market economy accompanied by sociopolitical support. For this purpose, the "Status Index" is calculated on the general level of development with regard to democratic and market-economy characteristics and the "Management Index" on the political management of decision-makers. == Status and Management Index == The Bertelsmann Transformation Index publishes two rankings. The Status Index is composed of the study dimensions Political and Economic Transformation. Political transformation includes essential features of a democratic state order. This includes participation rights, the rule of law, the stability of democratic institutions and the political and social integration of institutions, but also statehood as a basic condition for the functioning of a democracy. Economic transformation takes into account not only the classic market-economy characteristics such as economic performance, market and competition regulation, currency and price stability and the protection of private property, but also social components such as the level of socio-economic development, the social order and ecological and educational sustainability. The Management Index assesses the extent to which political decision-makers can steer and promote the transformation process. It is composed of the criteria steering capability, resource efficiency, consensus building and international cooperation. 
In calculating the Management Index, the degree of difficulty is taken into account, such as structural obstacles, civil society traditions and the intensity of conflict. === Bertelsmann Transformation Indices by country === The following list shows the Bertelsmann Transformation Indices since 2016. == Method of calculation == The Status and Management Indices are composed of a modular system of investigation dimensions (2nd level), criteria (3rd level) and indicators (4th level). === Country selection === All developing and transition countries with more than one million inhabitants are examined. Developing and transition countries are those countries that are not considered to be democratically and market-economically consolidated. In the absence of a concretely applicable definition of the consolidation limit, OECD membership prior to 1989 is used as a consolidation criterion. In exceptional cases, countries with fewer than one million inhabitants (Bhutan, Djibouti and Montenegro) are also examined. From 2003 to 2022, the number of countries surveyed increased from 116 to 137. === Survey procedure === The reports and assessments of each study, involving some 250 country experts, are based on a multi-stage survey and review process. The aim of the procedure is to arrive at results that are as objective and comparable as possible. Two experts per country, usually one international and one local expert, prepare and review qualitative country analyses on the basis of 49 standardized questions and translate the answers independently of each other into quantitative assessments. On this basis, seven regional coordinators standardise the results intra- and interregionally. A scientific advisory board of transformation experts monitors and discusses the results and adopts the final values. The prototype of the BTI was published in 2003 and subsequently methodologically revised.
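The modular system described above, in which indicators (4th level) roll up into criteria (3rd level) and criteria into dimensions (2nd level), is in essence hierarchical averaging. The sketch below illustrates that idea only; the criterion and indicator names and all scores are hypothetical, and this is not the BTI's published aggregation procedure, which translates 49 standardized questions into expert scores on a 1 to 10 scale.

```python
def average(values):
    return sum(values) / len(values)

def aggregate(tree):
    """Recursively average a nested dict of scores (leaves are numbers)."""
    if isinstance(tree, dict):
        return average([aggregate(child) for child in tree.values()])
    return tree

# Hypothetical fragment of a "Political Transformation" dimension
# with two criteria, each built from two indicator scores (1-10).
political = {
    "Stateness": {"monopoly_on_force": 8, "basic_administration": 7},
    "Rule of law": {"separation_of_powers": 6, "civil_rights": 7},
}
print(aggregate(political))  # 7.0: criteria average to 7.5 and 6.5
```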
Since then, no fundamental methodological changes have taken place, so that comparable time series since 2006 can be formed. == Publications == The study results are published in the form of country and regional reports and a book series in English and partly in German. Initiated and financed by foreign think tanks, BTI study content has also been published in other languages: in Arabic by the Gulf Research Center in 2009, in Russian by the Moscow Center for Post-Industrial Studies in 2010 and in Spanish by the Argentinean Centro para la Apertura y el Desarrollo de América Latina in 2014. The BTI Atlas, a graphics application, offers individual visual access to the results and reports of all editions since 2006. == Use == The Bertelsmann Transformation Index is used both by governments around the world to assess partner countries and by international organizations to produce their own analyses. Transparency International's Corruption Perceptions Index and the Ibrahim Index of African Governance are based in part on BTI results. The sister project Sustainable Governance Indicators, which is methodologically modelled on the BTI, examines the reform capacity and sustainability of advanced democracies and market economies. The study covers all OECD and EU member states, including the OECD core countries not included in the BTI. == References == == External links == Website
Wikipedia/Bertelsmann_Transformation_Index
The two-dimensional critical Ising model is the critical limit of the Ising model in two dimensions. It is a two-dimensional conformal field theory whose symmetry algebra is the Virasoro algebra with the central charge $c=\tfrac{1}{2}$. Correlation functions of the spin and energy operators are described by the $(4,3)$ minimal model. While the minimal model has been exactly solved (see Ising critical exponents), the solution does not cover other observables such as connectivities of clusters.
== The minimal model ==
=== Space of states and conformal dimensions ===
The Kac table of the $(4,3)$ minimal model is:
$$\begin{array}{c|ccc} 2 & \frac{1}{2} & \frac{1}{16} & 0 \\ 1 & 0 & \frac{1}{16} & \frac{1}{2} \\ \hline & 1 & 2 & 3 \end{array}$$
This means that the space of states is generated by three primary states, which correspond to three primary fields or operators:
$$\begin{array}{cccc} \hline \text{Kac table indices} & \text{Dimension} & \text{Primary field} & \text{Name} \\ \hline (1,1)\text{ or }(3,2) & 0 & \mathbf{1} & \text{Identity} \\ (2,1)\text{ or }(2,2) & \frac{1}{16} & \sigma & \text{Spin} \\ (1,2)\text{ or }(3,1) & \frac{1}{2} & \epsilon & \text{Energy} \\ \hline \end{array}$$
The decomposition of the space of states into irreducible representations of the product of the left- and right-moving Virasoro algebras is
$$\mathcal{S} = \mathcal{R}_0\otimes\bar{\mathcal{R}}_0 \oplus \mathcal{R}_{\frac{1}{16}}\otimes\bar{\mathcal{R}}_{\frac{1}{16}} \oplus \mathcal{R}_{\frac{1}{2}}\otimes\bar{\mathcal{R}}_{\frac{1}{2}}$$
where $\mathcal{R}_\Delta$ is the irreducible
highest-weight representation of the Virasoro algebra with the conformal dimension $\Delta$. In particular, the Ising model is diagonal and unitary.
=== Characters and partition function ===
The characters of the three representations of the Virasoro algebra that appear in the space of states are
$$\begin{aligned} \chi_0(q) &= \frac{1}{\eta(q)}\sum_{k\in\mathbb{Z}}\left(q^{\frac{(24k+1)^2}{48}} - q^{\frac{(24k+7)^2}{48}}\right) = \frac{1}{2\sqrt{\eta(q)}}\left(\sqrt{\theta_3(0|q)}+\sqrt{\theta_4(0|q)}\right) \\ \chi_{\frac{1}{16}}(q) &= \frac{1}{\eta(q)}\sum_{k\in\mathbb{Z}}\left(q^{\frac{(24k+2)^2}{48}} - q^{\frac{(24k+10)^2}{48}}\right) = \frac{1}{2\sqrt{\eta(q)}}\left(\sqrt{\theta_3(0|q)}-\sqrt{\theta_4(0|q)}\right) \\ \chi_{\frac{1}{2}}(q) &= \frac{1}{\eta(q)}\sum_{k\in\mathbb{Z}}\left(q^{\frac{(24k+5)^2}{48}} - q^{\frac{(24k+11)^2}{48}}\right) = \frac{1}{\sqrt{2\eta(q)}}\sqrt{\theta_2(0|q)} \end{aligned}$$
where $\eta(q)$ is the Dedekind eta function, and $\theta_i(0|q)$ are theta functions of the nome $q=e^{2\pi i\tau}$, for example $\theta_3(0|q)=\sum_{n\in\mathbb{Z}} q^{\frac{n^2}{2}}$. The modular S-matrix, i.e.
the matrix $\mathcal{S}$ such that $\chi_i(-\tfrac{1}{\tau}) = \sum_j \mathcal{S}_{ij}\chi_j(\tau)$, is
$$\mathcal{S} = \frac{1}{2}\left(\begin{array}{ccc} 1 & 1 & \sqrt{2} \\ 1 & 1 & -\sqrt{2} \\ \sqrt{2} & -\sqrt{2} & 0 \end{array}\right)$$
where the fields are ordered as $\mathbf{1},\epsilon,\sigma$. The modular invariant partition function is
$$Z(q) = \left|\chi_0(q)\right|^2 + \left|\chi_{\frac{1}{16}}(q)\right|^2 + \left|\chi_{\frac{1}{2}}(q)\right|^2 = \frac{|\theta_2(0|q)| + |\theta_3(0|q)| + |\theta_4(0|q)|}{2|\eta(q)|}$$
=== Fusion rules and operator product expansions ===
The fusion rules of the model are
$$\begin{aligned} \mathbf{1}\times\mathbf{1} &= \mathbf{1} \\ \mathbf{1}\times\sigma &= \sigma \\ \mathbf{1}\times\epsilon &= \epsilon \\ \sigma\times\sigma &= \mathbf{1}+\epsilon \\ \sigma\times\epsilon &= \sigma \\ \epsilon\times\epsilon &= \mathbf{1} \end{aligned}$$
The fusion rules are invariant under the $\mathbb{Z}_2$ symmetry $\sigma\to-\sigma$.
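The fusion rules are consistent with the modular S-matrix through the Verlinde formula, $N_{ij}{}^k = \sum_m S_{im} S_{jm} S_{km} / S_{\mathbf{1}m}$, using that this S-matrix is real, symmetric and squares to the identity. A short numerical sketch of that check (field ordering and matrix entries as in the text):

```python
import numpy as np

# Fields ordered as 1, epsilon, sigma, matching the text.
s2 = np.sqrt(2)
S = 0.5 * np.array([[1.0, 1.0, s2],
                    [1.0, 1.0, -s2],
                    [s2, -s2, 0.0]])

# S is symmetric and is its own inverse.
assert np.allclose(S, S.T)
assert np.allclose(S @ S, np.eye(3))

def fusion(i, j, k):
    """Verlinde formula N_ij^k = sum_m S_im S_jm S_km / S_0m (index 0 = identity)."""
    return sum(S[i, m] * S[j, m] * S[k, m] / S[0, m] for m in range(3))

one, eps, sig = 0, 1, 2
# sigma x sigma = 1 + epsilon:
print(round(fusion(sig, sig, one)))  # 1
print(round(fusion(sig, sig, eps)))  # 1
print(round(fusion(sig, sig, sig)))  # 0
```

The same function reproduces the remaining rules, for example $\epsilon\times\epsilon=\mathbf{1}$ as `fusion(eps, eps, one) == 1` with the $\epsilon$ and $\sigma$ channels vanishing.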
The three-point structure constants are
$$C_{\mathbf{1}\mathbf{1}\mathbf{1}} = C_{\mathbf{1}\epsilon\epsilon} = C_{\mathbf{1}\sigma\sigma} = 1\ ,\quad C_{\sigma\sigma\epsilon} = \frac{1}{2}$$
Knowing the fusion rules and three-point structure constants, it is possible to write operator product expansions, for example
$$\begin{aligned} \sigma(z)\sigma(0) &= |z|^{2\Delta_{\mathbf{1}}-4\Delta_\sigma} C_{\mathbf{1}\sigma\sigma}\Big(\mathbf{1}(0)+O(z)\Big) + |z|^{2\Delta_\epsilon-4\Delta_\sigma} C_{\sigma\sigma\epsilon}\Big(\epsilon(0)+O(z)\Big) \\ &= |z|^{-\frac{1}{4}}\Big(\mathbf{1}(0)+O(z)\Big) + \frac{1}{2}|z|^{\frac{3}{4}}\Big(\epsilon(0)+O(z)\Big) \end{aligned}$$
where $\Delta_{\mathbf{1}}, \Delta_\sigma, \Delta_\epsilon$ are the conformal dimensions of the primary fields, and the omitted terms $O(z)$ are contributions of descendant fields.
=== Correlation functions on the sphere ===
Any one-, two- and three-point function of primary fields is determined by conformal symmetry up to a multiplicative constant. This constant is set to be one for one- and two-point functions by a choice of field normalizations. The only non-trivial dynamical quantities are the three-point structure constants, which were given above in the context of operator product expansions.
⟨ 1 ( z 1 ) ⟩ = 1 , ⟨ σ ( z 1 ) ⟩ = 0 , ⟨ ϵ ( z 1 ) ⟩ = 0 {\displaystyle \left\langle \mathbf {1} (z_{1})\right\rangle =1\ ,\ \left\langle \sigma (z_{1})\right\rangle =0\ ,\ \left\langle \epsilon (z_{1})\right\rangle =0} ⟨ 1 ( z 1 ) 1 ( z 2 ) ⟩ = 1 , ⟨ σ ( z 1 ) σ ( z 2 ) ⟩ = | z 12 | − 1 4 , ⟨ ϵ ( z 1 ) ϵ ( z 2 ) ⟩ = | z 12 | − 2 {\displaystyle \left\langle \mathbf {1} (z_{1})\mathbf {1} (z_{2})\right\rangle =1\ ,\ \left\langle \sigma (z_{1})\sigma (z_{2})\right\rangle =|z_{12}|^{-{\frac {1}{4}}}\ ,\ \left\langle \epsilon (z_{1})\epsilon (z_{2})\right\rangle =|z_{12}|^{-2}} with z i j = z i − z j {\displaystyle z_{ij}=z_{i}-z_{j}} . ⟨ 1 σ ⟩ = ⟨ 1 ϵ ⟩ = ⟨ σ ϵ ⟩ = 0 {\displaystyle \langle \mathbf {1} \sigma \rangle =\langle \mathbf {1} \epsilon \rangle =\langle \sigma \epsilon \rangle =0} ⟨ 1 ( z 1 ) 1 ( z 2 ) 1 ( z 3 ) ⟩ = 1 , ⟨ σ ( z 1 ) σ ( z 2 ) 1 ( z 3 ) ⟩ = | z 12 | − 1 4 , ⟨ ϵ ( z 1 ) ϵ ( z 2 ) 1 ( z 3 ) ⟩ = | z 12 | − 2 {\displaystyle \left\langle \mathbf {1} (z_{1})\mathbf {1} (z_{2})\mathbf {1} (z_{3})\right\rangle =1\ ,\ \left\langle \sigma (z_{1})\sigma (z_{2})\mathbf {1} (z_{3})\right\rangle =|z_{12}|^{-{\frac {1}{4}}}\ ,\ \left\langle \epsilon (z_{1})\epsilon (z_{2})\mathbf {1} (z_{3})\right\rangle =|z_{12}|^{-2}} ⟨ σ ( z 1 ) σ ( z 2 ) ϵ ( z 3 ) ⟩ = 1 2 | z 12 | 3 4 | z 13 | − 1 | z 23 | − 1 {\displaystyle \left\langle \sigma (z_{1})\sigma (z_{2})\epsilon (z_{3})\right\rangle ={\frac {1}{2}}|z_{12}|^{\frac {3}{4}}|z_{13}|^{-1}|z_{23}|^{-1}} ⟨ 1 1 σ ⟩ = ⟨ 1 1 ϵ ⟩ = ⟨ 1 σ ϵ ⟩ = ⟨ σ ϵ ϵ ⟩ = ⟨ σ σ σ ⟩ = ⟨ ϵ ϵ ϵ ⟩ = 0 {\displaystyle \langle \mathbf {1} \mathbf {1} \sigma \rangle =\langle \mathbf {1} \mathbf {1} \epsilon \rangle =\langle \mathbf {1} \sigma \epsilon \rangle =\langle \sigma \epsilon \epsilon \rangle =\langle \sigma \sigma \sigma \rangle =\langle \epsilon \epsilon \epsilon \rangle =0} The three non-trivial four-point functions are of the type ⟨ σ 4 ⟩ , ⟨ σ 2 ϵ 2 ⟩ , ⟨ ϵ 4 ⟩ {\displaystyle \langle \sigma ^{4}\rangle ,\langle \sigma ^{2}\epsilon 
^{2}\rangle ,\langle \epsilon ^{4}\rangle } . For a four-point function ⟨ ∏ i = 1 4 V i ( z i ) ⟩ {\displaystyle \left\langle \prod _{i=1}^{4}V_{i}(z_{i})\right\rangle } , let F j ( s ) {\displaystyle {\mathcal {F}}_{j}^{(s)}} and F j ( t ) {\displaystyle {\mathcal {F}}_{j}^{(t)}} be the s- and t-channel Virasoro conformal blocks, which respectively correspond to the contributions of V j ( z 2 ) {\displaystyle V_{j}(z_{2})} (and its descendants) in the operator product expansion V 1 ( z 1 ) V 2 ( z 2 ) {\displaystyle V_{1}(z_{1})V_{2}(z_{2})} , and of V j ( z 4 ) {\displaystyle V_{j}(z_{4})} (and its descendants) in the operator product expansion V 1 ( z 1 ) V 4 ( z 4 ) {\displaystyle V_{1}(z_{1})V_{4}(z_{4})} . Let x = z 12 z 34 z 13 z 24 {\displaystyle x={\frac {z_{12}z_{34}}{z_{13}z_{24}}}} be the cross-ratio. In the case of ⟨ ϵ 4 ⟩ {\displaystyle \langle \epsilon ^{4}\rangle } , fusion rules allow only one primary field in all channels, namely the identity field. ⟨ ϵ 4 ⟩ = | F 1 ( s ) | 2 = | F 1 ( t ) | 2 F 1 ( s ) = F 1 ( t ) = [ ∏ 1 ≤ i < j ≤ 4 z i j − 1 3 ] 1 − x + x 2 x 2 3 ( 1 − x ) 2 3 = ( z i ) = ( x , 0 , ∞ , 1 ) 1 x ( 1 − x ) − 1 {\displaystyle {\begin{aligned}&\langle \epsilon ^{4}\rangle =\left|{\mathcal {F}}_{\textbf {1}}^{(s)}\right|^{2}=\left|{\mathcal {F}}_{\textbf {1}}^{(t)}\right|^{2}\\&{\mathcal {F}}_{\textbf {1}}^{(s)}={\mathcal {F}}_{\textbf {1}}^{(t)}=\left[\prod _{1\leq i<j\leq 4}z_{ij}^{-{\frac {1}{3}}}\right]{\frac {1-x+x^{2}}{x^{\frac {2}{3}}(1-x)^{\frac {2}{3}}}}\ {\underset {(z_{i})=(x,0,\infty ,1)}{=}}\ {\frac {1}{x(1-x)}}-1\end{aligned}}} In the case of ⟨ σ 2 ϵ 2 ⟩ {\displaystyle \langle \sigma ^{2}\epsilon ^{2}\rangle } , fusion rules allow only the identity field in the s-channel, and the spin field in the t-channel. 
⟨ σ 2 ϵ 2 ⟩ = | F 1 ( s ) | 2 = C σ σ ϵ 2 | F σ ( t ) | 2 = 1 4 | F σ ( t ) | 2 F 1 ( s ) = 1 2 F σ ( t ) = [ z 12 1 4 z 34 − 5 8 ( z 13 z 24 z 14 z 23 ) − 3 16 ] 1 − x 2 x 3 8 ( 1 − x ) 5 16 = ( z i ) = ( x , 0 , ∞ , 1 ) 1 − x 2 x 1 8 ( 1 − x ) 1 2 {\displaystyle {\begin{aligned}&\langle \sigma ^{2}\epsilon ^{2}\rangle =\left|{\mathcal {F}}_{\textbf {1}}^{(s)}\right|^{2}=C_{\sigma \sigma \epsilon }^{2}\left|{\mathcal {F}}_{\sigma }^{(t)}\right|^{2}={\frac {1}{4}}\left|{\mathcal {F}}_{\sigma }^{(t)}\right|^{2}\\&{\mathcal {F}}_{\textbf {1}}^{(s)}={\frac {1}{2}}{\mathcal {F}}_{\sigma }^{(t)}=\left[z_{12}^{\frac {1}{4}}z_{34}^{-{\frac {5}{8}}}\left(z_{13}z_{24}z_{14}z_{23}\right)^{-{\frac {3}{16}}}\right]{\frac {1-{\frac {x}{2}}}{x^{\frac {3}{8}}(1-x)^{\frac {5}{16}}}}\ {\underset {(z_{i})=(x,0,\infty ,1)}{=}}\ {\frac {1-{\frac {x}{2}}}{x^{\frac {1}{8}}(1-x)^{\frac {1}{2}}}}\end{aligned}}} In the case of ⟨ σ 4 ⟩ {\displaystyle \langle \sigma ^{4}\rangle } , fusion rules allow two primary fields in all channels: the identity field and the energy field. In this case we write the conformal blocks in the case ( z 1 , z 2 , z 3 , z 4 ) = ( x , 0 , ∞ , 1 ) {\displaystyle (z_{1},z_{2},z_{3},z_{4})=(x,0,\infty ,1)} only: the general case is obtained by inserting the prefactor x 1 24 ( 1 − x ) 1 24 ∏ 1 ≤ i < j ≤ 4 z i j − 1 24 {\displaystyle x^{\frac {1}{24}}(1-x)^{\frac {1}{24}}\prod _{1\leq i<j\leq 4}z_{ij}^{-{\frac {1}{24}}}} , and identifying x {\displaystyle x} with the cross-ratio. 
⟨ σ 4 ⟩ = | F 1 ( s ) | 2 + 1 4 | F ϵ ( s ) | 2 = | F 1 ( t ) | 2 + 1 4 | F ϵ ( t ) | 2 = | 1 + x | + | 1 − x | 2 | x | 1 4 | 1 − x | 1 4 = x ∈ ( 0 , 1 ) 1 | x | 1 4 | 1 − x | 1 4 {\displaystyle {\begin{aligned}\langle \sigma ^{4}\rangle &=\left|{\mathcal {F}}_{\textbf {1}}^{(s)}\right|^{2}+{\frac {1}{4}}\left|{\mathcal {F}}_{\epsilon }^{(s)}\right|^{2}=\left|{\mathcal {F}}_{\textbf {1}}^{(t)}\right|^{2}+{\frac {1}{4}}\left|{\mathcal {F}}_{\epsilon }^{(t)}\right|^{2}\\&={\frac {|1+{\sqrt {x}}|+|1-{\sqrt {x}}|}{2|x|^{\frac {1}{4}}|1-x|^{\frac {1}{4}}}}\ {\underset {x\in (0,1)}{=}}\ {\frac {1}{|x|^{\frac {1}{4}}|1-x|^{\frac {1}{4}}}}\end{aligned}}} In the case of ⟨ σ 4 ⟩ {\displaystyle \langle \sigma ^{4}\rangle } , the conformal blocks are: F 1 ( s ) = 1 + 1 − x 2 x 1 8 ( 1 − x ) 1 8 , F ϵ ( s ) = 2 − 2 1 − x x 1 8 ( 1 − x ) 1 8 F 1 ( t ) = F 1 ( s ) 2 + F ϵ ( s ) 2 2 = 1 + x 2 x 1 8 ( 1 − x ) 1 8 , F ϵ ( t ) = 2 F 1 ( s ) − F ϵ ( s ) 2 = 2 − 2 x x 1 8 ( 1 − x ) 1 8 {\displaystyle {\begin{aligned}&{\mathcal {F}}_{\textbf {1}}^{(s)}={\frac {\sqrt {\frac {1+{\sqrt {1-x}}}{2}}}{x^{\frac {1}{8}}(1-x)^{\frac {1}{8}}}}\ ,\;\;{\mathcal {F}}_{\epsilon }^{(s)}={\frac {\sqrt {2-2{\sqrt {1-x}}}}{x^{\frac {1}{8}}(1-x)^{\frac {1}{8}}}}\\&{\mathcal {F}}_{\textbf {1}}^{(t)}={\frac {{\mathcal {F}}_{\textbf {1}}^{(s)}}{\sqrt {2}}}+{\frac {{\mathcal {F}}_{\epsilon }^{(s)}}{2{\sqrt {2}}}}={\frac {\sqrt {\frac {1+{\sqrt {x}}}{2}}}{x^{\frac {1}{8}}(1-x)^{\frac {1}{8}}}}\ ,\;\;{\mathcal {F}}_{\epsilon }^{(t)}={\sqrt {2}}{\mathcal {F}}_{\textbf {1}}^{(s)}-{\frac {{\mathcal {F}}_{\epsilon }^{(s)}}{\sqrt {2}}}={\frac {\sqrt {2-2{\sqrt {x}}}}{x^{\frac {1}{8}}(1-x)^{\frac {1}{8}}}}\end{aligned}}} From the representation of the model in terms of Dirac fermions, it is possible to compute correlation functions of any number of spin or energy operators: ⟨ ∏ i = 1 2 n ϵ ( z i ) ⟩ 2 = | det ( 1 z i j ) 1 ≤ i ≠ j ≤ 2 n | 2 {\displaystyle \left\langle \prod _{i=1}^{2n}\epsilon (z_{i})\right\rangle 
^{2}=\left|\det \left({\frac {1}{z_{ij}}}\right)_{1\leq i\neq j\leq 2n}\right|^{2}} ⟨ ∏ i = 1 2 n σ ( z i ) ⟩ 2 = 1 2 n ∑ ϵ i = ± 1 ∑ i = 1 2 n ϵ i = 0 ∏ 1 ≤ i < j ≤ 2 n | z i j | ϵ i ϵ j 2 {\displaystyle \left\langle \prod _{i=1}^{2n}\sigma (z_{i})\right\rangle ^{2}={\frac {1}{2^{n}}}\sum _{\begin{array}{c}\epsilon _{i}=\pm 1\\\sum _{i=1}^{2n}\epsilon _{i}=0\end{array}}\prod _{1\leq i<j\leq 2n}|z_{ij}|^{\frac {\epsilon _{i}\epsilon _{j}}{2}}} These formulas have generalizations to correlation functions on the torus, which involve theta functions. == Other observables == === Disorder operator === The two-dimensional Ising model is mapped to itself by a high-low temperature duality. The image of the spin operator σ {\displaystyle \sigma } under this duality is a disorder operator μ {\displaystyle \mu } , which has the same left and right conformal dimensions ( Δ μ , Δ ¯ μ ) = ( Δ σ , Δ ¯ σ ) = ( 1 16 , 1 16 ) {\displaystyle (\Delta _{\mu },{\bar {\Delta }}_{\mu })=(\Delta _{\sigma },{\bar {\Delta }}_{\sigma })=({\tfrac {1}{16}},{\tfrac {1}{16}})} . 
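The closed forms given above for the four-point conformal blocks lend themselves to a quick floating-point consistency check. The sketch below, in plain Python, samples a few cross-ratios in (0, 1) and verifies that the s-channel blocks of ⟨σ⁴⟩ sum to the stated closed form, that the t-channel blocks are the stated linear combinations of the s-channel ones, and that the single block of ⟨ϵ⁴⟩ is crossing symmetric:

```python
import math

def blocks_s(x):
    """s-channel conformal blocks of <sigma^4> at (z_i) = (x, 0, oo, 1)."""
    pref = (x * (1 - x)) ** (1 / 8)
    f_id = math.sqrt((1 + math.sqrt(1 - x)) / 2) / pref
    f_eps = math.sqrt(2 - 2 * math.sqrt(1 - x)) / pref
    return f_id, f_eps

def eps4(x):
    """The single block of <eps^4> at (z_i) = (x, 0, oo, 1)."""
    return 1 / (x * (1 - x)) - 1

for x in (0.1, 0.37, 0.5, 0.81):
    f_id, f_eps = blocks_s(x)

    # Summing |blocks|^2 over the two channels reproduces the closed form
    # <sigma^4> = 1 / (x (1 - x))^{1/4} for x in (0, 1).
    assert math.isclose(f_id ** 2 + 0.25 * f_eps ** 2, (x * (1 - x)) ** -0.25)

    # The t-channel blocks, built as the stated linear combinations of the
    # s-channel blocks, agree with their own closed forms.
    pref = (x * (1 - x)) ** (1 / 8)
    t_id = f_id / math.sqrt(2) + f_eps / (2 * math.sqrt(2))
    t_eps = math.sqrt(2) * f_id - f_eps / math.sqrt(2)
    assert math.isclose(t_id, math.sqrt((1 + math.sqrt(x)) / 2) / pref)
    assert math.isclose(t_eps, math.sqrt(2 - 2 * math.sqrt(x)) / pref)

    # Crossing symmetry of <eps^4>: the block is invariant under x -> 1 - x.
    assert math.isclose(eps4(x), eps4(1 - x))
```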
Although the disorder operator does not belong to the minimal model, correlation functions involving the disorder operator can be computed exactly, for example ⟨ σ ( z 1 ) μ ( z 2 ) σ ( z 3 ) μ ( z 4 ) ⟩ 2 = 1 2 | z 13 z 24 | | z 12 z 34 z 23 z 14 | ( | x | + | 1 − x | − 1 ) {\displaystyle \left\langle \sigma (z_{1})\mu (z_{2})\sigma (z_{3})\mu (z_{4})\right\rangle ^{2}={\frac {1}{2}}{\sqrt {\frac {|z_{13}z_{24}|}{|z_{12}z_{34}z_{23}z_{14}|}}}{\Big (}|x|+|1-x|-1{\Big )}} whereas ⟨ ∏ i = 1 4 μ ( z i ) ⟩ 2 = ⟨ ∏ i = 1 4 σ ( z i ) ⟩ 2 = 1 2 | z 13 z 24 | | z 12 z 34 z 23 z 14 | ( | x | + | 1 − x | + 1 ) {\displaystyle \left\langle \prod _{i=1}^{4}\mu (z_{i})\right\rangle ^{2}=\left\langle \prod _{i=1}^{4}\sigma (z_{i})\right\rangle ^{2}={\frac {1}{2}}{\sqrt {\frac {|z_{13}z_{24}|}{|z_{12}z_{34}z_{23}z_{14}|}}}{\Big (}|x|+|1-x|+1{\Big )}} === Connectivities of clusters === The Ising model has a description as a random cluster model due to Fortuin and Kasteleyn. In this description, the natural observables are connectivities of clusters, i.e. probabilities that a number of points belong to the same cluster. The Ising model can then be viewed as the case q = 2 {\displaystyle q=2} of the q {\displaystyle q} -state Potts model, whose parameter q {\displaystyle q} can vary continuously, and is related to the central charge of the Virasoro algebra. In the critical limit, connectivities of clusters have the same behaviour under conformal transformations as correlation functions of the spin operator. Nevertheless, connectivities do not coincide with spin correlation functions: for example, the three-point connectivity does not vanish, while ⟨ σ σ σ ⟩ = 0 {\displaystyle \langle \sigma \sigma \sigma \rangle =0} . There are four independent four-point connectivities, and their sum coincides with ⟨ σ σ σ σ ⟩ {\displaystyle \langle \sigma \sigma \sigma \sigma \rangle } . Other combinations of four-point connectivities are not known analytically. 
In particular they are not related to correlation functions of the minimal model, although they are related to the q → 2 {\displaystyle q\to 2} limit of spin correlators in the q {\displaystyle q} -state Potts model. == References ==
Wikipedia/Two-dimensional_critical_Ising_model
In conformal geometry, a conformal Killing vector field on a manifold of dimension n with (pseudo) Riemannian metric g {\displaystyle g} (also called a conformal Killing vector, CKV, or conformal collineation), is a vector field X {\displaystyle X} whose (locally defined) flow defines conformal transformations, that is, preserves g {\displaystyle g} up to scale and preserves the conformal structure. Several equivalent formulations, called the conformal Killing equation, exist in terms of the Lie derivative of the flow, e.g. L X g = λ g {\displaystyle {\mathcal {L}}_{X}g=\lambda g} for some function λ {\displaystyle \lambda } on the manifold. For n ≠ 2 {\displaystyle n\neq 2} there are a finite number of solutions, specifying the conformal symmetry of that space, but in two dimensions, there is an infinity of solutions. The name Killing refers to Wilhelm Killing, who first investigated Killing vector fields. == Densitized metric tensor and Conformal Killing vectors == A vector field X {\displaystyle X} is a Killing vector field if and only if its flow preserves the metric tensor g {\displaystyle g} (strictly speaking, for each compact subset of the manifold, the flow need only be defined for finite time). Formulated mathematically, X {\displaystyle X} is Killing if and only if it satisfies L X g = 0. {\displaystyle {\mathcal {L}}_{X}g=0.} where L X {\displaystyle {\mathcal {L}}_{X}} is the Lie derivative. More generally, define a w-Killing vector field X {\displaystyle X} as a vector field whose (local) flow preserves the densitized metric g μ g w {\displaystyle g\mu _{g}^{w}} , where μ g {\displaystyle \mu _{g}} is the volume density defined by g {\displaystyle g} (i.e. locally μ g = | det ( g ) | d x 1 ⋯ d x n {\displaystyle \mu _{g}={\sqrt {|\det(g)|}}\,dx^{1}\cdots dx^{n}} ) and w ∈ R {\displaystyle w\in \mathbf {R} } is its weight. Note that a Killing vector field preserves μ g {\displaystyle \mu _{g}} and so automatically also satisfies this more general equation.
Also note that w = − 2 / n {\displaystyle w=-2/n} is the unique weight that makes the combination g μ g w {\displaystyle g\mu _{g}^{w}} invariant under scaling of the metric. Therefore, in this case, the condition depends only on the conformal structure. Now X {\displaystyle X} is a w-Killing vector field if and only if L X ( g μ g w ) = ( L X g ) μ g w + w g μ g w − 1 L X μ g = 0. {\displaystyle {\mathcal {L}}_{X}\left(g\mu _{g}^{w}\right)=({\mathcal {L}}_{X}g)\mu _{g}^{w}+wg\mu _{g}^{w-1}{\mathcal {L}}_{X}\mu _{g}=0.} Since L X μ g = div ⁡ ( X ) μ g {\displaystyle {\mathcal {L}}_{X}\mu _{g}=\operatorname {div} (X)\mu _{g}} this is equivalent to L X g = − w div ⁡ ( X ) g . {\displaystyle {\mathcal {L}}_{X}g=-w\operatorname {div} (X)g.} Taking traces of both sides, we conclude 2 d i v ⁡ ( X ) = − w n div ⁡ ( X ) {\displaystyle 2\mathop {\mathrm {div} } (X)=-wn\operatorname {div} (X)} . Hence for w ≠ − 2 / n {\displaystyle w\neq -2/n} , necessarily div ⁡ ( X ) = 0 {\displaystyle \operatorname {div} (X)=0} and a w-Killing vector field is just a normal Killing vector field whose flow preserves the metric. However, for w = − 2 / n {\displaystyle w=-2/n} , the flow of X {\displaystyle X} has to only preserve the conformal structure and is, by definition, a conformal Killing vector field. == Equivalent formulations == The following are equivalent X {\displaystyle X} is a conformal Killing vector field, The (locally defined) flow of X {\displaystyle X} preserves the conformal structure, L X ( g μ g − 2 / n ) = 0 , {\displaystyle {\mathcal {L}}_{X}(g\mu _{g}^{-2/n})=0,} L X g = 2 n div ⁡ ( X ) g , {\displaystyle {\mathcal {L}}_{X}g={\frac {2}{n}}\operatorname {div} (X)g,} L X g = λ g {\displaystyle {\mathcal {L}}_{X}g=\lambda g} for some function λ . {\displaystyle \lambda .} The discussion above proves the equivalence of all but the seemingly more general last form. 
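These equivalent formulations can be verified symbolically in a flat example, where the Lie derivative of the metric reduces to (L_X g)_{ab} = ∂_a X_b + ∂_b X_a in Cartesian coordinates. The sketch below uses sympy; the choice of dimension n = 3 and of the parameter vector b is arbitrary, and the quadratic field is a special conformal generator of the kind discussed in the flat-space section:

```python
import sympy as sp

# Flat R^3 with the Euclidean metric, so indices are raised and lowered trivially.
n = 3
x = sp.symbols('x0 x1 x2')
b = (1, 0, 0)  # arbitrary parameter vector for the quadratic solution

def lie_metric(X):
    """(L_X g)_{ab} = d_a X_b + d_b X_a in flat Cartesian coordinates."""
    return sp.Matrix(n, n, lambda a, c: sp.diff(X[c], x[a]) + sp.diff(X[a], x[c]))

def div(X):
    return sum(sp.diff(X[a], x[a]) for a in range(n))

# A rotation generator is a genuine Killing field: L_X g = 0 (lambda = 0).
X_rot = (-x[1], x[0], 0)
assert lie_metric(X_rot) == sp.zeros(n, n)

# A quadratic conformal Killing field, X^a = 2 (b.x) x^a - (x.x) b^a,
# satisfies L_X g = (2/n) div(X) g with a nonzero function lambda.
bx = sum(b[a] * x[a] for a in range(n))
xx = sum(x[a] ** 2 for a in range(n))
X_sct = tuple(2 * bx * x[a] - xx * b[a] for a in range(n))

lam = sp.Rational(2, n) * div(X_sct)
assert sp.simplify(lie_metric(X_sct) - lam * sp.eye(n)) == sp.zeros(n, n)
```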
However, the last two forms are also equivalent: taking traces shows that necessarily λ = ( 2 / n ) div ⁡ ( X ) {\displaystyle \lambda =(2/n)\operatorname {div} (X)} . The last form makes it clear that any Killing vector is also a conformal Killing vector, with λ ≅ 0. {\displaystyle \lambda \cong 0.} == The conformal Killing equation == Using that L X g = 2 ( ∇ X ♭ ) s y m m {\displaystyle {\mathcal {L}}_{X}g=2\left(\nabla X^{\flat }\right)^{\mathrm {symm} }} where ∇ {\displaystyle \nabla } is the Levi Civita derivative of g {\displaystyle g} (aka covariant derivative), and X ♭ = g ( X , ⋅ ) {\displaystyle X^{\flat }=g(X,\cdot )} is the dual 1 form of X {\displaystyle X} (aka associated covariant vector aka vector with lowered indices), and s y m m {\displaystyle {}^{\mathrm {symm} }} is projection on the symmetric part, one can write the conformal Killing equation in abstract index notation as ∇ a X b + ∇ b X a = 2 n g a b ∇ c X c . {\displaystyle \nabla _{a}X_{b}+\nabla _{b}X_{a}={\frac {2}{n}}g_{ab}\nabla _{c}X^{c}.} Another index notation to write the conformal Killing equations is X a ; b + X b ; a = 2 n g a b X c ; c . {\displaystyle X_{a;b}+X_{b;a}={\frac {2}{n}}g_{ab}X^{c}{}_{;c}.} == Examples == === Flat space === In n {\displaystyle n} -dimensional flat space, that is Euclidean space or pseudo-Euclidean space, there exist globally flat coordinates in which we have a constant metric g μ ν = η μ ν {\displaystyle g_{\mu \nu }=\eta _{\mu \nu }} where in space with signature ( p , q ) {\displaystyle (p,q)} , we have components ( η μ ν ) = diag ( + 1 , ⋯ , + 1 , − 1 , ⋯ , − 1 ) {\displaystyle (\eta _{\mu \nu })={\text{diag}}(+1,\cdots ,+1,-1,\cdots ,-1)} . In these coordinates, the connection components vanish, so the covariant derivative is the coordinate derivative. The conformal Killing equation in flat space is ∂ μ X ν + ∂ ν X μ = 2 n η μ ν ∂ ρ X ρ . 
{\displaystyle \partial _{\mu }X_{\nu }+\partial _{\nu }X_{\mu }={\frac {2}{n}}\eta _{\mu \nu }\partial _{\rho }X^{\rho }.} The solutions to the flat space conformal Killing equation include the solutions to the flat space Killing equation discussed in the article on Killing vector fields. These generate the Poincaré group of isometries of flat space. Considering the ansatz X μ = M μ ν x ν {\displaystyle X^{\mu }=M^{\mu \nu }x_{\nu }} , we remove the antisymmetric part of M μ ν {\displaystyle M^{\mu \nu }} as this corresponds to known solutions, and we are looking for new solutions. Then M μ ν {\displaystyle M^{\mu \nu }} is symmetric. It follows that this is a dilatation, with M ν μ = λ δ ν μ {\displaystyle M_{\nu }^{\mu }=\lambda \delta _{\nu }^{\mu }} for real λ {\displaystyle \lambda } , and corresponding conformal Killing vector X μ = λ x μ {\displaystyle X^{\mu }=\lambda x^{\mu }} . From the general solution there are n {\displaystyle n} more generators, known as special conformal transformations, given by X μ = c μ ν ρ x ν x ρ , {\displaystyle X_{\mu }=c_{\mu \nu \rho }x^{\nu }x^{\rho },} where the traceless part of c μ ν ρ {\displaystyle c_{\mu \nu \rho }} over μ , ν {\displaystyle \mu ,\nu } vanishes, hence can be parametrised by c μ μ ν = b ν {\displaystyle c^{\mu }{}_{\mu \nu }=b_{\nu }} . Together, the n {\displaystyle n} translations, n ( n − 1 ) / 2 {\displaystyle n(n-1)/2} Lorentz transformations, 1 {\displaystyle 1} dilatation and n {\displaystyle n} special conformal transformations comprise the conformal algebra, which generates the conformal group of pseudo-Euclidean space. == See also == Affine vector field Conformal Killing tensor Curvature collineation Einstein manifold Homothetic vector field Invariant differential operator Killing vector field Matter collineation Spacetime symmetries == References == === Further reading === Wald, R. M. (1984). General Relativity. The University of Chicago Press.
Wikipedia/Conformal_Killing_equation
In mathematics, the complex Witt algebra, named after Ernst Witt, is the Lie algebra of meromorphic vector fields defined on the Riemann sphere that are holomorphic except at two fixed points. It is also the complexification of the Lie algebra of polynomial vector fields on a circle, and the Lie algebra of derivations of the ring C[z,z^−1]. There are some related Lie algebras defined over finite fields that are also called Witt algebras. The complex Witt algebra was first defined by Élie Cartan (1909), and its analogues over finite fields were studied by Witt in the 1930s. == Basis == A basis for the Witt algebra is given by the vector fields L n = − z n + 1 ∂ ∂ z {\displaystyle L_{n}=-z^{n+1}{\frac {\partial }{\partial z}}} , for n in Z {\displaystyle \mathbb {Z} } . The Lie bracket of two basis vector fields is given by [ L m , L n ] = ( m − n ) L m + n . {\displaystyle [L_{m},L_{n}]=(m-n)L_{m+n}.} This algebra has a central extension called the Virasoro algebra that is important in two-dimensional conformal field theory and string theory. Note that by restricting n to −1, 0, 1, one gets a subalgebra. Taken over the field of complex numbers, this is just the Lie algebra s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} of the Lorentz group S O ( 3 , 1 ) {\displaystyle \mathrm {SO} (3,1)} . Over the reals, it is the algebra sl(2,R) = su(1,1). Conversely, su(1,1) suffices to reconstruct the original algebra in a presentation. == Over finite fields == Over a field k of characteristic p>0, the Witt algebra is defined to be the Lie algebra of derivations of the ring k[z]/z^p. The Witt algebra is spanned by L_m for −1 ≤ m ≤ p−2. == See also == Virasoro algebra Heisenberg algebra == References == Élie Cartan, Les groupes de transformations continus, infinis, simples. Ann. Sci. Ecole Norm. Sup. 26, 93-161 (1909). "Witt algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Witt_algebra
In statistical mechanics, the n-vector model or O(n) model is a simple system of interacting spins on a crystalline lattice. It was developed by H. Eugene Stanley as a generalization of the Ising model, XY model and Heisenberg model. In the n-vector model, n-component unit-length classical spins s i {\displaystyle \mathbf {s} _{i}} are placed on the vertices of a d-dimensional lattice. The Hamiltonian of the n-vector model is given by: H = K ∑ ⟨ i , j ⟩ s i ⋅ s j {\displaystyle H=K{\sum }_{\langle i,j\rangle }\mathbf {s} _{i}\cdot \mathbf {s} _{j}} where the sum runs over all pairs of neighboring spins ⟨ i , j ⟩ {\displaystyle \langle i,j\rangle } and ⋅ {\displaystyle \cdot } denotes the standard Euclidean inner product. Special cases of the n-vector model are: n = 0 {\displaystyle n=0} : The self-avoiding walk n = 1 {\displaystyle n=1} : The Ising model n = 2 {\displaystyle n=2} : The XY model n = 3 {\displaystyle n=3} : The Heisenberg model n = 4 {\displaystyle n=4} : Toy model for the Higgs sector of the Standard Model The general mathematical formalism used to describe and solve the n-vector model and certain generalizations are developed in the article on the Potts model. 
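The Hamiltonian can be evaluated directly on a finite lattice. The sketch below assumes periodic boundary conditions and an arbitrary lattice size, and counts each nearest-neighbour pair once; it also illustrates that n = 1 reduces to Ising spins:

```python
import numpy as np

def hamiltonian(spins, K=1.0):
    """H = K * sum over nearest-neighbour pairs of s_i . s_j on a periodic L x L lattice.

    `spins` has shape (L, L, n): one n-component unit vector per site.
    Each pair is counted once, via the right and down neighbours only.
    """
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return K * (np.sum(spins * right) + np.sum(spins * down))

rng = np.random.default_rng(0)

# n = 3 (Heisenberg model): random unit spins on a 4 x 4 lattice.
s = rng.normal(size=(4, 4, 3))
s /= np.linalg.norm(s, axis=-1, keepdims=True)
assert np.allclose(np.linalg.norm(s, axis=-1), 1.0)

# n = 1 (Ising model): the unit "vectors" are just signs +/-1.
ising = rng.choice([-1.0, 1.0], size=(3, 3, 1))
print(hamiltonian(s), hamiltonian(ising))
```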
== Reformulation as a loop model == In a small coupling expansion, the weight of a configuration may be rewritten as e H ∼ K → 0 ∏ ⟨ i , j ⟩ ( 1 + K s i ⋅ s j ) {\displaystyle e^{H}{\underset {K\to 0}{\sim }}\prod _{\langle i,j\rangle }\left(1+K\mathbf {s} _{i}\cdot \mathbf {s} _{j}\right)} Integrating over the vector s i {\displaystyle \mathbf {s} _{i}} gives rise to expressions such as ∫ d s i ∏ j = 1 4 ( s i ⋅ s j ) = ( s 1 ⋅ s 2 ) ( s 3 ⋅ s 4 ) + ( s 1 ⋅ s 4 ) ( s 2 ⋅ s 3 ) + ( s 1 ⋅ s 3 ) ( s 2 ⋅ s 4 ) {\displaystyle \int d\mathbf {s} _{i}\ \prod _{j=1}^{4}\left(\mathbf {s} _{i}\cdot \mathbf {s} _{j}\right)=\left(\mathbf {s} _{1}\cdot \mathbf {s} _{2}\right)\left(\mathbf {s} _{3}\cdot \mathbf {s} _{4}\right)+\left(\mathbf {s} _{1}\cdot \mathbf {s} _{4}\right)\left(\mathbf {s} _{2}\cdot \mathbf {s} _{3}\right)+\left(\mathbf {s} _{1}\cdot \mathbf {s} _{3}\right)\left(\mathbf {s} _{2}\cdot \mathbf {s} _{4}\right)} which is interpreted as a sum over the 3 possible ways of connecting the vertices 1 , 2 , 3 , 4 {\displaystyle 1,2,3,4} pairwise using 2 lines going through vertex i {\displaystyle i} . Integrating over all vectors, the corresponding lines combine into closed loops, and the partition function becomes a sum over loop configurations: Z = ∑ L ∈ L K E ( L ) n | L | {\displaystyle Z=\sum _{L\in {\mathcal {L}}}K^{E(L)}n^{|L|}} where L {\displaystyle {\mathcal {L}}} is the set of loop configurations, with | L | {\displaystyle |L|} the number of loops in the configuration L {\displaystyle L} , and E ( L ) {\displaystyle E(L)} the total number of lattice edges. In two dimensions, it is common to assume that loops do not cross: either by choosing the lattice to be trivalent, or by considering the model in a dilute phase where crossings are irrelevant, or by forbidding crossings by hand. The resulting model of non-intersecting loops can then be studied using powerful algebraic methods, and its spectrum is exactly known. 
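The loop expansion can be checked by brute force on a very small example. For n = 1 (Ising spins s_i = ±1, with the normalised counting measure as spin integral) on a 4-cycle, the only spin terms that survive the sum are edge subsets in which every vertex has even degree: the empty configuration and the single loop using all four edges, so the normalised partition function should equal 1 + K⁴:

```python
import itertools
import sympy as sp

K = sp.symbols('K')

# A 4-cycle: vertices 0..3, nearest-neighbour edges with periodic closure.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Sum over the 2^4 spin configurations with the normalised counting measure.
Z = 0
for s in itertools.product([1, -1], repeat=4):
    term = 1
    for i, j in edges:
        term *= 1 + K * s[i] * s[j]
    Z += term
Z = sp.expand(Z / 2 ** 4)

# Loop expansion: the empty configuration (K^0, no loop) and the full
# cycle (K^4, one loop), i.e. sum_L K^E(L) n^|L| = 1 + K^4 for n = 1.
assert Z == 1 + K ** 4
```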
Moreover, the model is closely related to the random cluster model, which can also be formulated in terms of non-crossing loops. Much less is known in models where loops are allowed to cross, and in higher than two dimensions. == Continuum limit == The continuum limit can be understood to be the sigma model. This can be easily obtained by writing the Hamiltonian in terms of the product − 1 2 ( s i − s j ) ⋅ ( s i − s j ) = s i ⋅ s j − 1 {\displaystyle -{\tfrac {1}{2}}(\mathbf {s} _{i}-\mathbf {s} _{j})\cdot (\mathbf {s} _{i}-\mathbf {s} _{j})=\mathbf {s} _{i}\cdot \mathbf {s} _{j}-1} where s i ⋅ s i = 1 {\displaystyle \mathbf {s} _{i}\cdot \mathbf {s} _{i}=1} is the "bulk magnetization" term. Dropping this term as an overall constant factor added to the energy, the limit is obtained by defining the Newton finite difference as δ h [ s ] ( i , j ) = s i − s j h {\displaystyle \delta _{h}[\mathbf {s} ](i,j)={\frac {\mathbf {s} _{i}-\mathbf {s} _{j}}{h}}} on neighboring lattice locations i , j . {\displaystyle i,j.} Then δ h [ s ] → ∇ μ s {\displaystyle \delta _{h}[\mathbf {s} ]\to \nabla _{\mu }\mathbf {s} } in the limit h → 0 {\displaystyle h\to 0} , where ∇ μ {\displaystyle \nabla _{\mu }} is the gradient in the ( i , j ) → μ {\displaystyle (i,j)\to \mu } direction. Thus, in the limit, − s i ⋅ s j → 1 2 ∇ μ s ⋅ ∇ μ s {\displaystyle -\mathbf {s} _{i}\cdot \mathbf {s} _{j}\to {\tfrac {1}{2}}\nabla _{\mu }\mathbf {s} \cdot \nabla _{\mu }\mathbf {s} } which can be recognized as the kinetic energy of the field s {\displaystyle \mathbf {s} } in the sigma model. One still has two possibilities for the spin s {\displaystyle \mathbf {s} } : it is either taken from a discrete set of spins (the Potts model) or it is taken as a point on the sphere S n − 1 {\displaystyle S^{n-1}} ; that is, s {\displaystyle \mathbf {s} } is a continuously-valued vector of unit length. 
In the latter case, this is referred to as the O ( n ) {\displaystyle O(n)} non-linear sigma model, as the rotation group O ( n ) {\displaystyle O(n)} is the group of isometries of S n − 1 {\displaystyle S^{n-1}} , and S n − 1 {\displaystyle S^{n-1}} is not "flat", i.e. not a linear field. == Conformal field theory == At the critical temperature and in the continuum limit, the model gives rise to a conformal field theory called the critical O(n) model. This CFT can be analyzed using expansions in the dimension d or in n, or using the conformal bootstrap approach. Its conformal data are functions of d and n, on which many results are known. == References ==
Wikipedia/N-vector_model
In theoretical physics, a minimal model or Virasoro minimal model is a two-dimensional conformal field theory whose spectrum is built from finitely many irreducible representations of the Virasoro algebra. Minimal models have been classified, giving rise to an ADE classification. Most minimal models have been solved, i.e. their 3-point structure constants have been computed analytically. The term minimal model can also refer to a rational CFT based on an algebra that is larger than the Virasoro algebra, such as a W-algebra. == Relevant representations of the Virasoro algebra == === Representations === In minimal models, the central charge of the Virasoro algebra takes values of the type c p , q = 1 − 6 ( p − q ) 2 p q . {\displaystyle c_{p,q}=1-6{(p-q)^{2} \over pq}\ .} where p , q {\displaystyle p,q} are coprime integers such that p , q ≥ 2 {\displaystyle p,q\geq 2} . Then the conformal dimensions of degenerate representations are h r , s = ( p r − q s ) 2 − ( p − q ) 2 4 p q , with r , s ∈ N ∗ , {\displaystyle h_{r,s}={\frac {(pr-qs)^{2}-(p-q)^{2}}{4pq}}\ ,\quad {\text{with}}\ r,s\in \mathbb {N} ^{*}\ ,} and they obey the identities h r , s = h q − r , p − s = h r + q , s + p . {\displaystyle h_{r,s}=h_{q-r,p-s}=h_{r+q,s+p}\ .} The spectrums of minimal models are made of irreducible, degenerate lowest-weight representations of the Virasoro algebra, whose conformal dimensions are of the type h r , s {\displaystyle h_{r,s}} with 1 ≤ r ≤ q − 1 , 1 ≤ s ≤ p − 1 . {\displaystyle 1\leq r\leq q-1\quad ,\quad 1\leq s\leq p-1\ .} Such a representation R r , s {\displaystyle {\mathcal {R}}_{r,s}} is a coset of a Verma module by its infinitely many nontrivial submodules. It is unitary if and only if | p − q | = 1 {\displaystyle |p-q|=1} . At a given central charge, there are 1 2 ( p − 1 ) ( q − 1 ) {\displaystyle {\frac {1}{2}}(p-1)(q-1)} distinct representations of this type. 
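The central charge, the degenerate dimensions h_{r,s}, and their identities are straightforward to tabulate in exact arithmetic. A sketch, checked on the case (p, q) = (4, 3), the critical Ising model:

```python
from fractions import Fraction as F

def central_charge(p, q):
    return 1 - F(6 * (p - q) ** 2, p * q)

def h(p, q, r, s):
    return F((p * r - q * s) ** 2 - (p - q) ** 2, 4 * p * q)

p, q = 4, 3  # the critical Ising model

# The identities h_{r,s} = h_{q-r,p-s} = h_{r+q,s+p} hold throughout.
for r in range(1, q):
    for s in range(1, p):
        assert h(p, q, r, s) == h(p, q, q - r, p - s) == h(p, q, r + q, s + p)

# (p-1)(q-1)/2 distinct dimensions; for Ising: 0, 1/16 and 1/2, with c = 1/2.
table = {h(p, q, r, s) for r in range(1, q) for s in range(1, p)}
assert len(table) == (p - 1) * (q - 1) // 2
assert central_charge(p, q) == F(1, 2)
assert table == {F(0), F(1, 16), F(1, 2)}
```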
The set of these representations, or of their conformal dimensions, is called the Kac table with parameters ( p , q ) {\displaystyle (p,q)} . The Kac table is usually drawn as a rectangle of size ( q − 1 ) × ( p − 1 ) {\displaystyle (q-1)\times (p-1)} , where each representation appears twice due to the relation R r , s = R q − r , p − s . {\displaystyle {\mathcal {R}}_{r,s}={\mathcal {R}}_{q-r,p-s}\ .} === Fusion rules === The fusion rules of the multiply degenerate representations R r , s {\displaystyle {\mathcal {R}}_{r,s}} encode constraints from all their null vectors. They can therefore be deduced from the fusion rules of simply degenerate representations, which encode constraints from individual null vectors. Explicitly, the fusion rules are R r 1 , s 1 × R r 2 , s 2 = ∑ r 3 = 2 | r 1 − r 2 | + 1 min ( r 1 + r 2 , 2 q − r 1 − r 2 ) − 1 ∑ s 3 = 2 | s 1 − s 2 | + 1 min ( s 1 + s 2 , 2 p − s 1 − s 2 ) − 1 R r 3 , s 3 , {\displaystyle {\mathcal {R}}_{r_{1},s_{1}}\times {\mathcal {R}}_{r_{2},s_{2}}=\sum _{r_{3}{\overset {2}{=}}|r_{1}-r_{2}|+1}^{\min(r_{1}+r_{2},2q-r_{1}-r_{2})-1}\ \sum _{s_{3}{\overset {2}{=}}|s_{1}-s_{2}|+1}^{\min(s_{1}+s_{2},2p-s_{1}-s_{2})-1}{\mathcal {R}}_{r_{3},s_{3}}\ ,} where the sums run by increments of two. == Classification and spectrums == Minimal models are the only 2d CFTs that are consistent on any Riemann surface, and are built from finitely many representations of the Virasoro algebra. There are many more rational CFTs that are consistent on the sphere only: these CFTs are submodels of minimal models, built from subsets of the Kac table that are closed under fusion. Such submodels can also be classified. 
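The fusion-rule formula above translates directly into code. A sketch, checked against the Ising fusion rules with σ = R_{1,2} and ϵ = R_{2,1}:

```python
def fuse(p, q, rs1, rs2):
    """Fusion R_{r1,s1} x R_{r2,s2} in the (p, q) minimal model.

    Implements the double sum above: indices run in steps of two from
    |r1 - r2| + 1 (resp. |s1 - s2| + 1) up to min(r1 + r2, 2q - r1 - r2) - 1
    (resp. min(s1 + s2, 2p - s1 - s2) - 1), both bounds inclusive.
    """
    (r1, s1), (r2, s2) = rs1, rs2
    return sorted(
        (r3, s3)
        for r3 in range(abs(r1 - r2) + 1, min(r1 + r2, 2 * q - r1 - r2), 2)
        for s3 in range(abs(s1 - s2) + 1, min(s1 + s2, 2 * p - s1 - s2), 2)
    )

p, q = 4, 3  # Ising: sigma = R_{1,2} (h = 1/16), eps = R_{2,1} = R_{1,3} (h = 1/2)
sigma, eps, one = (1, 2), (2, 1), (1, 1)

assert fuse(p, q, sigma, sigma) == [one, (1, 3)]  # sigma x sigma = 1 + eps
assert fuse(p, q, sigma, eps) == [(2, 2)]         # sigma x eps = sigma (other copy)
assert fuse(p, q, eps, eps) == [one]              # eps x eps = 1
```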
=== A-series minimal models: the diagonal case === For any coprime integers p , q {\displaystyle p,q} such that p , q ≥ 2 {\displaystyle p,q\geq 2} , there exists a diagonal minimal model whose spectrum contains one copy of each distinct representation in the Kac table: S p , q A-series = 1 2 ⨁ r = 1 q − 1 ⨁ s = 1 p − 1 R r , s ⊗ R ¯ r , s . {\displaystyle {\mathcal {S}}_{p,q}^{\text{A-series}}={\frac {1}{2}}\bigoplus _{r=1}^{q-1}\bigoplus _{s=1}^{p-1}{\mathcal {R}}_{r,s}\otimes {\bar {\mathcal {R}}}_{r,s}\ .} The ( p , q ) {\displaystyle (p,q)} and ( q , p ) {\displaystyle (q,p)} models are the same. The OPE of two fields involves all the fields that are allowed by the fusion rules of the corresponding representations. === D-series minimal models === A D-series minimal model with the central charge c p , q {\displaystyle c_{p,q}} exists if p {\displaystyle p} or q {\displaystyle q} is even and at least 6 {\displaystyle 6} . Using the symmetry p ↔ q {\displaystyle p\leftrightarrow q} we assume that q {\displaystyle q} is even, then p {\displaystyle p} is odd. 
The spectrum is S p , q D-series = q ≡ 0 mod ⁡ 4 , q ≥ 8 1 2 ⨁ r = 2 1 q − 1 ⨁ s = 1 p − 1 R r , s ⊗ R ¯ r , s ⊕ 1 2 ⨁ r = 2 2 q − 2 ⨁ s = 1 p − 1 R r , s ⊗ R ¯ q − r , s , {\displaystyle {\mathcal {S}}_{p,q}^{\text{D-series}}\ \ {\underset {q\equiv 0\operatorname {mod} 4,\ q\geq 8}{=}}\ \ {\frac {1}{2}}\bigoplus _{r{\overset {2}{=}}1}^{q-1}\bigoplus _{s=1}^{p-1}{\mathcal {R}}_{r,s}\otimes {\bar {\mathcal {R}}}_{r,s}\oplus {\frac {1}{2}}\bigoplus _{r{\overset {2}{=}}2}^{q-2}\bigoplus _{s=1}^{p-1}{\mathcal {R}}_{r,s}\otimes {\bar {\mathcal {R}}}_{q-r,s}\ ,} S p , q D-series = q ≡ 2 mod ⁡ 4 , q ≥ 6 1 2 ⨁ r = 2 1 q − 1 ⨁ s = 1 p − 1 R r , s ⊗ R ¯ r , s ⊕ 1 2 ⨁ r = 2 1 q − 1 ⨁ s = 1 p − 1 R r , s ⊗ R ¯ q − r , s , {\displaystyle {\mathcal {S}}_{p,q}^{\text{D-series}}\ \ {\underset {q\equiv 2\operatorname {mod} 4,\ q\geq 6}{=}}\ \ {\frac {1}{2}}\bigoplus _{r{\overset {2}{=}}1}^{q-1}\bigoplus _{s=1}^{p-1}{\mathcal {R}}_{r,s}\otimes {\bar {\mathcal {R}}}_{r,s}\oplus {\frac {1}{2}}\bigoplus _{r{\overset {2}{=}}1}^{q-1}\bigoplus _{s=1}^{p-1}{\mathcal {R}}_{r,s}\otimes {\bar {\mathcal {R}}}_{q-r,s}\ ,} where the sums over r {\displaystyle r} run by increments of two. In any given spectrum, each representation has multiplicity one, except the representations of the type R q 2 , s ⊗ R ¯ q 2 , s {\displaystyle {\mathcal {R}}_{{\frac {q}{2}},s}\otimes {\bar {\mathcal {R}}}_{{\frac {q}{2}},s}} if q ≡ 2 m o d 4 {\displaystyle q\equiv 2\ \mathrm {mod} \ 4} , which have multiplicity two. These representations indeed appear in both terms in our formula for the spectrum. The OPE of two fields involves all the fields that are allowed by the fusion rules of the corresponding representations, and that respect the conservation of diagonality: the OPE of one diagonal and one non-diagonal field yields only non-diagonal fields, and the OPE of two fields of the same type yields only diagonal fields. 
For this rule, one copy of the representation R q 2 , s ⊗ R ¯ q 2 , s {\displaystyle {\mathcal {R}}_{{\frac {q}{2}},s}\otimes {\bar {\mathcal {R}}}_{{\frac {q}{2}},s}} counts as diagonal, and the other copy as non-diagonal. === E-series minimal models === There are three series of E-series minimal models. Each series exists for a given value of q ∈ { 12 , 18 , 30 } , {\displaystyle q\in \{12,18,30\},} for any p ≥ 2 {\displaystyle p\geq 2} that is coprime with q {\displaystyle q} . (This actually implies p ≥ 5 {\displaystyle p\geq 5} .) Using the notation | R | 2 = R ⊗ R ¯ {\displaystyle |{\mathcal {R}}|^{2}={\mathcal {R}}\otimes {\bar {\mathcal {R}}}} , the spectrums read: S p , 12 E-series = 1 2 ⨁ s = 1 p − 1 { | R 1 , s ⊕ R 7 , s | 2 ⊕ | R 4 , s ⊕ R 8 , s | 2 ⊕ | R 5 , s ⊕ R 11 , s | 2 } , {\displaystyle {\mathcal {S}}_{p,12}^{\text{E-series}}={\frac {1}{2}}\bigoplus _{s=1}^{p-1}\left\{\left|{\mathcal {R}}_{1,s}\oplus {\mathcal {R}}_{7,s}\right|^{2}\oplus \left|{\mathcal {R}}_{4,s}\oplus {\mathcal {R}}_{8,s}\right|^{2}\oplus \left|{\mathcal {R}}_{5,s}\oplus {\mathcal {R}}_{11,s}\right|^{2}\right\}\ ,} S p , 18 E-series = 1 2 ⨁ s = 1 p − 1 { | R 9 , s ⊕ 2 R 3 , s | 2 ⊖ 4 | R 3 , s | 2 ⊕ ⨁ r ∈ { 1 , 5 , 7 } | R r , s ⊕ R 18 − r , s | 2 } , {\displaystyle {\mathcal {S}}_{p,18}^{\text{E-series}}={\frac {1}{2}}\bigoplus _{s=1}^{p-1}\left\{\left|{\mathcal {R}}_{9,s}\oplus 2{\mathcal {R}}_{3,s}\right|^{2}\ominus 4\left|{\mathcal {R}}_{3,s}\right|^{2}\oplus \bigoplus _{r\in \{1,5,7\}}\left|{\mathcal {R}}_{r,s}\oplus {\mathcal {R}}_{18-r,s}\right|^{2}\right\}\ ,} S p , 30 E-series = 1 2 ⨁ s = 1 p − 1 { | ⨁ r ∈ { 1 , 11 , 19 , 29 } R r , s | 2 ⊕ | ⨁ r ∈ { 7 , 13 , 17 , 23 } R r , s | 2 } . 
{\displaystyle {\mathcal {S}}_{p,30}^{\text{E-series}}={\frac {1}{2}}\bigoplus _{s=1}^{p-1}\left\{\left|\bigoplus _{r\in \{1,11,19,29\}}{\mathcal {R}}_{r,s}\right|^{2}\oplus \left|\bigoplus _{r\in \{7,13,17,23\}}{\mathcal {R}}_{r,s}\right|^{2}\right\}\ .} == Examples == The following A-series minimal models are related to well-known physical systems: ( p , q ) = ( 3 , 2 ) {\displaystyle (p,q)=(3,2)} : trivial CFT, ( p , q ) = ( 5 , 2 ) {\displaystyle (p,q)=(5,2)} : Yang-Lee edge singularity, ( p , q ) = ( 4 , 3 ) {\displaystyle (p,q)=(4,3)} : critical Ising model, ( p , q ) = ( 5 , 4 ) {\displaystyle (p,q)=(5,4)} : tricritical Ising model, ( p , q ) = ( 6 , 5 ) {\displaystyle (p,q)=(6,5)} : tetracritical Ising model. The following D-series minimal models are related to well-known physical systems: ( p , q ) = ( 6 , 5 ) {\displaystyle (p,q)=(6,5)} : 3-state Potts model at criticality, ( p , q ) = ( 7 , 6 ) {\displaystyle (p,q)=(7,6)} : tricritical 3-state Potts model. The Kac tables of these models, together with a few other Kac tables with 2 ≤ q ≤ 6 {\displaystyle 2\leq q\leq 6} , are: 1 0 0 1 2 c 3 , 2 = 0 1 0 − 1 5 − 1 5 0 1 2 3 4 c 5 , 2 = − 22 5 {\displaystyle {\begin{array}{c}{\begin{array}{c|cc}1&0&0\\\hline &1&2\end{array}}\\c_{3,2}=0\end{array}}\qquad {\begin{array}{c}{\begin{array}{c|cccc}1&0&-{\frac {1}{5}}&-{\frac {1}{5}}&0\\\hline &1&2&3&4\end{array}}\\c_{5,2}=-{\frac {22}{5}}\end{array}}} 2 1 2 1 16 0 1 0 1 16 1 2 1 2 3 c 4 , 3 = 1 2 2 3 4 1 5 − 1 20 0 1 0 − 1 20 1 5 3 4 1 2 3 4 c 5 , 3 = − 3 5 {\displaystyle {\begin{array}{c}{\begin{array}{c|ccc}2&{\frac {1}{2}}&{\frac {1}{16}}&0\\1&0&{\frac {1}{16}}&{\frac {1}{2}}\\\hline &1&2&3\end{array}}\\c_{4,3}={\frac {1}{2}}\end{array}}\qquad {\begin{array}{c}{\begin{array}{c|cccc}2&{\frac {3}{4}}&{\frac {1}{5}}&-{\frac {1}{20}}&0\\1&0&-{\frac {1}{20}}&{\frac {1}{5}}&{\frac {3}{4}}\\\hline &1&2&3&4\end{array}}\\c_{5,3}=-{\frac {3}{5}}\end{array}}} 3 3 2 3 5 1 10 0 2 7 16 3 80 3 80 7 16 1 0 1 10 3 5 3 2 1 2 3 4 
c 5 , 4 = 7 10 3 5 2 10 7 9 14 1 7 − 1 14 0 2 13 16 27 112 − 5 112 − 5 112 27 112 13 16 1 0 − 1 14 1 7 9 14 10 7 5 2 1 2 3 4 5 6 c 7 , 4 = − 13 14 {\displaystyle {\begin{array}{c}{\begin{array}{c|cccc}3&{\frac {3}{2}}&{\frac {3}{5}}&{\frac {1}{10}}&0\\2&{\frac {7}{16}}&{\frac {3}{80}}&{\frac {3}{80}}&{\frac {7}{16}}\\1&0&{\frac {1}{10}}&{\frac {3}{5}}&{\frac {3}{2}}\\\hline &1&2&3&4\end{array}}\\c_{5,4}={\frac {7}{10}}\end{array}}\qquad {\begin{array}{c}{\begin{array}{c|cccccc}3&{\frac {5}{2}}&{\frac {10}{7}}&{\frac {9}{14}}&{\frac {1}{7}}&-{\frac {1}{14}}&0\\2&{\frac {13}{16}}&{\frac {27}{112}}&-{\frac {5}{112}}&-{\frac {5}{112}}&{\frac {27}{112}}&{\frac {13}{16}}\\1&0&-{\frac {1}{14}}&{\frac {1}{7}}&{\frac {9}{14}}&{\frac {10}{7}}&{\frac {5}{2}}\\\hline &1&2&3&4&5&6\end{array}}\\c_{7,4}=-{\frac {13}{14}}\end{array}}} 4 3 13 8 2 3 1 8 0 3 7 5 21 40 1 15 1 40 2 5 2 2 5 1 40 1 15 21 40 7 5 1 0 1 8 2 3 13 8 3 1 2 3 4 5 c 6 , 5 = 4 5 4 15 4 16 7 33 28 3 7 1 28 0 3 9 5 117 140 8 35 − 3 140 3 35 11 20 2 11 20 3 35 − 3 140 8 35 117 140 9 5 1 0 1 28 3 7 33 28 16 7 15 4 1 2 3 4 5 6 c 7 , 5 = 11 35 {\displaystyle {\begin{array}{c}{\begin{array}{c|ccccc}4&3&{\frac {13}{8}}&{\frac {2}{3}}&{\frac {1}{8}}&0\\3&{\frac {7}{5}}&{\frac {21}{40}}&{\frac {1}{15}}&{\frac {1}{40}}&{\frac {2}{5}}\\2&{\frac {2}{5}}&{\frac {1}{40}}&{\frac {1}{15}}&{\frac {21}{40}}&{\frac {7}{5}}\\1&0&{\frac {1}{8}}&{\frac {2}{3}}&{\frac {13}{8}}&3\\\hline &1&2&3&4&5\end{array}}\\c_{6,5}={\frac {4}{5}}\end{array}}\qquad {\begin{array}{c}{\begin{array}{c|cccccc}4&{\frac {15}{4}}&{\frac {16}{7}}&{\frac {33}{28}}&{\frac {3}{7}}&{\frac {1}{28}}&0\\3&{\frac {9}{5}}&{\frac {117}{140}}&{\frac {8}{35}}&-{\frac {3}{140}}&{\frac {3}{35}}&{\frac {11}{20}}\\2&{\frac {11}{20}}&{\frac {3}{35}}&-{\frac {3}{140}}&{\frac {8}{35}}&{\frac {117}{140}}&{\frac {9}{5}}\\1&0&{\frac {1}{28}}&{\frac {3}{7}}&{\frac {33}{28}}&{\frac {16}{7}}&{\frac {15}{4}}\\\hline &1&2&3&4&5&6\end{array}}\\c_{7,5}={\frac {11}{35}}\end{array}}} 5 5 
22 7 12 7 5 7 1 7 0 4 23 8 85 56 33 56 5 56 1 56 3 8 3 4 3 10 21 1 21 1 21 10 21 4 3 2 3 8 1 56 5 56 33 56 85 56 23 8 1 0 1 7 5 7 12 7 22 7 5 1 2 3 4 5 6 c 7 , 6 = 6 7 {\displaystyle {\begin{array}{c}{\begin{array}{c|cccccc}5&5&{\frac {22}{7}}&{\frac {12}{7}}&{\frac {5}{7}}&{\frac {1}{7}}&0\\4&{\frac {23}{8}}&{\frac {85}{56}}&{\frac {33}{56}}&{\frac {5}{56}}&{\frac {1}{56}}&{\frac {3}{8}}\\3&{\frac {4}{3}}&{\frac {10}{21}}&{\frac {1}{21}}&{\frac {1}{21}}&{\frac {10}{21}}&{\frac {4}{3}}\\2&{\frac {3}{8}}&{\frac {1}{56}}&{\frac {5}{56}}&{\frac {33}{56}}&{\frac {85}{56}}&{\frac {23}{8}}\\1&0&{\frac {1}{7}}&{\frac {5}{7}}&{\frac {12}{7}}&{\frac {22}{7}}&5\\\hline &1&2&3&4&5&6\end{array}}\\c_{7,6}={\frac {6}{7}}\end{array}}} == Solution of minimal models == The 3-point structure constants of minimal models take different forms depending on the series: For A-series minimal models, an expression in terms of the Gamma function was obtained using Coulomb gas techniques in the 1980s. For D-series minimal models, an expression in terms of the fusing matrix is known. For E-series minimal models with q = 12 {\displaystyle q=12} , an expression in terms of the double Gamma function is known. The A-series and D-series structure constants can also be rewritten in terms of the same special function. == Related conformal field theories == === Coset realizations === The A-series minimal model with indices ( p , q ) {\displaystyle (p,q)} coincides with the following coset of WZW models: S U ( 2 ) k × S U ( 2 ) 1 S U ( 2 ) k + 1 , where k = q p − q − 2 . {\displaystyle {\frac {SU(2)_{k}\times SU(2)_{1}}{SU(2)_{k+1}}}\ ,\quad {\text{where}}\quad k={\frac {q}{p-q}}-2\ .} Assuming p > q {\displaystyle p>q} , the level k {\displaystyle k} is integer if and only if p = q + 1 {\displaystyle p=q+1} i.e. if and only if the minimal model is unitary. 
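The Kac tables above and the coset identification can be cross-checked with exact rational arithmetic. The sketch below assumes the standard minimal-model expressions c_{p,q} = 1 − 6(p−q)²/(pq) and h_{r,s} = ((pr − qs)² − (p−q)²)/(4pq) (stated earlier in the full article, not in this excerpt), together with the SU(2)_k central charge 3k/(k+2):

```python
from fractions import Fraction

def central_charge(p, q):
    # c_{p,q} = 1 - 6 (p - q)^2 / (p q)
    return 1 - Fraction(6 * (p - q) ** 2, p * q)

def kac_dimension(p, q, r, s):
    # h_{r,s} = ((p r - q s)^2 - (p - q)^2) / (4 p q)
    return Fraction((p * r - q * s) ** 2 - (p - q) ** 2, 4 * p * q)

# Reproduce the (4,3) Ising Kac table shown above: row r = 1, columns s = 1, 2, 3.
assert central_charge(4, 3) == Fraction(1, 2)
assert [kac_dimension(4, 3, 1, s) for s in (1, 2, 3)] == \
       [0, Fraction(1, 16), Fraction(1, 2)]

# Coset check for the unitary series p = q + 1, where k = q/(p-q) - 2 = q - 2:
# c(SU(2)_k x SU(2)_1 / SU(2)_{k+1}) = c(SU(2)_k) + c(SU(2)_1) - c(SU(2)_{k+1})
def su2_charge(k):
    return Fraction(3 * k, k + 2)

for q in range(3, 10):
    p, k = q + 1, q - 2
    coset_c = su2_charge(k) + su2_charge(1) - su2_charge(k + 1)
    assert coset_c == central_charge(p, q)
```

For the Ising case q = 3 this gives k = 1 and coset central charge 1 + 1 − 3/2 = 1/2, matching c_{4,3} in the table.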
There exist other realizations of certain minimal models, diagonal or not, as cosets of WZW models, not necessarily based on the group S U ( 2 ) {\displaystyle SU(2)} . === Generalized minimal models === For any central charge c ∈ C {\displaystyle c\in \mathbb {C} } , there is a diagonal CFT whose spectrum is made of all degenerate representations, S = ⨁ r , s = 1 ∞ R r , s ⊗ R ¯ r , s . {\displaystyle {\mathcal {S}}=\bigoplus _{r,s=1}^{\infty }{\mathcal {R}}_{r,s}\otimes {\bar {\mathcal {R}}}_{r,s}\ .} When the central charge tends to c p , q {\displaystyle c_{p,q}} , the generalized minimal models tend to the corresponding A-series minimal model. This means in particular that the degenerate representations that are not in the Kac table decouple. === Liouville theory === Since Liouville theory reduces to a generalized minimal model when the fields are taken to be degenerate, it further reduces to an A-series minimal model when the central charge is then sent to c p , q {\displaystyle c_{p,q}} . Moreover, A-series minimal models have a well-defined limit as c → 1 {\displaystyle c\to 1} : a diagonal CFT with a continuous spectrum called Runkel–Watts theory, which coincides with the limit of Liouville theory when c → 1 + {\displaystyle c\to 1^{+}} . === Products of minimal models === There are three cases of minimal models that are products of two minimal models. At the level of their spectrums, the relations are: S 2 , 5 A-series ⊗ S 2 , 5 A-series = S 3 , 10 D-series , {\displaystyle {\mathcal {S}}_{2,5}^{\text{A-series}}\otimes {\mathcal {S}}_{2,5}^{\text{A-series}}={\mathcal {S}}_{3,10}^{\text{D-series}}\ ,} S 2 , 5 A-series ⊗ S 3 , 4 A-series = S 5 , 12 E-series , {\displaystyle {\mathcal {S}}_{2,5}^{\text{A-series}}\otimes {\mathcal {S}}_{3,4}^{\text{A-series}}={\mathcal {S}}_{5,12}^{\text{E-series}}\ ,} S 2 , 5 A-series ⊗ S 2 , 7 A-series = S 7 , 30 E-series . 
{\displaystyle {\mathcal {S}}_{2,5}^{\text{A-series}}\otimes {\mathcal {S}}_{2,7}^{\text{A-series}}={\mathcal {S}}_{7,30}^{\text{E-series}}\ .} === Fermionic extensions of minimal models === If q ≡ 0 mod 4 {\displaystyle q\equiv 0{\bmod {4}}} , the A-series and the D-series ( p , q ) {\displaystyle (p,q)} minimal models each have a fermionic extension. These two fermionic extensions involve fields with half-integer spins, and they are related to one another by a parity-shift operation. == References ==
Wikipedia/Virasoro_minimal_model
In physics, a quantum state space is an abstract space in which different "positions" represent not literal locations, but rather quantum states of some physical system. It is the quantum analog of the phase space of classical mechanics. == Relative to Hilbert space == In quantum mechanics a state space is a separable complex Hilbert space. The dimension of this Hilbert space depends on the system we choose to describe. The different states that could come out of any particular measurement form an orthonormal basis, so any state vector in the state space can be written as a linear combination of these basis vectors. Having a nonzero component along multiple dimensions is called a superposition. In the formalism of quantum mechanics these state vectors are often written using Dirac's compact bra–ket notation.: 165  == Examples == The spin state of a silver atom in the Stern–Gerlach experiment can be represented in a two-state space. The spin can be aligned with a measuring apparatus (arbitrarily called 'up') or oppositely ('down'). In Dirac's notation these two states can be written as | u ⟩ , | d ⟩ {\displaystyle |u\rangle ,|d\rangle } . The space of a two-spin system has four states, | u u ⟩ , | u d ⟩ , | d u ⟩ , | d d ⟩ {\displaystyle |uu\rangle ,|ud\rangle ,|du\rangle ,|dd\rangle } . The spin state is a discrete degree of freedom; quantum state spaces can have continuous degrees of freedom. For example, a particle in one space dimension has one degree of freedom ranging from − ∞ {\displaystyle -\infty } to ∞ {\displaystyle \infty } . In Dirac notation, the states in this space might be written as | q ⟩ {\displaystyle |q\rangle } or | ψ ⟩ {\displaystyle |\psi \rangle } .: 302  == Relative to 3D space == Even in the early days of quantum mechanics, the state space (or configurations as they were called at first) was understood to be essential for understanding simple quantum-mechanical problems.
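The two-state and two-spin examples above can be made concrete with a short numerical sketch (using numpy; the specific vectors chosen for |u⟩ and |d⟩ are an illustrative basis convention, not part of the original text):

```python
import numpy as np

# Single-spin basis states |u>, |d> as an orthonormal basis of C^2.
u = np.array([1.0, 0.0])
d = np.array([0.0, 1.0])

# Two-spin basis |uu>, |ud>, |du>, |dd> via tensor (Kronecker) products:
# the state space of two spins is C^2 (x) C^2 = C^4, with four basis states.
basis = [np.kron(a, b) for a in (u, d) for b in (u, d)]
assert len(basis) == 4
assert all(np.isclose(x @ y, 1.0 if i == j else 0.0)
           for i, x in enumerate(basis) for j, y in enumerate(basis))

# A superposition has nonzero components along several basis directions;
# after normalization it yields Born-rule probabilities for each outcome.
psi = u + d
probs = np.abs(psi / np.linalg.norm(psi)) ** 2
assert np.allclose(probs, [0.5, 0.5])
```

The orthonormality check mirrors the statement that the possible measurement outcomes form an orthonormal basis of the state space.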
In 1929, Nevill Mott showed that the "tendency to picture the wave as existing in ordinary three dimensional space, whereas we are really dealing with wave functions in multispace" makes analysis of simple interaction problems more difficult. Mott analyzes α {\displaystyle \alpha } -particle emission in a cloud chamber. The emission process is isotropic, a spherical wave in quantum mechanics, but the tracks observed are linear. As Mott says, "it is a little difficult to picture how it is that an outgoing spherical wave can produce a straight track; we think intuitively that it should ionise atoms at random throughout space". This issue became known as the Mott problem. Mott then derives the straight track by considering correlations between the positions of the source and two representative atoms, showing that consecutive ionization results from just that state in which all three positions are co-linear. == Relative to classical phase space == Classical mechanics for multiple objects describes their motion in terms of a list or vector of every object's coordinates and velocity. As the objects move, the values in the vector change; the set of all possible values is called a phase space.: 88  In quantum mechanics a state space is similar; however, two vectors in the state space that are scalar multiples of each other represent the same state. Furthermore, the character of values in the quantum state differs from the classical values: in the quantum case the values can only be measured statistically (by repetition over many examples) and thus do not have well defined values at every instant of time.
: 294  == See also == Quantum mechanics – Description of physical properties at the atomic and subatomic scale Quantum state – Mathematical entity to describe the probability of each possible measurement on a system Configuration space (physics) – Space of possible positions for all objects in a physical system == References == == Further reading == Claude Cohen-Tannoudji (1977). Quantum Mechanics. John Wiley & Sons. Inc. ISBN 0-471-16433-X. David J. Griffiths (1995). Introduction to Quantum Mechanics. Prentice Hall. ISBN 0-13-124405-1. David H. McIntyre (2012). Quantum Mechanics: A Paradigms Approach. Pearson. ISBN 978-0321765796.
Wikipedia/State_space_(physics)
In theoretical physics, boundary conformal field theory (BCFT) is a conformal field theory defined on a spacetime with a boundary (or boundaries). Different kinds of boundary conditions may be imposed on the fundamental fields; for example, Neumann or Dirichlet boundary conditions are acceptable for free bosonic fields. BCFT was developed by John Cardy. In the context of string theory, physicists are often interested in two-dimensional BCFTs. The specific types of boundary conditions in a specific CFT describe different kinds of D-branes. BCFT is also used in condensed matter physics: it can be used to study boundary critical behavior and to solve quantum impurity models. == See also == Conformal field theory Operator product expansion Critical point == References == Herzog, Christopher P.; Huang, Kuo-Wei (2017). "Boundary conformal field theory and a boundary central charge". Journal of High Energy Physics. 2017 (10): 189. arXiv:1707.06224. Bibcode:2017JHEP...10..189H. doi:10.1007/jhep10(2017)189. ISSN 1029-8479. == Further reading == Cardy, John (2004). "Boundary Conformal Field Theory". arXiv:hep-th/0411189. Bibcode:2004hep.th...11189C.
Wikipedia/Boundary_conformal_field_theory
Maxwell's equations, or Maxwell–Heaviside equations, are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, and electric and magnetic circuits. The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. The equations are named after the physicist and mathematician James Clerk Maxwell, who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon. The modern form of the equations in their most common formulation is credited to Oliver Heaviside. Maxwell's equations may be combined to demonstrate how fluctuations in electromagnetic fields (waves) propagate at a constant speed in vacuum, c (299792458 m/s). Known as electromagnetic radiation, these waves occur at various wavelengths to produce a spectrum of radiation from radio waves to gamma rays.
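The span of that spectrum follows from the vacuum relation λ = c/ν between wavelength and frequency. A minimal numeric illustration (the sample frequencies are illustrative, not taken from the text):

```python
c = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength(frequency_hz):
    # lambda = c / nu for an electromagnetic wave in vacuum
    return c / frequency_hz

assert abs(wavelength(1e6) - 299.792458) < 1e-6   # 1 MHz radio wave: ~300 m
assert wavelength(5.4e14) < 1e-6                  # visible light: sub-micrometre
assert wavelength(1e19) < 1e-10                   # gamma ray: below atomic scale
```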
In partial differential equation form and a coherent system of units, Maxwell's microscopic equations can be written as (top to bottom: Gauss's law, Gauss's law for magnetism, Faraday's law, Ampère-Maxwell law) ∇ ⋅ E = ρ ε 0 ∇ ⋅ B = 0 ∇ × E = − ∂ B ∂ t ∇ × B = μ 0 ( J + ε 0 ∂ E ∂ t ) {\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} \,\,\,&={\frac {\rho }{\varepsilon _{0}}}\\\nabla \cdot \mathbf {B} \,\,\,&=0\\\nabla \times \mathbf {E} &=-{\frac {\partial \mathbf {B} }{\partial t}}\\\nabla \times \mathbf {B} &=\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\end{aligned}}} With E {\displaystyle \mathbf {E} } the electric field, B {\displaystyle \mathbf {B} } the magnetic field, ρ {\displaystyle \rho } the electric charge density and J {\displaystyle \mathbf {J} } the current density. ε 0 {\displaystyle \varepsilon _{0}} is the vacuum permittivity and μ 0 {\displaystyle \mu _{0}} the vacuum permeability. The equations have two major variants: The microscopic equations have universal applicability but are unwieldy for common calculations. They relate the electric and magnetic fields to total charge and total current, including the complicated charges and currents in materials at the atomic scale. The macroscopic equations define two new auxiliary fields that describe the large-scale behaviour of matter without having to consider atomic-scale charges and quantum phenomena like spins. However, their use requires experimentally determined parameters for a phenomenological description of the electromagnetic response of materials. The term "Maxwell's equations" is often also used for equivalent alternative formulations. Versions of Maxwell's equations based on the electric and magnetic scalar potentials are preferred for explicitly solving the equations as a boundary value problem, analytical mechanics, or for use in quantum mechanics. 
The covariant formulation (on spacetime rather than space and time separately) makes the compatibility of Maxwell's equations with special relativity manifest. Maxwell's equations in curved spacetime, commonly used in high-energy and gravitational physics, are compatible with general relativity. In fact, Albert Einstein developed special and general relativity to accommodate the invariant speed of light, a consequence of Maxwell's equations, with the principle that only relative movement has physical consequences. The publication of the equations marked the unification of a theory for previously separately described phenomena: magnetism, electricity, light, and associated radiation. Since the mid-20th century, it has been understood that Maxwell's equations do not give an exact description of electromagnetic phenomena, but are instead a classical limit of the more precise theory of quantum electrodynamics. == History of the equations == == Conceptual descriptions == === Gauss's law === Gauss's law describes the relationship between an electric field and electric charges: an electric field points away from positive charges and towards negative charges, and the net outflow of the electric field through a closed surface is proportional to the enclosed charge, including bound charge due to polarization of material. The coefficient of the proportion is the permittivity of free space. === Gauss's law for magnetism === Gauss's law for magnetism states that electric charges have no magnetic analogues, called magnetic monopoles; no north or south magnetic poles exist in isolation. Instead, the magnetic field of a material is attributed to a dipole, and the net outflow of the magnetic field through a closed surface is zero. Magnetic dipoles may be represented as loops of current or inseparable pairs of equal and opposite "magnetic charges". Precisely, the total magnetic flux through a Gaussian surface is zero, and the magnetic field is a solenoidal vector field. 
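As an illustration of Gauss's law as described above, the Coulomb field of a point charge is divergence-free away from the charge, so the net outflow through any closed surface that does not enclose the charge vanishes; the same field is also curl-free, consistent with Faraday's law for static fields. A symbolic sketch (using sympy; the setup is illustrative):

```python
import sympy as sp

x, y, z, q, eps0 = sp.symbols('x y z q epsilon_0', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Coulomb field of a point charge at the origin: E = q r_hat / (4 pi eps0 r^2)
E = [q * xi / (4 * sp.pi * eps0 * r**3) for xi in (x, y, z)]

# Divergence vanishes away from the origin (Gauss's law with rho = 0 there)
div_E = sum(sp.diff(E[i], v) for i, v in enumerate((x, y, z)))
assert sp.simplify(div_E) == 0

# The static field is also curl-free
curl_E = [sp.diff(E[2], y) - sp.diff(E[1], z),
          sp.diff(E[0], z) - sp.diff(E[2], x),
          sp.diff(E[1], x) - sp.diff(E[0], y)]
assert all(sp.simplify(comp) == 0 for comp in curl_E)
```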
=== Faraday's law === The Maxwell–Faraday version of Faraday's law of induction describes how a time-varying magnetic field corresponds to the negative curl of an electric field. In integral form, it states that the work per unit charge required to move a charge around a closed loop equals the rate of change of the magnetic flux through the enclosed surface. The electromagnetic induction is the operating principle behind many electric generators: for example, a rotating bar magnet creates a changing magnetic field and generates an electric field in a nearby wire. === Ampère–Maxwell law === The original law of Ampère states that magnetic fields relate to electric current. Maxwell's addition states that magnetic fields also relate to changing electric fields, which Maxwell called displacement current. The integral form states that electric and displacement currents are associated with a proportional magnetic field along any enclosing curve. Maxwell's modification of Ampère's circuital law is important because the laws of Ampère and Gauss must otherwise be adjusted for static fields. As a consequence, it predicts that a rotating magnetic field occurs with a changing electric field. A further consequence is the existence of self-sustaining electromagnetic waves which travel through empty space. The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents, matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics. == Formulation in terms of electric and magnetic fields (microscopic or in vacuum version) == In the electric and magnetic field formulation there are four equations that determine the fields for given charge and current distribution. 
A separate law of nature, the Lorentz force law, describes how the electric and magnetic fields act on charged particles and currents. By convention, a version of this law in the original equations by Maxwell is no longer included. The vector calculus formalism below, the work of Oliver Heaviside, has become standard. It is rotationally invariant, and therefore mathematically more transparent than Maxwell's original 20 equations in x, y and z components. The relativistic formulations are more symmetric and Lorentz invariant. For the same equations expressed using tensor calculus or differential forms, see § Alternative formulations. The differential and integral formulations are mathematically equivalent; both are useful. The integral formulation relates fields within a region of space to fields on the boundary and can often be used to simplify and directly calculate fields from symmetric distributions of charges and currents. On the other hand, the differential equations are purely local and are a more natural starting point for calculating the fields in more complicated (less symmetric) situations, for example using finite element analysis. === Key to the notation === Symbols in bold represent vector quantities, and symbols in italics represent scalar quantities, unless otherwise indicated. The equations introduce the electric field, E, a vector field, and the magnetic field, B, a pseudovector field, each generally having a time and location dependence. The sources are the total electric charge density (total charge per unit volume), ρ, and the total electric current density (total current per unit area), J.
The universal constants appearing in the equations (the first two ones explicitly only in the SI formulation) are: the permittivity of free space, ε0, and the permeability of free space, μ0, and the speed of light, c = ( ε 0 μ 0 ) − 1 / 2 {\displaystyle c=({\varepsilon _{0}\mu _{0}})^{-1/2}} ==== Differential equations ==== In the differential equations, the nabla symbol, ∇, denotes the three-dimensional gradient operator, del, the ∇⋅ symbol (pronounced "del dot") denotes the divergence operator, the ∇× symbol (pronounced "del cross") denotes the curl operator. ==== Integral equations ==== In the integral equations, Ω is any volume with closed boundary surface ∂Ω, and Σ is any surface with closed boundary curve ∂Σ, The equations are a little easier to interpret with time-independent surfaces and volumes. Time-independent surfaces and volumes are "fixed" and do not change over a given time interval. For example, since the surface is time-independent, we can bring the differentiation under the integral sign in Faraday's law: d d t ∬ Σ B ⋅ d S = ∬ Σ ∂ B ∂ t ⋅ d S , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\iint _{\Sigma }\mathbf {B} \cdot \mathrm {d} \mathbf {S} =\iint _{\Sigma }{\frac {\partial \mathbf {B} }{\partial t}}\cdot \mathrm {d} \mathbf {S} \,,} Maxwell's equations can be formulated with possibly time-dependent surfaces and volumes by using the differential version and using Gauss' and Stokes' theorems as appropriate. ∫ ∂ Ω {\displaystyle {\vphantom {\int }}_{\scriptstyle \partial \Omega }} is a surface integral over the boundary surface ∂Ω, with the loop indicating the surface is closed ∭ Ω {\displaystyle \iiint _{\Omega }} is a volume integral over the volume Ω, ∮ ∂ Σ {\displaystyle \oint _{\partial \Sigma }} is a line integral around the boundary curve ∂Σ, with the loop indicating the curve is closed. 
∬ Σ {\displaystyle \iint _{\Sigma }} is a surface integral over the surface Σ, The total electric charge Q enclosed in Ω is the volume integral over Ω of the charge density ρ (see the "macroscopic formulation" section below): Q = ∭ Ω ρ d V , {\displaystyle Q=\iiint _{\Omega }\rho \ \mathrm {d} V,} where dV is the volume element. The net magnetic flux ΦB is the surface integral of the magnetic field B passing through a fixed surface, Σ: Φ B = ∬ Σ B ⋅ d S , {\displaystyle \Phi _{B}=\iint _{\Sigma }\mathbf {B} \cdot \mathrm {d} \mathbf {S} ,} The net electric flux ΦE is the surface integral of the electric field E passing through Σ: Φ E = ∬ Σ E ⋅ d S , {\displaystyle \Phi _{E}=\iint _{\Sigma }\mathbf {E} \cdot \mathrm {d} \mathbf {S} ,} The net electric current I is the surface integral of the electric current density J passing through Σ: I = ∬ Σ J ⋅ d S , {\displaystyle I=\iint _{\Sigma }\mathbf {J} \cdot \mathrm {d} \mathbf {S} ,} where dS denotes the differential vector element of surface area S, normal to surface Σ. (Vector area is sometimes denoted by A rather than S, but this conflicts with the notation for magnetic vector potential). === Formulation in the SI === === Formulation in the Gaussian system === The definitions of charge, electric field, and magnetic field can be altered to simplify theoretical calculation, by absorbing dimensioned factors of ε0 and μ0 into the units (and thus redefining these). With a corresponding change in the values of the quantities for the Lorentz force law this yields the same physics, i.e. trajectories of charged particles, or work done by an electric motor. 
These definitions are often preferred in theoretical and high energy physics where it is natural to take the electric and magnetic field with the same units, to simplify the appearance of the electromagnetic tensor: the Lorentz covariant object unifying electric and magnetic field would then contain components with uniform unit and dimension.: vii  Such modified definitions are conventionally used with the Gaussian (CGS) units. Using these definitions, colloquially "in Gaussian units", the Maxwell equations become: The equations simplify slightly when a system of quantities is chosen in which the speed of light, c, is used for nondimensionalization, so that, for example, seconds and lightseconds are interchangeable, and c = 1. Further changes are possible by absorbing factors of 4π. This process, called rationalization, affects whether Coulomb's law or Gauss's law includes such a factor (see Heaviside–Lorentz units, used mainly in particle physics). == Relationship between differential and integral formulations == The equivalence of the differential and integral formulations is a consequence of the Gauss divergence theorem and the Kelvin–Stokes theorem. === Flux and divergence === According to the (purely mathematical) Gauss divergence theorem, the electric flux through the boundary surface ∂Ω can be rewritten as ∮ ∂ Ω E ⋅ d S = ∭ Ω ∇ ⋅ E d V {\displaystyle {\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {E} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {E} \,\mathrm {d} V} The integral version of Gauss's equation can thus be rewritten as ∭ Ω ( ∇ ⋅ E − ρ ε 0 ) d V = 0 {\displaystyle \iiint _{\Omega }\left(\nabla \cdot \mathbf {E} -{\frac {\rho }{\varepsilon _{0}}}\right)\,\mathrm {d} V=0} Since Ω is arbitrary (e.g. an arbitrarily small ball with arbitrary center), this is satisfied if and only if the integrand is zero everywhere. This is the differential equations formulation of Gauss's equation up to a trivial rearrangement.
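The divergence-theorem step can be illustrated with a concrete field. For the illustrative choice E = (x³, y³, z³), the volume integral of ∇·E = 3(x² + y² + z²) over the unit ball is 12π/5, and a direct numerical quadrature of the flux through the unit sphere reproduces it (plain-Python sketch):

```python
import math

# Volume integral of div E = 3(x^2 + y^2 + z^2) over the unit ball (analytic):
#   3 * integral_0^1 r^2 * 4*pi*r^2 dr = 12*pi/5
volume_integral = 12.0 * math.pi / 5.0

# Flux through the unit sphere by midpoint quadrature in spherical coordinates;
# on the surface E . n_hat = x^4 + y^4 + z^4 and the area element is sin(theta) dtheta dphi.
n = 400
flux = 0.0
for i in range(n):
    theta = (i + 0.5) * math.pi / n
    for j in range(2 * n):
        phi = (j + 0.5) * math.pi / n
        x = math.sin(theta) * math.cos(phi)
        y = math.sin(theta) * math.sin(phi)
        z = math.cos(theta)
        flux += (x**4 + y**4 + z**4) * math.sin(theta) * (math.pi / n) ** 2

# Divergence theorem: flux through the boundary equals the volume integral.
assert math.isclose(flux, volume_integral, rel_tol=1e-3)
```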
Similarly rewriting the magnetic flux in Gauss's law for magnetism in integral form gives ∮ ∂ Ω B ⋅ d S = ∭ Ω ∇ ⋅ B d V = 0. {\displaystyle {\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {B} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {B} \,\mathrm {d} V=0.} which is satisfied for all Ω if and only if ∇ ⋅ B = 0 {\displaystyle \nabla \cdot \mathbf {B} =0} everywhere. === Circulation and curl === By the Kelvin–Stokes theorem we can rewrite the line integrals of the fields around the closed boundary curve ∂Σ to an integral of the "circulation of the fields" (i.e. their curls) over a surface it bounds, i.e. ∮ ∂ Σ B ⋅ d ℓ = ∬ Σ ( ∇ × B ) ⋅ d S , {\displaystyle \oint _{\partial \Sigma }\mathbf {B} \cdot \mathrm {d} {\boldsymbol {\ell }}=\iint _{\Sigma }(\nabla \times \mathbf {B} )\cdot \mathrm {d} \mathbf {S} ,} Hence the Ampère–Maxwell law, the modified version of Ampère's circuital law, in integral form can be rewritten as ∬ Σ ( ∇ × B − μ 0 ( J + ε 0 ∂ E ∂ t ) ) ⋅ d S = 0. {\displaystyle \iint _{\Sigma }\left(\nabla \times \mathbf {B} -\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)\cdot \mathrm {d} \mathbf {S} =0.} Since Σ can be chosen arbitrarily, e.g. as an arbitrary small, arbitrary oriented, and arbitrary centered disk, we conclude that the integrand is zero if and only if the Ampère–Maxwell law in differential equations form is satisfied. The equivalence of Faraday's law in differential and integral form follows likewise. The line integrals and curls are analogous to quantities in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field. == Charge conservation == The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the Ampère–Maxwell law has zero divergence by the div–curl identity. 
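The div–curl identity invoked here, ∇·(∇×B) = 0, can be verified symbolically for a generic smooth field (a sympy sketch with arbitrary component functions):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
Bx, By, Bz = (f(x, y, z, t)
              for f in sp.symbols('B_x B_y B_z', cls=sp.Function))

# curl B in Cartesian components
curl_B = [sp.diff(Bz, y) - sp.diff(By, z),
          sp.diff(Bx, z) - sp.diff(Bz, x),
          sp.diff(By, x) - sp.diff(Bx, y)]

# div(curl B) vanishes identically because mixed partial derivatives commute
div_curl_B = sum(sp.diff(curl_B[i], v) for i, v in enumerate((x, y, z)))
assert sp.simplify(div_curl_B) == 0
```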
Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives: 0 = ∇ ⋅ ( ∇ × B ) = ∇ ⋅ ( μ 0 ( J + ε 0 ∂ E ∂ t ) ) = μ 0 ( ∇ ⋅ J + ε 0 ∂ ∂ t ∇ ⋅ E ) = μ 0 ( ∇ ⋅ J + ∂ ρ ∂ t ) {\displaystyle 0=\nabla \cdot (\nabla \times \mathbf {B} )=\nabla \cdot \left(\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +\varepsilon _{0}{\frac {\partial }{\partial t}}\nabla \cdot \mathbf {E} \right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +{\frac {\partial \rho }{\partial t}}\right)} i.e., ∂ ρ ∂ t + ∇ ⋅ J = 0. {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {J} =0.} By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary: d d t Q Ω = d d t ∭ Ω ρ d V = − {\displaystyle {\frac {d}{dt}}Q_{\Omega }={\frac {d}{dt}}\iiint _{\Omega }\rho \mathrm {d} V=-} ∮ ∂ Ω J ⋅ d S = − I ∂ Ω . {\displaystyle {\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {J} \cdot {\rm {d}}\mathbf {S} =-I_{\partial \Omega }.} In particular, in an isolated system the total charge is conserved. == Vacuum equations, electromagnetic waves and speed of light == In a region with no charges (ρ = 0) and no currents (J = 0), such as in vacuum, Maxwell's equations reduce to: ∇ ⋅ E = 0 , ∇ × E + ∂ B ∂ t = 0 , ∇ ⋅ B = 0 , ∇ × B − μ 0 ε 0 ∂ E ∂ t = 0. {\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} &=0,&\nabla \times \mathbf {E} +{\frac {\partial \mathbf {B} }{\partial t}}=0,\\\nabla \cdot \mathbf {B} &=0,&\nabla \times \mathbf {B} -\mu _{0}\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}=0.\end{aligned}}} Taking the curl (∇×) of the curl equations, and using the curl of the curl identity we obtain μ 0 ε 0 ∂ 2 E ∂ t 2 − ∇ 2 E = 0 , μ 0 ε 0 ∂ 2 B ∂ t 2 − ∇ 2 B = 0. 
{\displaystyle {\begin{aligned}\mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =0,\\\mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =0.\end{aligned}}} The quantity μ 0 ε 0 {\displaystyle \mu _{0}\varepsilon _{0}} has the dimension (T/L)2. Defining c = ( μ 0 ε 0 ) − 1 / 2 {\displaystyle c=(\mu _{0}\varepsilon _{0})^{-1/2}} , the equations above have the form of the standard wave equations 1 c 2 ∂ 2 E ∂ t 2 − ∇ 2 E = 0 , 1 c 2 ∂ 2 B ∂ t 2 − ∇ 2 B = 0. {\displaystyle {\begin{aligned}{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =0,\\{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =0.\end{aligned}}} Already during Maxwell's lifetime, it was found that the known values for ε 0 {\displaystyle \varepsilon _{0}} and μ 0 {\displaystyle \mu _{0}} give c ≈ 2.998 × 10 8 m/s {\displaystyle c\approx 2.998\times 10^{8}~{\text{m/s}}} , then already known to be the speed of light in free space. This led him to propose that light and radio waves were propagating electromagnetic waves, since amply confirmed. In the old SI system of units, the values of μ 0 = 4 π × 10 − 7 {\displaystyle \mu _{0}=4\pi \times 10^{-7}} and c = 299 792 458 m/s {\displaystyle c=299\,792\,458~{\text{m/s}}} are defined constants, (which means that by definition ε 0 = 8.854 187 8... × 10 − 12 F/m {\displaystyle \varepsilon _{0}=8.854\,187\,8...\times 10^{-12}~{\text{F/m}}} ) that define the ampere and the metre. In the new SI system, only c keeps its defined value, and the electron charge gets a defined value. In materials with relative permittivity, εr, and relative permeability, μr, the phase velocity of light becomes v p = 1 μ 0 μ r ε 0 ε r , {\displaystyle v_{\text{p}}={\frac {1}{\sqrt {\mu _{0}\mu _{\text{r}}\varepsilon _{0}\varepsilon _{\text{r}}}}},} which is usually less than c. 
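The numerical relationships between μ0, ε0, c, and the phase velocity quoted above can be checked directly. A small sketch using the old-SI defined values; the glass-like εr = 2.25 used for the medium is an illustrative assumption, not a value from the text:

```python
import math

mu_0 = 4 * math.pi * 1e-7     # H/m, the old-SI defined value quoted above
c = 299_792_458.0             # m/s, defined

# epsilon_0 follows from c = (mu_0 * eps_0)**(-1/2):
eps_0 = 1 / (mu_0 * c ** 2)
print(eps_0)                  # ~8.8541878e-12 F/m, as stated above

# Phase velocity in a linear medium: v_p = 1/sqrt(mu_0 mu_r eps_0 eps_r)
# = c / sqrt(mu_r eps_r). For an assumed glass-like dielectric with
# eps_r = 2.25 and mu_r = 1, the refractive index n = c / v_p is 1.5.
v_p = 1 / math.sqrt(mu_0 * 1.0 * eps_0 * 2.25)
print(v_p, c / v_p)
```

The computed v_p is below c, consistent with the remark that the phase velocity in a material is usually less than the vacuum value.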
In addition, E and B are perpendicular to each other and to the direction of wave propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. Maxwell's equations explain how these waves can physically propagate through space. The changing magnetic field creates a changing electric field through Faraday's law. In turn, that electric field creates a changing magnetic field through Maxwell's modification of Ampère's circuital law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move through space at velocity c. == Macroscopic formulation == The above equations are the microscopic version of Maxwell's equations, expressing the electric and the magnetic fields in terms of the (possibly atomic-level) charges and currents present. This is sometimes called the "general" form, but the macroscopic version below is equally general, the difference being one of bookkeeping. The microscopic version is sometimes called "Maxwell's equations in vacuum": this refers to the fact that the material medium is not built into the structure of the equations, but appears only in the charge and current terms. The microscopic version was introduced by Lorentz, who tried to use it to derive the macroscopic properties of bulk matter from its microscopic constituents.: 5  "Maxwell's macroscopic equations", also known as Maxwell's equations in matter, are more similar to those that Maxwell introduced himself. In the macroscopic equations, the influence of bound charge Qb and bound current Ib is incorporated into the displacement field D and the magnetizing field H, while the equations depend only on the free charges Qf and free currents If. This reflects a splitting of the total electric charge Q and current I (and their densities ρ and J) into free and bound parts: Q = Q f + Q b = ∭ Ω ( ρ f + ρ b ) d V = ∭ Ω ρ d V , I = I f + I b = ∬ Σ ( J f + J b ) ⋅ d S = ∬ Σ J ⋅ d S . 
{\displaystyle {\begin{aligned}Q&=Q_{\text{f}}+Q_{\text{b}}=\iiint _{\Omega }\left(\rho _{\text{f}}+\rho _{\text{b}}\right)\,\mathrm {d} V=\iiint _{\Omega }\rho \,\mathrm {d} V,\\I&=I_{\text{f}}+I_{\text{b}}=\iint _{\Sigma }\left(\mathbf {J} _{\text{f}}+\mathbf {J} _{\text{b}}\right)\cdot \mathrm {d} \mathbf {S} =\iint _{\Sigma }\mathbf {J} \cdot \mathrm {d} \mathbf {S} .\end{aligned}}} The cost of this splitting is that the additional fields D and H need to be determined through phenomenological constitutive equations relating these fields to the electric field E and the magnetic field B, together with the bound charge and current. See below for a detailed description of the differences between the microscopic equations, dealing with total charge and current including material contributions, useful in air/vacuum; and the macroscopic equations, dealing with free charge and current, practical to use within materials. === Bound charge and current === When an electric field is applied to a dielectric material, its molecules respond by forming microscopic electric dipoles – their atomic nuclei move a tiny distance in the direction of the field, while their electrons move a tiny distance in the opposite direction. This produces a macroscopic bound charge in the material even though all of the charges involved are bound to individual molecules. For example, if every molecule responds the same, similar to that shown in the figure, these tiny movements of charge combine to produce a layer of positive bound charge on one side of the material and a layer of negative charge on the other side. The bound charge is most conveniently described in terms of the polarization P of the material, its dipole moment per unit volume. If P is uniform, a macroscopic separation of charge is produced only at the surfaces where P enters and leaves the material. For non-uniform P, a charge is also produced in the bulk.
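The bulk charge produced by a non-uniform P is just ρb = −∇ ⋅ P, as defined formally in the next subsection. A quick finite-difference sketch; the polarization field is an illustrative choice of mine, not from the article:

```python
# Bound charge density rho_b = -div P, evaluated by central differences.
# The polarization field P = (x**2, 0, 0) is an illustrative choice;
# analytically its bound charge density is rho_b = -2x.
H = 1e-5

def rho_b(P, x, y, z):
    div = ((P(x + H, y, z)[0] - P(x - H, y, z)[0]) +
           (P(x, y + H, z)[1] - P(x, y - H, z)[1]) +
           (P(x, y, z + H)[2] - P(x, y, z - H)[2])) / (2 * H)
    return -div

P = lambda x, y, z: (x * x, 0.0, 0.0)
print(rho_b(P, 0.4, 0.0, 0.0))   # ~ -0.8, matching -2x at x = 0.4

# A uniform polarization produces no bulk charge, as the text states:
print(rho_b(lambda x, y, z: (1.0, 0.0, 0.0), 0.4, 0.0, 0.0))
```

The second call returns exactly zero, matching the statement that a uniform P produces charge only at the surfaces.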
Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are intrinsically linked to the angular momentum of the components of the atoms, most notably their electrons. The connection to angular momentum suggests the picture of an assembly of microscopic current loops. Outside the material, an assembly of such microscopic current loops is not different from a macroscopic current circulating around the material's surface, despite the fact that no individual charge is traveling a large distance. These bound currents can be described using the magnetization M. The very complicated and granular bound charges and bound currents, therefore, can be represented on the macroscopic scale in terms of P and M, which average these charges and currents on a sufficiently large scale so as not to see the granularity of individual atoms, but also sufficiently small that they vary with location in the material. As such, Maxwell's macroscopic equations ignore many details on a fine scale that can be unimportant to understanding matters on a gross scale by calculating fields that are averaged over some suitable volume. === Auxiliary fields, polarization and magnetization === The definitions of the auxiliary fields are: D ( r , t ) = ε 0 E ( r , t ) + P ( r , t ) , H ( r , t ) = 1 μ 0 B ( r , t ) − M ( r , t ) , {\displaystyle {\begin{aligned}\mathbf {D} (\mathbf {r} ,t)&=\varepsilon _{0}\mathbf {E} (\mathbf {r} ,t)+\mathbf {P} (\mathbf {r} ,t),\\\mathbf {H} (\mathbf {r} ,t)&={\frac {1}{\mu _{0}}}\mathbf {B} (\mathbf {r} ,t)-\mathbf {M} (\mathbf {r} ,t),\end{aligned}}} where P is the polarization field and M is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. The macroscopic bound charge density ρb and bound current density Jb in terms of polarization P and magnetization M are then defined as ρ b = − ∇ ⋅ P , J b = ∇ × M + ∂ P ∂ t . 
{\displaystyle {\begin{aligned}\rho _{\text{b}}&=-\nabla \cdot \mathbf {P} ,\\\mathbf {J} _{\text{b}}&=\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}.\end{aligned}}} If we define the total, bound, and free charge and current density by ρ = ρ b + ρ f , J = J b + J f , {\displaystyle {\begin{aligned}\rho &=\rho _{\text{b}}+\rho _{\text{f}},\\\mathbf {J} &=\mathbf {J} _{\text{b}}+\mathbf {J} _{\text{f}},\end{aligned}}} and use the defining relations above to eliminate D, and H, the "macroscopic" Maxwell's equations reproduce the "microscopic" equations. === Constitutive relations === In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations between displacement field D and the electric field E, as well as the magnetizing field H and the magnetic field B. Equivalently, we have to specify the dependence of the polarization P (hence the bound charge) and the magnetization M (hence the bound current) on the applied electric and magnetic field. The equations specifying this response are called constitutive relations. For real-world materials, the constitutive relations are rarely simple, except approximately, and usually determined by experiment. See the main article on constitutive relations for a fuller description.: 44–45  For materials without polarization and magnetization, the constitutive relations are (by definition): 2  D = ε 0 E , H = 1 μ 0 B , {\displaystyle \mathbf {D} =\varepsilon _{0}\mathbf {E} ,\quad \mathbf {H} ={\frac {1}{\mu _{0}}}\mathbf {B} ,} where ε0 is the permittivity of free space and μ0 the permeability of free space. Since there is no bound charge, the total and the free charge and current are equal. An alternative viewpoint on the microscopic equations is that they are the macroscopic equations together with the statement that vacuum behaves like a perfect linear "material" without additional polarization and magnetization. 
More generally, for linear materials the constitutive relations are: 44–45  D = ε E , H = 1 μ B , {\displaystyle \mathbf {D} =\varepsilon \mathbf {E} ,\quad \mathbf {H} ={\frac {1}{\mu }}\mathbf {B} ,} where ε is the permittivity and μ the permeability of the material. For the displacement field D the linear approximation is usually excellent because for all but the most extreme electric fields or temperatures obtainable in the laboratory (high power pulsed lasers), the interatomic electric fields of materials, of the order of 10¹¹ V/m, are much higher than the external field. For the magnetizing field H {\displaystyle \mathbf {H} } , however, the linear approximation can break down in common materials like iron, leading to phenomena like hysteresis. Even the linear case can have various complications, however. For homogeneous materials, ε and μ are constant throughout the material, while for inhomogeneous materials they depend on location within the material (and perhaps time).: 463  For isotropic materials, ε and μ are scalars, while for anisotropic materials (e.g. due to crystal structure) they are tensors.: 421 : 463  Materials are generally dispersive, so ε and μ depend on the frequency of any incident EM waves.: 625 : 397  Even more generally, in the case of non-linear materials (see for example nonlinear optics), D and P are not necessarily proportional to E; similarly, H or M is not necessarily proportional to B. In general D and H depend on both E and B, on location and time, and possibly other physical quantities. In applications one also has to describe how the free currents and charge density behave in terms of E and B, possibly coupled to other physical quantities like pressure, and the mass, number density, and velocity of charge-carrying particles. For example, the original equations given by Maxwell (see History of Maxwell's equations) included Ohm's law in the form J f = σ E .
{\displaystyle \mathbf {J} _{\text{f}}=\sigma \mathbf {E} .} == Alternative formulations == Following are some of the several other mathematical formalisms of Maxwell's equations, with the columns separating the two homogeneous Maxwell equations from the two inhomogeneous ones. Each formulation has versions directly in terms of the electric and magnetic fields, and indirectly in terms of the electric potential φ and the vector potential A. Potentials were introduced as a convenient way to solve the homogeneous equations, but it was thought that all observable physics was contained in the electric and magnetic fields (or relativistically, the Faraday tensor). The potentials play a central role in quantum mechanics, however, and act quantum mechanically with observable consequences even when the electric and magnetic fields vanish (Aharonov–Bohm effect). Each table describes one formalism. See the main article for details of each formulation. The direct spacetime formulations make manifest that the Maxwell equations are relativistically invariant, where space and time are treated on equal footing. Because of this symmetry, the electric and magnetic fields are treated on equal footing and are recognized as components of the Faraday tensor. This reduces the four Maxwell equations to two, which simplifies the equations, although we can no longer use the familiar vector formulation. Maxwell's equations in formulations that do not treat space and time manifestly on the same footing have Lorentz invariance as a hidden symmetry. This was a major source of inspiration for the development of relativity theory. Indeed, even the formulation that treats space and time separately is not a non-relativistic approximation and describes the same physics by simply renaming variables. For this reason the relativistically invariant equations are usually called the Maxwell equations as well.
In the tensor calculus formulation, the electromagnetic tensor Fαβ is an antisymmetric covariant order 2 tensor; the four-potential, Aα, is a covariant vector; the current, Jα, is a vector; the square brackets, [ ], denote antisymmetrization of indices; ∂α is the partial derivative with respect to the coordinate, xα. In Minkowski space coordinates are chosen with respect to an inertial frame; (xα) = (ct, x, y, z), so that the metric tensor used to raise and lower indices is ηαβ = diag(1, −1, −1, −1). The d'Alembert operator on Minkowski space is ◻ = ∂α∂α as in the vector formulation. In general spacetimes, the coordinate system xα is arbitrary, the covariant derivative ∇α, the Ricci tensor, Rαβ and raising and lowering of indices are defined by the Lorentzian metric, gαβ and the d'Alembert operator is defined as ◻ = ∇α∇α. The topological restriction is that the second real cohomology group of the space vanishes (see the differential form formulation for an explanation). This is violated for Minkowski space with a line removed, which can model a (flat) spacetime with a point-like monopole on the complement of the line. In the differential form formulation on arbitrary space times, F = ⁠1/2⁠Fαβ‍dxα ∧ dxβ is the electromagnetic tensor considered as a 2-form, A = Aαdxα is the potential 1-form, J = − J α ⋆ d x α {\displaystyle J=-J_{\alpha }{\star }\mathrm {d} x^{\alpha }} is the current 3-form, d is the exterior derivative, and ⋆ {\displaystyle {\star }} is the Hodge star on forms defined (up to its orientation, i.e. its sign) by the Lorentzian metric of spacetime. In the special case of 2-forms such as F, the Hodge star ⋆ {\displaystyle {\star }} depends on the metric tensor only for its local scale. This means that, as formulated, the differential form field equations are conformally invariant, but the Lorenz gauge condition breaks conformal invariance. 
The operator ◻ = ( − ⋆ d ⋆ d − d ⋆ d ⋆ ) {\displaystyle \Box =(-{\star }\mathrm {d} {\star }\mathrm {d} -\mathrm {d} {\star }\mathrm {d} {\star })} is the d'Alembert–Laplace–Beltrami operator on 1-forms on an arbitrary Lorentzian spacetime. The topological condition is again that the second real cohomology group is trivial (i.e., that it vanishes). By the isomorphism with the second de Rham cohomology this condition means that every closed 2-form is exact. Other formalisms include the geometric algebra formulation and a matrix representation of Maxwell's equations. Historically, a quaternionic formulation was used. == Solutions == Maxwell's equations are partial differential equations that relate the electric and magnetic fields to each other and to the electric charges and currents. Often, the charges and currents are themselves dependent on the electric and magnetic fields via the Lorentz force equation and the constitutive relations. These all form a set of coupled partial differential equations which are often very difficult to solve: the solutions encompass all the diverse phenomena of classical electromagnetism. Some general remarks follow. As for any differential equation, boundary conditions and initial conditions are necessary for a unique solution. For example, even with no charges and no currents anywhere in spacetime, there are the obvious solutions for which E and B are zero or constant, but there are also non-trivial solutions corresponding to electromagnetic waves. In some cases, Maxwell's equations are solved over the whole of space, and boundary conditions are given as asymptotic limits at infinity.
In other cases, Maxwell's equations are solved in a finite region of space, with appropriate conditions on the boundary of that region, for example an artificial absorbing boundary representing the rest of the universe, or periodic boundary conditions, or walls that isolate a small region from the outside world (as with a waveguide or cavity resonator). Jefimenko's equations (or the closely related Liénard–Wiechert potentials) are the explicit solution to Maxwell's equations for the electric and magnetic fields created by any given distribution of charges and currents. They assume specific initial conditions to obtain the so-called "retarded solution", where the only fields present are the ones created by the charges. However, Jefimenko's equations are unhelpful in situations when the charges and currents are themselves affected by the fields they create. Numerical methods for differential equations can be used to compute approximate solutions of Maxwell's equations when exact solutions are impossible. These include the finite element method and finite-difference time-domain method. For more details, see Computational electromagnetics. == Overdetermination of Maxwell's equations == Maxwell's equations seem overdetermined, in that they involve six unknowns (the three components of E and B) but eight equations (one for each of the two Gauss's laws, three vector components each for Faraday's and Ampère's circuital laws). (The currents and charges are not unknowns, being freely specifiable subject to charge conservation.) This is related to a certain limited kind of redundancy in Maxwell's equations: It can be proven that any system satisfying Faraday's law and Ampère's circuital law automatically also satisfies the two Gauss's laws, as long as the system's initial condition does, and assuming conservation of charge and the nonexistence of magnetic monopoles. This explanation was first introduced by Julius Adams Stratton in 1941.
Although it is possible to simply ignore the two Gauss's laws in a numerical algorithm (apart from the initial conditions), the imperfect precision of the calculations can lead to ever-increasing violations of those laws. By introducing dummy variables characterizing these violations, the four equations are no longer overdetermined. The resulting formulation can lead to more accurate algorithms that take all four laws into account. Both identities ∇ ⋅ ∇ × B ≡ 0 , ∇ ⋅ ∇ × E ≡ 0 {\displaystyle \nabla \cdot \nabla \times \mathbf {B} \equiv 0,\nabla \cdot \nabla \times \mathbf {E} \equiv 0} , which reduce eight equations to six independent ones, are the true reason for the overdetermination. Equivalently, the overdetermination can be viewed as implying conservation of electric and magnetic charge, as these conservation laws are required in the derivation described above but are also implied by the two Gauss's laws. For linear algebraic equations, one can make 'nice' rules to rewrite the equations and unknowns. The equations can be linearly dependent. But in differential equations, and especially partial differential equations (PDEs), one needs appropriate boundary conditions, which depend in not so obvious ways on the equations. Even more, if one rewrites them in terms of vector and scalar potential, then the equations are underdetermined because of gauge fixing. == Maxwell's equations as the classical limit of QED == Maxwell's equations and the Lorentz force law (along with the rest of classical electromagnetism) are extraordinarily successful at explaining and predicting a variety of phenomena. However, they do not account for quantum effects, and so their domain of applicability is limited. Maxwell's equations are thought of as the classical limit of quantum electrodynamics (QED). Some observed electromagnetic phenomena cannot be explained with Maxwell's equations if the sources of the electromagnetic fields are the classical distributions of charge and current.
These include photon–photon scattering and many other phenomena related to photons or virtual photons, "nonclassical light" and quantum entanglement of electromagnetic fields (see Quantum optics). For example, quantum cryptography cannot be described by Maxwell theory, not even approximately. The approximate nature of Maxwell's equations becomes more and more apparent when going into the extremely strong field regime (see Euler–Heisenberg Lagrangian) or to extremely small distances. Finally, Maxwell's equations cannot explain any phenomenon involving individual photons interacting with quantum matter, such as the photoelectric effect, Planck's law, the Duane–Hunt law, and single-photon light detectors. However, many such phenomena may be explained using a halfway theory of quantum matter coupled to a classical electromagnetic field, either as an external field or with the expectation values of the charge and current densities on the right-hand side of Maxwell's equations. This is known as semiclassical theory or self-field QED and was initially explored by de Broglie and Schrödinger and later fully developed by E.T. Jaynes and A.O. Barut. == Variations == Popular variations on the Maxwell equations as a classical theory of electromagnetic fields are relatively scarce because the standard equations have stood the test of time remarkably well. === Magnetic monopoles === Maxwell's equations posit that there is electric charge, but no magnetic charge (also called magnetic monopoles), in the universe. Indeed, magnetic charge has never been observed, despite extensive searches, and may not exist. If magnetic monopoles did exist, both Gauss's law for magnetism and Faraday's law would need to be modified, and the resulting four equations would be fully symmetric under the interchange of electric and magnetic fields.: 273–275  == See also == == Explanatory notes == == References == == Further reading == Imaeda, K.
(1995), "Biquaternionic Formulation of Maxwell's Equations and their Solutions", in Ablamowicz, Rafał; Lounesto, Pertti (eds.), Clifford Algebras and Spinor Structures, Springer, pp. 265–280, doi:10.1007/978-94-015-8422-7_16, ISBN 978-90-481-4525-6 === Historical publications === On Faraday's Lines of Force – 1855/56. Maxwell's first paper (Part 1 & 2) – Compiled by Blaze Labs Research (PDF). On Physical Lines of Force – 1861. Maxwell's 1861 paper describing magnetic lines of force – Predecessor to 1873 Treatise. James Clerk Maxwell, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459–512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.) A Dynamical Theory Of The Electromagnetic Field – 1865. Maxwell's 1865 paper describing his 20 equations, link from Google Books. J. Clerk Maxwell (1873), "A Treatise on Electricity and Magnetism": Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 1 – 1873 – Posner Memorial Collection – Carnegie Mellon University. Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 2 – 1873 – Posner Memorial Collection – Carnegie Mellon University. Developments before the theory of relativity Larmor Joseph (1897). "On a dynamical theory of the electric and luminiferous medium. Part 3, Relations with material media" . Phil. Trans. R. Soc. 190: 205–300. Lorentz Hendrik (1899). "Simplified theory of electrical and optical phenomena in moving systems" . Proc. Acad. Science Amsterdam. I: 427–443. Lorentz Hendrik (1904). "Electromagnetic phenomena in a system moving with any velocity less than that of light" . Proc. Acad. Science Amsterdam. IV: 669–678. Henri Poincaré (1900) "La théorie de Lorentz et le Principe de Réaction" (in French), Archives Néerlandaises, V, 253–278. Henri Poincaré (1902) "La Science et l'Hypothèse" (in French). 
Henri Poincaré (1905) "Sur la dynamique de l'électron" (in French), Comptes Rendus de l'Académie des Sciences, 140, 1504–1508. Catt, Walton and Davidson. "The History of Displacement Current" Archived 2008-05-06 at the Wayback Machine. Wireless World, March 1979. == External links == "Maxwell equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994] maxwells-equations.com — An intuitive tutorial of Maxwell's equations. The Feynman Lectures on Physics Vol. II Ch. 18: The Maxwell Equations Wikiversity Page on Maxwell's Equations === Modern treatments === Electromagnetism (ch. 11), B. Crowell, Fullerton College Lecture series: Relativity and electromagnetism, R. Fitzpatrick, University of Texas at Austin Electromagnetic waves from Maxwell's equations on Project PHYSNET. MIT Video Lecture Series (36 × 50 minute lectures) (in .mp4 format) – Electricity and Magnetism Taught by Professor Walter Lewin. === Other === Silagadze, Z. K. (2002). "Feynman's derivation of Maxwell equations and extra dimensions". Annales de la Fondation Louis de Broglie. 27: 241–256. arXiv:hep-ph/0106235. Nature Milestones: Photons – Milestone 2 (1861) Maxwell's equations
Wikipedia/Maxwell_theory
Conformal symmetry is a property of spacetime that ensures angles remain unchanged even when distances are altered. If you stretch, compress, or otherwise distort spacetime, the local angular relationships between lines or curves stay the same. This idea extends the familiar Poincaré group —which accounts for rotations, translations, and boosts—into the more comprehensive conformal group. Conformal symmetry encompasses special conformal transformations and dilations. In three spatial plus one time dimensions, conformal symmetry has 15 degrees of freedom: ten for the Poincaré group, four for special conformal transformations, and one for a dilation. Harry Bateman and Ebenezer Cunningham were the first to study the conformal symmetry of Maxwell's equations. They called a generic expression of conformal symmetry a spherical wave transformation. General relativity in two spacetime dimensions also enjoys conformal symmetry. == Generators == The Lie algebra of the conformal group has the following representation: M μ ν ≡ i ( x μ ∂ ν − x ν ∂ μ ) , P μ ≡ − i ∂ μ , D ≡ − i x μ ∂ μ , K μ ≡ i ( x 2 ∂ μ − 2 x μ x ν ∂ ν ) , {\displaystyle {\begin{aligned}&M_{\mu \nu }\equiv i(x_{\mu }\partial _{\nu }-x_{\nu }\partial _{\mu })\,,\\&P_{\mu }\equiv -i\partial _{\mu }\,,\\&D\equiv -ix_{\mu }\partial ^{\mu }\,,\\&K_{\mu }\equiv i(x^{2}\partial _{\mu }-2x_{\mu }x_{\nu }\partial ^{\nu })\,,\end{aligned}}} where M μ ν {\displaystyle M_{\mu \nu }} are the Lorentz generators, P μ {\displaystyle P_{\mu }} generates translations, D {\displaystyle D} generates scaling transformations (also known as dilatations or dilations) and K μ {\displaystyle K_{\mu }} generates the special conformal transformations. 
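The generators above can be realized concretely as differential operators acting on functions, and one of the commutation relations listed in the next section, [D, Pμ] = iPμ, can then be checked numerically. A one-dimensional toy sketch of my own construction, using nested central differences:

```python
import math

H = 1e-5  # finite-difference step

def deriv(f, x):
    return (f(x + H) - f(x - H)) / (2 * H)

def P(f):                      # P = -i d/dx, the translation generator
    return lambda x: -1j * deriv(f, x)

def D(f):                      # D = -i x d/dx, the dilation generator in 1D
    return lambda x: -1j * x * deriv(f, x)

def commutator(A, B_, f):
    return lambda x: A(B_(f))(x) - B_(A(f))(x)

x0 = 0.7
lhs = commutator(D, P, math.sin)(x0)   # [D, P] applied to sin, at x0
rhs = 1j * P(math.sin)(x0)             # i P sin at x0, i.e. cos(x0)
print(lhs, rhs)                        # both ~ cos(0.7) ~ 0.7648
```

Analytically, [D, P]f = −x f″ + (f′ + x f″) = f′, which is exactly (iP)f; the finite-difference evaluation reproduces this to roughly five digits.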
== Commutation relations == The commutation relations are as follows: [ D , K μ ] = − i K μ , [ D , P μ ] = i P μ , [ K μ , P ν ] = 2 i ( η μ ν D − M μ ν ) , [ K μ , M ν ρ ] = i ( η μ ν K ρ − η μ ρ K ν ) , [ P ρ , M μ ν ] = i ( η ρ μ P ν − η ρ ν P μ ) , [ M μ ν , M ρ σ ] = i ( η ν ρ M μ σ + η μ σ M ν ρ − η μ ρ M ν σ − η ν σ M μ ρ ) , {\displaystyle {\begin{aligned}&[D,K_{\mu }]=-iK_{\mu }\,,\\&[D,P_{\mu }]=iP_{\mu }\,,\\&[K_{\mu },P_{\nu }]=2i(\eta _{\mu \nu }D-M_{\mu \nu })\,,\\&[K_{\mu },M_{\nu \rho }]=i(\eta _{\mu \nu }K_{\rho }-\eta _{\mu \rho }K_{\nu })\,,\\&[P_{\rho },M_{\mu \nu }]=i(\eta _{\rho \mu }P_{\nu }-\eta _{\rho \nu }P_{\mu })\,,\\&[M_{\mu \nu },M_{\rho \sigma }]=i(\eta _{\nu \rho }M_{\mu \sigma }+\eta _{\mu \sigma }M_{\nu \rho }-\eta _{\mu \rho }M_{\nu \sigma }-\eta _{\nu \sigma }M_{\mu \rho })\,,\end{aligned}}} other commutators vanish. Here η μ ν {\displaystyle \eta _{\mu \nu }} is the Minkowski metric tensor. Additionally, D {\displaystyle D} is a scalar and K μ {\displaystyle K_{\mu }} is a covariant vector under the Lorentz transformations. The special conformal transformations are given by x μ → x μ − a μ x 2 1 − 2 a ⋅ x + a 2 x 2 {\displaystyle x^{\mu }\to {\frac {x^{\mu }-a^{\mu }x^{2}}{1-2a\cdot x+a^{2}x^{2}}}} where a μ {\displaystyle a^{\mu }} is a parameter describing the transformation. This special conformal transformation can also be written as x μ → x ′ μ {\displaystyle x^{\mu }\to x'^{\mu }} , where x ′ μ x ′ 2 = x μ x 2 − a μ , {\displaystyle {\frac {{x}'^{\mu }}{{x'}^{2}}}={\frac {x^{\mu }}{x^{2}}}-a^{\mu },} which shows that it consists of an inversion, followed by a translation, followed by a second inversion. In two-dimensional spacetime, the transformations of the conformal group are the conformal transformations. There are infinitely many of them. 
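The decomposition of a special conformal transformation into an inversion, a translation, and a second inversion can be verified numerically against the closed-form expression above. A sketch with the Minkowski metric; the sample point and parameter are arbitrary illustrative choices (valid wherever x² and the denominators are nonzero):

```python
ETA = (1.0, -1.0, -1.0, -1.0)   # Minkowski metric, signature (+, -, -, -)

def dot(u, v):
    return sum(e * a * b for e, a, b in zip(ETA, u, v))

def sct(x, a):
    """Special conformal transformation in the closed form given above."""
    x2 = dot(x, x)
    denom = 1 - 2 * dot(a, x) + dot(a, a) * x2
    return tuple((xi - ai * x2) / denom for xi, ai in zip(x, a))

def invert(x):
    """Inversion x -> x / x^2 (defined where x^2 != 0)."""
    x2 = dot(x, x)
    return tuple(xi / x2 for xi in x)

# Inversion, then translation by -a, then inversion again.
x = (1.0, 2.0, 0.5, -0.3)
a = (0.1, 0.02, 0.03, 0.0)
direct = sct(x, a)
composed = invert(tuple(yi - ai for yi, ai in zip(invert(x), a)))
print(direct)
print(composed)   # the two agree componentwise
```

The agreement is exact up to floating-point round-off, since x′/x′² = x/x² − a is an algebraic identity.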
In more than two dimensions, Euclidean conformal transformations map circles to circles, and hyperspheres to hyperspheres, with a straight line considered a degenerate circle and a hyperplane a degenerate hypersphere. In more than two Lorentzian dimensions, conformal transformations map null rays to null rays and light cones to light cones, with a null hyperplane being a degenerate light cone. == Applications == === Conformal field theory === In relativistic quantum field theories, the possibility of symmetries is strictly restricted by the Coleman–Mandula theorem under physically reasonable assumptions. The largest possible global symmetry group of a non-supersymmetric interacting field theory is a direct product of the conformal group with an internal group. Such theories are known as conformal field theories. === Second-order phase transitions === One particular application is to critical phenomena in systems with local interactions. Fluctuations in such systems are conformally invariant at the critical point. That allows for classification of universality classes of phase transitions in terms of conformal field theories. Conformal invariance is also present in two-dimensional turbulence at high Reynolds number. === High-energy physics === Many theories studied in high-energy physics admit conformal symmetry because it is typically implied by local scale invariance. A famous example is d=4, N=4 supersymmetric Yang–Mills theory, due to its relevance for the AdS/CFT correspondence. Also, the worldsheet in string theory is described by a two-dimensional conformal field theory coupled to two-dimensional gravity. == Mathematical proofs of conformal invariance in lattice models == Physicists have found that many lattice models become conformally invariant in the critical limit. However, mathematical proofs of these results have only appeared much later, and only in some cases.
In 2010, the mathematician Stanislav Smirnov was awarded the Fields medal "for the proof of conformal invariance of percolation and the planar Ising model in statistical physics". In 2020, the mathematician Hugo Duminil-Copin and his collaborators proved that rotational invariance exists at the boundary between phases in many physical systems. == See also == Conformal map Conformal group Coleman–Mandula theorem Renormalization group Scale invariance Superconformal algebra Conformal Killing equation == References == == Sources == Di Francesco, Philippe; Mathieu, Pierre; Sénéchal, David (1997). Conformal Field Theory. Springer Science & Business Media. ISBN 978-0-387-94785-3.
Wikipedia/Conformal_algebra
In thermodynamics, a critical point (or critical state) is the end point of a phase equilibrium curve. One example is the liquid–vapor critical point, the end point of the pressure–temperature curve that designates conditions under which a liquid and its vapor can coexist. At higher temperatures, the gas enters a supercritical phase, and so cannot be liquefied by pressure alone. At the critical point, defined by a critical temperature Tc and a critical pressure pc, phase boundaries vanish. Other examples include the liquid–liquid critical points in mixtures, and the ferromagnet–paramagnet transition (Curie temperature) in the absence of an external magnetic field. == Liquid–vapor critical point == === Overview === For simplicity and clarity, the generic notion of critical point is best introduced by discussing a specific example, the vapor–liquid critical point. This was the first critical point to be discovered, and it is still the best known and most studied one. The figure shows the schematic P-T diagram of a pure substance (as opposed to mixtures, which have additional state variables and richer phase diagrams, discussed below). The commonly known phases solid, liquid and vapor are separated by phase boundaries, i.e. pressure–temperature combinations where two phases can coexist. At the triple point, all three phases can coexist. However, the liquid–vapor boundary terminates in an endpoint at some critical temperature Tc and critical pressure pc. This is the critical point. The critical point of water occurs at 647.096 K (373.946 °C; 705.103 °F) and 22.064 megapascals (3,200.1 psi; 217.75 atm; 220.64 bar). In the vicinity of the critical point, the physical properties of the liquid and the vapor change dramatically, with the two phases becoming ever more similar. For instance, liquid water under normal conditions is nearly incompressible, has a low thermal expansion coefficient, has a high dielectric constant, and is an excellent solvent for electrolytes. 
Near the critical point, all these properties change into the exact opposite: water becomes compressible, expandable, a poor dielectric, a bad solvent for electrolytes, and mixes more readily with nonpolar gases and organic molecules. At the critical point, only one phase exists. The heat of vaporization is zero. There is a stationary inflection point in the constant-temperature line (critical isotherm) on a PV diagram. This means that at the critical point: ( ∂ p ∂ V ) T = 0 , {\displaystyle \left({\frac {\partial p}{\partial V}}\right)_{T}=0,} ( ∂ 2 p ∂ V 2 ) T = 0. {\displaystyle \left({\frac {\partial ^{2}p}{\partial V^{2}}}\right)_{T}=0.} Above the critical point there exists a state of matter that is continuously connected with (can be transformed without phase transition into) both the liquid and the gaseous state. It is called supercritical fluid. The common textbook knowledge that all distinction between liquid and vapor disappears beyond the critical point has been challenged by Fisher and Widom, who identified a p–T line that separates states with different asymptotic statistical properties (Fisher–Widom line). Sometimes the critical point does not manifest in most thermodynamic or mechanical properties, but is "hidden" and reveals itself in the onset of inhomogeneities in elastic moduli, marked changes in the appearance and local properties of non-affine droplets, and a sudden enhancement in defect pair concentration. === History === The existence of a critical point was first discovered by Charles Cagniard de la Tour in 1822 and named by Dmitri Mendeleev in 1860 and Thomas Andrews in 1869. Cagniard showed that CO2 could be liquefied at 31 °C at a pressure of 73 atm, but not at a slightly higher temperature, even under pressures as high as 3000 atm. 
=== Theory === Solving the above condition ( ∂ p / ∂ V ) T = 0 {\displaystyle (\partial p/\partial V)_{T}=0} for the van der Waals equation, one can compute the critical point as T c = 8 a 27 R b , V c = 3 n b , p c = a 27 b 2 . {\displaystyle T_{\text{c}}={\frac {8a}{27Rb}},\quad V_{\text{c}}=3nb,\quad p_{\text{c}}={\frac {a}{27b^{2}}}.} However, the van der Waals equation, based on a mean-field theory, does not hold near the critical point. In particular, it predicts wrong scaling laws. To analyse properties of fluids near the critical point, reduced state variables are sometimes defined relative to the critical properties T r = T T c , p r = p p c , V r = V R T c / p c . {\displaystyle T_{\text{r}}={\frac {T}{T_{\text{c}}}},\quad p_{\text{r}}={\frac {p}{p_{\text{c}}}},\quad V_{\text{r}}={\frac {V}{RT_{\text{c}}/p_{\text{c}}}}.} The principle of corresponding states indicates that substances at equal reduced pressures and temperatures have equal reduced volumes. This relationship is approximately true for many substances, but becomes increasingly inaccurate for large values of pr. For some gases, there is an additional correction factor, called Newton's correction, added to the critical temperature and critical pressure calculated in this manner. These are empirically derived values and vary with the pressure range of interest. === Table of liquid–vapor critical temperature and pressure for selected substances === == Mixtures: liquid–liquid critical point == The liquid–liquid critical point of a solution, which occurs at the critical solution temperature, occurs at the limit of the two-phase region of the phase diagram. In other words, it is the point at which an infinitesimal change in some thermodynamic variable (such as temperature or pressure) leads to separation of the mixture into two distinct liquid phases, as shown in the polymer–solvent phase diagram to the right. 
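The van der Waals critical-point formulas from the Theory section can be checked numerically. A minimal sketch, using illustrative constants for CO2 (a ≈ 0.3640 Pa·m⁶/mol², b ≈ 4.267×10⁻⁵ m³/mol, one mole so that Vc = 3b); the computed values also land near the 31 °C and 73 atm that Cagniard de la Tour observed for CO2:

```python
# Illustrative van der Waals constants for CO2 (SI units):
# a in Pa m^6/mol^2, b in m^3/mol; R in J/(mol K)
R = 8.314
a, b = 0.3640, 4.267e-5

def p(V, T):
    """van der Waals pressure for one mole: p = RT/(V - b) - a/V^2."""
    return R*T/(V - b) - a/V**2

Vc = 3*b                 # critical volume (one mole)
Tc = 8*a/(27*R*b)        # critical temperature
pc = a/(27*b**2)         # critical pressure

# ~31 degrees C and ~73 atm, matching Cagniard de la Tour's CO2 data
print(round(Tc - 273.15), round(pc/101325))   # -> 31 73

# both isothermal derivatives vanish at the critical point
h = Vc*1e-4
dp = (p(Vc + h, Tc) - p(Vc - h, Tc))/(2*h)
d2p = (p(Vc + h, Tc) - 2*p(Vc, Tc) + p(Vc - h, Tc))/h**2
assert abs(dp) < 1e-5*pc/Vc
assert abs(d2p) < 1e-5*pc/Vc**2
```

The derivative checks use central finite differences scaled by the natural units pc/Vc and pc/Vc², so the tolerances only need to absorb truncation and round-off error.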
Two types of liquid–liquid critical points are the upper critical solution temperature (UCST), which is the hottest point at which cooling induces phase separation, and the lower critical solution temperature (LCST), which is the coldest point at which heating induces phase separation. === Mathematical definition === From a theoretical standpoint, the liquid–liquid critical point represents the temperature–concentration extremum of the spinodal curve (as can be seen in the figure to the right). Thus, the liquid–liquid critical point in a two-component system must satisfy two conditions: the condition of the spinodal curve (the second derivative of the free energy with respect to concentration must equal zero), and the extremum condition (the third derivative of the free energy with respect to concentration must also equal zero or the derivative of the spinodal temperature with respect to concentration must equal zero). == See also == == References == == Further reading == "Revised Release on the IAPWS Industrial Formulation 1997 for the Thermodynamic Properties of Water and Steam" (PDF). International Association for the Properties of Water and Steam. August 2007. Retrieved 2009-06-09. "Critical points for some common solvents". ProSciTech. Archived from the original on 2008-01-31. "Critical Temperature and Pressure". Department of Chemistry. Purdue University. Retrieved 2006-12-03.
Wikipedia/Critical_point_(physics)
In theoretical physics, a logarithmic conformal field theory is a conformal field theory in which the correlators of the basic fields are allowed to be logarithmic at short distance, instead of being powers of the fields' distance. Equivalently, the dilation operator is not diagonalizable. Examples of logarithmic conformal field theories include critical percolation. == In two dimensions == Just like conformal field theory in general, logarithmic conformal field theory has been particularly well-studied in two dimensions. Some two-dimensional logarithmic CFTs have been solved: The Gaberdiel–Kausch CFT at central charge c = − 2 {\displaystyle c=-2} , which is rational with respect to its extended symmetry algebra, namely the triplet algebra. The G L ( 1 | 1 ) {\displaystyle GL(1|1)} Wess–Zumino–Witten model, based on the simplest non-trivial supergroup. The triplet model at c = 0 {\displaystyle c=0} is also rational with respect to the triplet algebra. == References ==
Wikipedia/Logarithmic_conformal_field_theory
In quantum field theory, the Wightman distributions can be analytically continued to analytic functions in Euclidean space, with the domain restricted to ordered n-tuples in R d {\displaystyle \mathbb {R} ^{d}} that are pairwise distinct. These functions are called the Schwinger functions (named after Julian Schwinger); they are real-analytic, symmetric under the permutation of arguments (antisymmetric for fermionic fields), Euclidean covariant, and satisfy a property known as reflection positivity. The properties of Schwinger functions are codified in the Osterwalder–Schrader axioms (named after Konrad Osterwalder and Robert Schrader). Schwinger functions are also referred to as Euclidean correlation functions. == Osterwalder–Schrader axioms == Here we describe the Osterwalder–Schrader (OS) axioms for a Euclidean quantum field theory of a Hermitian scalar field ϕ ( x ) {\displaystyle \phi (x)} , x ∈ R d {\displaystyle x\in \mathbb {R} ^{d}} . Note that a typical quantum field theory will contain infinitely many local operators, including composite operators, and their correlators should also satisfy OS axioms similar to the ones described below. The Schwinger functions of ϕ {\displaystyle \phi } are denoted as S n ( x 1 , … , x n ) ≡ ⟨ ϕ ( x 1 ) ϕ ( x 2 ) … ϕ ( x n ) ⟩ , x k ∈ R d . {\displaystyle S_{n}(x_{1},\ldots ,x_{n})\equiv \langle \phi (x_{1})\phi (x_{2})\ldots \phi (x_{n})\rangle ,\quad x_{k}\in \mathbb {R} ^{d}.} The OS axioms are numbered (E0)–(E4) and have the following meaning: (E0) Temperedness (E1) Euclidean covariance (E2) Positivity (E3) Symmetry (E4) Cluster property === Temperedness === The temperedness axiom (E0) says that Schwinger functions are tempered distributions away from coincident points. This means that they can be integrated against Schwartz test functions which vanish, with all their derivatives, at configurations where two or more points coincide. 
It can be shown from this axiom and other OS axioms (but not the linear growth condition) that Schwinger functions are in fact real-analytic away from coincident points. === Euclidean covariance === Euclidean covariance axiom (E1) says that Schwinger functions transform covariantly under rotations and translations, namely: S n ( x 1 , … , x n ) = S n ( R x 1 + b , … , R x n + b ) {\displaystyle S_{n}(x_{1},\ldots ,x_{n})=S_{n}(Rx_{1}+b,\ldots ,Rx_{n}+b)} for an arbitrary rotation matrix R ∈ S O ( d ) {\displaystyle R\in SO(d)} and an arbitrary translation vector b ∈ R d {\displaystyle b\in \mathbb {R} ^{d}} . OS axioms can be formulated for Schwinger functions of fields transforming in arbitrary representations of the rotation group. === Symmetry === Symmetry axiom (E3) says that Schwinger functions are invariant under permutations of points: S n ( x 1 , … , x n ) = S n ( x π ( 1 ) , … , x π ( n ) ) {\displaystyle S_{n}(x_{1},\ldots ,x_{n})=S_{n}(x_{\pi (1)},\ldots ,x_{\pi (n)})} , where π {\displaystyle \pi } is an arbitrary permutation of { 1 , … , n } {\displaystyle \{1,\ldots ,n\}} . Schwinger functions of fermionic fields are instead antisymmetric; for them this equation would have a ± sign equal to the signature of the permutation. === Cluster property === Cluster property (E4) says that Schwinger function S p + q {\displaystyle S_{p+q}} reduces to the product S p S q {\displaystyle S_{p}S_{q}} if two groups of points are separated from each other by a large constant translation: lim b → ∞ S p + q ( x 1 , … , x p , x p + 1 + b , … , x p + q + b ) = S p ( x 1 , … , x p ) S q ( x p + 1 , … , x p + q ) {\displaystyle \lim _{b\to \infty }S_{p+q}(x_{1},\ldots ,x_{p},x_{p+1}+b,\ldots ,x_{p+q}+b)=S_{p}(x_{1},\ldots ,x_{p})S_{q}(x_{p+1},\ldots ,x_{p+q})} . The limit is understood in the sense of distributions. 
There is also a technical assumption that the two groups of points lie on two sides of the x 0 = 0 {\displaystyle x^{0}=0} hyperplane, while the vector b {\displaystyle b} is parallel to it: x 1 0 , … , x p 0 > 0 , x p + 1 0 , … , x p + q 0 < 0 , b 0 = 0. {\displaystyle x_{1}^{0},\ldots ,x_{p}^{0}>0,\quad x_{p+1}^{0},\ldots ,x_{p+q}^{0}<0,\quad b^{0}=0.} === Reflection positivity === The positivity axiom (E2) asserts the following property, called (Osterwalder–Schrader) reflection positivity. Pick an arbitrary coordinate τ and pick a test function fN with N points as its arguments. Assume fN has its support in the "time-ordered" subset of N points with 0 < τ1 < ... < τN. Choose one such fN for each positive N, with the f's being zero for all N larger than some integer M. Given a point x {\displaystyle x} , let x θ {\displaystyle x^{\theta }} be the reflected point about the τ = 0 hyperplane. Then, ∑ m , n ∫ d d x 1 ⋯ d d x m d d y 1 ⋯ d d y n S m + n ( x 1 , … , x m , y 1 , … , y n ) f m ( x 1 θ , … , x m θ ) ∗ f n ( y 1 , … , y n ) ≥ 0 {\displaystyle \sum _{m,n}\int d^{d}x_{1}\cdots d^{d}x_{m}\,d^{d}y_{1}\cdots d^{d}y_{n}S_{m+n}(x_{1},\dots ,x_{m},y_{1},\dots ,y_{n})f_{m}(x_{1}^{\theta },\dots ,x_{m}^{\theta })^{*}f_{n}(y_{1},\dots ,y_{n})\geq 0} where * represents complex conjugation. Sometimes in the theoretical physics literature reflection positivity is stated as the requirement that the Schwinger function of arbitrary even order should be non-negative if points are inserted symmetrically with respect to the τ = 0 {\displaystyle \tau =0} hyperplane: S 2 n ( x 1 , … , x n , x n θ , … , x 1 θ ) ≥ 0 {\displaystyle S_{2n}(x_{1},\dots ,x_{n},x_{n}^{\theta },\dots ,x_{1}^{\theta })\geq 0} . This property indeed follows from reflection positivity, but it is weaker than full reflection positivity. ==== Intuitive understanding ==== One way of (formally) constructing Schwinger functions which satisfy the above properties is through the Euclidean path integral. 
In particular, Euclidean path integrals (formally) satisfy reflection positivity. Let F be any polynomial functional of the field φ which only depends upon the value of φ(x) for those points x whose τ coordinates are nonnegative. Then ∫ D ϕ F [ ϕ ( x ) ] F [ ϕ ( x θ ) ] ∗ e − S [ ϕ ] = ∫ D ϕ 0 ∫ ϕ + ( τ = 0 ) = ϕ 0 D ϕ + F [ ϕ + ] e − S + [ ϕ + ] ∫ ϕ − ( τ = 0 ) = ϕ 0 D ϕ − F [ ( ϕ − ) θ ] ∗ e − S − [ ϕ − ] . {\displaystyle \int {\mathcal {D}}\phi F[\phi (x)]F[\phi (x^{\theta })]^{*}e^{-S[\phi ]}=\int {\mathcal {D}}\phi _{0}\int _{\phi _{+}(\tau =0)=\phi _{0}}{\mathcal {D}}\phi _{+}F[\phi _{+}]e^{-S_{+}[\phi _{+}]}\int _{\phi _{-}(\tau =0)=\phi _{0}}{\mathcal {D}}\phi _{-}F[(\phi _{-})^{\theta }]^{*}e^{-S_{-}[\phi _{-}]}.} Since the action S is real and can be split into S + {\displaystyle S_{+}} , which only depends on φ on the positive half-space ( ϕ + {\displaystyle \phi _{+}} ), and S − {\displaystyle S_{-}} which only depends upon φ on the negative half-space ( ϕ − {\displaystyle \phi _{-}} ), and if S also happens to be invariant under the combined action of taking a reflection and complex conjugating all the fields, then the previous quantity has to be nonnegative. == Osterwalder–Schrader theorem == The Osterwalder–Schrader theorem states that Euclidean Schwinger functions which satisfy the above axioms (E0)-(E4) and an additional property (E0') called linear growth condition can be analytically continued to Lorentzian Wightman distributions which satisfy Wightman axioms and thus define a quantum field theory. 
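As a toy numerical illustration of reflection positivity: take the two-point function S₂(x,y) = e^{−m|x−y|}/(2m) of the free field in one Euclidean dimension (an assumption chosen for illustration, not a formula from the text above). The matrix built from points in the upper half-line and their reflections is positive semidefinite:

```python
import numpy as np

m = 1.0

def S2(x, y):
    # two-point Schwinger function of the free field in one Euclidean
    # dimension (illustrative choice): S2(x, y) = exp(-m|x - y|)/(2m)
    return np.exp(-m*abs(x - y))/(2*m)

tau = [0.3, 0.7, 1.2, 2.5]        # points in the upper half-line, tau > 0
# reflect the first argument about tau = 0: theta(tau) = -tau
M = np.array([[S2(-ti, tj) for tj in tau] for ti in tau])

# M_ij = exp(-m(t_i + t_j))/(2m) is a rank-one outer product,
# hence positive semidefinite
eigs = np.linalg.eigvalsh(M)
assert eigs.min() > -1e-12
```

Here positivity is exact because M factors as v vᵀ with v_i = e^{−mτ_i}/√(2m); for a general reflection-positive two-point function only the semidefiniteness survives, not the rank-one structure.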
=== Linear growth condition === This condition, called (E0'), asserts that when the Schwinger function of order n {\displaystyle n} is paired with an arbitrary Schwartz test function f {\displaystyle f} which vanishes at coincident points, we have the following bound: | S n ( f ) | ≤ σ n | f | C ⋅ n , {\displaystyle |S_{n}(f)|\leq \sigma _{n}|f|_{C\cdot n},} where C ∈ N {\displaystyle C\in \mathbb {N} } is an integer constant, | f | C ⋅ n {\displaystyle |f|_{C\cdot n}} is the Schwartz-space seminorm of order N = C ⋅ n {\displaystyle N=C\cdot n} , i.e. | f | N = sup | α | ≤ N , x ∈ R d | ( 1 + | x | ) N D α f ( x ) | , {\displaystyle |f|_{N}=\sup _{|\alpha |\leq N,x\in \mathbb {R} ^{d}}|(1+|x|)^{N}D^{\alpha }f(x)|,} and σ n {\displaystyle \sigma _{n}} is a sequence of constants of factorial growth, i.e. σ n ≤ A ( n ! ) B {\displaystyle \sigma _{n}\leq A(n!)^{B}} with some constants A , B {\displaystyle A,B} . The linear growth condition is subtle, as it has to be satisfied for all Schwinger functions simultaneously. It also has not been derived from the Wightman axioms, so the system of OS axioms (E0)-(E4) plus the linear growth condition (E0') appears to be stronger than the Wightman axioms. === History === At first, Osterwalder and Schrader claimed a stronger theorem that the axioms (E0)-(E4) by themselves imply the Wightman axioms; however, their proof contained an error which could not be corrected without adding extra assumptions. Two years later they published a new theorem, with the linear growth condition added as an assumption, and a correct proof. The new proof is based on a complicated inductive argument (proposed also by Vladimir Glaser), by which the region of analyticity of Schwinger functions is gradually extended towards Minkowski space, and Wightman distributions are recovered as a limit. The linear growth condition (E0') is crucially used to show that the limit exists and is a tempered distribution. 
Osterwalder's and Schrader's paper also contains another theorem replacing (E0') by yet another assumption called (E0) ˇ {\displaystyle {\check {\text{(E0)}}}} . This other theorem is rarely used, since (E0) ˇ {\displaystyle {\check {\text{(E0)}}}} is hard to check in practice. == Other axioms for Schwinger functions == === Axioms by Glimm and Jaffe === An alternative approach to the axiomatization of Euclidean correlators is described by Glimm and Jaffe in their book. In this approach one assumes that one is given a measure d μ {\displaystyle d\mu } on the space of distributions ϕ ∈ D ′ ( R d ) {\displaystyle \phi \in D'(\mathbb {R} ^{d})} . One then considers a generating functional S ( f ) = ∫ e ϕ ( f ) d μ , f ∈ D ( R d ) {\displaystyle S(f)=\int e^{\phi (f)}d\mu ,\quad f\in D(\mathbb {R} ^{d})} which is assumed to satisfy properties OS0-OS4: (OS0) Analyticity. This asserts that z = ( z 1 , … , z n ) ↦ S ( ∑ i = 1 n z i f i ) {\displaystyle z=(z_{1},\ldots ,z_{n})\mapsto S\left(\sum _{i=1}^{n}z_{i}f_{i}\right)} is an entire analytic function of z ∈ C n {\displaystyle z\in \mathbb {C} ^{n}} for any collection of n {\displaystyle n} compactly supported test functions f i ∈ D ( R d ) {\displaystyle f_{i}\in D(\mathbb {R} ^{d})} . Intuitively, this means that the measure d μ {\displaystyle d\mu } decays faster than any exponential. (OS1) Regularity. This demands a growth bound for S ( f ) {\displaystyle S(f)} in terms of f {\displaystyle f} , such as | S ( f ) | ≤ exp ⁡ ( C ∫ d d x | f ( x ) | ) {\displaystyle |S(f)|\leq \exp \left(C\int d^{d}x|f(x)|\right)} . See Glimm and Jaffe for the precise condition. (OS2) Euclidean invariance. This says that the functional S ( f ) {\displaystyle S(f)} is invariant under Euclidean transformations f ( x ) ↦ f ( R x + b ) {\displaystyle f(x)\mapsto f(Rx+b)} . (OS3) Reflection positivity. Take a finite sequence of test functions f i ∈ D ( R d ) {\displaystyle f_{i}\in D(\mathbb {R} ^{d})} which are all supported in the upper half-space, i.e. 
at x 0 > 0 {\displaystyle x^{0}>0} . Denote by θ f i ( x ) = f i ( θ x ) {\displaystyle \theta f_{i}(x)=f_{i}(\theta x)} where θ {\displaystyle \theta } is the reflection operation defined above. This axiom says that the matrix M i j = S ( f i + θ f j ) {\displaystyle M_{ij}=S(f_{i}+\theta f_{j})} has to be positive semidefinite. (OS4) Ergodicity. The time translation semigroup acts ergodically on the measure space ( D ′ ( R d ) , d μ ) {\displaystyle (D'(\mathbb {R} ^{d}),d\mu )} . See Glimm and Jaffe for the precise condition. ==== Relation to Osterwalder–Schrader axioms ==== Although the above axioms were named by Glimm and Jaffe (OS0)-(OS4) in honor of Osterwalder and Schrader, they are not equivalent to the Osterwalder–Schrader axioms. Given (OS0)-(OS4), one can define Schwinger functions of ϕ {\displaystyle \phi } as moments of the measure d μ {\displaystyle d\mu } , and show that these moments satisfy the Osterwalder–Schrader axioms (E0)-(E4) and also the linear growth condition (E0'). Then one can appeal to the Osterwalder–Schrader theorem to show that the Wightman functions are tempered distributions. Alternatively, and much more easily, one can derive the Wightman axioms directly from (OS0)-(OS4). Note however that the full quantum field theory will contain infinitely many other local operators apart from ϕ {\displaystyle \phi } , such as ϕ 2 {\displaystyle \phi ^{2}} , ϕ 4 {\displaystyle \phi ^{4}} and other composite operators built from ϕ {\displaystyle \phi } and its derivatives. It is not easy to extract these Schwinger functions from the measure d μ {\displaystyle d\mu } and show that they satisfy the OS axioms, as should be the case. To summarize, the axioms called (OS0)-(OS4) by Glimm and Jaffe are stronger than the OS axioms as far as the correlators of the field ϕ {\displaystyle \phi } are concerned, but weaker than the full set of OS axioms, since they don't say much about correlators of composite operators. 
=== Nelson's axioms === These axioms were proposed by Edward Nelson. See also their description in the book of Barry Simon. As in the axioms by Glimm and Jaffe above, one assumes that the field ϕ ∈ D ′ ( R d ) {\displaystyle \phi \in D'(\mathbb {R} ^{d})} is a random distribution with a measure d μ {\displaystyle d\mu } . This measure is sufficiently regular that the field ϕ {\displaystyle \phi } has the regularity of a Sobolev space of negative derivative order. The crucial feature of these axioms is to consider the field restricted to a surface. One of the axioms is the Markov property, which formalizes the intuitive notion that the state of the field inside a closed surface depends only on the state of the field on the surface. == See also == Wick rotation Axiomatic quantum field theory Wightman axioms == References ==
Wikipedia/Schwinger_functions
In computer science, an output-sensitive algorithm is an algorithm whose running time depends on the size of the output, instead of, or in addition to, the size of the input. For certain problems where the output size varies widely, for example from linear in the size of the input to quadratic in the size of the input, analyses that take the output size explicitly into account can produce better runtime bounds that differentiate algorithms that would otherwise have identical asymptotic complexity. == Examples == === Division by subtraction === A simple example of an output-sensitive algorithm is given by the division-by-subtraction algorithm, which computes the quotient and remainder of dividing two positive integers using only addition, subtraction, and comparisons. This algorithm takes Θ(Q) time, and so can be fast in scenarios where the quotient Q is known to be small. In cases where Q is large, however, it is outperformed by more complex algorithms such as long division. === Computational geometry === Convex hull algorithms for finding the convex hull of a finite set of points in the plane require Ω(n log n) time for n points; even relatively simple algorithms like the Graham scan achieve this lower bound. If the convex hull uses all n points, this is the best we can do; however, for many practical sets of points, and in particular for random sets of points, the number of points h in the convex hull is typically much smaller than n. Consequently, output-sensitive algorithms such as the ultimate convex hull algorithm and Chan's algorithm, which require only O(n log h) time, are considerably faster for such point sets. Output-sensitive algorithms arise frequently in computational geometry applications and have been described for problems such as hidden surface removal and resolving range filter conflicts in router tables. 
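The division-by-subtraction routine described above can be sketched as follows (the function name is illustrative):

```python
def divide_by_subtraction(n: int, d: int) -> tuple[int, int]:
    """Return (quotient, remainder) of n divided by d, for n >= 0 and
    d > 0, using only addition, subtraction, and comparison."""
    q, r = 0, n
    while r >= d:
        r -= d      # one subtraction per unit of the quotient,
        q += 1      # so the loop body runs exactly Q times: Theta(Q)
    return q, r

print(divide_by_subtraction(17, 5))   # -> (3, 2)
```

The running time is measured by the output Q = n // d, not by the input size, which is what makes the algorithm output-sensitive.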
Frank Nielsen describes a general paradigm of output-sensitive algorithms known as grouping and querying and gives such an algorithm for computing cells of a Voronoi diagram. Nielsen breaks these algorithms into two stages: estimating the output size, and then building data structures based on that estimate which are queried to construct the final solution. == Generalizations == A more general kind of output-sensitive algorithm is the enumeration algorithm, which enumerates the set of solutions to a problem. In this context, the performance of algorithms is also measured in an output-sensitive way, in addition to more sensitive measures, e.g., bounding the delay between any two successive solutions. == See also == Lazy evaluation == References ==
Wikipedia/Output-sensitive_algorithm
The Remez algorithm or Remez exchange algorithm, published by Evgeny Yakovlevich Remez in 1934, is an iterative algorithm used to find simple approximations to functions, specifically, approximations by functions in a Chebyshev space that are the best in the uniform norm L∞ sense. It is sometimes referred to as the Remes algorithm or Reme algorithm. A typical example of a Chebyshev space is the subspace of Chebyshev polynomials of order n in the space of real continuous functions on an interval, C[a, b]. The polynomial of best approximation within a given subspace is defined to be the one that minimizes the maximum absolute difference between the polynomial and the function. In this case, the form of the solution is characterized by the equioscillation theorem. == Procedure == The Remez algorithm starts with the function f {\displaystyle f} to be approximated and a set X {\displaystyle X} of n + 2 {\displaystyle n+2} sample points x 1 , x 2 , . . . , x n + 2 {\displaystyle x_{1},x_{2},...,x_{n+2}} in the approximation interval, usually the extrema of a Chebyshev polynomial linearly mapped to the interval. The steps are: Solve the linear system of equations b 0 + b 1 x i + . . . + b n x i n + ( − 1 ) i E = f ( x i ) {\displaystyle b_{0}+b_{1}x_{i}+...+b_{n}x_{i}^{n}+(-1)^{i}E=f(x_{i})} (where i = 1 , 2 , . . . n + 2 {\displaystyle i=1,2,...n+2} ), for the unknowns b 0 , b 1 . . . b n {\displaystyle b_{0},b_{1}...b_{n}} and E. Use the b i {\displaystyle b_{i}} as coefficients to form a polynomial P n {\displaystyle P_{n}} . Find the set M {\displaystyle M} of points of local maximum error | P n ( x ) − f ( x ) | {\displaystyle |P_{n}(x)-f(x)|} . If the errors at every m ∈ M {\displaystyle m\in M} are of equal magnitude and alternate in sign, then P n {\displaystyle P_{n}} is the minimax approximation polynomial. If not, replace X {\displaystyle X} with M {\displaystyle M} and repeat the steps above. 
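The procedure above can be sketched in Python. This is a simplified version: the local-maximum search of step 3 is replaced by picking the largest-error grid point in each panel around the current reference, and the convergence test and safeguards of production implementations are omitted:

```python
import numpy as np

def remez(f, n, a=-1.0, b=1.0, iters=10, grid_size=4001):
    """Sketch of the Remez exchange algorithm: degree-n minimax
    approximation of f on [a, b]."""
    k = np.arange(n + 2)
    # initial reference: Chebyshev extrema mapped to [a, b]
    x = np.sort(0.5*(a + b) - 0.5*(b - a)*np.cos(np.pi*k/(n + 1)))
    grid = np.linspace(a, b, grid_size)
    for _ in range(iters):
        # solve b_0 + b_1 x_i + ... + b_n x_i^n + (-1)^i E = f(x_i)
        A = np.hstack([np.vander(x, n + 1, increasing=True),
                       ((-1.0)**k)[:, None]])
        sol = np.linalg.solve(A, f(x))
        p, E = np.polynomial.Polynomial(sol[:-1]), sol[-1]
        # exchange: move each reference point to the largest error
        # in its panel of the grid (a crude local-maximum search)
        err = np.abs(p(grid) - f(grid))
        edges = np.concatenate(([a], 0.5*(x[:-1] + x[1:]), [b]))
        x = np.array([grid[(grid >= lo) & (grid <= hi)][
                          np.argmax(err[(grid >= lo) & (grid <= hi)])]
                      for lo, hi in zip(edges[:-1], edges[1:])])
    return p, abs(E)

p, E = remez(np.exp, 3)   # E is the levelled (equioscillating) error
```

For np.exp with n = 3 this converges in a few iterations to a levelled error E ≈ 5.5×10⁻³, with the maximum error over the interval matching E to grid accuracy.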
The result is called the polynomial of best approximation or the minimax approximation algorithm. A review of technicalities in implementing the Remez algorithm is given by W. Fraser. === Choice of initialization === The Chebyshev nodes are a common choice for the initial approximation because of their role in the theory of polynomial interpolation. For the initialization of the optimization problem for function f by the Lagrange interpolant Ln(f), it can be shown that this initial approximation is bounded by ‖ f − L n ( f ) ‖ ∞ ≤ ( 1 + ‖ L n ‖ ∞ ) inf p ∈ P n ‖ f − p ‖ {\displaystyle \lVert f-L_{n}(f)\rVert _{\infty }\leq (1+\lVert L_{n}\rVert _{\infty })\inf _{p\in P_{n}}\lVert f-p\rVert } with the norm or Lebesgue constant of the Lagrange interpolation operator Ln of the nodes (t1, ..., tn + 1) being ‖ L n ‖ ∞ = Λ ¯ n ( T ) = max − 1 ≤ x ≤ 1 λ n ( T ; x ) , {\displaystyle \lVert L_{n}\rVert _{\infty }={\overline {\Lambda }}_{n}(T)=\max _{-1\leq x\leq 1}\lambda _{n}(T;x),} T being the zeros of the Chebyshev polynomials, and the Lebesgue functions being λ n ( T ; x ) = ∑ j = 1 n + 1 | l j ( x ) | , l j ( x ) = ∏ i ≠ j i = 1 n + 1 ( x − t i ) ( t j − t i ) . {\displaystyle \lambda _{n}(T;x)=\sum _{j=1}^{n+1}\left|l_{j}(x)\right|,\quad l_{j}(x)=\prod _{\stackrel {i=1}{i\neq j}}^{n+1}{\frac {(x-t_{i})}{(t_{j}-t_{i})}}.} Theodore A. Kilgore, Carl de Boor, and Allan Pinkus proved that there exists a unique ti for each Ln, although not known explicitly for (ordinary) polynomials. Similarly, Λ _ n ( T ) = min − 1 ≤ x ≤ 1 λ n ( T ; x ) {\displaystyle {\underline {\Lambda }}_{n}(T)=\min _{-1\leq x\leq 1}\lambda _{n}(T;x)} , and the optimality of a choice of nodes can be expressed as Λ ¯ n − Λ _ n ≥ 0. 
Λ̄_n − Λ_n ≥ 0.

For Chebyshev nodes, which provide a suboptimal but analytically explicit choice, the asymptotic behavior is known as

Λ̄_n(T) = (2/π) log(n+1) + (2/π)(γ + log(8/π)) + α_{n+1}

(γ being the Euler–Mascheroni constant) with

0 < α_n < π/(72n²) for n ≥ 1,

and upper bound

Λ̄_n(T) ≤ (2/π) log(n+1) + 1.

Lev Brutman obtained the bound for n ≥ 3, and T̂ being the zeros of the expanded Chebyshev polynomials:

Λ̄_n(T̂) − Λ_n(T̂) < Λ̄_3 − (1/6) cot(π/8) + (π/64)·(1/sin²(3π/16)) − (2/π)(γ − log π) ≈ 0.201.

Rüdiger Günttner obtained from a sharper estimate for n ≥ 40:

Λ̄_n(T̂) − Λ_n(T̂) < 0.0196.

== Detailed discussion ==

This section provides more information on the steps outlined above. In this section, the index i runs from 0 to n+1.

Step 1: Given x_0, x_1, ..., x_{n+1}, solve the linear system of n+2 equations

b_0 + b_1·x_i + ... + b_n·x_i^n + (−1)^i·E = f(x_i)  (where i = 0, 1, ..., n+1)

for the unknowns b_0, b_1, ..., b_n and E. It should be clear that (−1)^i·E in this equation makes sense only if the nodes x_0, ..., x_{n+1} are ordered, either strictly increasing or strictly decreasing. Then this linear system has a unique solution. (As is well known, not every linear system has a solution.) Also, the solution can be obtained with only O(n²) arithmetic operations while a standard solver from the library would take O(n³) operations. Here is the simple proof:

Compute the standard n-th degree interpolant p_1(x) to f(x) at the first n+1 nodes and also the standard n-th degree interpolant p_2(x) to the ordinates (−1)^i:

p_1(x_i) = f(x_i), p_2(x_i) = (−1)^i, i = 0, ..., n.

To this end, use each time Newton's interpolation formula with the divided differences of order 0, ..., n and O(n²) arithmetic operations. The polynomial p_2(x) has its i-th zero between x_{i−1} and x_i, i = 1, ..., n, and thus no further zeroes between x_n and x_{n+1}: p_2(x_n) and p_2(x_{n+1}) have the same sign (−1)^n. The linear combination

p(x) := p_1(x) − p_2(x)·E

is also a polynomial of degree n and

p(x_i) = p_1(x_i) − p_2(x_i)·E = f(x_i) − (−1)^i·E, i = 0, ..., n.

This is the same as the equation above for i = 0, ..., n and for any choice of E. The same equation for i = n+1 is

p(x_{n+1}) = p_1(x_{n+1}) − p_2(x_{n+1})·E = f(x_{n+1}) − (−1)^{n+1}·E

and needs special reasoning: solved for the variable E, it is the definition of E:

E := (p_1(x_{n+1}) − f(x_{n+1})) / (p_2(x_{n+1}) + (−1)^n).

As mentioned above, the two terms in the denominator have the same sign: E, and thus p(x) ≡ b_0 + b_1·x + ... + b_n·x^n, are always well-defined. The error at the given n+2 ordered nodes is positive and negative in turn because

p(x_i) − f(x_i) = −(−1)^i·E, i = 0, ..., n+1.

The theorem of de La Vallée Poussin states that under this condition no polynomial of degree n exists with error less than E. Indeed, if such a polynomial existed, call it p̃(x), then the difference

p(x) − p̃(x) = (p(x) − f(x)) − (p̃(x) − f(x))

would still be positive/negative at the n+2 nodes x_i and therefore have at least n+1 zeros, which is impossible for a polynomial of degree n. Thus, this E is a lower bound for the minimum error which can be achieved with polynomials of degree n.

Step 2 changes the notation from b_0 + b_1·x + ... + b_n·x^n to p(x).

Step 3 improves upon the input nodes x_0, ...
, x_{n+1} and their errors ±E as follows. In each P-region, the current node x_i is replaced with the local maximizer x̄_i and in each N-region x_i is replaced with the local minimizer. (Expect x̄_0 at A, the x̄_i near x_i, and x̄_{n+1} at B.) No high precision is required here; the standard line search with a couple of quadratic fits should suffice. Let z_i := p(x̄_i) − f(x̄_i). Each amplitude |z_i| is greater than or equal to E. The theorem of de La Vallée Poussin and its proof also apply to z_0, ..., z_{n+1} with min{|z_i|} ≥ E as the new lower bound for the best error possible with polynomials of degree n. Moreover, max{|z_i|} comes in handy as an obvious upper bound for that best possible error.

Step 4: With min{|z_i|} and max{|z_i|} as lower and upper bound for the best possible approximation error, one has a reliable stopping criterion: repeat the steps until max{|z_i|} − min{|z_i|} is sufficiently small or no longer decreases. These bounds indicate the progress.

== Variants ==

Some modifications of the algorithm are present in the literature. These include:

Replacing more than one sample point with the locations of nearby maximum absolute differences.
Replacing all of the sample points in a single iteration with the locations of all of the alternating-sign maximum differences.
Using the relative error to measure the difference between the approximation and the function, especially if the approximation will be used to compute the function on a computer which uses floating-point arithmetic.
Including zero-error point constraints.
The Fraser–Hart variant, used to determine the best rational Chebyshev approximation.

== See also ==

Hadamard's lemma
Laurent series – Power series with negative powers
Padé approximant – 'Best' approximation of a function by a rational function of given order
Newton series – Discrete analog of a derivative
Approximation theory – Theory of getting acceptably close inexact mathematical calculations
Function approximation – Approximating an arbitrary function with a well-behaved one

== References ==

== External links ==

Minimax Approximations and the Remez Algorithm, background chapter in the Boost Math Tools documentation, with link to an implementation in C++
Intro to DSP
Aarts, Ronald M.; Bond, Charles; Mendelsohn, Phil & Weisstein, Eric W. "Remez Algorithm". MathWorld.
Wikipedia/Remez_algorithm
In computer science, the Floyd–Warshall algorithm (also known as Floyd's algorithm, the Roy–Warshall algorithm, the Roy–Floyd algorithm, or the WFI algorithm) is an algorithm for finding shortest paths in a directed weighted graph with positive or negative edge weights (but with no negative cycles). A single execution of the algorithm will find the lengths (summed weights) of shortest paths between all pairs of vertices. Although it does not return details of the paths themselves, it is possible to reconstruct the paths with simple modifications to the algorithm. Versions of the algorithm can also be used for finding the transitive closure of a relation R {\displaystyle R} , or (in connection with the Schulze voting system) widest paths between all pairs of vertices in a weighted graph. == History and naming == The Floyd–Warshall algorithm is an example of dynamic programming, and was published in its currently recognized form by Robert Floyd in 1962. However, it is essentially the same as algorithms previously published by Bernard Roy in 1959 and also by Stephen Warshall in 1962 for finding the transitive closure of a graph, and is closely related to Kleene's algorithm (published in 1956) for converting a deterministic finite automaton into a regular expression, with the difference being the use of a min-plus semiring. The modern formulation of the algorithm as three nested for-loops was first described by Peter Ingerman, also in 1962. == Algorithm == The Floyd–Warshall algorithm compares many possible paths through the graph between each pair of vertices. It is guaranteed to find all shortest paths and is able to do this with Θ ( | V | 3 ) {\displaystyle \Theta (|V|^{3})} comparisons in a graph, even though there may be Θ ( | V | 2 ) {\displaystyle \Theta (|V|^{2})} edges in the graph. It does so by incrementally improving an estimate on the shortest path between two vertices, until the estimate is optimal. 
Consider a graph G with vertices V numbered 1 through N. Further consider a function shortestPath(i, j, k) that returns the length of the shortest possible path (if one exists) from i to j using vertices only from the set {1, 2, …, k} as intermediate points along the way. Now, given this function, our goal is to find the length of the shortest path from each i to each j using any vertex in {1, 2, …, N}. By definition, this is the value shortestPath(i, j, N), which we will find recursively. Observe that shortestPath(i, j, k) must be less than or equal to shortestPath(i, j, k−1): we have more flexibility if we are allowed to use the vertex k. If shortestPath(i, j, k) is in fact less than shortestPath(i, j, k−1), then there must be a path from i to j using the vertices {1, 2, …, k} that is shorter than any such path that does not use the vertex k. Since there are no negative cycles this path can be decomposed as: (1) a path from i to k that uses the vertices {1, 2, …, k−1}, followed by (2) a path from k to j that uses the vertices {1, 2, …, k−1}.
And of course, these must be a shortest such path (or several of them); otherwise we could further decrease the length. In other words, we have arrived at the recursive formula:

shortestPath(i, j, k) = min(shortestPath(i, j, k−1), shortestPath(i, k, k−1) + shortestPath(k, j, k−1)).

The base case is given by

shortestPath(i, j, 0) = w(i, j),

where w(i, j) denotes the weight of the edge from i to j if one exists and ∞ (infinity) otherwise. These formulas are the heart of the Floyd–Warshall algorithm. The algorithm works by first computing shortestPath(i, j, k) for all (i, j) pairs for k = 0, then k = 1, then k = 2, and so on. This process continues until k = N, and we have found the shortest path for all (i, j) pairs using any intermediate vertices. Pseudocode for this basic version follows.

=== Pseudocode ===

let dist be a |V| × |V| array of minimum distances initialized to ∞ (infinity)
for each edge (u, v) do
    dist[u][v] = w(u, v)  // The weight of the edge (u, v)
for each vertex v do
    dist[v][v] = 0
for k from 1 to |V|
    for i from 1 to |V|
        for j from 1 to |V|
            if dist[i][j] > dist[i][k] + dist[k][j]
                dist[i][j] = dist[i][k] + dist[k][j]
            end if

== Example ==

The algorithm above is executed on the graph on the left below: Prior to the first iteration of the outer loop, labeled k = 0 above, the only known paths correspond to the single edges in the graph.
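The pseudocode can be sketched in Python as below. This is an illustrative sketch, not a reference implementation: the function name and the edge-dictionary input format are choices made here, and vertices are numbered from 0 rather than from 1.

```python
from math import inf

def floyd_warshall(n, edges):
    """All-pairs shortest path lengths for vertices 0 .. n-1.

    edges maps (u, v) pairs to edge weights. Assumes no negative cycles.
    """
    # Initialization: infinity everywhere, 0 on the diagonal,
    # edge weights for the direct connections.
    dist = [[inf] * n for _ in range(n)]
    for v in range(n):
        dist[v][v] = 0
    for (u, v), w in edges.items():
        dist[u][v] = w
    # Relaxation: allow vertex k as an intermediate point, for k = 0, 1, ...
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

Unreachable pairs keep the value ∞, exactly as in the initialization step of the pseudocode.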
At k = 1, paths that go through the vertex 1 are found: in particular, the path [2,1,3] is found, replacing the path [2,3] which has fewer edges but is longer (in terms of weight). At k = 2, paths going through the vertices {1,2} are found. The red and blue boxes show how the path [4,2,1,3] is assembled from the two known paths [4,2] and [2,1,3] encountered in previous iterations, with 2 in the intersection. The path [4,2,3] is not considered, because [2,1,3] is the shortest path encountered so far from 2 to 3. At k = 3, paths going through the vertices {1,2,3} are found. Finally, at k = 4, all shortest paths are found. The distance matrix at each iteration of k, with the updated distances in bold, will be:

== Behavior with negative cycles ==

A negative cycle is a cycle whose edges sum to a negative value. There is no shortest path between any pair of vertices i, j which form part of a negative cycle, because path-lengths from i to j can be arbitrarily small (negative). For numerically meaningful output, the Floyd–Warshall algorithm assumes that there are no negative cycles. Nevertheless, if there are negative cycles, the Floyd–Warshall algorithm can be used to detect them. The intuition is as follows:

The Floyd–Warshall algorithm iteratively revises path lengths between all pairs of vertices (i, j), including where i = j;
Initially, the length of the path (i, i) is zero;
A path [i, k, …, i] can only improve upon this if it has length less than zero, i.e. denotes a negative cycle;
Thus, after the algorithm, (i, i) will be negative if there exists a negative-length path from i back to i.
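This diagonal check can be sketched as follows. The sketch is an illustration, not from the article; the matrix input convention, with ∞ for absent edges, is an assumption made here.

```python
from math import inf

def has_negative_cycle(weights):
    """Run Floyd-Warshall and report whether any diagonal entry turns negative.

    weights is a square matrix with weights[u][v] the edge weight, or inf
    when there is no edge; diagonal entries are initialized to 0 below.
    """
    n = len(weights)
    dist = [row[:] for row in weights]
    for v in range(n):
        dist[v][v] = min(dist[v][v], 0)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # A negative diagonal entry means some vertex reaches itself along a
    # path of negative total weight, i.e. the graph has a negative cycle.
    return any(dist[v][v] < 0 for v in range(n))
```

As the text warns, entries can grow exponentially large in magnitude when a negative cycle is present, so in a fixed-width integer implementation the check belongs inside the innermost loop rather than at the end.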
Hence, to detect negative cycles using the Floyd–Warshall algorithm, one can inspect the diagonal of the path matrix, and the presence of a negative number indicates that the graph contains at least one negative cycle. However, when a negative cycle is present, during the execution of the algorithm exponentially large numbers on the order of Ω(6^n · w_max) can appear, where w_max is the largest absolute edge weight in the graph. To avoid integer overflow/underflow problems, one should check for a negative cycle within the innermost for loop of the algorithm.

== Path reconstruction ==

The Floyd–Warshall algorithm typically only provides the lengths of the paths between all pairs of vertices. With simple modifications, it is possible to create a method to reconstruct the actual path between any two endpoint vertices. While one may be inclined to store the actual path from each vertex to each other vertex, this is not necessary, and is in fact very costly in terms of memory. Instead, we can use the shortest-path tree, which can be calculated for each node in Θ(|E|) time using Θ(|V|) memory, and allows us to efficiently reconstruct a directed path between any two connected vertices.
=== Pseudocode ===

The array prev[u][v] holds the penultimate vertex on the path from u to v (except in the case of prev[v][v], where it always contains v even if there is no self-loop on v):

let dist be a |V| × |V| array of minimum distances initialized to ∞ (infinity)
let prev be a |V| × |V| array of vertex indices initialized to null

procedure FloydWarshallWithPathReconstruction() is
    for each edge (u, v) do
        dist[u][v] = w(u, v)  // The weight of the edge (u, v)
        prev[u][v] = u
    for each vertex v do
        dist[v][v] = 0
        prev[v][v] = v
    for k from 1 to |V| do  // standard Floyd–Warshall implementation
        for i from 1 to |V|
            for j from 1 to |V|
                if dist[i][j] > dist[i][k] + dist[k][j] then
                    dist[i][j] = dist[i][k] + dist[k][j]
                    prev[i][j] = prev[k][j]

procedure Path(u, v) is
    if prev[u][v] = null then
        return []
    path = [v]
    while u ≠ v do
        v = prev[u][v]
        path.prepend(v)
    return path

== Time complexity ==

Let n be |V|, the number of vertices. To find all n² of shortestPath(i, j, k) (for all i and j) from those of shortestPath(i, j, k−1) requires Θ(n²) operations.
Since we begin with shortestPath(i, j, 0) = edgeCost(i, j) and compute the sequence of n matrices shortestPath(i, j, 1), shortestPath(i, j, 2), …, shortestPath(i, j, n), each having a cost of Θ(n²), the total time complexity of the algorithm is n · Θ(n²) = Θ(n³).

== Applications and generalizations ==

The Floyd–Warshall algorithm can be used to solve the following problems, among others:

Shortest paths in directed graphs (Floyd's algorithm).
Transitive closure of directed graphs (Warshall's algorithm). In Warshall's original formulation of the algorithm, the graph is unweighted and represented by a Boolean adjacency matrix. Then the addition operation is replaced by logical conjunction (AND) and the minimum operation by logical disjunction (OR).
Finding a regular expression denoting the regular language accepted by a finite automaton (Kleene's algorithm, a closely related generalization of the Floyd–Warshall algorithm)
Inversion of real matrices (Gauss–Jordan algorithm)
Optimal routing. In this application one is interested in finding the path with the maximum flow between two vertices. This means that, rather than taking minima as in the pseudocode above, one instead takes maxima. The edge weights represent fixed constraints on flow. Path weights represent bottlenecks; so the addition operation above is replaced by the minimum operation.
Fast computation of Pathfinder networks.
Widest paths/Maximum bandwidth paths
Computing canonical form of difference bound matrices (DBMs)
Computing the similarity between graphs
Transitive closure in AND/OR/threshold graphs.

== Implementations ==

Implementations are available for many programming languages:

For C++, in the boost::graph library
For C#, at QuikGraph
For C#, at QuickGraphPCL (a fork of QuickGraph with better compatibility with projects using Portable Class Libraries)
For Java, in the Apache Commons Graph library
For JavaScript, in the Cytoscape library
For Julia, in the Graphs.jl package
For MATLAB, in the Matlab_bgl package
For Perl, in the Graph module
For Python, in the SciPy library (module scipy.sparse.csgraph) or NetworkX library
For R, in packages e1071 and Rfast
For C, a parallelized pthreads implementation, including a SQLite interface to the data, at floydWarshall.h

== Comparison with other shortest path algorithms ==

For graphs with non-negative edge weights, Dijkstra's algorithm can be used to find all shortest paths from a single vertex with running time Θ(|E| + |V| log |V|). Thus, running Dijkstra starting at each vertex takes time Θ(|E||V| + |V|² log |V|). Since |E| = O(|V|²), this yields a worst-case running time of repeated Dijkstra of O(|V|³). While this matches the asymptotic worst-case running time of the Floyd–Warshall algorithm, the constants involved matter quite a lot. When a graph is dense (i.e., |E| ≈ |V|²), the Floyd–Warshall algorithm tends to perform better in practice. When the graph is sparse (i.e., |E| is significantly smaller than |V|²), Dijkstra tends to dominate.
For sparse graphs with negative edges but no negative cycles, Johnson's algorithm can be used, with the same asymptotic running time as the repeated Dijkstra approach. There are also known algorithms using fast matrix multiplication to speed up all-pairs shortest path computation in dense graphs, but these typically make extra assumptions on the edge weights (such as requiring them to be small integers). In addition, because of the high constant factors in their running time, they would only provide a speedup over the Floyd–Warshall algorithm for very large graphs. == References == == External links == Interactive animation of the Floyd–Warshall algorithm Interactive animation of the Floyd–Warshall algorithm (Technical University of Munich)
Wikipedia/Floyd's_algorithm
In theoretical computer science, in particular in formal language theory, Kleene's algorithm transforms a given nondeterministic finite automaton (NFA) into a regular expression. Together with other conversion algorithms, it establishes the equivalence of several description formats for regular languages. Alternative presentations of the same method include the "elimination method" attributed to Brzozowski and McCluskey, the algorithm of McNaughton and Yamada, and the use of Arden's lemma. == Algorithm description == According to Gross and Yellen (2004), the algorithm can be traced back to Kleene (1956). A presentation of the algorithm in the case of deterministic finite automata (DFAs) is given in Hopcroft and Ullman (1979). The presentation of the algorithm for NFAs below follows Gross and Yellen (2004). Given a nondeterministic finite automaton M = (Q, Σ, δ, q0, F), with Q = { q0,...,qn } its set of states, the algorithm computes the sets Rkij of all strings that take M from state qi to qj without going through any state numbered higher than k. Here, "going through a state" means entering and leaving it, so both i and j may be higher than k, but no intermediate state may. Each set Rkij is represented by a regular expression; the algorithm computes them step by step for k = -1, 0, ..., n. Since there is no state numbered higher than n, the regular expression Rn0j represents the set of all strings that take M from its start state q0 to qj. If F = { q1,...,qf } is the set of accept states, the regular expression Rn01 | ... | Rn0f represents the language accepted by M. The initial regular expressions, for k = -1, are computed as follows for i≠j: R−1ij = a1 | ... | am where qj ∈ δ(qi,a1), ..., qj ∈ δ(qi,am) and as follows for i=j: R−1ii = a1 | ... | am | ε where qi ∈ δ(qi,a1), ..., qi ∈ δ(qi,am) In other words, R−1ij mentions all letters that label a transition from i to j, and we also include ε in the case where i=j. 
After that, in each step the expressions Rkij are computed from the previous ones by Rkij = Rk-1ik (Rk-1kk)* Rk-1kj | Rk-1ij Another way to understand the operation of the algorithm is as an "elimination method", where the states from 0 to n are successively removed: when state k is removed, the regular expression Rk-1ij, which describes the words that label a path from state i>k to state j>k, is rewritten into Rkij so as to take into account the possibility of going via the "eliminated" state k. By induction on k, it can be shown that the length of each expression Rkij is at most ⁠1/3⁠(4^(k+1)(6s+7) − 4) symbols, where s denotes the number of characters in Σ. Therefore, the length of the regular expression representing the language accepted by M is at most ⁠1/3⁠(4^(n+1)(6s+7)f − f − 3) symbols, where f denotes the number of final states. This exponential blowup is inevitable, because there exist families of DFAs for which any equivalent regular expression must be of exponential size. In practice, the size of the regular expression obtained by running the algorithm can be very different depending on the order in which the states are considered by the procedure, i.e., the order in which they are numbered from 0 to n.

== Example ==

The automaton shown in the picture can be described as M = (Q, Σ, δ, q0, F) with the set of states Q = { q0, q1, q2 }, the input alphabet Σ = { a, b }, the transition function δ with δ(q0,a)=q0, δ(q0,b)=q1, δ(q1,a)=q2, δ(q1,b)=q1, δ(q2,a)=q1, and δ(q2,b)=q1, the start state q0, and set of accept states F = { q1 }. Kleene's algorithm computes the initial regular expressions as After that, the Rkij are computed from the Rk-1ij step by step for k = 0, 1, 2. Kleene algebra equalities are used to simplify the regular expressions as much as possible. Step 0 Step 1 Step 2 Since q0 is the start state and q1 is the only accept state, the regular expression R201 denotes the set of all strings accepted by the automaton.
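The computation can be sketched in Python by manipulating regular expressions as strings. This is an illustrative sketch (the function name and input conventions are choices made here), and it performs none of the Kleene-algebra simplifications applied by hand in the example, so its output is correct but very verbose. Emitting Python regex syntax, with `(?:)` for ε and a never-matching character class for the empty set, makes the result directly checkable with the `re` module.

```python
import re

def kleene_regex(n, alphabet, delta, start, accepts):
    """Kleene's algorithm for an automaton with states 0 .. n-1.

    delta maps (state, symbol) to the set of successor states.
    Returns a Python-syntax regular expression for the accepted language.
    """
    EPS = "(?:)"          # matches only the empty word (epsilon)
    EMPTY = "[^\\s\\S]"   # matches nothing at all (the empty set)
    # k = -1: letters labelling direct transitions, plus epsilon when i == j.
    R = [["|".join([a for a in alphabet if j in delta.get((i, a), set())]
                   + ([EPS] if i == j else [])) or EMPTY
          for j in range(n)] for i in range(n)]
    # Steps k = 0 .. n-1: additionally allow state k as an intermediate state,
    # following R_ij = R_ik (R_kk)* R_kj | R_ij on the previous step's values.
    for k in range(n):
        R = [[f"(?:{R[i][k]})(?:{R[k][k]})*(?:{R[k][j]})|(?:{R[i][j]})"
              for j in range(n)] for i in range(n)]
    # Union over the accept states, as in the definition of the language of M.
    return "|".join(f"(?:{R[start][q]})" for q in accepts)
```

For the example automaton, the resulting (unsimplified) expression can be compiled with `re.compile` and tested against sample strings with `fullmatch`.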
== See also == Floyd–Warshall algorithm — an algorithm on weighted graphs that can be implemented by Kleene's algorithm using a particular Kleene algebra Star height problem — what is the minimum stars' nesting depth of all regular expressions corresponding to a given DFA? Generalized star height problem — if a complement operator is allowed additionally in regular expressions, can the stars' nesting depth of Kleene's algorithm's output be limited to a fixed bound? Thompson's construction algorithm — transforms a regular expression to a finite automaton == References ==
Wikipedia/Kleene's_algorithm
The Schulze method (), also known as the beatpath method, is a single winner ranked-choice voting rule developed by Markus Schulze. The Schulze method is a Condorcet completion method, which means it will elect a majority-preferred candidate if one exists. In other words, if most people rank A above B, A will defeat B (whenever this is possible). Schulze's method breaks cyclic ties by using indirect victories. The idea is that if Alice beats Bob, and Bob beats Charlie, then Alice (indirectly) beats Charlie; this kind of indirect win is called a "beatpath". For proportional representation, a single transferable vote (STV) variant known as Schulze STV also exists. The Schulze method is used by several organizations including Debian, Ubuntu, Gentoo, Pirate Party political parties and many others. It was also used by Wikimedia prior to their adoption of score voting. == Description of the method == Schulze's method uses ranked ballots with equal ratings allowed. There are two common (equivalent) descriptions of Schulze's method. === Beatpath explanation === The idea behind Schulze's method is that if Alice defeats Bob, and Bob beats Charlie, then Alice "indirectly" defeats Charlie. These chained sequences of "beats" are called 'beatpaths'. Every beatpath is assigned a particular strength. The strength of a single-step beatpath from Alice to Bob is just the number of voters who rank Alice over Bob. For a longer beatpath, consisting of multiple beats, a beatpath is as strong as its weakest link (i.e. the beat with the smallest number of winning votes). We say Alice has a "beatpath-win" over Bob if her strongest beatpath to Bob is stronger than all of Bob's strongest beatpaths to Alice. The winner is the candidate who has a beatpath-win over every other candidate. 
Markus Schulze proved that this definition of a beatpath-win is transitive: in other words, if Alice has a beatpath-win over Bob, and Bob has a beatpath-win over Charlie, Alice has a beatpath-win over Charlie.: §4.1  As a result, the Schulze method is a Condorcet method, providing a full extension of the majority rule to any set of ballots. === Iterative description === The Schulze winner can also be constructed iteratively, using a defeat-dropping method: Draw a directed graph with all the candidates as nodes; label the edges with the number of votes supporting the winner. If there is more than one candidate left: Check if any candidates are tied (and if so, break the ties by random ballot). Eliminate all candidates outside the majority-preferred set. Delete the edge closest to being tied. The winner is the only candidate left at the end of the procedure. == Example == In the following example 45 voters rank 5 candidates. The pairwise preferences have to be computed first. For example, when comparing A and B pairwise, there are 5+5+3+7=20 voters who prefer A to B, and 8+2+7+8=25 voters who prefer B to A. So d [ A , B ] = 20 {\displaystyle d[A,B]=20} and d [ B , A ] = 25 {\displaystyle d[B,A]=25} . The full set of pairwise preferences is: The cells for d[X, Y] have a light green background if d[X, Y] > d[Y, X], otherwise the background is light red. There is no undisputed winner by only looking at the pairwise differences here. Now the strongest paths have to be identified. To help visualize the strongest paths, the set of pairwise preferences is depicted in the diagram on the right in the form of a directed graph. An arrow from the node representing a candidate X to the one representing a candidate Y is labelled with d[X, Y]. To avoid cluttering the diagram, an arrow has only been drawn from X to Y when d[X, Y] > d[Y, X] (i.e. the table cells with light green background), omitting the one in the opposite direction (the table cells with light red background). 
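These strongest path strengths can be computed with a widest-path variant of the Floyd–Warshall algorithm, as the Implementation section below notes. The following is an illustrative sketch (the function name and matrix conventions are choices made here), with candidates indexed 0–4 for A–E:

```python
def strongest_paths(d):
    """Strength p[i][j] of the strongest path from candidate i to candidate j.

    d is a square matrix where d[i][j] is the number of voters who prefer
    candidate i to candidate j.
    """
    n = len(d)
    # One-step beatpaths: only pairwise victories count.
    p = [[d[i][j] if d[i][j] > d[j][i] else 0 for j in range(n)]
         for i in range(n)]
    # Widest-path relaxation: a path is only as strong as its weakest link,
    # and going via candidate k may yield a stronger path than the current one.
    for k in range(n):
        for i in range(n):
            if i == k:
                continue
            for j in range(n):
                if j != i and j != k:
                    p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
    return p
```

With a d matrix consistent with this example's stated values, the sketch reproduces, e.g., p[A, C] = min(30, 28) = 28 via the indirect path (A, D, C).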
One example of computing the strongest path strength is p[B, D] = 33: the strongest path from B to D is the direct path (B, D) which has strength 33. But when computing p[A, C], the strongest path from A to C is not the direct path (A, C) of strength 26; rather, the strongest path is the indirect path (A, D, C) which has strength min(30, 28) = 28. The strength of a path is the strength of its weakest link. For each pair of candidates X and Y, the following table shows the strongest path from candidate X to candidate Y in red, with the weakest link underlined. Now the output of the Schulze method can be determined. For example, when comparing A and B, since (28 =) p[A, B] > p[B, A] (= 25), for the Schulze method candidate A is better than candidate B. Another example is that (31 =) p[E, D] > p[D, E] (= 24), so candidate E is better than candidate D. Continuing in this way, the result is that the Schulze ranking is E > A > C > B > D, and E wins. In other words, E wins since p[E, X] ≥ p[X, E] for every other candidate X.

== Implementation ==

The only difficult step in implementing the Schulze method is computing the strongest path strengths. However, this is a well-known problem in graph theory sometimes called the widest path problem. One simple way to compute the strengths, therefore, is a variant of the Floyd–Warshall algorithm. The following pseudocode illustrates the algorithm. This algorithm is efficient and has running time O(C³), where C is the number of candidates.

== Ties and alternative implementations ==

When allowing users to have ties in their preferences, the outcome of the Schulze method naturally depends on how these ties are interpreted in defining d[*,*].
Two natural choices are that d[A, B] represents either the number of voters who strictly prefer A to B (A>B), or the margin of (voters with A>B) minus (voters with B>A). But no matter how the ds are defined, the Schulze ranking has no cycles, and assuming the ds are unique it has no ties. Although ties in the Schulze ranking are unlikely, they are possible. Schulze's original paper recommended breaking ties by random ballot. There is another alternative way to demonstrate the winner of the Schulze method. This method is equivalent to the others described here, but the presentation is optimized for the significance of steps being visually apparent as a human goes through it, not for computation.

1. Make the results table, called the "matrix of pairwise preferences", such as used above in the example. Then, every positive number is a pairwise win for the candidate on that row (and marked green), ties are zeroes, and losses are negative (marked red). Order the candidates by how long they last in elimination.
2. If there is a candidate with no red on their line, they win.
3. Otherwise, draw a square box around the Schwartz set in the upper left corner. It can be described as the minimal "winner's circle" of candidates who do not lose to anyone outside the circle. Note that to the right of the box there is no red, which means it is a winner's circle, and note that within the box there is no reordering possible that would produce a smaller winner's circle.
4. Cut away every part of the table outside the box.
5. If there is still no candidate with no red on their line, something needs to be compromised on; every candidate lost some race, and the loss we tolerate the best is the one where the loser obtained the most votes. So, take the red cell with the highest number (if going by margins, the least negative), make it green—or any color other than red—and go back to step 2.

Here is a margins table made from the above example. Note the change of order used for demonstration purposes.
The first drop (A's loss to E by 1 vote) does not help shrink the Schwartz set. So we get straight to the second drop (E's loss to C by 3 votes), and that shows us the winner, E, with its clear row. This method can also be used to calculate a result, if the table is remade in such a way that one can conveniently and reliably rearrange the order of the candidates on both the row and the column, with the same order used on both at all times. == Satisfied and failed criteria == === Satisfied criteria === The Schulze method satisfies the following criteria: === Failed criteria === Since the Schulze method satisfies the Condorcet criterion, it automatically fails the following criteria: Participation: §3.4  Consistency Invulnerability to burying Later-no-harm Likewise, since the Schulze method is not a dictatorship and is a ranked voting system (not rated), Arrow's Theorem implies it fails independence of irrelevant alternatives, meaning it can be vulnerable to the spoiler effect in some rare circumstances. The Schulze method also fails Peyton Young's criterion of Local Independence of Irrelevant Alternatives. === Comparison table === The following table compares the Schulze method with other single-winner election methods: === Difference from ranked pairs === Ranked pairs is another Condorcet method which is very similar to Schulze's rule, and typically produces the same outcome. There are slight differences, however. The main difference between the beatpath method and ranked pairs is that Schulze retains behavior closer to minimax. Say that the minimax score of a set X of candidates is the strength of the strongest pairwise win of a candidate A ∉ X against a candidate B ∈ X. Then the Schulze method, but not ranked pairs, guarantees the winner is always a candidate of the set with minimum minimax score.: §4.8  This is the sense in which the Schulze method minimizes the largest majority that has to be reversed when determining the winner. 
On the other hand, Ranked Pairs minimizes the largest majority that has to be reversed to determine the order of finish. In other words, when Ranked Pairs and the Schulze method produce different orders of finish, for the majorities on which the two orders of finish disagree, the Schulze order reverses a larger majority than the Ranked Pairs order.
== History ==
The Schulze method was developed by Markus Schulze in 1997. It was first discussed in public mailing lists in 1997–1998 and in 2000. In 2011, Schulze published the method in the academic journal Social Choice and Welfare.
== Usage ==
=== Government ===
The Schulze method is used by the city of Silla, Spain for all referendums. It is also used by the cities of Turin and San Donà di Piave in Italy and by the London Borough of Southwark through their use of the WeGovNow platform, which in turn uses the LiquidFeedback decision tool.
=== Political parties ===
Schulze was adopted by the Pirate Party of Sweden (2009) and the Pirate Party of Germany (2010). The Boise, Idaho chapter of the Democratic Socialists of America chose this method in February 2018 for their first special election, held in March 2018.
- Five Star Movement of Campobasso, Fondi, Monte Compatri, Montemurlo, Pescara, and San Cesareo
- Pirate Parties of Australia, Austria, Belgium, Brazil, Germany, Iceland, Italy, the Netherlands, Sweden, Switzerland, and the United States
- SustainableUnion
- Volt Europe
=== Student government and associations ===
- AEGEE – European Students' Forum
- Club der Ehemaligen der Deutschen SchülerAkademien e. V.
- Associated Student Government at École normale supérieure de Paris
- Flemish Society of Engineering Students Leuven
- Graduate Student Organization at the State University of New York: Computer Science (GSOCS)
- Hillegass Parker House
- Kingman Hall
- Associated Students of Minerva Schools at KGI
- Associated Student Government at Northwestern University
- Associated Student Government at University of Freiburg
- Associated Student Government at the Computer Sciences Department of the University of Kaiserslautern-Landau
=== Organizations ===
It is used by the Institute of Electrical and Electronics Engineers, by the Association for Computing Machinery, and by USENIX through their use of the HotCRP decision tool. Organizations which currently use the Schulze method include:
== Generalizations ==
In 2008, Camps et al. devised a method that, while ranking candidates in the same order of finish as Schulze, also provides ratings indicating the candidates' relative strength of victory.
== Notes ==
== External links ==
- Schulze, Markus (2018). "The Schulze Method of Voting". arXiv:1804.02973 [cs.GT].
- The Schulze Method by Hubert Bray
- Spieltheorie (in German) by Bernhard Nebel
- Accurate Democracy by Rob Loring
- Christoph Börgers (2009), Mathematics of Social Choice: Voting, Compensation, and Division, SIAM, ISBN 0-89871-695-0
- Nicolaus Tideman (2006), Collective Decisions and Voting: The Potential for Public Choice, Burlington: Ashgate, ISBN 0-7546-4717-X
- preftools by the Public Software Group
- Arizonans for Condorcet Ranked Voting
- Condorcet PHP: command line application and PHP library, supporting multiple Condorcet methods, including Schulze.
- Implementation in Java
- Implementation in Ruby
- Implementation in Python 2
- Implementation in Python 3
Wikipedia/Schulze_method
In computer science, cycle detection or cycle finding is the algorithmic problem of finding a cycle in a sequence of iterated function values. For any function f that maps a finite set S to itself, and any initial value x0 in S, the sequence of iterated function values x0, x1 = f(x0), x2 = f(x1), …, xi = f(xi−1), … must eventually use the same value twice: there must be some pair of distinct indices i and j such that xi = xj. Once this happens, the sequence must continue periodically, by repeating the same sequence of values from xi to xj − 1. Cycle detection is the problem of finding i and j, given f and x0. Several algorithms are known for finding cycles quickly and with little memory. Robert W. Floyd's tortoise and hare algorithm moves two pointers at different speeds through the sequence of values until they both point to equal values. Alternatively, Brent's algorithm is based on the idea of exponential search. Both Floyd's and Brent's algorithms use only a constant number of memory cells, and take a number of function evaluations that is proportional to the distance from the start of the sequence to the first repetition. Several other algorithms trade off larger amounts of memory for fewer function evaluations. The applications of cycle detection include testing the quality of pseudorandom number generators and cryptographic hash functions, computational number theory algorithms, detection of infinite loops in computer programs and periodic configurations in cellular automata, automated shape analysis of linked list data structures, and detection of deadlocks for transactions management in DBMS.
== Example ==
The figure shows a function f that maps the set S = {0,1,2,3,4,5,6,7,8} to itself. If one starts from x0 = 2 and repeatedly applies f, one sees the sequence of values 2, 0, 6, 3, 1, 6, 3, 1, 6, 3, 1, ....
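This iteration can be reproduced directly. The partial mapping below is read off from the sequence just given; the values of f on the remaining elements of S are not recoverable from this excerpt, so only the visited elements are included:

```python
# Partial mapping recovered from the example sequence 2, 0, 6, 3, 1, 6, 3, 1, ...
f = {2: 0, 0: 6, 6: 3, 3: 1, 1: 6}

def iterate(f, x0, n):
    """Return the first n values x0, f(x0), f(f(x0)), ... of the iteration."""
    values = [x0]
    for _ in range(n - 1):
        values.append(f[values[-1]])
    return values

print(iterate(f, 2, 8))  # [2, 0, 6, 3, 1, 6, 3, 1]
```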
The cycle in this value sequence is 6, 3, 1.
== Definitions ==
Let S be any finite set, f be any function from S to itself, and x0 be any element of S. For any i > 0, let xi = f(xi − 1). Let μ be the smallest index such that the value xμ reappears infinitely often within the sequence of values xi, and let λ (the loop length) be the smallest positive integer such that xμ = xλ + μ. The cycle detection problem is the task of finding λ and μ. One can view the same problem graph-theoretically, by constructing a functional graph (that is, a directed graph in which each vertex has a single outgoing edge) the vertices of which are the elements of S and the edges of which map an element to the corresponding function value, as shown in the figure. The set of vertices reachable from starting vertex x0 forms a subgraph with a shape resembling the Greek letter rho (ρ): a path of length μ from x0 to a cycle of λ vertices. Practical cycle-detection algorithms do not find λ and μ exactly. They usually find lower and upper bounds μl ≤ μ ≤ μh for the start of the cycle, and a more detailed search of the range must be performed if the exact value of μ is needed. Also, most algorithms do not guarantee to find λ directly, but may find some multiple kλ < μ + λ. (Continuing the search for an additional kλ/q steps, where q is the smallest prime divisor of kλ, will either find the true λ or prove that k = 1.)
== Computer representation ==
Except in toy examples like the above, f will not be specified as a table of values. Such a table implies O(|S|) space complexity, and if that is permissible, an associative array mapping xi to i will detect the first repeated value. Rather, a cycle detection algorithm is given a black box for generating the sequence xi, and the task is to find λ and μ using very little memory. The black box might consist of an implementation of the recurrence function f, but it might also store additional internal state to make the computation more efficient.
Although xi = f(xi−1) must be true in principle, this might be expensive to compute directly; the function could be defined in terms of the discrete logarithm of xi−1 or some other difficult-to-compute property which can only be practically computed in terms of additional information. In such cases, the number of black boxes required becomes a figure of merit distinguishing the algorithms. A second reason to use one of these algorithms is that they are pointer algorithms which do no operations on elements of S other than testing for equality. An associative array implementation requires computing a hash function on the elements of S, or ordering them. But cycle detection can be applied in cases where neither of these is possible. The classic example is Pollard's rho algorithm for integer factorization, which searches for a factor p of a given number n by looking for values xi and xi+λ which are equal modulo p without knowing p in advance. This is done by computing the greatest common divisor of the difference xi − xi+λ with a known multiple of p, namely n. If the gcd is non-trivial (neither 1 nor n), then the value is a proper factor of n, as desired. If n is not prime, it must have at least one factor p ≤ √n, and by the birthday paradox, a random function f has an expected cycle length (modulo p) of √p ≤ n^(1/4). If the input is given as a subroutine for calculating f, the cycle detection problem may be trivially solved using only λ + μ function applications, simply by computing the sequence of values xi and using a data structure such as a hash table to store these values and test whether each subsequent value has already been stored. However, the space complexity of this algorithm is proportional to λ + μ, unnecessarily large. Additionally, to implement this method as a pointer algorithm would require applying the equality test to each pair of values, resulting in quadratic time overall.
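The trivial hash-table method described in the last paragraph can be sketched as follows (the function name is illustrative):

```python
def naive_cycle_detect(f, x0):
    """Return (lam, mu) using an associative array mapping each value to
    the index at which it first appeared.
    Uses lam + mu function applications but O(lam + mu) space."""
    seen = {}
    x, i = x0, 0
    while x not in seen:
        seen[x] = i        # record the first occurrence of this value
        x = f(x)
        i += 1
    mu = seen[x]           # index of the first element of the cycle
    lam = i - mu           # cycle length: gap between the two occurrences
    return lam, mu
```

For the example function above (f(2) = 0, f(0) = 6, f(6) = 3, f(3) = 1, f(1) = 6) and x0 = 2, this returns (3, 2): a cycle of length 3 starting at index 2.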
Thus, research in this area has concentrated on two goals: using less space than this naive algorithm, and finding pointer algorithms that use fewer equality tests.
=== Floyd's tortoise and hare ===
Floyd's cycle-finding algorithm is a pointer algorithm that uses only two pointers, which move through the sequence at different speeds. It is also called the "tortoise and the hare algorithm", alluding to Aesop's fable of The Tortoise and the Hare. The algorithm is named after Robert W. Floyd, who was credited with its invention by Donald Knuth. However, the algorithm does not appear in Floyd's published work, and this may be a misattribution: Floyd describes algorithms for listing all simple cycles in a directed graph in a 1967 paper, but this paper does not describe the cycle-finding problem in functional graphs that is the subject of this article. In fact, Knuth's statement (in 1969), attributing it to Floyd, without citation, is the first known appearance in print, and it thus may be a folk theorem, not attributable to a single individual. The key insight in the algorithm is as follows. If there is a cycle, then, for any integers i ≥ μ and k ≥ 0, xi = xi + kλ, where λ is the length of the loop to be found, μ is the index of the first element of the cycle, and k is a nonnegative integer representing the number of loops. Based on this, it can then be shown that i = kλ ≥ μ for some k if and only if xi = x2i (if xi = x2i in the cycle, then there exists some k such that 2i = i + kλ, which implies that i = kλ; and if there are some i and k such that i = kλ, then 2i = i + kλ and x2i = xi + kλ). Thus, the algorithm only needs to check for repeated values of this special form, one twice as far from the start of the sequence as the other, to find a period ν of a repetition that is a multiple of λ. Once ν is found, the algorithm retraces the sequence from its start to find the first repeated value xμ in the sequence, using the fact that λ divides ν and therefore that xμ = xμ + ν.
Finally, once the value of μ is known it is trivial to find the length λ of the shortest repeating cycle, by searching for the first position μ + λ for which xμ + λ = xμ. The algorithm thus maintains two pointers into the given sequence, one (the tortoise) at xi, and the other (the hare) at x2i. At each step of the algorithm, it increases i by one, moving the tortoise one step forward and the hare two steps forward in the sequence, and then compares the sequence values at these two pointers. The smallest value of i > 0 for which the tortoise and hare point to equal values is the desired value ν. The following Python code shows how this idea may be implemented as an algorithm. This code only accesses the sequence by storing and copying pointers, function evaluations, and equality tests; therefore, it qualifies as a pointer algorithm. The algorithm uses O(λ + μ) operations of these types, and O(1) storage space.
=== Brent's algorithm ===
Richard P. Brent described an alternative cycle detection algorithm that, like the tortoise and hare algorithm, requires only two pointers into the sequence. However, it is based on a different principle: searching for the smallest power of two 2^i that is larger than both λ and μ. For i = 0, 1, 2, ..., the algorithm compares x_{2^i−1} with each subsequent sequence value up to the next power of two, stopping when it finds a match. It has two advantages compared to the tortoise and hare algorithm: it finds the correct length λ of the cycle directly, rather than needing to search for it in a subsequent stage, and its steps involve only one evaluation of the function f rather than three. The following Python code shows how this technique works in more detail. Like the tortoise and hare algorithm, this is a pointer algorithm that uses O(λ + μ) tests and function evaluations and O(1) storage space. It is not difficult to show that the number of function evaluations can never be higher than for Floyd's algorithm.
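The Python listings referred to in the text are not included in this excerpt. The following sketches implement the two algorithms as described, each returning the pair (λ, μ):

```python
def floyd(f, x0):
    """Floyd's tortoise and hare; returns (lam, mu)."""
    # Phase 1: find a repetition x_i = x_2i; the hare moves twice as fast.
    tortoise = f(x0)
    hare = f(f(x0))
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(f(hare))
    # Phase 2: find mu, the index of the first element of the cycle.
    mu = 0
    tortoise = x0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)        # both pointers now move at the same speed
        mu += 1
    # Phase 3: find lam, the length of the shortest cycle starting at x_mu.
    lam = 1
    hare = f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return lam, mu

def brent(f, x0):
    """Brent's algorithm; returns (lam, mu)."""
    # Main phase: the hare searches successive powers of two while the
    # tortoise waits at x_(2^i - 1).
    power = lam = 1
    tortoise = x0
    hare = f(x0)
    while tortoise != hare:
        if power == lam:      # start a new power of two
            tortoise = hare
            power *= 2
            lam = 0
        hare = f(hare)
        lam += 1
    # Find mu: place the hare lam steps ahead, then advance both in unison.
    tortoise = hare = x0
    for _ in range(lam):
        hare = f(hare)
    mu = 0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)
        mu += 1
    return lam, mu
```

With the example function of the figure (f(2) = 0, f(0) = 6, f(6) = 3, f(3) = 1, f(1) = 6) and x0 = 2, both return (3, 2).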
Brent claims that, on average, his cycle finding algorithm runs around 36% more quickly than Floyd's and that it speeds up the Pollard rho algorithm by around 24%. He also performs an average case analysis for a randomized version of the algorithm in which the sequence of indices traced by the slower of the two pointers is not the powers of two themselves, but rather a randomized multiple of the powers of two. Although his main intended application was in integer factorization algorithms, Brent also discusses applications in testing pseudorandom number generators.
=== Gosper's algorithm ===
R. W. Gosper's algorithm finds the period λ, and the lower and upper bounds μl and μh of the starting point of the first cycle. The difference between the lower and upper bound is of the same order as the period, i.e. μl + λ ≈ μh. The algorithm maintains an array of tortoises Tj. For each xi:
- For each 0 ≤ j ≤ log2(i), compare xi to Tj. If xi = Tj, a cycle has been detected, of length λ = ((i − 2^j) mod 2^(j+1)) + 1.
- If no match is found, set Tk ← xi, where k is the number of trailing zeros in the binary representation of i + 1 (i.e. 2^k is the greatest power of 2 which divides i + 1).
If it is inconvenient to vary the number of comparisons as i increases, you may initialize all of the Tj = x0, but must then return λ = i if xi = Tj while i < 2^j.
==== Advantages ====
The main features of Gosper's algorithm are that it is economical in space, very economical in evaluations of the generator function, and always finds the exact cycle length (never a multiple). The cost is a large number of equality comparisons. It could be roughly described as a concurrent version of Brent's algorithm. While Brent's algorithm uses a single tortoise, repositioned every time the hare passes a power of two, Gosper's algorithm uses several tortoises (several previous values are saved), which are roughly exponentially spaced. According to the note in HAKMEM item 132, this algorithm will detect repetition before the third occurrence of any value, i.e. the cycle will be iterated at most twice. HAKMEM also states that it is sufficient to store ⌈log2 λ⌉ previous values; however, this only offers a saving if we know a priori that λ is significantly smaller than μ. The standard implementations store ⌈log2(μ + 2λ)⌉ values. For example, assume the function values are 32-bit integers, so μ + λ ≤ 2^32 and μ + 2λ ≤ 2^33. Then Gosper's algorithm will find the cycle after less than μ + 2λ function evaluations (in fact, the most possible is 3·2^31 − 1), while consuming the space of 33 values (each value being a 32-bit integer).
==== Complexity ====
Upon the i-th evaluation of the generator function, the algorithm compares the generated value with log2(i) previous values; observe that i goes up to at least μ + λ and at most μ + 2λ.
Therefore, the time complexity of this algorithm is O((μ + λ) · log(μ + λ)). Since it stores log2(μ + 2λ) values, its space complexity is Θ(log(μ + λ)). This is under the usual transdichotomous model, assumed throughout this article, in which the size of the function values is constant. Without this assumption, we know it requires Ω(log(μ + λ)) space to store μ + λ distinct values, so the overall space complexity is Ω(log²(μ + λ)).
=== Time–space tradeoffs ===
A number of authors have studied techniques for cycle detection that use more memory than Floyd's and Brent's methods, but detect cycles more quickly. In general these methods store several previously-computed sequence values, and test whether each new value equals one of the previously-computed values. In order to do so quickly, they typically use a hash table or similar data structure for storing the previously-computed values, and therefore are not pointer algorithms: in particular, they usually cannot be applied to Pollard's rho algorithm. Where these methods differ is in how they determine which values to store. Following Nivasch, we survey these techniques briefly. Brent already describes variations of his technique in which the indices of saved sequence values are powers of a number R other than two. By choosing R to be a number close to one, and storing the sequence values at indices that are near a sequence of consecutive powers of R, a cycle detection algorithm can use a number of function evaluations that is within an arbitrarily small factor of the optimum λ + μ.
Sedgewick, Szymanski, and Yao provide a method that uses M memory cells and requires in the worst case only (λ + μ)(1 + cM^(−1/2)) function evaluations, for some constant c, which they show to be optimal. The technique involves maintaining a numerical parameter d, storing in a table only those positions in the sequence that are multiples of d, and clearing the table and doubling d whenever too many values have been stored. Several authors have described distinguished point methods that store function values in a table based on a criterion involving the values, rather than (as in the method of Sedgewick et al.) based on their positions. For instance, values equal to zero modulo some value d might be stored. More simply, Nivasch credits D. P. Woodruff with the suggestion of storing a random sample of previously seen values, making an appropriate random choice at each step so that the sample remains random. Nivasch describes an algorithm that does not use a fixed amount of memory, but for which the expected amount of memory used (under the assumption that the input function is random) is logarithmic in the sequence length. An item is stored in the memory table, with this technique, when no later item has a smaller value. As Nivasch shows, the items with this technique can be maintained using a stack data structure, and each successive sequence value need be compared only to the top of the stack. The algorithm terminates when the repeated sequence element with smallest value is found. Running the same algorithm with multiple stacks, using random permutations of the values to reorder the values within each stack, allows a time–space tradeoff similar to the previous algorithms. However, even the version of this algorithm with a single stack is not a pointer algorithm, due to the comparisons needed to determine which of two values is smaller.
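The single-stack technique just described can be sketched as follows; this is a simplified illustration (function name assumed), returning only the cycle length λ:

```python
def stack_cycle_length(f, x0):
    """Single-stack Nivasch-style algorithm; returns the cycle length lam.
    The stack holds (value, index) pairs with values increasing from bottom
    to top: a value is kept only while no later value is smaller.
    Terminates when the smallest value of the cycle is seen a second time."""
    stack = []
    x, i = x0, 0
    while True:
        # Discard stack entries whose value exceeds the current value.
        while stack and stack[-1][0] > x:
            stack.pop()
        if stack and stack[-1][0] == x:
            return i - stack[-1][1]   # gap between the two occurrences
        stack.append((x, i))
        x = f(x)
        i += 1
```

This assumes the values are totally ordered (here, by Python's built-in comparison); as the text notes, those order comparisons are why this is not a pointer algorithm.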
Any cycle detection algorithm that stores at most M values from the input sequence must perform at least (λ + μ)(1 + 1/(M − 1)) function evaluations.
== Applications ==
Cycle detection has been used in many applications. Determining the cycle length of a pseudorandom number generator is one measure of its strength. This is the application cited by Knuth in describing Floyd's method. Brent describes the results of testing a linear congruential generator in this fashion; its period turned out to be significantly smaller than advertised. For more complex generators, the sequence of values in which the cycle is to be found may not represent the output of the generator, but rather its internal state. Several number-theoretic algorithms are based on cycle detection, including Pollard's rho algorithm for integer factorization and his related kangaroo algorithm for the discrete logarithm problem. In cryptographic applications, the ability to find two distinct values xμ−1 and xλ+μ−1 mapped by some cryptographic function ƒ to the same value xμ may indicate a weakness in ƒ. For instance, Quisquater and Delescaille apply cycle detection algorithms in the search for a message and a pair of Data Encryption Standard keys that map that message to the same encrypted value; Kaliski, Rivest, and Sherman also use cycle detection algorithms to attack DES. The technique may also be used to find a collision in a cryptographic hash function. Cycle detection may be helpful as a way of discovering infinite loops in certain types of computer programs. Periodic configurations in cellular automaton simulations may be found by applying cycle detection algorithms to the sequence of automaton states. Shape analysis of linked list data structures is a technique for verifying the correctness of an algorithm using those structures.
If a node in the list incorrectly points to an earlier node in the same list, the structure will form a cycle that can be detected by these algorithms. In Common Lisp, the S-expression printer, under control of the *print-circle* variable, detects circular list structure and prints it compactly. Teske describes applications in computational group theory: determining the structure of an Abelian group from a set of its generators. The cryptographic algorithms of Kaliski et al. may also be viewed as attempting to infer the structure of an unknown group. Fich (1981) briefly mentions an application to computer simulation of celestial mechanics, which she attributes to William Kahan. In this application, cycle detection in the phase space of an orbital system may be used to determine whether the system is periodic to within the accuracy of the simulation. In Mandelbrot set fractal generation, some performance techniques are used to speed up the image generation. One of them is called "period checking", which consists of finding the cycles in a point orbit; a cycle detection algorithm has to be implemented in order to apply this technique.
== References ==
== External links ==
- Gabriel Nivasch, The Cycle Detection Problem and the Stack Algorithm
- Tortoise and Hare, Portland Pattern Repository
- Floyd's Cycle Detection Algorithm (The Tortoise and the Hare)
- Brent's Cycle Detection Algorithm (The Teleporting Turtle)
Wikipedia/Floyd's_cycle-finding_algorithm
In mathematics, more specifically in functional analysis, a positive linear functional on an ordered vector space (V, ≤) is a linear functional f on V such that for all positive elements v ∈ V, that is v ≥ 0, it holds that f(v) ≥ 0. In other words, a positive linear functional is guaranteed to take nonnegative values for positive elements. The significance of positive linear functionals lies in results such as the Riesz–Markov–Kakutani representation theorem. When V is a complex vector space, it is assumed that for all v ≥ 0, f(v) is real. As in the case when V is a C*-algebra with its partially ordered subspace of self-adjoint elements, sometimes a partial order is placed on only a subspace W ⊆ V, and the partial order does not extend to all of V, in which case the positive elements of V are the positive elements of W, by abuse of notation. This implies that for a C*-algebra, a positive linear functional sends any x ∈ V equal to s*s for some s ∈ V to a real number, which is equal to its complex conjugate, and therefore all positive linear functionals preserve the self-adjointness of such x. This property is exploited in the GNS construction to relate positive linear functionals on a C*-algebra to inner products.
== Sufficient conditions for continuity of all positive linear functionals ==
There is a comparatively large class of ordered topological vector spaces on which every positive linear form is necessarily continuous. This includes all topological vector lattices that are sequentially complete.
Theorem: Let X be an ordered topological vector space with positive cone C ⊆ X, and let B denote the family of all bounded subsets of X. Then each of the following conditions is sufficient to guarantee that every positive linear functional on X is continuous:
- C has non-empty topological interior (in X).
- X is complete and metrizable and X = C − C.
- X is bornological and C is a semi-complete strict B-cone in X.
- X is the inductive limit of a family (Xα)α∈A of ordered Fréchet spaces with respect to a family of positive linear maps, where Xα = Cα − Cα for all α ∈ A, and Cα is the positive cone of Xα.
== Continuous positive extensions ==
The following theorem is due to H. Bauer and, independently, to Namioka.
Theorem: Let X be an ordered topological vector space (TVS) with positive cone C, let M be a vector subspace of X, and let f be a linear form on M. Then f has an extension to a continuous positive linear form on X if and only if there exists some convex neighborhood U of 0 in X such that Re f is bounded above on M ∩ (U − C).
Corollary: Let X be an ordered topological vector space with positive cone C, and let M be a vector subspace of X. If C ∩ M contains an interior point of C then every continuous positive linear form on M has an extension to a continuous positive linear form on X.
Corollary: Let X be an ordered vector space with positive cone C, let M be a vector subspace of X, and let f be a linear form on M. Then f has an extension to a positive linear form on X if and only if there exists some convex absorbing subset W in X containing the origin of X such that Re f is bounded above on M ∩ (W − C).
Proof: It suffices to endow X with the finest locally convex topology making W into a neighborhood of 0 ∈ X.
== Examples ==
Consider, as an example of V, the C*-algebra of complex square matrices with the positive elements being the positive-definite matrices. The trace function defined on this C*-algebra is a positive functional, as the eigenvalues of any positive-definite matrix are positive, and so its trace is positive. Consider the Riesz space Cc(X) of all continuous complex-valued functions of compact support on a locally compact Hausdorff space X. Consider a Borel regular measure μ on X, and a functional ψ defined by ψ(f) = ∫X f(x) dμ(x) for all f ∈ Cc(X).
{\displaystyle \psi (f)=\int _{X}f(x)d\mu (x)\quad {\text{ for all }}f\in \mathrm {C} _{\mathrm {c} }(X).} Then, this functional is positive (the integral of any positive function is a positive number). Moreover, any positive functional on this space has this form, as follows from the Riesz–Markov–Kakutani representation theorem. == Positive linear functionals (C*-algebras) == Let M {\displaystyle M} be a C*-algebra (more generally, an operator system in a C*-algebra A {\displaystyle A} ) with identity 1. {\displaystyle 1.} Let M + {\displaystyle M^{+}} denote the set of positive elements in M . {\displaystyle M.} A linear functional ρ {\displaystyle \rho } on M {\displaystyle M} is said to be positive if ρ ( a ) ≥ 0 , {\displaystyle \rho (a)\geq 0,} for all a ∈ M + . {\displaystyle a\in M^{+}.} Theorem. A linear functional ρ {\displaystyle \rho } on M {\displaystyle M} is positive if and only if ρ {\displaystyle \rho } is bounded and ‖ ρ ‖ = ρ ( 1 ) . {\displaystyle \|\rho \|=\rho (1).} === Cauchy–Schwarz inequality === If ρ {\displaystyle \rho } is a positive linear functional on a C*-algebra A , {\displaystyle A,} then one may define a semidefinite sesquilinear form on A {\displaystyle A} by ⟨ a , b ⟩ = ρ ( b ∗ a ) . {\displaystyle \langle a,b\rangle =\rho (b^{\ast }a).} Thus from the Cauchy–Schwarz inequality we have | ρ ( b ∗ a ) | 2 ≤ ρ ( a ∗ a ) ⋅ ρ ( b ∗ b ) . {\displaystyle \left|\rho (b^{\ast }a)\right|^{2}\leq \rho (a^{\ast }a)\cdot \rho (b^{\ast }b).} == Applications to economics == Given a space C {\displaystyle C} , a price system can be viewed as a continuous, positive, linear functional on C {\displaystyle C} . == See also == Positive element – Group with a compatible partial order Positive linear operator – Concept in functional analysis == References == == Bibliography == Kadison, Richard, Fundamentals of the Theory of Operator Algebras, Vol.
I : Elementary Theory, American Mathematical Society. ISBN 978-0821808191. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Wikipedia/Positive_linear_functional
In mathematics, in the field of functional analysis, a Minkowski functional (after Hermann Minkowski) or gauge function is a function that recovers a notion of distance on a linear space. If K {\textstyle K} is a subset of a real or complex vector space X , {\textstyle X,} then the Minkowski functional or gauge of K {\textstyle K} is defined to be the function p K : X → [ 0 , ∞ ] , {\textstyle p_{K}:X\to [0,\infty ],} valued in the extended real numbers, defined by p K ( x ) := inf { r ∈ R : r > 0 and x ∈ r K } for every x ∈ X , {\displaystyle p_{K}(x):=\inf\{r\in \mathbb {R} :r>0{\text{ and }}x\in rK\}\quad {\text{ for every }}x\in X,} where the infimum of the empty set is defined to be positive infinity ∞ {\textstyle \,\infty \,} (which is not a real number so that p K ( x ) {\textstyle p_{K}(x)} would then not be real-valued). The set K {\textstyle K} is often assumed/picked to have properties, such as being an absorbing disk in X {\textstyle X} , that guarantee that p K {\textstyle p_{K}} will be a real-valued seminorm on X . {\textstyle X.} In fact, every seminorm p {\textstyle p} on X {\textstyle X} is equal to the Minkowski functional (that is, p = p K {\textstyle p=p_{K}} ) of any subset K {\textstyle K} of X {\textstyle X} satisfying { x ∈ X : p ( x ) < 1 } ⊆ K ⊆ { x ∈ X : p ( x ) ≤ 1 } {\displaystyle \{x\in X:p(x)<1\}\subseteq K\subseteq \{x\in X:p(x)\leq 1\}} (where all three of these sets are necessarily absorbing in X {\textstyle X} and the first and last are also disks). Thus every seminorm (which is a function defined by purely algebraic properties) can be associated (non-uniquely) with an absorbing disk (which is a set with certain geometric properties) and conversely, every absorbing disk can be associated with its Minkowski functional (which will necessarily be a seminorm). These relationships between seminorms, Minkowski functionals, and absorbing disks is a major reason why Minkowski functionals are studied and used in functional analysis. 
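The defining infimum lends itself to a direct numerical sketch. The following is an illustration, not part of the article: assuming only a membership oracle for a star-shaped set K containing the origin (so that membership of x/r in K is monotone in r), the gauge can be approximated by bisection. All function names here (`minkowski_functional`, `in_K`, `unit_ball`) are invented for this example.

```python
import math

def minkowski_functional(x, in_K, lo=1e-9, hi=1e9, tol=1e-9):
    """Approximate p_K(x) = inf{r > 0 : x in rK} by bisection.

    Assumes K is star-shaped about the origin with 0 in its core, so that
    x/r in K for all r >= p_K(x) (membership is monotone in r).
    `in_K` is a membership oracle for K.  For x = 0 the loop collapses to
    `lo`, which approximates the true value p_K(0) = 0.
    """
    if not in_K(tuple(xi / hi for xi in x)):
        return math.inf  # x is not absorbed within the search range
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if in_K(tuple(xi / mid for xi in x)):
            hi = mid
        else:
            lo = mid
    return hi

# K = closed Euclidean unit ball in R^2, so p_K is the Euclidean norm:
unit_ball = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
print(round(minkowski_functional((3.0, 4.0), unit_ball), 6))  # ≈ 5.0, the norm of (3, 4)
```

This reproduces Example 1 below: the gauge of the closed unit ball of a norm is the norm itself.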
In particular, through these relationships, Minkowski functionals allow one to "translate" certain geometric properties of a subset of X {\textstyle X} into certain algebraic properties of a function on X . {\textstyle X.} The Minkowski functional is always non-negative (meaning p K ≥ 0 {\textstyle p_{K}\geq 0} ). This property of being nonnegative stands in contrast to other classes of functions, such as sublinear functions and real linear functionals, that do allow negative values. However, p K {\textstyle p_{K}} might not be real-valued since for any given x ∈ X , {\textstyle x\in X,} the value p K ( x ) {\textstyle p_{K}(x)} is a real number if and only if { r > 0 : x ∈ r K } {\textstyle \{r>0:x\in rK\}} is not empty. Consequently, K {\textstyle K} is usually assumed to have properties (such as being absorbing in X , {\textstyle X,} for instance) that will guarantee that p K {\textstyle p_{K}} is real-valued. == Definition == Let K {\textstyle K} be a subset of a real or complex vector space X . {\textstyle X.} Define the gauge of K {\textstyle K} or the Minkowski functional associated with or induced by K {\textstyle K} as being the function p K : X → [ 0 , ∞ ] , {\textstyle p_{K}:X\to [0,\infty ],} valued in the extended real numbers, defined by p K ( x ) := inf { r > 0 : x ∈ r K } , {\displaystyle p_{K}(x):=\inf\{r>0:x\in rK\},} (recall that the infimum of the empty set is ∞ {\textstyle \,\infty } , that is, inf ∅ = ∞ {\textstyle \inf \varnothing =\infty } ). Here, { r > 0 : x ∈ r K } {\textstyle \{r>0:x\in rK\}} is shorthand for { r ∈ R : r > 0 and x ∈ r K } . {\textstyle \{r\in \mathbb {R} :r>0{\text{ and }}x\in rK\}.} For any x ∈ X , {\textstyle x\in X,} p K ( x ) ≠ ∞ {\textstyle p_{K}(x)\neq \infty } if and only if { r > 0 : x ∈ r K } {\textstyle \{r>0:x\in rK\}} is not empty.
The arithmetic operations on R {\textstyle \mathbb {R} } can be extended to operate on ± ∞ , {\textstyle \pm \infty ,} where r ± ∞ := 0 {\textstyle {\frac {r}{\pm \infty }}:=0} for all non-zero real − ∞ < r < ∞ . {\textstyle -\infty <r<\infty .} The products 0 ⋅ ∞ {\textstyle 0\cdot \infty } and 0 ⋅ − ∞ {\textstyle 0\cdot -\infty } remain undefined. === Some conditions making a gauge real-valued === In the field of convex analysis, the map p K {\textstyle p_{K}} taking on the value of ∞ {\textstyle \,\infty \,} is not necessarily an issue. However, in functional analysis p K {\textstyle p_{K}} is almost always real-valued (that is, to never take on the value of ∞ {\textstyle \,\infty \,} ), which happens if and only if the set { r > 0 : x ∈ r K } {\textstyle \{r>0:x\in rK\}} is non-empty for every x ∈ X . {\textstyle x\in X.} In order for p K {\textstyle p_{K}} to be real-valued, it suffices for the origin of X {\textstyle X} to belong to the algebraic interior or core of K {\textstyle K} in X . {\textstyle X.} If K {\textstyle K} is absorbing in X , {\textstyle X,} where recall that this implies that 0 ∈ K , {\textstyle 0\in K,} then the origin belongs to the algebraic interior of K {\textstyle K} in X {\textstyle X} and thus p K {\textstyle p_{K}} is real-valued. Characterizations of when p K {\textstyle p_{K}} is real-valued are given below. == Motivating examples == === Example 1 === Consider a normed vector space ( X , ‖ ⋅ ‖ ) , {\textstyle (X,\|\,\cdot \,\|),} with the norm ‖ ⋅ ‖ {\textstyle \|\,\cdot \,\|} and let U := { x ∈ X : ‖ x ‖ ≤ 1 } {\textstyle U:=\{x\in X:\|x\|\leq 1\}} be the unit ball in X . {\textstyle X.} Then for every x ∈ X , {\textstyle x\in X,} ‖ x ‖ = p U ( x ) . {\textstyle \|x\|=p_{U}(x).} Thus the Minkowski functional p U {\textstyle p_{U}} is just the norm on X . {\textstyle X.} === Example 2 === Let X {\textstyle X} be a vector space without topology with underlying scalar field K . 
{\textstyle \mathbb {K} .} Let f : X → K {\textstyle f:X\to \mathbb {K} } be any linear functional on X {\textstyle X} (not necessarily continuous). Fix a > 0. {\textstyle a>0.} Let K {\textstyle K} be the set K := { x ∈ X : | f ( x ) | ≤ a } {\displaystyle K:=\{x\in X:|f(x)|\leq a\}} and let p K {\textstyle p_{K}} be the Minkowski functional of K . {\textstyle K.} Then p K ( x ) = 1 a | f ( x ) | for all x ∈ X . {\displaystyle p_{K}(x)={\frac {1}{a}}|f(x)|\quad {\text{ for all }}x\in X.} The function p K {\textstyle p_{K}} has the following properties: It is subadditive: p K ( x + y ) ≤ p K ( x ) + p K ( y ) . {\textstyle p_{K}(x+y)\leq p_{K}(x)+p_{K}(y).} It is absolutely homogeneous: p K ( s x ) = | s | p K ( x ) {\textstyle p_{K}(sx)=|s|p_{K}(x)} for all scalars s . {\textstyle s.} It is nonnegative: p K ≥ 0. {\textstyle p_{K}\geq 0.} Therefore, p K {\textstyle p_{K}} is a seminorm on X , {\textstyle X,} with an induced topology. This is characteristic of Minkowski functionals defined via "nice" sets. There is a one-to-one correspondence between seminorms and the Minkowski functionals given by such sets. What is meant precisely by "nice" is discussed in the section below. Notice that, in contrast to a stronger requirement for a norm, p K ( x ) = 0 {\textstyle p_{K}(x)=0} need not imply x = 0. {\textstyle x=0.} In the above example, one can take a nonzero x {\textstyle x} from the kernel of f . {\textstyle f.} Consequently, the resulting topology need not be Hausdorff. == Common conditions guaranteeing gauges are seminorms == To guarantee that p K ( 0 ) = 0 , {\textstyle p_{K}(0)=0,} it will henceforth be assumed that 0 ∈ K . {\textstyle 0\in K.} In order for p K {\textstyle p_{K}} to be a seminorm, it suffices for K {\textstyle K} to be a disk (that is, convex and balanced) and absorbing in X , {\textstyle X,} which are the most common assumptions placed on K .
{\textstyle K.} More generally, if K {\textstyle K} is convex and the origin belongs to the algebraic interior of K , {\textstyle K,} then p K {\textstyle p_{K}} is a nonnegative sublinear functional on X , {\textstyle X,} which implies in particular that it is subadditive and positive homogeneous. If K {\textstyle K} is absorbing in X {\textstyle X} then p [ 0 , 1 ] K {\textstyle p_{[0,1]K}} is positive homogeneous, meaning that p [ 0 , 1 ] K ( s x ) = s p [ 0 , 1 ] K ( x ) {\textstyle p_{[0,1]K}(sx)=sp_{[0,1]K}(x)} for all real s ≥ 0 , {\textstyle s\geq 0,} where [ 0 , 1 ] K = { t k : t ∈ [ 0 , 1 ] , k ∈ K } . {\textstyle [0,1]K=\{tk:t\in [0,1],k\in K\}.} If q {\textstyle q} is a nonnegative real-valued function on X {\textstyle X} that is positive homogeneous, then the sets U := { x ∈ X : q ( x ) < 1 } {\textstyle U:=\{x\in X:q(x)<1\}} and D := { x ∈ X : q ( x ) ≤ 1 } {\textstyle D:=\{x\in X:q(x)\leq 1\}} satisfy [ 0 , 1 ] U = U {\textstyle [0,1]U=U} and [ 0 , 1 ] D = D ; {\textstyle [0,1]D=D;} if in addition q {\textstyle q} is absolutely homogeneous then both U {\textstyle U} and D {\textstyle D} are balanced. === Gauges of absorbing disks === Arguably the most common requirements placed on a set K {\textstyle K} to guarantee that p K {\textstyle p_{K}} is a seminorm are that K {\textstyle K} be an absorbing disk in X . {\textstyle X.} Due to how common these assumptions are, the properties of a Minkowski functional p K {\textstyle p_{K}} when K {\textstyle K} is an absorbing disk will now be investigated. Since all of the results mentioned above made few (if any) assumptions on K , {\textstyle K,} they can be applied in this special case. === Algebraic properties === Let X {\textstyle X} be a real or complex vector space and let K {\textstyle K} be an absorbing disk in X . {\textstyle X.} p K {\textstyle p_{K}} is a seminorm on X . 
{\textstyle X.} p K {\textstyle p_{K}} is a norm on X {\textstyle X} if and only if K {\textstyle K} does not contain a non-trivial vector subspace. p s K = 1 | s | p K {\textstyle p_{sK}={\frac {1}{|s|}}p_{K}} for any scalar s ≠ 0. {\textstyle s\neq 0.} If J {\textstyle J} is an absorbing disk in X {\textstyle X} and J ⊆ K {\textstyle J\subseteq K} then p K ≤ p J . {\textstyle p_{K}\leq p_{J}.} If K {\textstyle K} is a set satisfying { x ∈ X : p ( x ) < 1 } ⊆ K ⊆ { x ∈ X : p ( x ) ≤ 1 } {\textstyle \{x\in X:p(x)<1\}\;\subseteq \;K\;\subseteq \;\{x\in X:p(x)\leq 1\}} then K {\textstyle K} is absorbing in X {\textstyle X} and p = p K , {\textstyle p=p_{K},} where p K {\textstyle p_{K}} is the Minkowski functional associated with K ; {\textstyle K;} that is, it is the gauge of K . {\textstyle K.} In particular, if K {\textstyle K} is as above and q {\textstyle q} is any seminorm on X , {\textstyle X,} then q = p {\textstyle q=p} if and only if { x ∈ X : q ( x ) < 1 } ⊆ K ⊆ { x ∈ X : q ( x ) ≤ 1 } . {\textstyle \{x\in X:q(x)<1\}\;\subseteq \;K\;\subseteq \;\{x\in X:q(x)\leq 1\}.} If x ∈ X {\textstyle x\in X} satisfies p K ( x ) < 1 {\textstyle p_{K}(x)<1} then x ∈ K . {\textstyle x\in K.} === Topological properties === Assume that X {\textstyle X} is a (real or complex) topological vector space (TVS) (not necessarily Hausdorff or locally convex) and let K {\textstyle K} be an absorbing disk in X . {\textstyle X.} Then Int X ⁡ K ⊆ { x ∈ X : p K ( x ) < 1 } ⊆ K ⊆ { x ∈ X : p K ( x ) ≤ 1 } ⊆ Cl X ⁡ K , {\displaystyle \operatorname {Int} _{X}K\;\subseteq \;\{x\in X:p_{K}(x)<1\}\;\subseteq \;K\;\subseteq \;\{x\in X:p_{K}(x)\leq 1\}\;\subseteq \;\operatorname {Cl} _{X}K,} where Int X ⁡ K {\textstyle \operatorname {Int} _{X}K} is the topological interior and Cl X ⁡ K {\textstyle \operatorname {Cl} _{X}K} is the topological closure of K {\textstyle K} in X . 
{\textstyle X.} Importantly, it was not assumed that p K {\textstyle p_{K}} was continuous nor was it assumed that K {\textstyle K} had any topological properties. Moreover, the Minkowski functional p K {\textstyle p_{K}} is continuous if and only if K {\textstyle K} is a neighborhood of the origin in X . {\textstyle X.} If p K {\textstyle p_{K}} is continuous then Int X ⁡ K = { x ∈ X : p K ( x ) < 1 } and Cl X ⁡ K = { x ∈ X : p K ( x ) ≤ 1 } . {\displaystyle \operatorname {Int} _{X}K=\{x\in X:p_{K}(x)<1\}\quad {\text{ and }}\quad \operatorname {Cl} _{X}K=\{x\in X:p_{K}(x)\leq 1\}.} == Minimal requirements on the set == This section will investigate the most general case of the gauge of any subset K {\textstyle K} of X . {\textstyle X.} The more common special case where K {\textstyle K} is assumed to be an absorbing disk in X {\textstyle X} was discussed above. === Properties === All results in this section may be applied to the case where K {\textstyle K} is an absorbing disk. Throughout, K {\textstyle K} is any subset of X . {\textstyle X.} === Examples === If L {\textstyle {\mathcal {L}}} is a non-empty collection of subsets of X {\textstyle X} then p ∪ L ( x ) = inf { p L ( x ) : L ∈ L } {\textstyle p_{\cup {\mathcal {L}}}(x)=\inf \left\{p_{L}(x):L\in {\mathcal {L}}\right\}} for all x ∈ X , {\textstyle x\in X,} where ∪ L = def ⋃ L ∈ L L . {\textstyle \cup {\mathcal {L}}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\textstyle \bigcup \limits _{L\in {\mathcal {L}}}}L.} Thus p K ∪ L ( x ) = min { p K ( x ) , p L ( x ) } {\textstyle p_{K\cup L}(x)=\min \left\{p_{K}(x),p_{L}(x)\right\}} for all x ∈ X . 
{\textstyle x\in X.} If L {\textstyle {\mathcal {L}}} is a non-empty collection of subsets of X {\textstyle X} and I ⊆ X {\textstyle I\subseteq X} satisfies { x ∈ X : p L ( x ) < 1 for all L ∈ L } ⊆ I ⊆ { x ∈ X : p L ( x ) ≤ 1 for all L ∈ L } {\displaystyle \left\{x\in X:p_{L}(x)<1{\text{ for all }}L\in {\mathcal {L}}\right\}\quad \subseteq \quad I\quad \subseteq \quad \left\{x\in X:p_{L}(x)\leq 1{\text{ for all }}L\in {\mathcal {L}}\right\}} then p I ( x ) = sup { p L ( x ) : L ∈ L } {\textstyle p_{I}(x)=\sup \left\{p_{L}(x):L\in {\mathcal {L}}\right\}} for all x ∈ X . {\textstyle x\in X.} The following examples show that the containment ( 0 , R ] K ⊆ ⋂ e > 0 ( 0 , R + e ) K {\textstyle (0,R]K\;\subseteq \;{\textstyle \bigcap \limits _{e>0}}(0,R+e)K} could be proper. Example: If R = 0 {\textstyle R=0} and K = X {\textstyle K=X} then ( 0 , R ] K = ( 0 , 0 ] X = ∅ X = ∅ {\textstyle (0,R]K=(0,0]X=\varnothing X=\varnothing } but ⋂ e > 0 ( 0 , e ) K = ⋂ e > 0 X = X , {\textstyle {\textstyle \bigcap \limits _{e>0}}(0,e)K={\textstyle \bigcap \limits _{e>0}}X=X,} which shows that it is possible for ( 0 , R ] K {\textstyle (0,R]K} to be a proper subset of ⋂ e > 0 ( 0 , R + e ) K {\textstyle {\textstyle \bigcap \limits _{e>0}}(0,R+e)K} when R = 0. {\textstyle R=0.} ◼ {\textstyle \blacksquare } The next example shows that the containment can be proper when R = 1 ; {\textstyle R=1;} the example may be generalized to any real R > 0. {\textstyle R>0.} Assuming that [ 0 , 1 ] K ⊆ K , {\textstyle [0,1]K\subseteq K,} the following example is representative of how it happens that x ∈ X {\textstyle x\in X} satisfies p K ( x ) = 1 {\textstyle p_{K}(x)=1} but x ∉ ( 0 , 1 ] K . {\textstyle x\not \in (0,1]K.} Example: Let x ∈ X {\textstyle x\in X} be non-zero and let K = [ 0 , 1 ) x {\textstyle K=[0,1)x} so that [ 0 , 1 ] K = K {\textstyle [0,1]K=K} and x ∉ K . {\textstyle x\not \in K.} From x ∉ ( 0 , 1 ) K = K {\textstyle x\not \in (0,1)K=K} it follows that p K ( x ) ≥ 1.
{\textstyle p_{K}(x)\geq 1.} That p K ( x ) ≤ 1 {\textstyle p_{K}(x)\leq 1} follows from observing that for every e > 0 , {\textstyle e>0,} ( 0 , 1 + e ) K = [ 0 , 1 + e ) ( [ 0 , 1 ) x ) = [ 0 , 1 + e ) x , {\textstyle (0,1+e)K=[0,1+e)([0,1)x)=[0,1+e)x,} which contains x . {\textstyle x.} Thus p K ( x ) = 1 {\textstyle p_{K}(x)=1} and x ∈ ⋂ e > 0 ( 0 , 1 + e ) K . {\textstyle x\in {\textstyle \bigcap \limits _{e>0}}(0,1+e)K.} However, ( 0 , 1 ] K = ( 0 , 1 ] ( [ 0 , 1 ) x ) = [ 0 , 1 ) x = K {\textstyle (0,1]K=(0,1]([0,1)x)=[0,1)x=K} so that x ∉ ( 0 , 1 ] K , {\textstyle x\not \in (0,1]K,} as desired. ◼ {\textstyle \blacksquare } === Positive homogeneity characterizes Minkowski functionals === The next theorem shows that Minkowski functionals are exactly those functions f : X → [ 0 , ∞ ] {\textstyle f:X\to [0,\infty ]} that have a certain purely algebraic property that is commonly encountered. This theorem can be extended to characterize certain classes of [ − ∞ , ∞ ] {\textstyle [-\infty ,\infty ]} -valued maps (for example, real-valued sublinear functions) in terms of Minkowski functionals. For instance, it can be used to describe how every real homogeneous function f : X → R {\textstyle f:X\to \mathbb {R} } (such as linear functionals) can be written in terms of a unique Minkowski functional having a certain property. === Characterizing Minkowski functionals on star sets === === Characterizing Minkowski functionals that are seminorms === In this next theorem, which follows immediately from the statements above, K {\textstyle K} is not assumed to be absorbing in X {\textstyle X} and instead, it is deduced that ( 0 , 1 ) K {\textstyle (0,1)K} is absorbing when p K {\textstyle p_{K}} is a seminorm. 
It is also not assumed that K {\textstyle K} is balanced (which is a property that K {\textstyle K} is often required to have); in its place is the weaker condition that ( 0 , 1 ) s K ⊆ ( 0 , 1 ) K {\textstyle (0,1)sK\subseteq (0,1)K} for all scalars s {\textstyle s} satisfying | s | = 1. {\textstyle |s|=1.} The common requirement that K {\textstyle K} be convex is also weakened to only requiring that ( 0 , 1 ) K {\textstyle (0,1)K} be convex. === Positive sublinear functions and Minkowski functionals === It may be shown that a real-valued subadditive function f : X → R {\textstyle f:X\to \mathbb {R} } on an arbitrary topological vector space X {\textstyle X} is continuous at the origin if and only if it is uniformly continuous; if in addition f {\textstyle f} is nonnegative, then f {\textstyle f} is continuous if and only if V := { x ∈ X : f ( x ) < 1 } {\textstyle V:=\{x\in X:f(x)<1\}} is an open neighborhood in X . {\textstyle X.} If f : X → R {\textstyle f:X\to \mathbb {R} } is subadditive and satisfies f ( 0 ) = 0 , {\textstyle f(0)=0,} then f {\textstyle f} is continuous if and only if its absolute value | f | : X → [ 0 , ∞ ) {\textstyle |f|:X\to [0,\infty )} is continuous. A nonnegative sublinear function is a nonnegative homogeneous function f : X → [ 0 , ∞ ) {\textstyle f:X\to [0,\infty )} that satisfies the triangle inequality. It follows immediately from the results below that for such a function f , {\textstyle f,} if V := { x ∈ X : f ( x ) < 1 } {\textstyle V:=\{x\in X:f(x)<1\}} then f = p V . {\textstyle f=p_{V}.} Given K ⊆ X , {\textstyle K\subseteq X,} the Minkowski functional p K {\textstyle p_{K}} is a sublinear function if and only if it is real-valued and subadditive, which happens if and only if ( 0 , ∞ ) K = X {\textstyle (0,\infty )K=X} and ( 0 , 1 ) K {\textstyle (0,1)K} is convex.
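The identity f = p_V can be sanity-checked numerically. The sketch below is illustrative only (the sample sublinear function f(x, y) = max(|x|, 2|y|) and all names are invented for this example): it approximates the gauge of V = {f < 1} by bisection, using that (0, 1)V ⊆ V makes membership of x/r in V monotone in r, and compares the result with f.

```python
def gauge(x, in_V, hi=1e6, tol=1e-9):
    """Bisection approximation of p_V(x) = inf{r > 0 : x in rV} for a
    star-shaped V containing the origin (membership monotone in r)."""
    lo = tol
    if not in_V((x[0] / hi, x[1] / hi)):
        return float("inf")  # x is not absorbed within the search range
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if in_V((x[0] / mid, x[1] / mid)):
            hi = mid
        else:
            lo = mid
    return hi

# A nonnegative sublinear function and its open "unit ball" V = {f < 1}:
f = lambda x: max(abs(x[0]), 2.0 * abs(x[1]))
in_V = lambda x: f(x) < 1.0

for x in [(1.0, 0.0), (0.0, 1.0), (3.0, -2.0)]:
    assert abs(gauge(x, in_V) - f(x)) < 1e-6  # f = p_V at each sample point
print("f coincides with the gauge of {f < 1} on the sample points")
```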
=== Correspondence between open convex sets and positive continuous sublinear functions === == See also == Asymmetric norm – Generalization of the concept of a norm Auxiliary normed space Cauchy's functional equation – Functional equation Finest locally convex topology – Vector space with a topology defined by convex open sets Finsler manifold – Generalization of Riemannian manifolds Hadwiger's theorem – Theorem in integral geometry Hugo Hadwiger – Swiss mathematician (1908–1981) Locally convex topological vector space – Vector space with a topology defined by convex open sets Morphological image processing – Theory and technique for handling geometrical structures Norm (mathematics) – Length in a vector space Seminorm – Mathematical function Topological vector space – Vector space with a notion of nearness == Notes == == References == Berberian, Sterling K. (1974). Lectures in Functional Analysis and Operator Theory. Graduate Texts in Mathematics. Vol. 15. New York: Springer. ISBN 978-0-387-90081-0. OCLC 878109401. Bourbaki, Nicolas (1987) [1981]. Topological Vector Spaces: Chapters 1–5. Éléments de mathématique. Translated by Eggleston, H.G.; Madan, S. Berlin New York: Springer-Verlag. ISBN 3-540-13627-4. OCLC 17499190. Conway, John (1990). A course in functional analysis. Graduate Texts in Mathematics. Vol. 96 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-97245-9. OCLC 21195908. Diestel, Joe (2008). The Metric Theory of Tensor Products: Grothendieck's Résumé Revisited. Vol. 16. Providence, R.I.: American Mathematical Society. ISBN 9781470424831. OCLC 185095773. Dineen, Seán (1981). Complex Analysis in Locally Convex Spaces. North-Holland Mathematics Studies. Vol. 57. Amsterdam New York New York: North-Holland Pub. Co., Elsevier Science Pub. Co. ISBN 978-0-08-087168-4. OCLC 16549589. Dunford, Nelson; Schwartz, Jacob T. (1988). Linear Operators.
Pure and applied mathematics. Vol. 1. New York: Wiley-Interscience. ISBN 978-0-471-60848-6. OCLC 18412261. Edwards, Robert E. (1995). Functional Analysis: Theory and Applications. New York: Dover Publications. ISBN 978-0-486-68143-6. OCLC 30593138. Grothendieck, Alexander (1973). Topological Vector Spaces. Translated by Chaljub, Orlando. New York: Gordon and Breach Science Publishers. ISBN 978-0-677-30020-7. OCLC 886098. Hogbe-Nlend, Henri (1977). Bornologies and Functional Analysis: Introductory Course on the Theory of Duality Topology-Bornology and its use in Functional Analysis. North-Holland Mathematics Studies. Vol. 26. Amsterdam New York New York: North Holland. ISBN 978-0-08-087137-0. MR 0500064. OCLC 316549583. Hogbe-Nlend, Henri; Moscatelli, V. B. (1981). Nuclear and Conuclear Spaces: Introductory Course on Nuclear and Conuclear Spaces in the Light of the Duality "topology-bornology". North-Holland Mathematics Studies. Vol. 52. Amsterdam New York New York: North Holland. ISBN 978-0-08-087163-9. OCLC 316564345. Husain, Taqdir; Khaleelulla, S. M. (1978). Barrelledness in Topological and Ordered Vector Spaces. Lecture Notes in Mathematics. Vol. 692. Berlin, New York, Heidelberg: Springer-Verlag. ISBN 978-3-540-09096-0. OCLC 4493665. Keller, Hans (1974). Differential Calculus in Locally Convex Spaces. Lecture Notes in Mathematics. Vol. 417. Berlin New York: Springer-Verlag. ISBN 978-3-540-06962-1. OCLC 1103033. Khaleelulla, S. M. (1982). Counterexamples in Topological Vector Spaces. Lecture Notes in Mathematics. Vol. 936. Berlin, Heidelberg, New York: Springer-Verlag. ISBN 978-3-540-11565-6. OCLC 8588370. Kubrusly, Carlos S. (2011). The Elements of Operator Theory (Second ed.). Boston: Birkhäuser. ISBN 978-0-8176-4998-2. OCLC 710154895. Jarchow, Hans (1981). Locally convex spaces. Stuttgart: B.G. Teubner. ISBN 978-3-519-02224-4. OCLC 8210342. Köthe, Gottfried (1983) [1969]. Topological Vector Spaces I. Grundlehren der mathematischen Wissenschaften. Vol. 159. 
Translated by Garling, D.J.H. New York: Springer Science & Business Media. ISBN 978-3-642-64988-2. MR 0248498. OCLC 840293704. Köthe, Gottfried (1979). Topological Vector Spaces II. Grundlehren der mathematischen Wissenschaften. Vol. 237. New York: Springer Science & Business Media. ISBN 978-0-387-90400-9. OCLC 180577972. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Pietsch, Albrecht (1979). Nuclear Locally Convex Spaces. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 66 (Second ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-05644-9. OCLC 539541. Robertson, Alex P.; Robertson, Wendy J. (1980). Topological Vector Spaces. Cambridge Tracts in Mathematics. Vol. 53. Cambridge England: Cambridge University Press. ISBN 978-0-521-29882-7. OCLC 589250. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. Thompson, Anthony C. (1996). Minkowski Geometry. Encyclopedia of Mathematics and Its Applications. Cambridge University Press. ISBN 0-521-40472-X. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365. Schaefer, H. H. (1999). Topological Vector Spaces. New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Swartz, Charles (1992). An introduction to Functional Analysis. New York: M. Dekker. ISBN 978-0-8247-8643-4. OCLC 24909067. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. 
OCLC 853623322. Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114. Wong, Yau-Chuen (1979). Schwartz Spaces, Nuclear Spaces, and Tensor Products. Lecture Notes in Mathematics. Vol. 726. Berlin New York: Springer-Verlag. ISBN 978-3-540-09513-2. OCLC 5126158. == Further reading == F. Simeski, A. M. P. Boelens, and M. Ihme. "Modeling Adsorption in Silica Pores via Minkowski Functionals and Molecular Electrostatic Moments". Energies 13 (22) 5976 (2020). doi:10.3390/en13225976.
Wikipedia/Minkowski_functional
In mathematics, particularly functional analysis, spaces of linear maps between two vector spaces can be endowed with a variety of topologies. Studying spaces of linear maps and these topologies can give insight into the spaces themselves. The article operator topologies discusses topologies on spaces of linear maps between normed spaces, whereas this article discusses topologies on such spaces in the more general setting of topological vector spaces (TVSs). == Topologies of uniform convergence on arbitrary spaces of maps == Throughout, the following is assumed: T {\displaystyle T} is any non-empty set and G {\displaystyle {\mathcal {G}}} is a non-empty collection of subsets of T {\displaystyle T} directed by subset inclusion (i.e. for any G , H ∈ G {\displaystyle G,H\in {\mathcal {G}}} there exists some K ∈ G {\displaystyle K\in {\mathcal {G}}} such that G ∪ H ⊆ K {\displaystyle G\cup H\subseteq K} ). Y {\displaystyle Y} is a topological vector space (not necessarily Hausdorff or locally convex). N {\displaystyle {\mathcal {N}}} is a basis of neighborhoods of 0 in Y . {\displaystyle Y.} F {\displaystyle F} is a vector subspace of Y T = ∏ t ∈ T Y , {\displaystyle Y^{T}=\prod _{t\in T}Y,} which denotes the set of all Y {\displaystyle Y} -valued functions f : T → Y {\displaystyle f:T\to Y} with domain T . {\displaystyle T.} === 𝒢-topology === The following sets will constitute the basic open subsets of topologies on spaces of linear maps. For any subsets G ⊆ T {\displaystyle G\subseteq T} and N ⊆ Y , {\displaystyle N\subseteq Y,} let U ( G , N ) := { f ∈ F : f ( G ) ⊆ N } .
{\displaystyle {\mathcal {U}}(G,N):=\{f\in F:f(G)\subseteq N\}.} The family { U ( G , N ) : G ∈ G , N ∈ N } {\displaystyle \{{\mathcal {U}}(G,N):G\in {\mathcal {G}},N\in {\mathcal {N}}\}} forms a neighborhood basis at the origin for a unique translation-invariant topology on F , {\displaystyle F,} where this topology is not necessarily a vector topology (that is, it might not make F {\displaystyle F} into a TVS). This topology does not depend on the neighborhood basis N {\displaystyle {\mathcal {N}}} that was chosen and it is known as the topology of uniform convergence on the sets in G {\displaystyle {\mathcal {G}}} or as the G {\displaystyle {\mathcal {G}}} -topology. However, this name is frequently changed according to the types of sets that make up G {\displaystyle {\mathcal {G}}} (e.g. the "topology of uniform convergence on compact sets" or the "topology of compact convergence", see the footnote for more details). A subset G 1 {\displaystyle {\mathcal {G}}_{1}} of G {\displaystyle {\mathcal {G}}} is said to be fundamental with respect to G {\displaystyle {\mathcal {G}}} if each G ∈ G {\displaystyle G\in {\mathcal {G}}} is a subset of some element in G 1 . {\displaystyle {\mathcal {G}}_{1}.} In this case, the collection G {\displaystyle {\mathcal {G}}} can be replaced by G 1 {\displaystyle {\mathcal {G}}_{1}} without changing the topology on F . {\displaystyle F.} One may also replace G {\displaystyle {\mathcal {G}}} with the collection of all subsets of all finite unions of elements of G {\displaystyle {\mathcal {G}}} without changing the resulting G {\displaystyle {\mathcal {G}}} -topology on F . {\displaystyle F.} Call a subset B {\displaystyle B} of T {\displaystyle T} F {\displaystyle F} -bounded if f ( B ) {\displaystyle f(B)} is a bounded subset of Y {\displaystyle Y} for every f ∈ F . {\displaystyle f\in F.} Properties Properties of the basic open sets will now be described, so assume that G ∈ G {\displaystyle G\in {\mathcal {G}}} and N ∈ N . 
{\displaystyle N\in {\mathcal {N}}.} Then U ( G , N ) {\displaystyle {\mathcal {U}}(G,N)} is an absorbing subset of F {\displaystyle F} if and only if for all f ∈ F , {\displaystyle f\in F,} N {\displaystyle N} absorbs f ( G ) {\displaystyle f(G)} . If N {\displaystyle N} is balanced (respectively, convex) then so is U ( G , N ) . {\displaystyle {\mathcal {U}}(G,N).} The equality U ( ∅ , N ) = F {\displaystyle {\mathcal {U}}(\varnothing ,N)=F} always holds. If s {\displaystyle s} is a scalar then s U ( G , N ) = U ( G , s N ) , {\displaystyle s{\mathcal {U}}(G,N)={\mathcal {U}}(G,sN),} so that in particular, − U ( G , N ) = U ( G , − N ) . {\displaystyle -{\mathcal {U}}(G,N)={\mathcal {U}}(G,-N).} Moreover, U ( G , N ) − U ( G , N ) ⊆ U ( G , N − N ) {\displaystyle {\mathcal {U}}(G,N)-{\mathcal {U}}(G,N)\subseteq {\mathcal {U}}(G,N-N)} and similarly U ( G , M ) + U ( G , N ) ⊆ U ( G , M + N ) . {\displaystyle {\mathcal {U}}(G,M)+{\mathcal {U}}(G,N)\subseteq {\mathcal {U}}(G,M+N).} For any subsets G , H ⊆ X {\displaystyle G,H\subseteq X} and any non-empty subsets M , N ⊆ Y , {\displaystyle M,N\subseteq Y,} U ( G ∪ H , M ∩ N ) ⊆ U ( G , M ) ∩ U ( H , N ) {\displaystyle {\mathcal {U}}(G\cup H,M\cap N)\subseteq {\mathcal {U}}(G,M)\cap {\mathcal {U}}(H,N)} which implies: if M ⊆ N {\displaystyle M\subseteq N} then U ( G , M ) ⊆ U ( G , N ) . {\displaystyle {\mathcal {U}}(G,M)\subseteq {\mathcal {U}}(G,N).} if G ⊆ H {\displaystyle G\subseteq H} then U ( H , N ) ⊆ U ( G , N ) . {\displaystyle {\mathcal {U}}(H,N)\subseteq {\mathcal {U}}(G,N).} For any M , N ∈ N {\displaystyle M,N\in {\mathcal {N}}} and subsets G , H , K {\displaystyle G,H,K} of T , {\displaystyle T,} if G ∪ H ⊆ K {\displaystyle G\cup H\subseteq K} then U ( K , M ∩ N ) ⊆ U ( G , M ) ∩ U ( H , N ) . 
{\displaystyle {\mathcal {U}}(K,M\cap N)\subseteq {\mathcal {U}}(G,M)\cap {\mathcal {U}}(H,N).} For any family S {\displaystyle {\mathcal {S}}} of subsets of T {\displaystyle T} and any family M {\displaystyle {\mathcal {M}}} of neighborhoods of the origin in Y , {\displaystyle Y,} U ( ⋃ S ∈ S S , N ) = ⋂ S ∈ S U ( S , N ) and U ( G , ⋂ M ∈ M M ) = ⋂ M ∈ M U ( G , M ) . {\displaystyle {\mathcal {U}}\left(\bigcup _{S\in {\mathcal {S}}}S,N\right)=\bigcap _{S\in {\mathcal {S}}}{\mathcal {U}}(S,N)\qquad {\text{ and }}\qquad {\mathcal {U}}\left(G,\bigcap _{M\in {\mathcal {M}}}M\right)=\bigcap _{M\in {\mathcal {M}}}{\mathcal {U}}(G,M).} === Uniform structure === For any G ⊆ T {\displaystyle G\subseteq T} and any entourage U ⊆ Y × Y {\displaystyle U\subseteq Y\times Y} of Y {\displaystyle Y} (where Y {\displaystyle Y} is endowed with its canonical uniformity), let W ( G , U ) := { ( u , v ) ∈ Y T × Y T : ( u ( g ) , v ( g ) ) ∈ U for every g ∈ G } . {\displaystyle {\mathcal {W}}(G,U)~:=~\left\{(u,v)\in Y^{T}\times Y^{T}~:~(u(g),v(g))\in U\;{\text{ for every }}g\in G\right\}.} Given G ⊆ T , {\displaystyle G\subseteq T,} the family of all sets W ( G , U ) {\displaystyle {\mathcal {W}}(G,U)} as U {\displaystyle U} ranges over any fundamental system of entourages of Y {\displaystyle Y} forms a fundamental system of entourages for a uniform structure on Y T {\displaystyle Y^{T}} called the uniformity of uniform convergence on G {\displaystyle G} or simply the G {\displaystyle G} -convergence uniform structure. The G {\displaystyle {\mathcal {G}}} -convergence uniform structure is the least upper bound of all G {\displaystyle G} -convergence uniform structures as G ∈ G {\displaystyle G\in {\mathcal {G}}} ranges over G . {\displaystyle {\mathcal {G}}.} Nets and uniform convergence Let f ∈ F {\displaystyle f\in F} and let f ∙ = ( f i ) i ∈ I {\displaystyle f_{\bullet }=\left(f_{i}\right)_{i\in I}} be a net in F .
{\displaystyle F.} Then for any subset G {\displaystyle G} of T , {\displaystyle T,} say that f ∙ {\displaystyle f_{\bullet }} converges uniformly to f {\displaystyle f} on G {\displaystyle G} if for every N ∈ N {\displaystyle N\in {\mathcal {N}}} there exists some i 0 ∈ I {\displaystyle i_{0}\in I} such that for every i ∈ I {\displaystyle i\in I} satisfying i ≥ i 0 , {\displaystyle i\geq i_{0},} f i − f ∈ U ( G , N ) {\displaystyle f_{i}-f\in {\mathcal {U}}(G,N)} (or equivalently, f i ( g ) − f ( g ) ∈ N {\displaystyle f_{i}(g)-f(g)\in N} for every g ∈ G {\displaystyle g\in G} ). === Inherited properties === Local convexity If Y {\displaystyle Y} is locally convex then so is the G {\displaystyle {\mathcal {G}}} -topology on F {\displaystyle F} and if ( p i ) i ∈ I {\displaystyle \left(p_{i}\right)_{i\in I}} is a family of continuous seminorms generating this topology on Y {\displaystyle Y} then the G {\displaystyle {\mathcal {G}}} -topology is induced by the following family of seminorms: p G , i ( f ) := sup x ∈ G p i ( f ( x ) ) , {\displaystyle p_{G,i}(f):=\sup _{x\in G}p_{i}(f(x)),} as G {\displaystyle G} varies over G {\displaystyle {\mathcal {G}}} and i {\displaystyle i} varies over I {\displaystyle I} . Hausdorffness If Y {\displaystyle Y} is Hausdorff and T = ⋃ G ∈ G G {\displaystyle T=\bigcup _{G\in {\mathcal {G}}}G} then the G {\displaystyle {\mathcal {G}}} -topology on F {\displaystyle F} is Hausdorff. Suppose that T {\displaystyle T} is a topological space. If Y {\displaystyle Y} is Hausdorff and F {\displaystyle F} is the vector subspace of Y T {\displaystyle Y^{T}} consisting of all continuous maps that are bounded on every G ∈ G {\displaystyle G\in {\mathcal {G}}} and if ⋃ G ∈ G G {\displaystyle \bigcup _{G\in {\mathcal {G}}}G} is dense in T {\displaystyle T} then the G {\displaystyle {\mathcal {G}}} -topology on F {\displaystyle F} is Hausdorff.
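The generating seminorms of the local-convexity statement above can be made concrete in the simplest case Y = R with the single seminorm p(y) = |y|, where p_G(f) = sup_{x∈G} |f(x)|. The following is a minimal numerical sketch; the sampled set G and the test functions f and g are illustrative choices, not anything fixed by the article.

```python
import math

# Sup-seminorm p_G(f) = sup_{x in G} |f(x)| for scalar-valued f on a sampled
# bounded set G; this is the seminorm generating the G-topology when Y = R
# carries the seminorm p(y) = |y|.
G = [k / 100 for k in range(-100, 101)]          # sampled G = [-1, 1]

def p_G(f):
    return max(abs(f(x)) for x in G)

f = lambda x: math.sin(3 * x)
g = lambda x: x ** 2 - 0.5

# Seminorm axioms, checked on this finite sample:
assert abs(p_G(lambda x: 2.5 * f(x)) - 2.5 * p_G(f)) < 1e-12   # homogeneity
assert p_G(lambda x: f(x) + g(x)) <= p_G(f) + p_G(g) + 1e-12   # subadditivity
assert p_G(lambda x: 0.0) == 0.0                               # p_G(0) = 0
```

A basic neighborhood U(G, N) with N the ε-ball then corresponds exactly to the condition p_G(f) < ε.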
Boundedness A subset H {\displaystyle H} of F {\displaystyle F} is bounded in the G {\displaystyle {\mathcal {G}}} -topology if and only if for every G ∈ G , {\displaystyle G\in {\mathcal {G}},} H ( G ) = ⋃ h ∈ H h ( G ) {\displaystyle H(G)=\bigcup _{h\in H}h(G)} is bounded in Y . {\displaystyle Y.} === Examples of 𝒢-topologies === Pointwise convergence If we let G {\displaystyle {\mathcal {G}}} be the set of all finite subsets of T {\displaystyle T} then the G {\displaystyle {\mathcal {G}}} -topology on F {\displaystyle F} is called the topology of pointwise convergence. The topology of pointwise convergence on F {\displaystyle F} is identical to the subspace topology that F {\displaystyle F} inherits from Y T {\displaystyle Y^{T}} when Y T {\displaystyle Y^{T}} is endowed with the usual product topology. If X {\displaystyle X} is a non-trivial completely regular Hausdorff topological space and C ( X ) {\displaystyle C(X)} is the space of all real (or complex) valued continuous functions on X , {\displaystyle X,} the topology of pointwise convergence on C ( X ) {\displaystyle C(X)} is metrizable if and only if X {\displaystyle X} is countable. == 𝒢-topologies on spaces of continuous linear maps == Throughout this section we will assume that X {\displaystyle X} and Y {\displaystyle Y} are topological vector spaces. G {\displaystyle {\mathcal {G}}} will be a non-empty collection of subsets of X {\displaystyle X} directed by inclusion. L ( X ; Y ) {\displaystyle L(X;Y)} will denote the vector space of all continuous linear maps from X {\displaystyle X} into Y . {\displaystyle Y.} If L ( X ; Y ) {\displaystyle L(X;Y)} is given the G {\displaystyle {\mathcal {G}}} -topology inherited from Y X {\displaystyle Y^{X}} then this space with this topology is denoted by L G ( X ; Y ) {\displaystyle L_{\mathcal {G}}(X;Y)} . 
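The gap between pointwise convergence and uniform convergence on a larger set is already visible for the classical sequence f_n(x) = x^n on [0, 1). The sketch below samples the interval on a finite grid (an illustrative stand-in for the full set G), so the supremum over the grid only mirrors, rather than proves, the failure of uniform convergence.

```python
# f_n(x) = x**n on G = [0, 1): f_n -> 0 pointwise, but not uniformly on G,
# since sup_{x in G} |f_n(x)| = 1 for every n.  On this finite sample the
# sup decays only like 0.999**n, reflecting that failure.
G = [k / 1000 for k in range(1000)]              # sample of [0, 1)

def f_n(n, x):
    return x ** n

# Pointwise: at each fixed x bounded away from 1 the values tend to 0.
assert all(f_n(200, x) < 1e-3 for x in G if x <= 0.9)
# Not uniform on G: the sup over the sample stays large for the same n.
assert max(f_n(200, x) for x in G) > 0.8
# Uniform on the smaller set G0 = [0, 1/2]: sup = (1/2)**n -> 0 quickly.
G0 = [x for x in G if x <= 0.5]
assert max(f_n(200, x) for x in G0) < 1e-50
```

With 𝒢 the finite subsets of [0, 1) the sequence converges in the 𝒢-topology (pointwise convergence), but not in the topology of uniform convergence on G = [0, 1).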
The continuous dual space of a topological vector space X {\displaystyle X} over the field F {\displaystyle \mathbb {F} } (which we will assume to be real or complex numbers) is the vector space L ( X ; F ) {\displaystyle L(X;\mathbb {F} )} and is denoted by X ′ {\displaystyle X^{\prime }} . The G {\displaystyle {\mathcal {G}}} -topology on L ( X ; Y ) {\displaystyle L(X;Y)} is compatible with the vector space structure of L ( X ; Y ) {\displaystyle L(X;Y)} if and only if for all G ∈ G {\displaystyle G\in {\mathcal {G}}} and all f ∈ L ( X ; Y ) {\displaystyle f\in L(X;Y)} the set f ( G ) {\displaystyle f(G)} is bounded in Y , {\displaystyle Y,} which we will assume to be the case for the rest of the article. Note in particular that this is the case if G {\displaystyle {\mathcal {G}}} consists of (von-Neumann) bounded subsets of X . {\displaystyle X.} === Assumptions on 𝒢 === Assumptions that guarantee a vector topology ( G {\displaystyle {\mathcal {G}}} is directed): G {\displaystyle {\mathcal {G}}} will be a non-empty collection of subsets of X {\displaystyle X} directed by (subset) inclusion. That is, for any G , H ∈ G , {\displaystyle G,H\in {\mathcal {G}},} there exists K ∈ G {\displaystyle K\in {\mathcal {G}}} such that G ∪ H ⊆ K {\displaystyle G\cup H\subseteq K} . The above assumption guarantees that the collection of sets U ( G , N ) {\displaystyle {\mathcal {U}}(G,N)} forms a filter base. The next assumption will guarantee that the sets U ( G , N ) {\displaystyle {\mathcal {U}}(G,N)} are balanced. Every TVS has a neighborhood basis at 0 consisting of balanced sets so this assumption isn't burdensome. ( N ∈ N {\displaystyle N\in {\mathcal {N}}} are balanced): N {\displaystyle {\mathcal {N}}} is a neighborhood basis of the origin in Y {\displaystyle Y} that consists entirely of balanced sets. The following assumption is very commonly made because it will guarantee that each set U ( G , N ) {\displaystyle {\mathcal {U}}(G,N)} is absorbing in L ( X ; Y ) .
{\displaystyle L(X;Y).} ( G ∈ G {\displaystyle G\in {\mathcal {G}}} are bounded): G {\displaystyle {\mathcal {G}}} is assumed to consist entirely of bounded subsets of X . {\displaystyle X.} The next theorem gives ways in which G {\displaystyle {\mathcal {G}}} can be modified without changing the resulting G {\displaystyle {\mathcal {G}}} -topology on Y . {\displaystyle Y.} Common assumptions Some authors (e.g. Narici) require that G {\displaystyle {\mathcal {G}}} satisfy the following condition, which implies, in particular, that G {\displaystyle {\mathcal {G}}} is directed by subset inclusion: G {\displaystyle {\mathcal {G}}} is assumed to be closed with respect to the formation of subsets of finite unions of sets in G {\displaystyle {\mathcal {G}}} (i.e. every subset of every finite union of sets in G {\displaystyle {\mathcal {G}}} belongs to G {\displaystyle {\mathcal {G}}} ). Some authors (e.g. Trèves) require that G {\displaystyle {\mathcal {G}}} be directed under subset inclusion and that it satisfy the following condition: If G ∈ G {\displaystyle G\in {\mathcal {G}}} and s {\displaystyle s} is a scalar then there exists an H ∈ G {\displaystyle H\in {\mathcal {G}}} such that s G ⊆ H . {\displaystyle sG\subseteq H.} If G {\displaystyle {\mathcal {G}}} is a bornology on X , {\displaystyle X,} which is often the case, then these axioms are satisfied. If G {\displaystyle {\mathcal {G}}} is a saturated family of bounded subsets of X {\displaystyle X} then these axioms are also satisfied. === Properties === Hausdorffness A subset of a TVS X {\displaystyle X} whose linear span is a dense subset of X {\displaystyle X} is said to be a total subset of X . {\displaystyle X.} If G {\displaystyle {\mathcal {G}}} is a family of subsets of a TVS T {\displaystyle T} then G {\displaystyle {\mathcal {G}}} is said to be total in T {\displaystyle T} if the linear span of ⋃ G ∈ G G {\displaystyle \bigcup _{G\in {\mathcal {G}}}G} is dense in T .
{\displaystyle T.} If F {\displaystyle F} is the vector subspace of Y T {\displaystyle Y^{T}} consisting of all continuous linear maps that are bounded on every G ∈ G , {\displaystyle G\in {\mathcal {G}},} then the G {\displaystyle {\mathcal {G}}} -topology on F {\displaystyle F} is Hausdorff if Y {\displaystyle Y} is Hausdorff and G {\displaystyle {\mathcal {G}}} is total in T . {\displaystyle T.} Completeness For the following theorems, suppose that X {\displaystyle X} is a topological vector space and Y {\displaystyle Y} is a locally convex Hausdorff space and G {\displaystyle {\mathcal {G}}} is a collection of bounded subsets of X {\displaystyle X} that covers X , {\displaystyle X,} is directed by subset inclusion, and satisfies the following condition: if G ∈ G {\displaystyle G\in {\mathcal {G}}} and s {\displaystyle s} is a scalar then there exists an H ∈ G {\displaystyle H\in {\mathcal {G}}} such that s G ⊆ H . {\displaystyle sG\subseteq H.} L G ( X ; Y ) {\displaystyle L_{\mathcal {G}}(X;Y)} is complete if If X {\displaystyle X} is a Mackey space then L G ( X ; Y ) {\displaystyle L_{\mathcal {G}}(X;Y)} is complete if and only if both X G ′ {\displaystyle X_{\mathcal {G}}^{\prime }} and Y {\displaystyle Y} are complete. If X {\displaystyle X} is barrelled then L G ( X ; Y ) {\displaystyle L_{\mathcal {G}}(X;Y)} is Hausdorff and quasi-complete. Let X {\displaystyle X} and Y {\displaystyle Y} be TVSs with Y {\displaystyle Y} quasi-complete and assume that (1) X {\displaystyle X} is barreled, or else (2) X {\displaystyle X} is a Baire space and X {\displaystyle X} and Y {\displaystyle Y} are locally convex. If G {\displaystyle {\mathcal {G}}} covers X {\displaystyle X} then every closed equicontinuous subset of L ( X ; Y ) {\displaystyle L(X;Y)} is complete in L G ( X ; Y ) {\displaystyle L_{\mathcal {G}}(X;Y)} and L G ( X ; Y ) {\displaystyle L_{\mathcal {G}}(X;Y)} is quasi-complete.
Let X {\displaystyle X} be a bornological space, Y {\displaystyle Y} a locally convex space, and G {\displaystyle {\mathcal {G}}} a family of bounded subsets of X {\displaystyle X} such that the range of every null sequence in X {\displaystyle X} is contained in some G ∈ G . {\displaystyle G\in {\mathcal {G}}.} If Y {\displaystyle Y} is quasi-complete (respectively, complete) then so is L G ( X ; Y ) {\displaystyle L_{\mathcal {G}}(X;Y)} . Boundedness Let X {\displaystyle X} and Y {\displaystyle Y} be topological vector spaces and H {\displaystyle H} be a subset of L ( X ; Y ) . {\displaystyle L(X;Y).} Then the following are equivalent: H {\displaystyle H} is bounded in L G ( X ; Y ) {\displaystyle L_{\mathcal {G}}(X;Y)} ; For every G ∈ G , {\displaystyle G\in {\mathcal {G}},} H ( G ) := ⋃ h ∈ H h ( G ) {\displaystyle H(G):=\bigcup _{h\in H}h(G)} is bounded in Y {\displaystyle Y} ; For every neighborhood V {\displaystyle V} of the origin in Y {\displaystyle Y} the set ⋂ h ∈ H h − 1 ( V ) {\displaystyle \bigcap _{h\in H}h^{-1}(V)} absorbs every G ∈ G . {\displaystyle G\in {\mathcal {G}}.} If G {\displaystyle {\mathcal {G}}} is a collection of bounded subsets of X {\displaystyle X} whose union is total in X {\displaystyle X} then every equicontinuous subset of L ( X ; Y ) {\displaystyle L(X;Y)} is bounded in the G {\displaystyle {\mathcal {G}}} -topology. Furthermore, if X {\displaystyle X} and Y {\displaystyle Y} are locally convex Hausdorff spaces then if H {\displaystyle H} is bounded in L σ ( X ; Y ) {\displaystyle L_{\sigma }(X;Y)} (that is, pointwise bounded or simply bounded) then it is bounded in the topology of uniform convergence on the convex, balanced, bounded, complete subsets of X . 
{\displaystyle X.} if X {\displaystyle X} is quasi-complete (meaning that closed and bounded subsets are complete), then the bounded subsets of L ( X ; Y ) {\displaystyle L(X;Y)} are identical for all G {\displaystyle {\mathcal {G}}} -topologies where G {\displaystyle {\mathcal {G}}} is any family of bounded subsets of X {\displaystyle X} covering X . {\displaystyle X.} === Examples === ==== The topology of pointwise convergence ==== By letting G {\displaystyle {\mathcal {G}}} be the set of all finite subsets of X , {\displaystyle X,} L ( X ; Y ) {\displaystyle L(X;Y)} will have the weak topology on L ( X ; Y ) {\displaystyle L(X;Y)} or the topology of pointwise convergence or the topology of simple convergence and L ( X ; Y ) {\displaystyle L(X;Y)} with this topology is denoted by L σ ( X ; Y ) {\displaystyle L_{\sigma }(X;Y)} . Unfortunately, this topology is also sometimes called the strong operator topology, which may lead to ambiguity; for this reason, this article will avoid referring to this topology by this name. A subset of L ( X ; Y ) {\displaystyle L(X;Y)} is called simply bounded or weakly bounded if it is bounded in L σ ( X ; Y ) {\displaystyle L_{\sigma }(X;Y)} . The weak topology on L ( X ; Y ) {\displaystyle L(X;Y)} has the following properties: If X {\displaystyle X} is separable (that is, it has a countable dense subset) and if Y {\displaystyle Y} is a metrizable topological vector space then every equicontinuous subset H {\displaystyle H} of L σ ( X ; Y ) {\displaystyle L_{\sigma }(X;Y)} is metrizable; if in addition Y {\displaystyle Y} is separable then so is H . {\displaystyle H.} So in particular, on every equicontinuous subset of L ( X ; Y ) , {\displaystyle L(X;Y),} the topology of pointwise convergence is metrizable. Let Y X {\displaystyle Y^{X}} denote the space of all functions from X {\displaystyle X} into Y .
{\displaystyle Y.} If L ( X ; Y ) {\displaystyle L(X;Y)} is given the topology of pointwise convergence then the space of all linear maps (continuous or not) from X {\displaystyle X} into Y {\displaystyle Y} is closed in Y X {\displaystyle Y^{X}} . In addition, L ( X ; Y ) {\displaystyle L(X;Y)} is dense in the space of all linear maps (continuous or not) from X {\displaystyle X} into Y . {\displaystyle Y.} Suppose X {\displaystyle X} and Y {\displaystyle Y} are locally convex. Any simply bounded subset of L ( X ; Y ) {\displaystyle L(X;Y)} is bounded when L ( X ; Y ) {\displaystyle L(X;Y)} has the topology of uniform convergence on convex, balanced, bounded, complete subsets of X . {\displaystyle X.} If in addition X {\displaystyle X} is quasi-complete then the families of bounded subsets of L ( X ; Y ) {\displaystyle L(X;Y)} are identical for all G {\displaystyle {\mathcal {G}}} -topologies on L ( X ; Y ) {\displaystyle L(X;Y)} such that G {\displaystyle {\mathcal {G}}} is a family of bounded sets covering X . {\displaystyle X.} Equicontinuous subsets The weak-closure of an equicontinuous subset of L ( X ; Y ) {\displaystyle L(X;Y)} is equicontinuous. If Y {\displaystyle Y} is locally convex, then the convex balanced hull of an equicontinuous subset of L ( X ; Y ) {\displaystyle L(X;Y)} is equicontinuous. Let X {\displaystyle X} and Y {\displaystyle Y} be TVSs and assume that (1) X {\displaystyle X} is barreled, or else (2) X {\displaystyle X} is a Baire space and X {\displaystyle X} and Y {\displaystyle Y} are locally convex. Then every simply bounded subset of L ( X ; Y ) {\displaystyle L(X;Y)} is equicontinuous. On an equicontinuous subset H {\displaystyle H} of L ( X ; Y ) , {\displaystyle L(X;Y),} the following topologies are identical: (1) topology of pointwise convergence on a total subset of X {\displaystyle X} ; (2) the topology of pointwise convergence; (3) the topology of precompact convergence.
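In infinite dimensions the topology of pointwise (simple) convergence on L(X; Y) is strictly weaker than convergence in operator norm. A standard illustration, sketched here on a finite truncation of ℓ² (the dimension and the chosen vector are illustrative assumptions): the coordinate-truncation projections P_n converge to the identity at each fixed vector, while the operator-norm distance ‖P_n − I‖ remains 1.

```python
import math

dim = 1000
x = [1.0 / (k + 1) for k in range(dim)]   # a fixed vector with square-summable tail

def P(n, v):
    # truncation projection: keep the first n coordinates, zero out the rest
    return [vi if i < n else 0.0 for i, vi in enumerate(v)]

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

def tail(n):
    # ||P_n x - x||: the part of x beyond coordinate n
    return norm([xi - pi for xi, pi in zip(x, P(n, x))])

# Pointwise convergence at the fixed vector x: ||P_n x - x|| decreases to 0.
assert tail(900) < tail(100) < tail(10)
assert tail(dim) == 0.0
# No norm convergence: the basis vector e_500 witnesses ||P_500 - I|| >= 1.
e = [0.0] * dim
e[500] = 1.0
assert norm([a - b for a, b in zip(P(500, e), e)]) == 1.0
```

In a genuinely infinite-dimensional ℓ² the same computation shows P_n → I in L_σ but not in L_b; in the finite truncation the distinction only holds up to the cutoff dimension.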
==== Compact convergence ==== By letting G {\displaystyle {\mathcal {G}}} be the set of all compact subsets of X , {\displaystyle X,} L ( X ; Y ) {\displaystyle L(X;Y)} will have the topology of compact convergence or the topology of uniform convergence on compact sets and L ( X ; Y ) {\displaystyle L(X;Y)} with this topology is denoted by L c ( X ; Y ) {\displaystyle L_{c}(X;Y)} . The topology of compact convergence on L ( X ; Y ) {\displaystyle L(X;Y)} has the following properties: If X {\displaystyle X} is a Fréchet space or an LF-space and if Y {\displaystyle Y} is a complete locally convex Hausdorff space then L c ( X ; Y ) {\displaystyle L_{c}(X;Y)} is complete. On equicontinuous subsets of L ( X ; Y ) , {\displaystyle L(X;Y),} the following topologies coincide: the topology of pointwise convergence on a dense subset of X , {\displaystyle X,} the topology of pointwise convergence on X , {\displaystyle X,} the topology of compact convergence, and the topology of precompact convergence. If X {\displaystyle X} is a Montel space and Y {\displaystyle Y} is a topological vector space, then L c ( X ; Y ) {\displaystyle L_{c}(X;Y)} and L b ( X ; Y ) {\displaystyle L_{b}(X;Y)} have identical topologies. ==== Topology of bounded convergence ==== By letting G {\displaystyle {\mathcal {G}}} be the set of all bounded subsets of X , {\displaystyle X,} L ( X ; Y ) {\displaystyle L(X;Y)} will have the topology of bounded convergence on X {\displaystyle X} or the topology of uniform convergence on bounded sets and L ( X ; Y ) {\displaystyle L(X;Y)} with this topology is denoted by L b ( X ; Y ) {\displaystyle L_{b}(X;Y)} . The topology of bounded convergence on L ( X ; Y ) {\displaystyle L(X;Y)} has the following properties: If X {\displaystyle X} is a bornological space and if Y {\displaystyle Y} is a complete locally convex Hausdorff space then L b ( X ; Y ) {\displaystyle L_{b}(X;Y)} is complete.
If X {\displaystyle X} and Y {\displaystyle Y} are both normed spaces then the topology on L ( X ; Y ) {\displaystyle L(X;Y)} induced by the usual operator norm is identical to the topology on L b ( X ; Y ) {\displaystyle L_{b}(X;Y)} . In particular, if X {\displaystyle X} is a normed space then the usual norm topology on the continuous dual space X ′ {\displaystyle X^{\prime }} is identical to the topology of bounded convergence on X ′ {\displaystyle X^{\prime }} . Every equicontinuous subset of L ( X ; Y ) {\displaystyle L(X;Y)} is bounded in L b ( X ; Y ) {\displaystyle L_{b}(X;Y)} . == Polar topologies == Throughout, we assume that X {\displaystyle X} is a TVS. === 𝒢-topologies versus polar topologies === If X {\displaystyle X} is a TVS whose bounded subsets are exactly the same as its weakly bounded subsets (e.g. if X {\displaystyle X} is a Hausdorff locally convex space), then a G {\displaystyle {\mathcal {G}}} -topology on X ′ {\displaystyle X^{\prime }} (as defined in this article) is a polar topology and conversely, every polar topology is a G {\displaystyle {\mathcal {G}}} -topology. Consequently, in this case the results mentioned in this article can be applied to polar topologies. However, if X {\displaystyle X} is a TVS whose bounded subsets are not exactly the same as its weakly bounded subsets, then the notion of "bounded in X {\displaystyle X} " is stronger than the notion of " σ ( X , X ′ ) {\displaystyle \sigma \left(X,X^{\prime }\right)} -bounded in X {\displaystyle X} " (i.e. bounded in X {\displaystyle X} implies σ ( X , X ′ ) {\displaystyle \sigma \left(X,X^{\prime }\right)} -bounded in X {\displaystyle X} ) so that a G {\displaystyle {\mathcal {G}}} -topology on X ′ {\displaystyle X^{\prime }} (as defined in this article) is not necessarily a polar topology. One important difference is that polar topologies are always locally convex while G {\displaystyle {\mathcal {G}}} -topologies need not be.
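For normed X and Y the identification above says that the topology of bounded convergence is just the operator-norm topology. As a numerical sketch (the matrix A and the sampling resolution are illustrative choices, not a definitive method), the operator norm ‖A‖ = sup_{‖u‖=1} ‖Au‖ of a 2×2 matrix can be approximated by sampling the unit circle; for A = [[1, 1], [0, 1]] the supremum is the golden ratio.

```python
import math

A = [[1.0, 1.0], [0.0, 1.0]]

def apply_mat(A, u):
    return (A[0][0] * u[0] + A[0][1] * u[1],
            A[1][0] * u[0] + A[1][1] * u[1])

def op_norm(A, samples=100000):
    # crude sup of ||A u|| over sampled unit vectors u = (cos t, sin t)
    best = 0.0
    for k in range(samples):
        t = 2 * math.pi * k / samples
        v = apply_mat(A, (math.cos(t), math.sin(t)))
        best = max(best, math.hypot(v[0], v[1]))
    return best

# The largest singular value of [[1, 1], [0, 1]] is the golden ratio (1+sqrt 5)/2.
phi = (1 + math.sqrt(5)) / 2
assert abs(op_norm(A) - phi) < 1e-6
```

Convergence of a sequence of matrices in this norm is exactly uniform convergence on the (bounded) unit ball, matching the description of L_b(X; Y).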
Polar topologies have stronger results than the more general topologies of uniform convergence described in this article and we refer the reader to the main article: polar topology. We list here some of the most common polar topologies. === List of polar topologies === Suppose that X {\displaystyle X} is a TVS whose bounded subsets are the same as its weakly bounded subsets. Notation: If Δ ( Y , X ) {\displaystyle \Delta (Y,X)} denotes a polar topology on Y {\displaystyle Y} then Y {\displaystyle Y} endowed with this topology will be denoted by Y Δ ( Y , X ) {\displaystyle Y_{\Delta (Y,X)}} or simply Y Δ {\displaystyle Y_{\Delta }} (e.g. for σ ( Y , X ) {\displaystyle \sigma (Y,X)} we would have Δ = σ {\displaystyle \Delta =\sigma } so that Y σ ( Y , X ) {\displaystyle Y_{\sigma (Y,X)}} and Y σ {\displaystyle Y_{\sigma }} both denote Y {\displaystyle Y} endowed with σ ( Y , X ) {\displaystyle \sigma (Y,X)} ). == 𝒢-ℋ topologies on spaces of bilinear maps == We will let B ( X , Y ; Z ) {\displaystyle {\mathcal {B}}(X,Y;Z)} denote the space of separately continuous bilinear maps and B ( X , Y ; Z ) {\displaystyle B(X,Y;Z)} denote the space of continuous bilinear maps, where X , Y , {\displaystyle X,Y,} and Z {\displaystyle Z} are topological vector spaces over the same field (either the real or complex numbers). In an analogous way to how we placed a topology on L ( X ; Y ) {\displaystyle L(X;Y)} we can place a topology on B ( X , Y ; Z ) {\displaystyle {\mathcal {B}}(X,Y;Z)} and B ( X , Y ; Z ) {\displaystyle B(X,Y;Z)} . Let G {\displaystyle {\mathcal {G}}} (respectively, H {\displaystyle {\mathcal {H}}} ) be a family of subsets of X {\displaystyle X} (respectively, Y {\displaystyle Y} ) containing at least one non-empty set. Let G × H {\displaystyle {\mathcal {G}}\times {\mathcal {H}}} denote the collection of all sets G × H {\displaystyle G\times H} where G ∈ G , {\displaystyle G\in {\mathcal {G}},} H ∈ H .
{\displaystyle H\in {\mathcal {H}}.} We can place on Z X × Y {\displaystyle Z^{X\times Y}} the G × H {\displaystyle {\mathcal {G}}\times {\mathcal {H}}} -topology, and consequently on any of its subsets, in particular on B ( X , Y ; Z ) {\displaystyle B(X,Y;Z)} and on B ( X , Y ; Z ) {\displaystyle {\mathcal {B}}(X,Y;Z)} . This topology is known as the G − H {\displaystyle {\mathcal {G}}-{\mathcal {H}}} -topology or as the topology of uniform convergence on the products G × H {\displaystyle G\times H} of G × H {\displaystyle {\mathcal {G}}\times {\mathcal {H}}} . However, as before, this topology is not necessarily compatible with the vector space structure of B ( X , Y ; Z ) {\displaystyle {\mathcal {B}}(X,Y;Z)} or of B ( X , Y ; Z ) {\displaystyle B(X,Y;Z)} without the additional requirement that for all bilinear maps, b {\displaystyle b} in this space (that is, in B ( X , Y ; Z ) {\displaystyle {\mathcal {B}}(X,Y;Z)} or in B ( X , Y ; Z ) {\displaystyle B(X,Y;Z)} ) and for all G ∈ G {\displaystyle G\in {\mathcal {G}}} and H ∈ H , {\displaystyle H\in {\mathcal {H}},} the set b ( G , H ) {\displaystyle b(G,H)} is bounded in Z . {\displaystyle Z.} If both G {\displaystyle {\mathcal {G}}} and H {\displaystyle {\mathcal {H}}} consist of bounded sets then this requirement is automatically satisfied if we are topologizing B ( X , Y ; Z ) {\displaystyle B(X,Y;Z)} but this may not be the case if we are trying to topologize B ( X , Y ; Z ) {\displaystyle {\mathcal {B}}(X,Y;Z)} . The G − H {\displaystyle {\mathcal {G}}-{\mathcal {H}}} -topology on B ( X , Y ; Z ) {\displaystyle {\mathcal {B}}(X,Y;Z)} will be compatible with the vector space structure of B ( X , Y ; Z ) {\displaystyle {\mathcal {B}}(X,Y;Z)} if both G {\displaystyle {\mathcal {G}}} and H {\displaystyle {\mathcal {H}}} consist of bounded sets and any of the following conditions hold: X {\displaystyle X} and Y {\displaystyle Y} are barrelled spaces and Z {\displaystyle Z} is locally convex.
X {\displaystyle X} is an F-space, Y {\displaystyle Y} is metrizable, and Z {\displaystyle Z} is Hausdorff, in which case B ( X , Y ; Z ) = B ( X , Y ; Z ) . {\displaystyle {\mathcal {B}}(X,Y;Z)=B(X,Y;Z).} X , Y , {\displaystyle X,Y,} and Z {\displaystyle Z} are the strong duals of reflexive Fréchet spaces. X {\displaystyle X} is normed and Y {\displaystyle Y} and Z {\displaystyle Z} are the strong duals of reflexive Fréchet spaces. === The ε-topology === Suppose that X , Y , {\displaystyle X,Y,} and Z {\displaystyle Z} are locally convex spaces and let G ′ {\displaystyle {\mathcal {G}}^{\prime }} and H ′ {\displaystyle {\mathcal {H}}^{\prime }} be the collections of equicontinuous subsets of X ′ {\displaystyle X^{\prime }} and Y ′ {\displaystyle Y^{\prime }} , respectively. Then the G ′ − H ′ {\displaystyle {\mathcal {G}}^{\prime }-{\mathcal {H}}^{\prime }} -topology on B ( X b ( X ′ , X ) ′ , Y b ( X ′ , X ) ′ ; Z ) {\displaystyle {\mathcal {B}}\left(X_{b\left(X^{\prime },X\right)}^{\prime },Y_{b\left(X^{\prime },X\right)}^{\prime };Z\right)} will be a topological vector space topology. This topology is called the ε-topology and B ( X b ( X ′ , X ) ′ , Y b ( X ′ , X ) ′ ; Z ) {\displaystyle {\mathcal {B}}\left(X_{b\left(X^{\prime },X\right)}^{\prime },Y_{b\left(X^{\prime },X\right)}^{\prime };Z\right)} with this topology is denoted by B ϵ ( X b ( X ′ , X ) ′ , Y b ( X ′ , X ) ′ ; Z ) {\displaystyle {\mathcal {B}}_{\epsilon }\left(X_{b\left(X^{\prime },X\right)}^{\prime },Y_{b\left(X^{\prime },X\right)}^{\prime };Z\right)} or simply by B ϵ ( X b ′ , Y b ′ ; Z ) .
{\displaystyle {\mathcal {B}}_{\epsilon }\left(X_{b}^{\prime },Y_{b}^{\prime };Z\right).} Part of the importance of this vector space and this topology is that it contains many subspaces, such as B ( X σ ( X ′ , X ) ′ , Y σ ( X ′ , X ) ′ ; Z ) , {\displaystyle {\mathcal {B}}\left(X_{\sigma \left(X^{\prime },X\right)}^{\prime },Y_{\sigma \left(X^{\prime },X\right)}^{\prime };Z\right),} which we denote by B ( X σ ′ , Y σ ′ ; Z ) . {\displaystyle {\mathcal {B}}\left(X_{\sigma }^{\prime },Y_{\sigma }^{\prime };Z\right).} When this subspace is given the subspace topology of B ϵ ( X b ′ , Y b ′ ; Z ) {\displaystyle {\mathcal {B}}_{\epsilon }\left(X_{b}^{\prime },Y_{b}^{\prime };Z\right)} it is denoted by B ϵ ( X σ ′ , Y σ ′ ; Z ) . {\displaystyle {\mathcal {B}}_{\epsilon }\left(X_{\sigma }^{\prime },Y_{\sigma }^{\prime };Z\right).} In the instance where Z {\displaystyle Z} is the field of these vector spaces, B ( X σ ′ , Y σ ′ ) {\displaystyle {\mathcal {B}}\left(X_{\sigma }^{\prime },Y_{\sigma }^{\prime }\right)} is a tensor product of X {\displaystyle X} and Y . {\displaystyle Y.} In fact, if X {\displaystyle X} and Y {\displaystyle Y} are locally convex Hausdorff spaces then B ( X σ ′ , Y σ ′ ) {\displaystyle {\mathcal {B}}\left(X_{\sigma }^{\prime },Y_{\sigma }^{\prime }\right)} is vector space-isomorphic to L ( X σ ( X ′ , X ) ′ ; Y σ ( Y ′ , Y ) ) , {\displaystyle L\left(X_{\sigma \left(X^{\prime },X\right)}^{\prime };Y_{\sigma (Y^{\prime },Y)}\right),} which in turn is equal to L ( X τ ( X ′ , X ) ′ ; Y ) . {\displaystyle L\left(X_{\tau \left(X^{\prime },X\right)}^{\prime };Y\right).} These spaces have the following properties: If X {\displaystyle X} and Y {\displaystyle Y} are locally convex Hausdorff spaces then B ε ( X σ ′ , Y σ ′ ) {\displaystyle {\mathcal {B}}_{\varepsilon }\left(X_{\sigma }^{\prime },Y_{\sigma }^{\prime }\right)} is complete if and only if both X {\displaystyle X} and Y {\displaystyle Y} are complete.
If X {\displaystyle X} and Y {\displaystyle Y} are both normed (respectively, both Banach) then so is B ϵ ( X σ ′ , Y σ ′ ) {\displaystyle {\mathcal {B}}_{\epsilon }\left(X_{\sigma }^{\prime },Y_{\sigma }^{\prime }\right)} == See also == Bornological space – Space where bounded operators are continuous Bounded linear operator – Linear transformation between topological vector spaces Dual system Dual topology List of topologies – List of concrete topologies and topological spaces Modes of convergence – Property of a sequence or series Operator norm – Measure of the "size" of linear operators Polar topology – Dual space topology of uniform convergence on some sub-collection of bounded subsets Strong dual space – Continuous dual space endowed with the topology of uniform convergence on bounded sets Topologies on the set of operators on a Hilbert space Uniform convergence – Mode of convergence of a function sequence Uniform space – Topological space with a notion of uniform properties Weak topology – Mathematical term Vague topology == References == == Bibliography == Grothendieck, Alexander (1973). Topological Vector Spaces. Translated by Chaljub, Orlando. New York: Gordon and Breach Science Publishers. ISBN 978-0-677-30020-7. OCLC 886098. Hogbe-Nlend, Henri (1977). Bornologies and Functional Analysis: Introductory Course on the Theory of Duality Topology-Bornology and its use in Functional Analysis. North-Holland Mathematics Studies. Vol. 26. Amsterdam New York New York: North Holland. ISBN 978-0-08-087137-0. MR 0500064. OCLC 316549583. Jarchow, Hans (1981). Locally convex spaces. Stuttgart: B.G. Teubner. ISBN 978-3-519-02224-4. OCLC 8210342. Khaleelulla, S. M. (1982). Counterexamples in Topological Vector Spaces. Lecture Notes in Mathematics. Vol. 936. Berlin, Heidelberg, New York: Springer-Verlag. ISBN 978-3-540-11565-6. OCLC 8588370. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces.
Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Wikipedia/Topology_of_uniform_convergence
In mathematics, weak topology is an alternative term for certain initial topologies, often on topological vector spaces or spaces of linear operators, for instance on a Hilbert space. The term is most commonly used for the initial topology of a topological vector space (such as a normed vector space) with respect to its continuous dual. The remainder of this article will deal with this case, which is one of the concepts of functional analysis. One may call subsets of a topological vector space weakly closed (respectively, weakly compact, etc.) if they are closed (respectively, compact, etc.) with respect to the weak topology. Likewise, functions are sometimes called weakly continuous (respectively, weakly differentiable, weakly analytic, etc.) if they are continuous (respectively, differentiable, analytic, etc.) with respect to the weak topology. == History == Starting in the early 1900s, David Hilbert and Marcel Riesz made extensive use of weak convergence. The early pioneers of functional analysis did not elevate norm convergence above weak convergence and oftentimes viewed weak convergence as preferable. In 1929, Banach introduced weak convergence for normed spaces and also introduced the analogous weak-* convergence. The weak topology is called topologie faible in French and schwache Topologie in German. == The weak and strong topologies == Let K {\displaystyle \mathbb {K} } be a topological field, namely a field with a topology such that addition, multiplication, and division are continuous. In most applications K {\displaystyle \mathbb {K} } will be either the field of complex numbers or the field of real numbers with the familiar topologies. === Weak topology with respect to a pairing === Both the weak topology and the weak* topology are special cases of a more general construction for pairings, which we now describe. 
The benefit of this more general construction is that any definition or result proved for it applies to both the weak topology and the weak* topology, thereby removing the need to duplicate many definitions, theorem statements, and proofs. This is also why the weak* topology is frequently referred to simply as the "weak topology": it is just an instance of the weak topology in the setting of this more general construction. Suppose (X, Y, b) is a pairing of vector spaces over a topological field K {\displaystyle \mathbb {K} } (i.e. X and Y are vector spaces over K {\displaystyle \mathbb {K} } and b : X × Y → K {\displaystyle \mathbb {K} } is a bilinear map). Notation. For all x ∈ X, let b(x, •) : Y → K {\displaystyle \mathbb {K} } denote the linear functional on Y defined by y ↦ b(x, y). Similarly, for all y ∈ Y, let b(•, y) : X → K {\displaystyle \mathbb {K} } be defined by x ↦ b(x, y). Definition. The weak topology on X induced by Y (and b) is the weakest topology on X, denoted by 𝜎(X, Y, b) or simply 𝜎(X, Y), making all maps b(•, y) : X → K {\displaystyle \mathbb {K} } continuous, as y ranges over Y. The weak topology on Y is now automatically defined as described in the article Dual system. However, for clarity, we now repeat it. Definition. The weak topology on Y induced by X (and b) is the weakest topology on Y, denoted by 𝜎(Y, X, b) or simply 𝜎(Y, X), making all maps b(x, •) : Y → K {\displaystyle \mathbb {K} } continuous, as x ranges over X. If the field K {\displaystyle \mathbb {K} } has an absolute value |⋅|, then the weak topology 𝜎(X, Y, b) on X is induced by the family of seminorms, py : X → R {\displaystyle \mathbb {R} } , defined by py(x) := |b(x, y)| for all y ∈ Y and x ∈ X. This shows that weak topologies are locally convex. Assumption. We will henceforth assume that K {\displaystyle \mathbb {K} } is either the real numbers R {\displaystyle \mathbb {R} } or the complex numbers C {\displaystyle \mathbb {C} } . 
==== Canonical duality ==== We now consider the special case where Y is a vector subspace of the algebraic dual space of X (i.e. a vector space of linear functionals on X). There is a pairing, denoted by ( X , Y , ⟨ ⋅ , ⋅ ⟩ ) {\displaystyle (X,Y,\langle \cdot ,\cdot \rangle )} or ( X , Y ) {\displaystyle (X,Y)} , called the canonical pairing whose bilinear map ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } is the canonical evaluation map, defined by ⟨ x , x ′ ⟩ = x ′ ( x ) {\displaystyle \langle x,x'\rangle =x'(x)} for all x ∈ X {\displaystyle x\in X} and x ′ ∈ Y {\displaystyle x'\in Y} . Note in particular that ⟨ ⋅ , x ′ ⟩ {\displaystyle \langle \cdot ,x'\rangle } is just another way of denoting x ′ {\displaystyle x'} i.e. ⟨ ⋅ , x ′ ⟩ = x ′ ( ⋅ ) {\displaystyle \langle \cdot ,x'\rangle =x'(\cdot )} . Assumption. If Y is a vector subspace of the algebraic dual space of X then we will assume that they are associated with the canonical pairing ⟨X, Y⟩. In this case, the weak topology on X (resp. the weak topology on Y), denoted by 𝜎(X,Y) (resp. by 𝜎(Y,X)) is the weak topology on X (resp. on Y) with respect to the canonical pairing ⟨X, Y⟩. The topology σ(X,Y) is the initial topology of X with respect to Y. If Y is a vector space of linear functionals on X, then the continuous dual of X with respect to the topology σ(X,Y) is precisely equal to Y.(Rudin 1991, Theorem 3.10) ==== The weak and weak* topologies ==== Let X be a topological vector space (TVS) over K {\displaystyle \mathbb {K} } , that is, X is a K {\displaystyle \mathbb {K} } vector space equipped with a topology so that vector addition and scalar multiplication are continuous. We call the topology that X starts with the original, starting, or given topology (the reader is cautioned against using the terms "initial topology" and "strong topology" to refer to the original topology since these already have well-known meanings, so using them may cause confusion). 
We may define a possibly different topology on X using the topological or continuous dual space X ∗ {\displaystyle X^{*}} , which consists of all linear functionals from X into the base field K {\displaystyle \mathbb {K} } that are continuous with respect to the given topology. Recall that ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } is the canonical evaluation map defined by ⟨ x , x ′ ⟩ = x ′ ( x ) {\displaystyle \langle x,x'\rangle =x'(x)} for all x ∈ X {\displaystyle x\in X} and x ′ ∈ X ∗ {\displaystyle x'\in X^{*}} , where in particular, ⟨ ⋅ , x ′ ⟩ = x ′ ( ⋅ ) = x ′ {\displaystyle \langle \cdot ,x'\rangle =x'(\cdot )=x'} . Definition. The weak topology on X is the weak topology on X with respect to the canonical pairing ⟨ X , X ∗ ⟩ {\displaystyle \langle X,X^{*}\rangle } . That is, it is the weakest topology on X making all maps x ′ = ⟨ ⋅ , x ′ ⟩ : X → K {\displaystyle x'=\langle \cdot ,x'\rangle :X\to \mathbb {K} } continuous, as x ′ {\displaystyle x'} ranges over X ∗ {\displaystyle X^{*}} . Definition: The weak topology on X ∗ {\displaystyle X^{*}} is the weak topology on X ∗ {\displaystyle X^{*}} with respect to the canonical pairing ⟨ X , X ∗ ⟩ {\displaystyle \langle X,X^{*}\rangle } . That is, it is the weakest topology on X ∗ {\displaystyle X^{*}} making all maps ⟨ x , ⋅ ⟩ : X ∗ → K {\displaystyle \langle x,\cdot \rangle :X^{*}\to \mathbb {K} } continuous, as x ranges over X. This topology is also called the weak* topology. We give alternative definitions below. === Weak topology induced by the continuous dual space === Alternatively, the weak topology on a TVS X is the initial topology with respect to the family X ∗ {\displaystyle X^{*}} . In other words, it is the coarsest topology on X such that each element of X ∗ {\displaystyle X^{*}} remains a continuous function. 
A subbase for the weak topology is the collection of sets of the form ϕ − 1 ( U ) {\displaystyle \phi ^{-1}(U)} where ϕ ∈ X ∗ {\displaystyle \phi \in X^{*}} and U is an open subset of the base field K {\displaystyle \mathbb {K} } . In other words, a subset of X is open in the weak topology if and only if it can be written as a union of (possibly infinitely many) sets, each of which is an intersection of finitely many sets of the form ϕ − 1 ( U ) {\displaystyle \phi ^{-1}(U)} . From this point of view, the weak topology is the coarsest polar topology. === Weak convergence === The weak topology is characterized by the following condition: a net ( x λ ) {\displaystyle (x_{\lambda })} in X converges in the weak topology to the element x of X if and only if ϕ ( x λ ) {\displaystyle \phi (x_{\lambda })} converges to ϕ ( x ) {\displaystyle \phi (x)} in R {\displaystyle \mathbb {R} } or C {\displaystyle \mathbb {C} } for all ϕ ∈ X ∗ {\displaystyle \phi \in X^{*}} . In particular, if x n {\displaystyle x_{n}} is a sequence in X, then x n {\displaystyle x_{n}} converges weakly to x if φ ( x n ) → φ ( x ) {\displaystyle \varphi (x_{n})\to \varphi (x)} as n → ∞ for all φ ∈ X ∗ {\displaystyle \varphi \in X^{*}} . In this case, it is customary to write x n ⟶ w x {\displaystyle x_{n}{\overset {\mathrm {w} }{\longrightarrow }}x} or, sometimes, x n ⇀ x . {\displaystyle x_{n}\rightharpoonup x.} === Other properties === If X is equipped with the weak topology, then addition and scalar multiplication remain continuous operations, and X is a locally convex topological vector space. If X is a normed space, then the dual space X ∗ {\displaystyle X^{*}} is itself a normed vector space by using the norm ‖ ϕ ‖ = sup ‖ x ‖ ≤ 1 | ϕ ( x ) | . {\displaystyle \|\phi \|=\sup _{\|x\|\leq 1}|\phi (x)|.} This norm gives rise to a topology, called the strong topology, on X ∗ {\displaystyle X^{*}} . This is the topology of uniform convergence. 
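The sequence formulation of weak convergence above can be made concrete. The following sketch is our own illustration, not part of the article: it works in the sequence space l2, where by the Riesz representation every continuous functional is x ↦ ⟨x, y⟩ for some y in l2. The standard basis vectors e_n satisfy ⟨e_n, y⟩ = y_n → 0, so e_n ⇀ 0 weakly, while ‖e_n‖ = 1 for all n, so there is no norm convergence.

```python
# Illustrative sketch (not from the article): weak vs. norm convergence in l2.
# Continuous functionals on l2 are x -> <x, y> for y in l2 (Riesz), and the
# standard basis vectors pair against a fixed y to y_n -> 0 while keeping norm 1.

def inner(x, y):
    """Euclidean inner product of two equal-length sequences."""
    return sum(a * b for a, b in zip(x, y))

def basis_vector(n, dim):
    """The n-th standard basis vector e_n (0-indexed) in R^dim."""
    return [1.0 if i == n else 0.0 for i in range(dim)]

dim = 1000
y = [1.0 / (k + 1) for k in range(dim)]  # a fixed l2 element (truncated)

pairings = [inner(basis_vector(n, dim), y) for n in (0, 9, 99, 999)]
norms = [inner(basis_vector(n, dim), basis_vector(n, dim)) ** 0.5
         for n in (0, 9, 99, 999)]

print(pairings)  # [1.0, 0.1, 0.01, 0.001]: the weak pairings tend to 0
print(norms)     # [1.0, 1.0, 1.0, 1.0]: no norm convergence to 0
```

In finite dimensions the weak and norm topologies coincide, so the phenomenon above genuinely requires infinitely many independent directions.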
The uniform and strong topologies are generally different for other spaces of linear maps; see below. == Weak-* topology == The weak* topology is an important example of a polar topology. A space X can be embedded into its double dual X** by x ↦ { T x : X ∗ → K T x ( ϕ ) = ϕ ( x ) {\displaystyle x\mapsto {\begin{cases}T_{x}:X^{*}\to \mathbb {K} \\T_{x}(\phi )=\phi (x)\end{cases}}} Thus T : X → X ∗ ∗ {\displaystyle T:X\to X^{**}} is an injective linear mapping, though not necessarily surjective (spaces for which this canonical embedding is surjective are called reflexive). The weak-* topology on X ∗ {\displaystyle X^{*}} is the weak topology induced by the image of T : T ( X ) ⊂ X ∗ ∗ {\displaystyle T:T(X)\subset X^{**}} . In other words, it is the coarsest topology such that the maps Tx, defined by T x ( ϕ ) = ϕ ( x ) {\displaystyle T_{x}(\phi )=\phi (x)} from X ∗ {\displaystyle X^{*}} to the base field R {\displaystyle \mathbb {R} } or C {\displaystyle \mathbb {C} } remain continuous. Weak-* convergence A net ϕ λ {\displaystyle \phi _{\lambda }} in X ∗ {\displaystyle X^{*}} is convergent to ϕ {\displaystyle \phi } in the weak-* topology if it converges pointwise: ϕ λ ( x ) → ϕ ( x ) {\displaystyle \phi _{\lambda }(x)\to \phi (x)} for all x ∈ X {\displaystyle x\in X} . In particular, a sequence of ϕ n ∈ X ∗ {\displaystyle \phi _{n}\in X^{*}} converges to ϕ {\displaystyle \phi } provided that ϕ n ( x ) → ϕ ( x ) {\displaystyle \phi _{n}(x)\to \phi (x)} for all x ∈ X. In this case, one writes ϕ n → w ∗ ϕ {\displaystyle \phi _{n}{\overset {w^{*}}{\to }}\phi } as n → ∞. Weak-* convergence is sometimes called the simple convergence or the pointwise convergence. Indeed, it coincides with the pointwise convergence of linear functionals. === Properties === If X is a separable (i.e. 
has a countable dense subset) locally convex space and H is a norm-bounded subset of its continuous dual space, then H endowed with the weak* (subspace) topology is a metrizable topological space. However, for infinite-dimensional spaces, the metric cannot be translation-invariant. If X is a separable metrizable locally convex space then the weak* topology on the continuous dual space of X is separable. Properties on normed spaces By definition, the weak* topology is weaker than the weak topology on X ∗ {\displaystyle X^{*}} . An important fact about the weak* topology is the Banach–Alaoglu theorem: if X is normed, then the closed unit ball in X ∗ {\displaystyle X^{*}} is weak*-compact (more generally, the polar in X ∗ {\displaystyle X^{*}} of a neighborhood of 0 in X is weak*-compact). Moreover, the closed unit ball in a normed space X is compact in the weak topology if and only if X is reflexive. More generally, let F be a locally compact valued field (e.g., the reals, the complex numbers, or any of the p-adic number systems), and let X be a normed topological vector space over F, compatible with the absolute value in F. Then in X ∗ {\displaystyle X^{*}} , the topological dual space of continuous F-valued linear functionals on X, all norm-closed balls are compact in the weak* topology. If X is a normed space, a version of the Heine–Borel theorem holds: a subset of the continuous dual is weak* compact if and only if it is weak* closed and norm-bounded. This implies, in particular, that when X is an infinite-dimensional normed space, the closed unit ball at the origin in the dual space of X does not contain any weak* neighborhood of 0 (since any such neighborhood is norm-unbounded). Thus, even though norm-closed balls are compact, X* is not weak* locally compact. 
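The weak-* convergence defined earlier (pointwise convergence of functionals) can also be illustrated directly. In the sketch below, which is our own example with names of our choosing, the evaluation functionals phi_n(f) = f(1/n) on C[0, 1] converge weak-* to the Dirac functional delta_0 : f ↦ f(0), since phi_n(f) → f(0) for every fixed continuous f; they do not converge in the dual norm, since ‖phi_n − delta_0‖ = 2 for every n ≥ 1.

```python
import math

# Sketch (illustrative names): evaluation functionals on X = C[0, 1].
# phi_n(f) = f(1/n) converges weak-* to delta_0(f) = f(0): pointwise in f,
# although the dual-norm distance ||phi_n - delta_0|| stays equal to 2
# (test any phi_n against a bump function with f(1/n) = 1 and f(0) = -1).

def phi(n):
    """The evaluation functional f -> f(1/n) on C[0, 1]."""
    return lambda f: f(1.0 / n)

def delta_0(f):
    """The Dirac functional f -> f(0)."""
    return f(0.0)

f = math.cos  # a fixed continuous test function
values = [phi(n)(f) for n in (1, 10, 100, 1000)]

print(values)      # cos(1), cos(0.1), cos(0.01), cos(0.001): tends to 1.0
print(delta_0(f))  # 1.0
```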
If X is a normed space, then X is separable if and only if the weak* topology on the closed unit ball of X ∗ {\displaystyle X^{*}} is metrizable, in which case the weak* topology is metrizable on norm-bounded subsets of X ∗ {\displaystyle X^{*}} . If a normed space X has a dual space that is separable (with respect to the dual-norm topology) then X is necessarily separable. If X is a Banach space, the weak* topology is not metrizable on all of X ∗ {\displaystyle X^{*}} unless X is finite-dimensional. == Examples == === Hilbert spaces === Consider, for example, the difference between strong and weak convergence of functions in the Hilbert space L2( R n {\displaystyle \mathbb {R} ^{n}} ). Strong convergence of a sequence ψ k ∈ L 2 ( R n ) {\displaystyle \psi _{k}\in L^{2}(\mathbb {R} ^{n})} to an element ψ means that ∫ R n | ψ k − ψ | 2 d μ → 0 {\displaystyle \int _{\mathbb {R} ^{n}}|\psi _{k}-\psi |^{2}\,{\rm {d}}\mu \,\to 0} as k → ∞. Here the notion of convergence corresponds to the norm on L2. In contrast weak convergence only demands that ∫ R n ψ ¯ k f d μ → ∫ R n ψ ¯ f d μ {\displaystyle \int _{\mathbb {R} ^{n}}{\bar {\psi }}_{k}f\,\mathrm {d} \mu \to \int _{\mathbb {R} ^{n}}{\bar {\psi }}f\,\mathrm {d} \mu } for all functions f ∈ L2 (or, more typically, all f in a dense subset of L2 such as a space of test functions, if the sequence {ψk} is bounded). For given test functions, the relevant notion of convergence only corresponds to the topology used in C {\displaystyle \mathbb {C} } . For example, in the Hilbert space L2(0,π), the sequence of functions ψ k ( x ) = 2 / π sin ⁡ ( k x ) {\displaystyle \psi _{k}(x)={\sqrt {2/\pi }}\sin(kx)} form an orthonormal basis. In particular, the (strong) limit of ψ k {\displaystyle \psi _{k}} as k → ∞ does not exist. On the other hand, by the Riemann–Lebesgue lemma, the weak limit exists and is zero. 
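The L2(0, π) example above can be checked numerically. The sketch below (our own pure-Python midpoint sums, not part of the article) confirms that each ψk has unit L2 norm, so the sequence cannot converge strongly to 0, while its pairing with the fixed function f(x) = x has magnitude decaying like 1/k, consistent with the Riemann–Lebesgue lemma.

```python
import math

# Numerical sketch (ours): psi_k(x) = sqrt(2/pi) * sin(k x) on (0, pi).
# Each psi_k has L2 norm 1, so psi_k does not converge to 0 in norm, yet its
# pairing with a fixed f in L2 tends to 0 (Riemann-Lebesgue lemma).

def riemann(g, a, b, n=20000):
    """Midpoint Riemann sum approximating the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

def psi(k):
    return lambda x: math.sqrt(2.0 / math.pi) * math.sin(k * x)

f = lambda x: x  # a fixed element of L2(0, pi)

norms = [riemann(lambda x, k=k: psi(k)(x) ** 2, 0.0, math.pi) for k in (1, 50)]
pairings = [riemann(lambda x, k=k: psi(k)(x) * f(x), 0.0, math.pi)
            for k in (1, 10, 100)]

print(norms)     # both close to 1.0: no strong (norm) convergence to 0
print(pairings)  # magnitudes ~ 2.507, 0.251, 0.025: decay like 1/k
```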
=== Distributions === One normally obtains spaces of distributions by forming the strong dual of a space of test functions (such as the compactly supported smooth functions on R n {\displaystyle \mathbb {R} ^{n}} ). In an alternative construction of such spaces, one can take the weak dual of a space of test functions inside a Hilbert space such as L2. Thus one is led to consider the idea of a rigged Hilbert space. === Weak topology induced by the algebraic dual === Suppose that X is a vector space and X# is the algebraic dual space of X (i.e. the vector space of all linear functionals on X). If X is endowed with the weak topology induced by X# then the continuous dual space of X is X#, every bounded subset of X is contained in a finite-dimensional vector subspace of X, every vector subspace of X is closed and has a topological complement. == Operator topologies == If X and Y are topological vector spaces, the space L(X,Y) of continuous linear operators f : X → Y may carry a variety of different possible topologies. The naming of such topologies depends on the kind of topology one is using on the target space Y to define operator convergence (Yosida 1980, IV.7 Topologies of linear maps). There are, in general, a vast array of possible operator topologies on L(X,Y), whose naming is not entirely intuitive. For example, the strong operator topology on L(X,Y) is the topology of pointwise convergence. For instance, if Y is a normed space, then this topology is defined by the seminorms indexed by x ∈ X: f ↦ ‖ f ( x ) ‖ Y . {\displaystyle f\mapsto \|f(x)\|_{Y}.} More generally, if a family of seminorms Q defines the topology on Y, then the seminorms pq, x on L(X,Y) defining the strong topology are given by p q , x : f ↦ q ( f ( x ) ) , {\displaystyle p_{q,x}:f\mapsto q(f(x)),} indexed by q ∈ Q and x ∈ X. In particular, see the weak operator topology and weak* operator topology. 
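The gap between the operator-norm topology and the strong operator topology of pointwise convergence described above can be seen in a classical example, sketched here in pure Python with finite truncations (our own illustration, not from the article): powers of the left shift on l2 converge to 0 pointwise, yet every power has operator norm 1.

```python
# Sketch (ours): the left shift S(x_0, x_1, ...) = (x_1, x_2, ...) on l2.
# S^n -> 0 in the strong operator topology (||S^n x|| -> 0 for each fixed x),
# although ||S^n|| = 1 for all n, so S^n does not tend to 0 in operator norm.

def shift_pow(x, n):
    """Apply the left shift n times to a finite sequence (an l2 truncation)."""
    return x[n:]

def l2_norm(x):
    return sum(t * t for t in x) ** 0.5

x = [1.0 / (k + 1) for k in range(10000)]  # a fixed l2 vector (truncated)

strong = [l2_norm(shift_pow(x, n)) for n in (0, 10, 100, 1000)]
print(strong)  # strictly decreasing toward 0: strong-operator convergence

# The operator norm of S^n is 1: it is attained on the basis vector e_n.
e = lambda n: [0.0] * n + [1.0]
tail_norms = [l2_norm(shift_pow(e(n), n)) for n in (10, 100, 1000)]
print(tail_norms)  # [1.0, 1.0, 1.0]
```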
== See also == Eberlein compactum, a compact set in the weak topology Weak convergence (Hilbert space) Weak-star operator topology Weak convergence of measures Topologies on spaces of linear maps Topologies on the set of operators on a Hilbert space Vague topology == References == == Bibliography == Conway, John B. (1994), A Course in Functional Analysis (2nd ed.), Springer-Verlag, ISBN 0-387-97245-5 Folland, G.B. (1999). Real Analysis: Modern Techniques and Their Applications (Second ed.). John Wiley & Sons, Inc. ISBN 978-0-471-31716-6. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Pedersen, Gert (1989), Analysis Now, Springer, ISBN 0-387-96788-5 Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. Willard, Stephen (February 2004). General Topology. Courier Dover Publications. ISBN 9780486434797. Yosida, Kosaku (1980), Functional analysis (6th ed.), Springer, ISBN 978-3-540-58654-8
Wikipedia/Weak_topology_(polar_topology)
In mathematics, a strong topology is a topology which is stronger than some other "default" topology. This term is used to describe different topologies depending on context, and it may refer to: the final topology on the disjoint union; the topology arising from a norm; the strong operator topology; or the strong topology (polar topology), which subsumes all of the above. A topology τ is stronger than a topology σ (is a finer topology) if τ contains all the open sets of σ. In algebraic geometry, it usually means the topology of an algebraic variety as a complex manifold or a subspace of complex projective space, as opposed to the Zariski topology (which is rarely even a Hausdorff space). == See also == Weak topology
Wikipedia/Strong_topology
In mathematical analysis, and especially functional analysis, a fundamental role is played by the space of continuous functions on a compact Hausdorff space X {\displaystyle X} with values in the real or complex numbers. This space, denoted by C ( X ) , {\displaystyle {\mathcal {C}}(X),} is a vector space with respect to the pointwise addition of functions and scalar multiplication by constants. It is, moreover, a normed space with norm defined by ‖ f ‖ = sup x ∈ X | f ( x ) | , {\displaystyle \|f\|=\sup _{x\in X}|f(x)|,} the uniform norm. The uniform norm defines the topology of uniform convergence of functions on X . {\displaystyle X.} The space C ( X ) {\displaystyle {\mathcal {C}}(X)} is a Banach algebra with respect to this norm.(Rudin 1991, §10.3(a)) == Properties == By Urysohn's lemma, C ( X ) {\displaystyle {\mathcal {C}}(X)} separates points of X {\displaystyle X} : If x , y ∈ X {\displaystyle x,y\in X} are distinct points, then there is an f ∈ C ( X ) {\displaystyle f\in {\mathcal {C}}(X)} such that f ( x ) ≠ f ( y ) . {\displaystyle f(x)\neq f(y).} The space C ( X ) {\displaystyle {\mathcal {C}}(X)} is infinite-dimensional whenever X {\displaystyle X} is an infinite space (since it separates points). Hence, in particular, it is generally not locally compact. The Riesz–Markov–Kakutani representation theorem gives a characterization of the continuous dual space of C ( X ) . {\displaystyle {\mathcal {C}}(X).} Specifically, this dual space is the space of Radon measures on X {\displaystyle X} (regular Borel measures), denoted by rca ⁡ ( X ) . {\displaystyle \operatorname {rca} (X).} This space, with the norm given by the total variation of a measure, is also a Banach space belonging to the class of ba spaces. (Dunford & Schwartz 1958, §IV.6.3) Positive linear functionals on C ( X ) {\displaystyle {\mathcal {C}}(X)} correspond to (positive) regular Borel measures on X , {\displaystyle X,} by a different form of the Riesz representation theorem. 
(Rudin 1966, Chapter 2) If X {\displaystyle X} is infinite, then C ( X ) {\displaystyle {\mathcal {C}}(X)} is not reflexive, nor is it weakly complete. The Arzelà–Ascoli theorem holds: A subset K {\displaystyle K} of C ( X ) {\displaystyle {\mathcal {C}}(X)} is relatively compact if and only if it is bounded in the norm of C ( X ) , {\displaystyle {\mathcal {C}}(X),} and equicontinuous. The Stone–Weierstrass theorem holds for C ( X ) . {\displaystyle {\mathcal {C}}(X).} In the case of real functions, if A {\displaystyle A} is a subring of C ( X ) {\displaystyle {\mathcal {C}}(X)} that contains all constants and separates points, then the closure of A {\displaystyle A} is C ( X ) . {\displaystyle {\mathcal {C}}(X).} In the case of complex functions, the statement holds with the additional hypothesis that A {\displaystyle A} is closed under complex conjugation. If X {\displaystyle X} and Y {\displaystyle Y} are two compact Hausdorff spaces, and F : C ( X ) → C ( Y ) {\displaystyle F:{\mathcal {C}}(X)\to {\mathcal {C}}(Y)} is a homomorphism of algebras which commutes with complex conjugation, then F {\displaystyle F} is continuous. Furthermore, F {\displaystyle F} has the form F ( h ) ( y ) = h ( f ( y ) ) {\displaystyle F(h)(y)=h(f(y))} for some continuous function f : Y → X . {\displaystyle f:Y\to X.} In particular, if C ( X ) {\displaystyle C(X)} and C ( Y ) {\displaystyle C(Y)} are isomorphic as algebras, then X {\displaystyle X} and Y {\displaystyle Y} are homeomorphic topological spaces. Let Δ {\displaystyle \Delta } be the space of maximal ideals in C ( X ) . {\displaystyle {\mathcal {C}}(X).} Then there is a one-to-one correspondence between Δ and the points of X . {\displaystyle X.} Furthermore, Δ {\displaystyle \Delta } can be identified with the collection of all complex homomorphisms C ( X ) → C . 
{\displaystyle {\mathcal {C}}(X)\to \mathbb {C} .} Equip Δ {\displaystyle \Delta } with the initial topology with respect to this pairing with C ( X ) {\displaystyle {\mathcal {C}}(X)} (that is, the Gelfand transform). Then X {\displaystyle X} is homeomorphic to Δ equipped with this topology. (Rudin 1991, §11.13(a)) A sequence in C ( X ) {\displaystyle {\mathcal {C}}(X)} is weakly Cauchy if and only if it is (uniformly) bounded in C ( X ) {\displaystyle {\mathcal {C}}(X)} and pointwise convergent. In particular, C ( X ) {\displaystyle {\mathcal {C}}(X)} is only weakly complete for X {\displaystyle X} a finite set. The vague topology is the weak* topology on the dual of C ( X ) . {\displaystyle {\mathcal {C}}(X).} The Banach–Alaoglu theorem implies that any normed space is isometrically isomorphic to a subspace of C ( X ) {\displaystyle C(X)} for some X . {\displaystyle X.} == Generalizations == The space C ( X ) {\displaystyle C(X)} of real or complex-valued continuous functions can be defined on any topological space X . {\displaystyle X.} In the non-compact case, however, C ( X ) {\displaystyle C(X)} is not in general a Banach space with respect to the uniform norm since it may contain unbounded functions. Hence it is more typical to consider the space, denoted here C B ( X ) {\displaystyle C_{B}(X)} of bounded continuous functions on X . {\displaystyle X.} This is a Banach space (in fact a commutative Banach algebra with identity) with respect to the uniform norm. (Hewitt & Stromberg 1965, Theorem 7.9) It is sometimes desirable, particularly in measure theory, to further refine this general definition by considering the special case when X {\displaystyle X} is a locally compact Hausdorff space. 
In this case, it is possible to identify a pair of distinguished subsets of C B ( X ) {\displaystyle C_{B}(X)} : (Hewitt & Stromberg 1965, §II.7) C 00 ( X ) , {\displaystyle C_{00}(X),} the subset of C ( X ) {\displaystyle C(X)} consisting of functions with compact support. This is called the space of functions vanishing in a neighborhood of infinity. C 0 ( X ) , {\displaystyle C_{0}(X),} the subset of C ( X ) {\displaystyle C(X)} consisting of functions such that for every r > 0 , {\displaystyle r>0,} there is a compact set K ⊆ X {\displaystyle K\subseteq X} such that | f ( x ) | < r {\displaystyle |f(x)|<r} for all x ∈ X ∖ K . {\displaystyle x\in X\backslash K.} This is called the space of functions vanishing at infinity. The closure of C 00 ( X ) {\displaystyle C_{00}(X)} is precisely C 0 ( X ) . {\displaystyle C_{0}(X).} In particular, the latter is a Banach space. == References == Dunford, N.; Schwartz, J.T. (1958), Linear operators, Part I, Wiley-Interscience. Hewitt, Edwin; Stromberg, Karl (1965), Real and abstract analysis, Springer-Verlag. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. Rudin, Walter (1966), Real and complex analysis, McGraw-Hill, ISBN 0-07-054234-1.
Wikipedia/Continuous_functions_on_a_compact_Hausdorff_space
In functional analysis, an area of mathematics, the projective tensor product of two locally convex topological vector spaces is a natural topological vector space structure on their tensor product. Namely, given locally convex topological vector spaces X {\displaystyle X} and Y {\displaystyle Y} , the projective topology, or π-topology, on X ⊗ Y {\displaystyle X\otimes Y} is the strongest topology which makes X ⊗ Y {\displaystyle X\otimes Y} a locally convex topological vector space such that the canonical map ( x , y ) ↦ x ⊗ y {\displaystyle (x,y)\mapsto x\otimes y} (from X × Y {\displaystyle X\times Y} to X ⊗ Y {\displaystyle X\otimes Y} ) is continuous. When equipped with this topology, X ⊗ Y {\displaystyle X\otimes Y} is denoted X ⊗ π Y {\displaystyle X\otimes _{\pi }Y} and called the projective tensor product of X {\displaystyle X} and Y {\displaystyle Y} . It is a particular instance of a topological tensor product. == Definitions == Let X {\displaystyle X} and Y {\displaystyle Y} be locally convex topological vector spaces. Their projective tensor product X ⊗ π Y {\displaystyle X\otimes _{\pi }Y} is the unique locally convex topological vector space with underlying vector space X ⊗ Y {\displaystyle X\otimes Y} having the following universal property: For any locally convex topological vector space Z {\displaystyle Z} , if Φ Z {\displaystyle \Phi _{Z}} is the canonical map from the vector space of bilinear maps X × Y → Z {\displaystyle X\times Y\to Z} to the vector space of linear maps X ⊗ Y → Z {\displaystyle X\otimes Y\to Z} , then the image of the restriction of Φ Z {\displaystyle \Phi _{Z}} to the continuous bilinear maps is the space of continuous linear maps X ⊗ π Y → Z {\displaystyle X\otimes _{\pi }Y\to Z} . 
When the topologies of X {\displaystyle X} and Y {\displaystyle Y} are induced by seminorms, the topology of X ⊗ π Y {\displaystyle X\otimes _{\pi }Y} is induced by seminorms constructed from those on X {\displaystyle X} and Y {\displaystyle Y} as follows. If p {\displaystyle p} is a seminorm on X {\displaystyle X} , and q {\displaystyle q} is a seminorm on Y {\displaystyle Y} , define their tensor product p ⊗ q {\displaystyle p\otimes q} to be the seminorm on X ⊗ Y {\displaystyle X\otimes Y} given by ( p ⊗ q ) ( b ) = inf r > 0 , b ∈ r W r {\displaystyle (p\otimes q)(b)=\inf _{r>0,\,b\in rW}r} for all b {\displaystyle b} in X ⊗ Y {\displaystyle X\otimes Y} , where W {\displaystyle W} is the balanced convex hull of the set { x ⊗ y : p ( x ) ≤ 1 , q ( y ) ≤ 1 } {\displaystyle \left\{x\otimes y:p(x)\leq 1,q(y)\leq 1\right\}} . The projective topology on X ⊗ Y {\displaystyle X\otimes Y} is generated by the collection of such tensor products of the seminorms on X {\displaystyle X} and Y {\displaystyle Y} . When X {\displaystyle X} and Y {\displaystyle Y} are normed spaces, this definition applied to the norms on X {\displaystyle X} and Y {\displaystyle Y} gives a norm, called the projective norm, on X ⊗ Y {\displaystyle X\otimes Y} which generates the projective topology. == Properties == Throughout, all spaces are assumed to be locally convex. The symbol X ⊗ ^ π Y {\displaystyle X{\widehat {\otimes }}_{\pi }Y} denotes the completion of the projective tensor product of X {\displaystyle X} and Y {\displaystyle Y} . If X {\displaystyle X} and Y {\displaystyle Y} are both Hausdorff then so is X ⊗ π Y {\displaystyle X\otimes _{\pi }Y} ; if X {\displaystyle X} and Y {\displaystyle Y} are Fréchet spaces then X ⊗ π Y {\displaystyle X\otimes _{\pi }Y} is barrelled. 
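For normed spaces, the projective norm just described has a standard equivalent formula, not spelled out above: π(u) = inf Σi ‖xi‖ ‖yi‖, the infimum taken over all finite representations u = Σi xi ⊗ yi. The sketch below is our own illustration: it identifies the tensor product of two copies of two-dimensional l1 with 2×2 matrices and evaluates the cost of the standard-basis representation, which for l1 spaces is known to attain the infimum, giving the entrywise l1 norm. A matching lower bound comes from pairing with a bilinear form of norm at most 1.

```python
# Sketch (ours): the projective norm pi(u) = inf sum ||x_i|| * ||y_i|| over
# finite representations u = sum x_i (x) y_i, evaluated on l1^2 (x) l1^2,
# whose elements we identify with 2x2 matrices.

def l1_norm(v):
    return sum(abs(t) for t in v)

def representation_cost(xs, ys):
    """Cost sum_i ||x_i||_1 * ||y_i||_1 of the representation sum_i x_i (x) y_i."""
    return sum(l1_norm(x) * l1_norm(y) for x, y in zip(xs, ys))

a = [[1.0, -2.0], [0.5, 3.0]]  # a tensor in l1^2 (x) l1^2, as a matrix

# Standard-basis representation: a = sum_ij a_ij * e_i (x) e_j.
xs = [[a[i][j] if r == i else 0.0 for r in range(2)]
      for i in range(2) for j in range(2)]
ys = [[1.0 if c == j else 0.0 for c in range(2)]
      for i in range(2) for j in range(2)]

entrywise = sum(abs(a[i][j]) for i in range(2) for j in range(2))
print(representation_cost(xs, ys), entrywise)  # 6.5 6.5

# Lower bound via duality: B(x, y) = sum_ij sign(a_ij) x_i y_j has norm <= 1
# on l1 x l1, and pairing it with a recovers sum_ij |a_ij|, so the
# standard-basis representation is optimal here.
sign = lambda t: (t > 0) - (t < 0)
lower = sum(sign(a[i][j]) * a[i][j] for i in range(2) for j in range(2))
print(lower)  # 6.5
```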
For any two continuous linear operators u 1 : X 1 → Y 1 {\displaystyle u_{1}:X_{1}\to Y_{1}} and u 2 : X 2 → Y 2 {\displaystyle u_{2}:X_{2}\to Y_{2}} , their tensor product (as linear maps) u 1 ⊗ u 2 : X 1 ⊗ π X 2 → Y 1 ⊗ π Y 2 {\displaystyle u_{1}\otimes u_{2}:X_{1}\otimes _{\pi }X_{2}\to Y_{1}\otimes _{\pi }Y_{2}} is continuous. In general, the projective tensor product does not respect subspaces (e.g. if Z {\displaystyle Z} is a vector subspace of X {\displaystyle X} then the TVS Z ⊗ π Y {\displaystyle Z\otimes _{\pi }Y} has in general a coarser topology than the subspace topology inherited from X ⊗ π Y {\displaystyle X\otimes _{\pi }Y} ). If E {\displaystyle E} and F {\displaystyle F} are complemented subspaces of X {\displaystyle X} and Y , {\displaystyle Y,} respectively, then E ⊗ F {\displaystyle E\otimes F} is a complemented vector subspace of X ⊗ π Y {\displaystyle X\otimes _{\pi }Y} and the projective norm on E ⊗ π F {\displaystyle E\otimes _{\pi }F} is equivalent to the projective norm on X ⊗ π Y {\displaystyle X\otimes _{\pi }Y} restricted to the subspace E ⊗ F {\displaystyle E\otimes F} . Furthermore, if X {\displaystyle X} and F {\displaystyle F} are complemented by projections of norm 1, then E ⊗ F {\displaystyle E\otimes F} is complemented by a projection of norm 1. Let E {\displaystyle E} and F {\displaystyle F} be vector subspaces of the Banach spaces X {\displaystyle X} and Y {\displaystyle Y} , respectively. Then E ⊗ ^ F {\displaystyle E{\widehat {\otimes }}F} is a TVS-subspace of X ⊗ ^ π Y {\displaystyle X{\widehat {\otimes }}_{\pi }Y} if and only if every bounded bilinear form on E × F {\displaystyle E\times F} extends to a continuous bilinear form on X × Y {\displaystyle X\times Y} with the same norm. 
== Completion == In general, the space X ⊗ π Y {\displaystyle X\otimes _{\pi }Y} is not complete, even if both X {\displaystyle X} and Y {\displaystyle Y} are complete (in fact, if X {\displaystyle X} and Y {\displaystyle Y} are both infinite-dimensional Banach spaces then X ⊗ π Y {\displaystyle X\otimes _{\pi }Y} is necessarily not complete). However, X ⊗ π Y {\displaystyle X\otimes _{\pi }Y} can always be linearly embedded as a dense vector subspace of some complete locally convex TVS, which is generally denoted by X ⊗ ^ π Y {\displaystyle X{\widehat {\otimes }}_{\pi }Y} . The continuous dual space of X ⊗ ^ π Y {\displaystyle X{\widehat {\otimes }}_{\pi }Y} is the same as that of X ⊗ π Y {\displaystyle X\otimes _{\pi }Y} , namely, the space of continuous bilinear forms B ( X , Y ) {\displaystyle B(X,Y)} . === Grothendieck's representation of elements in the completion === In a Hausdorff locally convex space X , {\displaystyle X,} a sequence ( x i ) i = 1 ∞ {\displaystyle \left(x_{i}\right)_{i=1}^{\infty }} in X {\displaystyle X} is absolutely convergent if ∑ i = 1 ∞ p ( x i ) < ∞ {\displaystyle \sum _{i=1}^{\infty }p\left(x_{i}\right)<\infty } for every continuous seminorm p {\displaystyle p} on X . {\displaystyle X.} We write x = ∑ i = 1 ∞ x i {\displaystyle x=\sum _{i=1}^{\infty }x_{i}} if the sequence of partial sums ( ∑ i = 1 n x i ) n = 1 ∞ {\displaystyle \left(\sum _{i=1}^{n}x_{i}\right)_{n=1}^{\infty }} converges to x {\displaystyle x} in X . {\displaystyle X.} The following fundamental result in the theory of topological tensor products is due to Alexander Grothendieck. The next theorem shows that it is possible to make the representation of z {\displaystyle z} independent of the sequences ( x i ) i = 1 ∞ {\displaystyle \left(x_{i}\right)_{i=1}^{\infty }} and ( y i ) i = 1 ∞ . 
{\displaystyle \left(y_{i}\right)_{i=1}^{\infty }.} === Topology of bi-bounded convergence === Let B X {\displaystyle {\mathfrak {B}}_{X}} and B Y {\displaystyle {\mathfrak {B}}_{Y}} denote the families of all bounded subsets of X {\displaystyle X} and Y , {\displaystyle Y,} respectively. Since the continuous dual space of X ⊗ ^ π Y {\displaystyle X{\widehat {\otimes }}_{\pi }Y} is the space of continuous bilinear forms B ( X , Y ) , {\displaystyle B(X,Y),} we can place on B ( X , Y ) {\displaystyle B(X,Y)} the topology of uniform convergence on sets in B X × B Y , {\displaystyle {\mathfrak {B}}_{X}\times {\mathfrak {B}}_{Y},} which is also called the topology of bi-bounded convergence. This topology is coarser than the strong topology on B ( X , Y ) {\displaystyle B(X,Y)} , and in (Grothendieck 1955), Alexander Grothendieck was interested in when these two topologies were identical. This is equivalent to the problem: Given a bounded subset B ⊆ X ⊗ ^ Y , {\displaystyle B\subseteq X{\widehat {\otimes }}Y,} do there exist bounded subsets B 1 ⊆ X {\displaystyle B_{1}\subseteq X} and B 2 ⊆ Y {\displaystyle B_{2}\subseteq Y} such that B {\displaystyle B} is a subset of the closed convex hull of B 1 ⊗ B 2 := { b 1 ⊗ b 2 : b 1 ∈ B 1 , b 2 ∈ B 2 } {\displaystyle B_{1}\otimes B_{2}:=\{b_{1}\otimes b_{2}:b_{1}\in B_{1},b_{2}\in B_{2}\}} ? Grothendieck proved that these topologies are equal when X {\displaystyle X} and Y {\displaystyle Y} are both Banach spaces or both are DF-spaces (a class of spaces introduced by Grothendieck). They are also equal when both spaces are Fréchet with one of them being nuclear. === Strong dual and bidual === Let X {\displaystyle X} be a locally convex topological vector space and let X ′ {\displaystyle X^{\prime }} be its continuous dual space. 
Alexander Grothendieck characterized the strong dual and bidual for certain situations: == Examples == For ( X , A , μ ) {\displaystyle (X,{\mathcal {A}},\mu )} a measure space, let L 1 {\displaystyle L^{1}} be the real Lebesgue space L 1 ( μ ) {\displaystyle L^{1}(\mu )} ; let E {\displaystyle E} be a real Banach space. Let L E 1 {\displaystyle L_{E}^{1}} be the completion of the space of simple functions X → E {\displaystyle X\to E} , modulo the subspace of functions X → E {\displaystyle X\to E} whose pointwise norms, considered as functions X → R {\displaystyle X\to \mathbb {R} } , have integral 0 {\displaystyle 0} with respect to μ {\displaystyle \mu } . Then L E 1 {\displaystyle L_{E}^{1}} is isometrically isomorphic to L 1 ⊗ ^ π E {\displaystyle L^{1}{\widehat {\otimes }}_{\pi }E} . == See also == Inductive tensor product Injective tensor product Tensor product of Hilbert spaces == Citations == == References == Ryan, Raymond (2002). Introduction to tensor products of Banach spaces. London New York: Springer. ISBN 1-85233-437-1. OCLC 48092184. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. == Further reading == Diestel, Joe (2008). The metric theory of tensor products : Grothendieck's résumé revisited. Providence, R.I: American Mathematical Society. ISBN 978-0-8218-4440-3. OCLC 185095773. Grothendieck, Alexander (1955). "Produits Tensoriels Topologiques et Espaces Nucléaires" [Topological Tensor Products and Nuclear Spaces]. Memoirs of the American Mathematical Society Series (in French). 16.
Providence: American Mathematical Society. MR 0075539. OCLC 9308061. Grothendieck, Alexander (1966). Produits tensoriels topologiques et espaces nucléaires (in French). Providence: American Mathematical Society. ISBN 0-8218-1216-5. OCLC 1315788. Pietsch, Albrecht (1972). Nuclear locally convex spaces. Berlin, New York: Springer-Verlag. ISBN 0-387-05644-0. OCLC 539541. Wong (1979). Schwartz spaces, nuclear spaces, and tensor products. Berlin New York: Springer-Verlag. ISBN 3-540-09513-6. OCLC 5126158. == External links == Nuclear space at ncatlab
Wikipedia/Projective_tensor_product
In functional analysis, a branch of mathematics, the strong operator topology, often abbreviated SOT, is the locally convex topology on the set of bounded operators on a Hilbert space H induced by the seminorms of the form T ↦ ‖ T x ‖ {\displaystyle T\mapsto \|Tx\|} , as x varies in H. Equivalently, it is the coarsest topology such that, for each fixed x in H, the evaluation map T ↦ T x {\displaystyle T\mapsto Tx} (taking values in H) is continuous in T. The equivalence of these two definitions can be seen by observing that a subbase for both topologies is given by the sets U ( T 0 , x , ϵ ) = { T : ‖ T x − T 0 x ‖ < ϵ } {\displaystyle U(T_{0},x,\epsilon )=\{T:\|Tx-T_{0}x\|<\epsilon \}} (where T0 is any bounded operator on H, x is any vector and ε is any positive real number). In concrete terms, this means that T i → T {\displaystyle T_{i}\to T} in the strong operator topology if and only if ‖ T i x − T x ‖ → 0 {\displaystyle \|T_{i}x-Tx\|\to 0} for each x in H. The SOT is stronger than the weak operator topology and weaker than the norm topology. The SOT lacks some of the nicer properties that the weak operator topology has, but being stronger, things are sometimes easier to prove in this topology. It can be viewed as more natural, too, since it is simply the topology of pointwise convergence. The SOT also provides the framework for the measurable functional calculus, just as the norm topology does for the continuous functional calculus. The linear functionals on the set of bounded operators on a Hilbert space that are continuous in the SOT are precisely those continuous in the weak operator topology (WOT). Because of this, the closure of a convex set of operators in the WOT is the same as the closure of that set in the SOT. This language translates into convergence properties of Hilbert space operators. For a complex Hilbert space, it is easy to verify, using the polarization identity, that strong operator convergence implies weak operator convergence.
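The gap between strong operator and norm convergence can be illustrated numerically. The sketch below is my own finite-dimensional stand-in for ℓ², not part of the article: coordinate-truncation projections P_n converge pointwise to the identity on any fixed square-summable vector, yet ‖P_n − I‖ stays equal to 1.

```python
import numpy as np

N = 200                           # finite-dimensional stand-in for ell^2
x = 1.0 / np.arange(1, N + 1)     # a fixed square-summable vector

def P(n):
    """Projection onto the first n coordinates."""
    D = np.zeros((N, N))
    D[:n, :n] = np.eye(n)
    return D

I = np.eye(N)

# Pointwise (strong-operator-style) convergence: ||P_n x - x|| shrinks
errs = [np.linalg.norm(P(n) @ x - x) for n in (10, 50, 150)]
assert errs[0] > errs[1] > errs[2]

# No norm convergence: ||P_n - I|| = 1 for every n < N
gaps = [np.linalg.norm(P(n) - I, 2) for n in (10, 50, 150)]
assert all(np.isclose(g, 1.0) for g in gaps)
```

Since P_n − I acts as −1 on the discarded coordinates, its operator norm never drops below 1, which is exactly why this convergence is strong-operator but not uniform.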
== See also == Strongly continuous semigroup Topologies on the set of operators on a Hilbert space == References == Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. Pedersen, Gert (1989). Analysis Now. Springer. ISBN 0-387-96788-5. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Wikipedia/Strong_operator_topology
In functional analysis, an area of mathematics, the injective tensor product is a particular topological tensor product, a topological vector space (TVS) formed by equipping the tensor product of the underlying vector spaces of two TVSs with a compatible topology. It was introduced by Alexander Grothendieck and used by him to define nuclear spaces. Injective tensor products have applications outside of nuclear spaces: as described below, many constructions of TVSs, and in particular Banach spaces, as spaces of functions or sequences amount to injective tensor products of simpler spaces. == Definition == Let X {\displaystyle X} and Y {\displaystyle Y} be locally convex topological vector spaces over C {\displaystyle \mathbb {C} } , with continuous dual spaces X ′ {\displaystyle X^{\prime }} and Y ′ . {\displaystyle Y^{\prime }.} A subscript σ {\displaystyle \sigma } as in X σ ′ {\displaystyle X_{\sigma }^{\prime }} denotes the weak-* topology. Although written in terms of complex TVSs, results described generally also apply to the real case. The vector space B ( X σ ′ , Y σ ′ ) {\displaystyle B\left(X_{\sigma }^{\prime },Y_{\sigma }^{\prime }\right)} of continuous bilinear functionals X σ ′ × Y σ ′ → C {\displaystyle X_{\sigma }^{\prime }\times Y_{\sigma }^{\prime }\to \mathbb {C} } is isomorphic to the (vector space) tensor product X ⊗ Y {\displaystyle X\otimes Y} , as follows. For each simple tensor x ⊗ y {\displaystyle x\otimes y} in X ⊗ Y {\displaystyle X\otimes Y} , there is a bilinear map f ∈ B ( X σ ′ , Y σ ′ ) {\displaystyle f\in B\left(X_{\sigma }^{\prime },Y_{\sigma }^{\prime }\right)} , given by f ( φ , ψ ) = φ ( x ) ψ ( y ) {\displaystyle f(\varphi ,\psi )=\varphi (x)\psi (y)} . It can be shown that the map x ⊗ y ↦ f {\displaystyle x\otimes y\mapsto f} , extended linearly to X ⊗ Y {\displaystyle X\otimes Y} , is an isomorphism. 
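In finite dimensions the identification of simple tensors with bilinear forms is concrete. Identifying the dual of Euclidean space with the space itself via the dot product (a choice made here for illustration), the simple tensor x ⊗ y corresponds to the bilinear form (φ, ψ) ↦ φ(x)ψ(y), that is, to the rank-one matrix x yᵀ:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), rng.standard_normal(4)      # x in R^3, y in R^4
phi, psi = rng.standard_normal(3), rng.standard_normal(4)  # functionals via dot product

# The bilinear form attached to the simple tensor x (x) y is phi^T (x y^T) psi ...
f = phi @ np.outer(x, y) @ psi

# ... and it agrees with phi(x) * psi(y)
assert np.isclose(f, (phi @ x) * (psi @ y))
```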
Let X b ′ , Y b ′ {\displaystyle X_{b}^{\prime },Y_{b}^{\prime }} denote the respective dual spaces with the topology of bounded convergence. If Z {\displaystyle Z} is a locally convex topological vector space, then B ( X σ ′ , Y σ ′ ; Z ) ⊆ B ( X b ′ , Y b ′ ; Z ) {\textstyle B\left(X_{\sigma }^{\prime },Y_{\sigma }^{\prime };Z\right)~\subseteq ~B\left(X_{b}^{\prime },Y_{b}^{\prime };Z\right)} . The topology of the injective tensor product is the topology induced from a certain topology on B ( X b ′ , Y b ′ ; Z ) {\displaystyle B\left(X_{b}^{\prime },Y_{b}^{\prime };Z\right)} , whose basic open sets are constructed as follows. For any equicontinuous subsets G ⊆ X ′ {\displaystyle G\subseteq X^{\prime }} and H ⊆ Y ′ {\displaystyle H\subseteq Y^{\prime }} , and any neighborhood N {\displaystyle N} in Z {\displaystyle Z} , define U ( G , H , N ) = { b ∈ B ( X b ′ , Y b ′ ; Z ) : b ( G × H ) ⊆ N } {\displaystyle {\mathcal {U}}(G,H,N)=\left\{b\in B\left(X_{b}^{\prime },Y_{b}^{\prime };Z\right)~:~b(G\times H)\subseteq N\right\}} where every set b ( G × H ) {\displaystyle b(G\times H)} is bounded in Z , {\displaystyle Z,} which is necessary and sufficient for the collection of all U ( G , H , N ) {\displaystyle {\mathcal {U}}(G,H,N)} to form a locally convex TVS topology on B ( X b ′ , Y b ′ ; Z ) . {\displaystyle {\mathcal {B}}\left(X_{b}^{\prime },Y_{b}^{\prime };Z\right).} This topology is called the ε {\displaystyle \varepsilon } -topology or injective topology. 
In the special case where Z = C {\displaystyle Z=\mathbb {C} } is the underlying scalar field, B ( X σ ′ , Y σ ′ ) {\displaystyle B\left(X_{\sigma }^{\prime },Y_{\sigma }^{\prime }\right)} is the tensor product X ⊗ Y {\displaystyle X\otimes Y} as above, and the topological vector space consisting of X ⊗ Y {\displaystyle X\otimes Y} with the ε {\displaystyle \varepsilon } -topology is denoted by X ⊗ ε Y {\displaystyle X\otimes _{\varepsilon }Y} , and is not necessarily complete; its completion is the injective tensor product of X {\displaystyle X} and Y {\displaystyle Y} and denoted by X ⊗ ^ ε Y {\displaystyle X{\widehat {\otimes }}_{\varepsilon }Y} . If X {\displaystyle X} and Y {\displaystyle Y} are normed spaces then X ⊗ ε Y {\displaystyle X\otimes _{\varepsilon }Y} is normable. If X {\displaystyle X} and Y {\displaystyle Y} are Banach spaces, then X ⊗ ^ ε Y {\displaystyle X{\widehat {\otimes }}_{\varepsilon }Y} is also. Its norm can be expressed in terms of the (continuous) duals of X {\displaystyle X} and Y {\displaystyle Y} . Denoting the unit balls of the dual spaces X ∗ {\displaystyle X^{*}} and Y ∗ {\displaystyle Y^{*}} by B X ∗ {\displaystyle B_{X^{*}}} and B Y ∗ {\displaystyle B_{Y^{*}}} , the injective norm ‖ u ‖ ε {\displaystyle \|u\|_{\varepsilon }} of an element u ∈ X ⊗ Y {\displaystyle u\in X\otimes Y} is defined as ‖ u ‖ ε = sup { | ∑ i φ ( x i ) ψ ( y i ) | : φ ∈ B X ∗ , ψ ∈ B Y ∗ } {\displaystyle \|u\|_{\varepsilon }=\sup {\big \{}{\big |}\sum _{i}\varphi (x_{i})\psi (y_{i}){\big |}:\varphi \in B_{X^{*}},\psi \in B_{Y^{*}}{\big \}}} where the supremum is taken over all expressions u = ∑ i x i ⊗ y i {\displaystyle u=\sum _{i}x_{i}\otimes y_{i}} . Then the completion of X ⊗ Y {\displaystyle X\otimes Y} under the injective norm is isomorphic as a topological vector space to X ⊗ ^ ε Y {\displaystyle X{\widehat {\otimes }}_{\varepsilon }Y} . 
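For X = Rᵐ and Y = Rⁿ with Euclidean norms, an element u = Σᵢ xᵢ ⊗ yᵢ can be identified with the matrix M = Σᵢ xᵢ yᵢᵀ, and the supremum defining ‖u‖_ε is then the largest singular value of M, while the projective norm becomes the sum of the singular values (the nuclear norm). The sketch below works under that standard identification, sampling the supremum over random unit functionals and comparing against the SVD:

```python
import numpy as np

rng = np.random.default_rng(1)

# u = sum_i x_i (x) y_i in R^3 (x) R^4, represented as M = sum_i x_i y_i^T
xs = rng.standard_normal((5, 3))
ys = rng.standard_normal((5, 4))
M = sum(np.outer(x, y) for x, y in zip(xs, ys))

s = np.linalg.svd(M, compute_uv=False)
inj = s[0]       # injective norm: largest singular value
proj = s.sum()   # projective norm: sum of singular values

# Approximate the defining supremum over unit functionals phi, psi
best = 0.0
for _ in range(2000):
    phi = rng.standard_normal(3); phi /= np.linalg.norm(phi)
    psi = rng.standard_normal(4); psi /= np.linalg.norm(psi)
    best = max(best, abs(phi @ M @ psi))

assert best <= inj + 1e-12   # no pair of unit functionals exceeds the supremum
assert inj <= proj + 1e-12   # the injective norm never exceeds the projective norm
```

The second assertion previews the general inequality ‖u‖_ε ≤ ‖u‖_π discussed below for normed spaces.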
== Basic properties == The map ( x , y ) ↦ x ⊗ y : X × Y → X ⊗ ε Y {\displaystyle (x,y)\mapsto x\otimes y:X\times Y\to X\otimes _{\varepsilon }Y} is continuous. Suppose that u : X 1 → Y 1 {\displaystyle u:X_{1}\to Y_{1}} and v : X 2 → Y 2 {\displaystyle v:X_{2}\to Y_{2}} are two linear maps between locally convex spaces. If both u {\displaystyle u} and v {\displaystyle v} are continuous then so is their tensor product u ⊗ v : X 1 ⊗ ε X 2 → Y 1 ⊗ ε Y 2 {\displaystyle u\otimes v:X_{1}\otimes _{\varepsilon }X_{2}\to Y_{1}\otimes _{\varepsilon }Y_{2}} . Moreover: If u {\displaystyle u} and v {\displaystyle v} are both TVS-embeddings then so is u ⊗ ^ ε v : X 1 ⊗ ^ ε X 2 → Y 1 ⊗ ^ ε Y 2 . {\displaystyle u{\widehat {\otimes }}_{\varepsilon }v:X_{1}{\widehat {\otimes }}_{\varepsilon }X_{2}\to Y_{1}{\widehat {\otimes }}_{\varepsilon }Y_{2}.} If X 1 {\displaystyle X_{1}} (resp. Y 1 {\displaystyle Y_{1}} ) is a linear subspace of X 2 {\displaystyle X_{2}} (resp. Y 2 {\displaystyle Y_{2}} ) then X 1 ⊗ ε Y 1 {\displaystyle X_{1}\otimes _{\varepsilon }Y_{1}} is canonically isomorphic to a linear subspace of X 2 ⊗ ε Y 2 {\displaystyle X_{2}\otimes _{\varepsilon }Y_{2}} and X 1 ⊗ ^ ε Y 1 {\displaystyle X_{1}{\widehat {\otimes }}_{\varepsilon }Y_{1}} is canonically isomorphic to a linear subspace of X 2 ⊗ ^ ε Y 2 . {\displaystyle X_{2}{\widehat {\otimes }}_{\varepsilon }Y_{2}.} There are examples of u {\displaystyle u} and v {\displaystyle v} such that both u {\displaystyle u} and v {\displaystyle v} are surjective homomorphisms but u ⊗ ^ ε v : X 1 ⊗ ^ ε X 2 → Y 1 ⊗ ^ ε Y 2 {\displaystyle u{\widehat {\otimes }}_{\varepsilon }v:X_{1}{\widehat {\otimes }}_{\varepsilon }X_{2}\to Y_{1}{\widehat {\otimes }}_{\varepsilon }Y_{2}} is not a homomorphism. If all four spaces are normed then ‖ u ⊗ v ‖ ε = ‖ u ‖ ‖ v ‖ . 
{\displaystyle \|u\otimes v\|_{\varepsilon }=\|u\|\|v\|.} == Relation to projective tensor product == The projective topology or the π {\displaystyle \pi } -topology is the finest locally convex topology on B ( X σ ′ , Y σ ′ ) = X ⊗ Y {\displaystyle B\left(X_{\sigma }^{\prime },Y_{\sigma }^{\prime }\right)=X\otimes Y} that makes continuous the canonical map X × Y → X ⊗ Y {\displaystyle X\times Y\to X\otimes Y} defined by sending ( x , y ) ∈ X × Y {\displaystyle (x,y)\in X\times Y} to the bilinear form x ⊗ y . {\displaystyle x\otimes y.} When X ⊗ Y {\displaystyle X\otimes Y} is endowed with this topology then it will be denoted by X ⊗ π Y {\displaystyle X\otimes _{\pi }Y} and called the projective tensor product of X {\displaystyle X} and Y . {\displaystyle Y.} The injective topology is always coarser than the projective topology, which is in turn coarser than the inductive topology (the finest locally convex TVS topology making X × Y → X ⊗ Y {\displaystyle X\times Y\to X\otimes Y} separately continuous). The space X ⊗ ε Y {\displaystyle X\otimes _{\varepsilon }Y} is Hausdorff if and only if both X {\displaystyle X} and Y {\displaystyle Y} are Hausdorff. If X {\displaystyle X} and Y {\displaystyle Y} are normed then ‖ θ ‖ ε ≤ ‖ θ ‖ π {\displaystyle \|\theta \|_{\varepsilon }\leq \|\theta \|_{\pi }} for all θ ∈ X ⊗ Y {\displaystyle \theta \in X\otimes Y} , where ‖ ⋅ ‖ π {\displaystyle \|\cdot \|_{\pi }} is the projective norm. The injective and projective topologies both figure in Grothendieck's definition of nuclear spaces. == Duals of injective tensor products == The continuous dual space of X ⊗ ε Y {\displaystyle X\otimes _{\varepsilon }Y} is a vector subspace of B ( X , Y ) {\displaystyle B(X,Y)} , denoted by J ( X , Y ) . {\displaystyle J(X,Y).} The elements of J ( X , Y ) {\displaystyle J(X,Y)} are called integral forms on X × Y {\displaystyle X\times Y} , a term justified by the following fact. 
The dual J ( X , Y ) {\displaystyle J(X,Y)} of X ⊗ ^ ε Y {\displaystyle X{\widehat {\otimes }}_{\varepsilon }Y} consists of exactly those continuous bilinear forms v {\displaystyle v} on X × Y {\displaystyle X\times Y} for which v ( x , y ) = ∫ S × T φ ( x ) ψ ( y ) d μ ( φ , ψ ) {\displaystyle v(x,y)=\int _{S\times T}\varphi (x)\psi (y)\,d\mu (\varphi ,\psi )} for some closed, equicontinuous subsets S {\displaystyle S} and T {\displaystyle T} of X σ ′ {\displaystyle X_{\sigma }^{\prime }} and Y σ ′ , {\displaystyle Y_{\sigma }^{\prime },} respectively, and some Radon measure μ {\displaystyle \mu } on the compact set S × T {\displaystyle S\times T} with total mass ≤ 1 {\displaystyle \leq 1} . In the case where X , Y {\displaystyle X,Y} are Banach spaces, S {\displaystyle S} and T {\displaystyle T} can be taken to be the unit balls B X ∗ {\displaystyle B_{X^{*}}} and B Y ∗ {\displaystyle B_{Y^{*}}} . Furthermore, if A {\displaystyle A} is an equicontinuous subset of J ( X , Y ) {\displaystyle J(X,Y)} then the elements v ∈ A {\displaystyle v\in A} can be represented with S × T {\displaystyle S\times T} fixed and μ {\displaystyle \mu } running through a norm bounded subset of the space of Radon measures on S × T . {\displaystyle S\times T.} == Examples == For X {\displaystyle X} a Banach space, certain constructions related to X {\displaystyle X} in Banach space theory can be realized as injective tensor products. Let c 0 ( X ) {\displaystyle c_{0}(X)} be the space of sequences of elements of X {\displaystyle X} converging to 0 {\displaystyle 0} , equipped with the norm ‖ ( x i ) ‖ = sup i ‖ x i ‖ X {\displaystyle \|(x_{i})\|=\sup _{i}\|x_{i}\|_{X}} . Let ℓ 1 ( X ) {\displaystyle \ell _{1}(X)} be the space of unconditionally summable sequences in X {\displaystyle X} , equipped with the norm ‖ ( x i ) ‖ = sup { ∑ i = 1 ∞ | φ ( x i ) | : φ ∈ B X ∗ } . 
{\displaystyle \|(x_{i})\|=\sup {\big \{}\sum _{i=1}^{\infty }|\varphi (x_{i})|:\varphi \in B_{X^{*}}{\big \}}.} Then c 0 ( X ) {\displaystyle c_{0}(X)} and ℓ 1 ( X ) {\displaystyle \ell _{1}(X)} are Banach spaces, and isometrically c 0 ( X ) ≅ c 0 ⊗ ^ ε X {\displaystyle c_{0}(X)\cong c_{0}{\widehat {\otimes }}_{\varepsilon }X} and ℓ 1 ( X ) ≅ ℓ 1 ⊗ ^ ε X {\displaystyle \ell _{1}(X)\cong \ell _{1}{\widehat {\otimes }}_{\varepsilon }X} (where c 0 , ℓ 1 {\displaystyle c_{0},\,\ell _{1}} are the classical sequence spaces). These facts can be generalized to the case where X {\displaystyle X} is a locally convex TVS. If H {\displaystyle H} and K {\displaystyle K} are compact Hausdorff spaces, then C ( H × K ) ≅ C ( H ) ⊗ ^ ε C ( K ) {\displaystyle C(H\times K)\cong C(H){\widehat {\otimes }}_{\varepsilon }C(K)} as Banach spaces, where C ( X ) {\displaystyle C(X)} denotes the Banach space of continuous functions on X {\displaystyle X} . === Spaces of differentiable functions === Let Ω {\displaystyle \Omega } be an open subset of R n {\displaystyle \mathbb {R} ^{n}} , let Y {\displaystyle Y} be a complete, Hausdorff, locally convex topological vector space, and let C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} be the space of k {\displaystyle k} -times continuously differentiable Y {\displaystyle Y} -valued functions. Then C k ( Ω ; Y ) ≅ C k ( Ω ) ⊗ ^ ε Y {\displaystyle C^{k}(\Omega ;Y)\cong C^{k}(\Omega ){\widehat {\otimes }}_{\varepsilon }Y} . 
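The isomorphisms above, for instance C(H × K) ≅ C(H) ⊗̂_ε C(K), say that a continuous function of two variables is a uniform limit of finite sums Σᵢ fᵢ(h)gᵢ(k). A rough numerical illustration (the grid sampling and the use of a truncated SVD to produce separated sums are my own choices, not the article's): for the analytic function f(h, k) = exp(hk), low-rank separated sums approximate the sampled function rapidly.

```python
import numpy as np

h = np.linspace(0.0, 1.0, 80)
k = np.linspace(0.0, 1.0, 80)
F = np.exp(np.outer(h, k))    # samples of f(h, k) = exp(h * k) on a grid

U, s, Vt = np.linalg.svd(F)

def separated_sum(r):
    """A finite sum  sum_{i<r} f_i(h) g_i(k),  read off from the SVD."""
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

errs = [np.linalg.norm(F - separated_sum(r)) for r in (1, 2, 4)]
assert errs[0] > errs[1] > errs[2]   # more separated terms, better approximation
assert errs[2] < errs[0] / 10        # and the improvement is rapid
```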
The Schwartz spaces L ( R n ) {\displaystyle {\mathcal {L}}\left(\mathbb {R} ^{n}\right)} can also be generalized to TVSs, as follows: let L ( R n ; Y ) {\displaystyle {\mathcal {L}}\left(\mathbb {R} ^{n};Y\right)} be the space of all f ∈ C ∞ ( R n ; Y ) {\displaystyle f\in C^{\infty }\left(\mathbb {R} ^{n};Y\right)} such that for all pairs of polynomials P {\displaystyle P} and Q {\displaystyle Q} in n {\displaystyle n} variables, { P ( x ) Q ( ∂ / ∂ x ) f ( x ) : x ∈ R n } {\displaystyle \left\{P(x)Q\left(\partial /\partial x\right)f(x):x\in \mathbb {R} ^{n}\right\}} is a bounded subset of Y . {\displaystyle Y.} Topologize L ( R n ; Y ) {\displaystyle {\mathcal {L}}\left(\mathbb {R} ^{n};Y\right)} with the topology of uniform convergence over R n {\displaystyle \mathbb {R} ^{n}} of the functions P ( x ) Q ( ∂ / ∂ x ) f ( x ) , {\displaystyle P(x)Q\left(\partial /\partial x\right)f(x),} as P {\displaystyle P} and Q {\displaystyle Q} vary over all possible pairs of polynomials in n {\displaystyle n} variables. Then, L ( R n ; Y ) ≅ L ( R n ) ⊗ ^ ε Y . {\displaystyle {\mathcal {L}}\left(\mathbb {R} ^{n};Y\right)\cong {\mathcal {L}}\left(\mathbb {R} ^{n}\right){\widehat {\otimes }}_{\varepsilon }Y.} == Notes == == References == Ryan, Raymond (2002). Introduction to tensor products of Banach spaces. London New York: Springer. ISBN 1-85233-437-1. OCLC 48092184. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. == Further reading == Diestel, Joe (2008). The metric theory of tensor products : Grothendieck's résumé revisited. Providence, R.I: American Mathematical Society. ISBN 978-0-8218-4440-3. OCLC 185095773. Grothendieck, Alexander (1955). 
"Produits Tensoriels Topologiques et Espaces Nucléaires" [Topological Tensor Products and Nuclear Spaces]. Memoirs of the American Mathematical Society Series (in French). 16. Providence: American Mathematical Society. MR 0075539. OCLC 9308061. Grothendieck, Alexander (1966). Produits tensoriels topologiques et espaces nucléaires (in French). Providence: American Mathematical Society. ISBN 0-8218-1216-5. OCLC 1315788. Pietsch, Albrecht (1972). Nuclear locally convex spaces. Berlin, New York: Springer-Verlag. ISBN 0-387-05644-0. OCLC 539541. Wong (1979). Schwartz spaces, nuclear spaces, and tensor products. Berlin New York: Springer-Verlag. ISBN 3-540-09513-6. OCLC 5126158. == External links == Nuclear space at ncatlab
Wikipedia/Injective_tensor_product
In functional analysis and related areas of mathematics a dual topology is a locally convex topology on a vector space that is induced by the continuous dual of the vector space, by means of the bilinear form (also called pairing) associated with the dual pair. The different dual topologies for a given dual pair are characterized by the Mackey–Arens theorem. Every locally convex space together with its continuous dual trivially forms a dual pair, and the given locally convex topology is then a dual topology for this pair. Several topological properties depend only on the dual pair and not on the chosen dual topology, and thus it is often possible to substitute a complicated dual topology with a simpler one. == Definition == Given a dual pair ( X , Y , ⟨ , ⟩ ) {\displaystyle (X,Y,\langle ,\rangle )} , a dual topology on X {\displaystyle X} is a locally convex topology τ {\displaystyle \tau } so that ( X , τ ) ′ ≃ Y . {\displaystyle (X,\tau )'\simeq Y.} Here ( X , τ ) ′ {\displaystyle (X,\tau )'} denotes the continuous dual of ( X , τ ) {\displaystyle (X,\tau )} and ( X , τ ) ′ ≃ Y {\displaystyle (X,\tau )'\simeq Y} means that there is a linear isomorphism Ψ : Y → ( X , τ ) ′ , y ↦ ( x ↦ ⟨ x , y ⟩ ) . {\displaystyle \Psi :Y\to (X,\tau )',\quad y\mapsto (x\mapsto \langle x,y\rangle ).} (If a locally convex topology τ {\displaystyle \tau } on X {\displaystyle X} is not a dual topology, then either Ψ {\displaystyle \Psi } is not surjective or it is ill-defined since the linear functional x ↦ ⟨ x , y ⟩ {\displaystyle x\mapsto \langle x,y\rangle } is not continuous on X {\displaystyle X} for some y {\displaystyle y} .) == Properties == Theorem (by Mackey): Given a dual pair, the bounded sets under any dual topology are identical. Under any dual topology the same sets are barrelled. == Characterization of dual topologies == The Mackey–Arens theorem, named after George Mackey and Richard Arens, characterizes all possible dual topologies on a locally convex space.
The theorem shows that the coarsest dual topology is the weak topology, the topology of uniform convergence on all finite subsets of X ′ {\displaystyle X'} , and the finest topology is the Mackey topology, the topology of uniform convergence on all absolutely convex weakly compact subsets of X ′ {\displaystyle X'} . === Mackey–Arens theorem === Given a dual pair ( X , X ′ ) {\displaystyle (X,X')} with X {\displaystyle X} a locally convex space and X ′ {\displaystyle X'} its continuous dual, then τ {\displaystyle \tau } is a dual topology on X {\displaystyle X} if and only if it is a topology of uniform convergence on a family of absolutely convex and weakly compact subsets of X ′ {\displaystyle X'} == See also == Polar topology == References == Bogachev, Vladimir I; Smolyanov, Oleg G. (2017). Topological Vector Spaces and Their Applications. Springer Monographs in Mathematics. Cham, Switzerland: Springer International Publishing. ISBN 978-3-319-57117-1. OCLC 987790956.
Wikipedia/Dual_topology
In functional analysis, a branch of mathematics, nest algebras are a class of operator algebras that generalise the upper-triangular matrix algebras to a Hilbert space context. They were introduced by Ringrose (1965) and have many interesting properties. They are non-selfadjoint algebras, are closed in the weak operator topology and are reflexive. Nest algebras are among the simplest examples of commutative subspace lattice algebras. Indeed, they are formally defined as the algebra of bounded operators leaving invariant each subspace contained in a subspace nest, that is, a set of subspaces which is totally ordered by inclusion and is also a complete lattice. Since the orthogonal projections corresponding to the subspaces in a nest commute, nests are commutative subspace lattices. By way of an example, let us apply this definition to recover the finite-dimensional upper-triangular matrices. Let us work in the n {\displaystyle n} -dimensional complex vector space C n {\displaystyle \mathbb {C} ^{n}} , and let e 1 , e 2 , … , e n {\displaystyle e_{1},e_{2},\dots ,e_{n}} be the standard basis. For j = 0 , 1 , 2 , … , n {\displaystyle j=0,1,2,\dots ,n} , let S j {\displaystyle S_{j}} be the j {\displaystyle j} -dimensional subspace of C n {\displaystyle \mathbb {C} ^{n}} spanned by the first j {\displaystyle j} basis vectors e 1 , … , e j {\displaystyle e_{1},\dots ,e_{j}} . Let N = { ( 0 ) = S 0 , S 1 , S 2 , … , S n − 1 , S n = C n } ; {\displaystyle N=\{(0)=S_{0},S_{1},S_{2},\dots ,S_{n-1},S_{n}=\mathbb {C} ^{n}\};} then N is a subspace nest, and the corresponding nest algebra of n × n complex matrices M leaving each subspace in N invariant – that is, satisfying M S ⊆ S {\displaystyle MS\subseteq S} for each S in N – is precisely the set of upper-triangular matrices. If we omit one or more of the subspaces Sj from N then the corresponding nest algebra consists of block upper-triangular matrices.
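The equivalence in the example above can be checked directly: M maps each S_j into itself exactly when its first j columns have no entries below row j, for every j, which is the upper-triangular condition. A small numpy sketch of this check (an illustration using real matrices for simplicity):

```python
import numpy as np

def leaves_nest_invariant(M):
    """Check M S_j ⊆ S_j for every S_j = span{e_1, ..., e_j}."""
    n = M.shape[0]
    # M S_j ⊆ S_j means the first j columns of M vanish below row j
    return all(np.allclose(M[j:, :j], 0) for j in range(1, n))

rng = np.random.default_rng(2)
A = np.triu(rng.standard_normal((4, 4)))   # upper triangular: in the nest algebra
B = A.copy(); B[3, 0] = 1.0                # one entry below the diagonal: not in it

assert leaves_nest_invariant(A)
assert not leaves_nest_invariant(B)
```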
== Properties == Nest algebras are hyperreflexive with distance constant 1. == See also == flag manifold == References == Ringrose, John R. (1965), "On some algebras of operators", Proceedings of the London Mathematical Society, Third Series, 15: 61–83, doi:10.1112/plms/s3-15.1.61, ISSN 0024-6115, MR 0171174
Wikipedia/Nest_algebra
In functional analysis, a branch of mathematics, the ultraweak topology, also called the weak-* topology, or weak-* operator topology or σ-weak topology, is a topology on B(H), the space of bounded operators on a Hilbert space H. B(H) admits a predual B*(H), the trace class operators on H. The ultraweak topology is the weak-* topology so induced; in other words, the ultraweak topology is the weakest topology such that every element of the predual remains continuous as a linear functional on B(H). == Relation with the weak (operator) topology == The ultraweak topology is similar to the weak operator topology. For example, on any norm-bounded set the weak operator and ultraweak topologies are the same, and in particular, the unit ball is compact in both topologies. The ultraweak topology is stronger than the weak operator topology. One problem with the weak operator topology is that the dual of B(H) with the weak operator topology is "too small". The ultraweak topology fixes this problem: the dual is the full predual B*(H) of all trace class operators. In general the ultraweak topology is more useful than the weak operator topology, but it is more complicated to define, and the weak operator topology is often more convenient to work with. The ultraweak topology can be obtained from the weak operator topology as follows. If H1 is a separable infinite dimensional Hilbert space then B(H) can be embedded in B(H⊗H1) by tensoring with the identity map on H1. Then the restriction of the weak operator topology on B(H⊗H1) is the ultraweak topology of B(H). == See also == Topologies on the set of operators on a Hilbert space Ultrastrong topology Weak operator topology == References == Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer.
ISBN 978-1-4612-7155-0. OCLC 840278135. Strătilă, Șerban Valentin; Zsidó, László (1979). Lectures on Von Neumann Algebras (1st English ed.). Editura Academici / Abacus. pp. 16–17. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Wikipedia/Ultraweak_topology
In mathematics, Riesz's lemma (after Frigyes Riesz) is a lemma in functional analysis. It specifies conditions, often easy to check, that guarantee that a subspace in a normed vector space is dense. The lemma may also be called the Riesz lemma or Riesz inequality. It can be seen as a substitute for orthogonality when the normed space is not an inner product space. == Statement == Riesz's lemma: If Y {\displaystyle Y} is a closed proper vector subspace of a normed space X {\displaystyle X} and 0 < α < 1 , {\displaystyle 0<\alpha <1,} then there exists a vector u {\displaystyle u} in X {\displaystyle X} of unit norm such that ‖ u − y ‖ ≥ α {\displaystyle \|u-y\|\geq \alpha } for all y ∈ Y . {\displaystyle y\in Y.} If X {\displaystyle X} is a reflexive Banach space then this conclusion is also true when α = 1. {\displaystyle \alpha =1.} Metric reformulation As usual, let d ( x , y ) := ‖ x − y ‖ {\displaystyle d(x,y):=\|x-y\|} denote the canonical metric induced by the norm, call the set { x ∈ X : ‖ x ‖ = 1 } {\displaystyle \{x\in X:\|x\|=1\}} of all vectors that are a distance of 1 {\displaystyle 1} from the origin the unit sphere, and denote the distance from a point u {\displaystyle u} to the set Y ⊆ X {\displaystyle Y\subseteq X} by d ( u , Y ) := inf y ∈ Y d ( u , y ) = inf y ∈ Y ‖ u − y ‖ . {\displaystyle d(u,Y)~:=~\inf _{y\in Y}d(u,y)~=~\inf _{y\in Y}\|u-y\|.} The inequality α ≤ d ( u , Y ) {\displaystyle \alpha \leq d(u,Y)} holds if and only if ‖ u − y ‖ ≥ α {\displaystyle \|u-y\|\geq \alpha } for all y ∈ Y , {\displaystyle y\in Y,} and it formally expresses the notion that the distance between u {\displaystyle u} and Y {\displaystyle Y} is at least α . {\displaystyle \alpha .} Because every vector subspace (such as Y {\displaystyle Y} ) contains the origin 0 , {\displaystyle 0,} substituting y := 0 {\displaystyle y:=0} in this infimum shows that d ( u , Y ) ≤ ‖ u ‖ {\displaystyle d(u,Y)\leq \|u\|} for every vector u ∈ X . {\displaystyle u\in X.} In particular, d ( u , Y ) ≤ 1 {\displaystyle d(u,Y)\leq 1} when u {\displaystyle u} is a unit vector. Using this new notation, the conclusion of Riesz's lemma may be restated more succinctly as: α ≤ d ( u , Y ) ≤ 1 = ‖ u ‖ {\displaystyle \alpha \leq d(u,Y)\leq 1=\|u\|} holds for some u ∈ X .
{\displaystyle u\in X.} Using this new terminology, Riesz's lemma may also be restated in plain English as: Given any closed proper vector subspace of a normed space X , {\displaystyle X,} for any desired minimum distance α {\displaystyle \alpha } less than 1 , {\displaystyle 1,} there exists some vector in the unit sphere of X {\displaystyle X} that is at least this desired distance away from the subspace. The proof can be found in functional analysis texts such as Kreyszig. Minimum distances α {\displaystyle \alpha } not satisfying the hypotheses When X = { 0 } {\displaystyle X=\{0\}} is trivial, then it has no proper vector subspace Y , {\displaystyle Y,} and so Riesz's lemma holds vacuously for all real numbers α ∈ R . {\displaystyle \alpha \in \mathbb {R} .} The remainder of this section will assume that X ≠ { 0 } , {\displaystyle X\neq \{0\},} which guarantees that a unit vector exists. The inclusion of the hypothesis 0 < α < 1 {\displaystyle 0<\alpha <1} can be explained by considering the three cases: α ≤ 0 {\displaystyle \alpha \leq 0} , α = 1 , {\displaystyle \alpha =1,} and α > 1. {\displaystyle \alpha >1.} The lemma holds when α ≤ 0 {\displaystyle \alpha \leq 0} since every unit vector u ∈ X {\displaystyle u\in X} satisfies the conclusion α ≤ 0 ≤ d ( u , Y ) ≤ 1 = ‖ u ‖ . {\displaystyle \alpha \leq 0\leq d(u,Y)\leq 1=\|u\|.} The hypothesis 0 < α {\displaystyle 0<\alpha } is included solely to exclude this trivial case and is sometimes omitted from the lemma's statement. Riesz's lemma is always false when α > 1 {\displaystyle \alpha >1} because for every unit vector u ∈ X , {\displaystyle u\in X,} the required inequality ‖ u − y ‖ ≥ α {\displaystyle \|u-y\|\geq \alpha } fails to hold for y := 0 ∈ Y {\displaystyle y:=0\in Y} (since ‖ u − 0 ‖ = 1 < α {\displaystyle \|u-0\|=1<\alpha } ).
Another consequence of d ( u , Y ) > 1 {\displaystyle d(u,Y)>1} being impossible is that the inequality d ( u , Y ) ≥ 1 {\displaystyle d(u,Y)\geq 1} holds if and only if equality d ( u , Y ) = 1 {\displaystyle d(u,Y)=1} holds. === Reflexivity === This leaves only the case α = 1 {\displaystyle \alpha =1} for consideration, in which case the statement of Riesz’s lemma becomes: For every closed proper vector subspace Y {\displaystyle Y} of X , {\displaystyle X,} there exists some vector u {\displaystyle u} of unit norm that satisfies d ( u , Y ) = 1. {\displaystyle d(u,Y)=1.} When X {\displaystyle X} is a Banach space, then this statement is true if and only if X {\displaystyle X} is a reflexive space. Explicitly, a Banach space X {\displaystyle X} is reflexive if and only if for every closed proper vector subspace Y , {\displaystyle Y,} there is some vector u {\displaystyle u} on the unit sphere of X {\displaystyle X} that is at least a distance of 1 = d ( u , Y ) {\displaystyle 1=d(u,Y)} away from the subspace. In a non-reflexive Banach space, such as the Lebesgue space ℓ ∞ ( N ) {\displaystyle \ell _{\infty }(\mathbb {N} )} of all bounded sequences, Riesz’s lemma does not hold for α = 1 {\displaystyle \alpha =1} . Since every finite dimensional normed space is a reflexive Banach space, Riesz’s lemma does hold for α = 1 {\displaystyle \alpha =1} when the normed space is finite-dimensional, as will now be shown. When the dimension of X {\displaystyle X} is finite then the closed unit ball B ⊆ X {\displaystyle B\subseteq X} is compact. Since the distance function d ( ⋅ , Y ) {\displaystyle d(\cdot ,Y)} is continuous, it attains a maximum on the compact set B {\displaystyle B} ; because Riesz's lemma supplies unit vectors u {\displaystyle u} with d ( u , Y ) {\displaystyle d(u,Y)} arbitrarily close to 1 , {\displaystyle 1,} this maximum equals 1 {\displaystyle 1} and is attained by some unit vector, proving the claim. The "perpendicular" vector may be found pictorially by drawing a unit sphere that is supported by Y {\displaystyle Y} at the origin.
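In a Euclidean space the extreme case α = 1 can be realized explicitly: any unit vector orthogonal to Y satisfies d(u, Y) = 1, since the nearest point of Y to u is the origin. A numerical sketch (the subspace and the dimensions are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# A proper subspace Y of R^5 spanned by two random vectors
A = rng.standard_normal((5, 2))
Q, _ = np.linalg.qr(A)              # orthonormal basis of Y (5 x 2)

def dist_to_Y(u):
    """Euclidean distance from u to Y, via orthogonal projection onto Y."""
    return np.linalg.norm(u - Q @ (Q.T @ u))

# A unit vector orthogonal to Y: project a random vector off Y and normalize
v = rng.standard_normal(5)
u = v - Q @ (Q.T @ v)
u /= np.linalg.norm(u)

assert np.isclose(np.linalg.norm(u), 1.0)
assert np.isclose(dist_to_Y(u), 1.0)    # the alpha = 1 conclusion of Riesz's lemma
```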
For example, if the reflexive Banach space X = R 3 {\displaystyle X=\mathbb {R} ^{3}} is endowed with the usual ‖ ⋅ ‖ 2 {\displaystyle \|\cdot \|_{2}} Euclidean norm and if Y = R × R × { 0 } {\displaystyle Y=\mathbb {R} \times \mathbb {R} \times \{0\}} is the x - y {\displaystyle x{\text{-}}y} plane then the points u = ( 0 , 0 , ± 1 ) {\displaystyle u=(0,0,\pm 1)} satisfy the conclusion d ( u , Y ) = 1. {\displaystyle d(u,Y)=1.} If Z = { ( 0 , 0 ) } × R {\displaystyle Z=\{(0,0)\}\times \mathbb {R} } is the z {\displaystyle z} -axis then every point u {\displaystyle u} belonging to the unit circle in the x - y {\displaystyle x{\text{-}}y} plane satisfies the conclusion d ( u , Z ) = 1. {\displaystyle d(u,Z)=1.} But if X = R 3 {\displaystyle X=\mathbb {R} ^{3}} were endowed with the ‖ ⋅ ‖ 1 {\displaystyle \|\cdot \|_{1}} taxicab norm (instead of the Euclidean norm), then the conclusion d ( u , Z ) = 1 {\displaystyle d(u,Z)=1} would be satisfied by every point u = ( x , y , 0 ) {\displaystyle u=(x,y,0)} belonging to the “diamond” | x | + | y | = 1 {\displaystyle |x|+|y|=1} in the x - y {\displaystyle x{\text{-}}y} plane (a square with vertices at ( ± 1 , 0 , 0 ) {\displaystyle (\pm 1,0,0)} and ( 0 , ± 1 , 0 ) {\displaystyle (0,\pm 1,0)} ). == Some consequences == Riesz's lemma guarantees that for any given 0 < α < 1 , {\displaystyle 0<\alpha <1,} every infinite-dimensional normed space contains a sequence x 1 , x 2 , … {\displaystyle x_{1},x_{2},\ldots } of (distinct) unit vectors satisfying ‖ x n − x m ‖ > α {\displaystyle \|x_{n}-x_{m}\|>\alpha } for m ≠ n ; {\displaystyle m\neq n;} or stated in plain English, these vectors are all separated from each other by a distance of more than α {\displaystyle \alpha } while simultaneously also all lying on the unit sphere. Such an infinite sequence of vectors cannot be found in the unit sphere of any finite dimensional normed space (just consider for example the unit circle in R 2 {\displaystyle \mathbb {R} ^{2}} ). 
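The taxicab-norm claim can be verified with a one-line computation. Points of the z {\displaystyle z} -axis Z {\displaystyle Z} have the form ( 0 , 0 , t ) , {\displaystyle (0,0,t),} so for any u = ( x , y , 0 ) {\displaystyle u=(x,y,0)} on the diamond:

```latex
% Distance in the taxicab norm from u = (x, y, 0) with |x| + |y| = 1
% to the z-axis Z = \{(0,0)\} \times \mathbb{R}:
\[
d(u, Z)
  = \inf_{t \in \mathbb{R}} \bigl\| (x, y, 0) - (0, 0, t) \bigr\|_{1}
  = \inf_{t \in \mathbb{R}} \bigl( |x| + |y| + |t| \bigr)
  = |x| + |y| = 1,
\]
% the infimum being attained at t = 0.
```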
This sequence can be constructed by induction for any constant 0 < α < 1. {\displaystyle 0<\alpha <1.} Start by picking any element x 1 {\displaystyle x_{1}} from the unit sphere. Let Y n − 1 {\displaystyle Y_{n-1}} be the linear span of { x 1 , … , x n − 1 } {\displaystyle \{x_{1},\ldots ,x_{n-1}\}} and (using Riesz's lemma) pick x n {\displaystyle x_{n}} from the unit sphere such that d ( x n , Y n − 1 ) > α {\displaystyle d\left(x_{n},Y_{n-1}\right)>\alpha } where d ( x n , Y ) = inf y ∈ Y ‖ x n − y ‖ . {\displaystyle d(x_{n},Y)=\inf _{y\in Y}\|x_{n}-y\|.} This sequence x 1 , x 2 , … {\displaystyle x_{1},x_{2},\ldots } contains no convergent subsequence, which implies that the closed unit ball is not compact. === Characterization of finite dimension === Riesz's lemma can be applied directly to show that the unit ball of an infinite-dimensional normed space X {\displaystyle X} is never compact. This can be used to characterize finite dimensional normed spaces: if X {\displaystyle X} is a normed vector space, then X {\displaystyle X} is finite dimensional if and only if the closed unit ball in X {\displaystyle X} is compact. More generally, if a topological vector space X {\displaystyle X} is locally compact, then it is finite dimensional. The converse of this is also true. Namely, if a topological vector space is finite dimensional, it is locally compact. Therefore local compactness characterizes finite-dimensionality. This classical result is also attributed to Riesz. A short proof can be sketched as follows: let C {\displaystyle C} be a compact neighborhood of the origin in X . {\displaystyle X.} By compactness, there are c 1 , … , c n ∈ C {\displaystyle c_{1},\ldots ,c_{n}\in C} such that C ⊆ ( c 1 + 1 2 C ) ∪ ⋯ ∪ ( c n + 1 2 C ) . 
{\displaystyle C~\subseteq ~\left(c_{1}+{\tfrac {1}{2}}C\right)\cup \cdots \cup \left(c_{n}+{\tfrac {1}{2}}C\right).} We claim that the finite dimensional subspace Y {\displaystyle Y} spanned by { c 1 , … , c n } {\displaystyle \{c_{1},\ldots ,c_{n}\}} is dense in X , {\displaystyle X,} or equivalently, its closure is X . {\displaystyle X.} Since X {\displaystyle X} is the union of scalar multiples of C , {\displaystyle C,} it is sufficient to show that C ⊆ Y . {\displaystyle C\subseteq Y.} By induction, for every m , {\displaystyle m,} C ⊆ Y + 1 2 m C . {\displaystyle C~\subseteq ~Y+{\frac {1}{2^{m}}}C.} But compact sets are bounded, so C {\displaystyle C} lies in the closure of Y . {\displaystyle Y.} This proves the result. For a different proof based on Hahn–Banach theorem see Crespín (1994). === Spectral theory === The spectral properties of compact operators acting on a Banach space are similar to those of matrices. Riesz's lemma is essential in establishing this fact. === Other applications === As detailed in the article on infinite-dimensional Lebesgue measure, this is useful in showing the non-existence of certain measures on infinite-dimensional Banach spaces. Riesz's lemma also shows that the identity operator on a Banach space X {\displaystyle X} is compact if and only if X {\displaystyle X} is finite-dimensional. == See also == F. Riesz's theorem James's Theorem—a characterization of reflexivity given by a condition on the unit ball == References == Diestel, Joe (1984). Sequences and series in Banach spaces. New York: Springer-Verlag. ISBN 0-387-90859-5. OCLC 9556781. Hashimoto, Kazuo; Nakamura, Gen; Oharu, Shinnosuke (1986-01-01). "Riesz's lemma and orthogonality in normed spaces" (PDF). Hiroshima Mathematical Journal. 16 (2). Hiroshima University - Department of Mathematics. doi:10.32917/hmj/1206130429. ISSN 0018-2079. Kreyszig, Erwin (1978). Introductory functional analysis with applications. New York: John Wiley & Sons. ISBN 0-471-50731-8. 
OCLC 2818701. Riesz, Frederic; Sz.-Nagy, Béla (1990) [1955]. Functional Analysis. Translated by Boron, Leo F. New York: Dover Publications. ISBN 0-486-66289-6. OCLC 21228994. Rynne, Bryan P.; Youngson, Martin A. (2008). Linear Functional Analysis (2nd ed.). London: Springer. ISBN 978-1848000049. OCLC 233972987. == Further reading == https://mathoverflow.net/questions/470438/a-variation-of-the-riesz-lemma
Wikipedia/Riesz's_lemma
In functional analysis and related areas of mathematics, the strong dual space of a topological vector space (TVS) X {\displaystyle X} is the continuous dual space X ′ {\displaystyle X^{\prime }} of X {\displaystyle X} equipped with the strong (dual) topology or the topology of uniform convergence on bounded subsets of X , {\displaystyle X,} where this topology is denoted by b ( X ′ , X ) {\displaystyle b\left(X^{\prime },X\right)} or β ( X ′ , X ) . {\displaystyle \beta \left(X^{\prime },X\right).} The coarsest polar topology is called the weak topology. The strong dual space plays such an important role in modern functional analysis that the continuous dual space is usually assumed to have the strong dual topology unless indicated otherwise. To emphasize that the continuous dual space, X ′ , {\displaystyle X^{\prime },} has the strong dual topology, X b ′ {\displaystyle X_{b}^{\prime }} or X β ′ {\displaystyle X_{\beta }^{\prime }} may be written. == Strong dual topology == Throughout, all vector spaces will be assumed to be over the field F {\displaystyle \mathbb {F} } of either the real numbers R {\displaystyle \mathbb {R} } or complex numbers C . {\displaystyle \mathbb {C} .} === Definition from a dual system === Let ( X , Y , ⟨ ⋅ , ⋅ ⟩ ) {\displaystyle (X,Y,\langle \cdot ,\cdot \rangle )} be a dual pair of vector spaces over the field F {\displaystyle \mathbb {F} } of real numbers R {\displaystyle \mathbb {R} } or complex numbers C . {\displaystyle \mathbb {C} .} For any B ⊆ X {\displaystyle B\subseteq X} and any y ∈ Y , {\displaystyle y\in Y,} define | y | B = sup x ∈ B | ⟨ x , y ⟩ | . {\displaystyle |y|_{B}=\sup _{x\in B}|\langle x,y\rangle |.} Neither X {\displaystyle X} nor Y {\displaystyle Y} has a topology, so a subset B ⊆ X {\displaystyle B\subseteq X} is said to be bounded by a subset C ⊆ Y {\displaystyle C\subseteq Y} if | y | B < ∞ {\displaystyle |y|_{B}<\infty } for all y ∈ C . 
{\displaystyle y\in C.} So a subset B ⊆ X {\displaystyle B\subseteq X} is called bounded if and only if sup x ∈ B | ⟨ x , y ⟩ | < ∞ for all y ∈ Y . {\displaystyle \sup _{x\in B}|\langle x,y\rangle |<\infty \quad {\text{ for all }}y\in Y.} This is equivalent to the usual notion of bounded subsets when X {\displaystyle X} is given the weak topology induced by Y , {\displaystyle Y,} which is a Hausdorff locally convex topology. Let B {\displaystyle {\mathcal {B}}} denote the family of all subsets B ⊆ X {\displaystyle B\subseteq X} bounded by elements of Y {\displaystyle Y} ; that is, B {\displaystyle {\mathcal {B}}} is the set of all subsets B ⊆ X {\displaystyle B\subseteq X} such that for every y ∈ Y , {\displaystyle y\in Y,} | y | B = sup x ∈ B | ⟨ x , y ⟩ | < ∞ . {\displaystyle |y|_{B}=\sup _{x\in B}|\langle x,y\rangle |<\infty .} Then the strong topology β ( Y , X , ⟨ ⋅ , ⋅ ⟩ ) {\displaystyle \beta (Y,X,\langle \cdot ,\cdot \rangle )} on Y , {\displaystyle Y,} also denoted by b ( Y , X , ⟨ ⋅ , ⋅ ⟩ ) {\displaystyle b(Y,X,\langle \cdot ,\cdot \rangle )} or simply β ( Y , X ) {\displaystyle \beta (Y,X)} or b ( Y , X ) {\displaystyle b(Y,X)} if the pairing ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } is understood, is defined as the locally convex topology on Y {\displaystyle Y} generated by the seminorms of the form | y | B = sup x ∈ B | ⟨ x , y ⟩ | , y ∈ Y , B ∈ B . {\displaystyle |y|_{B}=\sup _{x\in B}|\langle x,y\rangle |,\qquad y\in Y,\qquad B\in {\mathcal {B}}.} The definition of the strong dual topology now proceeds as in the case of a TVS. Note that if X {\displaystyle X} is a TVS whose continuous dual space separates points on X , {\displaystyle X,} then X {\displaystyle X} is part of a canonical dual system ( X , X ′ , ⟨ ⋅ , ⋅ ⟩ ) {\displaystyle \left(X,X^{\prime },\langle \cdot ,\cdot \rangle \right)} where ⟨ x , x ′ ⟩ := x ′ ( x ) . 
{\displaystyle \left\langle x,x^{\prime }\right\rangle :=x^{\prime }(x).} In the special case when X {\displaystyle X} is a locally convex space, the strong topology on the (continuous) dual space X ′ {\displaystyle X^{\prime }} (that is, on the space of all continuous linear functionals f : X → F {\displaystyle f:X\to \mathbb {F} } ) is defined as the strong topology β ( X ′ , X ) , {\displaystyle \beta \left(X^{\prime },X\right),} and it coincides with the topology of uniform convergence on bounded sets in X , {\displaystyle X,} i.e. with the topology on X ′ {\displaystyle X^{\prime }} generated by the seminorms of the form | f | B = sup x ∈ B | f ( x ) | , where f ∈ X ′ , {\displaystyle |f|_{B}=\sup _{x\in B}|f(x)|,\qquad {\text{ where }}f\in X^{\prime },} where B {\displaystyle B} runs over the family of all bounded sets in X . {\displaystyle X.} The space X ′ {\displaystyle X^{\prime }} with this topology is called strong dual space of the space X {\displaystyle X} and is denoted by X β ′ . {\displaystyle X_{\beta }^{\prime }.} === Definition on a TVS === Suppose that X {\displaystyle X} is a topological vector space (TVS) over the field F . {\displaystyle \mathbb {F} .} Let B {\displaystyle {\mathcal {B}}} be any fundamental system of bounded sets of X {\displaystyle X} ; that is, B {\displaystyle {\mathcal {B}}} is a family of bounded subsets of X {\displaystyle X} such that every bounded subset of X {\displaystyle X} is a subset of some B ∈ B {\displaystyle B\in {\mathcal {B}}} ; the set of all bounded subsets of X {\displaystyle X} forms a fundamental system of bounded sets of X . {\displaystyle X.} A basis of closed neighborhoods of the origin in X ′ {\displaystyle X^{\prime }} is given by the polars: B ∘ := { x ′ ∈ X ′ : sup x ∈ B | x ′ ( x ) | ≤ 1 } {\displaystyle B^{\circ }:=\left\{x^{\prime }\in X^{\prime }:\sup _{x\in B}\left|x^{\prime }(x)\right|\leq 1\right\}} as B {\displaystyle B} ranges over B {\displaystyle {\mathcal {B}}} ). 
This is a locally convex topology that is given by the set of seminorms on X ′ {\displaystyle X^{\prime }} : | x ′ | B := sup x ∈ B | x ′ ( x ) | {\displaystyle \left|x^{\prime }\right|_{B}:=\sup _{x\in B}\left|x^{\prime }(x)\right|} as B {\displaystyle B} ranges over B . {\displaystyle {\mathcal {B}}.} If X {\displaystyle X} is normable then so is X b ′ {\displaystyle X_{b}^{\prime }} and X b ′ {\displaystyle X_{b}^{\prime }} will in fact be a Banach space. If X {\displaystyle X} is a normed space with norm ‖ ⋅ ‖ {\displaystyle \|\cdot \|} then X ′ {\displaystyle X^{\prime }} has a canonical norm (the operator norm) given by ‖ x ′ ‖ := sup ‖ x ‖ ≤ 1 | x ′ ( x ) | {\displaystyle \left\|x^{\prime }\right\|:=\sup _{\|x\|\leq 1}\left|x^{\prime }(x)\right|} ; the topology that this norm induces on X ′ {\displaystyle X^{\prime }} is identical to the strong dual topology. == Bidual == The bidual or second dual of a TVS X , {\displaystyle X,} often denoted by X ′ ′ , {\displaystyle X^{\prime \prime },} is the strong dual of the strong dual of X {\displaystyle X} : X ′ ′ := ( X b ′ ) ′ {\displaystyle X^{\prime \prime }\,:=\,\left(X_{b}^{\prime }\right)^{\prime }} where X b ′ {\displaystyle X_{b}^{\prime }} denotes X ′ {\displaystyle X^{\prime }} endowed with the strong dual topology b ( X ′ , X ) . {\displaystyle b\left(X^{\prime },X\right).} Unless indicated otherwise, the vector space X ′ ′ {\displaystyle X^{\prime \prime }} is usually assumed to be endowed with the strong dual topology induced on it by X b ′ , {\displaystyle X_{b}^{\prime },} in which case it is called the strong bidual of X {\displaystyle X} ; that is, X ′ ′ := ( X b ′ ) b ′ {\displaystyle X^{\prime \prime }\,:=\,\left(X_{b}^{\prime }\right)_{b}^{\prime }} where the vector space X ′ ′ {\displaystyle X^{\prime \prime }} is endowed with the strong dual topology b ( X ′ ′ , X b ′ ) . 
{\displaystyle b\left(X^{\prime \prime },X_{b}^{\prime }\right).} == Properties == Let X {\displaystyle X} be a locally convex TVS. A convex balanced weakly compact subset of X ′ {\displaystyle X^{\prime }} is bounded in X b ′ . {\displaystyle X_{b}^{\prime }.} Every weakly bounded subset of X ′ {\displaystyle X^{\prime }} is strongly bounded. If X {\displaystyle X} is a barreled space then X {\displaystyle X} 's topology is identical to the strong dual topology b ( X , X ′ ) {\displaystyle b\left(X,X^{\prime }\right)} and to the Mackey topology on X . {\displaystyle X.} If X {\displaystyle X} is a metrizable locally convex space, then the strong dual of X {\displaystyle X} is a bornological space if and only if it is an infrabarreled space, if and only if it is a barreled space. If X {\displaystyle X} is a Hausdorff locally convex TVS then ( X , b ( X , X ′ ) ) {\displaystyle \left(X,b\left(X,X^{\prime }\right)\right)} is metrizable if and only if there exists a countable set B {\displaystyle {\mathcal {B}}} of bounded subsets of X {\displaystyle X} such that every bounded subset of X {\displaystyle X} is contained in some element of B . {\displaystyle {\mathcal {B}}.} If X {\displaystyle X} is locally convex, then this topology is finer than all other G {\displaystyle {\mathcal {G}}} -topologies on X ′ {\displaystyle X^{\prime }} when considering only G {\displaystyle {\mathcal {G}}} 's whose sets are subsets of X . {\displaystyle X.} If X {\displaystyle X} is a bornological space (e.g. metrizable or LF-space) then X b ( X ′ , X ) ′ {\displaystyle X_{b(X^{\prime },X)}^{\prime }} is complete. If X {\displaystyle X} is a barrelled space, then its topology coincides with the strong topology β ( X , X ′ ) {\displaystyle \beta \left(X,X^{\prime }\right)} on X {\displaystyle X} and with the Mackey topology on X {\displaystyle X} generated by the pairing ( X , X ′ ) . 
{\displaystyle \left(X,X^{\prime }\right).} == Examples == If X {\displaystyle X} is a normed vector space, then its (continuous) dual space X ′ {\displaystyle X^{\prime }} with the strong topology coincides with the Banach dual space X ′ {\displaystyle X^{\prime }} ; that is, with the space X ′ {\displaystyle X^{\prime }} with the topology induced by the operator norm. Conversely, the β ( X , X ′ ) {\displaystyle \beta \left(X,X^{\prime }\right)} -topology on X {\displaystyle X} is identical to the topology induced by the norm on X . {\displaystyle X.} == See also == Dual topology Dual system List of topologies – List of concrete topologies and topological spaces Polar topology – Dual space topology of uniform convergence on some sub-collection of bounded subsets Reflexive space – Locally convex topological vector space Semi-reflexive space Strong topology Topologies on spaces of linear maps == References == == Bibliography == Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. Wong (1979). Schwartz spaces, nuclear spaces, and tensor products. Berlin New York: Springer-Verlag. ISBN 3-540-09513-6. OCLC 5126158.
Wikipedia/Strong_topology_(polar_topology)
Strong measurability has a number of different meanings, some of which are explained below. == Values in Banach spaces == For a function f with values in a Banach space (or Fréchet space), strong measurability usually means Bochner measurability. However, if the values of f lie in the space L ( X , Y ) {\displaystyle {\mathcal {L}}(X,Y)} of continuous linear operators from X to Y, then often strong measurability means that the operator f(x) is Bochner measurable for each fixed x in the domain of f, whereas the Bochner measurability of f is called uniform measurability (cf. "uniformly continuous" vs. "strongly continuous"). == Bounded operators == A family of bounded linear operators combined with the direct integral is strongly measurable when each of the individual operators is strongly measurable. == Semigroups == A semigroup of linear operators can be strongly measurable yet not strongly continuous. It is uniformly measurable if and only if it is uniformly continuous, i.e., if and only if its generator is bounded. == References ==
Wikipedia/Strongly_measurable_functions
In functional analysis, the ultrastrong topology, or σ-strong topology, or strongest topology on the set B(H) of bounded operators on a Hilbert space is the topology defined by the family of seminorms p ω ( x ) = ω ( x ∗ x ) 1 / 2 {\displaystyle p_{\omega }(x)=\omega (x^{*}x)^{1/2}} for positive elements ω {\displaystyle \omega } of the predual L ∗ ( H ) {\displaystyle L_{*}(H)} that consists of trace class operators. : 68  It was introduced by John von Neumann in 1936. == Relation with the strong (operator) topology == The ultrastrong topology is similar to the strong (operator) topology. For example, on any norm-bounded set the strong operator and ultrastrong topologies are the same. The ultrastrong topology is stronger than the strong operator topology. One problem with the strong operator topology is that the dual of B(H) with the strong operator topology is "too small". The ultrastrong topology fixes this problem: the dual is the full predual B*(H) of all trace class operators. In general the ultrastrong topology is better than the strong operator topology, but is more complicated to define so people usually use the strong operator topology if they can get away with it. The ultrastrong topology can be obtained from the strong operator topology as follows. If H1 is a separable infinite dimensional Hilbert space then B(H) can be embedded in B(H⊗H1) by tensoring with the identity map on H1. Then the restriction of the strong operator topology on B(H⊗H1) is the ultrastrong topology of B(H). Equivalently, it is given by the family of seminorms x ↦ ( ∑ n = 1 ∞ ‖ x ξ n ‖ 2 ) 1 / 2 , {\displaystyle x\mapsto \left(\sum _{n=1}^{\infty }\|x\xi _{n}\|^{2}\right)^{1/2},} where ∑ n = 1 ∞ ‖ ξ n ‖ 2 < ∞ . {\displaystyle \sum _{n=1}^{\infty }\|\xi _{n}\|^{2}<\infty .} : 68  The adjoint map is not continuous in the ultrastrong topology. 
There is another topology called the ultrastrong* topology, which is the weakest topology stronger than the ultrastrong topology such that the adjoint map is continuous.: 68  == See also == Strong operator topology – Locally convex topology on function spaces Topological tensor product – Tensor product constructions for topological vector spaces Topologies on the set of operators on a Hilbert space Ultraweak topology == References == Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Wikipedia/Ultrastrong_topology
In computer science, a sequential algorithm or serial algorithm is an algorithm that is executed sequentially – once through, from start to finish, without other processing executing – as opposed to concurrently or in parallel. The term is primarily used to contrast with concurrent algorithm or parallel algorithm; most standard computer algorithms are sequential algorithms, and not specifically identified as such, as sequentialness is a background assumption. Concurrency and parallelism are in general distinct concepts, but they often overlap – many distributed algorithms are both concurrent and parallel – and thus "sequential" is used to contrast with both, without distinguishing which one. If these need to be distinguished, the opposing pairs sequential/concurrent and serial/parallel may be used. "Sequential algorithm" may also refer specifically to an algorithm for decoding a convolutional code. == See also == Online algorithm Streaming algorithm == References ==
Wikipedia/Sequential_algorithm
Because matrix multiplication is such a central operation in many numerical algorithms, much work has been invested in making matrix multiplication algorithms efficient. Applications of matrix multiplication in computational problems are found in many fields including scientific computing and pattern recognition and in seemingly unrelated problems such as counting the paths through a graph. Many different algorithms have been designed for multiplying matrices on different types of hardware, including parallel and distributed systems, where the computational work is spread over multiple processors (perhaps over a network). Directly applying the mathematical definition of matrix multiplication gives an algorithm that takes time on the order of n3 field operations to multiply two n × n matrices over that field (Θ(n3) in big O notation). Better asymptotic bounds on the time required to multiply matrices have been known since Strassen's algorithm in the 1960s, but the optimal time (that is, the computational complexity of matrix multiplication) remains unknown. As of April 2024, the best announced bound on the asymptotic complexity of a matrix multiplication algorithm is O(n2.371552) time, given by Williams, Xu, Xu, and Zhou. This improves on the bound of O(n2.3728596) time, given by Alman and Williams. However, this algorithm is a galactic algorithm because of the large constants and cannot be realized practically. == Iterative algorithm == The definition of matrix multiplication is that if C = AB for an n × m matrix A and an m × p matrix B, then C is an n × p matrix with entries c i j = ∑ k = 1 m a i k b k j . {\displaystyle c_{ij}=\sum _{k=1}^{m}a_{ik}b_{kj}.} From this, a simple algorithm can be constructed which loops over the indices i from 1 through n and j from 1 through p, computing each entry with an inner loop over k. This algorithm takes time Θ(nmp) (in asymptotic notation). 
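The triple nested loop just described can be sketched in Python; this is a minimal illustration of the definition, and the function name is ours rather than from any library:

```python
def matmul(A, B):
    """Multiply an n x m matrix A by an m x p matrix B in Theta(n*m*p) time."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):          # rows of A
        for j in range(p):      # columns of B
            s = 0
            for k in range(m):  # inner dot-product loop
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C
```

Each entry C[i][j] is exactly the sum in the definition: the dot product of row i of A with column j of B.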
A common simplification for the purpose of algorithm analysis is to assume that the inputs are all square matrices of size n × n, in which case the running time is Θ(n3), i.e., cubic in the size of the dimension. === Cache behavior === The three loops in iterative matrix multiplication can be arbitrarily swapped with each other without an effect on correctness or asymptotic running time. However, the order can have a considerable impact on practical performance due to the memory access patterns and cache use of the algorithm; which order is best also depends on whether the matrices are stored in row-major order, column-major order, or a mix of both. In particular, in the idealized case of a fully associative cache consisting of M bytes and b bytes per cache line (i.e. ⁠M/b⁠ cache lines), the above algorithm is sub-optimal for A and B stored in row-major order. When n > ⁠M/b⁠, every iteration of the inner loop (a simultaneous sweep through a row of A and a column of B) incurs a cache miss when accessing an element of B. This means that the algorithm incurs Θ(n3) cache misses in the worst case. As of 2010, the speed of memories compared to that of processors is such that the cache misses, rather than the actual calculations, dominate the running time for sizable matrices. The optimal variant of the iterative algorithm for A and B in row-major layout is a tiled version, where the matrix is implicitly divided into square tiles of size √M by √M. In the idealized cache model, this algorithm incurs only Θ(⁠n3/b √M⁠) cache misses; the divisor b √M amounts to several orders of magnitude on modern machines, so that the actual calculations dominate the running time, rather than the cache misses. == Divide-and-conquer algorithm == An alternative to the iterative algorithm is the divide-and-conquer algorithm for matrix multiplication. 
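The tiled variant described above can be sketched as follows; the tile size T stands in for √M and would be tuned to the actual cache size in practice (the function name and loop structure are our illustration, not from any library):

```python
def matmul_tiled(A, B, T):
    """Blocked n x n multiply: sweep T x T tiles so that the working set
    (one tile each of A, B, and C) stays cache-resident."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for ii in range(0, n, T):
        for kk in range(0, n, T):
            for jj in range(0, n, T):
                # multiply-accumulate one pair of tiles into the C tile
                for i in range(ii, min(ii + T, n)):
                    for k in range(kk, min(kk + T, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + T, n)):
                            C[i][j] += a * B[k][j]
    return C
```

The result is identical to the naive loop for any tile size; only the memory access pattern changes.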
This relies on the block partitioning C = ( C 11 C 12 C 21 C 22 ) , A = ( A 11 A 12 A 21 A 22 ) , B = ( B 11 B 12 B 21 B 22 ) , {\displaystyle C={\begin{pmatrix}C_{11}&C_{12}\\C_{21}&C_{22}\\\end{pmatrix}},\,A={\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\\\end{pmatrix}},\,B={\begin{pmatrix}B_{11}&B_{12}\\B_{21}&B_{22}\\\end{pmatrix}},} which works for all square matrices whose dimensions are powers of two, i.e., the shapes are 2n × 2n for some n. The matrix product is now ( C 11 C 12 C 21 C 22 ) = ( A 11 A 12 A 21 A 22 ) ( B 11 B 12 B 21 B 22 ) = ( A 11 B 11 + A 12 B 21 A 11 B 12 + A 12 B 22 A 21 B 11 + A 22 B 21 A 21 B 12 + A 22 B 22 ) {\displaystyle {\begin{pmatrix}C_{11}&C_{12}\\C_{21}&C_{22}\\\end{pmatrix}}={\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\\\end{pmatrix}}{\begin{pmatrix}B_{11}&B_{12}\\B_{21}&B_{22}\\\end{pmatrix}}={\begin{pmatrix}A_{11}B_{11}+A_{12}B_{21}&A_{11}B_{12}+A_{12}B_{22}\\A_{21}B_{11}+A_{22}B_{21}&A_{21}B_{12}+A_{22}B_{22}\\\end{pmatrix}}} which consists of eight multiplications of pairs of submatrices, followed by an addition step. The divide-and-conquer algorithm computes the smaller multiplications recursively, using the scalar multiplication c11 = a11b11 as its base case. The complexity of this algorithm as a function of n is given by the recurrence T ( 1 ) = Θ ( 1 ) ; {\displaystyle T(1)=\Theta (1);} T ( n ) = 8 T ( n / 2 ) + Θ ( n 2 ) , {\displaystyle T(n)=8T(n/2)+\Theta (n^{2}),} accounting for the eight recursive calls on matrices of size n/2 and Θ(n2) to sum the four pairs of resulting matrices element-wise. Application of the master theorem for divide-and-conquer recurrences shows this recursion to have the solution Θ(n3), the same as the iterative algorithm. === Non-square matrices === A variant of this algorithm that works for matrices of arbitrary shapes and is faster in practice splits matrices in two instead of four submatrices, as follows. 
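The eight-way recursion above, for matrices whose dimension is a power of two, can be rendered directly in Python (a sketch; the helper names are ours):

```python
def dnc_matmul(A, B):
    """Divide-and-conquer product of two 2^k x 2^k matrices:
    eight recursive block products followed by four block additions."""
    n = len(A)
    if n == 1:                      # base case: scalar product c11 = a11*b11
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    def quad(M, r, c):              # extract an h x h block at (r, c)
        return [row[c:c + h] for row in M[r:r + h]]

    def add(X, Y):                  # elementwise block addition
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)

    C11 = add(dnc_matmul(A11, B11), dnc_matmul(A12, B21))
    C12 = add(dnc_matmul(A11, B12), dnc_matmul(A12, B22))
    C21 = add(dnc_matmul(A21, B11), dnc_matmul(A22, B21))
    C22 = add(dnc_matmul(A21, B12), dnc_matmul(A22, B22))

    # reassemble the four result blocks
    return [r1 + r2 for r1, r2 in zip(C11, C12)] + \
           [r1 + r2 for r1, r2 in zip(C21, C22)]
```

The eight recursive calls and the Θ(n2) block additions give exactly the recurrence T(n) = 8T(n/2) + Θ(n2) stated above.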
Splitting a matrix now means dividing it into two parts of equal size, or as close to equal sizes as possible in the case of odd dimensions. === Cache behavior === The cache miss rate of recursive matrix multiplication is the same as that of a tiled iterative version, but unlike that algorithm, the recursive algorithm is cache-oblivious: there is no tuning parameter required to get optimal cache performance, and it behaves well in a multiprogramming environment where cache sizes are effectively dynamic due to other processes taking up cache space. (The simple iterative algorithm is cache-oblivious as well, but much slower in practice if the matrix layout is not adapted to the algorithm.) The number of cache misses incurred by this algorithm, on a machine with M lines of ideal cache, each of size b bytes, is bounded by: 13  Θ ( m + n + p + m n + n p + m p b + m n p b M ) {\displaystyle \Theta \left(m+n+p+{\frac {mn+np+mp}{b}}+{\frac {mnp}{b{\sqrt {M}}}}\right)} == Sub-cubic algorithms == Algorithms exist that provide better running times than the straightforward ones. The first to be discovered was Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication". It is based on a way of multiplying two 2×2 matrices which requires only 7 multiplications (instead of the usual 8), at the expense of several additional addition and subtraction operations. Applying this recursively gives an algorithm with a multiplicative cost of O ( n log 2 ⁡ 7 ) ≈ O ( n 2.807 ) {\displaystyle O(n^{\log _{2}7})\approx O(n^{2.807})} . Strassen's algorithm is more complex, and the numerical stability is reduced compared to the naïve algorithm, but it is faster in cases where n > 100 or so and appears in several libraries, such as BLAS. It is very useful for large matrices over exact domains such as finite fields, where numerical stability is not an issue. 
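Strassen's seven products for a 2×2 multiplication can be written out explicitly. The sketch below works on scalar entries, but the same formulas applied to submatrix blocks (with +, −, * as block operations) give the recursive algorithm; it is an illustration, not a production implementation:

```python
def strassen_2x2(A, B):
    """Strassen's seven-multiplication scheme for a 2x2 product.
    Entries may be scalars or, with suitable operations, matrix blocks."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Note the trade-off visible in the code: 7 multiplications but 18 additions and subtractions, versus 8 and 4 for the naive scheme; the saving only pays off when the entries are large blocks multiplied recursively.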
Since Strassen's algorithm is actually used in practical numerical software and computer algebra systems, improving on the constants hidden in the big-O notation has its merits. A table that compares key aspects of the improved version based on recursive multiplication of 2×2-block matrices via 7 block matrix multiplications follows. As usual, n {\displaystyle n} gives the dimensions of the matrix and M {\displaystyle M} designates the memory size. It is known that a Strassen-like algorithm with a 2×2-block matrix step requires at least 7 block matrix multiplications. In 1976 Probert showed that such an algorithm requires at least 15 additions (including subtractions); however, a hidden assumption was that the blocks and the 2×2-block matrix are represented in the same basis. Karstadt and Schwartz computed in different bases and traded 3 additions for less expensive basis transformations. They also proved that one cannot go below 12 additions per step using different bases. In subsequent work Beniamini et al. applied this base-change trick to more general decompositions than 2×2-block matrices and improved the leading constant for their run times. It is an open question in theoretical computer science how well Strassen's algorithm can be improved in terms of asymptotic complexity. The matrix multiplication exponent, usually denoted ω {\displaystyle \omega } , is the smallest real number for which any two n × n {\displaystyle n\times n} matrices over a field can be multiplied together using n ω + o ( 1 ) {\displaystyle n^{\omega +o(1)}} field operations. The current best bound on ω {\displaystyle \omega } is ω < 2.371552 {\displaystyle \omega <2.371552} , by Williams, Xu, Xu, and Zhou. This algorithm, like all other recent algorithms in this line of research, is a generalization of the Coppersmith–Winograd algorithm, which was given by Don Coppersmith and Shmuel Winograd in 1990. 
The conceptual idea of these algorithms is similar to Strassen's algorithm: a way is devised for multiplying two k × k-matrices with fewer than k3 multiplications, and this technique is applied recursively. However, the constant coefficient hidden by the big-O notation is so large that these algorithms are only worthwhile for matrices that are too large to handle on present-day computers. Victor Pan proposed so-called feasible sub-cubic matrix multiplication algorithms with an exponent slightly above 2.77, but in return with a much smaller hidden constant coefficient. Freivalds' algorithm is a simple Monte Carlo algorithm that, given matrices A, B and C, verifies in Θ(n2) time if AB = C. === AlphaTensor === In 2022, DeepMind introduced AlphaTensor, a neural network that used a single-player game analogy to invent thousands of matrix multiplication algorithms, including some previously discovered by humans and some that were not. Operations were restricted to the non-commutative ground field (normal arithmetic) and the finite field Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } (mod 2 arithmetic). The best "practical" (explicit low-rank decomposition of a matrix multiplication tensor) algorithm found ran in O(n2.778). Finding low-rank decompositions of such tensors (and beyond) is NP-hard; optimal multiplication even for 3×3 matrices remains unknown, even over a commutative field. On 4×4 matrices, AlphaTensor unexpectedly discovered a solution with 47 multiplication steps, an improvement over the 49 required with Strassen’s algorithm of 1969, albeit restricted to mod 2 arithmetic. Similarly, AlphaTensor solved 5×5 matrices with 96 rather than Strassen's 98 steps. Based on the surprising discovery that such improvements exist, other researchers were quickly able to find a similar independent 4×4 algorithm, and separately tweaked DeepMind's 96-step 5×5 algorithm down to 95 steps in mod 2 arithmetic and to 97 in normal arithmetic. 
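The Freivalds verification mentioned above can be sketched as follows: instead of recomputing AB, compare A(Br) with Cr for random 0/1 vectors r, at a cost of three Θ(n2) matrix-vector products per round (the function names are ours):

```python
import random

def freivalds_check(A, B, C, rounds=40):
    """Probabilistically test whether A*B == C.
    If C != A*B, a single round fails to detect it with probability
    at most 1/2, so 'rounds' trials miss with probability <= 2**-rounds."""
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][k] * v[k] for k in range(n)) for i in range(n)]

    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False      # found a certificate that AB != C
    return True               # AB == C with high probability
```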
Some algorithms were completely new: for example, (4, 5, 5) was improved to 76 steps from a baseline of 80 in both normal and mod 2 arithmetic. == Parallel and distributed algorithms == === Shared-memory parallelism === The divide-and-conquer algorithm sketched earlier can be parallelized in two ways for shared-memory multiprocessors. These are based on the fact that the eight recursive matrix multiplications in ( A 11 B 11 + A 12 B 21 A 11 B 12 + A 12 B 22 A 21 B 11 + A 22 B 21 A 21 B 12 + A 22 B 22 ) {\displaystyle {\begin{pmatrix}A_{11}B_{11}+A_{12}B_{21}&A_{11}B_{12}+A_{12}B_{22}\\A_{21}B_{11}+A_{22}B_{21}&A_{21}B_{12}+A_{22}B_{22}\\\end{pmatrix}}} can be performed independently of each other, as can the four summations (although the algorithm needs to "join" the multiplications before doing the summations). Exploiting the full parallelism of the problem, one obtains an algorithm that can be expressed in fork–join style pseudocode: Here, fork is a keyword that signals a computation may be run in parallel with the rest of the function call, while join waits for all previously "forked" computations to complete. partition achieves its goal by pointer manipulation only. This algorithm has a critical path length of Θ(log2 n) steps, meaning it takes that much time on an ideal machine with an infinite number of processors; therefore, it has a maximum possible speedup of Θ(n3/log2 n) on any real computer. The algorithm is not practical due to the communication cost inherent in moving data to and from the temporary matrix T, but a more practical variant achieves Θ(n2) speedup, without using a temporary matrix. === Communication-avoiding and distributed algorithms === On modern architectures with hierarchical memory, the cost of loading and storing input matrix elements tends to dominate the cost of arithmetic. 
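Returning to the fork–join formulation above: the control structure can be sketched in Python with a thread pool standing in for fork/join. This is an illustrative sketch only (the function names are my own, and CPython's global interpreter lock limits real speedup for pure-Python arithmetic); the eight block products are "forked" as independent tasks and "joined" before the four block summations.

```python
# Fork-join block matrix multiplication sketch. The eight quadrant
# products run as independent tasks ("fork"); waiting on their results
# is the "join" before the four block summations.
from concurrent.futures import ThreadPoolExecutor

def mat_mul(A, B):
    """Plain sequential matrix product (base case for the forked tasks)."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def quadrants(M):
    """Split an even-dimensioned square matrix into four blocks."""
    h = len(M) // 2
    return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
            [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

def mat_add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def fork_join_mul(A, B):
    A11, A12, A21, A22 = quadrants(A)
    B11, B12, B21, B22 = quadrants(B)
    pairs = [(A11, B11), (A12, B21), (A11, B12), (A12, B22),
             (A21, B11), (A22, B21), (A21, B12), (A22, B22)]
    with ThreadPoolExecutor(max_workers=8) as pool:
        # "fork": submit the eight block products as independent tasks
        futures = [pool.submit(mat_mul, X, Y) for X, Y in pairs]
        # "join": wait for all eight before doing the summations
        P = [f.result() for f in futures]
    C11, C12 = mat_add(P[0], P[1]), mat_add(P[2], P[3])
    C21, C22 = mat_add(P[4], P[5]), mat_add(P[6], P[7])
    return ([r1 + r2 for r1, r2 in zip(C11, C12)] +
            [r1 + r2 for r1, r2 in zip(C21, C22)])
```

In a full fork–join version the recursion itself would fork at every level; here only the top level forks, which already exhibits the join-before-summation constraint the text describes.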
On a single machine this is the amount of data transferred between RAM and cache, while on a distributed memory multi-node machine it is the amount transferred between nodes; in either case it is called the communication bandwidth. The naïve algorithm using three nested loops uses Ω(n3) communication bandwidth. Cannon's algorithm, also known as the 2D algorithm, is a communication-avoiding algorithm that partitions each input matrix into a block matrix whose elements are submatrices of size √(M/3) by √(M/3), where M is the size of fast memory. The naïve algorithm is then used over the block matrices, computing products of submatrices entirely in fast memory. This reduces communication bandwidth to O(n3/√M), which is asymptotically optimal (for algorithms performing Ω(n3) computation). In a distributed setting with p processors arranged in a √p by √p 2D mesh, one submatrix of the result can be assigned to each processor, and the product can be computed with each processor transmitting O(n2/√p) words, which is asymptotically optimal assuming that each node stores the minimum O(n2/p) elements. This can be improved by the 3D algorithm, which arranges the processors in a 3D cube mesh, assigning every product of two input submatrices to a single processor. The result submatrices are then generated by performing a reduction over each row. This algorithm transmits O(n2/p2/3) words per processor, which is asymptotically optimal. However, this requires replicating each input matrix element p1/3 times, and so requires a factor of p1/3 more memory than is needed to store the inputs. This algorithm can be combined with Strassen to further reduce runtime. "2.5D" algorithms provide a continuous tradeoff between memory usage and communication bandwidth. On modern distributed computing environments such as MapReduce, specialized multiplication algorithms have been developed. === Algorithms for meshes === There are a variety of algorithms for multiplication on meshes. 
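The blocking idea behind Cannon's algorithm, described above, can be illustrated sequentially: the naïve algorithm is run over b×b submatrices, so that each block product touches only data that fits in fast memory. The function name and the explicit block-size parameter are my own; a real implementation would pick b ≈ √(M/3) for fast-memory size M.

```python
# Blocked ("tiled") matrix multiplication: the loops over i0, j0, k0
# walk over b-by-b blocks of C, A and B, and each inner block product
# works on O(b^2) data, which is the communication-avoiding idea.
def blocked_mul(A, B, b):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i0 in range(0, n, b):          # block row of C
        for j0 in range(0, n, b):      # block column of C
            for k0 in range(0, n, b):  # accumulate one block product
                for i in range(i0, min(i0 + b, n)):
                    for j in range(j0, min(j0 + b, n)):
                        C[i][j] += sum(A[i][k] * B[k][j]
                                       for k in range(k0, min(k0 + b, n)))
    return C
```

The result is identical to the naïve triple loop for any block size; only the order in which memory is touched changes, which is what reduces the communication bandwidth on a machine with a cache.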
For multiplication of two n×n matrices on a standard two-dimensional mesh using the 2D Cannon's algorithm, one can complete the multiplication in 3n-2 steps, although this is reduced to half this number for repeated computations. The standard array is inefficient because the data from the two matrices does not arrive simultaneously and it must be padded with zeroes. The result is even faster on a two-layered cross-wired mesh, where only 2n-1 steps are needed. The performance improves further for repeated computations leading to 100% efficiency. The cross-wired mesh array may be seen as a special case of a non-planar (i.e. multilayered) processing structure. In a 3D mesh with n3 processing elements, two matrices can be multiplied in O ( log ⁡ n ) {\displaystyle {\mathcal {O}}(\log n)} time using the DNS algorithm. == See also == Computational complexity of mathematical operations Computational complexity of matrix multiplication CYK algorithm § Valiant's algorithm Matrix chain multiplication Method of Four Russians Multiplication algorithm Sparse matrix–vector multiplication == References == == Further reading ==
Wikipedia/Parallel_algorithms_for_matrix_multiplication
In computer science, a sequential algorithm or serial algorithm is an algorithm that is executed sequentially – once through, from start to finish, without other processing executing – as opposed to concurrently or in parallel. The term is primarily used to contrast with concurrent algorithm or parallel algorithm; most standard computer algorithms are sequential algorithms, and not specifically identified as such, as sequentialness is a background assumption. Concurrency and parallelism are in general distinct concepts, but they often overlap – many distributed algorithms are both concurrent and parallel – and thus "sequential" is used to contrast with both, without distinguishing which one. If these need to be distinguished, the opposing pairs sequential/concurrent and serial/parallel may be used. "Sequential algorithm" may also refer specifically to an algorithm for decoding a convolutional code. == See also == Online algorithm Streaming algorithm == References ==
Wikipedia/Serial_algorithm
Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with one completing before the next starts. This is a property of a system—whether a program, computer, or a network—where there is a separate execution point or "thread of control" for each process. A concurrent system is one where a computation can advance without waiting for all other computations to complete. Concurrent computing is a form of modular programming. In its paradigm an overall computation is factored into subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C.A.R. Hoare. == Introduction == The concept of concurrent computing is frequently confused with the related but distinct concept of parallel computing, although both can be described as "multiple processes executing during the same period of time". In parallel computing, execution occurs at the same physical instant: for example, on separate processors of a multi-processor machine, with the goal of speeding up computations—parallel computing is impossible on a (one-core) single processor, as only one computation can occur at any instant (during any single clock cycle). By contrast, concurrent computing consists of process lifetimes overlapping, but execution does not happen at the same instant. The goal here is to model processes that happen concurrently, like multiple clients accessing a server at the same time. 
Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel. For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via time-sharing slices: only one process runs at a time, and if it does not complete during its time slice, it is paused, another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant. Concurrent computations may be executed in parallel, for example, by assigning each process to a separate processor or processor core, or distributing a computation across a network. The exact timing of when tasks in a concurrent system are executed depends on the scheduling, and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2: T1 may be executed and finished before T2 or vice versa (serial and sequential); T1 and T2 may be executed alternately (serial and concurrent); T1 and T2 may be executed simultaneously at the same instant of time (parallel and concurrent). The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished, concurrent/sequential and parallel/serial are used as opposing pairs. A schedule in which tasks execute one at a time (serially, no parallelism), without interleaving (sequentially, no concurrency: no task begins until the prior task ends) is called a serial schedule. A set of tasks that can be scheduled serially is serializable, which simplifies concurrency control. 
=== Coordinating access to shared resources === The main challenge in designing concurrent programs is concurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions. Potential problems include race conditions, deadlocks, and resource starvation. For example, consider the following algorithm to make withdrawals from a checking account represented by the shared resource balance: Suppose balance = 500, and two concurrent threads make the calls withdraw(300) and withdraw(350). If line 3 in both operations executes before line 5, both operations will find that balance >= withdrawal evaluates to true, and execution will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources benefit from the use of concurrency control, or non-blocking algorithms. === Advantages === There are advantages of concurrent computing: Increased program throughput—parallel execution of a concurrent algorithm allows the number of tasks completed in a given time to increase proportionally to the number of processors according to Gustafson's law. High responsiveness for input/output—input/output-intensive programs mostly wait for input or output operations to complete. Concurrent programming allows the time that would be spent waiting to be used for another task. More appropriate program structure—some problems and problem domains are well-suited to representation as concurrent tasks or processes. For example, MVCC. == Models == Introduced in 1962, Petri nets were an early attempt to codify the rules of concurrent execution. Dataflow theory later built upon these, and Dataflow architectures were created to physically implement the ideas of dataflow theory. 
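The withdraw listing referenced above (its "line 3" check and "line 5" update) does not survive in this text; the following is a plausible reconstruction in Python, together with the lock-based fix. A deliberate sleep between the check and the subtraction widens the race window so the lost-update effect is observable.

```python
# Race-condition demonstration on a shared balance, plus a lock-based fix.
import threading
import time

balance = 500
lock = threading.Lock()

def withdraw_unsafe(amount):
    global balance
    if balance >= amount:          # check (the "line 3" of the text)
        time.sleep(0.2)            # widen the race window for the demo
        balance -= amount          # update (the "line 5" of the text)

def withdraw_safe(amount):
    global balance
    with lock:                     # check and update are now atomic
        if balance >= amount:
            balance -= amount

def run(worker, amounts):
    """Reset balance to 500, run one thread per withdrawal, return balance."""
    global balance
    balance = 500
    threads = [threading.Thread(target=worker, args=(a,)) for a in amounts]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return balance
```

With the unsafe version, both threads typically pass the check while balance is still 500, so the final balance is 500 - 300 - 350 = -150; with the lock, exactly one withdrawal succeeds and the balance never goes negative.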
Beginning in the late 1970s, process calculi such as Calculus of Communicating Systems (CCS) and Communicating Sequential Processes (CSP) were developed to permit algebraic reasoning about systems composed of interacting components. The π-calculus added the capability for reasoning about dynamic topologies. Input/output automata were introduced in 1987. Logics such as Lamport's TLA+, and mathematical models such as traces and Actor event diagrams, have also been developed to describe the behavior of concurrent systems. Software transactional memory borrows from database theory the concept of atomic transactions and applies them to memory accesses. === Consistency models === Concurrent programming languages and multiprocessor programs must have a consistency model (also known as a memory model). The consistency model defines rules for how operations on computer memory occur and how results are produced. One of the first consistency models was Leslie Lamport's sequential consistency model. Sequential consistency is the property of a program that its execution produces the same results as a sequential program. Specifically, a program is sequentially consistent if "the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program". == Implementation == A number of different methods can be used to implement concurrent programs, such as implementing each computational execution as an operating system process, or implementing the computational processes as a set of threads within a single operating system process. === Interaction and communication === In some concurrent computing systems, communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. 
Explicit communication can be divided into two classes: Shared memory communication Concurrent components communicate by altering the contents of shared memory locations (exemplified by Java and C#). This style of concurrent programming usually needs the use of some form of locking (e.g., mutexes, semaphores, or monitors) to coordinate between threads. A program that properly implements any of these is said to be thread-safe. Message passing communication Concurrent components communicate by exchanging messages (exemplified by MPI, Go, Scala, Erlang and occam). The exchange of messages may be carried out asynchronously, or may use a synchronous "rendezvous" style in which the sender blocks until the message is received. Asynchronous message passing may be reliable or unreliable (sometimes referred to as "send and pray"). Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust form of concurrent programming. A wide variety of mathematical theories to understand and analyze message-passing systems are available, including the actor model, and various process calculi. Message passing can be efficiently implemented via symmetric multiprocessing, with or without shared memory cache coherence. Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead is lower in a message passing system, but the overhead of message passing is greater than for a procedure call. These differences are often overwhelmed by other performance factors. == History == Concurrent computing developed out of earlier work on railroads and telegraphy, from the 19th and early 20th century, and some terms date to this period, such as semaphores. 
These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as via time-division multiplexing (1870s). The academic study of concurrent algorithms started in the 1960s, with Dijkstra (1965) credited with being the first paper in this field, identifying and solving mutual exclusion. == Prevalence == Concurrency is pervasive in computing, occurring from low-level hardware on a single chip to worldwide networks. Examples follow. At the programming language level: Channel Coroutine Futures and promises At the operating system level: Computer multitasking, including both cooperative multitasking and preemptive multitasking Time-sharing, which replaced sequential batch processing of jobs with concurrent use of a system Process Thread At the network level, networked systems are generally concurrent by their nature, as they consist of separate devices. == Languages supporting concurrent programming == Concurrent programming languages are programming languages that use language constructs for concurrency. These constructs may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory) or futures and promises. Such languages are sometimes described as concurrency-oriented languages or concurrency-oriented programming languages (COPL). Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang is probably the most widely used in industry at present. 
Many concurrent programming languages have been developed more as research languages (e.g. Pict) rather than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times in the last 20 years. A non-exhaustive list of languages which use or provide concurrent programming facilities: Ada—general purpose, with native support for message passing and monitor based concurrency Alef—concurrent, with threads and message passing, for system programming in early versions of Plan 9 from Bell Labs Alice—extension to Standard ML, adds support for concurrency via futures Ateji PX—extension to Java with parallel primitives inspired from π-calculus Axum—domain specific, concurrent, based on actor model and .NET Common Language Runtime using a C-like syntax BMDFM—Binary Modular DataFlow Machine C++—thread and coroutine support libraries Cω (C omega)—for research, extends C#, uses asynchronous communication C#—supports concurrent computing using lock, yield, also since version 5.0 async and await keywords introduced Clojure—modern, functional dialect of Lisp on the Java platform Concurrent Clean—functional programming, similar to Haskell Concurrent Collections (CnC)—Achieves implicit parallelism independent of memory model by explicitly defining flow of data and control Concurrent Haskell—lazy, pure functional language operating concurrent processes on shared memory Concurrent ML—concurrent extension of Standard ML Concurrent Pascal—by Per Brinch Hansen Curry D—multi-paradigm system programming language with explicit support for concurrent programming (actor model) E—uses promises to preclude deadlocks ECMAScript—uses promises for asynchronous operations Eiffel—through its SCOOP mechanism based on the concepts of Design by Contract Elixir—dynamic and functional meta-programming aware language running on the Erlang VM. 
Erlang—uses synchronous or asynchronous message passing with no shared memory FAUST—real-time functional, for signal processing, compiler provides automatic parallelization via OpenMP or a specific work-stealing scheduler Fortran—coarrays and do concurrent are part of Fortran 2008 standard Go—for system programming, with a concurrent programming model based on CSP Haskell—concurrent, and parallel functional programming language Hume—functional, concurrent, for bounded space and time environments where automata processes are described by synchronous channels patterns and message passing Io—actor-based concurrency Janus—features distinct askers and tellers to logical variables, bag channels; is purely declarative Java—thread class or Runnable interface Julia—"concurrent programming primitives: Tasks, async-wait, Channels." JavaScript—via web workers, in a browser environment, promises, and callbacks. JoCaml—concurrent and distributed channel based, extension of OCaml, implements the join-calculus of processes Join Java—concurrent, based on Java language Joule—dataflow-based, communicates by message passing Joyce—concurrent, teaching, built on Concurrent Pascal with features from CSP by Per Brinch Hansen LabVIEW—graphical, dataflow, functions are nodes in a graph, data is wires between the nodes; includes object-oriented language Limbo—relative of Alef, for system programming in Inferno (operating system) Locomotive BASIC—Amstrad variant of BASIC contains EVERY and AFTER commands for concurrent subroutines MultiLisp—Scheme variant extended to support parallelism Modula-2—for system programming, by N. 
Wirth as a successor to Pascal with native support for coroutines Modula-3—modern member of Algol family with extensive support for threads, mutexes, condition variables Newsqueak—for research, with channels as first-class values; predecessor of Alef occam—influenced heavily by communicating sequential processes (CSP) occam-π—a modern variant of occam, which incorporates ideas from Milner's π-calculus ooRexx—object-based, message exchange for communication and synchronization Orc—heavily concurrent, nondeterministic, based on Kleene algebra Oz-Mozart—multiparadigm, supports shared-state and message-passing concurrency, and futures ParaSail—object-oriented, parallel, free of pointers, race conditions PHP—multithreading support with parallel extension implementing message passing inspired from Go Pict—essentially an executable implementation of Milner's π-calculus Python — uses thread-based parallelism and process-based parallelism Raku includes classes for threads, promises and channels by default Reia—uses asynchronous message passing between shared-nothing objects Red/System—for system programming, based on Rebol Rust—for system programming, using message-passing with move semantics, shared immutable memory, and shared mutable memory. 
Scala—general purpose, designed to express common programming patterns in a concise, elegant, and type-safe way SequenceL—general purpose functional, main design objectives are ease of programming, code clarity-readability, and automatic parallelization for performance on multicore hardware, and provably free of race conditions SR—for research SuperPascal—concurrent, for teaching, built on Concurrent Pascal and Joyce by Per Brinch Hansen Swift—built-in support for writing asynchronous and parallel code in a structured way Unicon—for research TNSDL—for developing telecommunication exchanges, uses asynchronous message passing VHSIC Hardware Description Language (VHDL)—IEEE STD-1076 XC—concurrency-extended subset of C language developed by XMOS, based on communicating sequential processes, built-in constructs for programmable I/O Many other languages provide support for concurrency in the form of libraries, at levels roughly comparable with the above list. == See also == Asynchronous I/O Chu space Flow-based programming Java ConcurrentMap Ptolemy Project Race condition § Computing Structured concurrency Transaction processing == Notes == == References == == Sources == == Further reading == Dijkstra, E. W. (1965). "Solution of a problem in concurrent programming control". Communications of the ACM. 8 (9): 569. doi:10.1145/365559.365617. S2CID 19357737. Herlihy, Maurice (2008) [2008]. The Art of Multiprocessor Programming. Morgan Kaufmann. ISBN 978-0123705914. Downey, Allen B. (2005) [2005]. The Little Book of Semaphores (PDF). Green Tea Press. ISBN 978-1-4414-1868-5. Archived from the original (PDF) on 2016-03-04. Retrieved 2009-11-21. Filman, Robert E.; Daniel P. Friedman (1984). Coordinated Computing: Tools and Techniques for Distributed Software. New York: McGraw-Hill. p. 370. ISBN 978-0-07-022439-1. Leppäjärvi, Jouni (2008). A pragmatic, historically oriented survey on the universality of synchronization primitives (PDF). University of Oulu. 
Archived from the original (PDF) on 2017-08-30. Retrieved 2012-09-13. Taubenfeld, Gadi (2006). Synchronization Algorithms and Concurrent Programming. Pearson / Prentice Hall. p. 433. ISBN 978-0-13-197259-9. == External links == Media related to Concurrent programming at Wikimedia Commons Concurrent Systems Virtual Library
Wikipedia/Concurrent_algorithm
In graph theory, a minimum spanning tree (MST) T {\displaystyle T} of a graph G = ( V , E ) {\displaystyle G=(V,E)} with | V | = n {\displaystyle |V|=n} and | E | = m {\displaystyle |E|=m} is a tree subgraph of G {\displaystyle G} that contains all of its vertices and is of minimum weight. MSTs are useful and versatile tools utilised in a wide variety of practical and theoretical fields. For example, a company looking to supply multiple stores with a certain product from a single warehouse might use an MST originating at the warehouse to calculate the shortest paths to each company store. In this case the stores and the warehouse are represented as vertices and the road connections between them as edges. Each edge is labelled with the length of the corresponding road connection. If G {\displaystyle G} is edge-unweighted every spanning tree possesses the same number of edges and thus the same weight. In the edge-weighted case, a spanning tree whose edge weights sum to the lowest value among all spanning trees of G {\displaystyle G} is called a minimum spanning tree (MST). It is not necessarily unique. More generally, graphs that are not necessarily connected have minimum spanning forests, which consist of a union of MSTs for each connected component. As finding MSTs is a widespread problem in graph theory, there exist many sequential algorithms for solving it. Among them are Prim's, Kruskal's and Borůvka's algorithms, each utilising different properties of MSTs. They all operate in a similar fashion - a subset of E {\displaystyle E} is iteratively grown until a valid MST has been discovered. However, as practical problems are often quite large (road networks sometimes have billions of edges), performance is a key factor. One option of improving it is by parallelising known MST algorithms. == Prim's algorithm == This algorithm utilises the cut-property of MSTs. 
A simple high-level pseudocode implementation is provided below: T ← ∅ {\displaystyle T\gets \emptyset } S ← { s } {\displaystyle S\gets \{s\}} where s {\displaystyle s} is a random vertex in V {\displaystyle V} repeat | V | − 1 {\displaystyle |V|-1} times find lightest edge ( u , v ) {\displaystyle (u,v)} s.t. u ∈ S {\displaystyle u\in S} but v ∈ ( V ∖ S ) {\displaystyle v\in (V\setminus S)} S ← S ∪ { v } {\displaystyle S\gets S\cup \{v\}} T ← T ∪ { ( u , v ) } {\displaystyle T\gets T\cup \{(u,v)\}} return T Each edge is observed exactly twice - namely when examining each of its endpoints. Each vertex is examined exactly once for a total of O ( n + m ) {\displaystyle O(n+m)} operations aside from the selection of the lightest edge at each loop iteration. This selection is often performed using a priority queue (PQ). For each edge at most one decreaseKey operation (amortised in O ( 1 ) {\displaystyle O(1)} ) is performed and each loop iteration performs one deleteMin operation ( O ( log ⁡ n ) {\displaystyle O(\log n)} ). Thus using Fibonacci heaps the total runtime of Prim's algorithm is asymptotically in O ( m + n log ⁡ n ) {\displaystyle O(m+n\log n)} . It is important to note that the loop is inherently sequential and cannot be properly parallelised. This is the case, since the lightest edge with one endpoint in S {\displaystyle S} and one in V ∖ S {\displaystyle V\setminus S} might change with the addition of edges to T {\displaystyle T} . Thus no two selections of a lightest edge can be performed at the same time. However, there do exist some attempts at parallelisation. One possible idea is to use O ( n ) {\displaystyle O(n)} processors to support PQ access in O ( 1 ) {\displaystyle O(1)} on an EREW-PRAM machine, thus lowering the total runtime to O ( n + m ) {\displaystyle O(n+m)} . == Kruskal's algorithm == Kruskal's MST algorithm utilises the cycle property of MSTs. A high-level pseudocode representation is provided below. 
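Prim's pseudocode above can be rendered in runnable form. This sketch uses Python's binary heap (heapq) with lazy deletion in place of the Fibonacci heap discussed in the text, so its bound is O(m log n) rather than O(m + n log n); the graph format and names are my own (an adjacency dict mapping each vertex to a list of (weight, neighbour) pairs).

```python
# Sequential Prim's algorithm with a binary heap and lazy deletion.
import heapq

def prim_mst(graph, start):
    """Return the total weight of an MST of the connected graph."""
    in_tree = {start}
    total = 0
    # priority queue of candidate edges (weight, tree_endpoint, outside_endpoint)
    pq = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(pq)
    while pq:
        w, u, v = heapq.heappop(pq)    # lightest candidate edge ...
        if v in in_tree:               # ... skipped if both ends are in S
            continue
        in_tree.add(v)                 # S <- S ∪ {v}
        total += w                     # T <- T ∪ {(u, v)}
        for w2, x in graph[v]:
            if x not in in_tree:
                heapq.heappush(pq, (w2, v, x))
    return total
```

The "lazy deletion" check replaces decreaseKey: stale heap entries whose outside endpoint has meanwhile joined the tree are simply discarded when popped.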
T ← {\displaystyle T\gets } forest with every vertex in its own subtree foreach ( u , v ) ∈ E {\displaystyle (u,v)\in E} in ascending order of weight if u {\displaystyle u} and v {\displaystyle v} in different subtrees of T {\displaystyle T} T ← T ∪ { ( u , v ) } {\displaystyle T\gets T\cup \{(u,v)\}} return T The subtrees of T {\displaystyle T} are stored in union-find data structures, which is why checking whether or not two vertices are in the same subtree is possible in amortised O ( α ( m , n ) ) {\displaystyle O(\alpha (m,n))} where α ( m , n ) {\displaystyle \alpha (m,n)} is the inverse Ackermann function. Thus the total runtime of the algorithm is in O ( s o r t ( n ) + α ( n ) ) {\displaystyle O(sort(n)+\alpha (n))} . Here α ( n ) {\displaystyle \alpha (n)} denotes the single-valued inverse Ackermann function, for which any realistic input yields an integer less than five. === Approach 1: Parallelising the sorting step === Similarly to Prim's algorithm there are components in Kruskal's approach that can not be parallelised in its classical variant. For example, determining whether or not two vertices are in the same subtree is difficult to parallelise, as two union operations might attempt to join the same subtrees at the same time. In fact, the only opportunity for parallelisation lies in the sorting step. As sorting is linear in the optimal case on O ( log ⁡ n ) {\displaystyle O(\log n)} processors, the total runtime can be reduced to O ( m α ( n ) ) {\displaystyle O(m\alpha (n))} . === Approach 2: Filter-Kruskal === Another approach would be to modify the original algorithm by growing T {\displaystyle T} more aggressively. This idea was presented by Osipov et al. The basic idea behind Filter-Kruskal is to partition the edges in a similar way to quicksort and filter out edges that connect vertices that belong to the same tree in order to reduce the cost of sorting. A high-level pseudocode representation is provided below. 
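Kruskal's pseudocode above can likewise be made concrete. This sketch (edge-list format and names are my own) stores the subtrees in a union-find structure with path compression and union by size, so the same-subtree test runs in amortised near-constant time as the text describes.

```python
# Sequential Kruskal's algorithm with a union-find data structure.
def kruskal_mst(n, edges):
    """edges: list of (weight, u, v) with vertices 0..n-1.
    Returns (total MST weight, list of chosen (u, v) edges)."""
    parent = list(range(n))
    size = [1] * n

    def find(x):                       # find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # ascending order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # endpoints in different subtrees
            if size[ru] < size[rv]:    # union by size
                ru, rv = rv, ru
            parent[rv] = ru
            size[ru] += size[rv]
            mst.append((u, v))
            total += w
    return total, mst
```

As the text notes, the sorted() call is the only step with an easy parallelisation; the union operations must still be applied in weight order.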
filterKruskal( G {\displaystyle G} ): if m < {\displaystyle m<} KruskalThreshold: return kruskal( G {\displaystyle G} ) pivot = chooseRandom( E {\displaystyle E} ) ( E ≤ {\displaystyle (E_{\leq }} , E > ) ← {\displaystyle E_{>})\gets } partition( E {\displaystyle E} , pivot) A ← {\displaystyle A\gets } filterKruskal( E ≤ {\displaystyle E_{\leq }} ) E > ← {\displaystyle E_{>}\gets } filter( E > {\displaystyle E_{>}} ) A ← A {\displaystyle A\gets A} ∪ {\displaystyle \cup } filterKruskal( E > {\displaystyle E_{>}} ) return A {\displaystyle A} partition( E {\displaystyle E} , pivot): E ≤ ← ∅ {\displaystyle E_{\leq }\gets \emptyset } E > ← ∅ {\displaystyle E_{>}\gets \emptyset } foreach ( u , v ) ∈ E {\displaystyle (u,v)\in E} : if weight( u , v {\displaystyle u,v} ) ≤ {\displaystyle \leq } pivot: E ≤ ← E ≤ ∪ ( u , v ) {\displaystyle E_{\leq }\gets E_{\leq }\cup {(u,v)}} else E > ← E > ∪ ( u , v ) {\displaystyle E_{>}\gets E_{>}\cup {(u,v)}} return ( E ≤ {\displaystyle E_{\leq }} , E > {\displaystyle E_{>}} ) filter( E {\displaystyle E} ): E f i l t e r e d ← ∅ {\displaystyle E_{filtered}\gets \emptyset } foreach ( u , v ) ∈ E {\displaystyle (u,v)\in E} : if find-set(u) ≠ {\displaystyle \neq } find-set(v): E f i l t e r e d ← E f i l t e r e d ∪ ( u , v ) {\displaystyle E_{filtered}\gets E_{filtered}\cup {(u,v)}} return E f i l t e r e d {\displaystyle E_{filtered}} Filter-Kruskal is better suited for parallelisation, since sorting, partitioning and filtering have intuitively easy parallelisations where the edges are simply divided between the cores. == Borůvka's algorithm == The main idea behind Borůvka's algorithm is edge contraction. An edge { u , v } {\displaystyle \{u,v\}} is contracted by first removing v {\displaystyle v} from the graph and then redirecting every edge { w , v } ∈ E {\displaystyle \{w,v\}\in E} to { w , u } {\displaystyle \{w,u\}} . These new edges retain their old edge weights. 
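The Filter-Kruskal pseudocode above can be rendered sequentially as follows (threshold value, edge format and names are my own; the steps that the text identifies as parallelisable, namely partitioning and filtering, are written here as plain list comprehensions). A guard is added for the degenerate pivot case, which the high-level pseudocode leaves implicit.

```python
# Sequential rendering of Filter-Kruskal: quicksort-style partitioning by a
# random pivot weight, recursing on the light half first, then filtering
# out heavy edges whose endpoints are already in the same component.
import random

def filter_kruskal_mst(n, edges, threshold=4):
    """edges: list of (weight, u, v) with vertices 0..n-1; returns MST weight."""
    parent = list(range(n))

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = [0]

    def kruskal(es):                   # base case: plain Kruskal
        for w, u, v in sorted(es):
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[rv] = ru
                total[0] += w

    def fk(es):
        if len(es) <= threshold:
            kruskal(es)
            return
        pivot = random.choice(es)[0]
        lo = [e for e in es if e[0] <= pivot]       # partition
        hi = [e for e in es if e[0] > pivot]
        if len(lo) == len(es):         # degenerate pivot: fall back
            kruskal(es)
            return
        fk(lo)                         # light edges first
        # filter: drop heavy edges already inside one component
        hi = [e for e in hi if find(e[1]) != find(e[2])]
        fk(hi)

    fk(list(edges))
    return total[0]
```

Because every edge in the light partition weighs no more than every edge in the heavy one, the edges are still unioned in globally nondecreasing weight order, so the result matches plain Kruskal.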
If the goal is not just to determine the weight of an MST but also which edges it comprises, it must be noted between which pairs of vertices an edge was contracted. A high-level pseudocode representation is presented below. T ← ∅ {\displaystyle T\gets \emptyset } while | V | > 1 {\displaystyle |V|>1} S ← ∅ {\displaystyle S\gets \emptyset } for v ∈ V {\displaystyle v\in V} S ← S {\displaystyle S\gets S} ∪ {\displaystyle \cup } lightest { u , v } ∈ E {\displaystyle \{u,v\}\in E} for { u , v } ∈ S {\displaystyle \{u,v\}\in S} contract { u , v } {\displaystyle \{u,v\}} T ← T ∪ S {\displaystyle T\gets T\cup S} return T It is possible that contractions lead to multiple edges between a pair of vertices. The intuitive way of choosing the lightest of them is not possible in O ( m ) {\displaystyle O(m)} . However, if all contractions that share a vertex are performed in parallel this is doable. The recursion stops when there is only a single vertex remaining, which means the algorithm needs at most log ⁡ n {\displaystyle \log n} iterations, leading to a total runtime in O ( m log ⁡ n ) {\displaystyle O(m\log n)} . === Parallelisation === One possible parallelisation of this algorithm yields a polylogarithmic time complexity, i.e. T ( m , n , p ) ⋅ p ∈ O ( m log ⁡ n ) {\displaystyle T(m,n,p)\cdot p\in O(m\log n)} and there exists a constant c {\displaystyle c} so that T ( m , n , p ) ∈ O ( log c ⁡ m ) {\displaystyle T(m,n,p)\in O(\log ^{c}m)} . Here T ( m , n , p ) {\displaystyle T(m,n,p)} denotes the runtime for a graph with m {\displaystyle m} edges, n {\displaystyle n} vertices on a machine with p {\displaystyle p} processors. 
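Before the parallel version, the sequential Borůvka loop above can be sketched as follows. Instead of physically contracting edges, this sketch (names are my own; distinct edge weights are assumed, which is the standard way to avoid cycles among equal-weight lightest edges) merges union-find components, which is equivalent: contracting {u, v} is merging the components of u and v.

```python
# Sequential Borůvka's algorithm using union-find component labels in
# place of explicit edge contraction.
def boruvka_mst(n, edges):
    """edges: list of (weight, u, v), weights assumed distinct.
    Returns the total weight of a minimum spanning forest."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, components = 0, n
    while components > 1:
        # S: lightest edge leaving each component
        best = {}
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:               # internal edge: would be removed
                continue               # by contraction anyway
            for r in (ru, rv):
                if r not in best or w < best[r][0]:
                    best[r] = (w, ru, rv)
        if not best:                   # graph is disconnected: forest done
            break
        for w, ru, rv in best.values():
            if find(ru) != find(rv):   # may already be merged this round
                parent[find(ru)] = find(rv)   # "contract" the edge
                total += w
                components -= 1
    return total
```

Each round at least halves the number of components, giving the at most log n iterations stated in the text.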
The basic idea is the following:

while |V| > 1:
    find lightest incident edges                      // O(m/p + log n + log p)
    assign the corresponding subgraph to each vertex  // O(n/p + log n)
    contract each subgraph                            // O(m/p + log n)

The MST then consists of all the found lightest edges. This parallelisation utilises the adjacency array graph representation for G = (V, E). This consists of three arrays: Γ of length n + 1 for the vertices, γ of length m for the endpoints of each of the m edges, and c of length m for the edges' weights. Now for vertex i the other end of each edge incident to i can be found in the entries between γ[Γ[i − 1]] and γ[Γ[i]]. The weight of the i-th edge in γ can be found in c[i]. Then the i-th edge in γ is between vertices u and v if and only if Γ[u] ≤ i < Γ[u + 1] and γ[i] = v.

==== Finding the lightest incident edge ====

First the edges are distributed between each of the p processors. The i-th processor receives the edges stored between γ[im/p] and γ[(i+1)m/p − 1].
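The adjacency array representation described above can be illustrated with a small sketch (0-indexed here for simplicity, so vertex i's edges sit between Γ[i] and Γ[i+1] − 1; the array names follow the text):

```python
# Sketch of the adjacency array representation (0-indexed variant of the
# convention in the text): Gamma has n+1 entries, gamma and c have m entries.
# Graph: 0-1 (weight 4), 0-2 (weight 1), 1-2 (weight 3); each undirected edge
# is stored once per endpoint.

Gamma = [0, 2, 4, 6]        # Gamma[i]..Gamma[i+1]-1 index the edges of vertex i
gamma = [1, 2, 0, 2, 0, 1]  # other endpoint of each stored edge
c     = [4, 1, 4, 3, 1, 3]  # weight of each stored edge

def incident_edges(i):
    """Yield (neighbour, weight) pairs for vertex i."""
    for e in range(Gamma[i], Gamma[i + 1]):
        yield gamma[e], c[e]

print(list(incident_edges(0)))  # [(1, 4), (2, 1)]
```

Splitting the index range of gamma into p equal chunks is exactly the edge distribution used in the parallel step.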
Furthermore, each processor needs to know to which vertex these edges belong (since γ only stores one of the edge's endpoints) and stores this in the array pred. Obtaining this information is possible in O(log n) using p binary searches or in O(n/p + p) using a linear search. In practice the latter approach is sometimes quicker, even though it is asymptotically worse.

Now each processor determines the lightest edge incident to each of its vertices.

v ← find(im/p, Γ)
for e ← im/p; e < (i+1)m/p − 1; e++:
    if Γ[v + 1] = e:
        v++
    if c[e] < c[pred[v]]:
        pred[v] ← e

Here the issue arises that some vertices are handled by more than one processor. A possible solution to this is that every processor has its own pred array, which is later combined with those of the others using a reduction. Each processor has at most two vertices that are also handled by other processors, and each reduction is in O(log p). Thus the total runtime of this step is in O(m/p + log n + log p).

==== Assigning subgraphs to vertices ====

Observe the graph that consists solely of edges collected in the previous step. These edges are directed away from the vertex to which they are the lightest incident edge. The resulting graph decomposes into multiple weakly connected components. The goal of this step is to assign to each vertex the component of which it is a part.
Note that every vertex has exactly one outgoing edge and therefore each component is a pseudotree: a tree with a single extra edge that runs in parallel to the lightest edge in the component but in the opposite direction. The following code mutates this extra edge into a loop:

parallel forAll v ∈ V:
    w ← pred[v]
    if pred[w] = v ∧ v < w:
        pred[v] ← v

Now every weakly connected component is a directed tree where the root has a loop. This root is chosen as the representative of each component. The following code uses doubling to assign each vertex its representative:

while ∃ v ∈ V : pred[v] ≠ pred[pred[v]]:
    forAll v ∈ V:
        pred[v] ← pred[pred[v]]

Now every subgraph is a star. With some advanced techniques this step needs O(n/p + log n) time.

==== Contracting the subgraphs ====

In this step each subgraph is contracted to a single vertex.

k ← number of subgraphs
V′ ← {0, …, k − 1}
find a bijective function f : star root → {0, …, k − 1}
E′ ← {(f(pred[v]), f(pred[w]), c, e_old) : (v, w) ∈ E ∧ pred[v] ≠ pred[w]}

Finding the bijective function is possible in O(n/p + log p) using a prefix sum. As we now have a new set of vertices and edges, the adjacency array must be rebuilt, which can be done using Integersort on E′ in O(m/p + log p) time.
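The loop-making and pointer-doubling steps of the subgraph-assignment phase can be sketched sequentially in Python (a sketch of the logic only; the real algorithm runs both loops in parallel over all vertices):

```python
# Sequential sketch of the representative-finding step. pred[v] is the target
# of v's selected lightest edge; each component is a pseudotree.
pred = [1, 0, 1, 2, 3]  # vertices 0 and 1 point at each other (the extra edge)

# Turn the extra edge (pred[w] = v and pred[v] = w) into a self-loop at the
# smaller endpoint:
for v in range(len(pred)):
    w = pred[v]
    if pred[w] == v and v < w:
        pred[v] = v

# Pointer doubling: repeatedly replace pred[v] by pred[pred[v]] until every
# vertex points directly at its component's root. Building a fresh list each
# round mimics the synchronous parallel update.
while any(pred[v] != pred[pred[v]] for v in range(len(pred))):
    pred = [pred[pred[v]] for v in range(len(pred))]

print(pred)  # every vertex now points at the root of its star
```

After the loop terminates each component is a star, and pred can feed directly into the prefix-sum relabelling of the contraction step.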
=== Complexity ===

Each iteration now needs O(m/p + log n) time and, just like in the sequential case, there are log n iterations, resulting in a total runtime of O(log n (m/p + log n)). If m ∈ Ω(p log² p), the efficiency of the algorithm is in Θ(1) and it is relatively efficient. If m ∈ O(n) then it is absolutely efficient.

== Further algorithms ==

There are multiple other parallel algorithms that deal with the issue of finding an MST. With a linear number of processors it is possible to achieve this in O(log n). Bader and Cong presented an MST-algorithm that was five times quicker on eight cores than an optimal sequential algorithm. Another challenge is the External Memory model: there is a proposed algorithm due to Dementiev et al. that is claimed to be only two to five times slower than an algorithm that only makes use of internal memory.

== References ==
Wikipedia/Parallel_algorithms_for_minimum_spanning_trees
In mathematics, the bisection method is a root-finding method that applies to any continuous function for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root. It is a very simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods. The method is also called the interval halving method, the binary search method, or the dichotomy method.

For polynomials, more elaborate methods exist for testing the existence of a root in an interval (Descartes' rule of signs, Sturm's theorem, Budan's theorem). They allow extending the bisection method into efficient algorithms for finding all real roots of a polynomial; see Real-root isolation.

== The method ==

The method is applicable for numerically solving the equation f(x) = 0 for the real variable x, where f is a continuous function defined on an interval [a, b] and where f(a) and f(b) have opposite signs. In this case a and b are said to bracket a root since, by the intermediate value theorem, the continuous function f must have at least one root in the interval (a, b). At each step the method divides the interval in two halves by computing the midpoint c = (a + b)/2 of the interval and the value of the function f(c) at that point. If c itself is a root then the process has succeeded and stops.
Otherwise, there are now only two possibilities: either f(a) and f(c) have opposite signs and bracket a root, or f(c) and f(b) have opposite signs and bracket a root. The method selects the subinterval that is guaranteed to be a bracket as the new interval to be used in the next step. In this way an interval that contains a zero of f is reduced in width by 50% at each step. The process is continued until the interval is sufficiently small.

Explicitly, if f(c) = 0 then c may be taken as the solution and the process stops. Otherwise, if f(a) and f(c) have opposite signs, then the method sets c as the new value for b, and if f(b) and f(c) have opposite signs then the method sets c as the new a. In both cases, the new f(a) and f(b) have opposite signs, so the method is applicable to this smaller interval.

=== Stopping condition ===

The input for the method is a continuous function f, an interval [a, b], and the function values f(a) and f(b). The function values are of opposite sign (there is at least one zero crossing within the interval). Each iteration performs these steps:

Calculate c, the midpoint of the interval:

    c = (a + b)/2        if a × b ≤ 0
    c = a + (b − a)/2    if a × b > 0

(the second form avoids overflow when a and b have the same sign). Calculate the function value at the midpoint, f(c). If convergence is satisfactory (see below), return c and stop iterating.
Examine the sign of f(c) and replace either (a, f(a)) or (b, f(b)) with (c, f(c)) so that there is a zero crossing within the new interval.

In order to determine when the iteration should stop, it is necessary to consider what is meant by the concept of 'tolerance' (ε). Burden & Faires state: "we can select a tolerance ε > 0 and generate c1, ..., cN until one of the following conditions is met: Unfortunately, difficulties can arise using any of these stopping criteria ... Without additional knowledge about f or c, inequality (2.2) is the best stopping criterion to apply because it comes closest to testing relative error." (Note: c has been used here as it is more common than Burden and Faires' 'p'.)

The objective is to find an approximation, within the tolerance, to the root. It can be seen that (2.3) |f(cN)| < ε does not give such an approximation unless the slope of the function at cN is in the neighborhood of ±1. Suppose, for the purpose of illustration, the tolerance ε = 5 × 10^−7. Then, for a function such as f(x) = 10^−m (x − 1),

    |f(c)| = 10^−m |x − 1| < 5 × 10^−7, so |x − 1| < 5 × 10^(m−7).

This means that any number x in [1 − 5 × 10^(m−7), 1 + 5 × 10^(m−7)] would be a 'good' approximation to the root. If m = 10, the approximation to the root 1 would be in [1 − 5000, 1 + 5000] = [−4999, 5001]
-- a very poor result. As (2.3) does not appear to give acceptable results, (2.1) and (2.2) need to be evaluated. The following Python script compares the behavior for those two stopping conditions.

import numpy as np

def bisect(f, a, b, tolerance):
    fa = f(a)
    fb = f(b)
    i = 0
    stop_a = []   # [c, i] once the absolute criterion b - a <= tolerance is met
    stop_r = []   # [c, i] once the relative criterion b - a <= |c|*tolerance is met
    while True:
        i += 1
        c = a + (b - a) / 2
        fc = f(c)
        if c < 10:  # small root: print with 16 decimal places
            if not stop_a:
                print('{:3d} {:18.16f} {:18.16f} {:18.16e} | {:5.2e} {:5.2e}'
                      .format(i, a, b, c, b - a, (b - a) / c))
            else:
                print('{:3d} {:18.16f} {:18.16f} {:18.16e} | ----- {:5.2e}'
                      .format(i, a, b, c, b - a))
        else:       # large root: print with 7 decimal places
            if not stop_r:
                print('{:3d} {:18.7f} {:18.7f} {:18.7e} | {:5.2e} {:5.2e}'
                      .format(i, a, b, c, b - a, (b - a) / c))
            else:
                print('{:3d} {:18.7f} {:18.7f} {:18.7e} | {:5.2e} ----- '
                      .format(i, a, b, c, b - a))
        if fc == 0:
            return [c, i]
        if (b - a <= abs(c) * tolerance) & (stop_r == []):
            stop_r = [c, i]
        if (b - a <= tolerance) & (stop_a == []):
            stop_a = [c, i]
        if np.sign(fa) == np.sign(fc):
            a = c
            fa = fc
        else:
            b = c
            fb = fc
        if (stop_r != []) & (stop_a != []):
            return [stop_a, stop_r]

The first function to be tested is one with a small root i.e.
f ( x ) = x − 0.00000000123456789 {\displaystyle f(x)=x-0.00000000123456789} print(' i a b c b - a (b - a)/c') f = lambda x: x - 0.00000000123456789 res = bisect(f, 0, 1, 5e-7) print('In {:2d} steps the absolute error case gives {:20.18F}'.format(res[0][1], res[0][0])) print('In {:2d} steps the relative error case gives {:20.18F}'.format(res[1][1], res[1][0])) print(' as the approximation to 0.00000000123456789') i a b c b - a (b - a)/c 1 0.0000000000000000 1.0000000000000000 5.0000000000000000e-01 | 1.00e+00 2.00e+00 2 0.0000000000000000 0.5000000000000000 2.5000000000000000e-01 | 5.00e-01 2.00e+00 3 0.0000000000000000 0.2500000000000000 1.2500000000000000e-01 | 2.50e-01 2.00e+00 4 0.0000000000000000 0.1250000000000000 6.2500000000000000e-02 | 1.25e-01 2.00e+00 5 0.0000000000000000 0.0625000000000000 3.1250000000000000e-02 | 6.25e-02 2.00e+00 6 0.0000000000000000 0.0312500000000000 1.5625000000000000e-02 | 3.12e-02 2.00e+00 7 0.0000000000000000 0.0156250000000000 7.8125000000000000e-03 | 1.56e-02 2.00e+00 8 0.0000000000000000 0.0078125000000000 3.9062500000000000e-03 | 7.81e-03 2.00e+00 9 0.0000000000000000 0.0039062500000000 1.9531250000000000e-03 | 3.91e-03 2.00e+00 10 0.0000000000000000 0.0019531250000000 9.7656250000000000e-04 | 1.95e-03 2.00e+00 11 0.0000000000000000 0.0009765625000000 4.8828125000000000e-04 | 9.77e-04 2.00e+00 12 0.0000000000000000 0.0004882812500000 2.4414062500000000e-04 | 4.88e-04 2.00e+00 13 0.0000000000000000 0.0002441406250000 1.2207031250000000e-04 | 2.44e-04 2.00e+00 14 0.0000000000000000 0.0001220703125000 6.1035156250000000e-05 | 1.22e-04 2.00e+00 15 0.0000000000000000 0.0000610351562500 3.0517578125000000e-05 | 6.10e-05 2.00e+00 16 0.0000000000000000 0.0000305175781250 1.5258789062500000e-05 | 3.05e-05 2.00e+00 17 0.0000000000000000 0.0000152587890625 7.6293945312500000e-06 | 1.53e-05 2.00e+00 18 0.0000000000000000 0.0000076293945312 3.8146972656250000e-06 | 7.63e-06 2.00e+00 19 0.0000000000000000 0.0000038146972656 
1.9073486328125000e-06 | 3.81e-06 2.00e+00 20 0.0000000000000000 0.0000019073486328 9.5367431640625000e-07 | 1.91e-06 2.00e+00 21 0.0000000000000000 0.0000009536743164 4.7683715820312500e-07 | 9.54e-07 2.00e+00 22 0.0000000000000000 0.0000004768371582 2.3841857910156250e-07 | 4.77e-07 2.00e+00 23 0.0000000000000000 0.0000002384185791 1.1920928955078125e-07 | ----- 2.38e-07 24 0.0000000000000000 0.0000001192092896 5.9604644775390625e-08 | ----- 1.19e-07 25 0.0000000000000000 0.0000000596046448 2.9802322387695312e-08 | ----- 5.96e-08 26 0.0000000000000000 0.0000000298023224 1.4901161193847656e-08 | ----- 2.98e-08 27 0.0000000000000000 0.0000000149011612 7.4505805969238281e-09 | ----- 1.49e-08 28 0.0000000000000000 0.0000000074505806 3.7252902984619141e-09 | ----- 7.45e-09 29 0.0000000000000000 0.0000000037252903 1.8626451492309570e-09 | ----- 3.73e-09 30 0.0000000000000000 0.0000000018626451 9.3132257461547852e-10 | ----- 1.86e-09 31 0.0000000009313226 0.0000000018626451 1.3969838619232178e-09 | ----- 9.31e-10 32 0.0000000009313226 0.0000000013969839 1.1641532182693481e-09 | ----- 4.66e-10 33 0.0000000011641532 0.0000000013969839 1.2805685400962830e-09 | ----- 2.33e-10 34 0.0000000011641532 0.0000000012805685 1.2223608791828156e-09 | ----- 1.16e-10 35 0.0000000012223609 0.0000000012805685 1.2514647096395493e-09 | ----- 5.82e-11 36 0.0000000012223609 0.0000000012514647 1.2369127944111824e-09 | ----- 2.91e-11 37 0.0000000012223609 0.0000000012369128 1.2296368367969990e-09 | ----- 1.46e-11 38 0.0000000012296368 0.0000000012369128 1.2332748156040907e-09 | ----- 7.28e-12 39 0.0000000012332748 0.0000000012369128 1.2350938050076365e-09 | ----- 3.64e-12 40 0.0000000012332748 0.0000000012350938 1.2341843103058636e-09 | ----- 1.82e-12 41 0.0000000012341843 0.0000000012350938 1.2346390576567501e-09 | ----- 9.09e-13 42 0.0000000012341843 0.0000000012346391 1.2344116839813069e-09 | ----- 4.55e-13 43 0.0000000012344117 0.0000000012346391 1.2345253708190285e-09 | ----- 2.27e-13 44 
0.0000000012345254 0.0000000012346391 1.2345822142378893e-09 | ----- 1.14e-13 45 0.0000000012345254 0.0000000012345822 1.2345537925284589e-09 | ----- 5.68e-14 46 0.0000000012345538 0.0000000012345822 1.2345680033831741e-09 | ----- 2.84e-14 47 0.0000000012345538 0.0000000012345680 1.2345608979558165e-09 | ----- 1.42e-14 48 0.0000000012345609 0.0000000012345680 1.2345644506694953e-09 | ----- 7.11e-15 49 0.0000000012345645 0.0000000012345680 1.2345662270263347e-09 | ----- 3.55e-15 50 0.0000000012345662 0.0000000012345680 1.2345671152047544e-09 | ----- 1.78e-15 51 0.0000000012345671 0.0000000012345680 1.2345675592939642e-09 | ----- 8.88e-16 52 0.0000000012345676 0.0000000012345680 1.2345677813385691e-09 | ----- 4.44e-16 In 22 steps the absolute error case gives 0.000000238418579102 In 52 steps the relative error case gives 0.000000001234567781 as the approximation to 0.00000000123456789 The reason that the absolute difference method gives such a poor result is that it measures 'decimal places' of accuracy - but those decimal places may contain only 0's so have no useful information. That means that the 6 zeros after the decimal point in 0.000000238418579102 match the first 6 in 0.00000000123456789 so the absolute difference is less than ϵ = 5 × 10 − 7 {\displaystyle \epsilon =5\times 10^{-7}} . On the other hand, the relative difference method measures 'significant digits' and represents a much better approximation to the position of the root. 
The next example is

f = lambda x: x - 1234567.89012456789
print('  i          a                  b                  c             b - a   (b - a)/c')
res = bisect(f, 1234550, 1234581, 5e-7)
print('In %2d steps the absolute error case gives %20.18F' % (res[0][1], res[0][0]))
print('In %2d steps the relative error case gives %20.18F' % (res[1][1], res[1][0]))
print('       as the approximation to 1234567.89012456789')

  i          a                  b                  c             b - a   (b - a)/c
  1    1234550.0000000    1234581.0000000      1.2345655e+06 | 3.10e+01 2.51e-05
  2    1234565.5000000    1234581.0000000      1.2345732e+06 | 1.55e+01 1.26e-05
  3    1234565.5000000    1234573.2500000      1.2345694e+06 | 7.75e+00 6.28e-06
  4    1234565.5000000    1234569.3750000      1.2345674e+06 | 3.88e+00 3.14e-06
  5    1234567.4375000    1234569.3750000      1.2345684e+06 | 1.94e+00 1.57e-06
  6    1234567.4375000    1234568.4062500      1.2345679e+06 | 9.69e-01 7.85e-07
  7    1234567.4375000    1234567.9218750      1.2345677e+06 | 4.84e-01 3.92e-07
  8    1234567.6796875    1234567.9218750      1.2345678e+06 | 2.42e-01 -----
  9    1234567.8007812    1234567.9218750      1.2345679e+06 | 1.21e-01 -----
 10    1234567.8613281    1234567.9218750      1.2345679e+06 | 6.05e-02 -----
 11    1234567.8613281    1234567.8916016      1.2345679e+06 | 3.03e-02 -----
 12    1234567.8764648    1234567.8916016      1.2345679e+06 | 1.51e-02 -----
 13    1234567.8840332    1234567.8916016      1.2345679e+06 | 7.57e-03 -----
 14    1234567.8878174    1234567.8916016      1.2345679e+06 | 3.78e-03 -----
 15    1234567.8897095    1234567.8916016      1.2345679e+06 | 1.89e-03 -----
 16    1234567.8897095    1234567.8906555      1.2345679e+06 | 9.46e-04 -----
 17    1234567.8897095    1234567.8901825      1.2345679e+06 | 4.73e-04 -----
 18    1234567.8899460    1234567.8901825      1.2345679e+06 | 2.37e-04 -----
 19    1234567.8900642    1234567.8901825      1.2345679e+06 | 1.18e-04 -----
 20    1234567.8901234    1234567.8901825      1.2345679e+06 | 5.91e-05 -----
 21    1234567.8901234    1234567.8901529      1.2345679e+06 | 2.96e-05 -----
 22    1234567.8901234    1234567.8901381      1.2345679e+06 | 1.48e-05 -----
 23    1234567.8901234    1234567.8901308      1.2345679e+06 | 7.39e-06 -----
 24    1234567.8901234    1234567.8901271      1.2345679e+06 | 3.70e-06 -----
 25    1234567.8901234    1234567.8901252      1.2345679e+06 | 1.85e-06
-----
 26    1234567.8901243    1234567.8901252      1.2345679e+06 | 9.24e-07 -----
 27    1234567.8901243    1234567.8901248      1.2345679e+06 | 4.62e-07 -----
In 27 steps the absolute error case gives 1234567.890124522149562836
In  7 steps the relative error case gives 1234567.679687500000000000
       as the approximation to 1234567.89012456789

In this case, the absolute difference tries to get 6 decimal places even though there are 7 digits before the decimal point. The relative difference gives 7 significant digits - all before the decimal point. These two examples show that the relative difference method produces much more satisfactory results than does the absolute difference method.

A common idea used in algorithms for the bisection method is to do a computation to predetermine the number of steps required to achieve a desired accuracy. This is done by noting that, after n bisections, the maximum difference between the root and the approximation is

    |c_n − c| ≤ (b − a)/2^n < ε.

This formula has been used to determine, in advance, an upper bound on the number of iterations that the bisection method needs to converge to a root within a certain number of decimal places. The number n of iterations needed to achieve such a required tolerance ε is bounded by

    n ≤ ⌈log2((b − a)/ε)⌉.

The problem is that the number of iterations is determined by using the absolute difference method and hence should not be applied. An alternative approach has been suggested by MIT: http://web.mit.edu/10.001/Web/Tips/Converge.htm

Convergence Tests, RTOL and ATOL

Tolerances are usually specified as either a relative tolerance RTOL or an absolute tolerance ATOL, or both.
"The user typically desires that

    |True value − Computed value| < RTOL × |True Value| + ATOL    (Eq. 1)

where the RTOL controls the number of significant figures in the computed value (a float or a double), and a small ATOL is just a 'safety net' for the case where True Value is close to zero. (What would happen if ATOL = 0 and True Value = 0? Would the convergence test ever be satisfied?) You should write your programs to take both RTOL and ATOL as inputs."

If the 'True Value' is large, then the 'RTOL' term will control the error, so this would help in that case. If the 'True Value' is small, then the error will be controlled by ATOL - this will make things worse. The question is asked "(What would happen if ATOL = 0 and True Value = 0? Would the convergence test ever be satisfied?)" - but no attempt is made to answer it. The answer to this question will follow.

== IEEE Standard-754 for Computer Arithmetic ==

If the algorithm is being used in the real number system, it is possible to continue the bisection until the relative error produces the desired approximation. If the algorithm is used with computer arithmetic, a further problem arises. In order to improve reliability and portability, the Institute of Electrical and Electronics Engineers (IEEE) produced a standard for floating point arithmetic in 1985 and has revised it in 2008 and 2019; see IEEE 754. The IEEE Standard 754 representation is the standard used in most micro-computers. It is, for example, the basis of the PC floating point processor.

Double-precision numbers occupy 64 bits, which are divided into a sign bit (+/−), an exponent of 10 bits, and a fractional part of 53 bits. In order to allow for fractions (negative exponents), the exponent is biased to make the effective number of bits for the exponent 9. The effective values of the exponent with 0 < e ≤ 1023 would be (2^−511, 2^512), making the double precision numbers take the form (−1)^s × 2^(e−511) × 0.f. The extreme range for a positive DP number would then be (1.492 × 10^−154, 1.341 × 10^154).

Because the fraction would normally have a non-zero leading digit (a 1 for binary), that bit does not need to be stored, as the processor will supply it. As a result, the 53-bit fraction can be stored in 52 bits, so the other bit can be used in the exponent to give an actual range of 0 < e ≤ 2047. The range can be further extended by putting the assumed 1 before the binary point. If both the exponent and fraction are 0, then the number is 0 (with a sign). In order to deal with 3 other extreme situations, an exponent of 2047 is reserved for NaN (Not a Number - such as division by 0) and the infinities. A number is thus stored in the following form:

The following are examples of some double precision numbers: The first one (decimal 3) illustrates that 3 (binary 11) has a single one in the fraction part - the other 1 is assumed. The second one is an example for which the exponent is 2047 (+∞). The third one gives the largest number which can be represented in double precision arithmetic. Note that

    1.7976931348623157e+308 + 0.0000000000000001e+308 = inf

The next one, the minimum normal, represents the smallest number that can be used with full double precision. The maximum subnormal and the minimum subnormal represent a range of numbers that have less than full double precision. It is the minimum subnormal that is crucial for the bisection algorithm. If b − a < 9.8813129168249309 × 10^−324 (2 × the minimum subnormal), the interval cannot be divided and the process must stop.
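The subnormal limit quoted above can be checked directly in Python, where 5e-324 parses to the minimum subnormal:

```python
import sys

# The smallest positive double is the minimum subnormal, 5e-324;
# sys.float_info.min is the minimum *normal*, a much larger value.
smallest = 5e-324
print(sys.float_info.min)   # 2.2250738585072014e-308 (minimum normal)
print(smallest / 2)         # 0.0 -- halving the minimum subnormal underflows

# Once b - a is below 2 * smallest, the midpoint no longer separates a and b:
a = 0.0
b = smallest                # b - a equals the minimum subnormal
c = a + (b - a) / 2
print(c == a)               # True: the interval can no longer be divided
```

This is exactly why the algorithm below takes 9.8813129168249309e-324 (twice the minimum subnormal) as the smallest usable bound.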
== Algorithm ==

import numpy as np

def bisect(f, a, b, tol, bound=9.8813129168249309e-324):
    ############################################################################
    # input: Function f,
    #        endpoint values a, b,
    #        tolerance tol (if tol = 5e-t and bound = 9.0e-324 the function
    #            returns t significant digits for a root between the
    #            minimum normal and the maximum normal),
    #        bound (if bound = 9.8813129168249309e-324, the algorithm continues
    #            until the interval cannot be further divided; a larger value
    #            may result in termination before t digits are found).
    # conditions: f is a continuous function in the interval [a, b],
    #             a < b,
    #             and f(a)*f(b) < 0.
    # output: [root, iterations, convergence, termination condition]
    ############################################################################
    if b <= a:
        return [float("NAN"), 0, "No convergence", "b < a"]
    fa = f(a)
    fb = f(b)
    if np.sign(fa) == np.sign(fb):
        return [float("NAN"), 0, "No convergence", "f(a)*f(b) > 0"]
    en = 0
    while en < 2200:
        en += 1
        if np.sign(a) == np.sign(b):  # avoid overflow
            c = a + (b - a)/2
        else:
            c = (a + b)/2
        fc = f(c)
        if b - a <= bound:
            return [bound, en, "No convergence", "Bound reached"]
        if fc == 0:
            return [c, en, "Converged", "f(c) = 0"]
        if b - a <= abs(c) * tol:
            return [c, en, "Converged", "Tolerance"]
        if np.sign(fa) == np.sign(fc):
            a = c
            fa = fc
        else:
            b = c
    return [float("NAN"), en, "No convergence", "Bad function"]

The first 2 examples test for incorrect input values:

1 bisect(lambda x: x - 1, 5, 1, 5.000000e-15, 9.8813129168249309e-324)
Approx. root = nan No convergence after 0 iterations with termination b < a
Final interval [ nan, nan]

2 bisect(lambda x: x - 1, 5, 7, 5.000000e-15, 9.8813129168249309e-324)
Approx. root = nan No convergence after 0 iterations with termination f(a)*f(b) > 0
Final interval [ nan, nan]

Large roots:

3 bisect(lambda x: x - 12345678901.23456, 0, 1.23457e+14, 5.000000e-15, 9.8813129168249309e-324)
Approx.
root = 12345678901.23454 Converged after 62 iterations with termination Tolerance
Final interval [1.2345678901234526e+10, 1.2345678901234552e+10]

4 bisect(lambda x: x - 1.23456789012456e+100, 0, 2e+100, 5.000000e-15, 9.8813129168249309e-324)
Approx. root = 1.234567890124561e+100 Converged after 50 iterations with termination Tolerance
Final interval [1.2345678901245599e+100, 1.2345678901245619e+100]

The final interval is computed as [c − w/2, c + w/2] where w = (b − a)/2^n. This can give a good measure of the accuracy of the approximation.

Root near maximum:

5 bisect(lambda x: x - 1.234567890123456e+307, 0, 1e+308, 5.000000e-15, 9.8813129168249309e-324)
Approx. root = 1.234567890123454e+307 Converged after 52 iterations with termination Tolerance
Final interval [1.2345678901234535e+307, 1.2345678901234555e+307]

Small roots:

6 bisect(lambda x: x - 1.234567890123456e-05, 0, 1, 5.000000e-15, 9.8813129168249309e-324)
Approx. root = 1.234567890123455e-05 Converged after 65 iterations with termination Tolerance
Final interval [1.2345678901234537e-05, 1.2345678901234564e-05]

7 bisect(lambda x: x - 1.234567890123456e-100, 0, 1, 5.000000e-15, 9.8813129168249309e-324)
Approx. root = 1.234567890123454e-100 Converged after 381 iterations with termination Tolerance
Final interval [1.2345678901234532e-100, 1.2345678901234552e-100]

Ex. 8 is beyond the minimum normal but gives a fairly good result because the approximation has a small interval. Calculations for values in the subnormal range can produce unexpected results.

8 bisect(lambda x: x - 1.234567890123457e-310, 0, 1, 5.000000e-15, 9.8813129168249309e-324)
Approx. root = 1.234567890123457e-310 Converged after 1071 iterations with termination f(c) = 0
Final interval [1.2345678901232595e-310, 1.2345678901236548e-310]

If the return state is 'f(c) = 0', then the desired tolerance may not have been achieved.
This can be checked by lowering the tolerance until a return state of 'Tolerance' is achieved.

8a bisect(lambda x: x - 1.234567890123457e-310, 0, 1, 5.000000e-13)
Approx. root = 1.234567890123457e-310 Converged after 1071 iterations with termination f(c) = 0
Final interval [1.2345678901232595e-310, 1.2345678901236548e-310]

8b bisect(lambda x: x - 1.234567890123457e-310, 0, 1, 5.000000e-12)
Approx. root = 1.234567890124643e-310 Converged after 1069 iterations with termination Tolerance
Final interval [1.2345678901238524e-310, 1.2345678901254334e-310]

8b shows that the result has 12 digits. Even though the root is outside the 'normal' range, it may still be possible to achieve results with good tolerance.

9 bisect(lambda x: x - 1.234567891003685e-315, 0, 1, 5.000000e-03, 9.8813129168249309e-324)
Approx. root = 1.23558592808891e-315 Converged after 1055 iterations with termination Tolerance
Final interval [1.2342907646422757e-315, 1.2368810915355439e-315]

Ex. 10 shows the maximum number of iterations that should be expected:

10 bisect(lambda x: x - 1.234567891003685e-315, -1e+307, 1e+307, 5.000000e-15, 9.8813129168249309e-324)
Approx. root = 1.234567891003685e-315 Converged after 2093 iterations with termination f(c) = 0
Final interval [1.2345678910036845e-315, 1.2345678910036845e-315]

There may be situations in which a 'good' approximation is not required. This can be achieved by changing the 'Bound':

11 bisect(lambda x: x - 1.234567890123457e-100, 0, 1, 5.000000e-15, 4.9999999999999997e-12)
Approx. root = 5e-12 No convergence after 39 iterations with termination Bound reached
Final interval [4.0905052982270715e-12, 5.9094947017729279e-12]

Evaluation of the final interval may assist in determining accuracy.
The following shows the behavior of subnormal numbers and how significant digits are lost:

print(1.234567890123456e-310) 1.23456789012346e-310
print(1.234567890123456e-312) 1.234567890124e-312
print(1.234567890123456e-315) 1.23456789e-315
print(1.234567890123456e-317) 1.234568e-317
print(1.234567890123456e-319) 1.23457e-319
print(1.234567890123456e-321) 1.235e-321
print(1.234567890123456e-323) 1e-323
print(1.234567890123456e-324) 0.0

These examples show that this method gives 15 digit accuracy for functions of the form f ( x ) = ( x − r ) g ( x ) {\displaystyle f(x)=(x-r)g(x)} for all r {\displaystyle r} in the range of normal numbers.

== Higher order roots ==

Further problems can arise from the use of computer arithmetic for higher order roots. To help in considering how to detect and correct inaccurate results, consider the following:

bisect(lambda x: (x - 1.23456789012345e-100), 0, 1, 5e-15)
Approx. root = 1.23456789012345e-100
Converged after 381 iterations with termination f(c) = 0
Final interval [1.2345678901234491e-100, 1.2345678901234511e-100]

The final interval [1.2345678901234491e-100, 1.2345678901234511e-100] indicates fairly good accuracy. The bisection method has a distinct advantage over other root finding techniques in that the final interval can be used to determine the accuracy of the final solution. This information will be useful in assessing the accuracy of some following examples. Next consider what happens for a root of order 3:

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-15)
Approx. root = 1.234567898094279e-100
Converged after 357 iterations with termination f(c) = 0
Final interval [1.2345678810624394e-100, 1.2345679151261181e-100]

The final interval [1.2345678810624394e-100, 1.2345679151261181e-100] indicates that 15 digits have not been returned.
The relative error (1.234567898094279e-100 - 1.23456789012345e-100)/1.23456789012345e-100 = 6.456371473106003e-09 shows that only 8 digits are correct and again f ( c ) = 0 {\displaystyle f(c)=0} . This occurs because f ( a p p r o x . r o o t ) = f ( 1.234567898094279 ∗ 10 − 100 ) = ( 1.234567898094279 ∗ 10 − 100 − 1.23456789012345 ∗ 10 − 100 ) 3 = ( 7.970828885817127 ∗ 10 − 109 ) 3 = 5.064195 ∗ 10 2 ∗ 10 − 327 = 5.064195 ∗ 10 − 325 {\displaystyle {\begin{aligned}f(approx.root)&=f(1.234567898094279*10^{-100})\\&=(1.234567898094279*10^{-100}-1.23456789012345*10^{-100})^{3}\\&=(7.970828885817127*10^{-109})^{3}\\&=5.064195*10^{2}*10^{-327}\\&=5.064195*10^{-325}\end{aligned}}} Because this is less than the minimum subnormal, it returns a value of 0. This can occur in any root finding technique, not just the bisection method, and it is only because the return conditions include information about which stopping criterion was achieved that the problem can be diagnosed. The use of the relative error as a stopping condition allows us to determine how accurate a solution can be.
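The underflow described above can be reproduced directly in Python (a minimal sketch; the values are copied from the transcript above):

```python
# The error term is representable, but its cube falls below the smallest
# subnormal double (about 4.9e-324) and underflows to exactly 0.0.
err = 1.234567898094279e-100 - 1.23456789012345e-100
print(err)        # about 7.97e-109
print(err ** 3)   # 0.0, so f(approx_root) compares equal to zero
```

This is why the routine reports the termination state 'f(c) = 0' even though the approximation is eight digits away from the true root.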
Consider what happens on trying to achieve 8 significant figures:

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-8)
[1.2345678980942788e-100, 357, 'Converged', 'f(c) = 0']

f ( c ) = 0 {\displaystyle f(c)=0} indicates that eight digits of accuracy have not been achieved, so try

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-4)
[1.2347947281308757e-100, 344, 'Converged', 'Tolerance']

At least four digits have been achieved, and

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-6)
[1.2345658202098768e-100, 351, 'Converged', 'Tolerance'] 6 digit convergence

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-7)
[1.2345677277758852e-100, 354, 'Converged', 'Tolerance'] 7 digit convergence

A similar problem can arise if there are two small roots close together:

bisect(lambda x: (x - 1.23456789012345e-23)*x, 1e-300, 1, 5e-15)
[1.2345678901234481e-23, 125, 'Converged', 'Tolerance'] 15 digit convergence

bisect(lambda x: (x - 1.23456789012345e-24)*x, 1e-300, 1e-20, 5e-1)
[1.5509016039626554e-300, 931, 'Converged', 'f(c) = 0']
Final interval [1.2754508019813276e-300, 1.8263524059439830e-300]
relative error = 3.5521376891678086e-1 -- 1 digit convergence

bisect(lambda x: (x - 1.23456789012345e-23)*x, 1e-300, 1, 5e-1)
[1.1580528575742387e-23, 79, 'Converged', 'Tolerance']
Final interval [1.0753347963189360e-23, 1.2407709188295415e-23]
relative error = 1.4285714285714285e-1 -- 1 digit convergence

== Generalization to higher dimensions ==

The bisection method has been generalized to multi-dimensional functions. Such methods are called generalized bisection methods.
=== Methods based on degree computation === Some of these methods are based on computing the topological degree, which for a bounded region Ω ⊆ R n {\displaystyle \Omega \subseteq \mathbb {R} ^{n}} and a differentiable function f : R n → R n {\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} ^{n}} is defined as a sum over its roots: deg ⁡ ( f , Ω ) := ∑ y ∈ f − 1 ( 0 ) sgn ⁡ det ( D f ( y ) ) {\displaystyle \deg(f,\Omega ):=\sum _{y\in f^{-1}(\mathbf {0} )}\operatorname {sgn} \det(Df(y))} , where D f ( y ) {\displaystyle Df(y)} is the Jacobian matrix, 0 = ( 0 , 0 , . . . , 0 ) T {\displaystyle \mathbf {0} =(0,0,...,0)^{T}} , and sgn ⁡ ( x ) = { 1 , x > 0 0 , x = 0 − 1 , x < 0 {\displaystyle \operatorname {sgn}(x)={\begin{cases}1,&x>0\\0,&x=0\\-1,&x<0\\\end{cases}}} is the sign function. In order for a root to exist, it is sufficient that deg ⁡ ( f , Ω ) ≠ 0 {\displaystyle \deg(f,\Omega )\neq 0} , and this can be verified using a surface integral over the boundary of Ω {\displaystyle \Omega } . === Characteristic bisection method === The characteristic bisection method uses only the signs of a function in different points. Let f be a function from Rd to Rd, for some integer d ≥ 2. A characteristic polyhedron (also called an admissible polygon) of f is a polytope in Rd, having 2^d vertices, such that in each vertex v, the combination of signs of f(v) is unique and the topological degree of f on its interior is not zero (a necessary criterion to ensure the existence of a root). For example, for d=2, a characteristic polyhedron of f is a quadrilateral with vertices (say) A,B,C,D, such that: ⁠ sgn ⁡ f ( A ) = ( − , − ) {\displaystyle \operatorname {sgn} f(A)=(-,-)} ⁠, that is, f1(A)<0, f2(A)<0. ⁠ sgn ⁡ f ( B ) = ( − , + ) {\displaystyle \operatorname {sgn} f(B)=(-,+)} ⁠, that is, f1(B)<0, f2(B)>0. ⁠ sgn ⁡ f ( C ) = ( + , − ) {\displaystyle \operatorname {sgn} f(C)=(+,-)} ⁠, that is, f1(C)>0, f2(C)<0.
⁠ sgn ⁡ f ( D ) = ( + , + ) {\displaystyle \operatorname {sgn} f(D)=(+,+)} ⁠, that is, f1(D)>0, f2(D)>0. A proper edge of a characteristic polygon is an edge between a pair of vertices, such that the sign vector differs by only a single sign. In the above example, the proper edges of the characteristic quadrilateral are AB, AC, BD and CD. A diagonal is a pair of vertices, such that the sign vector differs by all d signs. In the above example, the diagonals are AD and BC. At each iteration, the algorithm picks a proper edge of the polyhedron (say, A—B), and computes the signs of f in its mid-point (say, M). Then it proceeds as follows: If ⁠ sgn ⁡ f ( M ) = sgn ⁡ ( A ) {\displaystyle \operatorname {sgn} f(M)=\operatorname {sgn}(A)} ⁠, then A is replaced by M, and we get a smaller characteristic polyhedron. If ⁠ sgn ⁡ f ( M ) = sgn ⁡ ( B ) {\displaystyle \operatorname {sgn} f(M)=\operatorname {sgn}(B)} ⁠, then B is replaced by M, and we get a smaller characteristic polyhedron. Else, we pick a new proper edge and try again. Suppose the diameter (= length of longest proper edge) of the original characteristic polyhedron is D. Then, at least log 2 ⁡ ( D / ε ) {\displaystyle \log _{2}(D/\varepsilon )} bisections of edges are required so that the diameter of the remaining polygon will be at most ε.: 11, Lemma.4.7  If the topological degree of the initial polyhedron is not zero, then there is a procedure that can choose an edge such that the next polyhedron also has nonzero degree. == See also == Binary search algorithm Lehmer–Schur algorithm, generalization of the bisection method in the complex plane Nested intervals == References == Burden, Richard L.; Faires, J. Douglas (2014). "2.1 The Bisection Algorithm". Numerical Analysis (10th ed.). Cengage Learning. ISBN 978-0-87150-857-7. == Further reading == Corliss, George (1977). "Which root does the bisection algorithm find?". SIAM Review. 19 (2): 325–327. doi:10.1137/1019044. ISSN 1095-7200. Kaw, Autar; Kalu, Egwu (2008).
Numerical Methods with Applications (1st ed.). Archived from the original on 2009-04-13. == External links == Weisstein, Eric W. "Bisection". MathWorld. Bisection Method Notes, PPT, Mathcad, Maple, Matlab, Mathematica from Holistic Numerical Methods Institute
Wikipedia/Bisection_algorithm
In database management, an aggregate function or aggregation function is a function where multiple values are processed together to form a single summary statistic. Common aggregate functions include: Average (i.e., arithmetic mean) Count Maximum Median Minimum Mode Range Sum Others include: Nanmean (mean ignoring NaN values, also known as "nil" or "null") Stddev Formally, an aggregate function takes as input a set, a multiset (bag), or a list from some input domain I and outputs an element of an output domain O. The input and output domains may be the same, such as for SUM, or may be different, such as for COUNT. Aggregate functions occur commonly in numerous programming languages, in spreadsheets, and in relational algebra. The listagg function, as defined in the SQL:2016 standard, aggregates data from multiple rows into a single concatenated string. In the entity relationship diagram, aggregation is represented as seen in Figure 1 with a rectangle around the relationship and its entities to indicate that it is being treated as an aggregate entity. == Decomposable aggregate functions == Aggregate functions present a bottleneck, because they potentially require having all input values at once. In distributed computing, it is desirable to divide such computations into smaller pieces, and distribute the work, usually computing in parallel, via a divide and conquer algorithm. Some aggregate functions can be computed by computing the aggregate for subsets, and then aggregating these aggregates; examples include COUNT, MAX, MIN, and SUM. In other cases the aggregate can be computed by computing auxiliary numbers for subsets, aggregating these auxiliary numbers, and finally computing the overall number at the end; examples include AVERAGE (tracking sum and count, dividing at the end) and RANGE (tracking max and min, subtracting at the end).
In other cases the aggregate cannot be computed without analyzing the entire set at once, though in some cases approximations can be distributed; examples include DISTINCT COUNT (Count-distinct problem), MEDIAN, and MODE. Such functions are called decomposable aggregation functions or decomposable aggregate functions. The simplest may be referred to as self-decomposable aggregation functions, which are defined as those functions f such that there is a merge operator ⁠ ⋄ {\displaystyle \diamond } ⁠ such that f ( X ⊎ Y ) = f ( X ) ⋄ f ( Y ) {\displaystyle f(X\uplus Y)=f(X)\diamond f(Y)} where ⁠ ⊎ {\displaystyle \uplus } ⁠ is the union of multisets (see monoid homomorphism). For example, SUM: SUM ⁡ ( x ) = x {\displaystyle \operatorname {SUM} ({x})=x} , for a singleton; SUM ⁡ ( X ⊎ Y ) = SUM ⁡ ( X ) + SUM ⁡ ( Y ) {\displaystyle \operatorname {SUM} (X\uplus Y)=\operatorname {SUM} (X)+\operatorname {SUM} (Y)} , meaning that merge ⁠ ⋄ {\displaystyle \diamond } ⁠ is simply addition. COUNT: COUNT ⁡ ( x ) = 1 {\displaystyle \operatorname {COUNT} ({x})=1} , COUNT ⁡ ( X ⊎ Y ) = COUNT ⁡ ( X ) + COUNT ⁡ ( Y ) {\displaystyle \operatorname {COUNT} (X\uplus Y)=\operatorname {COUNT} (X)+\operatorname {COUNT} (Y)} . MAX: MAX ⁡ ( x ) = x {\displaystyle \operatorname {MAX} ({x})=x} , MAX ⁡ ( X ⊎ Y ) = max ( MAX ⁡ ( X ) , MAX ⁡ ( Y ) ) {\displaystyle \operatorname {MAX} (X\uplus Y)=\max {\bigl (}\operatorname {MAX} (X),\operatorname {MAX} (Y){\bigr )}} . MIN: MIN ⁡ ( x ) = x {\textstyle \operatorname {MIN} ({x})=x} , MIN ⁡ ( X ⊎ Y ) = min ( MIN ⁡ ( X ) , MIN ⁡ ( Y ) ) {\displaystyle \operatorname {MIN} (X\uplus Y)=\min {\bigl (}\operatorname {MIN} (X),\operatorname {MIN} (Y){\bigr )}} . Note that self-decomposable aggregation functions can be combined (formally, taking the product) by applying them separately, so for instance one can compute both the SUM and COUNT at the same time, by tracking two numbers. 
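As an illustration of the merge operators above (a small sketch, not tied to any particular database engine), SUM, COUNT and MAX can each be computed per chunk and then merged:

```python
from functools import reduce

# Self-decomposable aggregates: aggregate each chunk, then merge the
# partial results; SUM and COUNT merge with +, MAX merges with max.
chunks = [[3, 1, 4], [1, 5], [9, 2, 6]]

partials = [(sum(c), len(c), max(c)) for c in chunks]
merged = reduce(lambda p, q: (p[0] + q[0], p[1] + q[1], max(p[2], q[2])),
                partials)

flat = [v for c in chunks for v in c]
assert merged == (sum(flat), len(flat), max(flat))   # same result as one pass
```

Because the three aggregates are merged in a single tuple, this also demonstrates taking the product of self-decomposable functions, as described above.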
More generally, one can define a decomposable aggregation function f as one that can be expressed as the composition of a final function g and a self-decomposable aggregation function h, f = g ∘ h , f ( X ) = g ( h ( X ) ) {\displaystyle f=g\circ h,f(X)=g(h(X))} . For example, AVERAGE=SUM/COUNT and RANGE=MAX−MIN. In the MapReduce framework, these steps are known as InitialReduce (value on individual record/singleton set), Combine (binary merge on two aggregations), and FinalReduce (final function on auxiliary values), and moving decomposable aggregation before the Shuffle phase is known as an InitialReduce step. Decomposable aggregation functions are important in online analytical processing (OLAP), as they allow aggregation queries to be computed on the pre-computed results in the OLAP cube, rather than on the base data. For example, it is easy to support COUNT, MAX, MIN, and SUM in OLAP, since these can be computed for each cell of the OLAP cube and then summarized ("rolled up"), but it is difficult to support MEDIAN, as that must be computed for every view separately. == Other decomposable aggregate functions == In order to calculate the average and standard deviation from aggregate data, it is necessary to have available for each group: the total of values (Σxi = SUM(x)), the number of values (N=COUNT(x)) and the total of squares of the values (Σxi2=SUM(x2)) of each group.
AVG: AVG ⁡ ( X ⊎ Y ) = ( AVG ⁡ ( X ) ∗ COUNT ⁡ ( X ) + AVG ⁡ ( Y ) ∗ COUNT ⁡ ( Y ) ) / ( COUNT ⁡ ( X ) + COUNT ⁡ ( Y ) ) {\displaystyle \operatorname {AVG} (X\uplus Y)={\bigl (}\operatorname {AVG} (X)*\operatorname {COUNT} (X)+\operatorname {AVG} (Y)*\operatorname {COUNT} (Y){\bigr )}/{\bigl (}\operatorname {COUNT} (X)+\operatorname {COUNT} (Y){\bigr )}} or AVG ⁡ ( X ⊎ Y ) = ( SUM ⁡ ( X ) + SUM ⁡ ( Y ) ) / ( COUNT ⁡ ( X ) + COUNT ⁡ ( Y ) ) {\displaystyle \operatorname {AVG} (X\uplus Y)={\bigl (}\operatorname {SUM} (X)+\operatorname {SUM} (Y){\bigr )}/{\bigl (}\operatorname {COUNT} (X)+\operatorname {COUNT} (Y){\bigr )}} or, only if COUNT(X)=COUNT(Y) AVG ⁡ ( X ⊎ Y ) = ( AVG ⁡ ( X ) + AVG ⁡ ( Y ) ) / 2 {\displaystyle \operatorname {AVG} (X\uplus Y)={\bigl (}\operatorname {AVG} (X)+\operatorname {AVG} (Y){\bigr )}/2} SUM(x2): The sum of squares of the values is important in order to calculate the Standard Deviation of groups SUM ⁡ ( X 2 ⊎ Y 2 ) = SUM ⁡ ( X 2 ) + SUM ⁡ ( Y 2 ) {\displaystyle \operatorname {SUM} (X^{2}\uplus Y^{2})=\operatorname {SUM} (X^{2})+\operatorname {SUM} (Y^{2})} STDDEV: For a finite population with equal probabilities at all points, we have STDDEV ⁡ ( X ) = s ( x ) = 1 N ∑ i = 1 N ( x i − x ¯ ) 2 = 1 N ( ∑ i = 1 N x i 2 ) − ( x ¯ ) 2 = SUM ⁡ ( x 2 ) / COUNT ⁡ ( x ) − AVG ⁡ ( x ) 2 {\displaystyle \operatorname {STDDEV} (X)=s(x)={\sqrt {{\frac {1}{N}}\sum _{i=1}^{N}(x_{i}-{\overline {x}})^{2}}}={\sqrt {{\frac {1}{N}}\left(\sum _{i=1}^{N}x_{i}^{2}\right)-({\overline {x}})^{2}}}={\sqrt {\operatorname {SUM} (x^{2})/\operatorname {COUNT} (x)-\operatorname {AVG} (x)^{2}}}} This means that the standard deviation is equal to the square root of the difference between the average of the squares of the values and the square of the average value. 
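The identity above can be checked numerically (a small sketch; the group values are arbitrary):

```python
import math

# Standard deviation of a merged group from per-group aggregates
# (sum, count, sum of squares), using STDDEV = sqrt(SUM(x^2)/COUNT - AVG^2).
def group_aggregates(xs):
    return sum(xs), len(xs), sum(v * v for v in xs)

x, y = [1.0, 2.0, 3.0], [4.0, 5.0]
(sx, nx, qx), (sy, ny, qy) = group_aggregates(x), group_aggregates(y)

n = nx + ny
avg = (sx + sy) / n
stddev = math.sqrt((qx + qy) / n - avg ** 2)

# Direct (population) standard deviation over the merged data agrees:
direct = math.sqrt(sum((v - avg) ** 2 for v in x + y) / n)
```

Only the three per-group aggregates are needed to merge the groups; the raw values never have to be brought together.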
STDDEV ⁡ ( X ⊎ Y ) = SUM ⁡ ( X 2 ⊎ Y 2 ) / COUNT ⁡ ( X ⊎ Y ) − AVG ⁡ ( X ⊎ Y ) 2 {\displaystyle \operatorname {STDDEV} (X\uplus Y)={\sqrt {\operatorname {SUM} (X^{2}\uplus Y^{2})/\operatorname {COUNT} (X\uplus Y)-\operatorname {AVG} (X\uplus Y)^{2}}}} STDDEV ⁡ ( X ⊎ Y ) = ( SUM ⁡ ( X 2 ) + SUM ⁡ ( Y 2 ) ) / ( COUNT ⁡ ( X ) + COUNT ⁡ ( Y ) ) − ( ( SUM ⁡ ( X ) + SUM ⁡ ( Y ) ) / ( COUNT ⁡ ( X ) + COUNT ⁡ ( Y ) ) ) 2 {\displaystyle \operatorname {STDDEV} (X\uplus Y)={\sqrt {{\bigl (}\operatorname {SUM} (X^{2})+\operatorname {SUM} (Y^{2}){\bigr )}/{\bigl (}\operatorname {COUNT} (X)+\operatorname {COUNT} (Y){\bigr )}-{\bigl (}(\operatorname {SUM} (X)+\operatorname {SUM} (Y))/(\operatorname {COUNT} (X)+\operatorname {COUNT} (Y)){\bigr )}^{2}}}} == See also == Cross-tabulation a.k.a. Contingency table Data drilling Data mining Data processing Extract, transform, load Fold (higher-order function) Group by (SQL), SQL clause OLAP cube Online analytical processing Pivot table Relational algebra Utility functions on indivisible goods#Aggregates of utility functions XML for Analysis AggregateIQ MapReduce == References == == Literature == Grabisch, Michel; Marichal, Jean-Luc; Mesiar, Radko; Pap, Endre (2009). Aggregation functions. Encyclopedia of Mathematics and its Applications. Vol. 127. Cambridge: Cambridge University Press. ISBN 978-0-521-51926-7. Zbl 1196.00002. Oracle Aggregate Functions: MAX, MIN, COUNT, SUM, AVG Examples == External links == Aggregate Functions (Transact-SQL)
Wikipedia/Decomposable_aggregation_function
In parallel computing, the fork–join model is a way of setting up and executing parallel programs, such that execution branches off in parallel at designated points in the program, to "join" (merge) at a subsequent point and resume sequential execution. Parallel sections may fork recursively until a certain task granularity is reached. Fork–join can be considered a parallel design pattern.: 209 ff.  It was formulated as early as 1963. By nesting fork–join computations recursively, one obtains a parallel version of the divide and conquer paradigm, expressed by the following generic pseudocode:

solve(problem):
    if problem is small enough:
        solve problem directly (sequential algorithm)
    else:
        for part in subdivide(problem)
            fork subtask to solve(part)
        join all subtasks spawned in previous loop
        return combined results

== Examples ==

The simple parallel merge sort of CLRS is a fork–join algorithm.

mergesort(A, lo, hi):
    if lo < hi:                      // at least one element of input
        mid = ⌊lo + (hi - lo) / 2⌋
        fork mergesort(A, lo, mid)   // process (potentially) in parallel with main task
        mergesort(A, mid, hi)        // main task handles second recursion
        join
        merge(A, lo, mid, hi)

The first recursive call is "forked off", meaning that its execution may run in parallel (in a separate thread) with the following part of the function, up to the join that causes all threads to synchronize. While the join may look like a barrier, it is different because the threads will continue to work after a barrier, while after a join only one thread continues.: 88  The second recursive call is not a fork in the pseudocode above; this is intentional, as forking tasks may come at an expense. If both recursive calls were set up as subtasks, the main task would not have any additional work to perform before being blocked at the join.
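The pattern above can be sketched in runnable Python with a thread pool standing in for the lightweight-task runtime (the depth cutoff and pool size are illustrative choices, not part of the original pseudocode):

```python
from concurrent.futures import ThreadPoolExecutor

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def mergesort(pool, a, depth=2):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    if depth > 0:
        task = pool.submit(mergesort, pool, a[:mid], depth - 1)  # fork
        right = mergesort(pool, a[mid:], depth - 1)              # main task works too
        left = task.result()                                     # join
    else:                                   # below the cutoff, stay sequential
        left = mergesort(pool, a[:mid], 0)
        right = mergesort(pool, a[mid:], 0)
    return merge(left, right)

with ThreadPoolExecutor(max_workers=4) as pool:
    result = mergesort(pool, [5, 3, 8, 1, 9, 2])   # [1, 2, 3, 5, 8, 9]
```

As in the pseudocode, only the first recursive call is forked; the second runs in the calling task, so the forking task has useful work to do before blocking at the join.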
== Implementations == Implementations of the fork–join model will typically fork tasks, fibers or lightweight threads, not operating-system-level "heavyweight" threads or processes, and use a thread pool to execute these tasks: the fork primitive allows the programmer to specify potential parallelism, which the implementation then maps onto actual parallel execution. The reason for this design is that creating new threads tends to result in too much overhead. The lightweight threads used in fork–join programming will typically have their own scheduler (typically a work stealing one) that maps them onto the underlying thread pool. This scheduler can be much simpler than a fully featured, preemptive operating system scheduler: general-purpose thread schedulers must deal with blocking for locks, but in the fork–join paradigm, threads only block at the join point. Fork–join is the main model of parallel execution in the OpenMP framework, although OpenMP implementations may or may not support nesting of parallel sections. It is also supported by the Java concurrency framework, the Task Parallel Library for .NET, and Intel's Threading Building Blocks (TBB). The Cilk programming language has language-level support for fork and join, in the form of the spawn and sync keywords, or cilk_spawn and cilk_sync in Cilk Plus. == See also == MapReduce Task parallelism Work stealing == References == == External links == A Primer on Scheduling Fork–Join Parallelism with Work Stealing Fork-Join Merge Sort (Java) (in Portuguese)
Wikipedia/Fork–join_model
In linear algebra, the Strassen algorithm, named after Volker Strassen, is an algorithm for matrix multiplication. It is faster than the standard matrix multiplication algorithm for large matrices, with a better asymptotic complexity, although the naive algorithm is often better for smaller matrices. The Strassen algorithm is slower than the fastest known algorithms for extremely large matrices, but such galactic algorithms are not useful in practice, as they are much slower for matrices of practical size. For small matrices even faster algorithms exist. Strassen's algorithm works for any ring, such as the usual plus/multiply arithmetic, but not for all semirings, such as min-plus or Boolean algebra, where only the naive algorithm and so-called combinatorial matrix multiplication still work. == History == Volker Strassen first published this algorithm in 1969 and thereby proved that the n 3 {\displaystyle n^{3}} general matrix multiplication algorithm was not optimal. The Strassen algorithm's publication resulted in more research about matrix multiplication that led to both asymptotically lower bounds and improved computational upper bounds. == Algorithm == Let A {\displaystyle A} , B {\displaystyle B} be two square matrices over a ring R {\displaystyle {\mathcal {R}}} , for example matrices whose entries are integers or the real numbers. The goal of matrix multiplication is to calculate the matrix product C = A B {\displaystyle C=AB} .
The following exposition of the algorithm assumes that all of these matrices have sizes that are powers of two (i.e., A , B , C ∈ Matr 2 n × 2 n ⁡ ( R ) {\displaystyle A,\,B,\,C\in \operatorname {Matr} _{2^{n}\times 2^{n}}({\mathcal {R}})} ), but this is only conceptually necessary — if the matrices A {\displaystyle A} , B {\displaystyle B} are not of type 2 n × 2 n {\displaystyle 2^{n}\times 2^{n}} , the "missing" rows and columns can be filled with zeros to obtain matrices with sizes of powers of two — though real implementations of the algorithm do not do this in practice. The Strassen algorithm partitions A {\displaystyle A} , B {\displaystyle B} and C {\displaystyle C} into equally sized block matrices A = [ A 11 A 12 A 21 A 22 ] , B = [ B 11 B 12 B 21 B 22 ] , C = [ C 11 C 12 C 21 C 22 ] , {\displaystyle A={\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}},\quad B={\begin{bmatrix}B_{11}&B_{12}\\B_{21}&B_{22}\end{bmatrix}},\quad C={\begin{bmatrix}C_{11}&C_{12}\\C_{21}&C_{22}\end{bmatrix}},\quad } with A i j , B i j , C i j ∈ Mat 2 n − 1 × 2 n − 1 ⁡ ( R ) {\displaystyle A_{ij},B_{ij},C_{ij}\in \operatorname {Mat} _{2^{n-1}\times 2^{n-1}}({\mathcal {R}})} . The naive algorithm would be: [ C 11 C 12 C 21 C 22 ] = [ A 11 × B 11 + A 12 × B 21 A 11 × B 12 + A 12 × B 22 A 21 × B 11 + A 22 × B 21 A 21 × B 12 + A 22 × B 22 ] . 
{\displaystyle {\begin{bmatrix}C_{11}&C_{12}\\C_{21}&C_{22}\end{bmatrix}}={\begin{bmatrix}A_{11}{\color {red}\times }B_{11}+A_{12}{\color {red}\times }B_{21}\quad &A_{11}{\color {red}\times }B_{12}+A_{12}{\color {red}\times }B_{22}\\A_{21}{\color {red}\times }B_{11}+A_{22}{\color {red}\times }B_{21}\quad &A_{21}{\color {red}\times }B_{12}+A_{22}{\color {red}\times }B_{22}\end{bmatrix}}.} This construction does not reduce the number of multiplications: 8 multiplications of matrix blocks are still needed to calculate the C i j {\displaystyle C_{ij}} matrices, the same number of multiplications needed when using standard matrix multiplication. The Strassen algorithm defines instead new values: M 1 = ( A 11 + A 22 ) × ( B 11 + B 22 ) ; M 2 = ( A 21 + A 22 ) × B 11 ; M 3 = A 11 × ( B 12 − B 22 ) ; M 4 = A 22 × ( B 21 − B 11 ) ; M 5 = ( A 11 + A 12 ) × B 22 ; M 6 = ( A 21 − A 11 ) × ( B 11 + B 12 ) ; M 7 = ( A 12 − A 22 ) × ( B 21 + B 22 ) , {\displaystyle {\begin{aligned}M_{1}&=(A_{11}+A_{22}){\color {red}\times }(B_{11}+B_{22});\\M_{2}&=(A_{21}+A_{22}){\color {red}\times }B_{11};\\M_{3}&=A_{11}{\color {red}\times }(B_{12}-B_{22});\\M_{4}&=A_{22}{\color {red}\times }(B_{21}-B_{11});\\M_{5}&=(A_{11}+A_{12}){\color {red}\times }B_{22};\\M_{6}&=(A_{21}-A_{11}){\color {red}\times }(B_{11}+B_{12});\\M_{7}&=(A_{12}-A_{22}){\color {red}\times }(B_{21}+B_{22}),\\\end{aligned}}} using only 7 multiplications (one for each M k {\displaystyle M_{k}} ) instead of 8. We may now express the C i j {\displaystyle C_{ij}} in terms of M k {\displaystyle M_{k}} : [ C 11 C 12 C 21 C 22 ] = [ M 1 + M 4 − M 5 + M 7 M 3 + M 5 M 2 + M 4 M 1 − M 2 + M 3 + M 6 ] . 
{\displaystyle {\begin{bmatrix}C_{11}&C_{12}\\C_{21}&C_{22}\end{bmatrix}}={\begin{bmatrix}M_{1}+M_{4}-M_{5}+M_{7}\quad &M_{3}+M_{5}\\M_{2}+M_{4}\quad &M_{1}-M_{2}+M_{3}+M_{6}\end{bmatrix}}.} We recursively iterate this division process until the submatrices degenerate into numbers (elements of the ring R {\displaystyle {\mathcal {R}}} ). If, as mentioned above, the original matrix had a size that was not a power of 2, then the resulting product will have zero rows and columns just like A {\displaystyle A} and B {\displaystyle B} , and these will then be stripped at this point to obtain the (smaller) matrix C {\displaystyle C} we really wanted. Practical implementations of Strassen's algorithm switch to standard methods of matrix multiplication for small enough submatrices, for which those algorithms are more efficient. The particular crossover point for which Strassen's algorithm is more efficient depends on the specific implementation and hardware. Earlier authors had estimated that Strassen's algorithm is faster for matrices with widths from 32 to 128 for optimized implementations. However, it has been observed that this crossover point has been increasing in recent years, and a 2010 study found that even a single step of Strassen's algorithm is often not beneficial on current architectures, compared to a highly optimized traditional multiplication, until matrix sizes exceed 1000 or more, and even for matrix sizes of several thousand the benefit is typically marginal at best (around 10% or less). A more recent study (2016) observed benefits for matrices as small as 512 and a benefit around 20%. 
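A compact sketch of the recursion in Python, following the M1..M7 products and the recombination above (the power-of-two size requirement and the crossover value are illustrative; real implementations tune the crossover to the hardware):

```python
# Strassen's recursion for square matrices whose size is a power of two;
# below the crossover size the naive algorithm is used, as in practice.
def madd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def msub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def naive(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def strassen(A, B, crossover=2):
    n = len(A)
    if n <= crossover:                      # below the crossover, naive wins
        return naive(A, B)
    h = n // 2
    q = lambda M, r, c: [row[c:c + h] for row in M[r:r + h]]
    A11, A12, A21, A22 = q(A, 0, 0), q(A, 0, h), q(A, h, 0), q(A, h, h)
    B11, B12, B21, B22 = q(B, 0, 0), q(B, 0, h), q(B, h, 0), q(B, h, h)
    M1 = strassen(madd(A11, A22), madd(B11, B22), crossover)
    M2 = strassen(madd(A21, A22), B11, crossover)
    M3 = strassen(A11, msub(B12, B22), crossover)
    M4 = strassen(A22, msub(B21, B11), crossover)
    M5 = strassen(madd(A11, A12), B22, crossover)
    M6 = strassen(msub(A21, A11), madd(B11, B12), crossover)
    M7 = strassen(msub(A12, A22), madd(B21, B22), crossover)
    C11 = madd(msub(madd(M1, M4), M5), M7)  # M1 + M4 - M5 + M7
    C12 = madd(M3, M5)                      # M3 + M5
    C21 = madd(M2, M4)                      # M2 + M4
    C22 = madd(msub(madd(M1, M3), M2), M6)  # M1 - M2 + M3 + M6
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]], crossover=1))
# [[19, 22], [43, 50]]
```

Each call makes 7 recursive products instead of 8, which is exactly the saving analyzed in the complexity section below.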
== Improvements to Strassen algorithm == It is possible to reduce the number of matrix additions by instead using the following form discovered by Winograd in 1971: [ a b c d ] [ A C B D ] = [ t + b × B w + v + ( a + b − c − d ) × D w + u + d × ( B + C − A − D ) w + u + v ] {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}{\begin{bmatrix}A&C\\B&D\end{bmatrix}}={\begin{bmatrix}t+b{\color {red}\times }B&w+v+(a+b-c-d){\color {red}\times }D\\w+u+d{\color {red}\times }(B+C-A-D)&w+u+v\end{bmatrix}}} where t = a × A , u = ( c − a ) × ( C − D ) , v = ( c + d ) × ( C − A ) , w = t + ( c + d − a ) × ( A + D − C ) {\displaystyle t=a{\color {red}\times }A,\;u=(c-a){\color {red}\times }(C-D),\;v=(c+d){\color {red}\times }(C-A),\;w=t+(c+d-a){\color {red}\times }(A+D-C)} . This reduces the number of matrix additions and subtractions from 18 to 15. The number of matrix multiplications is still 7, and the asymptotic complexity is the same. The algorithm was further optimised in 2017 using an alternative basis, reducing the number of matrix additions per bilinear step to 12 while maintaining the number of matrix multiplications, and again in 2023: A 22 = A 12 − A 21 + A 22 ; B 22 = B 12 − B 21 + B 22 , {\displaystyle {\begin{aligned}A_{22}&=A_{12}-A_{21}+A_{22};\\B_{22}&=B_{12}-B_{21}+B_{22},\end{aligned}}} C 12 = C 12 − C 22 ; C 21 = C 22 − C 21 , {\displaystyle {\begin{aligned}C_{12}&=C_{12}-C_{22};\\C_{21}&=C_{22}-C_{21},\end{aligned}}} == Asymptotic complexity == The outline of the algorithm above showed that one can get away with just 7, instead of the traditional 8, matrix-matrix multiplications for the sub-blocks of the matrix. 
On the other hand, one has to do additions and subtractions of blocks, though this is of no concern for the overall complexity: Adding matrices of size N / 2 {\displaystyle N/2} requires only ( N / 2 ) 2 {\displaystyle (N/2)^{2}} operations whereas multiplication is substantially more expensive (traditionally 2 ( N / 2 ) 3 {\displaystyle 2(N/2)^{3}} addition or multiplication operations). The question then is how many operations exactly one needs for Strassen's algorithms, and how this compares with the standard matrix multiplication that takes approximately 2 N 3 {\displaystyle 2N^{3}} (where N = 2 n {\displaystyle N=2^{n}} ) arithmetic operations, i.e. an asymptotic complexity Θ ( N 3 ) {\displaystyle \Theta (N^{3})} . The number of additions and multiplications required in the Strassen algorithm can be calculated as follows: let f ( n ) {\displaystyle f(n)} be the number of operations for a 2 n × 2 n {\displaystyle 2^{n}\times 2^{n}} matrix. Then by recursive application of the Strassen algorithm, we see that f ( n ) = 7 f ( n − 1 ) + l 4 n {\displaystyle f(n)=7f(n-1)+l4^{n}} , for some constant l {\displaystyle l} that depends on the number of additions performed at each application of the algorithm. Hence f ( n ) = ( 7 + o ( 1 ) ) n {\displaystyle f(n)=(7+o(1))^{n}} , i.e., the asymptotic complexity for multiplying matrices of size N = 2 n {\displaystyle N=2^{n}} using the Strassen algorithm is O ( [ 7 + o ( 1 ) ] n ) = O ( N log 2 ⁡ 7 + o ( 1 ) ) ≈ O ( N 2.8074 ) {\displaystyle O([7+o(1)]^{n})=O(N^{\log _{2}7+o(1)})\approx O(N^{2.8074})} . The reduction in the number of arithmetic operations however comes at the price of a somewhat reduced numerical stability, and the algorithm also requires significantly more memory compared to the naive algorithm. 
Both initial matrices must have their dimensions expanded to the next power of 2, which results in storing up to four times as many elements, and the seven auxiliary matrices each contain a quarter of the elements in the expanded ones. Strassen's algorithm needs to be compared to the "naive" way of doing the matrix multiplication that would require 8 instead of 7 multiplications of sub-blocks. This would then give rise to the complexity one expects from the standard approach: O ( 8 n ) = O ( N log 2 ⁡ 8 ) = O ( N 3 ) {\displaystyle O(8^{n})=O(N^{\log _{2}8})=O(N^{3})} . The comparison of these two algorithms shows that asymptotically, Strassen's algorithm is faster: There exists a size N threshold {\displaystyle N_{\text{threshold}}} so that matrices that are larger are more efficiently multiplied with Strassen's algorithm than the "traditional" way. However, the asymptotic statement does not imply that Strassen's algorithm is always faster even for small matrices, and in practice this is in fact not the case: For small matrices, the cost of the additional additions of matrix blocks outweighs the savings in the number of multiplications. There are also other factors not captured by the analysis above, such as the difference in cost on today's hardware between loading data from memory onto processors vs. the cost of actually doing operations on this data. As a consequence of these sorts of considerations, Strassen's algorithm is typically only used on "large" matrices. This kind of effect is even more pronounced with alternative algorithms such as the one by Coppersmith and Winograd: While asymptotically even faster, the cross-over point N threshold {\displaystyle N_{\text{threshold}}} is so large that the algorithm is not generally used on matrices one encounters in practice. === Rank or bilinear complexity === The bilinear complexity or rank of a bilinear map is an important concept in the asymptotic complexity of matrix multiplication. 
The rank of a bilinear map ϕ : A × B → C {\displaystyle \phi :\mathbf {A} \times \mathbf {B} \rightarrow \mathbf {C} } over a field F is defined as (somewhat of an abuse of notation) R ( ϕ / F ) = min { r | ∃ f i ∈ A ∗ , g i ∈ B ∗ , w i ∈ C , ∀ a ∈ A , b ∈ B , ϕ ( a , b ) = ∑ i = 1 r f i ( a ) g i ( b ) w i } {\displaystyle R(\phi /\mathbf {F} )=\min \left\{r\left|\exists f_{i}\in \mathbf {A} ^{*},g_{i}\in \mathbf {B} ^{*},w_{i}\in \mathbf {C} ,\forall \mathbf {a} \in \mathbf {A} ,\mathbf {b} \in \mathbf {B} ,\phi (\mathbf {a} ,\mathbf {b} )=\sum _{i=1}^{r}f_{i}(\mathbf {a} )g_{i}(\mathbf {b} )w_{i}\right.\right\}} In other words, the rank of a bilinear map is the length of its shortest bilinear computation. The existence of Strassen's algorithm shows that the rank of 2 × 2 {\displaystyle 2\times 2} matrix multiplication is no more than seven. To see this, let us express this algorithm (alongside the standard algorithm) as such a bilinear computation. In the case of matrices, the dual spaces A* and B* consist of maps into the field F induced by a scalar double-dot product, (i.e. in this case the sum of all the entries of a Hadamard product.) It can be shown that the total number of elementary multiplications L {\displaystyle L} required for matrix multiplication is tightly asymptotically bound to the rank R {\displaystyle R} , i.e. L = Θ ( R ) {\displaystyle L=\Theta (R)} , or more specifically, since the constants are known, R / 2 ≤ L ≤ R {\displaystyle R/2\leq L\leq R} . One useful property of the rank is that it is submultiplicative for tensor products, and this enables one to show that 2 n × 2 n × 2 n {\displaystyle 2^{n}\times 2^{n}\times 2^{n}} matrix multiplication can be accomplished with no more than 7 n {\displaystyle 7n} elementary multiplications for any n {\displaystyle n} . 
(This n {\displaystyle n} -fold tensor product of the 2 × 2 × 2 {\displaystyle 2\times 2\times 2} matrix multiplication map with itself — an n {\displaystyle n} -th tensor power—is realized by the recursive step in the algorithm shown.) === Cache behavior === Strassen's algorithm is cache oblivious. Analysis of its cache behavior algorithm has shown it to incur Θ ( 1 + n 2 b + n log 2 ⁡ 7 b M ) {\displaystyle \Theta \left(1+{\frac {n^{2}}{b}}+{\frac {n^{\log _{2}7}}{b{\sqrt {M}}}}\right)} cache misses during its execution, assuming an idealized cache of size M {\displaystyle M} (i.e. with M / b {\displaystyle M/b} lines of length b {\displaystyle b} ).: 13  == Implementation considerations == The description above states that the matrices are square, and the size is a power of two, and that padding should be used if needed. This restriction allows the matrices to be split in half, recursively, until limit of scalar multiplication is reached. The restriction simplifies the explanation, and analysis of complexity, but is not actually necessary; and in fact, padding the matrix as described will increase the computation time and can easily eliminate the fairly narrow time savings obtained by using the method in the first place. A good implementation will observe the following: It is not necessary or desirable to use the Strassen algorithm down to the limit of scalars. Compared to conventional matrix multiplication, the algorithm adds a considerable O ( n 2 ) {\displaystyle O(n^{2})} workload in addition/subtractions; so below a certain size, it will be better to use conventional multiplication. Thus, for instance, a 1600 × 1600 {\displaystyle 1600\times 1600} does not need to be padded to 2048 × 2048 {\displaystyle 2048\times 2048} , since it could be subdivided down to 25 × 25 {\displaystyle 25\times 25} matrices and conventional multiplication can then be used at that level. The method can indeed be applied to square matrices of any dimension. 
If the dimension is even, they are split in half as described. If the dimension is odd, zero padding by one row and one column is applied first. Such padding can be applied on-the-fly and lazily, and the extra rows and columns discarded as the result is formed. For instance, suppose the matrices are 199 × 199 {\displaystyle 199\times 199} . They can be split so that the upper-left portion is 100 × 100 {\displaystyle 100\times 100} and the lower-right is 99 × 99 {\displaystyle 99\times 99} . Wherever the operations require it, dimensions of 99 {\displaystyle 99} are zero padded to 100 {\displaystyle 100} first. Note, for instance, that the product M 2 {\displaystyle M_{2}} is only used in the lower row of the output, so is only required to be 99 {\displaystyle 99} rows high; and thus the left factor A 21 + A 22 {\displaystyle A_{21}+A_{22}} used to generate it need only be 99 {\displaystyle 99} rows high; accordingly, there is no need to pad that sum to 100 {\displaystyle 100} rows; it is only necessary to pad A 22 {\displaystyle A_{22}} to 100 {\displaystyle 100} columns to match A 21 {\displaystyle A_{21}} . Furthermore, there is no need for the matrices to be square. Non-square matrices can be split in half using the same methods, yielding smaller non-square matrices. If the matrices are sufficiently non-square it will be worthwhile reducing the initial operation to more square products, using simple methods which are essentially O ( n 2 ) {\displaystyle O(n^{2})} , for instance: A product of size [ 2 N × N ] ∗ [ N × 10 N ] {\displaystyle [2N\times N]\ast [N\times 10N]} can be done as 20 separate [ N × N ] ∗ [ N × N ] {\displaystyle [N\times N]\ast [N\times N]} operations, arranged to form the result; A product of size [ N × 10 N ] ∗ [ 10 N × N ] {\displaystyle [N\times 10N]\ast [10N\times N]} can be done as 10 separate [ N × N ] ∗ [ N × N ] {\displaystyle [N\times N]\ast [N\times N]} operations, summed to form the result. 
These techniques will make the implementation more complicated, compared to simply padding to a power-of-two square; however, it is a reasonable assumption that anyone undertaking an implementation of Strassen, rather than conventional multiplication, will place a higher priority on computational efficiency than on simplicity of the implementation. In practice, Strassen's algorithm can be implemented to attain better performance than conventional multiplication even for matrices as small as 500 × 500 {\displaystyle 500\times 500} , for matrices that are not at all square, and without requiring workspace beyond buffers that are already needed for a high-performance conventional multiplication. == See also == Computational complexity of mathematical operations Gauss–Jordan elimination Computational complexity of matrix multiplication Z-order curve Karatsuba algorithm, for multiplying n-digit integers in O ( n log 2 ⁡ 3 ) {\displaystyle O(n^{\log _{2}3})} instead of in O ( n 2 ) {\displaystyle O(n^{2})} time A similar complex multiplication algorithm multiplies two complex numbers using 3 real multiplications instead of 4 Toom-Cook algorithm, a faster generalization of the Karatsuba algorithm that permits recursive divide-and-conquer decomposition into more than 2 blocks at a time == References == Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 28: Section 28.2: Strassen's algorithm for matrix multiplication, pp. 735–741. Knuth, Donald (1997). The Art of Computer Programming, Seminumerical Algorithms. Vol. II (3rd ed.). Addison-Wesley. ISBN 0-201-89684-2. == External links == Weisstein, Eric W. "Strassen's Formulas". MathWorld. (also includes formulas for fast matrix inversion) Tyler J. Earnest, Strassen's Algorithm on the Cell Broadband Engine
Wikipedia/Strassen_algorithm
Bottom-up and top-down are strategies of composition and decomposition in fields as diverse as information processing and ordering knowledge, software, humanistic and scientific theories (see systemics), and management and organization. In practice they can be seen as a style of thinking, teaching, or leadership. A top-down approach (also known as stepwise design and stepwise refinement and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional subsystems in a reverse engineering fashion. In a top-down approach an overview of the system is formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top-down model is often specified with the assistance of black boxes, which makes it easier to manipulate. However, black boxes may fail to clarify elementary mechanisms or be detailed enough to realistically validate the model. A top-down approach starts with the big picture, then breaks down into smaller segments. A bottom-up approach is the piecing together of systems to give rise to more complex systems, thus making the original systems subsystems of the emergent system. Bottom-up processing is a type of information processing based on incoming data from the environment to form a perception. From a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a perception (output that is "built up" from processing to final cognition). In a bottom-up approach the individual base elements of the system are first specified in great detail. 
These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and completeness. But "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization as opposed to meeting a global purpose. == Computer science == === Software development === In the software development process, the top-down and bottom-up approaches play a key role. Top-down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system. Top-down approaches are implemented by attaching the stubs in place of the module. But these delay testing of the ultimate functional units of a system until significant design is complete. Bottom-up emphasizes coding and early testing, which can begin as soon as the first module has been specified. But this approach runs the risk that modules may be coded without having a clear idea of how they link to other parts of the system, and that such linking may not be as easy as first thought. Re-usability of code is one of the main benefits of a bottom-up approach. Top-down design was promoted in the 1970s by IBM researchers Harlan Mills and Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index. The engineering and management success of this project led to the spread of the top-down approach through IBM and the rest of the computer industry. Among other achievements, Niklaus Wirth, the developer of Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. 
Since Niklaus Wirth went on to develop languages such as Modula and Oberon (where one could define a module before knowing about the entire program specification), one can infer that top-down programming was not strictly what he promoted. Top-down methods were favored in software engineering until the late 1980s, and object-oriented programming assisted in demonstrating the idea that both aspects of top-down and bottom-up programming could be used. Modern software design approaches usually combine top-down and bottom-up approaches. Although an understanding of the complete system is usually considered necessary for good design—leading theoretically to a top-down approach—most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom-up flavor. === Programming === Top-down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top-down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions and the process is repeated. These compartmentalized subroutines eventually will perform actions so simple they can be easily and concisely coded. When all the various subroutines have been coded the program is ready for testing. By defining how the application comes together at a high level, lower-level work can be self-contained. In a bottom-up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked, sometimes at many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small, but eventually grow in complexity and completeness. 
Object-oriented programming (OOP) is a paradigm that uses "objects" to design applications and computer programs. In mechanical engineering with software programs such as Pro/ENGINEER, Solidworks, and Autodesk Inventor users can design products as pieces not part of the whole and later add those pieces together to form assemblies like building with Lego. Engineers call this "piece part design". === Parsing === Parsing is the process of analyzing an input sequence (such as that read from a file or a keyboard) in order to determine its grammatical structure. This method is used in the analysis of both natural languages and computer languages, as in a compiler. Bottom-up parsing is parsing strategy that recognizes the text's lowest-level small details first, before its mid-level structures, and leaves the highest-level overall structure to last. In top-down parsing, on the other hand, one first looks at the highest level of the parse tree and works down the parse tree by using the rewriting rules of a formal grammar. == Natural sciences == === Nanotechnology === Top-down and bottom-up are two approaches for the manufacture of products. These terms were first applied to the field of nanotechnology by the Foresight Institute in 1989 to distinguish between molecular manufacturing (to mass-produce large atomically precise objects) and conventional manufacturing (which can mass-produce large objects that are not atomically precise). Bottom-up approaches seek to have smaller (usually molecular) components built up into more complex assemblies, while top-down approaches seek to create nanoscale devices by using larger, externally controlled ones to direct their assembly. Certain valuable nanostructures, such as Silicon nanowires, can be fabricated using either approach, with processing methods selected on the basis of targeted applications. 
A top-down approach often uses the traditional workshop or microfabrication methods where externally controlled tools are used to cut, mill, and shape materials into the desired shape and order. Micropatterning techniques, such as photolithography and inkjet printing belong to this category. Vapor treatment can be regarded as a new top-down secondary approaches to engineer nanostructures. Bottom-up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to (a) self-organize or self-assemble into some useful conformation, or (b) rely on positional assembly. These approaches use the concepts of molecular self-assembly and/or molecular recognition. See also Supramolecular chemistry. Such bottom-up approaches should, broadly speaking, be able to produce devices in parallel and much cheaper than top-down methods but could potentially be overwhelmed as the size and complexity of the desired assembly increases. === Neuroscience and psychology === These terms are also employed in cognitive sciences including neuroscience, cognitive neuroscience and cognitive psychology to discuss the flow of information in processing. Typically, sensory input is considered bottom-up, and higher cognitive processes, which have more information from other sources, are considered top-down. A bottom-up process is characterized by an absence of higher-level direction in sensory processing, whereas a top-down process is characterized by a high level of direction of sensory processing by more cognition, such as goals or targets (Biederman, 19). According to college teaching notes written by Charles Ramskov, Irvin Rock, Neiser, and Richard Gregory claim that the top-down approach involves perception that is an active and constructive process. Additionally, it is an approach not directly given by stimulus input, but is the result of stimulus, internal hypotheses, and expectation interactions. 
According to theoretical synthesis, "when a stimulus is presented short and clarity is uncertain that gives a vague stimulus, perception becomes a top-down approach." Conversely, psychology defines bottom-up processing as an approach in which there is a progression from the individual elements to the whole. According to Ramskov, one proponent of bottom-up approach, Gibson, claims that it is a process that includes visual perception that needs information available from proximal stimulus produced by the distal stimulus. Theoretical synthesis also claims that bottom-up processing occurs "when a stimulus is presented long and clearly enough." Certain cognitive processes, such as fast reactions or quick visual identification, are considered bottom-up processes because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top-down because they are goal directed. Neurologically speaking, some areas of the brain, such as area V1 mostly have bottom-up connections. Other areas, such as the fusiform gyrus have inputs from higher brain areas and are considered to have top-down influence. The study of visual attention is an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower are visually salient. The information that caused you to attend to the flower came to you in a bottom-up fashion—your attention was not contingent on knowledge of the flower: the outside stimulus was sufficient on its own. Contrast this situation with one in which you are looking for a flower. You have a representation of what you are looking for. When you see the object, you are looking for, it is salient. This is an example of the use of top-down information. In cognition, two thinking approaches are distinguished. "Top-down" (or "big chunk") is stereotypically the visionary, or the person who sees the larger picture and overview. 
Such people focus on the big picture and from that derive the details to support it. "Bottom-up" (or "small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape. The expression "seeing the wood for the trees" references the two styles of cognition. Studies in task switching and response selection show that there are differences through the two types of processing. Top-down processing primarily focuses on the attention side, such as task repetition. Bottom-up processing focuses on item-based learning, such as finding the same object over and over again. Implications for understanding attentional control of response selection in conflict situations are discussed. This also applies to how we structure these processing neurologically. With structuring information interfaces in our neurological processes for procedural learning. These processes were proven effective to work in our interface design. But although both top-down principles were effective in guiding interface design; they were not sufficient. They can be combined with iterative bottom-up methods to produce usable interfaces . Undergraduate (or bachelor) students are taught the basis of top-down bottom-up processing around their third year in the program. Going through four main parts of the processing when viewing it from a learning perspective. The two main definitions are that bottom-up processing is determined directly by environmental stimuli rather than the individual's knowledge and expectations. === Public health === Both top-down and bottom-up approaches are used in public health. There are many examples of top-down programs, often run by governments or large inter-governmental organizations; many of these are disease-or issue-specific, such as HIV control or smallpox eradication. Examples of bottom-up programs include many small NGOs set up to improve local access to healthcare. 
But many programs seek to combine both approaches; for instance, guinea worm eradication, a single-disease international program currently run by the Carter Center has involved the training of many local volunteers, boosting bottom-up capacity, as have international programs for hygiene, sanitation, and access to primary healthcare. === Ecology === In ecology top-down control refers to when a top predator controls the structure or population dynamics of the ecosystem. The interactions between these top predators and their prey are what influences lower trophic levels. Changes in the top level of trophic levels have an inverse effect on the lower trophic levels. Top-down control can have negative effects on the surrounding ecosystem if there is a drastic change in the number of predators. The classic example is of kelp forest ecosystems. In such ecosystems, sea otters are a keystone predator. They prey on urchins, which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest creating urchin barrens. This reduces the diversity of the ecosystem as a whole and can have detrimental effects on all of the other organisms. In other words, such ecosystems are not controlled by productivity of the kelp, but rather, a top predator. One can see the inverse effect that top-down control has in this example; when the population of otters decreased, the population of the urchins increased. Bottom-up control in ecosystems refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem structure. If there are not enough resources or producers in the ecosystem, there is not enough energy left for the rest of the animals in the food chain because of biomagnification and ecological efficiency. An example would be how plankton populations are controlled by the availability of nutrients. 
Plankton populations tend to be higher and more complex in areas where upwelling brings nutrients to the surface. There are many different examples of these concepts. It is common for populations to be influenced by both types of control, and there are still debates going on as to which type of control affects food webs in certain ecosystems. == Management and organization == In the fields of management and organization, the terms "top-down" and "bottom-up" are used to describe how decisions are made and/or how change is implemented. A "top-down" approach is where an executive decision maker or other top person makes the decisions of how something should be done. This approach is disseminated under their authority to lower levels in the hierarchy, who are, to a greater or lesser extent, bound by them. For example, when wanting to make an improvement in a hospital, a hospital administrator might decide that a major change (such as implementing a new program) is needed, and then use a planned approach to drive the changes down to the frontline staff. A bottom-up approach to changes is one that works from the grassroots, and originates in a flat structure with people working together, causing a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a "bottom-up" decision. A bottom-up approach can be thought of as "an incremental change approach that represents an emergent process cultivated and upheld primarily by frontline workers". Positive aspects of top-down approaches include their efficiency and superb overview of higher levels; and external effects can be internalized. On the negative side, if reforms are perceived to be imposed "from above", it can be difficult for lower levels to accept them. Evidence suggests this to be true regardless of the content of reforms. A bottom-up approach allows for more experimentation and a better feeling for what is needed at the bottom. 
Other evidence suggests that there is a third combination approach to change. === Corporate environment (Performance management) === Top-down and bottom-up planning are two fundamental approaches in enterprise performance management (EPM), each offering distinct advantages. Top-down planning begins with senior management setting overarching strategic goals, which are then disseminated throughout the organization. This approach ensures alignment with the company's vision and facilitates uniform implementation across departments. Conversely, bottom-up planning starts at the departmental or team level, where specific goals and plans are developed based on detailed operational insights. These plans are then aggregated to form the organization's overall strategy, ensuring that ground-level insights inform higher-level decisions. Many organizations adopt a hybrid approach, known as the countercurrent or integrated planning method, to leverage the strengths of both top-down and bottom-up planning. In this model, strategic objectives set by leadership are informed by operational data from various departments, creating a dynamic and iterative planning process. This integration enhances collaboration, improves data accuracy, and ensures that strategies are both ambitious and grounded in operational realities. Financial planning & analysis (FP&A) teams play a crucial role in harmonizing these approaches, utilizing tools like driver-based planning and AI-assisted forecasting to create flexible, data-driven plans that adapt to changing business conditions. === Product design and development === During the development of new products, designers and engineers rely on both bottom-up and top-down approaches. The bottom-up approach is being used when off-the-shelf or existing components are selected and integrated into the product. An example includes selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. 
In a top-down approach, a custom fastener would be designed such that it would fit properly in the receiving components. For perspective, for a product with more restrictive requirements (such as weight, geometry, safety, environment), such as a spacesuit, a more top-down approach is taken and almost everything is custom designed. == Architecture == Often the École des Beaux-Arts school of design is said to have primarily promoted top-down design because it taught that an architectural design should begin with a parti, a basic plan drawing of the overall project. By contrast, the Bauhaus focused on bottom-up design. This method manifested itself in the study of translating small-scale organizational systems to a larger, more architectural scale (as with the wood panel carving and furniture design). == Philosophy and ethics == Top-down reasoning in ethics is when the reasoner starts from abstract universalizable principles and then reasons down them to particular situations. Bottom-up reasoning occurs when the reasoner starts from intuitive particular situational judgements and then reasons up to principles. Reflective equilibrium occurs when there is interaction between top-down and bottom-up reasoning until both are in harmony. That is to say, when universalizable abstract principles are reflectively found to be in equilibrium with particular intuitive judgements. The process occurs when cognitive dissonance occurs when reasoners try to resolve top-down with bottom-up reasoning, and adjust one or the other, until they are satisfied, they have found the best combinations of principles and situational judgements. == See also == Formal concept analysis Pseudocode The Cathedral and the Bazaar == References == === Sources === == Further reading == == External links == "Program Development by Stepwise Refinement", Communications of the ACM, Vol. 14, No. 4, April (1971) Integrated Parallel Bottom-up and Top-down Approach. 
In Proceedings of the International Emergency Management Society's Fifth Annual Conference (TIEMS 98), May 19–22, Washington DC, USA (1998). Changing Your Mind: On the Contributions of Top-Down and Bottom-Up Guidance in Visual Search for Feature Singletons, Journal of Experimental Psychology: Human Perception and Performance, Vol. 29, No. 2, 483–502, 2003. K. Eric Drexler and Christine Peterson, Nanotechnology and Enabling Technologies, Foresight Briefing No. 2, 1989. Empowering sustained patient safety: the benefits of combining top-down and bottom-up approaches
Wikipedia/Bottom-up_design
In computer science, the Akra–Bazzi method, or Akra–Bazzi theorem, is used to analyze the asymptotic behavior of the mathematical recurrences that appear in the analysis of divide and conquer algorithms where the sub-problems have substantially different sizes. It is a generalization of the master theorem for divide-and-conquer recurrences, which assumes that the sub-problems have equal size. It is named after mathematicians Mohamad Akra and Louay Bazzi. == Formulation == The Akra–Bazzi method applies to recurrence formulas of the form: T ( x ) = g ( x ) + ∑ i = 1 k a i T ( b i x + h i ( x ) ) for x ≥ x 0 . {\displaystyle T(x)=g(x)+\sum _{i=1}^{k}a_{i}T(b_{i}x+h_{i}(x))\qquad {\text{for }}x\geq x_{0}.} The conditions for usage are: sufficient base cases are provided a i {\displaystyle a_{i}} and b i {\displaystyle b_{i}} are constants for all i {\displaystyle i} a i > 0 {\displaystyle a_{i}>0} for all i {\displaystyle i} 0 < b i < 1 {\displaystyle 0<b_{i}<1} for all i {\displaystyle i} | g ′ ( x ) | ∈ O ( x c ) {\displaystyle \left|g'(x)\right|\in O(x^{c})} , where c is a constant and O notates Big O notation | h i ( x ) | ∈ O ( x ( log ⁡ x ) 2 ) {\displaystyle \left|h_{i}(x)\right|\in O\left({\frac {x}{(\log x)^{2}}}\right)} for all i {\displaystyle i} x 0 {\displaystyle x_{0}} is a constant The asymptotic behavior of T ( x ) {\displaystyle T(x)} is found by determining the value of p {\displaystyle p} for which ∑ i = 1 k a i b i p = 1 {\displaystyle \sum _{i=1}^{k}a_{i}b_{i}^{p}=1} and plugging that value into the equation: T ( x ) ∈ Θ ( x p ( 1 + ∫ 1 x g ( u ) u p + 1 d u ) ) {\displaystyle T(x)\in \Theta \left(x^{p}\left(1+\int _{1}^{x}{\frac {g(u)}{u^{p+1}}}du\right)\right)} (see Θ). Intuitively, h i ( x ) {\displaystyle h_{i}(x)} represents a small perturbation in the index of T {\displaystyle T} . 
By noting that ⌊ b i x ⌋ = b i x + ( ⌊ b i x ⌋ − b i x ) {\displaystyle \lfloor b_{i}x\rfloor =b_{i}x+(\lfloor b_{i}x\rfloor -b_{i}x)} and that the absolute value of ⌊ b i x ⌋ − b i x {\displaystyle \lfloor b_{i}x\rfloor -b_{i}x} is always between 0 and 1, h i ( x ) {\displaystyle h_{i}(x)} can be used to ignore the floor function in the index. Similarly, one can also ignore the ceiling function. For example, T ( n ) = n + T ( 1 2 n ) {\displaystyle T(n)=n+T\left({\frac {1}{2}}n\right)} and T ( n ) = n + T ( ⌊ 1 2 n ⌋ ) {\displaystyle T(n)=n+T\left(\left\lfloor {\frac {1}{2}}n\right\rfloor \right)} will, as per the Akra–Bazzi theorem, have the same asymptotic behavior. == Example == Suppose T ( n ) {\displaystyle T(n)} is defined as 1 for integers 0 ≤ n ≤ 3 {\displaystyle 0\leq n\leq 3} and n 2 + 7 4 T ( ⌊ 1 2 n ⌋ ) + T ( ⌈ 3 4 n ⌉ ) {\displaystyle n^{2}+{\frac {7}{4}}T\left(\left\lfloor {\frac {1}{2}}n\right\rfloor \right)+T\left(\left\lceil {\frac {3}{4}}n\right\rceil \right)} for integers n > 3 {\displaystyle n>3} . In applying the Akra–Bazzi method, the first step is to find the value of p {\displaystyle p} for which 7 4 ( 1 2 ) p + ( 3 4 ) p = 1 {\displaystyle {\frac {7}{4}}\left({\frac {1}{2}}\right)^{p}+\left({\frac {3}{4}}\right)^{p}=1} . In this example, p = 2 {\displaystyle p=2} . Then, using the formula, the asymptotic behavior can be determined as follows: T ( x ) ∈ Θ ( x p ( 1 + ∫ 1 x g ( u ) u p + 1 d u ) ) = Θ ( x 2 ( 1 + ∫ 1 x u 2 u 3 d u ) ) = Θ ( x 2 ( 1 + ln ⁡ x ) ) = Θ ( x 2 log ⁡ x ) . {\displaystyle {\begin{aligned}T(x)&\in \Theta \left(x^{p}\left(1+\int _{1}^{x}{\frac {g(u)}{u^{p+1}}}\,du\right)\right)\\&=\Theta \left(x^{2}\left(1+\int _{1}^{x}{\frac {u^{2}}{u^{3}}}\,du\right)\right)\\&=\Theta (x^{2}(1+\ln x))\\&=\Theta (x^{2}\log x).\end{aligned}}} == Significance == The Akra–Bazzi method is more useful than most other techniques for determining asymptotic behavior because it covers such a wide variety of cases. 
Its primary application is the approximation of the running time of many divide-and-conquer algorithms. For example, in the merge sort, the number of comparisons required in the worst case, which is roughly proportional to its runtime, is given recursively as T ( 1 ) = 0 {\displaystyle T(1)=0} and T ( n ) = T ( ⌊ 1 2 n ⌋ ) + T ( ⌈ 1 2 n ⌉ ) + n − 1 {\displaystyle T(n)=T\left(\left\lfloor {\frac {1}{2}}n\right\rfloor \right)+T\left(\left\lceil {\frac {1}{2}}n\right\rceil \right)+n-1} for integers n > 0 {\displaystyle n>0} , and can thus be computed using the Akra–Bazzi method to be Θ ( n log ⁡ n ) {\displaystyle \Theta (n\log n)} . == See also == Master theorem (analysis of algorithms) Asymptotic complexity == References == == External links == O Método de Akra-Bazzi na Resolução de Equações de Recorrência (in Portuguese)
Wikipedia/Akra–Bazzi_method
External memory graph traversal is a type of graph traversal optimized for accessing externally stored memory. == Background == Graph traversal is a subroutine in most graph algorithms. The goal of a graph traversal algorithm is to visit (and/or process) every node of a graph. Graph traversal algorithms, like breadth-first search and depth-first search, are analyzed using the von Neumann model, which assumes uniform memory access cost. This view neglects the fact that, for huge instances, part of the graph resides on disk rather than in internal memory. Since accessing the disk is orders of magnitude slower than accessing internal memory, efficient traversal of external memory is needed. == External memory model == For external memory algorithms the external memory model by Aggarwal and Vitter is used for analysis. A machine is specified by three parameters: M, B and D. M is the size of the internal memory, B is the block size of a disk and D is the number of parallel disks. The measure of performance for an external memory algorithm is the number of I/Os it performs. == External memory breadth-first search == The breadth-first search algorithm starts at a root node and visits every node at depth one. If there are no more unvisited nodes at the current depth, nodes at the next depth are traversed. Eventually, every node of the graph has been visited. === Munagala and Ranade === For an undirected graph G {\displaystyle G} , Munagala and Ranade proposed the following external memory algorithm: Let L ( t ) {\displaystyle L(t)} denote the nodes in breadth-first search level t and let A ( t ) := N ( L ( t − 1 ) ) {\displaystyle A(t):=N(L(t-1))} be the multi-set of neighbors of level t-1. For every t, L ( t ) {\displaystyle L(t)} can be constructed from A ( t ) {\displaystyle A(t)} by transforming it into a set and excluding previously visited nodes from it.
Create A ( t ) {\displaystyle A(t)} by accessing the adjacency list of every vertex in L ( t − 1 ) {\displaystyle L(t-1)} . This step requires O ( | L ( t − 1 ) | + | A ( t ) | / ( D ⋅ B ) ) {\displaystyle O(|L(t-1)|+|A(t)|/(D\cdot B))} I/Os. Next A ′ ( t ) {\displaystyle A'(t)} is created from A ( t ) {\displaystyle A(t)} by removing duplicates. This can be achieved via sorting of A ( t ) {\displaystyle A(t)} , followed by a scan and compaction phase needing O ( sort ⁡ ( | A | ) ) {\displaystyle O(\operatorname {sort} (|A|))} I/Os. L ( t ) := A ′ ( t ) ∖ { L ( t − 1 ) ∪ L ( t − 2 ) } {\displaystyle L(t):=A'(t)\backslash \{L(t-1)\cup L(t-2)\}} is calculated by a parallel scan over L ( t − 1 ) {\displaystyle L(t-1)} and L ( t − 2 ) {\displaystyle L(t-2)} and requires O ( ( | A ( t ) | + | L ( t − 1 ) | + | L ( t − 2 ) | ) / ( D ⋅ B ) ) {\displaystyle O((|A(t)|+|L(t-1)|+|L(t-2)|)/(D\cdot B))} I/Os. The overall number of I/Os of this algorithm follows from the observation that ∑ t | A ( t ) | = O ( m ) {\displaystyle \sum _{t}|A(t)|=O(m)} and ∑ t | L ( t ) | = O ( n ) {\displaystyle \sum _{t}|L(t)|=O(n)} and is O ( n + sort ⁡ ( n + m ) ) {\displaystyle O(n+\operatorname {sort} (n+m))} . A visualization of the three described steps necessary to compute L(t) is depicted in the figure on the right. === Mehlhorn and Meyer === Mehlhorn and Meyer proposed an algorithm that is based on the algorithm of Munagala and Ranade (MR) and improves their result. It consists of two phases. In the first phase the graph is preprocessed; the second phase performs a breadth-first search using the information gathered in phase one. During the preprocessing phase the graph is partitioned into disjoint subgraphs S i , 0 ≤ i ≤ K {\displaystyle S_{i},\,0\leq i\leq K} with small diameter.
It further partitions the adjacency lists accordingly, by constructing an external file F = F 0 F 1 … F K − 1 {\displaystyle F=F_{0}F_{1}\dots F_{K-1}} , where F i {\displaystyle F_{i}} contains the adjacency list for all nodes in S i {\displaystyle S_{i}} . The breadth-first search phase is similar to the MR algorithm. In addition the algorithm maintains a sorted external file H. This file is initialized with F 0 {\displaystyle F_{0}} . Further, the nodes of any created breadth-first search level carry identifiers for the files F i {\displaystyle F_{i}} of their respective subgraphs S i {\displaystyle S_{i}} . Instead of using random accesses to construct L ( t ) {\displaystyle L(t)} the file H is used. Perform a parallel scan of sorted list L ( t − 1 ) {\displaystyle L(t-1)} and H. Extract the adjacency lists for nodes v ∈ L ( t − 1 ) {\displaystyle v\in L(t-1)} that can be found in H. The adjacency lists for the remaining nodes that could not be found in H need to be fetched. A scan over L ( t − 1 ) {\displaystyle L(t-1)} yields the partition identifiers. After sorting and deletion of duplicates, the respective files F i {\displaystyle F_{i}} can be concatenated into a temporary file F'. The missing adjacency lists can be extracted from F' with a scan. Next, the remaining adjacency lists are merged into H with a single pass. A ( t ) {\displaystyle A(t)} is created by a simple scan. The partition information is attached to each node in A ( t ) {\displaystyle A(t)} . The algorithm proceeds like the MR algorithm. Edges might be scanned more often in H, but unstructured I/Os in order to fetch adjacency lists are reduced. The overall number of I/Os for this algorithm is O ( n ⋅ ( n + m ) D ⋅ B + sort ⁡ ( n + m ) ) {\displaystyle O\left({\sqrt {\frac {n\cdot (n+m)}{D\cdot B}}}+\operatorname {sort} (n+m)\right)} == External memory depth-first search == The depth-first search algorithm explores a graph along each branch as deep as possible, before backtracking.
For directed graphs Buchsbaum, Goldwasser, Venkatasubramanian and Westbrook proposed an algorithm with O ( ( V + E / B ) log 2 ⁡ ( V / B ) + sort ⁡ ( E ) ) {\displaystyle O((V+E/B)\log _{2}(V/B)+\operatorname {sort} (E))} I/Os. This algorithm is based on a data structure called buffered repository tree (BRT). It stores a multi-set of items from an ordered universe. Items are identified by key. A BRT offers two operations: insert(T, x), which adds item x to T and needs O ( 1 / B log 2 ⁡ ( N / B ) ) {\displaystyle O(1/B\log _{2}(N/B))} amortized I/Os. N is the number of items added to the BRT. extract(T, k), which reports and deletes from T all items with key k. It requires O ( log 2 ⁡ ( N / B ) + S / B ) {\displaystyle O(\log _{2}(N/B)+S/B)} I/Os, where S is the size of the set returned by extract. The algorithm simulates an internal depth-first search algorithm. A stack S of nodes is held. During an iteration for the node v on top of S push an unvisited neighbor onto S and iterate. If there are no unvisited neighbors pop v. The difficulty is to determine whether a node is unvisited without doing Ω ( 1 ) {\displaystyle \Omega (1)} I/Os per edge. To do this, for a node v the incoming edges ⁠ ( x , v ) {\displaystyle (x,v)} ⁠ are put into a BRT D when v is first discovered. Further, outgoing edges (v,x) are put into a priority queue P(v), keyed by the rank in the adjacency list. For vertex u on top of S all edges (u,x) are extracted from D. Such edges only exist if x has been discovered since the last time u was on top of S (or since the start of the algorithm if u is the first time on top of S). For every edge (u,x) a delete(x) operation is performed on P(u). Finally a delete-min operation on ⁠ P ( u ) {\displaystyle P(u)} ⁠ yields the next unvisited node. If P(u) is empty, u is popped from S. Pseudocode for this algorithm is given below.
procedure BGVW-depth-first-search(G, v):
    let S be a stack, P[] a priority queue for each node and D a BRT
    S.push(v)
    while S is not empty:
        v := S.top()
        if v is not marked:
            mark(v)
        extract all edges (v, x) from D, ∀x: P[v].delete(x)
        if (u := P[v].delete-min()) is not null:
            S.push(u)
        else:
            S.pop()

procedure mark(v)
    put all edges (x, v) into D
    ∀ (v, x): put x into P[v]

== References ==
Wikipedia/External_memory_graph_traversal
In numerical linear algebra, the Cuthill–McKee algorithm (CM), named after Elizabeth Cuthill and James McKee, is an algorithm to permute a sparse matrix that has a symmetric sparsity pattern into a band matrix form with a small bandwidth. The reverse Cuthill–McKee algorithm (RCM) due to Alan George and Joseph Liu is the same algorithm but with the resulting index numbers reversed. In practice this generally results in less fill-in than the CM ordering when Gaussian elimination is applied. The Cuthill–McKee algorithm is a variant of the standard breadth-first search algorithm used in graph algorithms. It starts with a peripheral node and then generates levels R i {\displaystyle R_{i}} for i = 1 , 2 , . . {\displaystyle i=1,2,..} until all nodes are exhausted. The set R i + 1 {\displaystyle R_{i+1}} is created from set R i {\displaystyle R_{i}} by listing all vertices adjacent to all nodes in R i {\displaystyle R_{i}} . These nodes are ordered according to predecessors and degree. == Algorithm == Given a symmetric n × n {\displaystyle n\times n} matrix we visualize the matrix as the adjacency matrix of a graph. The Cuthill–McKee algorithm is then a relabeling of the vertices of the graph to reduce the bandwidth of the adjacency matrix. The algorithm produces an ordered n-tuple R {\displaystyle R} of vertices which is the new order of the vertices. First we choose a peripheral vertex (the vertex with the lowest degree) x {\displaystyle x} and set R := ( { x } ) {\displaystyle R:=(\{x\})} .
Then for i = 1 , 2 , … {\displaystyle i=1,2,\dots } we iterate the following steps while | R | < n {\displaystyle |R|<n} Construct the adjacency set A i {\displaystyle A_{i}} of R i {\displaystyle R_{i}} (with R i {\displaystyle R_{i}} the i-th component of R {\displaystyle R} ) and exclude the vertices we already have in R {\displaystyle R} A i := Adj ⁡ ( R i ) ∖ R {\displaystyle A_{i}:=\operatorname {Adj} (R_{i})\setminus R} Sort A i {\displaystyle A_{i}} ascending by minimum predecessor (the already-visited neighbor with the earliest position in R), and as a tiebreak ascending by vertex degree. Append A i {\displaystyle A_{i}} to the Result set R {\displaystyle R} . In other words, number the vertices according to a particular level structure (computed by breadth-first search) where the vertices in each level are visited in order of their predecessor's numbering from lowest to highest. Where the predecessors are the same, vertices are distinguished by degree (again ordered from lowest to highest). == See also == Graph bandwidth Sparse matrix == References == Cuthill–McKee documentation for the Boost C++ Libraries. A detailed description of the Cuthill–McKee algorithm. symrcm MATLAB's implementation of RCM. reverse_cuthill_mckee RCM routine from SciPy written in Cython.
Wikipedia/Cuthill–McKee_algorithm
Maze generation algorithms are automated methods for the creation of mazes. == Graph theory based methods == A maze can be generated by starting with a predetermined arrangement of cells (most commonly a rectangular grid but other arrangements are possible) with wall sites between them. This predetermined arrangement can be considered as a connected graph with the edges representing possible wall sites and the nodes representing cells. The purpose of the maze generation algorithm can then be considered to be making a subgraph in which it is challenging to find a route between two particular nodes. If the subgraph is not connected, then there are regions of the graph that are wasted because they do not contribute to the search space. If the graph contains loops, then there may be multiple paths between the chosen nodes. Because of this, maze generation is often approached as generating a random spanning tree. Loops, which can confound naive maze solvers, may be introduced by adding random edges to the result during the course of the algorithm. The animation shows the maze generation steps for a graph that is not on a rectangular grid. First, the computer creates a random planar graph G shown in blue, and its dual F shown in yellow. Second, the computer traverses F using a chosen algorithm, such as a depth-first search, coloring the path red. During the traversal, whenever a red edge crosses over a blue edge, the blue edge is removed. Finally, when all vertices of F have been visited, F is erased and two edges from G, one for the entrance and one for the exit, are removed. === Randomized depth-first search === This algorithm, also known as the "recursive backtracker" algorithm, is a randomized version of the depth-first search algorithm. Frequently implemented with a stack, this approach is one of the simplest ways to generate a maze using a computer. 
Consider the space for a maze being a large grid of cells (like a large chess board), each cell starting with four walls. Starting from a random cell, the computer then selects a random neighbouring cell that has not yet been visited. The computer removes the wall between the two cells and marks the new cell as visited, and adds it to the stack to facilitate backtracking. The computer continues this process, with a cell that has no unvisited neighbours being considered a dead-end. When at a dead-end it backtracks through the path until it reaches a cell with an unvisited neighbour, continuing the path generation by visiting this new, unvisited cell (creating a new junction). This process continues until every cell has been visited, causing the computer to backtrack all the way back to the beginning cell. We can be sure every cell is visited. As given above this algorithm involves deep recursion which may cause stack overflow issues on some computer architectures. The algorithm can be rearranged into a loop by storing backtracking information in the maze itself. This also provides a quick way to display a solution, by starting at any given point and backtracking to the beginning. Mazes generated with a depth-first search have a low branching factor and contain many long corridors, because the algorithm explores as far as possible along each branch before backtracking. ==== Recursive implementation ==== The depth-first search algorithm of maze generation is frequently implemented using backtracking. This can be described with the following recursive routine: Given a current cell as a parameter Mark the current cell as visited While the current cell has any unvisited neighbour cells Choose one of the unvisited neighbours Remove the wall between the current cell and the chosen cell Invoke the routine recursively for the chosen cell The routine is invoked once for any initial cell in the area.
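The recursive routine above can be sketched directly in Python (a minimal model in which a maze is just the set of removed walls between adjacent grid cells; all names are illustrative):

```python
import random

def carve(cell, width, height, visited, passages):
    """Recursive backtracker: carve passages outward from `cell`."""
    visited.add(cell)                        # mark the current cell as visited
    x, y = cell
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    random.shuffle(neighbours)               # choose unvisited neighbours in random order
    for nxt in neighbours:
        nx, ny = nxt
        if 0 <= nx < width and 0 <= ny < height and nxt not in visited:
            passages.add(frozenset([cell, nxt]))          # remove the wall between the cells
            carve(nxt, width, height, visited, passages)  # recurse into the chosen cell

visited, passages = set(), set()
carve((0, 0), 8, 8, visited, passages)
```

For an 8×8 grid the result is a spanning tree of the grid graph: all 64 cells are visited and exactly 63 walls are removed. Since Python limits recursion depth, this sketch only suits small grids, mirroring the stack-overflow caveat mentioned above.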
==== Iterative implementation (with stack) ==== A disadvantage of the first approach is a large depth of recursion – in the worst case, the routine may need to recur on every cell of the area being processed, which may exceed the maximum recursion stack depth in many environments. As a solution, the same backtracking method can be implemented with an explicit stack, which is usually allowed to grow much bigger with no harm. Choose the initial cell, mark it as visited and push it to the stack While the stack is not empty Pop a cell from the stack and make it a current cell If the current cell has any neighbours which have not been visited Push the current cell to the stack Choose one of the unvisited neighbours Remove the wall between the current cell and the chosen cell Mark the chosen cell as visited and push it to the stack === Iterative randomized Kruskal's algorithm (with sets) === This algorithm is a randomized version of Kruskal's algorithm. Create a list of all walls, and create a set for each cell, each containing just that one cell. For each wall, in some random order: If the cells divided by this wall belong to distinct sets: Remove the current wall. Join the sets of the formerly divided cells. There are several data structures that can be used to model the sets of cells. An efficient implementation using a disjoint-set data structure can perform each union and find operation on two sets in nearly constant amortized time (specifically, O ( α ( V ) ) {\displaystyle O(\alpha (V))} time; α ( x ) < 5 {\displaystyle \alpha (x)<5} for any plausible value of x {\displaystyle x} ), so the running time of this algorithm is essentially proportional to the number of walls available to the maze. It matters little whether the list of walls is initially randomized or if a wall is randomly chosen from a nonrandom list; either way is just as easy to code.
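The randomized Kruskal loop just described can be sketched with a disjoint-set structure as follows (a minimal Python sketch for a rectangular grid; names are illustrative, and path halving stands in for a full union-by-rank implementation):

```python
import random

def kruskal_maze(width, height):
    """Return the list of walls removed from a width-by-height grid maze."""
    # each cell starts in its own singleton set
    parent = {(x, y): (x, y) for x in range(width) for y in range(height)}

    def find(cell):                   # representative of a cell's set, with path halving
        while parent[cell] != cell:
            parent[cell] = parent[parent[cell]]
            cell = parent[cell]
        return cell

    # a wall is an unordered pair of horizontally or vertically adjacent cells
    walls = [((x, y), (x + 1, y)) for x in range(width - 1) for y in range(height)]
    walls += [((x, y), (x, y + 1)) for x in range(width) for y in range(height - 1)]
    random.shuffle(walls)             # visit the walls in random order

    removed = []
    for a, b in walls:
        ra, rb = find(a), find(b)
        if ra != rb:                  # the wall separates two distinct sets
            parent[ra] = rb           # join the sets
            removed.append((a, b))    # and remove the wall
    return removed
```

Each removed wall merges two components, so a width-by-height grid always ends with exactly width*height - 1 removed walls, i.e. a spanning tree.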
Because the effect of this algorithm is to produce a minimal spanning tree from a graph with equally weighted edges, it tends to produce regular patterns which are fairly easy to solve. === Iterative randomized Prim's algorithm (without stack, without sets) === This algorithm is a randomized version of Prim's algorithm. Start with a grid full of walls. Pick a cell, mark it as part of the maze. Add the walls of the cell to the wall list. While there are walls in the list: Pick a random wall from the list. If only one of the cells that the wall divides is visited, then: Make the wall a passage and mark the unvisited cell as part of the maze. Add the neighboring walls of the cell to the wall list. Remove the wall from the list. Note that simply running classical Prim's on a graph with random edge weights would create mazes stylistically identical to Kruskal's, because they are both minimal spanning tree algorithms. Instead, this algorithm introduces stylistic variation because the edges closer to the starting point have a lower effective weight. ==== Modified version ==== Although the classical Prim's algorithm keeps a list of edges, for maze generation we could instead maintain a list of adjacent cells. If the randomly chosen cell has multiple edges that connect it to the existing maze, select one of these edges at random. This will tend to branch slightly more than the edge-based version above. ==== Simplified version ==== The algorithm can be simplified even further by randomly selecting cells that neighbour already-visited cells, rather than keeping track of the weights of all cells or edges. It will usually be relatively easy to find the way to the starting cell, but hard to find the way anywhere else. === Wilson's algorithm === All the above algorithms have biases of various sorts: depth-first search is biased toward long corridors, while Kruskal's/Prim's algorithms are biased toward many short dead ends. 
Wilson's algorithm, on the other hand, generates an unbiased sample from the uniform distribution over all mazes, using loop-erased random walks. We begin the algorithm by initializing the maze with one cell chosen arbitrarily. Then we start at a new cell chosen arbitrarily, and perform a random walk until we reach a cell already in the maze—however, if at any point the random walk reaches its own path, forming a loop, we erase the loop from the path before proceeding. When the path reaches the maze, we add it to the maze. Then we perform another loop-erased random walk from another arbitrary starting cell, repeating until all cells have been filled. This procedure remains unbiased no matter which method we use to arbitrarily choose starting cells. So we could always choose the first unfilled cell in (say) left-to-right, top-to-bottom order for simplicity. === Aldous-Broder algorithm === The Aldous-Broder algorithm also produces uniform spanning trees. However, it is one of the least efficient maze algorithms. Pick a random cell as the current cell and mark it as visited. While there are unvisited cells: Pick a random neighbour. If the chosen neighbour has not been visited: Remove the wall between the current cell and the chosen neighbour. Mark the chosen neighbour as visited. Make the chosen neighbour the current cell. === Recursive division method === Mazes can be created with recursive division, an algorithm which works as follows: Begin with the maze's space with no walls. Call this a chamber. Divide the chamber with a randomly positioned wall (or multiple walls) where each wall contains a randomly positioned passage opening within it. Then recursively repeat the process on the subchambers until all chambers are minimum sized. This method results in mazes with long straight walls crossing their space, making it easier to see which areas to avoid. For example, in a rectangular maze, build at random points two walls that are perpendicular to each other. 
These two walls divide the large chamber into four smaller chambers separated by four walls. Choose three of the four walls at random, and open a one cell-wide hole at a random point in each of the three. Continue in this manner recursively, until every chamber has a width of one cell in either of the two directions. === Fractal Tessellation algorithm === This is a simple and fast way to generate a maze. On each iteration, this algorithm creates a maze twice the size by copying itself 3 times. At the end of each iteration, 3 paths are opened between the 4 smaller mazes. The advantage of this method is that it is very fast. The downside is that it is not possible to get a maze of a chosen size - but various tricks can be used to get around this problem. == Simple algorithms == Other algorithms exist that require only enough memory to store one line of a 2D maze or one plane of a 3D maze. Eller's algorithm prevents loops by storing which cells in the current line are connected through cells in the previous lines, and never removes walls between any two cells already connected. The Sidewinder algorithm starts with an open passage along the entire top row, and subsequent rows consist of shorter horizontal passages with one connection to the passage above. The Sidewinder algorithm is trivial to solve from the bottom up because it has no upward dead ends. Given a starting width, both algorithms create perfect mazes of unlimited height. Most maze generation algorithms require maintaining relationships between cells within it, to ensure the result will be solvable. Valid simply connected mazes can however be generated by focusing on each cell independently. A binary tree maze is a standard orthogonal maze where each cell always has a passage leading up or leading left, but never both. To create a binary tree maze, for each cell flip a coin to decide whether to add a passage leading up or left. 
Always pick the same direction for cells on the boundary, and the result will be a valid simply connected maze that looks like a binary tree, with the upper left corner its root. As with Sidewinder, the binary tree maze has no dead ends in the directions of bias. A related form of flipping a coin for each cell is to create an image using a random mix of forward slash and backslash characters. This doesn't generate a valid simply connected maze, but rather a selection of closed loops and unicursal passages. The manual for the Commodore 64 presents a BASIC program using this algorithm, using PETSCII diagonal line graphic characters instead for a smoother graphic appearance. == Cellular automaton algorithms == Certain types of cellular automata can be used to generate mazes. Two well-known such cellular automata, Maze and Mazectric, have rulestrings B3/S12345 and B3/S1234. In the former, this means that cells survive from one generation to the next if they have at least one and at most five neighbours. In the latter, this means that cells survive if they have one to four neighbours. If a cell has exactly three neighbours, it is born. It is similar to Conway's Game of Life in that patterns that do not have a living cell adjacent to 1, 4, or 5 other living cells in any generation will behave identically to it. However, for large patterns, it behaves very differently from Life. For a random starting pattern, these maze-generating cellular automata will evolve into complex mazes with well-defined walls outlining corridors. Mazectric, which has the rule B3/S1234, has a tendency to generate longer and straighter corridors compared with Maze, with the rule B3/S12345. Since these cellular automaton rules are deterministic, each maze generated is uniquely determined by its random starting pattern. This is a significant drawback since the mazes tend to be relatively predictable.
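A single generation of these rules is easy to state over a set of live cells; the sketch below (plain Python, illustrative names) defaults to Maze's B3/S12345 and yields Mazectric with survive=(1, 2, 3, 4):

```python
from itertools import product

def step(alive, birth=(3,), survive=(1, 2, 3, 4, 5)):
    """Advance a Life-like cellular automaton one generation.

    `alive` is a set of (x, y) live-cell coordinates; the defaults
    encode the Maze rule B3/S12345.
    """
    counts = {}
    for x, y in alive:                          # tally live Moore neighbours
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    # a cell is live next generation if it survives or is born
    return {cell for cell, n in counts.items()
            if n in (survive if cell in alive else birth)}
```

An isolated live cell has no live neighbours and therefore dies under both rules, while a dead cell adjacent to exactly three live cells is born, exactly as described above.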
Like some of the graph-theory based methods described above, these cellular automata typically generate mazes from a single starting pattern; hence it will usually be relatively easy to find the way to the starting cell, but harder to find the way anywhere else. == See also == Maze solving algorithm Self-avoiding walk Brute-force search == References == == External links == Think Labyrinth: Maze algorithms (details on these and other maze generation algorithms) Jamis Buck: HTML 5 Presentation with Demos of Maze generation Algorithms Maze generation visualization Java implementation of Prim's algorithm Implementations of DFS maze creation algorithm in multiple languages at Rosetta Code Armin Reichert: 34 maze algorithms in Java 8, with demo application Coding Challenge #10.1: Maze Generator with p5.js - Part 1: Maze generation algorithm in JavaScript with p5 Maze Generator by Charles Bond, COMPUTE! Magazine, December 1981
Wikipedia/Maze_generation_algorithm
Facebook Graph Search was a semantic search engine that Facebook introduced in March 2013. It was designed to give answers to users' natural-language queries rather than a list of links. The name refers to the social graph nature of Facebook, which maps the relationships among users. The Graph Search feature combined the big data acquired from Facebook's more than one billion users and external data into a search engine providing user-specific search results. In a presentation headed by Facebook CEO Mark Zuckerberg, it was announced that the Graph Search algorithm finds information from within a user's network of friends. Microsoft's Bing search engine provided additional results. In July 2013 it was made available to all users using the U.S. English version of Facebook. After being made less publicly visible starting December 2014, the original Graph Search was almost entirely deprecated in June 2019. == History == === Initial development === The feature was developed under former Google employees Lars Rasmussen and Tom Stocky. The Graph Search feature was launched in beta in January 2013 as a limited preview for some English-language users in the United States. Company reports indicate that the service launched to between tens and hundreds of thousands of users. The feature was initially released only to limited users, with a slow expansion planned. Facebook announced plans for a future mobile interface and the inclusion of Instagram photos. In late September 2013, Facebook announced that it would begin rolling out search for posts and comments as part of Graph Search. The rollout began in October 2013, but many people who had Graph Search were not given immediate access to this feature. A post on the Facebook Engineering blog explained that the huge amount of post and comment data, coming to a total of 700 TB, meant that developing Graph Search for posts was substantially more challenging than the original Graph Search.
=== Removal from public visibility from December 2014 onward === In December 2014, Facebook changed its search features, dropping its partnership with Bing. Around the same time, Facebook changed the way searches could be done through the website and app, obscuring some of the previous graph search functionality, but most of the functionality was still available through direct construction of the search URLs. Over the next few years, the online intelligence community, investigative journalists, and criminal investigators developed tools and practices to use Facebook Graph Search more effectively despite it no longer being publicly visible. One of these, Stalkscan, received media attention. Graph.tips was a frequently used tool in the online intelligence community as an interface on top of Facebook Graph Search. === Deprecation of most functionality in June 2019 === In early June 2019, the feature was further deprecated, with the majority of URLs for graph search queries no longer working. Facebook explained this by saying: "The vast majority of people on Facebook search using keywords, a factor which led us to pause some aspects of graph search and focus more on improving keyword search. We are working closely with researchers to make sure they have the tools they need to use our platform." However, there was speculation that the shutdown of Graph Search may also have been motivated by privacy concerns. Many tools that depended on Facebook Graph Search, including Stalkscan and graph.tips, had much of their functionality stop working, though some tools were updated using complicated workarounds for some queries. Vice quoted Bellingcat's Nick Waters as saying: "Now that Graph Search has gone down, it's become evident that it's used by some incredibly important section[s] of society, from human rights investigators and citizens wanting to hold their countries to account, to police investigating people trafficking and sexual slavery, to emergency responders."
== Operation == Graph Search operated by use of a search algorithm similar to those of traditional search engines such as Google. However, the search feature was distinguished as a semantic search engine, searching based on intended meaning. Rather than returning results based on matching keywords, the search engine was designed to match phrases, as well as objects on the site. Search results were based on both the content of the user and their friends’ profiles and the relationships between the user and their friends. Results were based on the friends and interests expressed on Facebook, and also shaped by users’ privacy settings. In addition to being restricted from seeing some content, users could sometimes view relevant content made publicly available by users not listed as friends. Entries into the search bar were auto-completed as users typed, with Facebook suggesting friends and second-degree connections, Facebook pages, automatically generated topics, and Web searches for anything Facebook was not able to search for. The operation of the search feature depended on user involvement. The feature was intended to encourage users to add more friends, more quickly. In doing so, it could provide up-to-date, more data-rich results and stimulate use of the feature. === Search functions === Facebook supported searches for the following types: People Pages Places (limitable to a specific location (latitude and longitude) and distance) Check-ins of the user, friends, or where the user or friends have been tagged Objects with location information attached. In addition, the returned objects would be those in which the user or friends have been tagged, or those objects that were created by the user or friends. Users could filter results, for example by time (since and until), or search only a given user's News Feed. The feature also allowed users to search the web directly.
=== Examples === Tom Stocky of the search team offered several examples of potential queries during the launch presentation, including: "Friends who Like Star Wars and Harry Potter" For setting up a potential date, "Who are single men in San Francisco and are from India" For employee recruiting, "NASA employees who are friends with people at Facebook" For browsing photos or planning travel, "photos of my friends taken at National Parks" During its roll-out stage, bloggers showed how Facebook Graph Search could be used to uncover potentially embarrassing information (e.g., companies employing people who like racism) or illegal interests (e.g., Chinese residents who like the banned group Falun Gong). Microsoft partnered with Facebook to provide search results from 2008 to 2014. Microsoft Live Search came to be known as Bing following the initiation of the partnership. In 2010, Facebook and Bing partnered to offer socially oriented search results: ‘People Search’ and ‘Liked by your Facebook Friends’ information appeared in results within Facebook and on Bing.com. In May 2012, Bing launched a social sidebar feature which displayed Facebook content alongside search results. Promoted on the basis of asking friends for advice, the feature allowed users to broadcast queries related to their searches to Facebook friends, and offered recommendations of Facebook friends, as well as experts from other networks, who could offer insight. The previously developed Instant Personalization feature integrated friends’ publicly available information, such as likes, into content on other external websites, such as Rotten Tomatoes and Yelp. The emergence of the Graph Search feature built on this partnership. Facebook content remained on Bing.com. The focus of Graph Search was internal content, but Bing continued to issue search results of external content. The external search results were based on traditional keyword matching.
== Advertising == In 2012, Facebook introduced sponsored pages in search results. By buying "Targeted Entities" on Facebook, advertisers pay to have their page appear when users search for that entity. Facebook CEO Zuckerberg reported that this would remain part of the search feature, but that the advertising component had not been extended in Graph Search. Criticisms arose about the integrity of search results on the basis of "buying likes". This practice refers to situations in which companies, without sponsoring results, accumulate a large number of "likes" through practices such as promotions or paying to operate bot accounts. Critics argued that this rendered meaningless any results supposedly based on other users’ opinions. == Open Graph == The Open Graph feature allows developers to integrate their applications and pages into the Facebook platform, and links Facebook with external sites on the Internet. The feature operates by allowing the addition of metadata to turn websites into graph objects. Actions made using the app are expressed on users’ profile pages. == Privacy == Initial reactions to the launch of Graph Search included many concerns about privacy. The social media analytics company Crimson Hexagon reported that 19 percent of users discussing the launch of the feature were stating concerns about privacy. Facebook responded to these concerns by emphasizing that the search operates within the pre-existing privacy settings: users can access only the information already available to them. The feature makes this information easier, and potentially more appealing, to find. Concerns about phishing and the appearance of minors in search results have also been expressed. == References == == Further reading == Richter, Michael (28 January 2013). "Protecting Your Privacy in Graph Search". Newsroom. Facebook. == External links == Official website
Wikipedia/Facebook_Graph_Search
A function pointer, also called a subroutine pointer or procedure pointer, is a pointer referencing executable code, rather than data. Dereferencing the function pointer yields the referenced function, which can be invoked and passed arguments just as in a normal function call. Such an invocation is also known as an "indirect" call, because the function is being invoked indirectly through a variable instead of directly through a fixed identifier or address. Function pointers allow different code to be executed at runtime. They can also be passed to a function to enable callbacks. Function pointers are supported by third-generation programming languages (such as PL/I, COBOL, Fortran, dBASE dBL, and C) and object-oriented programming languages (such as C++, C#, and D). == Simple function pointers == The simplest implementation of a function (or subroutine) pointer is as a variable containing the address of the function within executable memory. Older third-generation languages such as PL/I and COBOL, as well as more modern languages such as Pascal and C, generally implement function pointers in this manner. === Example in C === The following C program illustrates the use of two function pointers: func1 takes one double-precision (double) parameter and returns another double, and is assigned to a function which converts centimeters to inches. func2 takes a pointer to a constant character array as well as an integer and returns a pointer to a character, and is assigned to a C string handling function which returns a pointer to the first occurrence of a given character in a character array. The next program uses a function pointer to invoke one of two functions (sin or cos) indirectly from another function (compute_sum, computing an approximation of the function's Riemann integral). 
The program operates by having function main call function compute_sum twice, passing it a pointer to the library function sin the first time, and a pointer to function cos the second time. Function compute_sum in turn invokes one of the two functions indirectly by dereferencing its function pointer argument funcp multiple times, adding together the values that the invoked function returns and returning the resulting sum. The two sums are written to the standard output by main. == Functors == Functors, or function objects, are similar to function pointers, and can be used in similar ways. A functor is an object of a class type that implements the function-call operator, allowing the object to be used within expressions using the same syntax as a function call. Functors are more powerful than simple function pointers, being able to contain their own data values, and allowing the programmer to emulate closures. They are also used as callbacks where it is necessary to use a member function as the callback. Many "pure" object-oriented languages do not support function pointers. Something similar can be implemented in these kinds of languages, though, using references to interfaces that define a single method (member function). CLI languages such as C# and Visual Basic .NET implement type-safe function pointers with delegates. In other languages that support first-class functions, functions are regarded as data, and can be passed, returned, and created dynamically directly by other functions, eliminating the need for function pointers. Extensive use of function pointers to call functions may slow the code down on modern processors, because a branch predictor may not be able to figure out where to branch to (that depends on the value of the function pointer at run time), although this effect can be overstated, as it is often amply compensated for by significantly reduced non-indexed table lookups. 
== Method pointers == C++ includes support for object-oriented programming, so classes can have methods (usually referred to as member functions). Non-static member functions (instance methods) have an implicit parameter (the this pointer) which is the pointer to the object it is operating on, so the type of the object must be included as part of the type of the function pointer. The method is then used on an object of that class by using one of the "pointer-to-member" operators: .* or ->* (for an object or a pointer to object, respectively). Although function pointers in C and C++ can be implemented as simple addresses, so that typically sizeof(Fx)==sizeof(void *), member pointers in C++ are sometimes implemented as "fat pointers", typically two or three times the size of a simple function pointer, in order to deal with virtual methods and virtual inheritance. == In C++ == In C++, in addition to the method used in C, it is also possible to use the C++ standard library class template std::function, of which the instances are function objects: === Pointers to member functions in C++ === This is how C++ uses function pointers when dealing with member functions of classes or structs. These are invoked using an object pointer or a this call. They are type-safe in that you can only call members of that class (or derivatives) using a pointer of that type. This example also demonstrates the use of a typedef for the pointer-to-member-function type, added for simplicity. Function pointers to static member functions are done in the traditional 'C' style because no object pointer is required for the call. == Alternate C and C++ syntax == The C and C++ syntax given above is the canonical one used in most textbooks, but it is difficult to read and explain. Even the above typedef examples use this syntax. 
However, every C and C++ compiler supports a clearer and more concise mechanism to declare function pointers: use a typedef, but don't declare the pointer as part of the definition. Note that the only way this kind of typedef can actually be used is with a pointer, but that highlights its pointer nature. === C and C++ === === C++ === These examples use the above definitions. In particular, note that the above definition for Fn can be used in pointer-to-member-function definitions: == PL/I == PL/I procedures can be nested, that is, procedure A may contain procedure B, which in turn may contain C. In addition to data declared in B, B can also reference any data declared in A, as long as B does not override the declaration. Likewise, C can reference data in both A and B. Therefore, PL/I ENTRY variables need to contain context, to provide procedure C with the addresses of the values of data in B and A at the time C was called. == See also == Delegation (computing) Function object Higher-order function Procedural parameter Closure Anonymous functions == References == == External links == FAQ on Function Pointers, things to avoid with function pointers, some information on using function objects Function Pointer Tutorials Archived 2018-06-30 at the Wayback Machine, a guide to C/C++ function pointers, callbacks, and function objects (functors) Member Function Pointers and the Fastest Possible C++ Delegates, CodeProject article by Don Clugston Pointer Tutorials Archived 2009-04-05 at the Wayback Machine, C++ documentation and tutorials C pointers explained Archived 2019-06-09 at the Wayback Machine a visual guide of pointers in C Secure Function Pointer and Callbacks in Windows Programming, CodeProject article by R. Selvam The C Book, Function Pointers in C by "The C Book" Function Pointers in dBASE dBL, Function Pointer in dBASE dBL
Wikipedia/Function_pointers
A hot spot in computer science is usually defined as a region of a computer program where a high proportion of executed instructions occur or where most time is spent during the program's execution (not necessarily the same thing, since some instructions are faster than others). If a program is interrupted randomly, the program counter (the pointer to the next instruction to be executed) is frequently found to contain the address of an instruction within a certain range, possibly indicating code that is in need of optimization or even indicating the existence of a 'tight' CPU loop. This simple technique can detect highly used instructions, although more sophisticated methods, such as instruction set simulators or performance analyzers, achieve this more accurately and consistently. == History of hot spot detection == The computer scientist Donald Knuth described his first encounter with what he refers to as a jump trace in an interview for Dr. Dobb's Journal in 1996, saying: In the '60s, someone invented the concept of a 'jump trace'. This was a way of altering the machine language of a program so it would change the next branch or jump instruction to retain control, so you could execute the program at fairly high speed instead of interpreting each instruction one at a time and record in a file just where a program diverged from sequentiality. By processing this file you could figure out where the program was spending most of its time. So the first day we had this software running, we applied it to our Fortran compiler supplied by, I suppose it was in those days, Control Data Corporation. We found out it was spending 87 percent of its time reading comments! The reason was that it was translating from one code system into another into another. 
=== Iteration === The example above serves to illustrate that effective hot spot detection is often an iterative process and perhaps one that should always be carried out (instead of simply accepting that a program is performing reasonably). After eliminating all extraneous processing (just by removing all the embedded comments for instance), a new runtime analysis would more accurately detect the "genuine" hot spots in the translation. If no hot spot detection had taken place at all, the program may well have consumed vastly more resources than necessary, possibly for many years on numerous machines, without anyone ever being fully aware of this. == Instruction set simulation as a hot spot detector == An instruction set simulator can be used to count each time a particular instruction is executed and later produce either an on-screen display, a printed program listing (with counts and/or percentages of total instruction path length) or a separate report, showing precisely where the highest number of instructions took place. This only provides a relative view of hot spots (from an instruction step perspective) since most instructions have different timings on many machines. It nevertheless provides a measure of highly used code and one that is quite useful in itself when tuning an algorithm. == See also == Profiling (computer programming) == References ==
Wikipedia/Hot_spot_(computer_science)
Index mapping (or direct addressing, or a trivial hash function) in computer science describes using an array, in which each position corresponds to a key in the universe of possible values. The technique is most effective when the universe of keys is reasonably small, such that allocating an array with one position for every possible key is affordable. Its effectiveness comes from the fact that an arbitrary position in an array can be examined in constant time. == Applicable arrays == There are many practical examples of data whose valid values are restricted within a small range. A trivial hash function is a suitable choice when such data needs to act as a lookup key. Some examples include:
month in the year (1–12)
day in the month (1–31)
day of the week (1–7)
human age (0–130) – e.g. life-cover actuarial tables, fixed-term mortgages
ASCII characters (0–127), encompassing common mathematical operator symbols, digits, punctuation marks, and the English alphabet
== Examples == Using a trivial hash function in a non-iterative table lookup can eliminate conditional testing and branching completely, reducing the instruction path length of a computer program. === Avoid branching === Roger Sayle gives an example of eliminating a multiway branch caused by a switch statement by replacing it with a table lookup. == References ==
Wikipedia/Trivial_hash_function
In computer science, reflective programming or reflection is the ability of a process to examine, introspect, and modify its own structure and behavior. == Historical background == The earliest computers were programmed in their native assembly languages, which were inherently reflective, as these original architectures could be programmed by defining instructions as data and using self-modifying code. As the bulk of programming moved to higher-level compiled languages such as ALGOL, COBOL, Fortran, Pascal, and C, this reflective ability largely disappeared until new programming languages with reflection built into their type systems appeared. Brian Cantwell Smith's 1982 doctoral dissertation introduced the notion of computational reflection in procedural programming languages and the notion of the meta-circular interpreter as a component of 3-Lisp. == Uses == Reflection helps programmers make generic software libraries to display data, process different formats of data, perform serialization and deserialization of data for communication, or do bundling and unbundling of data for containers or bursts of communication. Effective use of reflection almost always requires a plan: A design framework, encoding description, object library, a map of a database or entity relations. Reflection makes a language more suited to network-oriented code. For example, it assists languages such as Java to operate well in networks by enabling libraries for serialization, bundling and varying data formats. Languages without reflection such as C are required to use auxiliary compilers for tasks like Abstract Syntax Notation to produce code for serialization and bundling. Reflection can be used for observing and modifying program execution at runtime. A reflection-oriented program component can monitor the execution of an enclosure of code and can modify itself according to a desired goal of that enclosure. This is typically accomplished by dynamically assigning program code at runtime. 
In object-oriented programming languages such as Java, reflection allows inspection of classes, interfaces, fields and methods at runtime without knowing the names of the interfaces, fields, methods at compile time. It also allows instantiation of new objects and invocation of methods. Reflection is often used as part of software testing, such as for the runtime creation/instantiation of mock objects. Reflection is also a key strategy for metaprogramming. In some object-oriented programming languages such as C# and Java, reflection can be used to bypass member accessibility rules. For C#-properties this can be achieved by writing directly onto the (usually invisible) backing field of a non-public property. It is also possible to find non-public methods of classes and types and manually invoke them. This works for project-internal files as well as external libraries such as .NET's assemblies and Java's archives. == Implementation == A language that supports reflection provides a number of features available at runtime that would otherwise be difficult to accomplish in a lower-level language. Some of these features are the abilities to: Discover and modify source-code constructions (such as code blocks, classes, methods, protocols, etc.) as first-class objects at runtime. Convert a string matching the symbolic name of a class or function into a reference to or invocation of that class or function. Evaluate a string as if it were a source-code statement at runtime. Create a new interpreter for the language's bytecode to give a new meaning or purpose for a programming construct. These features can be implemented in different ways. In MOO, reflection forms a natural part of everyday programming idiom. When verbs (methods) are called, various variables such as verb (the name of the verb being called) and this (the object on which the verb is called) are populated to give the context of the call. 
Security is typically managed by accessing the caller stack programmatically: Since callers() is a list of the methods by which the current verb was eventually called, performing tests on callers()[0] (the command invoked by the original user) allows the verb to protect itself against unauthorised use. Compiled languages rely on their runtime system to provide information about the source code. A compiled Objective-C executable, for example, records the names of all methods in a block of the executable, providing a table to correspond these with the underlying methods (or selectors for these methods) compiled into the program. In a compiled language that supports runtime creation of functions, such as Common Lisp, the runtime environment must include a compiler or an interpreter. Reflection can be implemented for languages without built-in reflection by using a program transformation system to define automated source-code changes. == Security considerations == Reflection may allow a user to create unexpected control flow paths through an application, potentially bypassing security measures. This may be exploited by attackers. Historical vulnerabilities in Java caused by unsafe reflection allowed code retrieved from potentially untrusted remote machines to break out of the Java sandbox security mechanism. A large scale study of 120 Java vulnerabilities in 2013 concluded that unsafe reflection is the most common vulnerability in Java, though not the most exploited. == Examples == The following code snippets create an instance foo of class Foo and invoke its method PrintHello. For each programming language, normal and reflection-based call sequences are shown. 
=== Common Lisp === The following is an example in Common Lisp using the Common Lisp Object System: === C# === The following is an example in C#: === Delphi, Object Pascal === This Delphi and Object Pascal example assumes that a TFoo class has been declared in a unit called Unit1: === eC === The following is an example in eC: === Go === The following is an example in Go: === Java === The following is an example in Java: === JavaScript === The following is an example in JavaScript: === Julia === The following is an example in Julia: === Objective-C === The following is an example in Objective-C, implying either the OpenStep or Foundation Kit framework is used: === Perl === The following is an example in Perl: === PHP === The following is an example in PHP: === Python === The following is an example in Python: === R === The following is an example in R: === Ruby === The following is an example in Ruby: === Xojo === The following is an example using Xojo: == See also == List of reflective programming languages and platforms Mirror (programming) Programming paradigms Self-hosting (compilers) Self-modifying code Type introspection typeof == References == === Citations === === Sources === == Further reading == Ira R. Forman and Nate Forman, Java Reflection in Action (2005), ISBN 1-932394-18-4 Ira R. Forman and Scott Danforth, Putting Metaclasses to Work (1999), ISBN 0-201-43305-2 == External links == Reflection in logic, functional and object-oriented programming: a short comparative study An Introduction to Reflection-Oriented Programming Brian Foote's pages on Reflection in Smalltalk Java Reflection API Tutorial from Oracle
Wikipedia/Reflection_(computer_science)
Data conversion is the conversion of computer data from one format to another. Throughout a computer environment, data is encoded in a variety of ways. For example, computer hardware is built on the basis of certain standards, which require that data contain, for example, parity bit checks. Similarly, the operating system is predicated on certain standards for data and file handling. Furthermore, each computer program handles data in a different manner. Whenever any one of these variables is changed, data must be converted in some way before it can be used by a different computer, operating system or program. Even different versions of these elements usually involve different data structures. For example, the changing of bits from one format to another, usually for the purpose of application interoperability or of the capability of using new features, is merely a data conversion. Data conversions may be as simple as the conversion of a text file from one character encoding system to another; or more complex, such as the conversion of office file formats, or the conversion of image formats and audio file formats. There are many ways in which data is converted within the computer environment. This may be seamless, as in the case of upgrading to a newer version of a computer program. Alternatively, the conversion may require processing by the use of a special conversion program, or it may involve a complex process of going through intermediary stages, or involve complex "exporting" and "importing" procedures, which may include converting to and from a tab-delimited or comma-separated text file. In some cases, a program may recognize several data file formats at the data input stage and then is also capable of storing the output data in several different formats. Such a program may be used to convert a file format. 
If the source format or target format is not recognized, then at times a third program may be available which permits the conversion to an intermediate format, which can then be reformatted using the first program. There are many possible scenarios. == Information basics == Before any data conversion is carried out, the user or application programmer should keep a few basics of computing and information theory in mind. These include: Information can easily be discarded by the computer, but adding information takes effort. The computer can add information only in a rule-based fashion. Upsampling the data or converting to a more feature-rich format does not add information; it merely makes room for that addition, which usually a human must do. Data stored in an electronic format can be quickly modified and analyzed. For example, a true color image can easily be converted to grayscale, while the opposite conversion is a painstaking process. Converting a Unix text file to a Microsoft (DOS/Windows) text file involves adding characters, but this does not increase the entropy since it is rule-based; whereas the addition of color information to a grayscale image cannot be reliably done programmatically, as it requires adding new information, so any attempt to add color would require estimation by the computer based on previous knowledge. Converting a 24-bit PNG to a 48-bit one does not add information to it, it only pads existing RGB pixel values with zeroes, so that a pixel with a value of FF C3 56, for example, becomes FF00 C300 5600. The conversion makes it possible to change a pixel to have a value of, for instance, FF80 C340 56A0, but the conversion itself does not do that, only further manipulation of the image can. 
Converting an image or audio file in a lossy format (like JPEG or Vorbis) to a lossless (like PNG or FLAC) or uncompressed (like BMP or WAV) format only wastes space, since the same image with its loss of original information (the artifacts of lossy compression) becomes the target. A JPEG image can never be restored to the quality of the original image from which it was made, no matter how much the user tries the "JPEG Artifact Removal" feature of his or her image manipulation program. Automatic restoration of information that was lost through a lossy compression process would probably require important advances in artificial intelligence. Because of these realities of computing and information theory, data conversion is often a complex and error-prone process that requires the help of experts. == Pivotal conversion == Data conversion can occur directly from one format to another, but many applications that convert between multiple formats use an intermediate representation by way of which any source format is converted to its target. For example, it is possible to convert Cyrillic text from KOI8-R to Windows-1251 using a lookup table between the two encodings, but the modern approach is to convert the KOI8-R file to Unicode first and from that to Windows-1251. This is a more manageable approach; rather than needing lookup tables for all possible pairs of character encodings, an application needs only one lookup table for each character set, which it uses to convert to and from Unicode, thereby scaling the number of tables down from hundreds to a few tens. Pivotal conversion is similarly used in other areas. Office applications, when employed to convert between office file formats, use their internal, default file format as a pivot. For example, a word processor may convert an RTF file to a WordPerfect file by converting the RTF to OpenDocument and then that to WordPerfect format. 
An image conversion program does not convert a PCX image to PNG directly; instead, when loading the PCX image, it decodes it to a simple bitmap format for internal use in memory, and when commanded to convert to PNG, that memory image is converted to the target format. An audio converter that converts from FLAC to AAC decodes the source file to raw PCM data in memory first, and then performs the lossy AAC compression on that memory image to produce the target file. == Lost and inexact data conversion == The objective of data conversion is to maintain all of the data, and as much of the embedded information as possible. This can only be done if the target format supports the same features and data structures present in the source file. Conversion of a word processing document to a plain text file necessarily involves loss of formatting information, because plain text format does not support word processing constructs such as marking a word as boldface. For this reason, conversion from one format to another which does not support a feature that is important to the user is rarely carried out, though it may be necessary for interoperability, e.g. converting a file from one version of Microsoft Word to an earlier version to enable transfer and use by other users who do not have the same later version of Word installed on their computer. Loss of information can be mitigated by approximation in the target format. There is no way of converting a character like ä to ASCII, since the ASCII standard lacks it, but the information may be retained by approximating the character as ae. Of course, this is not an optimal solution, and can impact operations like searching and copying; and if a language makes a distinction between ä and ae, then that approximation does involve loss of information. Data conversion can also suffer from inexactitude, the result of converting between formats that are conceptually different. 
The WYSIWYG paradigm, extant in word processors and desktop publishing applications, versus the structural-descriptive paradigm, found in SGML, XML and many applications derived therefrom, like HTML and MathML, is one example. Using a WYSIWYG HTML editor conflates the two paradigms, and the result is HTML files with suboptimal, if not nonstandard, code. In the WYSIWYG paradigm a double linebreak signifies a new paragraph, as that is the visual cue for such a construct, but a WYSIWYG HTML editor will usually convert such a sequence to <BR><BR>, which is structurally no new paragraph at all. As another example, converting from PDF to an editable word processor format is a tough chore, because PDF records the textual information like engraving on stone, with each character given a fixed position and linebreaks hard-coded, whereas word processor formats accommodate text reflow. PDF does not know of a word space character—the space between two letters and the space between two words differ only in quantity. Therefore, a title with ample letter-spacing for effect will usually end up with spaces in the word processor file, for example INTRODUCTION with spacing of 1 em as I N T R O D U C T I O N on the word processor. == Open vs. secret specifications == Successful data conversion requires thorough knowledge of the workings of both source and target formats. In the case where the specification of a format is unknown, reverse engineering will be needed to carry out conversion. Reverse engineering can achieve close approximation of the original specifications, but errors and missing features can still result. == Electronics == Data format conversion can also occur at the physical layer of an electronic communication system. Conversion between line codes such as NRZ and RZ can be accomplished when necessary. 
== See also == Character encoding Comparison of programming languages (basic instructions)#Data conversions Data migration Data transformation Data wrangling Transcoding Distributed Data Management Architecture (DDM) Code conversion (computing) Source-to-source translation Presentation layer Video Converting == References == Manolescu, Dragos (2006). Pattern Languages of Program Design 5. Upper Saddle River, NJ: Addison-Wesley. ISBN 0321321944.
Wikipedia/Data_conversion
In computer science, an offset within an array or other data structure object is an integer indicating the distance (displacement) between the beginning of the object and a given element or point, presumably within the same object. The concept of a distance is valid only if all elements of the object are of the same size (typically given in bytes or words). For example, if A is an array of characters containing "abcdef", the fourth element containing the character 'd' has an offset of three from the start of A. == In assembly language == In computer engineering and low-level programming (such as assembly language), an offset usually denotes the number of address locations added to a base address in order to get to a specific absolute address. In this (original) meaning of offset, only the basic address unit, usually the 8-bit byte, is used to specify the offset's size. In this context an offset is sometimes called a relative address. In IBM System/360 instructions, a 12-bit offset embedded within certain instructions provided a displacement in the range of 0 to 4095 bytes. For example, within an unconditional branch instruction (X'47F0Fxxx'), the xxx 12-bit hexadecimal offset provided the byte offset from the base register (15) to branch to. An odd offset would cause a program check (unless the base register itself also contained an odd address), since instructions had to be aligned on half-word boundaries to execute without a program or hardware interrupt. The previous example describes an indirect way to address a memory location in the format of segment:offset. For example, assume we want to refer to memory location 0xF867. One way this can be accomplished is by first defining a segment with beginning address 0xF000, and then defining an offset of 0x0867. Further, we are also allowed to shift the hexadecimal segment to reach the final absolute memory address. One thing to note here is that we can reach the final absolute address in many ways. 
An offset is not always relative to the base address of the module. For example, if you have a class and you want to retrieve its "color" attribute, the attribute's offset (say 0x0100) has to be added to the offset of the class itself, not directly to the base address. If the class's offset is 0xFF881 and the base address is 0x0A100, then both offsets are added to the base address: 0x0A100 (base) + 0xFF881 (class) + 0x0100 (attribute). Ultimately the attribute's address will be 0x109A81. == See also == Array Index == References ==
Wikipedia/Offset_(computer_science)
In mathematical optimization, the revised simplex method is a variant of George Dantzig's simplex method for linear programming. The revised simplex method is mathematically equivalent to the standard simplex method but differs in implementation. Instead of maintaining a tableau which explicitly represents the constraints adjusted to a set of basic variables, it maintains a representation of a basis of the matrix representing the constraints. The matrix-oriented approach allows for greater computational efficiency by enabling sparse matrix operations. == Problem formulation == For the rest of the discussion, it is assumed that a linear programming problem has been converted into the following standard form: minimize c T x subject to A x = b , x ≥ 0 {\displaystyle {\begin{array}{rl}{\text{minimize}}&{\boldsymbol {c}}^{\mathrm {T} }{\boldsymbol {x}}\\{\text{subject to}}&{\boldsymbol {Ax}}={\boldsymbol {b}},{\boldsymbol {x}}\geq {\boldsymbol {0}}\end{array}}} where A ∈ ℝm×n. Without loss of generality, it is assumed that the constraint matrix A has full row rank and that the problem is feasible, i.e., there is at least one x ≥ 0 such that Ax = b. If A is rank-deficient, either there are redundant constraints, or the problem is infeasible. Both situations can be handled by a presolve step. == Algorithmic description == === Optimality conditions === For linear programming, the Karush–Kuhn–Tucker conditions are both necessary and sufficient for optimality. 
The KKT conditions of a linear programming problem in the standard form are A x = b , A T λ + s = c , x ≥ 0 , s ≥ 0 , s T x = 0 {\displaystyle {\begin{aligned}{\boldsymbol {Ax}}&={\boldsymbol {b}},\\{\boldsymbol {A}}^{\mathrm {T} }{\boldsymbol {\lambda }}+{\boldsymbol {s}}&={\boldsymbol {c}},\\{\boldsymbol {x}}&\geq {\boldsymbol {0}},\\{\boldsymbol {s}}&\geq {\boldsymbol {0}},\\{\boldsymbol {s}}^{\mathrm {T} }{\boldsymbol {x}}&=0\end{aligned}}} where λ and s are the Lagrange multipliers associated with the constraints Ax = b and x ≥ 0, respectively. The last condition, which is equivalent to sixi = 0 for all 1 ≤ i ≤ n, is called the complementary slackness condition. By what is sometimes known as the fundamental theorem of linear programming, a vertex x of the feasible polytope can be identified with a basis B of A chosen from the latter's columns. Since A has full rank, B is nonsingular. Without loss of generality, assume that A = [B N]. Then x is given by x = [ x B x N ] = [ B − 1 b 0 ] {\displaystyle {\boldsymbol {x}}={\begin{bmatrix}{\boldsymbol {x_{B}}}\\{\boldsymbol {x_{N}}}\end{bmatrix}}={\begin{bmatrix}{\boldsymbol {B}}^{-1}{\boldsymbol {b}}\\{\boldsymbol {0}}\end{bmatrix}}} where xB ≥ 0. Partition c and s accordingly into c = [ c B c N ] , s = [ s B s N ] . {\displaystyle {\begin{aligned}{\boldsymbol {c}}&={\begin{bmatrix}{\boldsymbol {c_{B}}}\\{\boldsymbol {c_{N}}}\end{bmatrix}},\\{\boldsymbol {s}}&={\begin{bmatrix}{\boldsymbol {s_{B}}}\\{\boldsymbol {s_{N}}}\end{bmatrix}}.\end{aligned}}} To satisfy the complementary slackness condition, let sB = 0. It follows that B T λ = c B , N T λ + s N = c N , {\displaystyle {\begin{aligned}{\boldsymbol {B}}^{\mathrm {T} }{\boldsymbol {\lambda }}&={\boldsymbol {c_{B}}},\\{\boldsymbol {N}}^{\mathrm {T} }{\boldsymbol {\lambda }}+{\boldsymbol {s_{N}}}&={\boldsymbol {c_{N}}},\end{aligned}}} which implies that λ = ( B T ) − 1 c B , s N = c N − N T λ .
{\displaystyle {\begin{aligned}{\boldsymbol {\lambda }}&=({\boldsymbol {B}}^{\mathrm {T} })^{-1}{\boldsymbol {c_{B}}},\\{\boldsymbol {s_{N}}}&={\boldsymbol {c_{N}}}-{\boldsymbol {N}}^{\mathrm {T} }{\boldsymbol {\lambda }}.\end{aligned}}} If sN ≥ 0 at this point, the KKT conditions are satisfied, and thus x is optimal. === Pivot operation === If the KKT conditions are violated, a pivot operation consisting of introducing a column of N into the basis at the expense of an existing column in B is performed. In the absence of degeneracy, a pivot operation always results in a strict decrease in cTx. Therefore, if the problem is bounded, the revised simplex method must terminate at an optimal vertex after repeated pivot operations because there are only a finite number of vertices. Select an index m < q ≤ n such that sq < 0 as the entering index. The corresponding column of A, Aq, will be moved into the basis, and xq will be allowed to increase from zero. It can be shown that ∂ ( c T x ) ∂ x q = s q , {\displaystyle {\frac {\partial ({\boldsymbol {c}}^{\mathrm {T} }{\boldsymbol {x}})}{\partial x_{q}}}=s_{q},} i.e., every unit increase in xq results in a decrease by −sq in cTx. Since B x B + A q x q = b , {\displaystyle {\boldsymbol {Bx_{B}}}+{\boldsymbol {A}}_{q}x_{q}={\boldsymbol {b}},} xB must be correspondingly decreased by ΔxB = B−1Aqxq subject to xB − ΔxB ≥ 0. Let d = B−1Aq. If d ≤ 0, no matter how much xq is increased, xB − ΔxB will stay nonnegative. Hence, cTx can be arbitrarily decreased, and thus the problem is unbounded. Otherwise, select an index p = argmin1≤i≤m {xi/di | di > 0} as the leaving index. This choice effectively increases xq from zero until xp is reduced to zero while maintaining feasibility. The pivot operation concludes with replacing Ap with Aq in the basis. == Numerical example == Consider a linear program where c = [ − 2 − 3 − 4 0 0 ] T , A = [ 3 2 1 1 0 2 5 3 0 1 ] , b = [ 10 15 ] . 
{\displaystyle {\begin{aligned}{\boldsymbol {c}}&={\begin{bmatrix}-2&-3&-4&0&0\end{bmatrix}}^{\mathrm {T} },\\{\boldsymbol {A}}&={\begin{bmatrix}3&2&1&1&0\\2&5&3&0&1\end{bmatrix}},\\{\boldsymbol {b}}&={\begin{bmatrix}10\\15\end{bmatrix}}.\end{aligned}}} Let B = [ A 4 A 5 ] , N = [ A 1 A 2 A 3 ] {\displaystyle {\begin{aligned}{\boldsymbol {B}}&={\begin{bmatrix}{\boldsymbol {A}}_{4}&{\boldsymbol {A}}_{5}\end{bmatrix}},\\{\boldsymbol {N}}&={\begin{bmatrix}{\boldsymbol {A}}_{1}&{\boldsymbol {A}}_{2}&{\boldsymbol {A}}_{3}\end{bmatrix}}\end{aligned}}} initially, which corresponds to a feasible vertex x = [0 0 0 10 15]T. At this moment, λ = [ 0 0 ] T , s N = [ − 2 − 3 − 4 ] T . {\displaystyle {\begin{aligned}{\boldsymbol {\lambda }}&={\begin{bmatrix}0&0\end{bmatrix}}^{\mathrm {T} },\\{\boldsymbol {s_{N}}}&={\begin{bmatrix}-2&-3&-4\end{bmatrix}}^{\mathrm {T} }.\end{aligned}}} Choose q = 3 as the entering index. Then d = [1 3]T, which means a unit increase in x3 results in x4 and x5 being decreased by 1 and 3, respectively. Therefore, x3 is increased to 5, at which point x5 is reduced to zero, and p = 5 becomes the leaving index. After the pivot operation, B = [ A 3 A 4 ] , N = [ A 1 A 2 A 5 ] . {\displaystyle {\begin{aligned}{\boldsymbol {B}}&={\begin{bmatrix}{\boldsymbol {A}}_{3}&{\boldsymbol {A}}_{4}\end{bmatrix}},\\{\boldsymbol {N}}&={\begin{bmatrix}{\boldsymbol {A}}_{1}&{\boldsymbol {A}}_{2}&{\boldsymbol {A}}_{5}\end{bmatrix}}.\end{aligned}}} Correspondingly, x = [ 0 0 5 5 0 ] T , λ = [ 0 − 4 / 3 ] T , s N = [ 2 / 3 11 / 3 4 / 3 ] T . {\displaystyle {\begin{aligned}{\boldsymbol {x}}&={\begin{bmatrix}0&0&5&5&0\end{bmatrix}}^{\mathrm {T} },\\{\boldsymbol {\lambda }}&={\begin{bmatrix}0&-4/3\end{bmatrix}}^{\mathrm {T} },\\{\boldsymbol {s_{N}}}&={\begin{bmatrix}2/3&11/3&4/3\end{bmatrix}}^{\mathrm {T} }.\end{aligned}}} A positive sN indicates that x is now optimal. 
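The numerical example above can be reproduced in code. The following is a minimal Python sketch of the revised simplex iteration, an illustration rather than a production solver: exact rational arithmetic via the standard library's fractions module, Dantzig's most-negative-reduced-cost entering rule, 2×2 basis systems solved by Cramer's rule, and no degeneracy handling or factorization updates.

```python
# Revised simplex sketch on the example: min c^T x s.t. Ax = b, x >= 0.
from fractions import Fraction as F

c = [F(-2), F(-3), F(-4), F(0), F(0)]
A = [[F(3), F(2), F(1), F(1), F(0)],
     [F(2), F(5), F(3), F(0), F(1)]]
b = [F(10), F(15)]
basis = [3, 4]  # start with the slack columns A4, A5 (0-based indices)

def solve2(M, y):
    """Solve the 2x2 system M z = y by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(y[0] * M[1][1] - M[0][1] * y[1]) / det,
            (M[0][0] * y[1] - y[0] * M[1][0]) / det]

while True:
    B = [[A[i][j] for j in basis] for i in range(2)]
    Bt = [[B[j][i] for j in range(2)] for i in range(2)]
    x_B = solve2(B, b)                        # x_B = B^-1 b
    lam = solve2(Bt, [c[j] for j in basis])   # lambda = (B^T)^-1 c_B
    # reduced costs s_j = c_j - A_j^T lambda for nonbasic columns
    s = {j: c[j] - (A[0][j] * lam[0] + A[1][j] * lam[1])
         for j in range(5) if j not in basis}
    entering = min((j for j in s if s[j] < 0), default=None, key=lambda j: s[j])
    if entering is None:
        break                                 # s_N >= 0: current vertex is optimal
    d = solve2(B, [A[i][entering] for i in range(2)])  # d = B^-1 A_q
    # ratio test: leaving slot = argmin { x_i / d_i : d_i > 0 }
    # (an empty ratio list would mean the problem is unbounded; omitted here)
    ratios = [(x_B[i] / d[i], i) for i in range(2) if d[i] > 0]
    _, leave = min(ratios)
    basis[leave] = entering

x = [F(0)] * 5
for i, j in enumerate(basis):
    x[j] = x_B[i]
print(x)                                      # optimal vertex (x3 = x4 = 5)
print(sum(ci * xi for ci, xi in zip(c, x)))   # optimal objective value
```

Running it reproduces the pivots of the example: A3 enters, A5 leaves, and the method stops at the vertex x = [0 0 5 5 0]T with objective value −20.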
== Practical issues == === Degeneracy === Because the revised simplex method is mathematically equivalent to the simplex method, it also suffers from degeneracy, where a pivot operation does not result in a decrease in cTx, and a chain of pivot operations causes the basis to cycle. A perturbation or lexicographic strategy can be used to prevent cycling and guarantee termination. === Basis representation === Two types of linear systems involving B are present in the revised simplex method: B z = y , B T z = y . {\displaystyle {\begin{aligned}{\boldsymbol {Bz}}&={\boldsymbol {y}},\\{\boldsymbol {B}}^{\mathrm {T} }{\boldsymbol {z}}&={\boldsymbol {y}}.\end{aligned}}} Instead of refactorizing B, usually an LU factorization is directly updated after each pivot operation, for which purpose there exist several strategies such as the Forrest–Tomlin and Bartels–Golub methods. However, the amount of data representing the updates as well as numerical errors builds up over time and makes periodic refactorization necessary. == Notes and references == === Notes === === References === === Bibliography ===
Wikipedia/Revised_simplex_algorithm
In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions over convex sets. The ellipsoid method generates a sequence of ellipsoids whose volume uniformly decreases at every step, thus enclosing a minimizer of a convex function. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size. == History == The ellipsoid method has a long history. As an iterative method, a preliminary version was introduced by Naum Z. Shor. In 1972, an approximation algorithm for real convex minimization was studied by Arkadi Nemirovski and David B. Yudin (Judin). As an algorithm for solving linear programming problems with rational data, the ellipsoid algorithm was studied by Leonid Khachiyan; Khachiyan's achievement was to prove the polynomial-time solvability of linear programs. This was a notable step from a theoretical perspective: the standard algorithm for solving linear problems at the time was the simplex algorithm, which has a running time that is typically linear in the size of the problem, but for which examples exist where it is exponential. As such, having an algorithm that is guaranteed to be polynomial for all cases was a theoretical breakthrough. Khachiyan's work showed, for the first time, that there can be algorithms for solving linear programs whose runtime can be proven to be polynomial. In practice, however, the algorithm is fairly slow and of little practical interest, though it provided inspiration for later work that turned out to be of much greater practical use. Specifically, Karmarkar's algorithm, an interior-point method, is much faster than the ellipsoid method in practice. Karmarkar's algorithm is also faster in the worst case.
The ellipsoidal algorithm allows complexity theorists to achieve (worst-case) bounds that depend on the dimension of the problem and on the size of the data, but not on the number of rows, so it remained important in combinatorial optimization theory for many years. Only in the 21st century have interior-point algorithms with similar complexity properties appeared. == Description == A convex minimization problem consists of the following ingredients. A convex function f 0 ( x ) : R n → R {\displaystyle f_{0}(x):\mathbb {R} ^{n}\to \mathbb {R} } to be minimized over the vector x {\displaystyle x} (containing n variables); Convex inequality constraints of the form f i ( x ) ⩽ 0 {\displaystyle f_{i}(x)\leqslant 0} , where the functions f i {\displaystyle f_{i}} are convex; these constraints define a convex set Q {\displaystyle Q} . Linear equality constraints of the form h i ( x ) = 0 {\displaystyle h_{i}(x)=0} . We are also given an initial ellipsoid E ( 0 ) ⊂ R n {\displaystyle {\mathcal {E}}^{(0)}\subset \mathbb {R} ^{n}} defined as E ( 0 ) = { z ∈ R n : ( z − x 0 ) T P ( 0 ) − 1 ( z − x 0 ) ⩽ 1 } {\displaystyle {\mathcal {E}}^{(0)}=\left\{z\in \mathbb {R} ^{n}\ :\ (z-x_{0})^{T}P_{(0)}^{-1}(z-x_{0})\leqslant 1\right\}} containing a minimizer x ∗ {\displaystyle x^{*}} , where P ( 0 ) ≻ 0 {\displaystyle P_{(0)}\succ 0} and x 0 {\displaystyle x_{0}} is the center of E {\displaystyle {\mathcal {E}}} . Finally, we require the existence of a separation oracle for the convex set Q {\displaystyle Q} . Given a point x ∈ R n {\displaystyle x\in \mathbb {R} ^{n}} , the oracle should return one of two answers: "The point x {\displaystyle x} is in Q {\displaystyle Q} ", or - "The point x {\displaystyle x} is not in Q {\displaystyle Q} , and moreover, here is a hyperplane that separates x {\displaystyle x} from Q {\displaystyle Q} ", that is, a vector c {\displaystyle c} such that c ⋅ x < c ⋅ y {\displaystyle c\cdot x<c\cdot y} for all y ∈ Q {\displaystyle y\in Q} . 
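As an illustration of this oracle interface (the code and names are ours, not from the article), a separation oracle for an explicitly given polyhedron Q = {x : Mx ≤ d} just scans for a violated row: if row mi satisfies mi·x > di, then c = −mi separates, since c·x = −mi·x < −di ≤ −mi·y = c·y for every y in Q.

```python
# Sketch of a separation oracle for a polyhedron Q = {x : M x <= d},
# with M given as a list of rows. Given a query point x, it either
# reports membership or returns a vector c with c.x < c.y for all y in Q.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def separation_oracle(M, d, x):
    """Return (True, None) if x is in Q, else (False, c) with c separating x from Q."""
    for row, rhs in zip(M, d):
        if dot(row, x) > rhs:        # violated constraint row . x > rhs
            return False, [-a for a in row]
    return True, None

# Q = {(x1, x2) : x1 + x2 <= 1, -x1 <= 0, -x2 <= 0}, a triangle.
M = [[1, 1], [-1, 0], [0, -1]]
d = [1, 0, 0]
print(separation_oracle(M, d, [0.2, 0.3]))   # inside Q
print(separation_oracle(M, d, [0.9, 0.9]))   # outside: violates x1 + x2 <= 1
```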
The output of the ellipsoid method is either: Any point in the polytope Q {\displaystyle Q} (i.e., any feasible point), or - A proof that Q {\displaystyle Q} is empty. Inequality-constrained minimization of a function that is zero everywhere corresponds to the problem of simply identifying any feasible point. It turns out that any linear programming problem can be reduced to a linear feasibility problem (i.e. minimize the zero function subject to some linear inequality and equality constraints). One way to do this is by combining the primal and dual linear programs together into one program, and adding the additional (linear) constraint that the value of the primal solution is no worse than the value of the dual solution.: 84  Another way is to treat the objective of the linear program as an additional constraint, and use binary search to find the optimum value.: 7–8  == Unconstrained minimization == At the k-th iteration of the algorithm, we have a point x ( k ) {\displaystyle x^{(k)}} at the center of an ellipsoid E ( k ) = { x ∈ R n : ( x − x ( k ) ) T P ( k ) − 1 ( x − x ( k ) ) ⩽ 1 } . {\displaystyle {\mathcal {E}}^{(k)}=\left\{x\in \mathbb {R} ^{n}\ :\ \left(x-x^{(k)}\right)^{T}P_{(k)}^{-1}\left(x-x^{(k)}\right)\leqslant 1\right\}.} We query the cutting-plane oracle to obtain a vector g ( k + 1 ) ∈ R n {\displaystyle g^{(k+1)}\in \mathbb {R} ^{n}} such that g ( k + 1 ) T ( x ∗ − x ( k ) ) ⩽ 0. {\displaystyle g^{(k+1)T}\left(x^{*}-x^{(k)}\right)\leqslant 0.} We therefore conclude that x ∗ ∈ E ( k ) ∩ { z : g ( k + 1 ) T ( z − x ( k ) ) ⩽ 0 } . {\displaystyle x^{*}\in {\mathcal {E}}^{(k)}\cap \left\{z\ :\ g^{(k+1)T}\left(z-x^{(k)}\right)\leqslant 0\right\}.} We set E ( k + 1 ) {\displaystyle {\mathcal {E}}^{(k+1)}} to be the ellipsoid of minimal volume containing the half-ellipsoid described above and compute x ( k + 1 ) {\displaystyle x^{(k+1)}} . 
The update is given by x ( k + 1 ) = x ( k ) − 1 n + 1 P ( k ) g ~ ( k + 1 ) P ( k + 1 ) = n 2 n 2 − 1 ( P ( k ) − 2 n + 1 P ( k ) g ~ ( k + 1 ) g ~ ( k + 1 ) T P ( k ) ) {\displaystyle {\begin{aligned}x^{(k+1)}&=x^{(k)}-{\frac {1}{n+1}}P_{(k)}{\tilde {g}}^{(k+1)}\\P_{(k+1)}&={\frac {n^{2}}{n^{2}-1}}\left(P_{(k)}-{\frac {2}{n+1}}P_{(k)}{\tilde {g}}^{(k+1)}{\tilde {g}}^{(k+1)T}P_{(k)}\right)\end{aligned}}} where g ~ ( k + 1 ) = ( 1 g ( k + 1 ) T P ( k ) g ( k + 1 ) ) g ( k + 1 ) . {\displaystyle {\tilde {g}}^{(k+1)}=\left({\frac {1}{\sqrt {g^{(k+1)T}P_{(k)}g^{(k+1)}}}}\right)g^{(k+1)}.} The stopping criterion is given by the property that g ( k ) T P ( k ) g ( k ) ⩽ ϵ ⇒ f ( x ( k ) ) − f ( x ∗ ) ⩽ ϵ . {\displaystyle {\sqrt {g^{(k)T}P_{(k)}g^{(k)}}}\leqslant \epsilon \quad \Rightarrow \quad f(x^{(k)})-f\left(x^{*}\right)\leqslant \epsilon .} == Inequality-constrained minimization == At the k-th iteration of the algorithm for constrained minimization, we have a point x ( k ) {\displaystyle x^{(k)}} at the center of an ellipsoid E ( k ) {\displaystyle {\mathcal {E}}^{(k)}} as before. We also must maintain a list of values f b e s t ( k ) {\displaystyle f_{\rm {best}}^{(k)}} recording the smallest objective value of feasible iterates so far. Depending on whether or not the point x ( k ) {\displaystyle x^{(k)}} is feasible, we perform one of two tasks: If x ( k ) {\displaystyle x^{(k)}} is feasible, perform essentially the same update as in the unconstrained case, by choosing a subgradient g 0 {\displaystyle g_{0}} that satisfies g 0 T ( x ∗ − x ( k ) ) + f 0 ( x ( k ) ) − f b e s t ( k ) ⩽ 0 {\displaystyle g_{0}^{T}(x^{*}-x^{(k)})+f_{0}(x^{(k)})-f_{\rm {best}}^{(k)}\leqslant 0} If x ( k ) {\displaystyle x^{(k)}} is infeasible and violates the j-th constraint, update the ellipsoid with a feasibility cut. 
Our feasibility cut may be a subgradient g j {\displaystyle g_{j}} of f j {\displaystyle f_{j}} which must satisfy g j T ( z − x ( k ) ) + f j ( x ( k ) ) ⩽ 0 {\displaystyle g_{j}^{T}(z-x^{(k)})+f_{j}(x^{(k)})\leqslant 0} for all feasible z. == Performance in convex programs == === Theoretical run-time complexity guarantee === The run-time complexity guarantee of the ellipsoid method in the real RAM model is given by the following theorem.: Thm.8.3.1  Consider a family of convex optimization problems of the form: minimize f(x) s.t. x is in G, where f is a convex function and G is a convex set (a subset of a Euclidean space Rn). Each problem p in the family is represented by a data-vector Data(p), e.g., the real-valued coefficients in matrices and vectors representing the function f and the feasible region G. The size of a problem p, Size(p), is defined as the number of elements (real numbers) in Data(p). The following assumptions are needed: G (the feasible region) is: Bounded; Has a non-empty interior (so there is a strictly-feasible point); Given Data(p), one can compute using poly(Size(p)) arithmetic operations: An ellipsoid that contains G; A lower bound MinVol(p) > 0 on the volume of G. Given Data(p) and a point x in Rn, one can compute using poly(Size(p)) arithmetic operations: A separation oracle for G (that is: either assert that x is in G, or return a hyperplane separating x from G). A first-order oracle for f (that is: compute the value of f(x) and a subgradient f'(x)). Under these assumptions, the ellipsoid method is "R-polynomial".
This means that there exists a polynomial Poly such that, for every problem-instance p and every approximation-ratio ε>0, the method finds a solution x satisfying: f ( x ) − min G f ≤ ε ⋅ [ max G f − min G f ] {\displaystyle f(x)-\min _{G}f\leq \varepsilon \cdot [\max _{G}f-\min _{G}f]} , using at most the following number of arithmetic operations on real numbers: P o l y ( S i z e ( p ) ) ⋅ ln ⁡ ( V ( p ) ϵ ) {\displaystyle Poly(Size(p))\cdot \ln \left({\frac {V(p)}{\epsilon }}\right)} where V(p) is a data-dependent quantity. Intuitively, it means that the number of operations required for each additional digit of accuracy is polynomial in Size(p). In the case of the ellipsoid method, we have: V ( p ) = [ V o l ( initial ellipsoid ) V o l ( G ) ] 1 / n ≤ [ V o l ( initial ellipsoid ) M i n V o l ( p ) ] 1 / n {\displaystyle V(p)=\left[{\frac {Vol({\text{initial ellipsoid}})}{Vol(G)}}\right]^{1/n}\leq \left[{\frac {Vol({\text{initial ellipsoid}})}{MinVol(p)}}\right]^{1/n}} . The ellipsoid method requires at most 2 ( n − 1 ) n ⋅ ln ⁡ ( V ( p ) ϵ ) {\displaystyle 2(n-1)n\cdot \ln \left({\frac {V(p)}{\epsilon }}\right)} steps, and each step requires Poly(Size(p)) arithmetic operations. === Practical performance === The ellipsoid method is used on low-dimensional problems, such as planar location problems, where it is numerically stable. Nemirovski and Ben-Tal: Sec.8.3.3  say that it is efficient if the number of variables is at most 20–30; this is so even if there are thousands of constraints, as the number of iterations does not depend on the number of constraints. However, in problems with many variables, the ellipsoid method is very inefficient, as the number of iterations grows as O(n^2). Even on "small"-sized problems, it suffers from numerical instability and poor performance in practice. === Theoretical importance === The ellipsoid method is an important theoretical technique in combinatorial optimization.
In computational complexity theory, the ellipsoid algorithm is attractive because its complexity depends on the number of columns and the digital size of the coefficients, but not on the number of rows. The ellipsoid method can be used to show that many algorithmic problems on convex sets are polynomial-time equivalent. == Performance in linear programs == Leonid Khachiyan applied the ellipsoid method to the special case of linear programming: minimize cTx s.t. Ax ≤ b, where all coefficients in A,b,c are rational numbers. He showed that linear programs can be solved in polynomial time. Here is a sketch of Khachiyan's theorem.: Sec.8.4.2  Step 1: reducing optimization to search. The theorem of linear programming duality says that we can reduce the above minimization problem to the search problem: find x,y s.t. Ax ≤ b ; ATy = c ; y ≤ 0 ; cTx=bTy. The first problem is solvable iff the second problem is solvable; in case the problem is solvable, the x-components of the solution to the second problem are an optimal solution to the first problem. Therefore, from now on, we can assume that we need to solve the following problem: find z ≥ 0 s.t. Rz ≤ r. Multiplying all rational coefficients by the common denominator, we can assume that all coefficients are integers. Step 2: reducing search to feasibility-check. The problem find z ≥ 0 s.t. Rz ≤ r can be reduced to the binary decision problem: "is there a z ≥ 0 such that Rz ≤ r?". This can be done as follows. If the answer to the decision problem is "no", then the answer to the search problem is "None", and we are done. Otherwise, take the first inequality constraint R1z ≤ r1; replace it with an equality R1z = r1; and apply the decision problem again. If the answer is "yes", we keep the equality; if the answer is "no", it means that the inequality is redundant, and we can remove it. Then we proceed to the next inequality constraint. For each constraint, we either convert it to equality or remove it. 
Finally, we have only equality constraints, which can be solved by any method for solving a system of linear equations. Step 3: the decision problem can be reduced to a different optimization problem. Define the residual function f(z) := max[(Rz)1 − r1, (Rz)2 − r2, (Rz)3 − r3,...]. Clearly, f(z)≤0 iff Rz ≤ r. Therefore, to solve the decision problem, it is sufficient to solve the minimization problem: minz f(z). The function f is convex (it is a maximum of linear functions). Denote the minimum value by f*. Then the answer to the decision problem is "yes" iff f*≤0. Step 4: In the optimization problem minz f(z), we can assume that z is in a box of side-length 2^L, where L is the bit length of the problem data. Thus, we have a bounded convex program that can be solved up to any accuracy ε by the ellipsoid method, in time polynomial in L. Step 5: It can be proved that, if f*>0, then f* > 2^-poly(L) for some polynomial. Therefore, we can pick the accuracy ε = 2^-poly(L). Then, the ε-approximate solution found by the ellipsoid method will be positive iff f*>0, that is, iff the decision problem is unsolvable. == Variants == The ellipsoid method has several variants, depending on what cuts exactly are used in each step.: Sec. 3  === Different cuts === In the central-cut ellipsoid method,: 82, 87–94  the cuts are always through the center of the current ellipsoid. The input is a rational number ε>0, a convex body K given by a weak separation oracle, and a number R such that S(0,R) (the ball of radius R around the origin) contains K. The output is one of the following: (a) A vector at a distance of at most ε from K, or (b) A positive definite matrix A and a point a such that the ellipsoid E(A,a) contains K, and the volume of E(A,a) is at most ε.
The number of steps is N := ⌈ 5 n log ⁡ ( 1 / ϵ ) + 5 n 2 log ⁡ ( 2 R ) ⌉ {\displaystyle N:=\lceil 5n\log(1/\epsilon )+5n^{2}\log(2R)\rceil } , the number of required accuracy digits is p := 8N, and the required accuracy of the separation oracle is d := 2^-p. In the deep-cut ellipsoid method,: 83  the cuts remove more than half of the ellipsoid in each step. This makes it faster to discover that K is empty. However, when K is nonempty, there are examples in which the central-cut method finds a feasible point faster. The use of deep cuts does not change the order of magnitude of the run-time. In the shallow-cut ellipsoid method,: 83, 94–101  the cuts remove less than half of the ellipsoid in each step. This variant is not very useful in practice, but it has theoretical importance: it allows one to prove results that cannot be derived from other variants. The input is a rational number ε>0, a convex body K given by a shallow separation oracle, and a number R such that S(0,R) contains K. The output is a positive definite matrix A and a point a such that one of the following holds: (a) The ellipsoid E(A,a) has been declared "tough" by the oracle, or (b) K is contained in E(A,a) and the volume of E(A,a) is at most ε. The number of steps is N := ⌈ 5 n ( n + 1 ) 2 log ⁡ ( 1 / ϵ ) + 5 n 2 ( n + 1 ) 2 log ⁡ ( 2 R ) + log ⁡ ( n + 1 ) ⌉ {\displaystyle N:=\lceil 5n(n+1)^{2}\log(1/\epsilon )+5n^{2}(n+1)^{2}\log(2R)+\log(n+1)\rceil } , and the number of required accuracy digits is p := 8N. === Different ellipsoids === There is also a distinction between the circumscribed ellipsoid and the inscribed ellipsoid methods: In the circumscribed ellipsoid method, each iteration finds an ellipsoid of smallest volume that contains the remaining part of the previous ellipsoid. This method was developed by Yudin and Nemirovskii. In the inscribed ellipsoid method, each iteration finds an ellipsoid of largest volume that is contained in the remaining part of the previous ellipsoid.
This method was developed by Tarasov, Khachian and Erlikh. The methods differ in their runtime complexity (below, n is the number of variables and epsilon is the accuracy): The circumscribed method requires O ( n 2 ) ln ⁡ 1 ϵ {\displaystyle O(n^{2})\ln {\frac {1}{\epsilon }}} iterations, where each iteration consists of finding a separating hyperplane and finding a new circumscribed ellipsoid. Finding a circumscribed ellipsoid requires O ( n 2 ) {\displaystyle O(n^{2})} time. The inscribed method requires O ( n ) ln ⁡ 1 ϵ {\displaystyle O(n)\ln {\frac {1}{\epsilon }}} iterations, where each iteration consists of finding a separating hyperplane and finding a new inscribed ellipsoid. Finding an inscribed ellipsoid requires O ( n 3.5 + δ ) {\displaystyle O(n^{3.5+\delta })} time for some small δ > 0 {\displaystyle \delta >0} . The relative efficiency of the methods depends on the time required for finding a separating hyperplane, which depends on the application: if the runtime is O ( n t ) {\displaystyle O(n^{t})} for t ≤ 2.5 {\displaystyle t\leq 2.5} then the circumscribed method is more efficient, but if t > 2.5 {\displaystyle t>2.5} then the inscribed method is more efficient. == Related methods == The center-of-gravity method is a conceptually simpler method that requires fewer steps. However, each step is computationally expensive, as it requires computing the center of gravity of the current feasible polytope. Interior point methods, too, allow solving convex optimization problems in polynomial time, and their practical performance is much better than that of the ellipsoid method. == Further reading == Dimitris Alevras and Manfred W. Padberg, Linear Optimization and Extensions: Problems and Extensions, Universitext, Springer-Verlag, 2001. (Problems from Padberg with solutions.) V. Chandru and M.R.Rao, Linear Programming, Chapter 31 in Algorithms and Theory of Computation Handbook, edited by M.J.Atallah, CRC Press 1999, 31-1 to 31-37. V.
Chandru and M.R.Rao, Integer Programming, Chapter 32 in Algorithms and Theory of Computation Handbook, edited by M.J.Atallah, CRC Press 1999, 32-1 to 32-45. George B. Dantzig and Mukund N. Thapa. 1997. Linear programming 1: Introduction. Springer-Verlag. George B. Dantzig and Mukund N. Thapa. 2003. Linear Programming 2: Theory and Extensions. Springer-Verlag. L. Lovász: An Algorithmic Theory of Numbers, Graphs, and Convexity, CBMS-NSF Regional Conference Series in Applied Mathematics 50, SIAM, Philadelphia, Pennsylvania, 1986 Kattta G. Murty, Linear Programming, Wiley, 1983. M. Padberg, Linear Optimization and Extensions, Second Edition, Springer-Verlag, 1999. Christos H. Papadimitriou and Kenneth Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Corrected republication with a new preface, Dover. Alexander Schrijver, Theory of Linear and Integer Programming. John Wiley & sons, 1998, ISBN 0-471-98232-6 == External links == EE364b, a Stanford course homepage
Wikipedia/Ellipsoidal_algorithm
In mathematics, a topological space X {\displaystyle X} is said to be a Baire space if countable unions of closed sets with empty interior also have empty interior. According to the Baire category theorem, compact Hausdorff spaces and complete metric spaces are examples of Baire spaces. The Baire category theorem combined with the properties of Baire spaces has numerous applications in topology, geometry, and analysis, in particular functional analysis. For more motivation and applications, see the article Baire category theorem. The current article focuses more on characterizations and basic properties of Baire spaces per se. Bourbaki introduced the term "Baire space" in honor of René Baire, who investigated the Baire category theorem in the context of Euclidean space R n {\displaystyle \mathbb {R} ^{n}} in his 1899 thesis. == Definition == The definition that follows is based on the notions of meagre (or first category) set (namely, a set that is a countable union of sets whose closure has empty interior) and nonmeagre (or second category) set (namely, a set that is not meagre). See the corresponding article for details. A topological space X {\displaystyle X} is called a Baire space if it satisfies any of the following equivalent conditions: Every countable intersection of dense open sets is dense. Every countable union of closed sets with empty interior has empty interior. Every meagre set has empty interior. Every nonempty open set is nonmeagre. Every comeagre set is dense. Whenever a countable union of closed sets has an interior point, at least one of the closed sets has an interior point. The equivalence between these definitions is based on the associated properties of complementary subsets of X {\displaystyle X} (that is, of a set A ⊆ X {\displaystyle A\subseteq X} and of its complement X ∖ A {\displaystyle X\setminus A} ) as given in the table below. 
== Baire category theorem == The Baire category theorem gives sufficient conditions for a topological space to be a Baire space. (BCT1) Every complete pseudometric space is a Baire space. In particular, every completely metrizable topological space is a Baire space. (BCT2) Every locally compact regular space is a Baire space. In particular, every locally compact Hausdorff space is a Baire space. BCT1 shows that the following are Baire spaces: The space R {\displaystyle \mathbb {R} } of real numbers. The space of irrational numbers, which is homeomorphic to the Baire space ω ω {\displaystyle \omega ^{\omega }} of set theory. Every Polish space. BCT2 shows that the following are Baire spaces: Every compact Hausdorff space; for example, the Cantor set (or Cantor space). Every manifold, even if it is not paracompact (hence not metrizable), like the long line. One should note however that there are plenty of spaces that are Baire spaces without satisfying the conditions of the Baire category theorem, as shown in the Examples section below. == Properties == Every nonempty Baire space is nonmeagre. In terms of countable intersections of dense open sets, being a Baire space is equivalent to such intersections being dense, while being a nonmeagre space is equivalent to the weaker condition that such intersections are nonempty. Every open subspace of a Baire space is a Baire space. Every dense Gδ set in a Baire space is a Baire space. The result need not hold if the Gδ set is not dense. See the Examples section. Every comeagre set in a Baire space is a Baire space. A subset of a Baire space is comeagre if and only if it contains a dense Gδ set. A closed subspace of a Baire space need not be Baire. See the Examples section. If a space contains a dense subspace that is Baire, it is also a Baire space. A space that is locally Baire, in the sense that each point has a neighborhood that is a Baire space, is a Baire space. Every topological sum of Baire spaces is Baire. 
The product of two Baire spaces is not necessarily Baire. An arbitrary product of complete metric spaces is Baire. Every locally compact sober space is a Baire space. Every finite topological space is a Baire space (because a finite space has only finitely many open sets and the intersection of two open dense sets is an open dense set). A topological vector space is a Baire space if and only if it is nonmeagre, which happens if and only if every closed balanced absorbing subset has non-empty interior. Let f n : X → Y {\displaystyle f_{n}:X\to Y} be a sequence of continuous functions with pointwise limit f : X → Y . {\displaystyle f:X\to Y.} If X {\displaystyle X} is a Baire space, then the set of points where f {\displaystyle f} is not continuous is a meagre set in X {\displaystyle X} and the set of points where f {\displaystyle f} is continuous is dense in X . {\displaystyle X.} A special case of this is the uniform boundedness principle. == Examples == The empty space is a Baire space. It is the only space that is both Baire and meagre. The space R {\displaystyle \mathbb {R} } of real numbers with the usual topology is a Baire space. The space Q {\displaystyle \mathbb {Q} } of rational numbers (with the topology induced from R {\displaystyle \mathbb {R} } ) is not a Baire space, since it is meagre. The space of irrational numbers (with the topology induced from R {\displaystyle \mathbb {R} } ) is a Baire space, since it is comeagre in R . {\displaystyle \mathbb {R} .} The space X = [ 0 , 1 ] ∪ ( [ 2 , 3 ] ∩ Q ) {\displaystyle X=[0,1]\cup ([2,3]\cap \mathbb {Q} )} (with the topology induced from R {\displaystyle \mathbb {R} } ) is nonmeagre, but not Baire. There are several ways to see it is not Baire: for example because the subset [ 0 , 1 ] {\displaystyle [0,1]} is comeagre but not dense; or because the nonempty subset [ 2 , 3 ] ∩ Q {\displaystyle [2,3]\cap \mathbb {Q} } is open and meagre.
Similarly, the space X = { 1 } ∪ ( [ 2 , 3 ] ∩ Q ) {\displaystyle X=\{1\}\cup ([2,3]\cap \mathbb {Q} )} is not Baire. It is nonmeagre since 1 {\displaystyle 1} is an isolated point. The following are examples of Baire spaces for which the Baire category theorem does not apply, because these spaces are not locally compact and not completely metrizable: The Sorgenfrey line. The Sorgenfrey plane. The Niemytzki plane. The subspace of R 2 {\displaystyle \mathbb {R} ^{2}} consisting of the open upper half plane together with the rationals on the x-axis, namely, X = ( R × ( 0 , ∞ ) ) ∪ ( Q × { 0 } ) , {\displaystyle X=(\mathbb {R} \times (0,\infty ))\cup (\mathbb {Q} \times \{0\}),} is a Baire space, because the open upper half plane is dense in X {\displaystyle X} and completely metrizable, hence Baire. The space X {\displaystyle X} is not locally compact and not completely metrizable. The set Q × { 0 } {\displaystyle \mathbb {Q} \times \{0\}} is closed in X {\displaystyle X} , but is not a Baire space. Since in a metric space closed sets are Gδ sets, this also shows that in general Gδ sets in a Baire space need not be Baire. Algebraic varieties with the Zariski topology are Baire spaces. An example is the affine space A n {\displaystyle \mathbb {A} ^{n}} consisting of the set C n {\displaystyle \mathbb {C} ^{n}} of n-tuples of complex numbers, together with the topology whose closed sets are the vanishing sets of polynomials f ∈ C [ x 1 , … , x n ] . {\displaystyle f\in \mathbb {C} [x_{1},\ldots ,x_{n}].} == See also == Banach–Mazur game Barrelled space – Type of topological vector space Blumberg theorem – Any real function on R admits a continuous restriction on a dense subset of R Choquet game – Topological game Property of Baire – Difference of an open set by a meager set Webbed space – Space where open mapping and closed graph theorems hold == Notes == == References == Bourbaki, Nicolas (1989) [1967]. General Topology 2: Chapters 5–10 [Topologie Générale]. 
Éléments de mathématique. Vol. 4. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64563-4. OCLC 246032063. Engelking, Ryszard (1989). General Topology. Heldermann Verlag, Berlin. ISBN 3-88538-006-4. Gierz, G.; Hofmann, K. H.; Keimel, K.; Lawson, J. D.; Mislove, M. W.; Scott, D. S. (2003). Continuous Lattices and Domains. Encyclopedia of Mathematics and its Applications. Vol. 93. Cambridge University Press. ISBN 978-0521803380. Haworth, R. C.; McCoy, R. A. (1977), Baire Spaces, Warszawa: Instytut Matematyczny Polskiej Akademi Nauk Kelley, John L. (1975) [1955]. General Topology. Graduate Texts in Mathematics. Vol. 27 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90125-1. OCLC 1365153. Munkres, James R. (2000). Topology. Prentice-Hall. ISBN 0-13-181629-2. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365. Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114. == External links == Encyclopaedia of Mathematics article on Baire space Encyclopaedia of Mathematics article on Baire theorem
Wikipedia/Baire_category_theory
In operations research, the Big M method is a technique for solving linear programming problems with the simplex algorithm. It extends the simplex algorithm to problems that contain "greater-than" constraints by introducing artificial variables that are penalized in the objective with a large constant M, so that they cannot appear in any optimal solution, if one exists. == Algorithm == The simplex algorithm is the original and still one of the most widely used methods for solving linear maximization problems. If an optimum exists, it is attained at a vertex of the polyhedron that forms the feasible region of the LP (linear program); each vertex is represented by a basis. So, to apply the simplex algorithm, which improves the basis step by step until a global optimum is reached, one first needs a feasible basis. The trivial basis (all problem variables equal to 0) is not always feasible: it is feasible if and only if all the constraints (except non-negativity) are less-than constraints with a positive constant on the right-hand side. The Big M method introduces surplus and artificial variables to convert all inequalities into that form, thereby embedding the problem in a higher-dimensional space in which the trivial basis is feasible. There it is always a vertex, due to the non-negativity constraints on the variables inherent in the standard formulation of an LP. The "Big M" refers to a large number associated with the artificial variables, represented by the letter M. The steps in the algorithm are as follows: Multiply the inequality constraints by −1 where necessary to ensure that each right-hand side is positive. If the problem is a minimization, transform it to a maximization by multiplying the objective by −1. For any greater-than constraints, introduce surplus variables si and artificial variables ai (as shown below). Choose a large positive value M and introduce a term in the objective of the form −M multiplying the artificial variables.
For less-than or equal constraints, introduce slack variables si so that all constraints are equalities. Solve the problem using the usual simplex method. For example, x + y ≤ 100 becomes x + y + s1 = 100, whilst x + y ≥ 100 becomes x + y − s2 + a1 = 100. The artificial variables must be shown to be 0. The function to be maximised is rewritten to include the sum of all the artificial variables multiplied by −M. Then row reductions are applied to gain a final solution. The value of M must be chosen sufficiently large so that the artificial variables cannot be part of any optimal solution. For a sufficiently large M, the optimal solution contains an artificial variable in the basis (i.e. at a positive value) if and only if the problem is not feasible. However, the a priori selection of an appropriate value for M is not trivial; a way to overcome the need to specify the value of M has been described in the literature. Other ways to find an initial basis for the simplex algorithm involve solving another linear program in an initial phase. == Other usage == When used in the objective function, the Big M method sometimes refers to formulations of linear optimization problems in which violations of a constraint or set of constraints are associated with a large positive penalty constant, M. In mixed integer linear optimization the term Big M can also refer to the use of a large term in the constraints themselves. For example, the logical constraint z = 0 ⟺ x = y {\displaystyle z=0\iff x=y} where z is a binary (0 or 1) variable refers to ensuring equality of the variables only when the binary variable takes on one value, while leaving the variables "open" if the binary variable takes on its opposite value. For a sufficiently large M and z binary (0 or 1), the constraints x − y ≤ M z {\displaystyle x-y\leq Mz} x − y ≥ − M z {\displaystyle x-y\geq -Mz} ensure that when z = 0 {\displaystyle z=0} then x = y {\displaystyle x=y} .
Otherwise, when z = 1 {\displaystyle z=1} , then − M ≤ x − y ≤ M {\displaystyle -M\leq x-y\leq M} , indicating that the variables x and y can have any values so long as the absolute value of their difference is bounded by M {\displaystyle M} (hence the need for M to be "large enough.") Thus it is possible to "encode" the logical constraint into a MILP problem. == See also == Two phase method (linear programming) another approach for solving problems with >= constraints Karush–Kuhn–Tucker conditions, which apply to nonlinear optimization problems with inequality constraints. == External links == Bibliography Griva, Igor; Nash, Stephan G.; Sofer, Ariela (26 March 2009). Linear and Nonlinear Optimization (2nd ed.). Society for Industrial Mathematics. ISBN 978-0-89871-661-0. Discussion Simplex – Big M Method, Lynn Killen, Dublin City University. The Big M Method, businessmanagementcourses.org The Big M Method, Mark Hutchinson The Big-M Method with the Numerical Infinite M, a recently introduced parameterless variant A THREE-PHASE SIMPLEX METHOD FOR INFEASIBLE AND UNBOUNDED LINEAR PROGRAMMING PROBLEMS, Big M method for M=1 == References ==
Wikipedia/Big_M_method
In the theory of linear programming, a basic feasible solution (BFS) is a solution with a minimal set of non-zero variables. Geometrically, each BFS corresponds to a vertex of the polyhedron of feasible solutions. If there exists an optimal solution, then there exists an optimal BFS. Hence, to find an optimal solution, it is sufficient to consider the BFS-s. This fact is used by the simplex algorithm, which essentially travels from one BFS to another until an optimal solution is found. == Definitions == === Preliminaries: equational form with linearly-independent rows === For the definitions below, we first present the linear program in the so-called equational form: maximize c T x {\textstyle \mathbf {c^{T}} \mathbf {x} } subject to A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } and x ≥ 0 {\displaystyle \mathbf {x} \geq 0} where: c T {\displaystyle \mathbf {c^{T}} } and x {\displaystyle \mathbf {x} } are vectors of size n (the number of variables); b {\displaystyle \mathbf {b} } is a vector of size m (the number of constraints); A {\displaystyle A} is an m-by-n matrix; x ≥ 0 {\displaystyle \mathbf {x} \geq 0} means that all variables are non-negative. Any linear program can be converted into an equational form by adding slack variables. As a preliminary clean-up step, we verify that: The system A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } has at least one solution (otherwise the whole LP has no solution and there is nothing more to do); All m rows of the matrix A {\displaystyle A} are linearly independent, i.e., its rank is m (otherwise we can just delete redundant rows without changing the LP). === Feasible solution === A feasible solution of the LP is any vector x ≥ 0 {\displaystyle \mathbf {x} \geq 0} such that A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } . We assume that there is at least one feasible solution. If m = n, then there is only one feasible solution. 
Typically m < n, so the system A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } has many solutions; each such solution is called a feasible solution of the LP. === Basis === A basis of the LP is a nonsingular submatrix of A, with all m rows and only m<n columns. Sometimes, the term basis is used not for the submatrix itself, but for the set of indices of its columns. Let B be a subset of m indices from {1,...,n}. Denote by A B {\displaystyle A_{B}} the square m-by-m matrix made of the m columns of A {\displaystyle A} indexed by B. If A B {\displaystyle A_{B}} is nonsingular, the columns indexed by B are a basis of the column space of A {\displaystyle A} . In this case, we call B a basis of the LP. Since the rank of A {\displaystyle A} is m, it has at least one basis; since A {\displaystyle A} has n columns, it has at most ( n m ) {\displaystyle {\binom {n}{m}}} bases. === Basic feasible solution === Given a basis B, we say that a feasible solution x {\displaystyle \mathbf {x} } is a basic feasible solution with basis B if all its non-zero variables are indexed by B, that is, for all j ∉ B : x j = 0 {\displaystyle j\not \in B:~~x_{j}=0} . == Properties == 1. A BFS is determined only by the constraints of the LP (the matrix A {\displaystyle A} and the vector b {\displaystyle \mathbf {b} } ); it does not depend on the optimization objective. 2. By definition, a BFS has at most m non-zero variables and at least n-m zero variables. A BFS can have less than m non-zero variables; in that case, it can have many different bases, all of which contain the indices of its non-zero variables. 3. A feasible solution x {\displaystyle \mathbf {x} } is basic if-and-only-if the columns of the matrix A K {\displaystyle A_{K}} are linearly independent, where K is the set of indices of the non-zero elements of x {\displaystyle \mathbf {x} } .: 45  4. 
Each basis determines at most one BFS: for each basis B of m indices, there is at most one BFS x B {\displaystyle \mathbf {x_{B}} } with basis B. This is because x B {\displaystyle \mathbf {x_{B}} } must satisfy the constraint A B x B = b {\displaystyle A_{B}\mathbf {x_{B}} =b} , and by definition of basis the matrix A B {\displaystyle A_{B}} is non-singular, so the constraint has a unique solution: x B = A B − 1 ⋅ b {\displaystyle \mathbf {x_{B}} ={A_{B}}^{-1}\cdot b} The opposite is not true: each BFS can come from many different bases. If the unique solution of x B = A B − 1 ⋅ b {\displaystyle \mathbf {x_{B}} ={A_{B}}^{-1}\cdot b} satisfies the non-negativity constraints x B ≥ 0 {\displaystyle \mathbf {x_{B}} \geq 0} , then B is called a feasible basis. 5. If a linear program has an optimal solution (for example, if it is feasible and its set of feasible solutions is bounded), then it has an optimal BFS. This is a consequence of the Bauer maximum principle: the objective of a linear program is convex; the set of feasible solutions is convex (it is an intersection of hyperplanes and half-spaces); therefore the objective attains its maximum in an extreme point of the set of feasible solutions. Since the number of BFS-s is finite and bounded by ( n m ) {\displaystyle {\binom {n}{m}}} , an optimal solution to any LP can be found in finite time by just evaluating the objective function in all ( n m ) {\displaystyle {\binom {n}{m}}} BFS-s. This is not the most efficient way to solve an LP; the simplex algorithm examines the BFS-s in a much more efficient way.
== Examples == Consider a linear program with the following constraints: x 1 + 5 x 2 + 3 x 3 + 4 x 4 + 6 x 5 = 14 x 2 + 3 x 3 + 5 x 4 + 6 x 5 = 7 ∀ i ∈ { 1 , … , 5 } : x i ≥ 0 {\displaystyle {\begin{aligned}x_{1}+5x_{2}+3x_{3}+4x_{4}+6x_{5}&=14\\x_{2}+3x_{3}+5x_{4}+6x_{5}&=7\\\forall i\in \{1,\ldots ,5\}:x_{i}&\geq 0\end{aligned}}} The matrix A is: A = ( 1 5 3 4 6 0 1 3 5 6 ) b = ( 14 7 ) {\displaystyle A={\begin{pmatrix}1&5&3&4&6\\0&1&3&5&6\end{pmatrix}}~~~~~\mathbf {b} =(14~~7)} Here, m=2 and there are 10 subsets of 2 indices; however, not all of them are bases: the set {3,5} is not a basis since columns 3 and 5 are linearly dependent. The set B={2,4} is a basis, since the matrix A B = ( 5 4 1 5 ) {\displaystyle A_{B}={\begin{pmatrix}5&4\\1&5\end{pmatrix}}} is non-singular. The unique BFS corresponding to this basis is x B = ( 0 2 0 1 0 ) {\displaystyle x_{B}=(0~~2~~0~~1~~0)} . == Geometric interpretation == The set of all feasible solutions is an intersection of hyperplanes and half-spaces. Therefore, it is a convex polyhedron. If it is bounded, then it is a convex polytope. Each BFS corresponds to a vertex of this polytope.: 53–56  == Basic feasible solutions for the dual problem == As mentioned above, every basis B defines a unique basic solution x B = A B − 1 ⋅ b {\displaystyle \mathbf {x_{B}} ={A_{B}}^{-1}\cdot b} . In a similar way, each basis defines a solution to the dual linear program: minimize b T y {\textstyle \mathbf {b^{T}} \mathbf {y} } subject to A T y ≥ c {\displaystyle A^{T}\mathbf {y} \geq \mathbf {c} } . The solution is y B = A B T − 1 ⋅ c {\displaystyle \mathbf {y_{B}} ={A_{B}^{T}}^{-1}\cdot c} . == Finding an optimal BFS == There are several methods for finding a BFS that is also optimal. === Using the simplex algorithm === In practice, the easiest way to find an optimal BFS is to use the simplex algorithm. It keeps, at each point of its execution, a "current basis" B (a subset of m out of n variables), a "current BFS", and a "current tableau".
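The enumeration above can be checked mechanically. The following sketch (plain Python; the data are the A and b of this example, and the helper name solve_2x2 is ad hoc for the illustration) lists all 2-element column subsets, discards the singular ones, and solves A_B x_B = b for each basis by Cramer's rule:

```python
from itertools import combinations

# Constraint data of the example: A x = b, x >= 0, with m = 2 and n = 5.
A = [[1, 5, 3, 4, 6],
     [0, 1, 3, 5, 6]]
b = [14, 7]

def solve_2x2(B):
    """Solve A_B x_B = b for a pair of column indices B (0-indexed).

    Returns the two basic values, or None if A_B is singular
    (i.e. B is not a basis). Uses Cramer's rule for the 2x2 system."""
    j, k = B
    det = A[0][j] * A[1][k] - A[0][k] * A[1][j]
    if abs(det) < 1e-12:
        return None
    xj = (b[0] * A[1][k] - A[0][k] * b[1]) / det
    xk = (A[0][j] * b[1] - b[0] * A[1][j]) / det
    return xj, xk

bases, feasible = [], []
for B in combinations(range(5), 2):
    sol = solve_2x2(B)
    if sol is None:
        continue                  # e.g. columns {3,5} (1-indexed) are dependent
    bases.append(B)
    if all(v >= 0 for v in sol):  # nonnegative => basic *feasible* solution
        feasible.append((B, sol))

print(len(bases))        # 9 of the 10 column pairs are bases
print(solve_2x2((1, 3))) # basis {2,4} in 1-indexed notation: (2.0, 1.0)
```

Running the sketch confirms that only the pair {3,5} fails to be a basis, and that the basis B = {2,4} yields the BFS (0, 2, 0, 1, 0) stated above.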
The tableau is a representation of the linear program where the basic variables are expressed in terms of the non-basic ones:: 65  x B = p + Q x N z = z 0 + r T x N {\displaystyle {\begin{aligned}x_{B}&=p+Qx_{N}\\z&=z_{0}+r^{T}x_{N}\end{aligned}}} where x B {\displaystyle x_{B}} is the vector of the m basic variables, x N {\displaystyle x_{N}} is the vector of the n − m non-basic variables, and z {\displaystyle z} is the maximization objective. Since non-basic variables equal 0, the current BFS is p {\displaystyle p} , and the current maximization objective is z 0 {\displaystyle z_{0}} . If all coefficients in r {\displaystyle r} are nonpositive, then z 0 {\displaystyle z_{0}} is an optimal value, since all variables (including all non-basic variables) must be at least 0, so the second line implies z ≤ z 0 {\displaystyle z\leq z_{0}} . If some coefficients in r {\displaystyle r} are positive, then it may be possible to increase the maximization target. For example, if x 5 {\displaystyle x_{5}} is non-basic and its coefficient in r {\displaystyle r} is positive, then increasing it above 0 may make z {\displaystyle z} larger. If it is possible to do so without violating other constraints, then the increased variable becomes basic (it "enters the basis"), while some basic variable is decreased to 0 to keep the equality constraints and thus becomes non-basic (it "exits the basis"). If this process is done carefully, then it is possible to guarantee that z {\displaystyle z} increases until it reaches an optimal BFS. === Converting any optimal solution to an optimal BFS === In the worst case, the simplex algorithm may require exponentially many steps to complete. There are algorithms for solving an LP in weakly-polynomial time, such as the ellipsoid method; however, they usually return optimal solutions that are not basic. However, given any optimal solution to the LP, it is easy to find an optimal feasible solution that is also basic (see also "External links" below).
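The entering/leaving mechanics just described can be sketched in a few dozen lines. The implementation below is a minimal dense-tableau version in plain Python, written for exposition rather than robustness (no anti-cycling rule, and no row interchanges during the initial canonicalization); the objective c is illustrative, while A, b, and the starting basis come from the example in the previous section.

```python
def simplex(A, b, c, basis, tol=1e-9):
    """Maximize c.x subject to A x = b, x >= 0, from a feasible starting basis.

    `basis` holds the m column indices of the starting basis; the tableau is
    first put in canonical form (basis columns reduced to the identity), which
    is what makes the reduced-cost formula below valid."""
    m, n = len(A), len(A[0])
    T = [row[:] + [rhs] for row, rhs in zip(A, b)]   # tableau [A | b]

    def pivot(p, j):
        """Scale row p so column j has a 1 there, and clear it elsewhere."""
        piv = T[p][j]
        T[p] = [v / piv for v in T[p]]
        for i in range(m):
            if i != p:
                f = T[i][j]
                T[i] = [T[i][k] - f * T[p][k] for k in range(n + 1)]

    for i, j in enumerate(basis):        # canonicalize the basis columns
        pivot(i, j)
    while True:
        cb = [c[j] for j in basis]
        # reduced costs r_j = c_j - cb . (canonical column j)
        r = [c[j] - sum(cb[i] * T[i][j] for i in range(m)) for j in range(n)]
        j = max(range(n), key=lambda k: r[k])
        if r[j] <= tol:                  # no improving column: current BFS optimal
            x = [0.0] * n
            for i, bi in enumerate(basis):
                x[bi] = T[i][n]
            return x
        ratios = [(T[i][n] / T[i][j], i) for i in range(m) if T[i][j] > tol]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, p = min(ratios)               # leaving row by the minimum-ratio test
        pivot(p, j)
        basis[p] = j                     # the entering variable joins the basis

# Data from the example section; maximizing x1 is an illustrative objective,
# and [1, 3] is the (0-indexed) feasible basis {2, 4} with BFS (0, 2, 0, 1, 0).
A = [[1, 5, 3, 4, 6],
     [0, 1, 3, 5, 6]]
b = [14, 7]
print(simplex(A, b, [1, 0, 0, 0, 0], [1, 3]))   # ~ [8.4, 0.0, 0.0, 1.4, 0.0]
```

One pivot moves the basis from {2, 4} to an optimal one: the entering variable x1 has a positive reduced cost, the minimum-ratio test picks the leaving row, and the next round of reduced costs is entirely nonpositive.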
=== Finding a basis that is both primal-optimal and dual-optimal === A basis B of the LP is called dual-optimal if the solution y B = A B T − 1 ⋅ c {\displaystyle \mathbf {y_{B}} ={A_{B}^{T}}^{-1}\cdot c} is an optimal solution to the dual linear program, that is, it minimizes b T y {\textstyle \mathbf {b^{T}} \mathbf {y} } . In general, a primal-optimal basis is not necessarily dual-optimal, and a dual-optimal basis is not necessarily primal-optimal (in fact, the solution of a primal-optimal basis may even be infeasible for the dual, and vice versa). If x B = A B − 1 ⋅ b {\displaystyle \mathbf {x_{B}} ={A_{B}}^{-1}\cdot b} is an optimal BFS of the primal LP, and y B = A B T − 1 ⋅ c {\displaystyle \mathbf {y_{B}} ={A_{B}^{T}}^{-1}\cdot c} is an optimal BFS of the dual LP, then the basis B is called PD-optimal. Every LP with an optimal solution has a PD-optimal basis, and it is found by the simplex algorithm. However, its run-time is exponential in the worst case. Nimrod Megiddo proved the following theorems: There exists a strongly polynomial time algorithm that inputs an optimal solution to the primal LP and an optimal solution to the dual LP, and returns an optimal basis. If there exists a strongly polynomial time algorithm that inputs an optimal solution to only the primal LP (or only the dual LP) and returns an optimal basis, then there exists a strongly-polynomial time algorithm for solving any linear program (the latter is a famous open problem). Megiddo's algorithms can be executed using a tableau, just like the simplex algorithm. == External links == How to move from an optimal feasible solution to an optimal basic feasible solution. Paul Robin, Operations Research Stack Exchange. == References ==
Wikipedia/Basic_feasible_solution
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. In hierarchical models, the loss function can include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss. == Examples == === Regret === Leonard J. Savage argued that, when using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the circumstances been known and the decision that was in fact taken before they were known. === Quadratic loss function === The use of a quadratic loss function is common, for example when using least squares techniques.
It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is λ ( x ) = C ( t − x ) 2 {\displaystyle \lambda (x)=C(t-x)^{2}\;} for some constant C; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the squared error loss (SEL). Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function. The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used. The quadratic loss assigns more importance to outliers than to the true data due to its square nature, so alternatives like the Huber, Log-Cosh and SMAE losses are used when the data has many large outliers. === 0-1 loss function === In statistics and decision theory, a frequently used loss function is the 0-1 loss function L ( y ^ , y ) = [ y ^ ≠ y ] {\displaystyle L({\hat {y}},y)=\left[{\hat {y}}\neq y\right]} using Iverson bracket notation, i.e. it evaluates to 1 when y ^ ≠ y {\displaystyle {\hat {y}}\neq y} , and 0 otherwise. == Constructing loss and objective functions == In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. 
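Both losses above are one-liners in code. The sketch below (all sample values illustrative) also checks two claims made in this section: the quadratic loss is symmetric about the target, and the constant C has no effect on which candidate estimate is best.

```python
def quadratic_loss(x, t, C=1.0):
    """Squared-error loss: lambda(x) = C * (t - x)**2."""
    return C * (t - x) ** 2

def zero_one_loss(y_hat, y):
    """0-1 loss, the Iverson bracket [y_hat != y]."""
    return int(y_hat != y)

target = 5.0

# Symmetry: an error above the target costs the same as one below it.
print(quadratic_loss(3.0, target), quadratic_loss(7.0, target))   # 4.0 4.0

# The constant C rescales every loss value but never reorders the candidates,
# so it can be ignored (set to 1) when making a decision.
candidates = [4.0, 5.5, 6.0]
best_plain = min(candidates, key=lambda x: quadratic_loss(x, target))
best_scaled = min(candidates, key=lambda x: quadratic_loss(x, target, C=9.0))
print(best_plain, best_plain == best_scaled)                      # 5.5 True

# 0-1 loss: 1 for a misclassification, 0 for a correct one.
print(zero_one_loss("spam", "spam"), zero_one_loss("spam", "ham"))  # 0 1
```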
In other situations, the decision maker’s preference must be elicited and represented by a scalar-valued function (also called a utility function) in a form suitable for optimization — a problem that Ragnar Frisch highlighted in his Nobel Prize lecture. The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences. In particular, Andranik Tangian showed that the most usable objective functions — quadratic and additive — are determined by a few indifference points. He used this property in the models for constructing these objective functions from either ordinal or cardinal data that were elicited through computer-assisted interviews with decision makers. Among other things, he constructed objective functions to optimally distribute budgets for 16 Westphalian universities and the European subsidies for equalizing unemployment rates among 271 German regions. == Expected loss == In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X. === Statistics === Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms. ==== Frequentist expected loss ==== We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the probability distribution, Pθ, of the observed data, X. This is also referred to as the risk function of the decision rule δ and the parameter θ. Here the decision rule depends on the outcome of X. The risk function is given by: R ( θ , δ ) = E θ ⁡ L ( θ , δ ( X ) ) = ∫ X L ( θ , δ ( x ) ) d P θ ( x ) .
{\displaystyle R(\theta ,\delta )=\operatorname {E} _{\theta }L{\big (}\theta ,\delta (X){\big )}=\int _{X}L{\big (}\theta ,\delta (x){\big )}\,\mathrm {d} P_{\theta }(x).} Here, θ is a fixed but possibly unknown state of nature, X is a vector of observations stochastically drawn from a population, E θ {\displaystyle \operatorname {E} _{\theta }} is the expectation over all population values of X, dPθ is a probability measure over the event space of X (parametrized by θ) and the integral is evaluated over the entire support of X. ==== Bayes Risk ==== In a Bayesian approach, the expectation is calculated using the prior distribution π* of the parameter θ: ρ ( π ∗ , a ) = ∫ Θ ∫ X L ( θ , a ( x ) ) d P ( x | θ ) d π ∗ ( θ ) = ∫ X ∫ Θ L ( θ , a ( x ) ) d π ∗ ( θ | x ) d M ( x ) {\displaystyle \rho (\pi ^{*},a)=\int _{\Theta }\int _{\mathbf {X}}L(\theta ,a({\mathbf {x}}))\,\mathrm {d} P({\mathbf {x}}\vert \theta )\,\mathrm {d} \pi ^{*}(\theta )=\int _{\mathbf {X}}\int _{\Theta }L(\theta ,a({\mathbf {x}}))\,\mathrm {d} \pi ^{*}(\theta \vert {\mathbf {x}})\,\mathrm {d} M({\mathbf {x}})} where m(x) is known as the predictive likelihood wherein θ has been "integrated out," π* (θ | x) is the posterior distribution, and the order of integration has been changed. One then should choose the action a* which minimises this expected loss, which is referred to as Bayes Risk. In the latter equation, the integrand inside dx is known as the Posterior Risk, and minimising it with respect to decision a also minimizes the overall Bayes Risk. This optimal decision, a* is known as the Bayes (decision) Rule - it minimises the average loss over all possible states of nature θ, over all possible (probability-weighted) data outcomes. 
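For a discrete toy problem the Bayes rule can be found by brute force. The sketch below (the two-point prior, the likelihood values, and the action grid are all illustrative) minimizes the posterior expected squared-error loss and recovers the posterior mean:

```python
# Two possible states of nature theta with a uniform prior, and the
# likelihood P(x | theta) of one observed data point x (values illustrative).
prior = {0.0: 0.5, 1.0: 0.5}
likelihood = {0.0: 0.2, 1.0: 0.6}

# Posterior pi*(theta | x) via Bayes' theorem; m_x is the predictive likelihood.
m_x = sum(prior[t] * likelihood[t] for t in prior)
posterior = {t: prior[t] * likelihood[t] / m_x for t in prior}

def posterior_risk(a):
    """Posterior expected squared-error loss of choosing the estimate a."""
    return sum(posterior[t] * (t - a) ** 2 for t in posterior)

# Brute-force the Bayes action over a grid of candidate estimates.
actions = [i / 100 for i in range(101)]
bayes_action = min(actions, key=posterior_risk)
posterior_mean = sum(t * posterior[t] for t in posterior)

print(round(bayes_action, 3), round(posterior_mean, 3))   # 0.75 0.75
```

Minimizing the posterior risk and computing the posterior mean give the same answer, which is exactly the squared-error case discussed in the examples that follow.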
One advantage of the Bayesian approach is that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the actual frequentist optimal decision rule, which is a function of all possible observations, is a much more difficult problem. Of equal importance though, the Bayes Rule reflects consideration of loss outcomes under different states of nature, θ. ==== Examples in statistics ==== For a scalar parameter θ, a decision function whose output θ ^ {\displaystyle {\hat {\theta }}} is an estimate of θ, and a quadratic loss function (squared error loss) L ( θ , θ ^ ) = ( θ − θ ^ ) 2 , {\displaystyle L(\theta ,{\hat {\theta }})=(\theta -{\hat {\theta }})^{2},} the risk function becomes the mean squared error of the estimate, R ( θ , θ ^ ) = E θ ⁡ [ ( θ − θ ^ ) 2 ] . {\displaystyle R(\theta ,{\hat {\theta }})=\operatorname {E} _{\theta }\left[(\theta -{\hat {\theta }})^{2}\right].} An estimator found by minimizing the mean squared error estimates the posterior distribution's mean. In density estimation, the unknown parameter is the probability density itself. The loss function is typically chosen to be a norm in an appropriate function space. For example, for the L2 norm, L ( f , f ^ ) = ‖ f − f ^ ‖ 2 2 , {\displaystyle L(f,{\hat {f}})=\|f-{\hat {f}}\|_{2}^{2}\,,} the risk function becomes the mean integrated squared error R ( f , f ^ ) = E ⁡ ( ‖ f − f ^ ‖ 2 ) . {\displaystyle R(f,{\hat {f}})=\operatorname {E} \left(\|f-{\hat {f}}\|^{2}\right).\,} === Economic choice under uncertainty === In economics, decision-making under uncertainty is often modelled using the von Neumann–Morgenstern utility function of the uncertain variable of interest, such as end-of-period wealth. Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized. == Decision rules == A decision rule makes a choice using an optimality criterion.
Some commonly used criteria are: Minimax: Choose the decision rule with the lowest worst loss — that is, minimize the worst-case (maximum possible) loss: a r g m i n δ max θ ∈ Θ R ( θ , δ ) . {\displaystyle {\underset {\delta }{\operatorname {arg\,min} }}\ \max _{\theta \in \Theta }\ R(\theta ,\delta ).} Invariance: Choose the decision rule which satisfies an invariance requirement. Choose the decision rule with the lowest average loss (i.e. minimize the expected value of the loss function): a r g m i n δ E θ ∈ Θ ⁡ [ R ( θ , δ ) ] = a r g m i n δ ∫ θ ∈ Θ R ( θ , δ ) p ( θ ) d θ . {\displaystyle {\underset {\delta }{\operatorname {arg\,min} }}\operatorname {E} _{\theta \in \Theta }[R(\theta ,\delta )]={\underset {\delta }{\operatorname {arg\,min} }}\ \int _{\theta \in \Theta }R(\theta ,\delta )\,p(\theta )\,d\theta .} == Selecting a loss function == Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances. A common example involves estimating "location". Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent is risk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. 
For risk-averse or risk-loving agents, loss is measured as the negative of a utility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for example mortality or morbidity in the field of public health or safety engineering.

For most optimization algorithms, it is desirable to have a loss function that is globally continuous and differentiable. Two very commonly used loss functions are the squared loss, $L(a) = a^2$, and the absolute loss, $L(a) = |a|$. However, the absolute loss has the disadvantage that it is not differentiable at $a = 0$. The squared loss has the disadvantage that it tends to be dominated by outliers: when summing over a set of $a$'s (as in $\sum_{i=1}^{n} L(a_i)$), the final sum tends to be the result of a few particularly large $a$-values, rather than an expression of the average $a$-value.

The choice of a loss function is not arbitrary. It is very restrictive, and sometimes the loss function may be characterized by its desirable properties. Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of i.i.d. observations, the principle of complete information, and some others.

W. Edwards Deming and Nassim Nicholas Taleb argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice: they are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after cannot, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early.
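The contrast between squared and absolute loss discussed above can be checked numerically. The sketch below (illustrative only; the sample data are made up) verifies over a grid of candidate location estimates that the sample mean minimizes total squared loss and the sample median minimizes total absolute loss, and shows how a single outlier dominates a squared-loss sum.

```python
# Squared vs. absolute loss on a small, made-up sample.
data = [1.0, 2.0, 2.0, 3.0, 10.0]

def total_squared_loss(c, xs):
    # Sum of squared errors (x - c)^2 over the sample.
    return sum((x - c) ** 2 for x in xs)

def total_absolute_loss(c, xs):
    # Sum of absolute errors |x - c| over the sample.
    return sum(abs(x - c) for x in xs)

mean = sum(data) / len(data)            # 3.6
median = sorted(data)[len(data) // 2]   # 2.0

# Scan a fine grid of candidate location estimates c in [0, 12).
grid = [i / 100 for i in range(1200)]
best_sq = min(grid, key=lambda c: total_squared_loss(c, data))
best_abs = min(grid, key=lambda c: total_absolute_loss(c, data))
# best_sq coincides with the mean; best_abs with the median.

# Outlier dominance of squared loss: 99 small residuals and one large one.
residuals = [1.0] * 99 + [100.0]
outlier_share_sq = 100.0 ** 2 / sum(r ** 2 for r in residuals)  # 10000/10099
outlier_share_abs = 100.0 / sum(abs(r) for r in residuals)      # 100/199
```

Under squared loss the single outlier accounts for roughly 99% of the total, while under absolute loss it accounts for about half, which is the sense in which a squared-loss sum "tends to be the result of a few particularly large values".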
In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than the classical smooth, continuous, symmetric, differentiable cases.

== See also ==

Bayesian regret
Loss functions for classification
Discounted maximum loss
Hinge loss
Scoring rule
Statistical risk

== References ==

== Further reading ==

Aretz, Kevin; Bartram, Söhnke M.; Pope, Peter F. (April–June 2011). "Asymmetric Loss Functions and the Rationality of Expected Stock Returns" (PDF). International Journal of Forecasting. 27 (2): 413–437. doi:10.1016/j.ijforecast.2009.10.008. SSRN 889323.
Berger, James O. (1985). Statistical Decision Theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. Bibcode:1985sdtb.book.....B. ISBN 978-0-387-96098-2. MR 0804611.
Cecchetti, S. (2000). "Making monetary policy: Objectives and rules". Oxford Review of Economic Policy. 16 (4): 43–59. doi:10.1093/oxrep/16.4.43.
Horowitz, Ann R. (1987). "Loss functions and public policy". Journal of Macroeconomics. 9 (4): 489–504. doi:10.1016/0164-0704(87)90016-4.
Waud, Roger N. (1976). "Asymmetric Policymaker Utility Functions and Optimal Policy under Uncertainty". Econometrica. 44 (1): 53–66. doi:10.2307/1911380. JSTOR 1911380.
Wikipedia/Loss_Functions
In mathematics, integrability is a property of certain dynamical systems. While there are several distinct formal definitions, informally speaking, an integrable system is a dynamical system with sufficiently many conserved quantities, or first integrals, that its motion is confined to a submanifold of much smaller dimensionality than that of its phase space. Three features are often referred to as characterizing integrable systems:

- the existence of a maximal set of conserved quantities (the usual defining property of complete integrability)
- the existence of algebraic invariants, having a basis in algebraic geometry (a property known sometimes as algebraic integrability)
- the explicit determination of solutions in an explicit functional form (not an intrinsic property, but something often referred to as solvability)

Integrable systems may be seen as very different in qualitative character from more generic dynamical systems, which are more typically chaotic systems. The latter generally have no conserved quantities, and are asymptotically intractable, since an arbitrarily small perturbation in initial conditions may lead to arbitrarily large deviations in their trajectories over a sufficiently large time.

Many systems studied in physics are completely integrable, in particular, in the Hamiltonian sense, the key example being multi-dimensional harmonic oscillators. Another standard example is planetary motion about either one fixed center (e.g., the sun) or two. Other elementary examples include the motion of a rigid body about its center of mass (the Euler top) and the motion of an axially symmetric rigid body about a point on its axis of symmetry (the Lagrange top).
In the late 1960s, it was realized that there are completely integrable systems in physics having an infinite number of degrees of freedom, such as some models of shallow water waves (Korteweg–de Vries equation), the Kerr effect in optical fibres, described by the nonlinear Schrödinger equation, and certain integrable many-body systems, such as the Toda lattice. The modern theory of integrable systems was revived with the numerical discovery of solitons by Martin Kruskal and Norman Zabusky in 1965, which led to the inverse scattering transform method in 1967.

In the special case of Hamiltonian systems, if there are enough independent Poisson commuting first integrals for the flow parameters to be able to serve as a coordinate system on the invariant level sets (the leaves of the Lagrangian foliation), and if the flows are complete and the energy level set is compact, the Liouville–Arnold theorem applies, establishing the existence of action-angle variables. General dynamical systems have no such conserved quantities; in the case of autonomous Hamiltonian systems, the energy is generally the only one, and on the energy level sets, the flows are typically chaotic.

A key ingredient in characterizing integrable systems is the Frobenius theorem, which states that a system is Frobenius integrable (i.e., is generated by an integrable distribution) if, locally, it has a foliation by maximal integral manifolds. But integrability, in the sense of dynamical systems, is a global property, not a local one, since it requires that the foliation be a regular one, with the leaves embedded submanifolds.

Integrability does not necessarily imply that generic solutions can be explicitly expressed in terms of some known set of special functions; it is an intrinsic property of the geometry and topology of the system, and the nature of the dynamics.
== General dynamical systems == In the context of differentiable dynamical systems, the notion of integrability refers to the existence of invariant, regular foliations; i.e., ones whose leaves are embedded submanifolds of the smallest possible dimension that are invariant under the flow. There is thus a variable notion of the degree of integrability, depending on the dimension of the leaves of the invariant foliation. This concept has a refinement in the case of Hamiltonian systems, known as complete integrability in the sense of Liouville (see below), which is what is most frequently referred to in this context. An extension of the notion of integrability is also applicable to discrete systems such as lattices. This definition can be adapted to describe evolution equations that either are systems of differential equations or finite difference equations. The distinction between integrable and nonintegrable dynamical systems has the qualitative implication of regular motion vs. chaotic motion and hence is an intrinsic property, not just a matter of whether a system can be explicitly integrated in an exact form. == Hamiltonian systems and Liouville integrability == In the special setting of Hamiltonian systems, we have the notion of integrability in the Liouville sense. (See the Liouville–Arnold theorem.) Liouville integrability means that there exists a regular foliation of the phase space by invariant manifolds such that the Hamiltonian vector fields associated with the invariants of the foliation span the tangent distribution. Another way to state this is that there exists a maximal set of functionally independent Poisson commuting invariants (i.e., independent functions on the phase space whose Poisson brackets with the Hamiltonian of the system, and with each other, vanish). 
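Written out, for a Hamiltonian $H$ on a $2n$-dimensional symplectic phase space, this means there exist $n$ functions $F_1 = H, F_2, \dots, F_n$ satisfying

```latex
\{F_i, F_j\} = 0 \quad (1 \le i, j \le n), \qquad
dF_1 \wedge \cdots \wedge dF_n \neq 0 ,
```

that is, pairwise Poisson commutation together with functional independence (linear independence of the differentials on a dense open set).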
In finite dimensions, if the phase space is symplectic (i.e., the center of the Poisson algebra consists only of constants), it must have even dimension $2n$, and the maximal number of independent Poisson commuting invariants (including the Hamiltonian itself) is $n$. The leaves of the foliation are totally isotropic with respect to the symplectic form, and such a maximal isotropic foliation is called Lagrangian. All autonomous Hamiltonian systems (i.e. those for which the Hamiltonian and Poisson brackets are not explicitly time-dependent) have at least one invariant; namely, the Hamiltonian itself, whose value along the flow is the energy. If the energy level sets are compact, the leaves of the Lagrangian foliation are tori, and the natural linear coordinates on these are called "angle" variables. The cycles of the canonical $1$-form are called the action variables, and the resulting canonical coordinates are called action-angle variables (see below).

There is also a distinction between complete integrability, in the Liouville sense, and partial integrability, as well as a notion of superintegrability and maximal superintegrability. Essentially, these distinctions correspond to the dimensions of the leaves of the foliation. When the number of independent Poisson commuting invariants is less than maximal (but, in the case of autonomous systems, more than one), we say the system is partially integrable. When there exist further functionally independent invariants, beyond the maximal number that can be Poisson commuting, and hence the dimension of the leaves of the invariant foliation is less than $n$, we say the system is superintegrable. If there is a regular foliation with one-dimensional leaves (curves), this is called maximally superintegrable.
== Action-angle variables == When a finite-dimensional Hamiltonian system is completely integrable in the Liouville sense, and the energy level sets are compact, the flows are complete, and the leaves of the invariant foliation are tori. There then exist, as mentioned above, special sets of canonical coordinates on the phase space known as action-angle variables, such that the invariant tori are the joint level sets of the action variables. These thus provide a complete set of invariants of the Hamiltonian flow (constants of motion), and the angle variables are the natural periodic coordinates on the tori. The motion on the invariant tori, expressed in terms of these canonical coordinates, is linear in the angle variables. == The Hamilton–Jacobi approach == In canonical transformation theory, there is the Hamilton–Jacobi method, in which solutions to Hamilton's equations are sought by first finding a complete solution of the associated Hamilton–Jacobi equation. In classical terminology, this is described as determining a transformation to a canonical set of coordinates consisting of completely ignorable variables; i.e., those in which there is no dependence of the Hamiltonian on a complete set of canonical "position" coordinates, and hence the corresponding canonically conjugate momenta are all conserved quantities. In the case of compact energy level sets, this is the first step towards determining the action-angle variables. In the general theory of partial differential equations of Hamilton–Jacobi type, a complete solution (i.e. one that depends on n independent constants of integration, where n is the dimension of the configuration space), exists in very general cases, but only in the local sense. Therefore, the existence of a complete solution of the Hamilton–Jacobi equation is by no means a characterization of complete integrability in the Liouville sense. 
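For reference, the action-angle construction described above can be summarized in symbols: with $\gamma_1, \dots, \gamma_n$ a basis of cycles on an invariant torus, the action variables and the resulting linear motion in the conjugate angles are

```latex
I_i = \frac{1}{2\pi} \oint_{\gamma_i} \sum_j p_j \, dq_j , \qquad
\dot{\theta}_i = \frac{\partial H(I)}{\partial I_i} = \omega_i(I) , \qquad
\theta_i(t) = \theta_i(0) + \omega_i(I) \, t .
```

Since $H$ depends on the action variables alone, the frequencies $\omega_i(I)$ are constant on each invariant torus.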
Most cases that can be "explicitly integrated" involve a complete separation of variables, in which the separation constants provide the complete set of integration constants that are required. Only when these constants can be reinterpreted, within the full phase space setting, as the values of a complete set of Poisson commuting functions restricted to the leaves of a Lagrangian foliation, can the system be regarded as completely integrable in the Liouville sense. == Solitons and inverse spectral methods == A resurgence of interest in classical integrable systems came with the discovery, in the late 1960s, that solitons, which are strongly stable, localized solutions of partial differential equations like the Korteweg–de Vries equation (which describes 1-dimensional non-dissipative fluid dynamics in shallow basins), could be understood by viewing these equations as infinite-dimensional integrable Hamiltonian systems. Their study leads to a very fruitful approach for "integrating" such systems, the inverse scattering transform and more general inverse spectral methods (often reducible to Riemann–Hilbert problems), which generalize local linear methods like Fourier analysis to nonlocal linearization, through the solution of associated integral equations. The basic idea of this method is to introduce a linear operator that is determined by the position in phase space and which evolves under the dynamics of the system in question in such a way that its "spectrum" (in a suitably generalized sense) is invariant under the evolution, cf. Lax pair. This provides, in certain cases, enough invariants, or "integrals of motion" to make the system completely integrable. In the case of systems having an infinite number of degrees of freedom, such as the KdV equation, this is not sufficient to make precise the property of Liouville integrability. 
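The invariance of the spectrum mentioned above can be stated compactly in terms of a Lax pair $(L, M)$: the evolution equation is rewritten as

```latex
\frac{dL}{dt} = [M, L] := ML - LM ,
```

which implies $L(t) = U(t)\, L(0)\, U(t)^{-1}$, where $U$ solves $\dot{U} = MU$, $U(0) = I$. The spectrum of $L$ is therefore conserved along the flow, and quantities such as $\operatorname{tr} L^k$ furnish integrals of motion.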
However, for suitably defined boundary conditions, the spectral transform can, in fact, be interpreted as a transformation to completely ignorable coordinates, in which the conserved quantities form half of a doubly infinite set of canonical coordinates, and the flow linearizes in these. In some cases, this may even be seen as a transformation to action-angle variables, although typically only a finite number of the "position" variables are actually angle coordinates, and the rest are noncompact. == Hirota bilinear equations and τ-functions == Another viewpoint that arose in the modern theory of integrable systems originated in a calculational approach pioneered by Ryogo Hirota, which involved replacing the original nonlinear dynamical system with a bilinear system of constant coefficient equations for an auxiliary quantity, which later came to be known as the τ-function. These are now referred to as the Hirota equations. Although originally appearing just as a calculational device, without any clear relation to the inverse scattering approach, or the Hamiltonian structure, this nevertheless gave a very direct method from which important classes of solutions such as solitons could be derived. Subsequently, this was interpreted by Mikio Sato and his students, at first for the case of integrable hierarchies of PDEs, such as the Kadomtsev–Petviashvili hierarchy, but then for much more general classes of integrable hierarchies, as a sort of universal phase space approach, in which, typically, the commuting dynamics were viewed simply as determined by a fixed (finite or infinite) abelian group action on a (finite or infinite) Grassmann manifold. 
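As a concrete illustration of Hirota's device (in one common normalization; sign and scaling conventions vary between references), substituting $u = 2\,\partial_x^2 \log \tau$ into the KdV equation $u_t + 6 u u_x + u_{xxx} = 0$ reduces it to the constant-coefficient bilinear form

```latex
\left( D_x^4 + D_x D_t \right) \tau \cdot \tau = 0 , \qquad
D_x^m D_t^n \, a \cdot b
  := \left. (\partial_x - \partial_{x'})^m (\partial_t - \partial_{t'})^n
     \, a(x, t) \, b(x', t') \right|_{x' = x, \; t' = t} .
```

A one-soliton solution then corresponds to the very simple τ-function $\tau = 1 + e^{k x - k^3 t + \delta}$.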
The τ-function was viewed as the determinant of a projection operator from elements of the group orbit to some origin within the Grassmannian, and the Hirota equations as expressing the Plücker relations, characterizing the Plücker embedding of the Grassmannian in the projectivization of a suitably defined (infinite) exterior space, viewed as a fermionic Fock space. == Quantum integrable systems == There is also a notion of quantum integrable systems. In the quantum setting, functions on phase space must be replaced by self-adjoint operators on a Hilbert space, and the notion of Poisson commuting functions replaced by commuting operators. The notion of conservation laws must be specialized to local conservation laws. Every Hamiltonian has an infinite set of conserved quantities given by projectors to its energy eigenstates. However, this does not imply any special dynamical structure. To explain quantum integrability, it is helpful to consider the free particle setting. Here all dynamics are one-body reducible. A quantum system is said to be integrable if the dynamics are two-body reducible. The Yang–Baxter equation is a consequence of this reducibility and leads to trace identities which provide an infinite set of conserved quantities. All of these ideas are incorporated into the quantum inverse scattering method where the algebraic Bethe ansatz can be used to obtain explicit solutions. Examples of quantum integrable models are the Lieb–Liniger model, the Hubbard model and several variations on the Heisenberg model. Some other types of quantum integrability are known in explicitly time-dependent quantum problems, such as the driven Tavis-Cummings model. == Exactly solvable models == In physics, completely integrable systems, especially in the infinite-dimensional setting, are often referred to as exactly solvable models. This obscures the distinction between integrability, in the Hamiltonian sense, and the more general dynamical systems sense. 
There are also exactly solvable models in statistical mechanics, which are more closely related to quantum integrable systems than classical ones. Two closely related methods, the Bethe ansatz approach in its modern sense (based on the Yang–Baxter equations) and the quantum inverse scattering method, provide quantum analogs of the inverse spectral methods. These are equally important in the study of solvable models in statistical mechanics.

An imprecise notion of "exact solvability" as meaning "the solutions can be expressed explicitly in terms of some previously known functions" is also sometimes used, as though this were an intrinsic property of the system itself, rather than the purely calculational feature that we happen to have some "known" functions available in terms of which the solutions may be expressed. This notion has no intrinsic meaning, since what is meant by "known" functions very often is defined precisely by the fact that they satisfy certain given equations, and the list of such "known functions" is constantly growing. Although such a characterization of "integrability" has no intrinsic validity, it often implies the sort of regularity that is to be expected in integrable systems.
== List of some well-known integrable systems ==

Classical mechanical systems
- Calogero–Moser–Sutherland model
- Central force motion (exact solutions of classical central-force problems)
- Geodesic motion on ellipsoids
- Harmonic oscillator
- Integrable Clebsch and Steklov systems in fluids
- Lagrange, Euler, and Kovalevskaya tops
- Neumann oscillator
- Two center Newtonian gravitational motion

Integrable lattice models
- Ablowitz–Ladik lattice
- Toda lattice
- Volterra lattice

Integrable systems in 1 + 1 dimensions
- AKNS system
- Benjamin–Ono equation
- Boussinesq equation (water waves)
- Camassa–Holm equation
- Classical Heisenberg ferromagnet model (spin chain)
- Degasperis–Procesi equation
- Dym equation
- Garnier integrable system
- Kaup–Kupershmidt equation
- Krichever–Novikov equation
- Korteweg–de Vries equation
- Landau–Lifshitz equation (continuous spin field)
- Nonlinear Schrödinger equation
- Nonlinear sigma models
- Sine–Gordon equation
- Thirring model
- Three-wave equation

Integrable PDEs in 2 + 1 dimensions
- Davey–Stewartson equation
- Ishimori equation
- Kadomtsev–Petviashvili equation
- Novikov–Veselov equation

Integrable PDEs in 3 + 1 dimensions
- The Belinski–Zakharov transform generates a Lax pair for the Einstein field equations; general solutions are termed gravitational solitons, of which the Schwarzschild metric, the Kerr metric and some gravitational wave solutions are examples.

Exactly solvable statistical lattice models
- 8-vertex model
- Gaudin model
- Ising model in 1 and 2 dimensions
- Ice-type model of Lieb
- Quantum Heisenberg model

== See also ==

Hitchin system
Pentagram map

=== Related areas ===

Mathematical physics
Soliton
Painlevé transcendents
Statistical mechanics
Integrable algorithm

=== Some key contributors (since 1965) ===

== References ==

Arnold, V.I. (1997). Mathematical Methods of Classical Mechanics (2nd ed.). Springer. ISBN 978-0-387-96890-2.
Audin, M. (1996). Spinning Tops: A Course on Integrable Systems. Cambridge Studies in Advanced Mathematics. Vol. 51. Cambridge University Press. ISBN 978-0521779197.
Babelon, O.; Bernard, D.; Talon, M. (2003). Introduction to Classical Integrable Systems. Cambridge University Press. doi:10.1017/CBO9780511535024. ISBN 0-521-82267-X.
Baxter, R.J. (1982). Exactly Solved Models in Statistical Mechanics. Academic Press. ISBN 978-0-12-083180-7.
Dunajski, M. (2009). Solitons, Instantons and Twistors. Oxford University Press. ISBN 978-0-19-857063-9.
Faddeev, L.D.; Takhtajan, L.A. (1987). Hamiltonian Methods in the Theory of Solitons. Addison-Wesley. ISBN 978-0-387-15579-1.
Fomenko, A.T. (1995). Symplectic Geometry. Methods and Applications (2nd ed.). Gordon and Breach. ISBN 978-2-88124-901-3.
Fomenko, A.T.; Bolsinov, A.V. (2003). Integrable Hamiltonian Systems: Geometry, Topology, Classification. Taylor and Francis. ISBN 978-0-415-29805-6.
Goldstein, H. (1980). Classical Mechanics (2nd ed.). Addison-Wesley. ISBN 0-201-02918-9.
Harnad, J.; Winternitz, P.; Sabidussi, G., eds. (2000). Integrable Systems: From Classical to Quantum. American Mathematical Society. ISBN 0-8218-2093-1.
Harnad, J.; Balogh, F. (2021). Tau Functions and Their Applications. Cambridge Monographs on Mathematical Physics. Cambridge University Press. doi:10.1017/9781108610902. ISBN 9781108492683. S2CID 222379146.
Hietarinta, J.; Joshi, N.; Nijhoff, F. (2016). Discrete Systems and Integrability. Cambridge University Press. Bibcode:2016dsi..book.....H. doi:10.1017/CBO9781107337411. ISBN 978-1-107-04272-8.
Korepin, V.E.; Bogoliubov, N.M.; Izergin, A.G. (1997). Quantum Inverse Scattering Method and Correlation Functions. Cambridge University Press. ISBN 978-0-521-58646-7.
Afrajmovich, V.S.; Arnold, V.I.; Il'yashenko, Yu. S.; Shil'nikov, L.P. Dynamical Systems V. Springer. ISBN 3-540-18173-3.
Mussardo, Giuseppe (2010). Statistical Field Theory. An Introduction to Exactly Solved Models of Statistical Physics. Oxford University Press. ISBN 978-0-19-954758-6.
Sardanashvily, G. (2015). Handbook of Integrable Hamiltonian Systems. URSS. ISBN 978-5-396-00687-4.

== Further reading ==

Beilinson, A.; Drinfeld, V. "Quantization of Hitchin's integrable system and Hecke eigensheaves" (PDF).
Donagi, R.; Markman, E. (1996). "Spectral covers, algebraically completely integrable, Hamiltonian systems, and moduli of bundles". Integrable Systems and Quantum Groups. Lecture Notes in Mathematics. Vol. 1620. Springer. pp. 1–119. doi:10.1007/BFb0094792. ISBN 978-3-540-60542-3.
Sonnad, Kiran G.; Cary, John R. (2004). "Finding a nonlinear lattice with improved integrability using Lie transform perturbation theory". Physical Review E. 69 (5): 056501. Bibcode:2004PhRvE..69e6501S. doi:10.1103/PhysRevE.69.056501. PMID 15244955.

== External links ==

"Integrable system", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"SIDE - Symmetries and Integrability of Difference Equations", a conference devoted to the study of integrable difference equations and related topics.

== Notes ==
Wikipedia/Integrable_systems