Bilinear form
In mathematics, a bilinear form is a bilinear map V × V → K on a vector space V (the elements of which are called vectors) over a field K (the elements of which are called scalars). In other words, a bilinear form is a function B : V × V → K that is linear in each argument separately:

B(u + v, w) = B(u, w) + B(v, w) and B(λu, v) = λB(u, v)
B(u, v + w) = B(u, v) + B(u, w) and B(u, λv) = λB(u, v)

The dot product on R^n is an example of a bilinear form which is also an inner product. An example of a bilinear form that is not an inner product would be the four-vector product. The definition of a bilinear form can be extended to include modules over a ring, with linear maps replaced by module homomorphisms. When K is the field of complex numbers C, one is often more interested in sesquilinear forms, which are similar to bilinear forms but are conjugate linear in one argument.

== Coordinate representation ==

Let V be an n-dimensional vector space with basis {e1, …, en}. The n × n matrix A, defined by Aij = B(ei, ej), is called the matrix of the bilinear form on the basis {e1, …, en}. If the n × 1 matrix x represents a vector x with respect to this basis, and similarly the n × 1 matrix y represents another vector y, then:

B(x, y) = x^T A y = ∑_{i,j=1}^{n} x_i A_ij y_j.

A bilinear form has different matrices on different bases. However, the matrices of a bilinear form on different bases are all congruent. More precisely, if {f1, …, fn} is another basis of V, then

f_j = ∑_{i=1}^{n} S_ij e_i,

where the S_ij form an invertible matrix S. Then, the matrix of the bilinear form on the new basis is S^T A S. 
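The coordinate representation and the change-of-basis congruence above can be sketched in a few lines of Python (the matrix A, the basis-change matrix S, and all coordinate vectors below are invented for illustration):

```python
# Bilinear form on R^2 given by its matrix A: B(x, y) = sum_ij x_i A_ij y_j
A = [[1, 2],
     [3, 4]]

def B(x, y, M):
    """Evaluate the bilinear form with matrix M at coordinate vectors x, y."""
    return sum(x[i] * M[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

# Bilinearity in the first argument: B(u + v, w) == B(u, w) + B(v, w)
u, v, w = [1, 0], [0, 1], [2, 5]
assert B([u[0] + v[0], u[1] + v[1]], w, A) == B(u, w, A) + B(v, w, A)

# Change of basis: new basis vectors f_j = sum_i S_ij e_i (columns of S).
S = [[1, 1],
     [0, 1]]

def congruent(M, S):
    """Return S^T M S, the matrix of the same form on the new basis."""
    n = len(S)
    return [[sum(S[k][i] * M[k][l] * S[l][j]
                 for k in range(n) for l in range(n))
             for j in range(n)] for i in range(n)]

A_new = congruent(A, S)
# Evaluating on coordinates relative to the new basis agrees with
# converting the coordinates back to the old basis first.
xf, yf = [1, 2], [3, 1]                                   # coords w.r.t. {f1, f2}
xe = [sum(S[i][j] * xf[j] for j in range(2)) for i in range(2)]
ye = [sum(S[i][j] * yf[j] for j in range(2)) for i in range(2)]
assert B(xf, yf, A_new) == B(xe, ye, A)
```

The final assertion is exactly the congruence statement: x^T (S^T A S) y computed in the new coordinates equals (Sx)^T A (Sy) computed in the old ones.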
== Properties ==

=== Non-degenerate bilinear forms ===

Every bilinear form B on V defines a pair of linear maps from V to its dual space V∗. Define B1, B2 : V → V∗ by

B1(v)(w) = B(v, w) and B2(v)(w) = B(w, v).

This is often denoted as

B1(v) = B(v, ⋅) and B2(v) = B(⋅, v),

where the dot ( ⋅ ) indicates the slot into which the argument for the resulting linear functional is to be placed (see Currying). For a finite-dimensional vector space V, if either of B1 or B2 is an isomorphism, then both are, and the bilinear form B is said to be nondegenerate. More concretely, for a finite-dimensional vector space, nondegenerate means that every non-zero element pairs non-trivially with some other element: B(x, y) = 0 for all y ∈ V implies that x = 0, and B(x, y) = 0 for all x ∈ V implies that y = 0.

The corresponding notion for a module over a commutative ring is that a bilinear form is unimodular if V → V∗ is an isomorphism. Given a finitely generated module over a commutative ring, the pairing may be injective (hence "nondegenerate" in the above sense) but not unimodular. For example, over the integers, the pairing B(x, y) = 2xy is nondegenerate but not unimodular, as the induced map from V = Z to V∗ = Z is multiplication by 2.

If V is finite-dimensional then one can identify V with its double dual V∗∗. One can then show that B2 is the transpose of the linear map B1 (if V is infinite-dimensional then B2 is the transpose of B1 restricted to the image of V in V∗∗). Given B, one can define the transpose of B to be the bilinear form

tB(v, w) = B(w, v).

The left radical and right radical of the form B are the kernels of B1 and B2 respectively; they are the vectors orthogonal to the whole space on the left and on the right. If V is finite-dimensional then the rank of B1 is equal to the rank of B2. If this number is equal to dim(V) then B1 and B2 are linear isomorphisms from V to V∗. In this case B is nondegenerate. 
By the rank–nullity theorem, this is equivalent to the condition that the left and, equivalently, right radicals be trivial. For finite-dimensional spaces, this is often taken as the definition of nondegeneracy. Given any linear map A : V → V∗ one can obtain a bilinear form B on V via

B(v, w) = A(v)(w).

This form will be nondegenerate if and only if A is an isomorphism. If V is finite-dimensional then, relative to some basis for V, a bilinear form is degenerate if and only if the determinant of the associated matrix is zero. Likewise, a nondegenerate form is one for which the determinant of the associated matrix is non-zero (the matrix is non-singular). These statements are independent of the chosen basis. For a module over a commutative ring, a unimodular form is one for which the determinant of the associated matrix is a unit (for example 1), hence the term; note that a form whose matrix determinant is non-zero but not a unit will be nondegenerate but not unimodular, for example B(x, y) = 2xy over the integers.

=== Symmetric, skew-symmetric, and alternating forms ===

We define a bilinear form to be
symmetric if B(v, w) = B(w, v) for all v, w in V;
alternating if B(v, v) = 0 for all v in V;
skew-symmetric or antisymmetric if B(v, w) = −B(w, v) for all v, w in V.

Proposition: Every alternating form is skew-symmetric.
Proof: This can be seen by expanding B(v + w, v + w).

If the characteristic of K is not 2 then the converse is also true: every skew-symmetric form is alternating. However, if char(K) = 2 then a skew-symmetric form is the same as a symmetric form and there exist symmetric/skew-symmetric forms that are not alternating. A bilinear form is symmetric (respectively skew-symmetric) if and only if its coordinate matrix (relative to any basis) is symmetric (respectively skew-symmetric). A bilinear form is alternating if and only if its coordinate matrix is skew-symmetric and the diagonal entries are all zero (which follows from skew-symmetry when char(K) ≠ 2). 
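These matrix criteria (nondegeneracy via the determinant, and symmetry, skew-symmetry, and alternation read off the coordinate matrix) can be checked directly; a small Python sketch with invented example matrices:

```python
# Nondegeneracy test via the determinant of the form's matrix (2x2 case).
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A_nondeg = [[1, 2], [3, 4]]   # det = -2 != 0  ->  nondegenerate
A_degen  = [[1, 2], [2, 4]]   # det = 0        ->  degenerate
assert det2(A_nondeg) != 0 and det2(A_degen) == 0

# Symmetric, skew-symmetric, and alternating, read off the coordinate matrix.
def is_symmetric(M):
    n = len(M)
    return all(M[i][j] == M[j][i] for i in range(n) for j in range(n))

def is_skew(M):
    n = len(M)
    return all(M[i][j] == -M[j][i] for i in range(n) for j in range(n))

def is_alternating(M):
    # skew-symmetric with zero diagonal (over fields of char != 2 this
    # is equivalent to skew-symmetry alone)
    return is_skew(M) and all(M[i][i] == 0 for i in range(len(M)))

K = [[0, 5], [-5, 0]]
assert is_skew(K) and is_alternating(K)
assert is_symmetric([[1, 2], [2, 4]])
```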
A bilinear form is symmetric if and only if the maps B1, B2 : V → V∗ are equal, and skew-symmetric if and only if they are negatives of one another. If char(K) ≠ 2 then one can decompose a bilinear form into a symmetric and a skew-symmetric part as follows

B⁺ = (1/2)(B + tB),  B⁻ = (1/2)(B − tB),

where tB is the transpose of B (defined above).

=== Reflexive bilinear forms and orthogonal vectors ===

A bilinear form B is reflexive if and only if it is either symmetric or alternating. In the absence of reflexivity we have to distinguish left and right orthogonality. In a reflexive space the left and right radicals agree and are termed the kernel or the radical of the bilinear form: the subspace of all vectors orthogonal to every other vector. A vector v, with matrix representation x, is in the radical of a bilinear form with matrix representation A if and only if Ax = 0 ⇔ x^T A = 0. The radical is always a subspace of V. It is trivial if and only if the matrix A is nonsingular, and thus if and only if the bilinear form is nondegenerate.

Suppose W is a subspace. Define the orthogonal complement

W⊥ = { v ∣ B(v, w) = 0 for all w ∈ W }.

For a non-degenerate form on a finite-dimensional space, the restriction map V → W∗, v ↦ B(v, ⋅)|W, is surjective with kernel W⊥, so the dimension of W⊥ is dim(V) − dim(W).

=== Bounded and elliptic bilinear forms ===

Definition: A bilinear form on a normed vector space (V, ‖⋅‖) is bounded if there is a constant C such that for all u, v ∈ V,

B(u, v) ≤ C ‖u‖ ‖v‖.

Definition: A bilinear form on a normed vector space (V, ‖⋅‖) is elliptic, or coercive, if there is a constant c > 0 such that for all u ∈ V,

B(u, u) ≥ c ‖u‖².

== Associated quadratic form ==

For any bilinear form B : V × V → K, there exists an associated quadratic form Q : V → K defined by Q : V → K : v ↦ B(v, v). When char(K) ≠ 2, the quadratic form Q is determined by the symmetric part of the bilinear form B and is independent of the antisymmetric part. In this case there is a one-to-one correspondence between the symmetric part of the bilinear form and the quadratic form, and it makes sense to speak of the symmetric bilinear form associated with a quadratic form. When char(K) = 2 and dim V > 1, this correspondence between quadratic forms and symmetric bilinear forms breaks down.

== Relation to tensor products ==

By the universal property of the tensor product, there is a canonical correspondence between bilinear forms on V and linear maps V ⊗ V → K. If B is a bilinear form on V, the corresponding linear map is given by

v ⊗ w ↦ B(v, w).

In the other direction, if F : V ⊗ V → K is a linear map, the corresponding bilinear form is given by composing F with the bilinear map V × V → V ⊗ V that sends (v, w) to v ⊗ w. The set of all linear maps V ⊗ V → K is the dual space of V ⊗ V, so bilinear forms may be thought of as elements of (V ⊗ V)∗, which (when V is finite-dimensional) is canonically isomorphic to V∗ ⊗ V∗. Likewise, symmetric bilinear forms may be thought of as elements of (Sym2V)∗ (the dual of the second symmetric power of V) and alternating bilinear forms as elements of (Λ2V)∗ ≃ Λ2V∗ (the second exterior power of V∗). If char(K) ≠ 2, (Sym2V)∗ ≃ Sym2(V∗).

== Generalizations ==

=== Pairs of distinct vector spaces ===

Much of the theory is available for a bilinear mapping from two vector spaces over the same base field to that field:

B : V × W → K.

Here we still have induced linear mappings from V to W∗, and from W to V∗. It may happen that these mappings are isomorphisms; assuming finite dimensions, if one is an isomorphism, the other must be. 
When this occurs, B is said to be a perfect pairing. In finite dimensions, this is equivalent to the pairing being nondegenerate (the spaces necessarily having the same dimensions). For modules (instead of vector spaces), just as a nondegenerate form is weaker than a unimodular form, a nondegenerate pairing is a weaker notion than a perfect pairing. A pairing can be nondegenerate without being a perfect pairing; for instance, Z × Z → Z via (x, y) ↦ 2xy is nondegenerate, but the induced map Z → Z∗ is multiplication by 2.

Terminology varies in coverage of bilinear forms. For example, F. Reese Harvey discusses "eight types of inner product". To define them he uses diagonal matrices Aij having only +1 or −1 for non-zero elements. Some of the "inner products" are symplectic forms and some are sesquilinear forms or Hermitian forms. Rather than a general field K, the instances with real numbers R, complex numbers C, and quaternions H are spelled out. The bilinear form

∑_{k=1}^{p} x_k y_k − ∑_{k=p+1}^{n} x_k y_k

is called the real symmetric case and labeled R(p, q), where p + q = n. Then he articulates the connection to traditional terminology: some of the real symmetric cases are very important. The positive definite case R(n, 0) is called Euclidean space, while the case of a single minus, R(n − 1, 1), is called Lorentzian space. If n = 4, then Lorentzian space is also called Minkowski space or Minkowski spacetime. The special case R(p, p) will be referred to as the split-case.

=== General modules ===

Given a ring R, a right R-module M and its dual module M∗, a mapping B : M∗ × M → R is called a bilinear form if

B(u + v, x + y) = B(u, x) + B(u, y) + B(v, x) + B(v, y)
B(αu, xβ) = αB(u, x)β

for all u, v ∈ M∗, all x, y ∈ M and all α, β ∈ R. The mapping ⟨⋅,⋅⟩ : M∗ × M → R : (u, x) ↦ u(x) is known as the natural pairing, also called the canonical bilinear form on M∗ × M. 
A linear map S : M∗ → M∗ : u ↦ S(u) induces the bilinear form B : M∗ × M → R : (u, x) ↦ ⟨S(u), x⟩, and a linear map T : M → M : x ↦ T(x) induces the bilinear form B : M∗ × M → R : (u, x) ↦ ⟨u, T(x)⟩. Conversely, a bilinear form B : M∗ × M → R induces the R-linear maps S : M∗ → M∗ : u ↦ (x ↦ B(u, x)) and T′ : M → M∗∗ : x ↦ (u ↦ B(u, x)). Here, M∗∗ denotes the double dual of M. == See also == == Citations == == References == == External links == "Bilinear form", Encyclopedia of Mathematics, EMS Press, 2001 [1994] "Bilinear form". PlanetMath. This article incorporates material from Unimodular on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Surveillance capitalism
Surveillance capitalism is a concept in political economics which denotes the widespread collection and commodification of personal data by corporations. This phenomenon is distinct from government surveillance, although the two can be mutually reinforcing. The concept of surveillance capitalism, as described by Shoshana Zuboff, is driven by a profit-making incentive, and arose as advertising companies, led by Google's AdWords, saw the possibilities of using personal data to target consumers more precisely.

Increased data collection may have various benefits for individuals and society, such as self-optimization (the quantified self), societal optimizations (e.g., by smart cities) and optimized services (including various web applications). However, as capitalism focuses on expanding the proportion of social life that is open to data collection and data processing, this can have significant implications for vulnerability and control of society, as well as for privacy. The economic pressures of capitalism are driving the intensification of online connection and monitoring, with spaces of social life opening up to saturation by corporate actors directed at making profits and/or regulating behavior. Personal data points therefore increased in value once the possibilities of targeted advertising became known, and the rising price of data has limited the purchase of personal data points to the richest in society.

== Background ==

Shoshana Zuboff writes that "analysing massive data sets began as a way to reduce uncertainty by discovering the probabilities of future patterns in the behavior of people and systems". In 2014, Vincent Mosco referred to the marketing of information about customers and subscribers to advertisers as surveillance capitalism and made note of the surveillance state alongside it. Christian Fuchs found that the surveillance state fuses with surveillance capitalism. 
Similarly, Zuboff observes that the issue is further complicated by highly invisible collaborative arrangements with state security apparatuses. According to Trebor Scholz, companies recruit people as informants for this type of capitalism. Zuboff contrasts the mass production of industrial capitalism with surveillance capitalism: the former was interdependent with its populations, who were its consumers and employees, while the latter preys on dependent populations, who are neither its consumers nor its employees and are largely ignorant of its procedures. This research shows that the capitalist application of the analysis of massive amounts of data has taken its original purpose in an unexpected direction.

Surveillance has been changing power structures in the information economy, potentially shifting the balance of power further from nation-states and towards large corporations employing the surveillance capitalist logic. Zuboff notes that surveillance capitalism extends beyond the conventional institutional terrain of the private firm, accumulating not only surveillance assets and capital but also rights, and operating without meaningful mechanisms of consent. In other words, analysing massive data sets was at some point executed not only by state apparatuses but also by companies. Zuboff claims that both Google and Facebook invented surveillance capitalism and translated it into "a new logic of accumulation". This mutation resulted in both companies collecting very large numbers of data points about their users, with the core purpose of making a profit. Selling these data points to external users (particularly advertisers) has become an economic mechanism. The combination of the analysis of massive data sets and the use of these data sets as a market mechanism has shaped the concept of surveillance capitalism. Surveillance capitalism has been heralded as the successor to neoliberalism. 
Oliver Stone, creator of the film Snowden, pointed to the location-based game Pokémon Go as the "latest sign of the emerging phenomenon and demonstration of surveillance capitalism". Stone criticized the fact that users' locations were used not only for game purposes but also to retrieve more information about the players. By tracking users' locations, the game collected far more information than just users' names and locations: "it can access the contents of your USB storage, your accounts, photographs, network connections, and phone activities, and can even activate your phone, when it is in standby mode". This data can then be analysed and commodified by companies such as Google (which significantly invested in the game's development) to improve the effectiveness of targeted advertising.

Another aspect of surveillance capitalism is its influence on political campaigning. Personal data retrieved by data miners can enable various companies (most notoriously Cambridge Analytica) to improve the targeting of political advertising, a step beyond the commercial aims of previous surveillance capitalist operations. In this way, it is possible that political parties will be able to produce far more targeted political advertising to maximise its impact on voters. However, Cory Doctorow writes that the misuse of these data sets "will lead us towards totalitarianism". This may resemble a corporatocracy, and Joseph Turow writes that "the centrality of corporate power is a direct reality at the very heart of the digital age".: 17

== Theory ==

=== Shoshana Zuboff ===

The terminology "surveillance capitalism" was popularized by Harvard Professor Shoshana Zuboff.: 107  In Zuboff's theory, surveillance capitalism is a novel market form and a specific logic of capitalist accumulation. 
In her 2014 essay A Digital Declaration: Big Data as Surveillance Capitalism, she characterized it as a "radically disembedded and extractive variant of information capitalism" based on the commodification of "reality" and its transformation into behavioral data for analysis and sales. In a subsequent article in 2015, Zuboff analyzed the societal implications of this mutation of capitalism. She distinguished between "surveillance assets", "surveillance capital", and "surveillance capitalism" and their dependence on a global architecture of computer mediation that she calls "Big Other", a distributed and largely uncontested new expression of power that constitutes hidden mechanisms of extraction, commodification, and control that threatens core values such as freedom, democracy, and privacy. According to Zuboff, surveillance capitalism was pioneered by Google and later Facebook, just as mass-production and managerial capitalism were pioneered by Ford and General Motors a century earlier, and has now become the dominant form of information capitalism. Zuboff emphasizes that behavioral changes enabled by artificial intelligence have become aligned with the financial goals of American internet companies such as Google, Facebook, and Amazon.: 107  In her Oxford University lecture published in 2016, Zuboff identified the mechanisms and practices of surveillance capitalism, including the production of "prediction products" for sale in new "behavioral futures markets." She introduced the concept "dispossession by surveillance", arguing that it challenges the psychological and political bases of self-determination by concentrating rights in the surveillance regime. This is described as a "coup from above." ==== Key features ==== Zuboff's book The Age of Surveillance Capitalism is a detailed examination of the unprecedented power of surveillance capitalism and the quest by powerful corporations to predict and control human behavior. 
Zuboff identifies four key features in the logic of surveillance capitalism, explicitly following the four key features identified by Google's chief economist, Hal Varian:
The drive toward more and more data extraction and analysis.
The development of new contractual forms using computer-monitoring and automation.
The desire to personalize and customize the services offered to users of digital platforms.
The use of the technological infrastructure to carry out continual experiments on its users and consumers.

==== Analysis ====

Zuboff compares demanding privacy from surveillance capitalists, or lobbying for an end to commercial surveillance on the Internet, to asking Henry Ford to make each Model T by hand, and states that such demands are existential threats that violate the basic mechanisms of the entity's survival. Zuboff warns that principles of self-determination might be forfeited due to "ignorance, learned helplessness, inattention, inconvenience, habituation, or drift" and states that "we tend to rely on mental models, vocabularies, and tools distilled from past catastrophes," referring to the twentieth century's totalitarian nightmares or the monopolistic predations of Gilded Age capitalism, with countermeasures that have been developed to fight those earlier threats not being sufficient or even appropriate to meet the novel challenges. She also poses the question: "will we be the masters of information, or will we be its slaves?" and states that "if the digital future is to be our home, then it is we who must make it so".

In her book, Zuboff discusses the differences between industrial capitalism and surveillance capitalism. Zuboff writes that as industrial capitalism exploited nature, surveillance capitalism exploits human nature.

=== John Bellamy Foster and Robert W. McChesney ===

The term "surveillance capitalism" has also been used by political economists John Bellamy Foster and Robert W. McChesney, although with a different meaning. 
In an article published in Monthly Review in 2014, they apply the term to describe the manifestation of the "insatiable need for data" of financialization, the long-term growth of speculation on financial assets relative to GDP, introduced in the United States by industry and government in the 1980s, which evolved out of the military-industrial complex and the advertising industry.

== Response ==

Numerous organizations have been struggling for free speech and privacy rights under the new surveillance capitalism, and various national governments have enacted privacy laws. It is also conceivable that new capabilities and uses for mass surveillance require structural changes towards a new system to create accountability and prevent misuse. Government attention to the dangers of surveillance capitalism increased especially after the exposure of the Facebook-Cambridge Analytica data scandal in early 2018. In response to the misuse of mass surveillance, multiple states have taken preventive measures. The European Union, for example, has reacted to these events and tightened its rules and regulations on the misuse of big data. Surveillance capitalism has become much harder to practice under these rules, known as the General Data Protection Regulation (GDPR). However, implementing preventive measures against the misuse of mass surveillance is hard for many countries, as it requires structural change of the system.

Bruce Sterling's 2014 lecture at the Strelka Institute, "The epic struggle of the internet of things", explained how consumer products could become surveillance objects that track people's everyday life. In his talk, Sterling highlighted the alliances between multinational corporations that develop Internet of Things-based surveillance systems, which feed surveillance capitalism. In 2015, Tega Brain and Surya Mattu's satirical artwork Unfit Bits encouraged users to subvert fitness data collected by Fitbits. 
They suggested ways to fake datasets by attaching the device, for example, to a metronome or a bicycle wheel. In 2018, Brain created a project with Sam Lavigne called New Organs, which collects people's stories of being monitored online and offline. The 2019 documentary film The Great Hack tells the story of how the company Cambridge Analytica used Facebook to manipulate the 2016 U.S. presidential election. Extensive profiling of users and news feeds ordered by black-box algorithms were presented as the main source of the problem, which is also mentioned in Zuboff's book. The use of personal data to categorize individuals and potentially influence them politically highlights how people can become voiceless in the face of data misuse. This underlines the crucial role surveillance capitalism can play in social injustice, as it can affect all aspects of life.

== See also ==

Adware – Software with, often unwanted, adverts
Commercialization of the Internet – Running online services principally for financial gain
Criticism of capitalism – Arguments against the economic system of capitalism
Data capitalism
Data mining – Process of extracting and discovering patterns in large data sets
Decomputing
Digital integrity – Law to protect people's digital lives
Five Eyes – Anglosphere intelligence alliance
Free and open-source software – Software whose source code is available and which is permissively licensed
Googlization – Neologism
Mass surveillance industry
Microtargeting – Use of online data to target advertising at individuals
Surveillance § Corporate
Targeted advertising – Form of advertising
Personalized marketing – Marketing strategy using data analysis to deliver individualized messages and products
Platform capitalism – Business model of technological platforms
Privacy concerns with social networking services
Social profiling – 
Process of constructing a social media user's profile using his or her social data == References == == Further reading == Couldry, Nick; Mejias, Ulises Ali (2019). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford, California: Stanford University Press. ISBN 9781503609754. Crain, Matthew (2021). Profit over Privacy: How Surveillance Advertising Conquered the Internet. Minneapolis: University of Minnesota Press. ISBN 9781517905057. Zuboff, Shoshana (2018). Das Zeitalter des Überwachungskapitalismus. Berlin: Campus Verlag. ISBN 9783593509303. == External links == Shoshana Zuboff Keynote: Reality is the Next Big Thing, YouTube, Elevate Festival, 2014 Big Other: Surveillance Capitalism and the Prospects of an Information Civilization, Shoshana Zuboff Capitalism's New Clothes, Evgeny Morozov, The Baffler (4 February 2019)
Loewner order
In mathematics, Loewner order is the partial order defined by the convex cone of positive semi-definite matrices. This order is usually employed to generalize the definitions of monotone and concave/convex scalar functions to monotone and concave/convex Hermitian valued functions. These functions arise naturally in matrix and operator theory and have applications in many areas of physics and engineering.

== Definition ==

Let A and B be two Hermitian matrices of order n. We say that A ≥ B if A − B is positive semi-definite. Similarly, we say that A > B if A − B is positive definite. Although the Loewner order is commonly discussed for matrices (the finite-dimensional case), it is also well-defined for operators (the infinite-dimensional case) in the analogous way.

== Properties ==

When A and B are real scalars (i.e. n = 1), the Loewner order reduces to the usual ordering of R. Although some familiar properties of the usual order of R remain valid when n ≥ 2, several do not. For instance, two matrices need not be comparable. In fact, if

A = [1 0; 0 0] and B = [0 0; 0 1],

then neither A ≥ B nor B ≥ A holds. In other words, the Loewner order is a partial order, but not a total order. Moreover, since A and B are Hermitian matrices, their eigenvalues are all real numbers. If λ1(B) is the maximum eigenvalue of B and λn(A) the minimum eigenvalue of A, a sufficient criterion to have A ≥ B is that λn(A) ≥ λ1(B). If A or B is a multiple of the identity matrix, then this criterion is also necessary. The Loewner order does not have the least-upper-bound property, and therefore does not form a lattice. It is directed: for any finite set S of matrices, one can find an upper-bound matrix A that is greater than every element of S. However, such upper bounds are not unique. 
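The incomparability of the two matrices above can be verified with a short Python sketch, using the standard positive-semidefiniteness test for symmetric 2×2 matrices (both diagonal entries and the determinant nonnegative):

```python
def is_psd_2x2(M):
    """Positive semidefiniteness for a symmetric 2x2 matrix:
    both diagonal entries and the determinant must be nonnegative."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return M[0][0] >= 0 and M[1][1] >= 0 and det >= 0

def loewner_ge(A, B):
    """A >= B in the Loewner order iff A - B is positive semidefinite."""
    D = [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]
    return is_psd_2x2(D)

A = [[1, 0], [0, 0]]
B = [[0, 0], [0, 1]]
# Neither A >= B nor B >= A: the Loewner order is only a partial order.
assert not loewner_ge(A, B) and not loewner_ge(B, A)

# Both are nonetheless below a common upper bound, e.g. the identity matrix.
I = [[1, 0], [0, 1]]
assert loewner_ge(I, A) and loewner_ge(I, B)
```

Here A − B = diag(1, −1) has one negative determinant minor, so it is neither positive nor negative semidefinite, which is exactly the incomparability claimed in the text.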
In a lattice, there would exist a unique least upper bound sup(S) such that any upper bound U of S obeys sup(S) ≤ U. But in the Loewner order, one can have two upper bounds A and B that are both minimal (there is no element C < A that is also an upper bound, and likewise for B) but incomparable (A − B is neither positive semidefinite nor negative semidefinite).

== See also ==

Trace inequalities

== References ==

Pukelsheim, Friedrich (2006). Optimal design of experiments. Society for Industrial and Applied Mathematics. pp. 11–12. ISBN 9780898716047.
Bhatia, Rajendra (1997). Matrix Analysis. New York, NY: Springer. ISBN 9781461206538.
Zhan, Xingzhi (2002). Matrix inequalities. Berlin: Springer. pp. 1–15. ISBN 9783540437987.
Statistical interference
When two probability distributions overlap, statistical interference exists. Knowledge of the distributions can be used to determine the likelihood that one parameter exceeds another, and by how much. This technique can be used for geometric dimensioning of mechanical parts, determining when an applied load exceeds the strength of a structure, and in many other situations. This type of analysis can also be used to estimate the probability of failure or the failure rate.

== Dimensional interference ==

Mechanical parts are usually designed to fit precisely together. For example, if a shaft is designed to have a "sliding fit" in a hole, the shaft must be a little smaller than the hole. (Traditional tolerancing assumes that all dimensions stay within their intended limits; a process capability study of actual production, however, may reveal normal distributions with long tails.) Both the shaft and hole sizes will usually form normal distributions with some average (arithmetic mean) and standard deviation.

With two such normal distributions, a distribution of interference can be calculated. The derived distribution will also be normal, and its average will be equal to the difference between the means of the two base distributions. The variance of the derived distribution will be the sum of the variances of the two base distributions. This derived distribution can be used to determine how often the difference in dimensions will be less than zero (i.e., the shaft cannot fit in the hole), how often the difference will be less than the required sliding gap (the shaft fits, but too tightly), and how often the difference will be greater than the maximum acceptable gap (the shaft fits, but not tightly enough).

== Physical property interference ==

Physical properties and the conditions of use are also inherently variable. For example, the applied load (stress) on a mechanical part may vary. The measured strength of that part (tensile strength, etc.) may also be variable. 
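Both the dimensional and the stress-strength cases reduce to the same normal-difference computation. A sketch of the sliding-fit example in Python, with invented dimensions and tolerances, using only the standard library:

```python
import math

# Hypothetical sliding fit: hole and shaft diameters as normal distributions.
mu_hole, sd_hole = 10.00, 0.03     # mm
mu_shaft, sd_shaft = 9.90, 0.04    # mm

# Clearance = hole - shaft is again normal: means subtract, variances add.
mu_c = mu_hole - mu_shaft
sd_c = math.sqrt(sd_hole**2 + sd_shaft**2)

def norm_cdf(x, mu, sd):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

p_no_fit = norm_cdf(0.0, mu_c, sd_c)                   # clearance < 0
p_too_tight = norm_cdf(0.02, mu_c, sd_c) - p_no_fit    # 0 <= clearance < 0.02 mm
p_too_loose = 1.0 - norm_cdf(0.18, mu_c, sd_c)         # clearance > 0.18 mm

print(f"P(no fit)    = {p_no_fit:.4%}")
print(f"P(too tight) = {p_too_tight:.4%}")
print(f"P(too loose) = {p_too_loose:.4%}")
```

The 0.02 mm and 0.18 mm gap limits stand in for the "required sliding gap" and "maximum acceptable gap" of the text; for non-normal or mixed distributions a Monte Carlo simulation would replace the closed-form CDF.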
The part will break when the stress exceeds the strength. With two normal distributions, the statistical interference may be calculated as above. (This problem is also workable for transformed units such as the log-normal distribution). With other distributions, or combinations of different distributions, a Monte Carlo method or simulation is often the most practical way to quantify the effects of statistical interference. == See also == Interference fit Interval estimation Joint probability distribution Probabilistic design Process capability Reliability engineering Specification Tolerance (engineering) == References == Paul H. Garthwaite, Byron Jones, Ian T. Jolliffe (2002) Statistical Inference. ISBN 0-19-857226-3 Haugen, (1980) Probabilistic mechanical design, Wiley. ISBN 0-471-05847-5
Explanation-based learning
Explanation-based learning (EBL) is a form of machine learning that exploits a very strong, or even perfect, domain theory (i.e. a formal theory of an application domain akin to a domain model in ontology engineering, not to be confused with Scott's domain theory) in order to make generalizations or form concepts from training examples. It is also linked with encoding (memory) as an aid to learning. == Details == An example of EBL using a perfect domain theory is a program that learns to play chess from examples. A specific chess position that contains an important feature such as "Forced loss of black queen in two moves" includes many irrelevant features, such as the specific scattering of pawns on the board. EBL can take a single training example and determine which features are relevant in order to form a generalization. A domain theory is perfect or complete if it contains, in principle, all information needed to decide any question about the domain. For example, the domain theory for chess is simply the rules of chess. Knowing the rules, it is in principle possible to deduce the best move in any situation. However, actually making such a deduction is impossible in practice due to combinatorial explosion. EBL uses training examples to make searching for deductive consequences of a domain theory efficient in practice. In essence, an EBL system works by finding a way to deduce each training example from the system's existing database of domain theory. Having a short proof of the training example extends the domain-theory database, enabling the EBL system to find and classify future examples that are similar to the training example very quickly. The main drawback of the method, the cost of applying the learned proof macros as these become numerous, was analyzed by Minton.
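The deduce-and-memoize idea described above can be sketched in a toy propositional setting. The chess facts and rule names below are invented for illustration and are far simpler than a real domain theory:

```python
# Toy sketch of the EBL idea: prove a training example from the domain
# theory, then store the explanation as one macro rule so similar cases
# are classified in a single step. Propositional only; the chess facts
# and rules are invented for illustration.

# Domain theory: each rule maps a set of premises to a conclusion.
domain_theory = [
    (frozenset({"queen_attacked", "no_escape_square"}), "queen_lost"),
    (frozenset({"queen_lost"}), "material_loss"),
]

def prove(facts, theory):
    """Forward-chain until no new conclusions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in theory:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def ebl_macro(example_facts, goal, theory):
    """If the goal is provable, compress the multi-step proof into a
    single learned rule from the observable facts to the goal."""
    if goal in prove(example_facts, theory):
        return (frozenset(example_facts), goal)
    return None

macro = ebl_macro({"queen_attacked", "no_escape_square"}, "material_loss", domain_theory)
print(macro[1])  # the two-step proof is now a one-step rule
```

Minton's utility problem shows up here as the cost of matching ever more such macros; the sketch ignores that entirely.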
=== Basic formulation === EBL software takes four inputs: a hypothesis space (the set of all possible conclusions) a domain theory (axioms about a domain of interest) training examples (specific facts that rule out some possible hypotheses) operationality criteria (criteria for determining which features in the domain are efficiently recognizable, e.g. which features are directly detectable using sensors) == Application == An especially good application domain for EBL is natural language processing (NLP). Here a rich domain theory, i.e. a natural language grammar (although neither perfect nor complete), is tuned to a particular application or particular language usage, using a treebank (training examples). Rayner pioneered this work. The first successful industrial application was a commercial natural-language interface to relational databases. The method has been successfully applied to several large-scale natural language parsing systems, where the utility problem was solved by omitting the original grammar (domain theory) and using specialized LR-parsing techniques, resulting in huge speed-ups, at a cost in coverage, but with a gain in disambiguation. EBL-like techniques have also been applied to surface generation, the converse of parsing. When applying EBL to NLP, the operationality criteria can be hand-crafted, or can be inferred from the treebank using either the entropy of its or-nodes or a target coverage/disambiguation trade-off (i.e. a recall/precision trade-off, as in the F-score). EBL can also be used to compile grammar-based language models for speech recognition, from general unification grammars. Note how the utility problem, first exposed by Minton, was solved by discarding the original grammar/domain theory, and that the quoted articles tend to contain the phrase grammar specialization, quite the opposite of the original term explanation-based generalization. Perhaps the best name for this technique would be data-driven search space reduction.
Other people who worked on EBL for NLP include Guenther Neumann, Aravind Joshi, Srinivas Bangalore, and Khalil Sima'an. == See also == One-shot learning in computer vision Zero-shot learning == References ==
Chernoff's distribution
In probability theory, Chernoff's distribution, named after Herman Chernoff, is the probability distribution of the random variable Z = argmax s ∈ R ( W ( s ) − s 2 ) , {\displaystyle Z={\underset {s\in \mathbf {R} }{\operatorname {argmax} }}\ (W(s)-s^{2}),} where W is a "two-sided" Wiener process (or two-sided "Brownian motion") satisfying W(0) = 0. If V ( a , c ) = argmax s ∈ R ( W ( s ) − c ( s − a ) 2 ) , {\displaystyle V(a,c)={\underset {s\in \mathbf {R} }{\operatorname {argmax} }}\ (W(s)-c(s-a)^{2}),} then V(0, c) has density f c ( t ) = 1 2 g c ( t ) g c ( − t ) {\displaystyle f_{c}(t)={\frac {1}{2}}g_{c}(t)g_{c}(-t)} where gc has Fourier transform given by g ^ c ( s ) = ( 2 / c ) 1 / 3 Ai ⁡ ( i ( 2 c 2 ) − 1 / 3 s ) , s ∈ R {\displaystyle {\hat {g}}_{c}(s)={\frac {(2/c)^{1/3}}{\operatorname {Ai} (i(2c^{2})^{-1/3}s)}},\ \ \ s\in \mathbf {R} } and where Ai is the Airy function. Thus fc is symmetric about 0 and the density ƒZ = ƒ1. Groeneboom (1989) shows that f Z ( z ) ∼ 1 2 4 4 / 3 | z | Ai ′ ⁡ ( a ~ 1 ) exp ⁡ ( − 2 3 | z | 3 + 2 1 / 3 a ~ 1 | z | ) as z → ∞ {\displaystyle f_{Z}(z)\sim {\frac {1}{2}}{\frac {4^{4/3}|z|}{\operatorname {Ai} '({\tilde {a}}_{1})}}\exp \left(-{\frac {2}{3}}|z|^{3}+2^{1/3}{\tilde {a}}_{1}|z|\right){\text{ as }}z\rightarrow \infty } where a ~ 1 ≈ − 2.3381 {\displaystyle {\tilde {a}}_{1}\approx -2.3381} is the largest zero of the Airy function Ai and where Ai ′ ⁡ ( a ~ 1 ) ≈ 0.7022 {\displaystyle \operatorname {Ai} '({\tilde {a}}_{1})\approx 0.7022} . In the same paper, Groeneboom also gives an analysis of the process { V ( a , 1 ) : a ∈ R } {\displaystyle \{V(a,1):a\in \mathbf {R} \}} . The connection with the statistical problem of estimating a monotone density is discussed in Groeneboom (1985). Chernoff's distribution is now known to appear in a wide range of monotone problems including isotonic regression. 
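The defining argmax can be sampled by brute force. The grid range and resolution below are illustrative choices, and the simulation only approximates the two-sided Wiener process:

```python
import numpy as np

# Brute-force Monte Carlo sketch of Z = argmax (W(s) - s^2): simulate a
# two-sided Wiener process on a grid and take the argmax. The grid range
# and resolution are illustrative choices, so this is only approximate.
rng = np.random.default_rng(0)

def chernoff_sample(rng, half_range=2.0, n=400):
    ds = half_range / n
    # Build W with W(0) = 0 by accumulating independent increments
    # outward from the origin on each side.
    right = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(ds), n))))
    left = np.cumsum(rng.normal(0.0, np.sqrt(ds), n))
    s = np.concatenate((-np.arange(n, 0, -1) * ds, np.arange(n + 1) * ds))
    w = np.concatenate((left[::-1], right))
    return s[np.argmax(w - s * s)]

samples = np.array([chernoff_sample(rng) for _ in range(2000)])
print(round(samples.mean(), 2))  # close to 0: the density is symmetric
```

Replacing s² with c(s − a)² in the argmax samples V(a, c) in the same way.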
The Chernoff distribution should not be confused with the Chernoff geometric distribution (called the Chernoff point in information geometry) induced by the Chernoff information. == History == Groeneboom, Lalley and Temme state that the first investigation of this distribution was probably by Chernoff in 1964, who studied the behavior of a certain estimator of a mode. In his paper, Chernoff characterized the distribution through an analytic representation involving the heat equation with suitable boundary conditions. Initial attempts at approximating Chernoff's distribution by solving the heat equation, however, did not achieve satisfactory precision due to the nature of the boundary conditions. The computation of the distribution is addressed, for example, in Groeneboom and Wellner (2001). The connection of Chernoff's distribution with Airy functions was also found independently by Daniels and Skyrme and by Temme, as cited in Groeneboom, Lalley and Temme. These two papers, along with Groeneboom (1989), were all written in 1984. == References ==
Single-particle trajectory
Single-particle trajectories (SPTs) consist of a collection of successive discrete points ordered causally in time. These trajectories are acquired from images in experimental data. In the context of cell biology, the trajectories are obtained by the transient laser activation of small dyes attached to a moving molecule. Molecules can now be visualized using recent super-resolution microscopy methods, which allow routine collection of thousands of short and long trajectories. These trajectories explore part of a cell, either on the membrane or in three dimensions, and their paths are critically influenced by the local crowded organization and molecular interactions inside the cell, as emphasized in various cell types such as neuronal cells, astrocytes, immune cells and many others. == SPTs allow observing moving molecules inside cells to collect statistics == SPT allows the observation of moving particles. These trajectories are used to investigate cytoplasm or membrane organization, but also cell nucleus dynamics, remodeler dynamics or mRNA production. Due to the constant improvement of the instrumentation, the spatial resolution is continuously decreasing, now reaching values of approximately 20 nm, while the acquisition time step is usually in the range of 10 to 50 ms to capture short events occurring in live tissues. A variant of super-resolution microscopy called sptPALM is used to detect the local and dynamically changing organization of molecules in cells, or events of DNA binding by transcription factors in the mammalian nucleus. Super-resolution image acquisition and particle tracking are crucial to guarantee high-quality data. == Assembling points into a trajectory based on tracking algorithms == Once points are acquired, the next step is to reconstruct a trajectory. This step is done using known tracking algorithms to connect the acquired points. Tracking algorithms are based on a physical model of trajectories perturbed by an additive random noise.
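The linking step performed by tracking algorithms can be sketched with a naive nearest-neighbour rule. Real trackers use richer motion models and resolve ambiguous assignments; the coordinates below are invented for illustration:

```python
import numpy as np

# Naive nearest-neighbour linking: each existing track is extended by
# the closest detection in the next frame, if it lies within a maximum
# jump distance. Greedy and unambiguous here; real trackers resolve
# competing assignments and missed detections. Coordinates are invented.
frames = [
    np.array([[0.0, 0.0], [5.0, 5.0]]),
    np.array([[0.2, 0.1], [5.1, 4.9]]),
    np.array([[0.4, 0.1], [5.3, 5.0]]),
]

def link(frames, max_jump=1.0):
    tracks = [[tuple(p)] for p in frames[0]]
    for detections in frames[1:]:
        for track in tracks:
            dist = np.linalg.norm(detections - np.array(track[-1]), axis=1)
            j = int(np.argmin(dist))
            if dist[j] <= max_jump:
                track.append(tuple(detections[j]))
    return tracks

tracks = link(frames)
print([len(t) for t in tracks])  # -> [3, 3]: two trajectories of 3 points
```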
== Extract physical parameters from redundant SPTs == The redundancy of many short SPTs is a key feature for extracting biophysical parameters from empirical data at a molecular level. In contrast, long isolated trajectories have been used to extract information along trajectories, destroying the natural spatial heterogeneity associated with the various positions. The main statistical tool is to compute the mean-square displacement (MSD) or second-order statistical moment: ⟨ | X ( t + Δ t ) − X ( t ) | 2 ⟩ ∼ t α {\displaystyle \langle |X(t+\Delta t)-X(t)|^{2}\rangle \sim t^{\alpha }} (average over realizations), where α {\displaystyle \alpha } is called the anomalous exponent. For a Brownian motion, ⟨ | X ( t + Δ t ) − X ( t ) | 2 ⟩ = 2 n D t {\displaystyle \langle |X(t+\Delta t)-X(t)|^{2}\rangle =2nDt} , where D is the diffusion coefficient and n is the dimension of the space. Some other properties can also be recovered from long trajectories, such as the radius of confinement for a confined motion. The MSD has been widely used in early applications of long but not necessarily redundant single-particle trajectories in a biological context. However, the MSD applied to long trajectories suffers from several issues. First, it is not precise, in part because the measured points could be correlated. Second, it cannot be used to compute any physical diffusion coefficient when trajectories consist of switching episodes, for example alternating between free and confined diffusion. At low spatiotemporal resolution of the observed trajectories, the MSD behaves sublinearly with time, a process known as anomalous diffusion, which is due in part to the averaging of the different phases of the particle motion. In the context of cellular transport (amoeboid), high-resolution motion analysis of long SPTs in micro-fluidic chambers containing obstacles revealed different types of cell motions.
Depending on the obstacle density, crawling was found at low obstacle density, and directed motion and random phases can even be differentiated. == Physical model to recover spatial properties from redundant SPTs == === Langevin and Smoluchowski equations as a model of motion === Statistical methods to extract information from SPTs are based on stochastic models, such as the Langevin equation or its Smoluchowski limit, and associated models that account for additional localization point identification noise or a memory kernel. The Langevin equation describes a stochastic particle driven by a Brownian force Ξ {\displaystyle \Xi } and a field of force (e.g., electrostatic, mechanical, etc.) with an expression F ( x , t ) {\displaystyle F(x,t)} : m x ¨ + Γ x ˙ − F ( x , t ) = Ξ , {\displaystyle m{\ddot {x}}+\Gamma {\dot {x}}-F(x,t)=\Xi ,} where m is the mass of the particle and Γ = 6 π a ρ {\displaystyle \Gamma =6\pi a\rho } is the friction coefficient of a diffusing particle of radius a, with ρ {\displaystyle \rho } the viscosity. Here Ξ {\displaystyle \Xi } is the δ {\displaystyle \delta } -correlated Gaussian white noise. The force can be derived from a potential well U so that F ( x , t ) = − U ′ ( x ) {\displaystyle F(x,t)=-U'(x)} and in that case, the equation takes the form m d 2 x d t 2 + Γ d x d t + ∇ U ( x ) = 2 ε γ d η d t , {\displaystyle m{\frac {d^{2}x}{dt^{2}}}+\Gamma {\frac {dx}{dt}}+\nabla U(x)={\sqrt {2\varepsilon \gamma }}\,{\frac {d\eta }{dt}},} where ε = k B T {\displaystyle \varepsilon =k_{\text{B}}T} is the thermal energy, k B {\displaystyle k_{\text{B}}} the Boltzmann constant and T the temperature. Langevin's equation is used to describe trajectories where inertia or acceleration matters. For example, at very short timescales, when a molecule unbinds from a binding site or escapes from a potential well, the inertia term allows the particle to move away from the attractor and thus prevents the immediate rebinding that could plague numerical simulations.
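A minimal discretization of the Langevin equation above can be sketched for a harmonic well U(x) = kx²/2; all parameter values are illustrative assumptions:

```python
import numpy as np

# Semi-implicit Euler discretization of the Langevin equation with
# U(x) = k x^2 / 2. All parameter values are illustrative. At
# equilibrium, equipartition gives <x^2> = epsilon / k, which the
# simulated variance can be checked against.
rng = np.random.default_rng(1)
m, Gamma, k, eps = 1.0, 1.0, 1.0, 1.0
dt, steps = 0.01, 100_000

noise = rng.normal(0.0, np.sqrt(2.0 * eps * Gamma * dt), steps)
x, v = 0.0, 0.0
xs = np.empty(steps)
for i in range(steps):
    v += (-Gamma * v - k * x) * dt / m + noise[i] / m  # velocity update
    x += v * dt                                        # position update
    xs[i] = x

print(round(np.var(xs[steps // 2:]), 1))  # close to eps / k = 1
```

The second half of the run is used so that the transient from the x = 0 start does not bias the equilibrium variance.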
In the large friction limit γ → ∞ {\displaystyle \gamma \to \infty } the trajectories x ( t ) {\displaystyle x(t)} of the Langevin equation converge in probability to those of the Smoluchowski equation γ x ˙ + U ′ ( x ) = 2 ε γ w ˙ , {\displaystyle \gamma {\dot {x}}+U^{\prime }(x)={\sqrt {2\varepsilon \gamma }}\,{\dot {w}},} where w ˙ ( t ) {\displaystyle {\dot {w}}(t)} is δ {\displaystyle \delta } -correlated. This equation is obtained when the diffusion coefficient is constant in space. When this is not the case, coarse-grained equations (at a coarse spatial resolution) should be derived from molecular considerations. The interpretation of the physical forces is not resolved by the Itô vs. Stratonovich integral representations or any other. === General model equations === For a timescale much longer than the elementary molecular collision, the position of a tracked particle is described by a more general overdamped limit of the Langevin stochastic model. Indeed, if the acquisition timescale of empirically recorded trajectories is much longer than the timescale of the thermal fluctuations, rapid events are not resolved in the data. Thus at this coarser spatiotemporal scale, the motion description is replaced by an effective stochastic equation X ˙ ( t ) = b ( X ( t ) ) + 2 B e ( X ( t ) ) w ˙ ( t ) , ( 1 ) {\displaystyle {\dot {X}}(t)={b}(X(t))+{\sqrt {2}}{B}_{e}(X(t)){\dot {w}}(t),\qquad \qquad (1)} where b ( X ) {\displaystyle {b}(X)} is the drift field and B e {\displaystyle {B}_{e}} the diffusion matrix. The effective diffusion tensor can vary in space: D ( X ) = 1 2 B e ( X ) B e T ( X ) {\displaystyle D(X)={\frac {1}{2}}B_{e}(X)B_{e}^{T}(X)} ( B e T {\textstyle B_{e}^{T}} denotes the transpose of B e {\textstyle B_{e}} ). This equation is not derived but assumed. However, the diffusion coefficient should be smooth enough, as any discontinuity in D should be resolved by a spatial scaling to analyse the source of the discontinuity (usually inert obstacles or transitions between two media).
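In the free case (U′ = 0) the Smoluchowski equation reduces to pure diffusion with D = ε/γ, which a short simulated trajectory can illustrate. The parameter values are arbitrary:

```python
import numpy as np

# Free overdamped diffusion (U' = 0): dx = sqrt(2 eps / gamma) dw, so
# the diffusion coefficient D = eps / gamma can be read off the
# increments. Parameter values are illustrative.
rng = np.random.default_rng(2)
gamma, eps = 2.0, 1.0
D = eps / gamma                      # expected value: 0.5
dt, steps = 0.01, 100_000

dx = np.sqrt(2.0 * D * dt) * rng.normal(size=steps)
x = np.cumsum(dx)                    # the simulated trajectory

D_hat = np.mean(dx**2) / (2.0 * dt)  # second-moment estimator of D
print(round(D_hat, 2))  # close to D = 0.5
```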
The observed effective diffusion tensor is not necessarily isotropic and can be state-dependent, whereas the friction coefficient γ {\displaystyle \gamma } remains constant as long as the medium stays the same, and the microscopic diffusion coefficient (or tensor) could remain isotropic. == Statistical analysis of these trajectories == The development of statistical methods is based on stochastic models and on a possible deconvolution procedure applied to the trajectories. Numerical simulations can also be used to identify specific features that could be extracted from single-particle trajectory data. The goal of building a statistical ensemble from SPT data is to observe local physical properties of the particles, such as velocity, diffusion, confinement or attracting forces reflecting the interactions of the particles with their local nanometer environments. It is possible to use stochastic modeling to construct, from the diffusion coefficient (or tensor), the confinement or local density of obstacles reflecting the presence of biological objects of different sizes. === Empirical estimators for the drift and diffusion tensor of a stochastic process === Several empirical estimators have been proposed to recover the local diffusion coefficient, vector field and even organized patterns in the drift, such as potential wells. The construction of empirical estimators serves to recover physical properties using parametric and non-parametric statistics. Retrieving statistical parameters of a diffusion process from one-dimensional time series uses the first moment estimator or Bayesian inference. The models and the analysis assume that the processes are stationary, so that the statistical properties of trajectories do not change over time. In practice, this assumption is satisfied when trajectories are acquired for less than a minute, during which only a few slow changes may occur on the surface of a neuron, for example.
Non-stationary behavior is observed using a time-lapse analysis, with a delay of tens of minutes between successive acquisitions. The coarse-grained model Eq. 1 is recovered from the conditional moments of the trajectory by computing the increments Δ X = X ( t + Δ t ) − X ( t ) {\displaystyle \Delta X=X(t+\Delta t)-X(t)} : a ( x ) = lim Δ t → 0 E [ Δ X ( t ) ∣ X ( t ) = x ] Δ t , {\displaystyle a(x)=\lim _{\Delta t\rightarrow 0}{\frac {E[\Delta X(t)\mid X(t)=x]}{\Delta t}},} D ( x ) = lim Δ t → 0 E [ Δ X ( t ) T Δ X ( t ) ∣ X ( t ) = x ] 2 Δ t . {\displaystyle D(x)=\lim _{\Delta t\rightarrow 0}{\frac {E[\Delta X(t)^{T}\,\Delta X(t)\mid X(t)=x]}{2\,\Delta t}}.} Here the notation E [ ⋅ | X ( t ) = x ] {\displaystyle E[\cdot \,|\,X(t)=x]} means averaging over all trajectories that are at point x at time t. The coefficients of the Smoluchowski equation can be statistically estimated at each point x from an infinitely large sample of its trajectories in the neighborhood of the point x at time t. === Empirical estimation === In practice, the expectations for a and D are estimated by finite sample averages, with Δ t {\displaystyle \Delta t} the time resolution of the recorded trajectories. Formulas for a and D are approximated at the time step Δ t {\displaystyle \Delta t} ; tens to hundreds of points falling in any bin are usually enough for the estimation. To estimate the local drift and diffusion coefficients, trajectories are first grouped within a small neighbourhood. The field of observation is partitioned into square bins S ( x k , r ) {\displaystyle S(x_{k},r)} of side r and centre x k {\displaystyle x_{k}} , and the local drift and diffusion are estimated for each square.
Considering a sample with N t {\displaystyle N_{t}} trajectories { x i ( t 1 ) , … , x i ( t N s ) } , {\displaystyle \{x^{i}(t_{1}),\dots ,x^{i}(t_{N_{s}})\},} where t j {\displaystyle t_{j}} are the sampling times, the discretization of equation for the drift a ( x k ) = ( a x ( x k ) , a y ( x k ) ) {\displaystyle a(x_{k})=(a_{x}(x_{k}),a_{y}(x_{k}))} at position x k {\displaystyle x_{k}} is given for each spatial projection on the x and y axis by a x ( x k ) ≈ 1 N k ∑ j = 1 N t ∑ i = 0 , x ~ i j ∈ S ( x k , r ) N s − 1 ( x i + 1 j − x i j Δ t ) {\displaystyle a_{x}(x_{k})\approx {\frac {1}{N_{k}}}\sum _{j=1}^{N_{t}}\sum _{i=0,{\tilde {x}}_{i}^{j}\in S(x_{k},r)}^{N_{s}-1}\left({\frac {x_{i+1}^{j}-x_{i}^{j}}{\Delta t}}\right)} a y ( x k ) ≈ 1 N k ∑ j = 1 N t ∑ i = 0 , x ~ i j ∈ S ( x k , r ) N s − 1 ( y i + 1 j − y i j Δ t ) , {\displaystyle a_{y}(x_{k})\approx {\frac {1}{N_{k}}}\sum _{j=1}^{N_{t}}\sum _{i=0,{\tilde {x}}_{i}^{j}\in S(x_{k},r)}^{N_{s}-1}\left({\frac {y_{i+1}^{j}-y_{i}^{j}}{\Delta t}}\right),} where N k {\displaystyle N_{k}} is the number of points of trajectory that fall in the square S ( x k , r ) {\displaystyle S(x_{k},r)} . Similarly, the components of the effective diffusion tensor D ( x k ) {\displaystyle D(x_{k})} are approximated by the empirical sums D x x ( x k ) ≈ 1 N k ∑ j = 1 N t ∑ i = 0 , x i ∈ S ( x k , r ) N s − 1 ( x i + 1 j − x i j ) 2 2 Δ t , {\displaystyle D_{xx}(x_{k})\approx {\frac {1}{N_{k}}}\sum _{j=1}^{N_{t}}\sum _{i=0,x_{i}\in S(x_{k},r)}^{N_{s}-1}{\frac {(x_{i+1}^{j}-x_{i}^{j})^{2}}{2\,\Delta t}},} D y y ( x k ) ≈ 1 N k ∑ j = 1 N t ∑ i = 0 , x i ∈ S ( x k , r ) N s − 1 ( y i + 1 j − y i j ) 2 2 Δ t , {\displaystyle D_{yy}(x_{k})\approx {\frac {1}{N_{k}}}\sum _{j=1}^{N_{t}}\sum _{i=0,x_{i}\in S(x_{k},r)}^{N_{s}-1}{\frac {(y_{i+1}^{j}-y_{i}^{j})^{2}}{2\,\Delta t}},} D x y ( x k ) ≈ 1 N k ∑ j = 1 N t ∑ i = 0 , x i ∈ S ( x k , r ) N s − 1 ( x i + 1 j − x i j ) ( y i + 1 j − y i j ) 2 Δ t . 
{\displaystyle D_{xy}(x_{k})\approx {\frac {1}{N_{k}}}\sum _{j=1}^{N_{t}}\sum _{i=0,x_{i}\in S(x_{k},r)}^{N_{s}-1}{\frac {(x_{i+1}^{j}-x_{i}^{j})(y_{i+1}^{j}-y_{i}^{j})}{2\,\Delta t}}.} The moment estimation requires a large number of trajectories passing through each point, which matches the massive data generated by certain types of super-resolution experiments, such as those acquired with the sptPALM technique on biological samples. The exact inversion of Langevin's equation demands in theory an infinite number of trajectories passing through any point x of interest. In practice, the recovery of the drift and diffusion tensor is obtained after a region is subdivided by a square grid of side r or by moving sliding windows (of the order of 50 to 100 nm). === Automated recovery of the boundary of a nanodomain === Algorithms based on mapping the density of points extracted from trajectories make it possible to reveal local binding and trafficking interactions and the organization of dynamic subcellular sites. The algorithms can be applied to study regions of high density, revealed by SPTs. Examples are organelles such as the endoplasmic reticulum, or cell membranes. The method is based on spatiotemporal segmentation to detect local architecture and boundaries of high-density regions for domains measuring hundreds of nanometers. == References ==
Item tree analysis
Item tree analysis (ITA) is a data analytical method which allows constructing a hierarchical structure on the items of a questionnaire or test from observed response patterns. Assume that we have a questionnaire with m items and that subjects can answer positive (1) or negative (0) to each of these items, i.e. the items are dichotomous. If n subjects answer the items, this results in a binary data matrix D with m columns and n rows. Typical examples of this data format are test items which can be solved (1) or failed (0) by subjects. Other typical examples are questionnaires where the items are statements to which subjects can agree (1) or disagree (0). Depending on the content of the items, it is possible that the response of a subject to an item j determines her or his responses to other items. It is, for example, possible that each subject who agrees to item j will also agree to item i. In this case we say that item j implies item i (short i → j {\displaystyle i\rightarrow j} ). The goal of an ITA is to uncover such deterministic implications from the data set D. == Algorithms for ITA == ITA was originally developed by Van Leeuwe in 1974. The result of his algorithm, which we refer to in the following as classical ITA, is a logically consistent set of implications i → j {\displaystyle i\rightarrow j} . Logically consistent means that if i implies j and j implies k, then i implies k for each triple i, j, k of items. Thus the outcome of an ITA is a reflexive and transitive relation on the item set, i.e. a quasi-order on the items. A different algorithm to perform an ITA was suggested in Schrepp (1999). This algorithm is called inductive ITA. Classical ITA and inductive ITA both construct a quasi-order on the item set by explorative data analysis, but they use different algorithms to construct this quasi-order. For a given data set the resulting quasi-orders from classical and inductive ITA will usually differ.
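The deterministic implications ITA looks for can be sketched for error-free data. The tiny data matrix below is invented for illustration; real response data contain random errors, which classical and inductive ITA are designed to handle:

```python
import numpy as np

# Error-free sketch of the implications ITA looks for: item j implies
# item i when every subject answering item j positively also answers
# item i positively. The data matrix is invented for illustration.
D = np.array([
    [1, 1, 0],
    [1, 1, 1],
    [1, 0, 0],
])  # rows: subjects, columns: items

def implications(D):
    m = D.shape[1]
    # (j, i) is in the set when "j implies i" holds in the data.
    return {(j, i)
            for j in range(m) for i in range(m)
            if np.all(D[D[:, j] == 1, i] == 1)}

imps = implications(D)
print(sorted(imps))  # a reflexive, transitive set of (j, i) pairs
```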
A detailed description of the algorithms used in classical and inductive ITA can be found in Schrepp (2003) or Schrepp (2006). In a recent paper (Sargin & Ünlü, 2009) some modifications to the algorithm of inductive ITA are proposed, which improve the ability of this method to detect the correct implications from data (especially in the case of higher random response error rates). == Relation to other methods == ITA belongs to a group of data analysis methods called Boolean analysis of questionnaires. Boolean analysis was introduced by Flament in 1976. The goal of a Boolean analysis is to detect deterministic dependencies (formulas from Boolean logic connecting the items, like for example i → j {\displaystyle i\rightarrow j} , i ∧ j → k {\displaystyle i\wedge j\rightarrow k} , and i ∨ j → k {\displaystyle i\vee j\rightarrow k} ) between the items of a questionnaire or test. Since the basic work of Flament (1976) a number of different methods for Boolean analysis have been developed. See, for example, Van Buggenhaut and Degreef (1987), Duquenne (1987) or Theuns (1994). These methods share the goal of deriving deterministic dependencies between the items of a questionnaire from data, but differ in the algorithms used to reach this goal. A comparison of ITA to other methods of Boolean data analysis can be found in Schrepp (2003). == Applications == There are several research papers available which describe concrete applications of item tree analysis. Held and Korossy (1998) analyze implications on a set of algebra problems with classical ITA. Item tree analysis is also used in a number of social science studies to get insight into the structure of dichotomous data. In Bart and Krus (1973), for example, a predecessor of ITA is used to establish a hierarchical order on items that describe socially unaccepted behavior. In Janssens (1999) a method of Boolean analysis is used to investigate the integration process of minorities into the value system of the dominant culture.
Schrepp describes several applications of inductive ITA in the analysis of dependencies between items of social science questionnaires. == Example of an application == To show the possibilities of an analysis of a data set by ITA, we analyze the statements of question 4 of the International Social Science Survey Programme (ISSSP) for the year 1995 by inductive and classical ITA. The ISSSP is a continuing annual program of cross-national collaboration on surveys covering important topics for social science research. The program conducts one survey each year with comparable questions in each of the participating nations. The theme of the 1995 survey was national identity. We analyze the results for question 4 for the data set of Western Germany. The statement for question 4 was: Some people say the following things are important for being truly German. Others say they are not important. How important do you think each of the following is: 1. to have been born in Germany 2. to have German citizenship 3. to have lived in Germany for most of one’s life 4. to be able to speak German 5. to be a Christian 6. to respect Germany’s political institutions 7. to feel German The subjects had the response possibilities Very important, Important, Not very important, Not important at all, and Can’t choose to answer the statements. To apply ITA to this data set we changed the answer categories. Very important and Important are coded as 1. Not very important and Not important at all are coded as 0. Can’t choose was handled as missing data. The following figure shows the resulting quasi-orders ≤ I I T A {\displaystyle \leq _{IITA}} from inductive ITA and ≤ C I T A {\displaystyle \leq _{CITA}} from classical ITA. == Available software == The program ITA 2.0 implements both classical and inductive ITA. The program and a short documentation are available online. == See also == Item response theory == Notes == == References == Bart, W. M., & Krus, D. J.
(1973). An ordering-theoretic method to determine hierarchies among items. Educational and Psychological Measurement, 33, 291–300. Duquenne, V. (1987). Conceptual Implications Between Attributes and some Representation Properties for Finite Lattices. In B. Ganter, R. Wille, K. Wolfe (eds.), Beiträge zur Begriffsanalyse: Vorträge der Arbeitstagung Begriffsanalyse, Darmstadt 1986, pp. 313–339. Wissenschafts-Verlag, Mannheim. Flament, C. (1976). L’Analyse Booléenne de Questionnaire. Mouton, Paris. Held, T., & Korossy, K. (1998). Data-analysis as heuristic for establishing theoretically founded item structures. Zeitschrift für Psychologie, 206, 169–188. Janssens, R. (1999). A Boolean approach to the measurement of group processes and attitudes. The concept of integration as an example. Mathematical Social Sciences, 38, 275–293. Schrepp, M. (1999). On the Empirical Construction of Implications on Bi-valued Test Items. Mathematical Social Sciences, 38(3), 361–375. Schrepp, M. (2002). Explorative analysis of empirical data by boolean analysis of questionnaires. Zeitschrift für Psychologie, 210/2, 99–109. Schrepp, M. (2003). A method for the analysis of hierarchical dependencies between items of a questionnaire. Methods of Psychological Research, 19, 43–79. Schrepp, M. (2006). ITA 2.0: A program for Classical and Inductive Item Tree Analysis. Journal of Statistical Software, Vol. 16, Issue 10. Schrepp, M. (2006). Properties of the correlational agreement coefficient: A comment to Ünlü & Albert (2004). Mathematical Social Sciences, Vol. 51, Issue 1, 117–123. Schrepp, M. (2007). On the evaluation of fit measures for quasi-orders. Mathematical Social Sciences, Vol. 53, Issue 2, 196–208. Theuns, P. (1994). A Dichotomization Method for Boolean Analysis of Quantifiable Co-occurrence Data. In G. Fischer, D. Laming (eds.), Contributions to Mathematical Psychology, Psychometrics and Methodology, Scientific Psychology Series, pp. 173–194. Springer-Verlag, New York. Ünlü, A., & Albert, D. (2004).
The Correlational Agreement Coefficient CA - a mathematical analysis of a descriptive goodness-of-fit measure. Mathematical Social Sciences, 48, 281–314. Van Buggenhaut J, Degreef E (1987). On Dichotomization Methods in Boolean Analysis of Questionnaires. In E Roskam, R Suck (eds.), Mathematical Psychology in Progress, Elsevier Science Publishers B.V., North Holland. Van Leeuwe, J.F.J. (1974). Item tree analysis. Nederlands Tijdschrift voor de Psychologie, 29, 475–484. Sargin, A., & Ünlü, A. (2009). Inductive item tree analysis: Corrections, improvements, and comparisons. Mathematical Social Sciences, 58, 376–392.
NSynth
NSynth (a portmanteau of "Neural Synthesis") is a WaveNet-based autoencoder for synthesizing audio, outlined in a paper in April 2017. == Overview == The model generates sounds through a neural network based synthesis, employing a WaveNet-style autoencoder to learn its own temporal embeddings from four different sounds. Google then released an open source hardware interface for the algorithm called NSynth Super, used by notable musicians such as Grimes and YACHT to generate experimental music using artificial intelligence. The research and development of the algorithm was part of a collaboration between Google Brain, Magenta and DeepMind. == Technology == === Dataset === The NSynth dataset is composed of 305,979 one-shot instrumental notes featuring a unique pitch, timbre, and envelope, sampled from 1,006 instruments from commercial sample libraries. For each instrument the dataset contains four-second 16 kHz audio snippets obtained by ranging over every pitch of a standard MIDI piano, as well as five different velocities. The dataset is made available under a Creative Commons Attribution 4.0 International (CC BY 4.0) license. === Machine learning model === A spectral autoencoder model and a WaveNet autoencoder model are publicly available on GitHub. The baseline model uses a spectrogram with fft_size 1024 and hop_size 256, MSE loss on the magnitudes, and the Griffin-Lim algorithm for reconstruction. The WaveNet model trains on mu-law encoded waveform chunks of size 6144. It learns embeddings with 16 dimensions that are downsampled by a factor of 512 in time. == NSynth Super == In 2018 Google released a hardware interface for the NSynth algorithm, called NSynth Super, designed to provide an accessible physical interface to the algorithm for musicians to use in their artistic production. Design files, source code and internal components are released under an open source Apache License 2.0, enabling hobbyists and musicians to freely build and use the instrument.
At the core of the NSynth Super is a Raspberry Pi, extended with a custom printed circuit board to accommodate the interface elements. == Influence == Despite not being publicly available as a commercial product, NSynth Super has been used by notable artists, including Grimes and YACHT. Grimes reported using the instrument in her 2020 studio album Miss Anthropocene. YACHT announced an extensive use of NSynth Super in their album Chain Tripping. Claire L. Evans compared the potential influence of the instrument to the Roland TR-808. The NSynth Super design was honored with a D&AD Yellow Pencil award in 2018. == References == == Further reading == Engel, Jesse; Resnick, Cinjon; Roberts, Adam; Dieleman, Sander; Eck, Douglas; Simonyan, Karen; Norouzi, Mohammad (2017). "Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders". arXiv:1704.01279 [cs.LG]. == External links == Official Nsynth Super site Official Magenta site In-browser emulation of the Nsynth algorithm
Energy-based model
An energy-based model (EBM) (also called Canonical Ensemble Learning or Learning via Canonical Ensemble – CEL and LCE, respectively) is an application of canonical ensemble formulation from statistical physics for learning from data. The approach prominently appears in generative artificial intelligence. EBMs provide a unified framework for many probabilistic and non-probabilistic approaches to such learning, particularly for training graphical and other structured models. An EBM learns the characteristics of a target dataset and generates a similar but larger dataset. EBMs detect the latent variables of a dataset and generate new datasets with a similar distribution. Energy-based generative neural networks are a class of generative models, which aim to learn explicit probability distributions of data in the form of energy-based models, the energy functions of which are parameterized by modern deep neural networks. Boltzmann machines are a special form of energy-based models with a specific parametrization of the energy. == Description == For a given input x {\displaystyle x} , the model describes an energy E θ ( x ) {\displaystyle E_{\theta }(x)} such that the Boltzmann distribution P θ ( x ) = exp ⁡ ( − β E θ ( x ) ) / Z ( θ ) {\displaystyle P_{\theta }(x)=\exp(-\beta E_{\theta }(x))/Z(\theta )} is a probability (density), and typically β = 1 {\displaystyle \beta =1} . Since the normalization constant: Z ( θ ) := ∫ x ∈ X exp ⁡ ( − β E θ ( x ) ) d x {\displaystyle Z(\theta ):=\int _{x\in X}\exp(-\beta E_{\theta }(x))dx} (also known as the partition function) depends on all the Boltzmann factors of all possible inputs x {\displaystyle x} , it cannot be easily computed or reliably estimated during training simply using standard maximum likelihood estimation.
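Because the difficulty lies in the integral over all inputs, a discretized toy example makes the definition concrete: on a finite grid, Z(θ) reduces to a sum of Boltzmann factors. The quadratic energy below is an illustrative assumption, not a model from the literature.

```python
import numpy as np

# Toy energy over a discretized one-dimensional domain, with beta = 1.
xs = np.linspace(-5.0, 5.0, 2001)
dx = xs[1] - xs[0]
energy = 0.5 * xs ** 2            # stand-in for a learned E_theta(x)

boltzmann = np.exp(-energy)       # Boltzmann factors exp(-E(x))
Z = boltzmann.sum() * dx          # partition function via a Riemann sum
p = boltzmann / Z                 # normalized density P_theta(x)
```

For this particular energy the Boltzmann distribution is a standard Gaussian, so the numerical Z should come out close to sqrt(2*pi); with a deep-network energy in many dimensions, no such direct summation is feasible, which is exactly the problem described above.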
However, for maximizing the likelihood during training, the gradient of the log-likelihood of a single training example x {\displaystyle x} is given by using the chain rule: ∂ θ log ⁡ ( P θ ( x ) ) = E x ′ ∼ P θ [ ∂ θ E θ ( x ′ ) ] − ∂ θ E θ ( x ) ( ∗ ) {\displaystyle \partial _{\theta }\log \left(P_{\theta }(x)\right)=\mathbb {E} _{x'\sim P_{\theta }}[\partial _{\theta }E_{\theta }(x')]-\partial _{\theta }E_{\theta }(x)\,(*)} The expectation in the above formula for the gradient can be approximately estimated by drawing samples x ′ {\displaystyle x'} from the distribution P θ {\displaystyle P_{\theta }} using Markov chain Monte Carlo (MCMC). Early energy-based models, such as the 2003 Boltzmann machine by Hinton, estimated this expectation via blocked Gibbs sampling. Newer approaches make use of more efficient Stochastic Gradient Langevin Dynamics (LD), drawing samples using: x 0 ′ ∼ P 0 , x i + 1 ′ = x i ′ − α 2 ∂ E θ ( x i ′ ) ∂ x i ′ + ϵ {\displaystyle x_{0}'\sim P_{0},x_{i+1}'=x_{i}'-{\frac {\alpha }{2}}{\frac {\partial E_{\theta }(x_{i}')}{\partial x_{i}'}}+\epsilon } , where ϵ ∼ N ( 0 , α ) {\displaystyle \epsilon \sim {\mathcal {N}}(0,\alpha )} . A replay buffer of past values x i ′ {\displaystyle x_{i}'} is used with LD to initialize the optimization module. The parameters θ {\displaystyle \theta } of the neural network are therefore trained in a generative manner via MCMC-based maximum likelihood estimation: the learning process follows an "analysis by synthesis" scheme, where within each learning iteration, the algorithm samples the synthesized examples from the current model by a gradient-based MCMC method (e.g., Langevin dynamics or Hybrid Monte Carlo), and then updates the parameters θ {\displaystyle \theta } based on the difference between the training examples and the synthesized ones – see equation ( ∗ ) {\displaystyle (*)} . 
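A minimal sketch of the Langevin update above, using a fixed quadratic energy (an illustrative assumption) so the result can be checked against the known stationary distribution N(0, 1); with a learned E_θ the gradient would come from automatic differentiation instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_energy(x):
    # dE/dx for the toy energy E(x) = x^2 / 2; a trained network
    # would supply this gradient via backpropagation.
    return x

alpha = 0.1                       # step size; epsilon ~ N(0, alpha)
x = rng.standard_normal(5000)     # x_0' drawn from a simple prior P_0
for _ in range(1000):
    eps = rng.normal(0.0, np.sqrt(alpha), size=x.shape)
    x = x - 0.5 * alpha * grad_energy(x) + eps

# For E(x) = x^2 / 2 the Boltzmann distribution is N(0, 1), so the
# samples should settle near zero mean and unit variance (up to a
# small discretization bias of order alpha).
```

In an actual training loop these samples x' would feed the expectation term of equation (∗), with the replay buffer supplying the initial values instead of a fresh prior draw.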
This process can be interpreted as an alternating mode seeking and mode shifting process, and also has an adversarial interpretation. Essentially, the model learns a function E θ {\displaystyle E_{\theta }} that associates low energies to correct values, and higher energies to incorrect values. After training, given a converged energy model E θ {\displaystyle E_{\theta }} , the Metropolis–Hastings algorithm can be used to draw new samples. The acceptance probability is given by: P a c c ( x i → x ∗ ) = min ( 1 , P θ ( x ∗ ) P θ ( x i ) ) . {\displaystyle P_{acc}(x_{i}\to x^{*})=\min \left(1,{\frac {P_{\theta }(x^{*})}{P_{\theta }(x_{i})}}\right).} == History == The term "energy-based models" was first coined in a 2003 JMLR paper where the authors defined a generalisation of independent components analysis to the overcomplete setting using EBMs. Other early work on EBMs proposed models that represented energy as a composition of latent and observable variables. == Characteristics == EBMs demonstrate useful properties: Simplicity and stability–The EBM is the only object that needs to be designed and trained. Separate networks need not be trained to ensure balance. Adaptive computation time–An EBM can generate sharp, diverse samples or (more quickly) coarse, less diverse samples. Given infinite time, this procedure produces true samples. Flexibility–In Variational Autoencoders (VAE) and flow-based models, the generator learns a map from a continuous space to a (possibly) discontinuous space containing different data modes. EBMs can learn to assign low energies to disjoint regions (multiple modes). Adaptive generation–EBM generators are implicitly defined by the probability distribution, and automatically adapt as the distribution changes (without training), allowing EBMs to address domains where generator training is impractical, as well as minimizing mode collapse and avoiding spurious modes from out-of-distribution samples. 
Compositionality–Individual models are unnormalized probability distributions, allowing models to be combined through product of experts or other hierarchical techniques. == Experimental results == On image datasets such as CIFAR-10 and ImageNet 32x32, an EBM model generated high-quality images relatively quickly. It supported combining features learned from one type of image for generating other types of images. It was able to generalize using out-of-distribution datasets, outperforming flow-based and autoregressive models. EBMs were relatively resistant to adversarial perturbations, performing better than models explicitly trained against them for classification. == Applications == Target applications include natural language processing, robotics and computer vision. The first energy-based generative neural network is the generative ConvNet proposed in 2016 for image patterns, where the neural network is a convolutional neural network. The model has been generalized to various domains to learn distributions of videos and 3D voxels. Later variants have made these models more effective. They have proven useful for data generation (e.g., image synthesis, video synthesis, 3D shape synthesis, etc.), data recovery (e.g., recovering videos with missing pixels or image frames, 3D super-resolution, etc.), data reconstruction (e.g., image reconstruction and linear interpolation). == Alternatives == EBMs compete with techniques such as variational autoencoders (VAEs), generative adversarial networks (GANs) or normalizing flows. == Extensions == === Joint energy-based models === Joint energy-based models (JEM), proposed in 2020 by Grathwohl et al., allow any classifier with softmax output to be interpreted as an energy-based model.
The key observation is that such a classifier is trained to predict the conditional probability p θ ( y | x ) = e f → θ ( x ) [ y ] ∑ j = 1 K e f → θ ( x ) [ j ] for y = 1 , … , K and f → θ = ( f 1 , … , f K ) ∈ R K , {\displaystyle p_{\theta }(y|x)={\frac {e^{{\vec {f}}_{\theta }(x)[y]}}{\sum _{j=1}^{K}e^{{\vec {f}}_{\theta }(x)[j]}}}\ \ {\text{ for }}y=1,\dotsc ,K{\text{ and }}{\vec {f}}_{\theta }=(f_{1},\dotsc ,f_{K})\in \mathbb {R} ^{K},} where f → θ ( x ) [ y ] {\displaystyle {\vec {f}}_{\theta }(x)[y]} is the y-th index of the logits f → {\displaystyle {\vec {f}}} corresponding to class y. Without any change to the logits it was proposed to reinterpret the logits to describe a joint probability density: p θ ( y , x ) = e f → θ ( x ) [ y ] Z ( θ ) , {\displaystyle p_{\theta }(y,x)={\frac {e^{{\vec {f}}_{\theta }(x)[y]}}{Z(\theta )}},} with unknown partition function Z ( θ ) {\displaystyle Z(\theta )} and energy E θ ( x , y ) = − f θ ( x ) [ y ] {\displaystyle E_{\theta }(x,y)=-f_{\theta }(x)[y]} . By marginalization, we obtain the unnormalized density p θ ( x ) = ∑ y p θ ( y , x ) = ∑ y e f → θ ( x ) [ y ] Z ( θ ) =: exp ⁡ ( − E θ ( x ) ) , {\displaystyle p_{\theta }(x)=\sum _{y}p_{\theta }(y,x)=\sum _{y}{\frac {e^{{\vec {f}}_{\theta }(x)[y]}}{Z(\theta )}}=:\exp(-E_{\theta }(x)),} therefore, E θ ( x ) = − log ⁡ ( ∑ y e f → θ ( x ) [ y ] Z ( θ ) ) , {\displaystyle E_{\theta }(x)=-\log \left(\sum _{y}{\frac {e^{{\vec {f}}_{\theta }(x)[y]}}{Z(\theta )}}\right),} so that any classifier can be used to define an energy function E θ ( x ) {\displaystyle E_{\theta }(x)} . 
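Numerically, the reinterpretation comes down to one log-sum-exp: E_θ(x) = −log Σ_y exp(f_θ(x)[y]), dropping the additive constant log Z(θ), which does not affect gradients. A sketch with a made-up logit vector:

```python
import math

def class_probs(logits):
    """Standard softmax p(y|x) from classifier logits, computed stably."""
    m = max(logits)
    exps = [math.exp(f - m) for f in logits]
    total = sum(exps)
    return [e / total for e in exps]

def energy_from_logits(logits):
    """JEM energy E(x) = -log sum_y exp(f(x)[y]).

    exp(-E(x)) is then proportional to p(x), up to the unknown
    partition function Z(theta).
    """
    m = max(logits)
    return -(m + math.log(sum(math.exp(f - m) for f in logits)))

logits = [2.0, -1.0, 0.5]   # hypothetical f_theta(x) for K = 3 classes
probs = class_probs(logits)
E = energy_from_logits(logits)
```

Nothing about the classifier changes: the same logits yield the usual conditional p(y|x) via softmax and, read jointly, an unnormalized density over inputs via the energy.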
== See also == Empirical likelihood Posterior predictive distribution Contrastive learning == Literature == Implicit Generation and Generalization in Energy-Based Models Yilun Du, Igor Mordatch https://arxiv.org/abs/1903.08689 Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One, Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky https://arxiv.org/abs/1912.03263 == References == == External links == "CIAR NCAP Summer School". www.cs.toronto.edu. Retrieved 2019-12-27. Dayan, Peter; Hinton, Geoffrey; Neal, Radford; Zemel, Richard S. (1999), "Helmholtz Machine", Unsupervised Learning, The MIT Press, doi:10.7551/mitpress/7011.003.0017, ISBN 978-0-262-28803-3 Hinton, Geoffrey E. (August 2002). "Training Products of Experts by Minimizing Contrastive Divergence". Neural Computation. 14 (8): 1771–1800. doi:10.1162/089976602760128018. ISSN 0899-7667. PMID 12180402. S2CID 207596505. Salakhutdinov, Ruslan; Hinton, Geoffrey (2009-04-15). "Deep Boltzmann Machines". Artificial Intelligence and Statistics: 448–455.
NETtalk (artificial neural network)
NETtalk is an artificial neural network that learns to pronounce written English text by supervised learning. It takes English text as input, and produces a matching phonetic transcription as output. It is the result of research carried out in the mid-1980s by Terrence Sejnowski and Charles Rosenberg. The intent behind NETtalk was to construct simplified models that might shed light on the complexity of learning human-level cognitive tasks, and their implementation as a connectionist model that could also learn to perform a comparable task. The authors trained it by backpropagation. The network was trained on a large number of English words and their corresponding pronunciations, and is able to generate pronunciations for unseen words with a high level of accuracy. The success of the NETtalk network inspired further research in the field of pronunciation generation and speech synthesis and demonstrated the potential of neural networks for solving complex natural language processing problems. The output of the network was a stream of phonemes, which fed into DECtalk to produce audible speech. It achieved popular success, appearing on the Today show.: 115  From the point of view of modeling human cognition, NETtalk does not specifically model the image processing stages and letter recognition of the visual cortex. Rather, it assumes that the letters have been pre-classified and recognized. It is NETtalk's task to learn the proper association between a given sequence of letters and its correct pronunciation, based on the context in which the letters appear. A similar architecture was subsequently used for the opposite task, that of converting a continuous speech signal to a phoneme sequence. == Training == The training dataset was a 20,008-word subset of the Brown Corpus, with manually annotated phoneme and stress information for each letter. The development process was described in a 1993 interview.
It took three months (250 person-hours) to create the training dataset, but only a few days to train the network. After it was run successfully on this dataset, the authors tried it on a phonological transcription of an interview with a young Latino boy from a barrio in Los Angeles. This resulted in a network that reproduced his Spanish accent.: 115  The original NETtalk was implemented on a Ridge 32, which took 0.275 seconds per learning step (one forward and one backward pass). Training NETtalk became a benchmark to test the efficiency of backpropagation programs. For example, an implementation on Connection Machine-1 (with 16384 processors) ran at 52x speedup. An implementation on a 10-cell Warp ran at 340x speedup. The following table compiles the benchmark scores as of 1988. Speed is measured in "millions of connections per second" (MCPS). For example, the original NETtalk on Ridge 32 took 0.275 seconds per forward-backward pass, giving 18629 / 10 6 0.275 = 0.068 {\displaystyle {\frac {18629/10^{6}}{0.275}}=0.068} MCPS. Relative times are normalized to the MicroVax. == Architecture == The network had three layers and 18,629 adjustable weights, large by the standards of 1986. There were worries that it would overfit the dataset, but it was trained successfully. The input of the network has 203 units, divided into 7 groups of 29 units each. Each group is a one-hot encoding of one character. There are 29 possible characters: 26 letters, comma, period, and word boundary (whitespace). To produce the pronunciation of a single character, the network takes the character itself, as well as 3 characters before and 3 characters after it. The hidden layer has 80 units. The output has 26 units. 21 units encode for articulatory features (point of articulation, voicing, vowel height, etc.) of phonemes, and 5 units encode for stress and syllable boundaries.
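The 203-unit input encoding described above can be sketched directly: a 7-character window, each character one-hot over a 29-symbol alphabet. The particular symbol ordering below is an assumption, since the original paper's exact indexing is not given here.

```python
# 29 symbols: 26 letters plus comma, period, and word boundary (space).
ALPHABET = "abcdefghijklmnopqrstuvwxyz,. "
INDEX = {ch: i for i, ch in enumerate(ALPHABET)}

def encode_window(text, center):
    """One-hot encode the 7-character window around position `center`.

    Returns a flat 7 * 29 = 203-dimensional 0/1 vector, padding with
    the word-boundary symbol past the ends of the text.
    """
    vec = []
    for offset in range(-3, 4):
        i = center + offset
        ch = text[i] if 0 <= i < len(text) else " "
        slot = [0] * len(ALPHABET)
        slot[INDEX[ch]] = 1
        vec.extend(slot)
    return vec

v = encode_window("network", 3)   # window centered on the 'w'
```

Exactly 7 of the 203 units are active for any window, one per character group, which is the sparse input the 80-unit hidden layer sees.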
Sejnowski studied the learned representation in the network, and found that phonemes that sound similar are clustered together in representation space. The output of the network degrades, but remains understandable, when some hidden neurons are removed. == References == == External links == Original NETtalk training set New York Times article about NETtalk
PVLV
The primary value learned value (PVLV) model is a possible explanation for the reward-predictive firing properties of dopamine (DA) neurons. It simulates behavioral and neural data on Pavlovian conditioning and the midbrain dopaminergic neurons that fire in proportion to unexpected rewards. It is an alternative to the temporal-difference (TD) algorithm. It is used as part of Leabra. == References ==
Graded structure
In mathematics, the term "graded" has a number of meanings, mostly related: In abstract algebra, it refers to a family of concepts: An algebraic structure X {\displaystyle X} is said to be I {\displaystyle I} -graded for an index set I {\displaystyle I} if it has a gradation or grading, i.e. a decomposition into a direct sum X = ⨁ i ∈ I X i {\textstyle X=\bigoplus _{i\in I}X_{i}} of structures; the elements of X i {\displaystyle X_{i}} are said to be "homogeneous of degree i". The index set I {\displaystyle I} is most commonly N {\displaystyle \mathbb {N} } or Z {\displaystyle \mathbb {Z} } , and may be required to have extra structure depending on the type of X {\displaystyle X} . Grading by Z 2 {\displaystyle \mathbb {Z} _{2}} (i.e. Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } ) is also important; see e.g. signed set (the Z 2 {\displaystyle \mathbb {Z} _{2}} -graded sets). The trivial ( Z {\displaystyle \mathbb {Z} } - or N {\displaystyle \mathbb {N} } -) gradation has X 0 = X , X i = 0 {\displaystyle X_{0}=X,X_{i}=0} for i ≠ 0 {\displaystyle i\neq 0} and a suitable trivial structure 0 {\displaystyle 0} . An algebraic structure is said to be doubly graded if the index set is a direct product of sets; the pairs may be called "bidegrees" (e.g. see Spectral sequence). A I {\displaystyle I} -graded vector space or graded linear space is thus a vector space with a decomposition into a direct sum V = ⨁ i ∈ I V i {\textstyle V=\bigoplus _{i\in I}V_{i}} of spaces. A graded linear map is a map between graded vector spaces respecting their gradations. A graded ring is a ring that is a direct sum of additive abelian groups R i {\displaystyle R_{i}} such that R i R j ⊆ R i + j {\displaystyle R_{i}R_{j}\subseteq R_{i+j}} , with i {\displaystyle i} taken from some monoid, usually N {\displaystyle \mathbb {N} } or Z {\displaystyle \mathbb {Z} } , or semigroup (for a ring without identity). 
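As a concrete instance of the graded-ring definition, the polynomial ring in one variable is N-graded by degree:

```latex
k[x] \;=\; \bigoplus_{n \in \mathbb{N}} R_n ,
\qquad R_n = k\,x^{n} ,
\qquad R_i R_j = \bigl(k\,x^{i}\bigr)\bigl(k\,x^{j}\bigr)
              \subseteq k\,x^{i+j} = R_{i+j} .
```

The homogeneous elements of degree n are the monomials c x^n with c in k; a general polynomial is a finite sum of homogeneous pieces, one from each R_n.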
The associated graded ring of a commutative ring R {\displaystyle R} with respect to a proper ideal I {\displaystyle I} is gr I ⁡ R = ⨁ n ∈ N I n / I n + 1 {\textstyle \operatorname {gr} _{I}R=\bigoplus _{n\in \mathbb {N} }I^{n}/I^{n+1}} . A graded module is a left module M {\displaystyle M} over a graded ring that is a direct sum ⨁ i ∈ I M i {\textstyle \bigoplus _{i\in I}M_{i}} of modules satisfying R i M j ⊆ M i + j {\displaystyle R_{i}M_{j}\subseteq M_{i+j}} . The associated graded module of an R {\displaystyle R} -module M {\displaystyle M} with respect to a proper ideal I {\displaystyle I} is gr I ⁡ M = ⨁ n ∈ N I n M / I n + 1 M {\textstyle \operatorname {gr} _{I}M=\bigoplus _{n\in \mathbb {N} }I^{n}M/I^{n+1}M} . A differential graded module, differential graded Z {\displaystyle \mathbb {Z} } -module or DG-module is a graded module M {\displaystyle M} with a differential d : M → M : M i → M i + 1 {\displaystyle d\colon M\to M\colon M_{i}\to M_{i+1}} making M {\displaystyle M} a chain complex, i.e. d ∘ d = 0 {\displaystyle d\circ d=0} . A graded algebra is an algebra A {\displaystyle A} over a ring R {\displaystyle R} that is graded as a ring; if R {\displaystyle R} is graded we also require A i R j ⊆ A i + j ⊇ R i A j {\displaystyle A_{i}R_{j}\subseteq A_{i+j}\supseteq R_{i}A_{j}} . The graded Leibniz rule for a map d : A → A {\displaystyle d\colon A\to A} on a graded algebra A {\displaystyle A} specifies that d ( a ⋅ b ) = ( d a ) ⋅ b + ( − 1 ) | a | a ⋅ ( d b ) {\displaystyle d(a\cdot b)=(da)\cdot b+(-1)^{|a|}a\cdot (db)} . A differential graded algebra, DG-algebra or DGAlgebra is a graded algebra that is a differential graded module whose differential obeys the graded Leibniz rule. A homogeneous derivation on a graded algebra A is a homogeneous linear map of grade d = |D| on A such that D ( a b ) = D ( a ) b + ε | a | | D | a D ( b ) , ε = ± 1 {\displaystyle D(ab)=D(a)b+\varepsilon ^{|a||D|}aD(b),\varepsilon =\pm 1} acting on homogeneous elements of A.
A graded derivation is a sum of homogeneous derivations with the same ε {\displaystyle \varepsilon } . A DGA is an augmented DG-algebra, or differential graded augmented algebra, (see Differential graded algebra). A superalgebra is a Z 2 {\displaystyle \mathbb {Z} _{2}} -graded algebra. A graded-commutative superalgebra satisfies the "supercommutative" law y x = ( − 1 ) | x | | y | x y . {\displaystyle yx=(-1)^{|x||y|}xy.} for homogeneous x,y, where | a | {\displaystyle |a|} represents the "parity" of a {\displaystyle a} , i.e. 0 or 1 depending on the component in which it lies. CDGA may refer to the category of augmented differential graded commutative algebras. A graded Lie algebra is a Lie algebra that is graded as a vector space by a gradation compatible with its Lie bracket. A graded Lie superalgebra is a graded Lie algebra with the requirement for anticommutativity of its Lie bracket relaxed. A supergraded Lie superalgebra is a graded Lie superalgebra with an additional super Z 2 {\displaystyle \mathbb {Z} _{2}} -gradation. A differential graded Lie algebra is a graded vector space over a field of characteristic zero together with a bilinear map [ , ] : L i ⊗ L j → L i + j {\displaystyle [\ ,]\colon L_{i}\otimes L_{j}\to L_{i+j}} and a differential d : L i → L i − 1 {\displaystyle d\colon L_{i}\to L_{i-1}} satisfying [ x , y ] = ( − 1 ) | x | | y | + 1 [ y , x ] , {\displaystyle [x,y]=(-1)^{|x||y|+1}[y,x],} for any homogeneous elements x, y in L, the "graded Jacobi identity" and the graded Leibniz rule. The Graded Brauer group is a synonym for the Brauer–Wall group B W ( F ) {\displaystyle BW(F)} classifying finite-dimensional graded central division algebras over the field F. An A {\displaystyle {\mathcal {A}}} -graded category for a category A {\displaystyle {\mathcal {A}}} is a category C {\displaystyle {\mathcal {C}}} together with a functor F : C → A {\displaystyle F\colon {\mathcal {C}}\rightarrow {\mathcal {A}}} . 
A differential graded category or DG category is a category whose morphism sets form differential graded Z {\displaystyle \mathbb {Z} } -modules. Graded manifold – extension of the manifold concept based on ideas coming from supersymmetry and supercommutative algebra, including sections on Graded function Graded vector fields Graded exterior forms Graded differential geometry Graded differential calculus In other areas of mathematics: Functionally graded elements are used in finite element analysis. A graded poset is a poset P {\displaystyle P} with a rank function ρ : P → N {\displaystyle \rho \colon P\to \mathbb {N} } compatible with the ordering (i.e. x < y ⟹ ρ ( x ) < ρ ( y ) {\displaystyle x<y\implies \rho (x)<\rho (y)} ) such that y {\displaystyle y} covers x ⟹ ρ ( y ) = ρ ( x ) + 1 {\displaystyle x\implies \rho (y)=\rho (x)+1} .
Link-centric preferential attachment
In mathematical modeling of social networks, link-centric preferential attachment is a node's propensity to re-establish links to nodes it has previously been in contact with in time-varying networks. This preferential attachment model relies on nodes keeping memory of previous neighbors up to the current time. == Background == In real social networks individuals exhibit a tendency to re-connect with past contacts (e.g. family, friends, co-workers) rather than strangers. In 1970, Mark Granovetter examined this behaviour in the social networks of a group of workers and identified tie strength, a characteristic of social ties describing the frequency of contact between two individuals. From this comes the idea of strong and weak ties, where an individual's strong ties are those she has come into frequent contact with. Link-centric preferential attachment aims to explain the mechanism behind strong and weak ties as a stochastic reinforcement process for old ties in agent-based modeling where nodes have long-term memory. == Examples == In a simple model for this mechanism, a node's propensity to establish a new link can be characterized solely by n {\displaystyle n} , the number of contacts it has had in the past. The probability for a node with n social ties to establish a new social tie could then be simply given by P ( n ) = c n + c {\displaystyle P(n)={c \over n+c}\,} where c is an offset constant. The probability for a node to re-connect with old ties is then 1 − P ( n ) = n n + c . {\displaystyle 1-P(n)={n \over n+c}.} Figure 1 shows an example of this process: in the first step nodes A and C connect to node B, giving B a total of two social ties. With c = 1, in the next step B has a probability P(2) = 1/(2 + 1) = 1/3 to create a new tie with D, whereas the probability to reconnect with A or C is twice that at 2/3.
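The arithmetic in this example can be checked with a few lines of code; the simulation helper below is a minimal sketch of the memory mechanism, with names chosen for illustration.

```python
import random

random.seed(1)

def p_new_tie(n, c=1.0):
    """P(n) = c / (n + c): chance a node with n prior contacts links to a stranger."""
    return c / (n + c)

def next_contact(known, c=1.0):
    """One step of the memory process: True if the node forms a brand-new tie,
    False if it reconnects with one of its `known` contacts."""
    return random.random() < p_new_tie(len(known), c)

# Node B after meeting A and C: n = 2 and c = 1, so the new-tie
# probability is 1/3 and the reconnection probability is 2/3.
trials = 100000
new_ties = sum(next_contact({"A", "C"}) for _ in range(trials))
```

Over many trials the empirical new-tie frequency for node B settles near 1/3, matching the closed-form probability in the example.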
More complex models may take into account other variables, such as frequency of contact, contact and intercontact duration, as well as short-term memory effects. == Effects on the spreading of contagions / weakness of strong ties == Understanding the evolution of a network's structure and how it can influence dynamical processes has become an important part of modeling the spreading of contagions. In models of social and biological contagion spreading on time-varying networks, link-centric preferential attachment can alter the spread of the contagion to the entire population. Compared to the classic rumour-spreading process where nodes are memory-less, link-centric preferential attachment can cause not only a slower but also a less diffuse spread of the contagion. In these models an infected node's chances of connecting to new contacts diminish as the size of its social circle n {\displaystyle n} grows, leading to a limiting effect on the growth of n. The result is strong ties with a node's early contacts and consequently the weakening of the diffusion of the contagion. == See also == BA model Network science Interpersonal tie == References ==
One-way analysis of variance
In statistics, one-way analysis of variance (or one-way ANOVA) is a technique to compare whether two or more samples' means are significantly different (using the F distribution). This analysis of variance technique requires a numeric response variable "Y" and a single explanatory variable "X", hence "one-way". The ANOVA tests the null hypothesis, which states that samples in all groups are drawn from populations with the same mean values. To do this, two estimates are made of the population variance. These estimates rely on various assumptions (see below). The ANOVA produces an F-statistic, the ratio of the variance calculated among the means to the variance within the samples. If the group means are drawn from populations with the same mean values, the variance between the group means should be lower than the variance of the samples, following the central limit theorem. A higher ratio therefore implies that the samples were drawn from populations with different mean values. Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test (Gosset, 1908). When there are only two means to compare, the t-test and the F-test are equivalent; the relation between ANOVA and t is given by F = t2. An extension of one-way ANOVA is two-way analysis of variance that examines the influence of two different categorical independent variables on one dependent variable. == Assumptions == The results of a one-way ANOVA can be considered reliable as long as the following assumptions are met: Response variable residuals are normally distributed (or approximately normally distributed). Variances of populations are equal. Responses for a given group are independent and identically distributed normal random variables (not a simple random sample (SRS)). If data are ordinal, a non-parametric alternative to this test should be used such as Kruskal–Wallis one-way analysis of variance. 
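The two-group relation F = t² noted above can be verified numerically; the two small samples below are invented for illustration, and the t statistic uses the standard pooled-variance (equal-variance) form.

```python
def one_way_f(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ssw = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    return (ssb / df_b) / (ssw / df_w)

def pooled_t(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    sa = sum((v - ma) ** 2 for v in a) / (na - 1)
    sb = sum((v - mb) ** 2 for v in b) / (nb - 1)
    sp2 = ((na - 1) * sa + (nb - 1) * sb) / (na + nb - 2)
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5

a, b = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # illustrative samples
t = pooled_t(a, b)
F = one_way_f([a, b])
```

For these samples both routes give the same answer (F = t² = 2.4), illustrating the equivalence of the two-group F-test and the t-test.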
If the variances are not known to be equal, a generalization of 2-sample Welch's t-test can be used. === Departures from population normality === ANOVA is a relatively robust procedure with respect to violations of the normality assumption. The one-way ANOVA can be generalized to the factorial and multivariate layouts, as well as to the analysis of covariance. It is often stated in popular literature that none of these F-tests are robust when there are severe violations of the assumption that each population follows the normal distribution, particularly for small alpha levels and unbalanced layouts. Furthermore, it is also claimed that if the underlying assumption of homoscedasticity is violated, the Type I error properties degenerate much more severely. However, this is a misconception, based on work done in the 1950s and earlier. The first comprehensive investigation of the issue by Monte Carlo simulation was Donaldson (1966). He showed that under the usual departures (positive skew, unequal variances) "the F-test is conservative", and so it is less likely than it should be to find that a variable is significant. However, as either the sample size or the number of cells increases, "the power curves seem to converge to that based on the normal distribution". Tiku (1971) found that "the non-normal theory power of F is found to differ from the normal theory power by a correction term which decreases sharply with increasing sample size." The problem of non-normality, especially in large samples, is far less serious than popular articles would suggest. The current view is that "Monte-Carlo studies were used extensively with normal distribution-based tests to determine how sensitive they are to violations of the assumption of normal distribution of the analyzed variables in the population. The general conclusion from these studies is that the consequences of such violations are less severe than previously thought. 
Although these conclusions should not entirely discourage anyone from being concerned about the normality assumption, they have increased the overall popularity of the distribution-dependent statistical tests in all areas of research." For nonparametric alternatives in the factorial layout, see Sawilowsky. For more discussion see ANOVA on ranks. == The case of fixed effects, fully randomized experiment, unbalanced data == === The model === The normal linear model describes treatment groups with probability distributions which are identically bell-shaped (normal) curves with different means. Thus fitting the models requires only the means of each treatment group and a variance calculation (an average variance within the treatment groups is used). Calculations of the means and the variance are performed as part of the hypothesis test. The commonly used normal linear models for a completely randomized experiment are: y i , j = μ j + ε i , j {\displaystyle y_{i,j}=\mu _{j}+\varepsilon _{i,j}} (the means model) or y i , j = μ + τ j + ε i , j {\displaystyle y_{i,j}=\mu +\tau _{j}+\varepsilon _{i,j}} (the effects model) where i = 1 , … , I {\displaystyle i=1,\dotsc ,I} is an index over experimental units j = 1 , … , J {\displaystyle j=1,\dotsc ,J} is an index over treatment groups I j {\displaystyle I_{j}} is the number of experimental units in the jth treatment group I = ∑ j I j {\displaystyle I=\sum _{j}I_{j}} is the total number of experimental units y i , j {\displaystyle y_{i,j}} are observations μ j {\displaystyle \mu _{j}} is the mean of the observations for the jth treatment group μ {\displaystyle \mu } is the grand mean of the observations τ j {\displaystyle \tau _{j}} is the jth treatment effect, a deviation from the grand mean ∑ τ j = 0 {\displaystyle \sum \tau _{j}=0} μ j = μ + τ j {\displaystyle \mu _{j}=\mu +\tau _{j}} ε ∼ N ( 0 , σ 2 ) {\displaystyle \varepsilon \thicksim N(0,\sigma ^{2})} , ε i , j {\displaystyle \varepsilon _{i,j}} are normally 
distributed zero-mean random errors. The index i {\displaystyle i} over the experimental units can be interpreted several ways. In some experiments, the same experimental unit is subject to a range of treatments; i {\displaystyle i} may point to a particular unit. In others, each treatment group has a distinct set of experimental units; i {\displaystyle i} may simply be an index into the j {\displaystyle j} -th list. === The data and statistical summaries of the data === One form of organizing experimental observations y i j {\displaystyle y_{ij}} is with groups in columns: Comparing model to summaries: μ = m {\displaystyle \mu =m} and μ j = m j {\displaystyle \mu _{j}=m_{j}} . The grand mean and grand variance are computed from the grand sums, not from group means and variances. === The hypothesis test === Given the summary statistics, the calculations of the hypothesis test are shown in tabular form. While two columns of SS are shown for their explanatory value, only one column is required to display results. M S E r r o r {\displaystyle MS_{Error}} is the estimate of variance corresponding to σ 2 {\displaystyle \sigma ^{2}} of the model. === Analysis summary === The core ANOVA analysis consists of a series of calculations. The data is collected in tabular form. Then each treatment group is summarized by the number of experimental units, two sums, a mean and a variance. The treatment group summaries are combined to provide totals for the number of units and the sums. The grand mean and grand variance are computed from the grand sums. The treatment and grand means are used in the model. The three DFs and SSs are calculated from the summaries. Then the MSs are calculated and a ratio determines F. A computer typically determines a p-value from F, which determines whether treatments produce significantly different results. If the result is significant, then the model provisionally has validity.
If the experiment is balanced, all of the I j {\displaystyle I_{j}} terms are equal so the SS equations simplify. In a more complex experiment, where the experimental units (or environmental effects) are not homogeneous, row statistics are also used in the analysis. The model includes terms dependent on i {\displaystyle i} . Determining the extra terms reduces the number of degrees of freedom available. == Example == Consider an experiment to study the effect of three different levels of a factor on a response (e.g. three levels of a fertilizer on plant growth). If we had 6 observations for each level, we could write the outcome of the experiment in a table like this, where a1, a2, and a3 are the three levels of the factor being studied. The null hypothesis, denoted H0, for the overall F-test for this experiment would be that all three levels of the factor produce the same response, on average. To calculate the F-ratio: Step 1: Calculate the mean within each group: Y ¯ 1 = 1 6 ∑ Y 1 i = 6 + 8 + 4 + 5 + 3 + 4 6 = 5 Y ¯ 2 = 1 6 ∑ Y 2 i = 8 + 12 + 9 + 11 + 6 + 8 6 = 9 Y ¯ 3 = 1 6 ∑ Y 3 i = 13 + 9 + 11 + 8 + 7 + 12 6 = 10 {\displaystyle {\begin{aligned}{\overline {Y}}_{1}&={\frac {1}{6}}\sum Y_{1i}={\frac {6+8+4+5+3+4}{6}}=5\\{\overline {Y}}_{2}&={\frac {1}{6}}\sum Y_{2i}={\frac {8+12+9+11+6+8}{6}}=9\\{\overline {Y}}_{3}&={\frac {1}{6}}\sum Y_{3i}={\frac {13+9+11+8+7+12}{6}}=10\end{aligned}}} Step 2: Calculate the overall mean: Y ¯ = ∑ i Y ¯ i a = Y ¯ 1 + Y ¯ 2 + Y ¯ 3 a = 5 + 9 + 10 3 = 8 {\displaystyle {\overline {Y}}={\frac {\sum _{i}{\overline {Y}}_{i}}{a}}={\frac {{\overline {Y}}_{1}+{\overline {Y}}_{2}+{\overline {Y}}_{3}}{a}}={\frac {5+9+10}{3}}=8} where a is the number of groups. 
Step 3: Calculate the "between-group" sum of squared differences: S B = n ( Y ¯ 1 − Y ¯ ) 2 + n ( Y ¯ 2 − Y ¯ ) 2 + n ( Y ¯ 3 − Y ¯ ) 2 = 6 ( 5 − 8 ) 2 + 6 ( 9 − 8 ) 2 + 6 ( 10 − 8 ) 2 = 84 {\displaystyle {\begin{aligned}S_{B}&=n({\overline {Y}}_{1}-{\overline {Y}})^{2}+n({\overline {Y}}_{2}-{\overline {Y}})^{2}+n({\overline {Y}}_{3}-{\overline {Y}})^{2}\\[8pt]&=6(5-8)^{2}+6(9-8)^{2}+6(10-8)^{2}=84\end{aligned}}} where n is the number of data values per group. The between-group degrees of freedom is one less than the number of groups: f b = 3 − 1 = 2 {\displaystyle f_{b}=3-1=2} so the between-group mean square value is M S B = 84 / 2 = 42 {\displaystyle MS_{B}=84/2=42} Step 4: Calculate the "within-group" sum of squares. Begin by centering the data in each group. The within-group sum of squares is the sum of squares of all 18 values in this table S W = ( 1 ) 2 + ( 3 ) 2 + ( − 1 ) 2 + ( 0 ) 2 + ( − 2 ) 2 + ( − 1 ) 2 + ( − 1 ) 2 + ( 3 ) 2 + ( 0 ) 2 + ( 2 ) 2 + ( − 3 ) 2 + ( − 1 ) 2 + ( 3 ) 2 + ( − 1 ) 2 + ( 1 ) 2 + ( − 2 ) 2 + ( − 3 ) 2 + ( 2 ) 2 = 1 + 9 + 1 + 0 + 4 + 1 + 1 + 9 + 0 + 4 + 9 + 1 + 9 + 1 + 1 + 4 + 9 + 4 = 68 {\displaystyle {\begin{aligned}S_{W}=&(1)^{2}+(3)^{2}+(-1)^{2}+(0)^{2}+(-2)^{2}+(-1)^{2}+\\&(-1)^{2}+(3)^{2}+(0)^{2}+(2)^{2}+(-3)^{2}+(-1)^{2}+\\&(3)^{2}+(-1)^{2}+(1)^{2}+(-2)^{2}+(-3)^{2}+(2)^{2}\\=&\ 1+9+1+0+4+1+1+9+0+4+9+1+9+1+1+4+9+4\\=&\ 68\\\end{aligned}}} The within-group degrees of freedom is f W = a ( n − 1 ) = 3 ( 6 − 1 ) = 15 {\displaystyle f_{W}=a(n-1)=3(6-1)=15} Thus the within-group mean square value is M S W = S W / f W = 68 / 15 ≈ 4.5 {\displaystyle MS_{W}=S_{W}/f_{W}=68/15\approx 4.5} Step 5: The F-ratio is F = M S B M S W ≈ 42 / 4.5 ≈ 9.3 {\displaystyle F={\frac {MS_{B}}{MS_{W}}}\approx 42/4.5\approx 9.3} The critical value is the number that the test statistic must exceed in order to reject the null hypothesis. In this case, Fcrit(2,15) = 3.68 at α = 0.05. Since F = 9.3 > 3.68, the results are significant at the 5% significance level. 
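The five steps can be reproduced programmatically. The following Python sketch is illustrative (it is not part of the original example's presentation) and uses the data values given in Step 1:

```python
# One-way ANOVA for the worked example: three groups (a1, a2, a3),
# six observations each.
groups = [
    [6, 8, 4, 5, 3, 4],      # a1
    [8, 12, 9, 11, 6, 8],    # a2
    [13, 9, 11, 8, 7, 12],   # a3
]

a = len(groups)          # number of groups: 3
n = len(groups[0])       # observations per group: 6 (balanced design)

# Step 1: group means
group_means = [sum(g) / n for g in groups]           # [5.0, 9.0, 10.0]

# Step 2: overall mean
grand_mean = sum(group_means) / a                    # 8.0

# Step 3: between-group sum of squares, DF and mean square
ss_between = sum(n * (m - grand_mean) ** 2 for m in group_means)  # 84
df_between = a - 1                                                # 2
ms_between = ss_between / df_between                              # 42

# Step 4: within-group sum of squares, DF and mean square
ss_within = sum((y - m) ** 2
                for g, m in zip(groups, group_means) for y in g)  # 68
df_within = a * (n - 1)                                           # 15
ms_within = ss_within / df_within                                 # ~4.53

# Step 5: F-ratio
F = ms_between / ms_within
print(round(F, 1))  # 9.3
```

Rounding F to one decimal place reproduces the quoted value of 9.3; the unrounded ratio is 630/68 ≈ 9.26.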
One would not accept the null hypothesis, concluding that there is strong evidence that the expected values in the three groups differ. The p-value for this test is 0.002. After performing the F-test, it is common to carry out some "post-hoc" analysis of the group means. In this case, the first two group means differ by 4 units, the first and third group means differ by 5 units, and the second and third group means differ by only 1 unit. The standard error of each of these differences is 4.5 / 6 + 4.5 / 6 = 1.2 {\displaystyle {\sqrt {4.5/6+4.5/6}}=1.2} . Thus the first group is strongly different from the other groups, as the mean difference is more than 3 times the standard error, so we can be highly confident that the population mean of the first group differs from the population means of the other groups. However, there is no evidence that the second and third groups have different population means from each other, as their mean difference of one unit is comparable to the standard error. Note F(x, y) denotes an F-distribution cumulative distribution function with x degrees of freedom in the numerator and y degrees of freedom in the denominator. == See also == Analysis of variance F test (Includes a one-way ANOVA example) Mixed model Multivariate analysis of variance (MANOVA) Repeated measures ANOVA Two-way ANOVA Welch's t-test == Notes == == Further reading == George Casella (18 April 2008). Statistical design. Springer. ISBN 978-0-387-75965-4.
Long short-term memory
Long short-term memory (LSTM) is a type of recurrent neural network (RNN) aimed at mitigating the vanishing gradient problem commonly encountered by traditional RNNs. Its relative insensitivity to gap length is its advantage over other RNNs, hidden Markov models, and other sequence learning methods. It aims to provide a short-term memory for RNNs that can last thousands of timesteps (thus "long short-term memory"). The name is made in analogy with long-term memory and short-term memory and their relationship, studied by cognitive psychologists since the early 20th century. An LSTM unit is typically composed of a cell and three gates: an input gate, an output gate, and a forget gate. The cell remembers values over arbitrary time intervals, and the gates regulate the flow of information into and out of the cell. Forget gates decide what information to discard from the previous state, by mapping the previous state and the current input to a value between 0 and 1. A (rounded) value of 1 signifies retention of the information, and a value of 0 represents discarding. Input gates decide which pieces of new information to store in the current cell state, using the same system as forget gates. Output gates control which pieces of information in the current cell state to output, by assigning a value from 0 to 1 to the information, considering the previous and current states. Selectively outputting relevant information from the current state allows the LSTM network to maintain useful, long-term dependencies to make predictions, both in current and future time-steps. LSTM has wide applications in classification, data processing, time series analysis tasks, speech recognition, machine translation, speech activity detection, robot control, video games, and healthcare. == Motivation == In theory, classic RNNs can keep track of arbitrary long-term dependencies in the input sequences. 
The problem with classic RNNs is computational (or practical) in nature: when training a classic RNN using back-propagation, the long-term gradients which are back-propagated can "vanish", meaning they can tend to zero due to very small numbers creeping into the computations, causing the model to effectively stop learning. RNNs using LSTM units partially solve the vanishing gradient problem, because LSTM units allow gradients to flow with little to no attenuation. However, LSTM networks can still suffer from the exploding gradient problem. The intuition behind the LSTM architecture is to create an additional module in a neural network that learns when to remember and when to forget pertinent information. In other words, the network effectively learns which information might be needed later on in a sequence and when that information is no longer needed. For instance, in the context of natural language processing, the network can learn grammatical dependencies. An LSTM might process the sentence "Dave, as a result of his controversial claims, is now a pariah" by remembering the (statistically likely) grammatical gender and number of the subject Dave, noting that this information is pertinent for the pronoun his, and noting that it is no longer important after the verb is. == Variants == In the equations below, the lowercase variables represent vectors. Matrices W q {\displaystyle W_{q}} and U q {\displaystyle U_{q}} contain, respectively, the weights of the input and recurrent connections, where the subscript q {\displaystyle _{q}} can either be the input gate i {\displaystyle i} , output gate o {\displaystyle o} , the forget gate f {\displaystyle f} or the memory cell c {\displaystyle c} , depending on the activation being calculated. In this section, we are thus using a "vector notation". So, for example, c t ∈ R h {\displaystyle c_{t}\in \mathbb {R} ^{h}} is not just one unit of one LSTM cell, but contains h {\displaystyle h} LSTM cell's units. 
An empirical study has compared eight architectural variants of LSTM. === LSTM with a forget gate === The compact forms of the equations for the forward pass of an LSTM cell with a forget gate are: f t = σ g ( W f x t + U f h t − 1 + b f ) i t = σ g ( W i x t + U i h t − 1 + b i ) o t = σ g ( W o x t + U o h t − 1 + b o ) c ~ t = σ c ( W c x t + U c h t − 1 + b c ) c t = f t ⊙ c t − 1 + i t ⊙ c ~ t h t = o t ⊙ σ h ( c t ) {\displaystyle {\begin{aligned}f_{t}&=\sigma _{g}(W_{f}x_{t}+U_{f}h_{t-1}+b_{f})\\i_{t}&=\sigma _{g}(W_{i}x_{t}+U_{i}h_{t-1}+b_{i})\\o_{t}&=\sigma _{g}(W_{o}x_{t}+U_{o}h_{t-1}+b_{o})\\{\tilde {c}}_{t}&=\sigma _{c}(W_{c}x_{t}+U_{c}h_{t-1}+b_{c})\\c_{t}&=f_{t}\odot c_{t-1}+i_{t}\odot {\tilde {c}}_{t}\\h_{t}&=o_{t}\odot \sigma _{h}(c_{t})\end{aligned}}} where the initial values are c 0 = 0 {\displaystyle c_{0}=0} and h 0 = 0 {\displaystyle h_{0}=0} and the operator ⊙ {\displaystyle \odot } denotes the Hadamard product (element-wise product). The subscript t {\displaystyle t} indexes the time step. 
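The forward pass above can be illustrated with a toy implementation. The following plain-Python sketch is a hypothetical example (the dimensions and weight values are invented for illustration; a practical implementation would use a tensor library):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(M, v):
    # multiply a matrix (list of rows) by a vector
    return [sum(m * vj for m, vj in zip(row, v)) for row in M]

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One forward step of an LSTM cell with a forget gate.
    W, U, b hold the parameters for the gates f, i, o and the
    cell candidate, keyed by 'f', 'i', 'o', 'c'."""
    pre = {q: [wx + uh + bk for wx, uh, bk in
               zip(matvec(W[q], x), matvec(U[q], h_prev), b[q])]
           for q in 'fioc'}
    f = [sigmoid(z) for z in pre['f']]          # forget gate
    i = [sigmoid(z) for z in pre['i']]          # input gate
    o = [sigmoid(z) for z in pre['o']]          # output gate
    c_tilde = [math.tanh(z) for z in pre['c']]  # cell candidate
    c = [ft * cp + it * ct for ft, cp, it, ct in zip(f, c_prev, i, c_tilde)]
    h = [ot * math.tanh(cv) for ot, cv in zip(o, c)]
    return h, c

# Toy parameters: d = h = 2, with the same small weights for every gate.
W = {q: [[0.5, 0.0], [0.0, 0.5]] for q in 'fioc'}
U = {q: [[0.1, 0.0], [0.0, 0.1]] for q in 'fioc'}
b = {q: [0.0, 0.0] for q in 'fioc'}

# Initial values c_0 = h_0 = 0, as in the text.
h, c = lstm_step([1.0, -1.0], [0.0, 0.0], [0.0, 0.0], W, U, b)
```

With zero initial state, the new cell state reduces to the input-gate term alone, since the forget-gate term multiplies c_0 = 0.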
==== Variables ==== Letting the superscripts d {\displaystyle d} and h {\displaystyle h} refer to the number of input features and number of hidden units, respectively: x t ∈ R d {\displaystyle x_{t}\in \mathbb {R} ^{d}} : input vector to the LSTM unit f t ∈ ( 0 , 1 ) h {\displaystyle f_{t}\in {(0,1)}^{h}} : forget gate's activation vector i t ∈ ( 0 , 1 ) h {\displaystyle i_{t}\in {(0,1)}^{h}} : input/update gate's activation vector o t ∈ ( 0 , 1 ) h {\displaystyle o_{t}\in {(0,1)}^{h}} : output gate's activation vector h t ∈ ( − 1 , 1 ) h {\displaystyle h_{t}\in {(-1,1)}^{h}} : hidden state vector also known as output vector of the LSTM unit c ~ t ∈ ( − 1 , 1 ) h {\displaystyle {\tilde {c}}_{t}\in {(-1,1)}^{h}} : cell input activation vector c t ∈ R h {\displaystyle c_{t}\in \mathbb {R} ^{h}} : cell state vector W ∈ R h × d {\displaystyle W\in \mathbb {R} ^{h\times d}} , U ∈ R h × h {\displaystyle U\in \mathbb {R} ^{h\times h}} and b ∈ R h {\displaystyle b\in \mathbb {R} ^{h}} : weight matrices and bias vector parameters which need to be learned during training ==== Activation functions ==== σ g {\displaystyle \sigma _{g}} : sigmoid function. σ c {\displaystyle \sigma _{c}} : hyperbolic tangent function. σ h {\displaystyle \sigma _{h}} : hyperbolic tangent function or, as the peephole LSTM paper suggests, σ h ( x ) = x {\displaystyle \sigma _{h}(x)=x} . === Peephole LSTM === The figure on the right is a graphical representation of an LSTM unit with peephole connections (i.e. a peephole LSTM). Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state. h t − 1 {\displaystyle h_{t-1}} is not used, c t − 1 {\displaystyle c_{t-1}} is used instead in most places. 
f t = σ g ( W f x t + U f c t − 1 + b f ) i t = σ g ( W i x t + U i c t − 1 + b i ) o t = σ g ( W o x t + U o c t − 1 + b o ) c t = f t ⊙ c t − 1 + i t ⊙ σ c ( W c x t + b c ) h t = o t ⊙ σ h ( c t ) {\displaystyle {\begin{aligned}f_{t}&=\sigma _{g}(W_{f}x_{t}+U_{f}c_{t-1}+b_{f})\\i_{t}&=\sigma _{g}(W_{i}x_{t}+U_{i}c_{t-1}+b_{i})\\o_{t}&=\sigma _{g}(W_{o}x_{t}+U_{o}c_{t-1}+b_{o})\\c_{t}&=f_{t}\odot c_{t-1}+i_{t}\odot \sigma _{c}(W_{c}x_{t}+b_{c})\\h_{t}&=o_{t}\odot \sigma _{h}(c_{t})\end{aligned}}} Each of the gates can be thought as a "standard" neuron in a feed-forward (or multi-layer) neural network: that is, they compute an activation (using an activation function) of a weighted sum. i t , o t {\displaystyle i_{t},o_{t}} and f t {\displaystyle f_{t}} represent the activations of respectively the input, output and forget gates, at time step t {\displaystyle t} . The 3 exit arrows from the memory cell c {\displaystyle c} to the 3 gates i , o {\displaystyle i,o} and f {\displaystyle f} represent the peephole connections. These peephole connections actually denote the contributions of the activation of the memory cell c {\displaystyle c} at time step t − 1 {\displaystyle t-1} , i.e. the contribution of c t − 1 {\displaystyle c_{t-1}} (and not c t {\displaystyle c_{t}} , as the picture may suggest). In other words, the gates i , o {\displaystyle i,o} and f {\displaystyle f} calculate their activations at time step t {\displaystyle t} (i.e., respectively, i t , o t {\displaystyle i_{t},o_{t}} and f t {\displaystyle f_{t}} ) also considering the activation of the memory cell c {\displaystyle c} at time step t − 1 {\displaystyle t-1} , i.e. c t − 1 {\displaystyle c_{t-1}} . The single left-to-right arrow exiting the memory cell is not a peephole connection and denotes c t {\displaystyle c_{t}} . The little circles containing a × {\displaystyle \times } symbol represent an element-wise multiplication between its inputs. 
The big circles containing an S-like curve represent the application of a differentiable function (like the sigmoid function) to a weighted sum. === Peephole convolutional LSTM === The ∗ {\displaystyle *} denotes the convolution operator. f t = σ g ( W f ∗ x t + U f ∗ h t − 1 + V f ⊙ c t − 1 + b f ) i t = σ g ( W i ∗ x t + U i ∗ h t − 1 + V i ⊙ c t − 1 + b i ) c t = f t ⊙ c t − 1 + i t ⊙ σ c ( W c ∗ x t + U c ∗ h t − 1 + b c ) o t = σ g ( W o ∗ x t + U o ∗ h t − 1 + V o ⊙ c t + b o ) h t = o t ⊙ σ h ( c t ) {\displaystyle {\begin{aligned}f_{t}&=\sigma _{g}(W_{f}*x_{t}+U_{f}*h_{t-1}+V_{f}\odot c_{t-1}+b_{f})\\i_{t}&=\sigma _{g}(W_{i}*x_{t}+U_{i}*h_{t-1}+V_{i}\odot c_{t-1}+b_{i})\\c_{t}&=f_{t}\odot c_{t-1}+i_{t}\odot \sigma _{c}(W_{c}*x_{t}+U_{c}*h_{t-1}+b_{c})\\o_{t}&=\sigma _{g}(W_{o}*x_{t}+U_{o}*h_{t-1}+V_{o}\odot c_{t}+b_{o})\\h_{t}&=o_{t}\odot \sigma _{h}(c_{t})\end{aligned}}} == Training == An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm like gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight. A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is due to lim n → ∞ W n = 0 {\displaystyle \lim _{n\to \infty }W^{n}=0} if the spectral radius of W {\displaystyle W} is smaller than 1. However, with LSTM units, when error values are back-propagated from the output layer, the error remains in the LSTM unit's cell. This "error carousel" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value. 
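The condition on the spectral radius can be illustrated numerically; the following sketch (with illustrative values, chosen for this example) shows the powers of a matrix shrinking toward zero when its spectral radius is below 1:

```python
# Powers of a matrix with spectral radius < 1 tend to the zero matrix,
# the mechanism behind vanishing gradients in standard RNNs.
W = [[0.5, 0.2],
     [0.1, 0.4]]   # max absolute row sum is 0.7, so the spectral radius is < 1

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[1.0, 0.0],
     [0.0, 1.0]]     # identity
for _ in range(50):  # P becomes W^50
    P = matmul(P, W)

largest_entry = max(abs(x) for row in P for x in row)
```

Here the eigenvalues of W are 0.6 and 0.3, so the entries of W^50 are on the order of 0.6^50 ≈ 10^−11, effectively zero.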
=== CTC score function === Many applications use stacks of LSTM RNNs and train them by connectionist temporal classification (CTC) to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition. === Alternatives === Sometimes, it can be advantageous to train (parts of) an LSTM by neuroevolution or by policy gradient methods, especially when there is no "teacher" (that is, training labels). == Applications == Applications of LSTM include: 2015: Google started using an LSTM trained by CTC for speech recognition on Google Voice. According to the official blog post, the new model cut transcription errors by 49%. 2016: Google started using an LSTM to suggest messages in the Allo conversation app. In the same year, Google released the Google Neural Machine Translation system for Google Translate which used LSTMs to reduce translation errors by 60%. Apple announced in its Worldwide Developers Conference that it would start using the LSTM for quicktype in the iPhone and for Siri. Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for the text-to-speech technology. 2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks. Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory". 2018: OpenAI used LSTM trained by policy gradients to beat humans in the complex video game of Dota 2, and to control a human-like robot hand that manipulates physical objects with unprecedented dexterity. 2019: DeepMind used LSTM trained by policy gradients to excel at the complex video game of Starcraft II. == History == === Development === Aspects of LSTM were anticipated by "focused back-propagation" (Mozer, 1989), cited by the LSTM paper. 
Sepp Hochreiter's 1991 German diploma thesis analyzed the vanishing gradient problem and developed principles of the method. His supervisor, Jürgen Schmidhuber, considered the thesis highly significant. An early version of LSTM was published in 1995 in a technical report by Sepp Hochreiter and Jürgen Schmidhuber, then published at the NIPS 1996 conference. The most commonly used reference point for LSTM was published in 1997 in the journal Neural Computation. By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of the LSTM block included cells, input gates, and output gates. (Felix Gers, Jürgen Schmidhuber, and Fred Cummins, 1999) introduced the forget gate (also called "keep gate") into the LSTM architecture, enabling the LSTM to reset its own state. This is the most commonly used version of LSTM nowadays. (Gers, Schmidhuber, and Cummins, 2000) added peephole connections. Additionally, the output activation function was omitted. === Development of variants === (Graves, Fernandez, Gomez, and Schmidhuber, 2006) introduced a new error function for LSTM: Connectionist Temporal Classification (CTC) for simultaneous alignment and recognition of sequences. (Graves, Schmidhuber, 2005) published LSTM with full backpropagation through time and bidirectional LSTM. (Kyunghyun Cho et al., 2014) published a simplified variant of the forget gate LSTM called Gated recurrent unit (GRU). (Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber, 2015) used LSTM principles to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks. Concurrently, the ResNet architecture was developed. It is equivalent to an open-gated or gateless highway network. A modern upgrade of LSTM called xLSTM was published by a team led by Sepp Hochreiter (Beck et al., 2024). 
One of the two blocks (mLSTM) of the architecture is parallelizable like the Transformer architecture, while the other (sLSTM) allows state tracking. === Applications === 2001: Gers and Schmidhuber trained LSTM to learn languages unlearnable by traditional models such as Hidden Markov Models. Hochreiter et al. used LSTM for meta-learning (i.e. learning a learning algorithm). 2004: First successful application of LSTM to speech recognition, by Alex Graves et al. 2005: Daan Wierstra, Faustino Gomez, and Schmidhuber trained LSTM by neuroevolution without a teacher. Mayer et al. trained LSTM to control robots. 2007: Wierstra, Foerster, Peters, and Schmidhuber trained LSTM by policy gradients for reinforcement learning without a teacher. Hochreiter, Heusel, and Obermayer applied LSTM to protein homology detection in the field of biology. 2009: Justin Bayer et al. introduced neural architecture search for LSTM. 2009: An LSTM trained by CTC won the ICDAR connected handwriting recognition competition. Three such models were submitted by a team led by Alex Graves. One was the most accurate model in the competition and another was the fastest. This was the first time an RNN won international competitions. 2013: Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton used LSTM networks as a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset. 2017: Researchers from Michigan State University, IBM Research, and Cornell University published a study at the Knowledge Discovery and Data Mining (KDD) conference. Their time-aware LSTM (T-LSTM) performs better on certain data sets than standard LSTM. == See also == == References == == Further reading == Monner, Derek D.; Reggia, James A. (2010). "A generalized LSTM-like training algorithm for second-order recurrent neural networks" (PDF). Neural Networks. 25 (1): 70–83. doi:10.1016/j.neunet.2011.07.003. PMC 3217173. PMID 21803542. 
High-performing extension of LSTM that has been simplified to a single node type and can train arbitrary architectures Gers, Felix A.; Schraudolph, Nicol N.; Schmidhuber, Jürgen (Aug 2002). "Learning precise timing with LSTM recurrent networks" (PDF). Journal of Machine Learning Research. 3: 115–143. Gers, Felix (2001). "Long Short-Term Memory in Recurrent Neural Networks" (PDF). PhD thesis. Abidogun, Olusola Adeniyi (2005). Data Mining, Fraud Detection and Mobile Telecommunications: Call Pattern Analysis with Unsupervised Neural Networks. Master's Thesis (Thesis). University of the Western Cape. hdl:11394/249. Archived (PDF) from the original on May 22, 2012. original with two chapters devoted to explaining recurrent neural networks, especially LSTM. == External links == Recurrent Neural Networks with over 30 LSTM papers by Jürgen Schmidhuber's group at IDSIA Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "10.1. Long Short-Term Memory (LSTM)". Dive into deep learning. Cambridge New York Port Melbourne New Delhi Singapore: Cambridge University Press. ISBN 978-1-009-38943-3.
Inclusion–exclusion principle
In combinatorics, the inclusion–exclusion principle is a counting technique which generalizes the familiar method of obtaining the number of elements in the union of two finite sets; symbolically expressed as | A ∪ B | = | A | + | B | − | A ∩ B | {\displaystyle |A\cup B|=|A|+|B|-|A\cap B|} where A and B are two finite sets and |S| indicates the cardinality of a set S (which may be considered as the number of elements of the set, if the set is finite). The formula expresses the fact that the sum of the sizes of the two sets may be too large since some elements may be counted twice. The double-counted elements are those in the intersection of the two sets and the count is corrected by subtracting the size of the intersection. The inclusion-exclusion principle, being a generalization of the two-set case, is perhaps more clearly seen in the case of three sets, which for the sets A, B and C is given by | A ∪ B ∪ C | = | A | + | B | + | C | − | A ∩ B | − | A ∩ C | − | B ∩ C | + | A ∩ B ∩ C | {\displaystyle |A\cup B\cup C|=|A|+|B|+|C|-|A\cap B|-|A\cap C|-|B\cap C|+|A\cap B\cap C|} This formula can be verified by counting how many times each region in the Venn diagram figure is included in the right-hand side of the formula. In this case, when removing the contributions of over-counted elements, the number of elements in the mutual intersection of the three sets has been subtracted too often, so must be added back in to get the correct total. Generalizing the results of these examples gives the principle of inclusion–exclusion. To find the cardinality of the union of n sets: Include the cardinalities of the sets. Exclude the cardinalities of the pairwise intersections. Include the cardinalities of the triple-wise intersections. Exclude the cardinalities of the quadruple-wise intersections. Include the cardinalities of the quintuple-wise intersections. Continue, until the cardinality of the n-tuple-wise intersection is included (if n is odd) or excluded (n even). 
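Both the two-set and the three-set formulas can be checked directly on small finite sets. A brief illustrative check in Python (the example sets are arbitrary):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
C = {4, 6, 7}

# Two sets: |A ∪ B| = |A| + |B| - |A ∩ B|
assert len(A | B) == len(A) + len(B) - len(A & B)

# Three sets: include singles, exclude pairs, include the triple intersection.
lhs = len(A | B | C)
rhs = (len(A) + len(B) + len(C)
       - len(A & B) - len(A & C) - len(B & C)
       + len(A & B & C))
assert lhs == rhs == 7
```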
The name comes from the idea that the principle is based on over-generous inclusion, followed by compensating exclusion. This concept is attributed to Abraham de Moivre (1718), although it first appears in a paper of Daniel da Silva (1854) and later in a paper by J. J. Sylvester (1883). Sometimes the principle is referred to as the formula of Da Silva or Sylvester, due to these publications. The principle can be viewed as an example of the sieve method extensively used in number theory and is sometimes referred to as the sieve formula. As finite probabilities are computed as counts relative to the cardinality of the probability space, the formulas for the principle of inclusion–exclusion remain valid when the cardinalities of the sets are replaced by finite probabilities. More generally, both versions of the principle can be put under the common umbrella of measure theory. In a very abstract setting, the principle of inclusion–exclusion can be expressed as the calculation of the inverse of a certain matrix. This inverse has a special structure, making the principle an extremely valuable technique in combinatorics and related areas of mathematics. As Gian-Carlo Rota put it: "One of the most useful principles of enumeration in discrete probability and combinatorial theory is the celebrated principle of inclusion–exclusion. When skillfully applied, this principle has yielded the solution to many a combinatorial problem." == Formula == In its general formula, the principle of inclusion–exclusion states that for finite sets A1, ..., An, one has the identity This can be compactly written as | ⋃ i = 1 n A i | = ∑ k = 1 n ( − 1 ) k + 1 ( ∑ 1 ⩽ i 1 < ⋯ < i k ⩽ n | A i 1 ∩ ⋯ ∩ A i k | ) {\displaystyle \left|\bigcup _{i=1}^{n}A_{i}\right|=\sum _{k=1}^{n}(-1)^{k+1}\left(\sum _{1\leqslant i_{1}<\cdots <i_{k}\leqslant n}|A_{i_{1}}\cap \cdots \cap A_{i_{k}}|\right)} or | ⋃ i = 1 n A i | = ∑ ∅ ≠ J ⊆ { 1 , … , n } ( − 1 ) | J | + 1 | ⋂ j ∈ J A j | . 
{\displaystyle \left|\bigcup _{i=1}^{n}A_{i}\right|=\sum _{\emptyset \neq J\subseteq \{1,\ldots ,n\}}(-1)^{|J|+1}\left|\bigcap _{j\in J}A_{j}\right|.} In words, to count the number of elements in a finite union of finite sets, first sum the cardinalities of the individual sets, then subtract the number of elements that appear in at least two sets, then add back the number of elements that appear in at least three sets, then subtract the number of elements that appear in at least four sets, and so on. This process always ends since there can be no elements that appear in more than the number of sets in the union. (For example, if n = 4 , {\displaystyle n=4,} there can be no elements that appear in more than 4 {\displaystyle 4} sets; equivalently, there can be no elements that appear in at least 5 {\displaystyle 5} sets.) In applications it is common to see the principle expressed in its complementary form. That is, letting S be a finite universal set containing all of the Ai and letting A i ¯ {\displaystyle {\bar {A_{i}}}} denote the complement of Ai in S, by De Morgan's laws we have | ⋂ i = 1 n A i ¯ | = | S − ⋃ i = 1 n A i | = | S | − ∑ i = 1 n | A i | + ∑ 1 ⩽ i < j ⩽ n | A i ∩ A j | − ⋯ + ( − 1 ) n | A 1 ∩ ⋯ ∩ A n | . {\displaystyle \left|\bigcap _{i=1}^{n}{\bar {A_{i}}}\right|=\left|S-\bigcup _{i=1}^{n}A_{i}\right|=|S|-\sum _{i=1}^{n}|A_{i}|+\sum _{1\leqslant i<j\leqslant n}|A_{i}\cap A_{j}|-\cdots +(-1)^{n}|A_{1}\cap \cdots \cap A_{n}|.} As another variant of the statement, let P1, ..., Pn be a list of properties that elements of a set S may or may not have, then the principle of inclusion–exclusion provides a way to calculate the number of elements of S that have none of the properties. Just let Ai be the subset of elements of S which have the property Pi and use the principle in its complementary form. This variant is due to J. J. Sylvester. 
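The general identity, and the estimates obtained by truncating its alternating sum, can be verified mechanically on small examples. An illustrative Python sketch (the sets are arbitrary):

```python
from itertools import combinations
from functools import reduce

def union_size(sets):
    """|A_1 ∪ ... ∪ A_n| via the alternating sum over nonempty index sets J."""
    n = len(sets)
    total = 0
    for k in range(1, n + 1):                       # |J| = k
        total += (-1) ** (k + 1) * sum(
            len(reduce(set.intersection, (sets[j] for j in J)))
            for J in combinations(range(n), k))
    return total

sets = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {1, 5, 6}]
exact = len(set().union(*sets))
assert union_size(sets) == exact   # both equal 6

# Keeping only the first m levels of the sum over-estimates the union
# when m is odd and under-estimates it when m is even.
partial = 0
for k in range(1, len(sets) + 1):
    partial += (-1) ** (k + 1) * sum(
        len(reduce(set.intersection, (sets[j] for j in J)))
        for J in combinations(range(len(sets)), k))
    if k % 2 == 1:
        assert partial >= exact
    else:
        assert partial <= exact
```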
Notice that if you take into account only the first m<n sums on the right (in the general form of the principle), then you will get an overestimate if m is odd and an underestimate if m is even. == Examples == === Counting derangements === A more complex example is the following. Suppose there is a deck of n cards numbered from 1 to n. Suppose a card numbered m is in the correct position if it is the mth card in the deck. How many ways, W, can the cards be shuffled with at least 1 card being in the correct position? Begin by defining set Am, which is all of the orderings of cards with the mth card correct. Then the number of orders, W, with at least one card being in the correct position, m, is W = | ⋃ m = 1 n A m | . {\displaystyle W=\left|\bigcup _{m=1}^{n}A_{m}\right|.} Apply the principle of inclusion–exclusion, W = ∑ m 1 = 1 n | A m 1 | − ∑ 1 ⩽ m 1 < m 2 ⩽ n | A m 1 ∩ A m 2 | + ⋯ + ( − 1 ) p − 1 ∑ 1 ⩽ m 1 < ⋯ < m p ⩽ n | A m 1 ∩ ⋯ ∩ A m p | + ⋯ {\displaystyle W=\sum _{m_{1}=1}^{n}|A_{m_{1}}|-\sum _{1\leqslant m_{1}<m_{2}\leqslant n}|A_{m_{1}}\cap A_{m_{2}}|+\cdots +(-1)^{p-1}\sum _{1\leqslant m_{1}<\cdots <m_{p}\leqslant n}|A_{m_{1}}\cap \cdots \cap A_{m_{p}}|+\cdots } Each value A m 1 ∩ ⋯ ∩ A m p {\displaystyle A_{m_{1}}\cap \cdots \cap A_{m_{p}}} represents the set of shuffles having at least p values m1, ..., mp in the correct position. Note that the number of shuffles with at least p values correct only depends on p, not on the particular values of m {\displaystyle m} . For example, the number of shuffles having the 1st, 3rd, and 17th cards in the correct position is the same as the number of shuffles having the 2nd, 5th, and 13th cards in the correct positions. It only matters that of the n cards, 3 were chosen to be in the correct position. Thus there are ( n p ) {\textstyle {n \choose p}} equal terms in the pth summation (see combination). 
W = ( n 1 ) | A 1 | − ( n 2 ) | A 1 ∩ A 2 | + ⋯ + ( − 1 ) p − 1 ( n p ) | A 1 ∩ ⋯ ∩ A p | + ⋯ {\displaystyle W={n \choose 1}|A_{1}|-{n \choose 2}|A_{1}\cap A_{2}|+\cdots +(-1)^{p-1}{n \choose p}|A_{1}\cap \cdots \cap A_{p}|+\cdots } | A 1 ∩ ⋯ ∩ A p | {\displaystyle |A_{1}\cap \cdots \cap A_{p}|} is the number of orderings having p elements in the correct position, which is equal to the number of ways of ordering the remaining n − p elements, or (n − p)!. Thus we finally get: W = ( n 1 ) ( n − 1 ) ! − ( n 2 ) ( n − 2 ) ! + ⋯ + ( − 1 ) p − 1 ( n p ) ( n − p ) ! + ⋯ = ∑ p = 1 n ( − 1 ) p − 1 ( n p ) ( n − p ) ! = ∑ p = 1 n ( − 1 ) p − 1 n ! p ! ( n − p ) ! ( n − p ) ! = ∑ p = 1 n ( − 1 ) p − 1 n ! p ! {\displaystyle {\begin{aligned}W&={n \choose 1}(n-1)!-{n \choose 2}(n-2)!+\cdots +(-1)^{p-1}{n \choose p}(n-p)!+\cdots \\&=\sum _{p=1}^{n}(-1)^{p-1}{n \choose p}(n-p)!\\&=\sum _{p=1}^{n}(-1)^{p-1}{\frac {n!}{p!(n-p)!}}(n-p)!\\&=\sum _{p=1}^{n}(-1)^{p-1}{\frac {n!}{p!}}\end{aligned}}} A permutation where no card is in the correct position is called a derangement. Taking n! to be the total number of permutations, the probability Q that a random shuffle produces a derangement is given by Q = 1 − W n ! = ∑ p = 0 n ( − 1 ) p p ! , {\displaystyle Q=1-{\frac {W}{n!}}=\sum _{p=0}^{n}{\frac {(-1)^{p}}{p!}},} a truncation to n + 1 terms of the Taylor expansion of e−1. Thus the probability of guessing an order for a shuffled deck of cards and being incorrect about every card is approximately e−1 or 37%. == A special case == The situation that appears in the derangement example above occurs often enough to merit special attention. Namely, when the size of the intersection sets appearing in the formulas for the principle of inclusion–exclusion depends only on the number of sets in the intersections and not on which sets appear. 
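The closed form for W derived in the derangement example, and the resulting probability Q, can be checked against brute-force enumeration for small n. An illustrative sketch:

```python
from itertools import permutations
from math import factorial

def w_formula(n):
    # W = sum_{p=1}^{n} (-1)^(p-1) * n!/p!
    return sum((-1) ** (p - 1) * factorial(n) // factorial(p)
               for p in range(1, n + 1))

def w_brute_force(n):
    # count permutations with at least one fixed point
    return sum(any(card == pos for pos, card in enumerate(perm))
               for perm in permutations(range(n)))

for n in range(1, 8):
    assert w_formula(n) == w_brute_force(n)

# The derangement probability Q = 1 - W/n! approaches 1/e ≈ 0.3679.
n = 7
Q = 1 - w_formula(n) / factorial(n)
```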
More formally, if the intersection A J := ⋂ j ∈ J A j {\displaystyle A_{J}:=\bigcap _{j\in J}A_{j}} has the same cardinality, say αk = |AJ|, for every k-element subset J of {1, ..., n}, then | ⋃ i = 1 n A i | = ∑ k = 1 n ( − 1 ) k − 1 ( n k ) α k . {\displaystyle \left|\bigcup _{i=1}^{n}A_{i}\right|=\sum _{k=1}^{n}(-1)^{k-1}{\binom {n}{k}}\alpha _{k}.} Or, in the complementary form, where the universal set S has cardinality α0, | S ∖ ⋃ i = 1 n A i | = α 0 − ∑ k = 1 n ( − 1 ) k − 1 ( n k ) α k . {\displaystyle \left|S\smallsetminus \bigcup _{i=1}^{n}A_{i}\right|=\alpha _{0}-\sum _{k=1}^{n}(-1)^{k-1}{\binom {n}{k}}\alpha _{k}.} == Formula generalization == Given a family (repeats allowed) of subsets A1, A2, ..., An of a universal set S, the principle of inclusion–exclusion calculates the number of elements of S in none of these subsets. A generalization of this concept calculates the number of elements of S which appear in exactly some fixed m of these sets. Let N = [n] = {1,2,...,n}. If we define A ∅ = S {\displaystyle A_{\emptyset }=S} , then the principle of inclusion–exclusion can be written, using the notation of the previous section, as follows: the number of elements of S contained in none of the Ai is ∑ J ⊆ [ n ] ( − 1 ) | J | | A J | . {\displaystyle \sum _{J\subseteq [n]}(-1)^{|J|}|A_{J}|.} If I is a fixed subset of the index set N, then the number of elements which belong to Ai for all i in I and for no other values is: ∑ I ⊆ J ( − 1 ) | J | − | I | | A J | . {\displaystyle \sum _{I\subseteq J}(-1)^{|J|-|I|}|A_{J}|.} Define the sets B k = A I ∪ { k } for k ∈ N ∖ I . {\displaystyle B_{k}=A_{I\cup \{k\}}{\text{ for }}k\in N\smallsetminus I.} We seek the number of elements in none of the Bk, which, by the principle of inclusion–exclusion (with B ∅ = A I {\displaystyle B_{\emptyset }=A_{I}} ), is ∑ K ⊆ N ∖ I ( − 1 ) | K | | B K | . 
{\displaystyle \sum _{K\subseteq N\smallsetminus I}(-1)^{|K|}|B_{K}|.} The correspondence K ↔ J = I ∪ K between subsets of N \ I and subsets of N containing I is a bijection and if J and K correspond under this map then BK = AJ, showing that the result is valid. == In probability == In probability, for events A1, ..., An in a probability space ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )} , the inclusion–exclusion principle becomes for n = 2 P ( A 1 ∪ A 2 ) = P ( A 1 ) + P ( A 2 ) − P ( A 1 ∩ A 2 ) , {\displaystyle \mathbb {P} (A_{1}\cup A_{2})=\mathbb {P} (A_{1})+\mathbb {P} (A_{2})-\mathbb {P} (A_{1}\cap A_{2}),} for n = 3 P ( A 1 ∪ A 2 ∪ A 3 ) = P ( A 1 ) + P ( A 2 ) + P ( A 3 ) − P ( A 1 ∩ A 2 ) − P ( A 1 ∩ A 3 ) − P ( A 2 ∩ A 3 ) + P ( A 1 ∩ A 2 ∩ A 3 ) {\displaystyle \mathbb {P} (A_{1}\cup A_{2}\cup A_{3})=\mathbb {P} (A_{1})+\mathbb {P} (A_{2})+\mathbb {P} (A_{3})-\mathbb {P} (A_{1}\cap A_{2})-\mathbb {P} (A_{1}\cap A_{3})-\mathbb {P} (A_{2}\cap A_{3})+\mathbb {P} (A_{1}\cap A_{2}\cap A_{3})} and in general P ( ⋃ i = 1 n A i ) = ∑ i = 1 n P ( A i ) − ∑ i < j P ( A i ∩ A j ) + ∑ i < j < k P ( A i ∩ A j ∩ A k ) + ⋯ + ( − 1 ) n − 1 P ( ⋂ i = 1 n A i ) , {\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{n}A_{i}\right)=\sum _{i=1}^{n}\mathbb {P} (A_{i})-\sum _{i<j}\mathbb {P} (A_{i}\cap A_{j})+\sum _{i<j<k}\mathbb {P} (A_{i}\cap A_{j}\cap A_{k})+\cdots +(-1)^{n-1}\mathbb {P} \left(\bigcap _{i=1}^{n}A_{i}\right),} which can be written in closed form as P ( ⋃ i = 1 n A i ) = ∑ k = 1 n ( ( − 1 ) k − 1 ∑ I ⊆ { 1 , … , n } | I | = k P ( A I ) ) , {\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{n}A_{i}\right)=\sum _{k=1}^{n}\left((-1)^{k-1}\sum _{I\subseteq \{1,\ldots ,n\} \atop |I|=k}\mathbb {P} (A_{I})\right),} where the last sum runs over all subsets I of the indices 1, ..., n which contain exactly k elements, and A I := ⋂ i ∈ I A i {\displaystyle A_{I}:=\bigcap _{i\in I}A_{i}} denotes the intersection of all those Ai with index in I. 
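The general probability formula can be verified exhaustively on a small uniform sample space. The Python sketch below uses made-up events on twelve outcomes and exact rational arithmetic:

```python
from itertools import combinations
from fractions import Fraction

# Uniform probability space on {0, ..., 11} with three overlapping events.
omega = range(12)
events = [set(range(0, 6)), set(range(4, 10)), {0, 3, 6, 9}]
n = len(events)

def prob(s):
    return Fraction(len(s), len(omega))

# Right-hand side of the inclusion-exclusion formula: for each size k,
# sum P(A_I) over all k-element index sets I, with sign (-1)^(k-1).
rhs = sum((-1) ** (k - 1) *
          sum(prob(set.intersection(*(events[i] for i in I)))
              for I in combinations(range(n), k))
          for k in range(1, n + 1))

assert rhs == prob(set.union(*events))
print(rhs)  # 5/6
```

Using `Fraction` keeps every probability exact, so the two sides can be compared with `==` rather than up to floating-point tolerance.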
According to the Bonferroni inequalities, the sum of the first terms in the formula is alternately an upper bound and a lower bound for the LHS. This can be used in cases where the full formula is too cumbersome. For a general measure space (S,Σ,μ) and measurable subsets A1, ..., An of finite measure, the above identities also hold when the probability measure P {\displaystyle \mathbb {P} } is replaced by the measure μ. === Special case === If, in the probabilistic version of the inclusion–exclusion principle, the probability of the intersection AI only depends on the cardinality of I, meaning that for every k in {1, ..., n} there is an ak such that a k = P ( A I ) for every I ⊂ { 1 , … , n } with | I | = k , {\displaystyle a_{k}=\mathbb {P} (A_{I}){\text{ for every }}I\subset \{1,\ldots ,n\}{\text{ with }}|I|=k,} then the above formula simplifies to P ( ⋃ i = 1 n A i ) = ∑ k = 1 n ( − 1 ) k − 1 ( n k ) a k {\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{n}A_{i}\right)=\sum _{k=1}^{n}(-1)^{k-1}{\binom {n}{k}}a_{k}} due to the combinatorial interpretation of the binomial coefficient ( n k ) {\textstyle {\binom {n}{k}}} . For example, if the events A i {\displaystyle A_{i}} are independent and identically distributed, then P ( A i ) = p {\displaystyle \mathbb {P} (A_{i})=p} for all i, and we have a k = p k {\displaystyle a_{k}=p^{k}} , in which case the expression above simplifies to P ( ⋃ i = 1 n A i ) = 1 − ( 1 − p ) n . {\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{n}A_{i}\right)=1-(1-p)^{n}.} (This result can also be derived more simply by considering the intersection of the complements of the events A i {\displaystyle A_{i}} .) An analogous simplification is possible in the case of a general measure space ( S , Σ , μ ) {\displaystyle (S,\Sigma ,\mu )} and measurable subsets A 1 , … , A n {\displaystyle A_{1},\dots ,A_{n}} of finite measure. There is another formula used in point processes. 
Let S {\displaystyle S} be a finite set and P {\displaystyle P} be a random subset of S {\displaystyle S} . Let A {\displaystyle A} be any subset of S {\displaystyle S} , then P ( P = A ) = P ( P ⊃ A ) − ∑ j 1 ∈ S ∖ A P ( P ⊃ A ∪ { j 1 } ) + ∑ j 1 , j 2 ∈ S ∖ A , j 1 ≠ j 2 P ( P ⊃ A ∪ { j 1 , j 2 } ) + … + ( − 1 ) | S | − | A | P ( P ⊃ S ) = ∑ A ⊂ J ⊂ S ( − 1 ) | J | − | A | P ( P ⊃ J ) . {\displaystyle {\begin{aligned}\mathbb {P} (P=A)&=\mathbb {P} (P\supset A)-\sum _{j_{1}\in S\setminus A}\mathbb {P} (P\supset A\cup \{j_{1}\})\\&+\sum _{j_{1},j_{2}\in S\setminus A,\ j_{1}\neq j_{2}}\mathbb {P} (P\supset A\cup \{j_{1},j_{2}\})+\dots \\&+(-1)^{|S|-|A|}\mathbb {P} (P\supset S)\\&=\sum _{A\subset J\subset S}(-1)^{|J|-|A|}\mathbb {P} (P\supset J).\end{aligned}}} == Other formulas == The principle is sometimes stated in the form that says that if g ( A ) = ∑ S ⊆ A f ( S ) {\displaystyle g(A)=\sum _{S\subseteq A}f(S)} then f ( A ) = ∑ S ⊆ A ( − 1 ) | A | − | S | g ( S ) . ( 2 ) {\displaystyle f(A)=\sum _{S\subseteq A}(-1)^{|A|-|S|}g(S).\qquad (2)} The combinatorial and the probabilistic version of the inclusion–exclusion principle are instances of (2). If one sees a number n {\displaystyle n} as a set of its prime factors, then (2) is a generalization of the Möbius inversion formula for square-free natural numbers. Therefore, (2) is seen as the Möbius inversion formula for the incidence algebra of the partially ordered set of all subsets of A. For a generalization of the full version of the Möbius inversion formula, (2) must be generalized to multisets. For multisets instead of sets, (2) becomes f ( A ) = ∑ S ⊆ A μ ( A − S ) g ( S ) , ( 3 ) {\displaystyle f(A)=\sum _{S\subseteq A}\mu (A-S)g(S),\qquad (3)} where A − S {\displaystyle A-S} is the multiset for which ( A − S ) ⊎ S = A {\displaystyle (A-S)\uplus S=A} , and μ(S) = 1 if S is a set (i.e. a multiset without double elements) of even cardinality. μ(S) = −1 if S is a set (i.e. a multiset without double elements) of odd cardinality. μ(S) = 0 if S is a proper multiset (i.e. S has double elements). Notice that μ ( A − S ) {\displaystyle \mu (A-S)} is just the ( − 1 ) | A | − | S | {\displaystyle (-1)^{|A|-|S|}} of (2) in case A − S {\displaystyle A-S} is a set. 
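This subset form of Möbius inversion can be checked directly on a small universe: build g from f by summing over subsets, then recover f with the signed sum. A Python sketch (the particular values of f are arbitrary):

```python
from itertools import chain, combinations

def subsets(a):
    # All subsets of the ordered tuple a, as tuples.
    return chain.from_iterable(combinations(a, k) for k in range(len(a) + 1))

universe = (0, 1, 2)
# An arbitrary integer-valued f on subsets, and its subset-sum transform g.
f = {s: sum(7 ** x for x in s) + 1 for s in subsets(universe)}
g = {a: sum(f[s] for s in subsets(a)) for a in subsets(universe)}

# Inversion: f(A) = sum over S ⊆ A of (-1)^(|A|-|S|) g(S).
for a in subsets(universe):
    recovered = sum((-1) ** (len(a) - len(s)) * g[s] for s in subsets(a))
    assert recovered == f[a]
```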
== Applications == The inclusion–exclusion principle is widely used and only a few of its applications can be mentioned here. === Counting derangements === A well-known application of the inclusion–exclusion principle is to the combinatorial problem of counting all derangements of a finite set. A derangement of a set A is a bijection from A into itself that has no fixed points. Via the inclusion–exclusion principle one can show that if the cardinality of A is n, then the number of derangements is [n! / e] where [x] denotes the nearest integer to x; see also the detailed computation in the examples section above. The first occurrence of the problem of counting the number of derangements is in an early book on games of chance: Essai d'analyse sur les jeux de hazard by P. R. de Montmort (1678–1719) and was known as either "Montmort's problem" or by the name he gave it, "problème des rencontres." The problem is also known as the hatcheck problem. The number of derangements is also known as the subfactorial of n, written !n. It follows that if all bijections are assigned the same probability then the probability that a random bijection is a derangement quickly approaches 1/e as n grows. === Counting intersections === The principle of inclusion–exclusion, combined with De Morgan's law, can be used to count the cardinality of the intersection of sets as well. Let A k ¯ {\displaystyle {\overline {A_{k}}}} represent the complement of Ak with respect to some universal set A such that A k ⊆ A {\displaystyle A_{k}\subseteq A} for each k. Then we have ⋂ i = 1 n A i = ⋃ i = 1 n A i ¯ ¯ {\displaystyle \bigcap _{i=1}^{n}A_{i}={\overline {\bigcup _{i=1}^{n}{\overline {A_{i}}}}}} thereby turning the problem of finding an intersection into the problem of finding a union. === Graph coloring === The inclusion–exclusion principle forms the basis of algorithms for a number of NP-hard graph partitioning problems, such as graph coloring. 
A well known application of the principle is the construction of the chromatic polynomial of a graph. === Bipartite graph perfect matchings === The number of perfect matchings of a bipartite graph can be calculated using the principle. === Number of onto functions === Given finite sets A and B, how many surjective functions (onto functions) are there from A to B? Without any loss of generality we may take A = {1, ..., k} and B = {1, ..., n}, since only the cardinalities of the sets matter. By using S as the set of all functions from A to B, and defining, for each i in B, the property Pi as "the function misses the element i in B" (i is not in the image of the function), the principle of inclusion–exclusion gives the number of onto functions between A and B as: ∑ j = 0 n ( n j ) ( − 1 ) j ( n − j ) k . {\displaystyle \sum _{j=0}^{n}{\binom {n}{j}}(-1)^{j}(n-j)^{k}.} === Permutations with forbidden positions === A permutation of the set S = {1, ..., n} where each element of S is restricted to not being in certain positions (here the permutation is considered as an ordering of the elements of S) is called a permutation with forbidden positions. For example, with S = {1,2,3,4}, the permutations with the restriction that the element 1 can not be in positions 1 or 3, and the element 2 can not be in position 4 are: 2134, 2143, 3124, 4123, 2341, 2431, 3241, 3421, 4231 and 4321. By letting Ai be the set of positions that the element i is not allowed to be in, and the property Pi to be the property that a permutation puts element i into a position in Ai, the principle of inclusion–exclusion can be used to count the number of permutations which satisfy all the restrictions. In the given example, there are 12 = 2(3!) permutations with property P1, 6 = 3! permutations with property P2 and no permutations have properties P3 or P4 as there are no restrictions for these two elements. The number of permutations satisfying the restrictions is thus: 4! 
− (12 + 6 + 0 + 0) + (4) = 24 − 18 + 4 = 10. The final 4 in this computation is the number of permutations having both properties P1 and P2. There are no other non-zero contributions to the formula. === Stirling numbers of the second kind === The Stirling numbers of the second kind, S(n,k) count the number of partitions of a set of n elements into k non-empty subsets (indistinguishable boxes). An explicit formula for them can be obtained by applying the principle of inclusion–exclusion to a very closely related problem, namely, counting the number of partitions of an n-set into k non-empty but distinguishable boxes (ordered non-empty subsets). Using the universal set consisting of all partitions of the n-set into k (possibly empty) distinguishable boxes, A1, A2, ..., Ak, and the properties Pi meaning that the partition has box Ai empty, the principle of inclusion–exclusion gives an answer for the related result. Dividing by k! to remove the artificial ordering gives the Stirling number of the second kind: S ( n , k ) = 1 k ! ∑ t = 0 k ( − 1 ) t ( k t ) ( k − t ) n . {\displaystyle S(n,k)={\frac {1}{k!}}\sum _{t=0}^{k}(-1)^{t}{\binom {k}{t}}(k-t)^{n}.} === Rook polynomials === A rook polynomial is the generating function of the number of ways to place non-attacking rooks on a board B that looks like a subset of the squares of a checkerboard; that is, no two rooks may be in the same row or column. The board B is any subset of the squares of a rectangular board with n rows and m columns; we think of it as the squares in which one is allowed to put a rook. The coefficient, rk(B) of xk in the rook polynomial RB(x) is the number of ways k rooks, none of which attacks another, can be arranged in the squares of B. For any board B, there is a complementary board B ′ {\displaystyle B'} consisting of the squares of the rectangular board that are not in B. This complementary board also has a rook polynomial R B ′ ( x ) {\displaystyle R_{B'}(x)} with coefficients r k ( B ′ ) . 
{\displaystyle r_{k}(B').} It is sometimes convenient to be able to calculate the highest coefficient of a rook polynomial in terms of the coefficients of the rook polynomial of the complementary board. Without loss of generality we can assume that n ≤ m, so this coefficient is rn(B). The number of ways to place n non-attacking rooks on the complete n × m "checkerboard" (without regard as to whether the rooks are placed in the squares of the board B) is given by the falling factorial: ( m ) n = m ( m − 1 ) ( m − 2 ) ⋯ ( m − n + 1 ) . {\displaystyle (m)_{n}=m(m-1)(m-2)\cdots (m-n+1).} Letting Pi be the property that an assignment of n non-attacking rooks on the complete board has a rook in column i which is not in a square of the board B, then by the principle of inclusion–exclusion we have: r n ( B ) = ∑ t = 0 n ( − 1 ) t ( m − t ) n − t r t ( B ′ ) . {\displaystyle r_{n}(B)=\sum _{t=0}^{n}(-1)^{t}(m-t)_{n-t}r_{t}(B').} === Euler's phi function === Euler's totient or phi function, φ(n) is an arithmetic function that counts the number of positive integers less than or equal to n that are relatively prime to n. That is, if n is a positive integer, then φ(n) is the number of integers k in the range 1 ≤ k ≤ n which have no common factor with n other than 1. The principle of inclusion–exclusion is used to obtain a formula for φ(n). Let S be the set {1, ..., n} and define the property Pi to be that a number in S is divisible by the prime number pi, for 1 ≤ i ≤ r, where the prime factorization of n = p 1 a 1 p 2 a 2 ⋯ p r a r . {\displaystyle n=p_{1}^{a_{1}}p_{2}^{a_{2}}\cdots p_{r}^{a_{r}}.} Then, φ ( n ) = n − ∑ i = 1 r n p i + ∑ 1 ⩽ i < j ⩽ r n p i p j − ⋯ = n ∏ i = 1 r ( 1 − 1 p i ) . 
{\displaystyle \varphi (n)=n-\sum _{i=1}^{r}{\frac {n}{p_{i}}}+\sum _{1\leqslant i<j\leqslant r}{\frac {n}{p_{i}p_{j}}}-\cdots =n\prod _{i=1}^{r}\left(1-{\frac {1}{p_{i}}}\right).} === Dirichlet hyperbola method === The Dirichlet hyperbola method re-expresses a sum of a multiplicative function f ( n ) {\displaystyle f(n)} by selecting a suitable Dirichlet convolution f = g ∗ h {\displaystyle f=g\ast h} , recognizing that the sum F ( n ) = ∑ k = 1 n f ( k ) = ∑ k = 1 n ∑ x y = k g ( x ) h ( y ) {\displaystyle F(n)=\sum _{k=1}^{n}f(k)=\sum _{k=1}^{n}\sum _{xy=k}^{}g(x)h(y)} can be recast as a sum over the lattice points in a region bounded by x ≥ 1 {\displaystyle x\geq 1} , y ≥ 1 {\displaystyle y\geq 1} , and x y ≤ n {\displaystyle xy\leq n} , splitting this region into two overlapping subregions bounded by x ≤ a and by y ≤ b respectively, for any a, b ≥ 1 with ab = n, and finally using the inclusion–exclusion principle to conclude that F ( n ) = ∑ k = 1 n f ( k ) = ∑ k = 1 n ∑ x y = k g ( x ) h ( y ) = ∑ x = 1 a ∑ y = 1 n / x g ( x ) h ( y ) + ∑ y = 1 b ∑ x = 1 n / y g ( x ) h ( y ) − ∑ x = 1 a ∑ y = 1 b g ( x ) h ( y ) . {\displaystyle F(n)=\sum _{k=1}^{n}f(k)=\sum _{k=1}^{n}\sum _{xy=k}^{}g(x)h(y)=\sum _{x=1}^{a}\sum _{y=1}^{n/x}g(x)h(y)+\sum _{y=1}^{b}\sum _{x=1}^{n/y}g(x)h(y)-\sum _{x=1}^{a}\sum _{y=1}^{b}g(x)h(y).} == Diluted inclusion–exclusion principle == In many cases where the principle could give an exact formula (in particular, counting prime numbers using the sieve of Eratosthenes), the formula arising does not offer useful content because the number of terms in it is excessive. If each term individually can be estimated accurately, the accumulation of errors may imply that the inclusion–exclusion formula is not directly applicable. In number theory, this difficulty was addressed by Viggo Brun. After a slow start, his ideas were taken up by others, and a large variety of sieve methods developed. These, for example, may try to find upper bounds for the "sieved" sets, rather than an exact formula. 
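For a small exact sieve of this kind, the totient formula derived above can be evaluated term by term and compared against a direct count. The Python sketch below uses illustrative helper names:

```python
from itertools import combinations
from math import gcd, prod

def prime_factors(n):
    # Distinct prime factors of n, by trial division.
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def phi_by_sieve(n):
    # phi(n) = n - sum n/p_i + sum n/(p_i p_j) - ..., as derived above;
    # each subset of prime divisors contributes one signed term.
    ps = sorted(prime_factors(n))
    return sum((-1) ** k * n // prod(c)
               for k in range(len(ps) + 1)
               for c in combinations(ps, k))

for n in range(2, 100):
    assert phi_by_sieve(n) == sum(gcd(n, k) == 1 for k in range(1, n + 1))
```

For n with many distinct prime factors the sum has 2^r terms, which is the blow-up the diluted principle and sieve methods are designed to avoid.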
Let A1, ..., An be arbitrary sets and p1, ..., pn real numbers in the closed unit interval [0, 1]. Then, for every even number k in {0, ..., n}, the indicator functions satisfy the inequality: 1 A 1 ∪ ⋯ ∪ A n ≥ ∑ j = 1 k ( − 1 ) j − 1 ∑ 1 ≤ i 1 < ⋯ < i j ≤ n p i 1 … p i j 1 A i 1 ∩ ⋯ ∩ A i j . {\displaystyle 1_{A_{1}\cup \cdots \cup A_{n}}\geq \sum _{j=1}^{k}(-1)^{j-1}\sum _{1\leq i_{1}<\cdots <i_{j}\leq n}p_{i_{1}}\dots p_{i_{j}}\,1_{A_{i_{1}}\cap \cdots \cap A_{i_{j}}}.} == Proof of main statement == Choose an element contained in the union of all sets and let A 1 , A 2 , … , A t {\displaystyle A_{1},A_{2},\dots ,A_{t}} be the individual sets containing it. (Note that t > 0.) Since the element is counted precisely once by the left-hand side of equation (1), we need to show that it is counted precisely once by the right-hand side. On the right-hand side, the only non-zero contributions occur when all the subsets in a particular term contain the chosen element, that is, all the subsets are selected from A 1 , A 2 , … , A t {\displaystyle A_{1},A_{2},\dots ,A_{t}} . The contribution is one for each of these sets (plus or minus depending on the term) and therefore is just the (signed) number of these subsets used in the term. We then have: | { A i ∣ 1 ⩽ i ⩽ t } | − | { A i ∩ A j ∣ 1 ⩽ i < j ⩽ t } | + ⋯ + ( − 1 ) t + 1 | { A 1 ∩ A 2 ∩ ⋯ ∩ A t } | = ( t 1 ) − ( t 2 ) + ⋯ + ( − 1 ) t + 1 ( t t ) . {\displaystyle {\begin{aligned}|\{A_{i}\mid 1\leqslant i\leqslant t\}|&-|\{A_{i}\cap A_{j}\mid 1\leqslant i<j\leqslant t\}|+\cdots +(-1)^{t+1}|\{A_{1}\cap A_{2}\cap \cdots \cap A_{t}\}|={\binom {t}{1}}-{\binom {t}{2}}+\cdots +(-1)^{t+1}{\binom {t}{t}}.\end{aligned}}} By the binomial theorem, 0 = ( 1 − 1 ) t = ( t 0 ) − ( t 1 ) + ( t 2 ) − ⋯ + ( − 1 ) t ( t t ) . 
{\displaystyle 0=(1-1)^{t}={\binom {t}{0}}-{\binom {t}{1}}+{\binom {t}{2}}-\cdots +(-1)^{t}{\binom {t}{t}}.} Using the fact that ( t 0 ) = 1 {\displaystyle {\binom {t}{0}}=1} and rearranging terms, we have 1 = ( t 1 ) − ( t 2 ) + ⋯ + ( − 1 ) t + 1 ( t t ) , {\displaystyle 1={\binom {t}{1}}-{\binom {t}{2}}+\cdots +(-1)^{t+1}{\binom {t}{t}},} and so, the chosen element is counted only once by the right-hand side of equation (1). === Algebraic proof === An algebraic proof can be obtained using indicator functions (also known as characteristic functions). The indicator function of a subset S of a set X is the function 1 S : X → { 0 , 1 } 1 S ( x ) = { 1 x ∈ S 0 x ∉ S {\displaystyle {\begin{aligned}&\mathbf {1} _{S}:X\to \{0,1\}\\&\mathbf {1} _{S}(x)={\begin{cases}1&x\in S\\0&x\notin S\end{cases}}\end{aligned}}} If A {\displaystyle A} and B {\displaystyle B} are two subsets of X {\displaystyle X} , then 1 A ⋅ 1 B = 1 A ∩ B . {\displaystyle \mathbf {1} _{A}\cdot \mathbf {1} _{B}=\mathbf {1} _{A\cap B}.} Let A denote the union ⋃ i = 1 n A i {\textstyle \bigcup _{i=1}^{n}A_{i}} of the sets A1, ..., An. To prove the inclusion–exclusion principle in general, we first verify the identity 1 A = ∑ k = 1 n ( − 1 ) k − 1 ∑ I ⊆ { 1 , … , n } , | I | = k 1 A I ( 4 ) {\displaystyle \mathbf {1} _{A}=\sum _{k=1}^{n}(-1)^{k-1}\sum _{I\subseteq \{1,\ldots ,n\} \atop |I|=k}\mathbf {1} _{A_{I}}\qquad (4)} for indicator functions, where: A I = ⋂ i ∈ I A i . {\displaystyle A_{I}=\bigcap _{i\in I}A_{i}.} The following product is identically zero: ( 1 A − 1 A 1 ) ( 1 A − 1 A 2 ) ⋯ ( 1 A − 1 A n ) = 0 , {\displaystyle \left(\mathbf {1} _{A}-\mathbf {1} _{A_{1}}\right)\left(\mathbf {1} _{A}-\mathbf {1} _{A_{2}}\right)\cdots \left(\mathbf {1} _{A}-\mathbf {1} _{A_{n}}\right)=0,} because if x is not in A, then all factors are 0 − 0 = 0; otherwise, if x does belong to some Am, then the corresponding mth factor is 1 − 1 = 0. By expanding the product on the left-hand side, equation (4) follows. To prove the inclusion–exclusion principle for the cardinality of sets, sum the equation (4) over all x in the union of A1, ..., An. To derive the version used in probability, take the expectation in (4). 
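The indicator identity at the heart of this proof can also be verified exhaustively for a small family of sets, evaluating both sides at every point. A Python sketch with an arbitrary example family:

```python
from itertools import combinations

def indicator(s, x):
    return 1 if x in s else 0

# A small, arbitrary family of sets; the identity holds for any finite family.
family = [{1, 2, 3}, {2, 4}, {3, 4, 5}]
union = set().union(*family)
n = len(family)

for x in union | {99}:  # also check a point outside the union
    lhs = indicator(union, x)
    # Signed sum of indicators of all k-fold intersections, k = 1, ..., n.
    rhs = sum((-1) ** (k - 1) *
              sum(indicator(set.intersection(*(family[i] for i in I)), x)
                  for I in combinations(range(n), k))
              for k in range(1, n + 1))
    assert lhs == rhs
```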
In general, integrate the equation (4) with respect to μ. Always use linearity in these derivations. == See also == Boole's inequality – Inequality applying to probability spaces Combinatorial principles – Methods used in combinatorics Maximum-minimums identity – Relates the maximum element of a set of numbers and the minima of its non-empty subsets Necklace problem Pigeonhole principle – If there are more items than boxes holding them, one box must contain at least two items Schuette–Nesbitt formula – Mathematical formula in probability theory == Notes == == References == Allenby, R.B.J.T.; Slomson, Alan (2010), How to Count: An Introduction to Combinatorics, Discrete Mathematics and Its Applications (2nd ed.), CRC Press, pp. 51–60, ISBN 9781420082609 Björklund, A.; Husfeldt, T.; Koivisto, M. (2009), "Set partitioning via inclusion–exclusion", SIAM Journal on Computing, 39 (2): 546–563, CiteSeerX 10.1.1.526.9573, doi:10.1137/070683933 Brualdi, Richard A. (2010), Introductory Combinatorics (5th ed.), Prentice–Hall, ISBN 9780136020400 Cameron, Peter J. (1994), Combinatorics: Topics, Techniques, Algorithms, Cambridge University Press, ISBN 0-521-45761-0 Fernández, Roberto; Fröhlich, Jürg; Sokal, Alan D. (1992), Random Walks, Critical Phenomena, and Triviality in Quantum Field Theory, Texts and Monographs in Physics, Berlin: Springer-Verlag, pp. xviii+444, ISBN 3-540-54358-9, MR 1219313, Zbl 0761.60061 Graham, R.L.; Grötschel, M.; Lovász, L. (1995), Handbook of Combinatorics (Volume 2), MIT Press – North Holland, ISBN 9780262071710 Gross, Jonathan L. (2008), Combinatorial Methods with Computer Applications, Chapman & Hall/CRC, ISBN 9781584887430 "Inclusion-and-exclusion principle", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Mazur, David R. 
(2010), Combinatorics: A Guided Tour, The Mathematical Association of America, ISBN 9780883857625 Roberts, Fred S.; Tesman, Barry (2009), Applied Combinatorics (2nd ed.), CRC Press, ISBN 9781420099829 Stanley, Richard P. (1986), Enumerative Combinatorics, Volume I, Wadsworth & Brooks/Cole, ISBN 0534065465 van Lint, J.H.; Wilson, R.M. (1992), A Course in Combinatorics, Cambridge University Press, ISBN 0521422604 This article incorporates material from principle of inclusion–exclusion on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Documenting Hate
Documenting Hate is a project of ProPublica, in collaboration with a number of journalistic, academic, and computing organizations, for systematic tracking of hate crimes and bias incidents. It uses an online form to facilitate reporting of incidents by the general public. Since August 2017, it has also used machine learning and natural language processing techniques to monitor and collect news stories about hate crimes and bias incidents. As of October 2017, over 100 news organizations had joined the project. == History == === Origin === Documenting Hate was created in response to ProPublica's dissatisfaction with the quality of reporting and tracking of evidence of hate crimes and bias incidents after the United States presidential election of 2016. The project was launched on 17 January 2017, after the publication on 15 November 2016 of a ProPublica news story about the difficulty of obtaining hard data on hate crimes. === Introduction of the Documenting Hate News Index === On 18 August 2017, ProPublica and Google announced the creation of the Documenting Hate News Index, which uses the Google Cloud Natural Language API for automated monitoring and collection of news stories about hate crimes and bias incidents. The API uses machine learning and natural language processing techniques. The findings of the Index are integrated with reports from members of the public. The Index is a joint project of ProPublica, Google News Lab, and the data visualization studio Pitch Interactive. == Response == === Participation === As of May 2017, thousands of incidents had been reported via Documenting Hate. As of October 2017, over 100 news organizations had joined the project, including the Boston Globe, the New York Times, Vox, and the Georgetown University Hoya. 
=== Relationship to government statistical monitoring === A policy analyst for the Center for Data Innovation (an affiliate of the Information Technology and Innovation Foundation), while supporting ProPublica's critique of the present state of hate-crime statistics, and praising ProPublica for drawing attention to the problem, has argued that a nongovernmental project like Documenting Hate cannot solve it unaided; instead, intervention at the federal level is needed. == See also == Unite the Right rally == References == == External links == Documenting Hate on ProPublica (www.documentinghate.com redirects to this ProPublica page) Documenting Hate News Index Google News Lab Google Cloud Natural Language API Pitch Interactive
Model-free (reinforcement learning)
In reinforcement learning (RL), a model-free algorithm is an algorithm which does not estimate the transition probability distribution (and the reward function) associated with the Markov decision process (MDP), which, in RL, represents the problem to be solved. The transition probability distribution (or transition model) and the reward function are often collectively called the "model" of the environment (or MDP), hence the name "model-free". A model-free RL algorithm can be thought of as an "explicit" trial-and-error algorithm. Typical examples of model-free algorithms include Monte Carlo (MC) RL, SARSA, and Q-learning. Monte Carlo estimation is a central component of many model-free RL algorithms. The MC learning algorithm is essentially an important branch of generalized policy iteration, which has two periodically alternating steps: policy evaluation (PEV) and policy improvement (PIM). In this framework, each policy is first evaluated by its corresponding value function. Then, based on the evaluation result, a greedy search is performed to produce a better policy. MC estimation is mainly applied to the first step, policy evaluation. The simplest way to judge the effectiveness of the current policy is to average the returns of all collected samples. As more experience is accumulated, the estimate will converge to the true value by the law of large numbers. Hence, MC policy evaluation does not require any prior knowledge of the environment dynamics. Instead, only experience is needed (i.e., samples of state, action, and reward), which is generated from interacting with an environment (which may be real or simulated). Value function estimation is crucial for model-free RL algorithms. Unlike MC methods, temporal difference (TD) methods learn this function by reusing existing value estimates. TD learning has the ability to learn from an incomplete sequence of events without waiting for the final outcome. 
It can also approximate the future return as a function of the current state. Similar to MC, TD only uses experience to estimate the value function, without any prior knowledge of the environment dynamics. The advantage of TD lies in the fact that it can update the value function based on its current estimate. Therefore, TD learning algorithms can learn from incomplete episodes or continuing tasks in a step-by-step manner, while MC must be implemented in an episode-by-episode fashion. == Model-free reinforcement learning algorithms == Model-free RL algorithms can start from a blank policy candidate and achieve superhuman performance in many complex tasks, including Atari games, StarCraft and Go. Deep neural networks are responsible for recent artificial intelligence breakthroughs, and they can be combined with RL to create superhuman agents such as Google DeepMind's AlphaGo. Mainstream model-free RL algorithms include Deep Q-Network (DQN), Dueling DQN, Double DQN (DDQN), Trust Region Policy Optimization (TRPO), Proximal Policy Optimization (PPO), Asynchronous Advantage Actor-Critic (A3C), Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), Soft Actor-Critic (SAC), Distributional Soft Actor-Critic (DSAC), etc. == References ==
Catastrophic cancellation
In numerical analysis, catastrophic cancellation is the phenomenon that subtracting good approximations to two nearby numbers may yield a very bad approximation to the difference of the original numbers. For example, if there are two studs, one L 1 = 253.51 cm {\displaystyle L_{1}=253.51\,{\text{cm}}} long and the other L 2 = 252.49 cm {\displaystyle L_{2}=252.49\,{\text{cm}}} long, and they are measured with a ruler that is good only to the centimeter, then the approximations could come out to be L ~ 1 = 254 cm {\displaystyle {\tilde {L}}_{1}=254\,{\text{cm}}} and L ~ 2 = 252 cm {\displaystyle {\tilde {L}}_{2}=252\,{\text{cm}}} . These may be good approximations, in relative error, to the true lengths: the approximations are in error by less than 0.2% of the true lengths, | L 1 − L ~ 1 | / | L 1 | < 0.2 % {\displaystyle |L_{1}-{\tilde {L}}_{1}|/|L_{1}|<0.2\%} . However, if the approximate lengths are subtracted, the difference will be L ~ 1 − L ~ 2 = 254 cm − 252 cm = 2 cm {\displaystyle {\tilde {L}}_{1}-{\tilde {L}}_{2}=254\,{\text{cm}}-252\,{\text{cm}}=2\,{\text{cm}}} , even though the true difference between the lengths is L 1 − L 2 = 253.51 cm − 252.49 cm = 1.02 cm {\displaystyle L_{1}-L_{2}=253.51\,{\text{cm}}-252.49\,{\text{cm}}=1.02\,{\text{cm}}} . The difference of the approximations, 2 cm {\displaystyle 2\,{\text{cm}}} , is in error by almost 100% of the magnitude of the difference of the true values, 1.02 cm {\displaystyle 1.02\,{\text{cm}}} . Catastrophic cancellation is not affected by how large the inputs are—it applies just as much to large and small inputs. It depends only on how large the difference is, and on the error of the inputs. 
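The stud measurement above can be reproduced in a few lines. This Python sketch just restates the arithmetic of the example:

```python
L1, L2 = 253.51, 252.49              # true lengths, in cm
L1_meas, L2_meas = 254.0, 252.0      # measured to the nearest cm

# Relative errors of the inputs: both under 0.2%.
rel_err_inputs = max(abs(L1 - L1_meas) / L1, abs(L2 - L2_meas) / L2)

true_diff = L1 - L2                  # 1.02 cm
meas_diff = L1_meas - L2_meas        # 2 cm
rel_err_diff = abs(true_diff - meas_diff) / abs(true_diff)

print(rel_err_inputs)  # below 0.002
print(rel_err_diff)    # roughly 0.96: almost 100% of the true difference
```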
Exactly the same error would arise by subtracting 52 cm from 54 cm as approximations to 52.49 cm and 53.51 cm, or by subtracting 2.00052 km from 2.00054 km as approximations to 2.0005249 km and 2.0005351 km. Catastrophic cancellation may happen even if the difference is computed exactly, as in the example above: it is not a property of any particular kind of arithmetic like floating-point arithmetic; rather, it is inherent to subtraction, when the inputs are approximations themselves. Indeed, in floating-point arithmetic, when the inputs are close enough, the floating-point difference is computed exactly, by the Sterbenz lemma; there is no rounding error introduced by the floating-point subtraction operation.

== Formal analysis ==

Formally, catastrophic cancellation happens because subtraction is ill-conditioned at nearby inputs: even if approximations x̃ = x(1 + δx) and ỹ = y(1 + δy) have small relative errors |δx| = |x − x̃|/|x| and |δy| = |y − ỹ|/|y| from true values x and y, respectively, the relative error of the difference x̃ − ỹ of the approximations from the difference x − y of the true values is inversely proportional to the difference of the true values:

x̃ − ỹ = x(1 + δx) − y(1 + δy)
       = x − y + xδx − yδy
       = (x − y)(1 + (xδx − yδy)/(x − y)).

Thus, the relative error of the exact difference x̃ − ỹ of the approximations from the difference x − y of the true values is

|(xδx − yδy)/(x − y)|,

which can be arbitrarily large if the true values x and y are close.

== In numerical algorithms ==

Subtracting nearby numbers in floating-point arithmetic does not always cause catastrophic cancellation, or even any error: by the Sterbenz lemma, if the numbers are close enough the floating-point difference is exact. But cancellation may amplify errors in the inputs that arose from rounding in other floating-point arithmetic.

=== Example: Difference of squares ===

Given numbers x and y, the naive attempt to compute the mathematical function x² − y² by the floating-point arithmetic fl(fl(x²) − fl(y²)) is subject to catastrophic cancellation when x and y are close in magnitude, because the subtraction can expose the rounding errors in the squaring. The alternative factoring (x + y)(x − y), evaluated by the floating-point arithmetic fl(fl(x + y) ⋅ fl(x − y)), avoids catastrophic cancellation because it avoids introducing rounding error leading into the subtraction.
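The two evaluation strategies can be compared in a short Python sketch (Python's float is IEEE 754 binary64; the particular inputs are those of the worked example that follows):

```python
x = 1.0 + 2.0**-29
y = 1.0 + 2.0**-30

# Naive form: the squarings round away the low-order bits, and the
# subtraction then exposes that rounding error.
naive = x * x - y * y

# Factored form: x + y and x - y are computed without damaging rounding,
# so the final product is correct.
factored = (x + y) * (x - y)

true_value = 2.0**-29 * (1.0 + 2.0**-30 + 2.0**-31)
```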
For example, if x = 1 + 2⁻²⁹ ≈ 1.0000000018626451 and y = 1 + 2⁻³⁰ ≈ 1.0000000009313226, then the true value of the difference x² − y² is 2⁻²⁹ ⋅ (1 + 2⁻³⁰ + 2⁻³¹) ≈ 1.8626451518330422 × 10⁻⁹. In IEEE 754 binary64 arithmetic, evaluating the alternative factoring (x + y)(x − y) gives the correct result exactly (with no rounding), but evaluating the naive expression x² − y² gives the floating-point number 2⁻²⁹ = 1.86264514923095703125 × 10⁻⁹, of which less than half the digits are correct; the remaining digits reflect the missing terms 2⁻⁵⁹ + 2⁻⁶⁰, lost due to rounding when calculating the intermediate squared values.

=== Example: Complex arcsine ===

When computing the complex arcsine function, one may be tempted to use the logarithmic formula directly:

arcsin(z) = i log(√(1 − z²) − iz).

However, suppose z = iy for y ≪ 0. Then √(1 − z²) ≈ −y and iz = −y; call the difference between them ε, a very small difference, nearly zero.
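The cancellation can be reproduced with Python's cmath module; a minimal sketch, using the concrete value z = −1234567i from the discussion below:

```python
import cmath

z = -1234567j

# Naive logarithmic formula: sqrt(1 - z**2) and i*z are both close to
# 1234567, so their difference suffers catastrophic cancellation.
naive = 1j * cmath.log(cmath.sqrt(1 - z*z) - 1j*z)

# Identity arcsin(z) = -arcsin(-z): for this z the subtraction inside
# the logarithm becomes an addition of like-signed terms, which does
# not cancel.  Note 1 - (-z)**2 = 1 - z**2.
safe = -(1j * cmath.log(cmath.sqrt(1 - z*z) - 1j*(-z)))
```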
If √(1 − z²) is evaluated in floating-point arithmetic giving fl(√(fl(1 − fl(z²)))) = √(1 − z²)(1 + δ) with any error δ ≠ 0, where fl(⋯) denotes floating-point rounding, then computing the difference √(1 − z²)(1 + δ) − iz of two nearby numbers, both very close to −y, may amplify the error δ in one input by a factor of 1/ε, a very large factor because ε was nearly zero. For instance, if z = −1234567i, the true value of arcsin(z) is approximately −14.71937803983977i, but using the naive logarithmic formula in IEEE 754 binary64 arithmetic may give −14.719644263563968i, with only five out of sixteen digits correct and the remainder all incorrect. In the case of z = iy for y < 0, using the identity arcsin(z) = −arcsin(−z) avoids cancellation because √(1 − (−z)²) = √(1 − z²) ≈ −y but i(−z) = −iz = y, so the subtraction is effectively addition with the same sign, which does not cancel.

=== Example: Radix conversion ===

Numerical constants in software programs are often written in decimal, such as in the C fragment double x = 1.000000000000001; to declare and initialize an IEEE 754 binary64 variable named x.
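What the constant actually becomes after conversion can be inspected in Python, whose float is the same binary64 format (the second constant, 1.000000000000002, anticipates the comparison made below):

```python
x = 1.000000000000001   # stored as the nearest binary64: 1 + 5*2**-52
y = 1.000000000000002   # stored as the nearest binary64: 1 + 9*2**-52

diff = y - x            # computed exactly, by the Sterbenz lemma

# The decimally written difference is exactly 1.0e-15, but the computed
# difference 4*2**-52 misses it by more than 11%.
relative_error = abs(diff - 1.0e-15) / 1.0e-15
```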
However, 1.000000000000001 is not a binary64 floating-point number; the nearest one, which x will be initialized to in this fragment, is 1.0000000000000011102230246251565404236316680908203125 = 1 + 5 ⋅ 2⁻⁵². Although the radix conversion from decimal floating-point to binary floating-point only incurs a small relative error, catastrophic cancellation may amplify it into a much larger one. The difference 1.000000000000002 − 1.000000000000001 is 0.000000000000001 = 1.0 × 10⁻¹⁵. The relative errors of x from 1.000000000000001 and of y from 1.000000000000002 are both below 10⁻¹⁵ = 0.0000000000001%, and the floating-point subtraction y - x is computed exactly by the Sterbenz lemma. But even though the inputs are good approximations, and even though the subtraction is computed exactly, the difference of the approximations ỹ − x̃ = (1 + 9 ⋅ 2⁻⁵²) − (1 + 5 ⋅ 2⁻⁵²) = 4 ⋅ 2⁻⁵² ≈ 8.88 × 10⁻¹⁶ has a relative error of over 11% from the difference 1.0 × 10⁻¹⁵ of the original values as written in decimal: catastrophic cancellation amplified a tiny error in radix conversion into a large error in the output.

=== Benign cancellation ===

Cancellation is sometimes useful and desirable in numerical algorithms. For example, the 2Sum and Fast2Sum algorithms both rely on such cancellation after a rounding error in order to exactly compute what the error was in a floating-point addition operation as a floating-point number itself.
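A minimal sketch of the Fast2Sum idea in Python: the subtraction b - (s - a) cancels benignly, leaving exactly the rounding error of the addition (assuming |a| ≥ |b| and round-to-nearest binary64 arithmetic):

```python
def fast_two_sum(a, b):
    """Return (s, err) with s = fl(a + b) and s + err == a + b exactly.
    Requires |a| >= |b|."""
    s = a + b
    err = b - (s - a)   # benign cancellation recovers the rounding error
    return s, err
```

For example, adding 1.0 and 2⁻⁵³ rounds to 1.0, and fast_two_sum reports the lost 2⁻⁵³ as the error term.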
The function log(1 + x), if evaluated naively at points 0 < x ⋘ 1, will lose most of the digits of x in rounding fl(1 + x). However, the function log(1 + x) itself is well-conditioned at inputs near 0. Rewriting it as

log(1 + x) = x ⋅ log(1 + x)/((1 + x) − 1)

exploits cancellation in x̂ := fl(1 + x) − 1 to avoid the error from log(1 + x) evaluated directly. This works because the cancellation in the numerator log(fl(1 + x)) = x̂ + O(x̂²) and the cancellation in the denominator x̂ = fl(1 + x) − 1 counteract each other; the function μ(ξ) = log(1 + ξ)/ξ is well-enough conditioned near zero that μ(x̂) gives a good approximation to μ(x), and thus x ⋅ μ(x̂) gives a good approximation to x ⋅ μ(x) = log(1 + x).

== References ==
PatientsLikeMe
PatientsLikeMe (PLM) is an integrated community, health management, and real-world data platform. The platform currently has over 830,000 members who are dealing with more than 2,900 conditions, such as ALS, MS, and epilepsy. Data generated by patients themselves are collected and quantified with the goal of providing an environment for peer support and learning. These data capture the influences of different lifestyle choices, socio-demographics, conditions and treatments on a person's health.

== History ==

PatientsLikeMe was inspired by the life experiences of Stephen Heywood, diagnosed in 1998 at the age of 29 with amyotrophic lateral sclerosis (ALS), or Lou Gehrig's disease. The company was founded in 2004 by his brothers Jamie and Ben Heywood and long-time family friend Jeff Cole. After being diagnosed with ALS, Stephen's family founded a non-profit, the ALS Therapy Development Institute, in an attempt to slow his disease and treat his symptoms. However, research was slow, and the trial-and-error approach was time-consuming and repetitive. They realized that Stephen's experience was like that of other patients around the world, who often have specific questions about their treatment options and what to expect. PatientsLikeMe was created to help patients connect with others who know firsthand what they are going through, to share advice and resources. Through a health profile made up of structured and quantitative clinical reporting tools, members are able to monitor their health between doctor or hospital visits, document the severity of their symptoms, identify triggers, note how they are responding to new treatments, and track side effects. They have the opportunity to learn from the aggregated data of others with the same disease and see how they are doing in context with others.
Members of the site use social tools such as forums, private messages, and profile comments to give and receive support from others, a support mechanism that has been shown to help improve their management and perceived control. In 2017, PatientsLikeMe entered into a partnership with iCarbonX to apply next-generation biological measures and machine learning to understand more about the basis of human health and disease. iCarbonX, founded in 2015 by renowned genomicist Jun Wang, took an equity position in PatientsLikeMe and provided multi-omics characterization services to the company. In 2019, PatientsLikeMe was acquired by UnitedHealth Group after the United States government forced iCarbonX to divest its investment in the company. UnitedHealth Group and PatientsLikeMe made plans to help patients with similar health concerns connect to share experiences. In 2020, PatientsLikeMe began to operate as an independent company backed by Optum Ventures, a UnitedHealth Group affiliate.

== Expansion beyond ALS ==

PatientsLikeMe launched its first online community for ALS patients in 2006. From there, the company began adding communities for other life-changing conditions, including multiple sclerosis (MS), Parkinson's disease, fibromyalgia, HIV, chronic fatigue syndrome, mood disorders, epilepsy, organ transplantation, progressive supranuclear palsy, multiple system atrophy, and Devic's disease (neuromyelitis optica). The company's approach was to read the scientific literature and listen to patients to identify outcome measures, symptoms, and treatments that were important to patients and could be accurately reported. For example, the development of the MS community involved the creation of a new patient-reported outcome measure, the MS Rating Scale (MSRS), to ensure patients could accurately determine how their condition was progressing over time.
However, building one community at a time was a slow process and the company risked being overly narrow in focus while excluding more than 5,000 patients who had requested new communities as of December 2010. In April 2011, the company expanded its scope and opened its doors to any patient with any condition. Today the website covers more than 2,900 health conditions, with new members joining daily from the US and other countries around the world. Of note are the nearly 14,000 ALS members, who have helped make PatientsLikeMe's flagship community the largest online population of ALS patients in the world. In the United States, approximately 10 percent of newly diagnosed ALS patients register on the site each month, and 2 percent of all multiple sclerosis patients in the US participate in the community.

== Products and services ==

=== Online data-sharing platform ===

PatientsLikeMe allows members to input real-world data on their conditions, treatment history, side effects, hospitalizations, symptoms, disease-specific functional scores, weight, mood, quality of life, and more on an ongoing basis. The result is a detailed longitudinal record, organized into charts and graphs, that allows patients to gain insight and identify patterns. The data-sharing platform is designed to help patients answer the question: "Given my status, what is the best outcome I can hope to achieve, and how do I get there?" Answers come in the form of shared longitudinal data from other patients with the same condition(s), thus allowing members to place their experiences in context and see what treatments have helped other patients like them. Some communities, such as ALS, feature visual aids such as percentile curves on the patient profile, so that an individual user can see whether their rate of progression is fast, slow, or about average.
A seizure tracker for patients with epilepsy helps identify triggers such as missed medication doses, sleep deprivation, or alcohol use, and a "mood map" for patients with mood disorders helps to show different factors underlying their condition, such as emotional control, anxiety, or external stress; all users can look for patterns in their daily health status, such as day of the week or time of day. Beyond helping patients organize their treatment and better understand and control their disease, the platform gives them access to a beneficial psycho-social support network. Patients can share with their peers who have had or are going through similar experiences. Diagnosis of a long-term illness can be socially isolating, as the patient is usually the only one in their family or friend group going through it. There is an experience gap between people who are diagnosed with cancer (or another long-term illness) and those who are not. Because humans are social animals, this isolation often leads to anxiety and depression related to the diagnosis, which are known to undermine treatment and patient outcomes. In relation to various cancers, peer support groups of "others who have had the same or similar experiences" have been linked to reduced symptoms of depression, increased patient compliance with treatment regimens, and increased survival outcomes. Three studies have been published suggesting that use of the platform improves patient outcomes. A survey conducted in 2010 amongst patients with ALS, MS, Parkinson's disease, HIV, fibromyalgia, and mood disorders found that 72% of users had found the site helpful in learning about a symptom they had experienced, 57% for understanding the side effects of a treatment, and 42% in helping them to find another patient like them, amongst other benefits.
A second study conducted in epilepsy found that, in addition to the benefits reported earlier, patients with epilepsy reported a better understanding of their symptoms (59%), seizures (58%), and symptoms or treatments (55%). The number of benefits they reported from using the site was strongly associated with the number of social connections they made with other members, dubbed the "dose effect curve of friendship". Finally, a third study conducted with the U.S. Department of Veterans Affairs and the University of California, San Francisco reported statistically significant improvements in validated measures of self-management and self-efficacy in veterans with epilepsy as a result of engaging with the site for a period of six weeks. In 2023, the Neurological Clinical Research Institute (NCRI) at Massachusetts General Hospital collaborated with PatientsLikeMe to use the ALS data from the database to expand its Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database. The PLM database includes information such as symptom reports from ALS patients that add data to the clinical trial.

=== Health economics and outcomes research ===

The site makes revenue by conducting scientific research studies for pharmaceutical companies, typically with an emphasis on issues that are important to both patients and industry. In 2011, a partnership with Novartis studied the barriers faced by people with multiple sclerosis in adhering to their medication, which led to the development of an MS Treatment Adherence Questionnaire (MS-TAQ), made available to help patients and their doctors identify and address these issues through coping strategies and enhanced communication. A 2013 collaboration with UCB explored factors underlying quality of life in epilepsy and identified a number of issues beyond the occurrence of seizures as being important, including symptoms such as problems concentrating, depression, memory problems, and treatment side effects.
In 2015, PatientsLikeMe worked with researchers at Genentech on a study inviting potential clinical trial participants to review study protocols in order to provide input and feedback to make the study more appealing. A 2016 collaboration with Novartis published in Nature Biotechnology and Value in Health explored ways in which patients could provide systematic input into guiding drug development to help make it more patient-centered. A different study published with AstraZeneca in 2016 sought to understand the treatment expectations of women living with ovarian cancer and identified a shift from surviving with the condition to living with it. Such research helps to improve understanding of disease, identify new approaches to management, and generate ideas to improve the products and services developed by pharmaceutical companies.

=== Open Research Exchange ===

Following the award in 2013 and 2014 of $4.5m in grants from the Robert Wood Johnson Foundation, the company developed an online tool called the Open Research Exchange (ORE) that allowed for the rapid creation, prototyping, testing, and validation of patient-reported outcome measures: questionnaires that can establish the impact of symptoms and disease on patients. During the period of the grant, a number of academic collaborators were invited to develop measures on the platform, including measures of treatment burden, hypertension management, feelings of satiety in diabetes, and treatment burden in chronic illness. The tool offers researchers the ability to rapidly get input from large numbers of patients in a matter of weeks or months, as opposed to much slower forms of research which can take years to complete.
A number of tools such as the Treatment Burden Questionnaire and the Suicide Ideation and Behavior Assessment Tool (SIBAT) have been published in the scientific literature for use by researchers, and an editorial co-authored with industry leaders and a researcher at the FDA outlined ways in which PROs developed on the ORE could be used for the development of new medicines. In addition to the traditional scientist-led instruments, one instrument was developed by a person living with MS. A 2016 RWJF grant of $900,000 funds PatientsLikeMe to work with the National Quality Forum to develop new measures for healthcare performance.

== Scientific work ==

A key differentiator of the site from more traditional online support groups, message boards, social media sites and listservs is the emphasis on structured quantitative data which can be aggregated and used for research purposes. This has permitted PatientsLikeMe's research team to author more than 100 peer-reviewed published scientific articles in collaboration with academic and commercial partners in leading journals such as the BMJ, Nature Biotechnology, and Neurology. In addition, PatientsLikeMe has been mentioned by others in more than 3,000 published articles in the scientific literature and has been featured as a business case study by the Harvard Business Review. The company has also invited researchers to become embedded with the company, as in an in-depth study explaining the organization of the platform and highlighting some of the challenges that social media and patient-centered research models are facing. Wherever possible, PatientsLikeMe has a policy of publishing its research output in open access form, so that patients, clinicians, and researchers can easily access their scientific output.
Instruments and questionnaires developed on PatientsLikeMe such as the MS Rating Scale or MS Treatment Adherence Questionnaire are licensed under Creative Commons so that they can be used freely by the community without complex or costly licensing requirements. The company also provides patients that take part in its studies with "givebacks" which concisely and rapidly give them feedback in lay language as to the results of research in which they have participated so they can understand how donating their data has been useful for research. The company's best known scientific endeavor relates to an online refutation of a clinical trial in ALS. In 2008, a small Italian study was published suggesting that lithium carbonate could slow the progression of ALS. In response, hundreds of members of PatientsLikeMe with the disease began taking the drug off-label. Using the self-reported data of 348 ALS patients, PatientsLikeMe conducted a 9-month long study which demonstrated that lithium did not slow the progress of the disease. The team suggested that online collection of patient self-report data was not a substitute for randomized placebo-controlled trials, but it might be a useful new form of clinical research in certain circumstances. A later study described how patients attempted to use the same tools to unblind clinical trials in which they were enrolled to try and see whether or not the experimental drugs they were taking were working. A 2016 collaboration with Dr Rick Bedlack of the Duke ALS Clinic aims to overcome some of the burden of traditional ALS trials by allowing patients to take part in a clinical trial of a nutritional supplement, Lunasin, from their own home with just two clinic visits rather than regular monthly appointments. Participants completed “virtual visits” to record their ALSFRS-R and other health information in between initial and final on-site visits. 
Synthetic controls were matched to the intervention arm based on demographics and similarity scores of disease progression, using algorithms developed at PLM that analyzed longitudinal ALSFRS-R data from the existing PLM population. This enabled clinicians to effectively power their study while further reducing on-site visits. This virtual model has resulted in fast and effective trial recruitment, retention, and adherence. Led by PLM, recruitment of trial participants for Duke was achieved in less than half the expected time.

== Corporate affairs and culture ==

=== Business model ===

Describing itself as "a not just for profit" company, PatientsLikeMe does not allow advertising on its site but rather keeps the site free for users by selling research services as well as aggregated, de-identified data to its partners, including pharmaceutical companies and medical device makers. Typical commercial services include helping to optimize the designs of clinical trial protocols, developing new patient-reported outcomes, or identifying the severity of symptoms in specific patient groups. The company enforces transparency about who uses the data, and partners have included most of the largest pharmaceutical companies worldwide, such as UCB, Novartis, Sanofi, Genentech, AstraZeneca, Avanir Pharmaceuticals and Acorda Therapeutics.

== Awards and recognition ==

In 2007, the company was named one of the "15 Companies That Will Change the World" by Business 2.0 and CNN Money and added to the list of "Top Health IT Innovators" by FierceHealthIT. In 2008, PatientsLikeMe received the Prix Ars Electronica Award of Distinction and in March was featured in a New York Times Magazine article entitled "Practicing Patients" by Thomas Goetz, who later went on to feature the site in his book "The Decision Tree". Later in 2008, a television segment with Sanjay Gupta featuring PatientsLikeMe was aired on the CBS Evening News.
Fast Company (magazine)'s 2010 list of Most Innovative Companies ranked PatientsLikeMe at #23. A May 2010 New York Times article entitled "When Patients Meet Online" outlined the platform's potential to advance research. In 2012, Sanjay Gupta featured a research project conducted in collaboration with PatientsLikeMe on CNN's The Next List, profiling collaborator Dr. Max Little. In January 2013, the company was featured as a clue on Jeopardy! In 2016, co-founders Jamie and Ben Heywood were awarded the 2016 Humanitarian Award by the International Alliance of ALS/MND Associations. In 2017, PatientsLikeMe was named by Fast Company as one of the Top 10 Most Innovative Companies in Biotech.

== References ==

== External links ==

New Scientist article "How the MySpace mindset can boost medical science", issue dated May 15, 2008
Newsweek article "Power to the bottom", issue dated September 15, 2008
Automatic summarization
Automatic summarization is the process of shortening a set of data computationally, to create a subset (a summary) that represents the most important or relevant information within the original content. Artificial intelligence algorithms are commonly developed and employed to achieve this, specialized for different types of data. Text summarization is usually implemented by natural language processing methods, designed to locate the most informative sentences in a given document. On the other hand, visual content can be summarized using computer vision algorithms. Image summarization is the subject of ongoing research; existing approaches typically attempt to display the most representative images from a given image collection, or generate a video that only includes the most important content from the entire collection. Video summarization algorithms identify and extract from the original video content the most important frames (key-frames), and/or the most important video segments (key-shots), normally in a temporally ordered fashion. Video summaries simply retain a carefully selected subset of the original video frames and, therefore, are not identical to the output of video synopsis algorithms, where new video frames are being synthesized based on the original video content.

== Commercial products ==

In 2022 Google Docs released an automatic summarization feature.

== Approaches ==

There are two general approaches to automatic summarization: extraction and abstraction.

=== Extraction-based summarization ===

Here, content is extracted from the original data, but the extracted content is not modified in any way. Examples of extracted content include key-phrases that can be used to "tag" or index a text document, or key sentences (including headings) that collectively comprise an abstract, and representative images or video segments, as stated above.
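As a minimal illustration of extraction, a "lead" baseline simply keeps the first few sentences of a document verbatim; the sketch below is illustrative (the naive regex sentence splitter is an assumption, not a robust tokenizer):

```python
import re

def lead_summary(document, k=2):
    """Extractive baseline: return the first k sentences, unmodified."""
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    return " ".join(sentences[:k])
```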
For text, extraction is analogous to the process of skimming, where the summary (if available), headings and subheadings, figures, the first and last paragraphs of a section, and optionally the first and last sentences in a paragraph are read before one chooses to read the entire document in detail. Other examples of extraction include key sequences of text selected for clinical relevance (covering patient/problem, intervention, and outcome).

=== Abstractive-based summarization ===

Abstractive summarization methods generate new text that did not exist in the original text. This has been applied mainly to text. Abstractive methods build an internal semantic representation of the original content (often called a language model), and then use this representation to create a summary that is closer to what a human might express. Abstraction may transform the extracted content by paraphrasing sections of the source document, to condense a text more strongly than extraction. Such transformation, however, is computationally much more challenging than extraction, involving both natural language processing and often a deep understanding of the domain of the original text in cases where the original document relates to a special field of knowledge. "Paraphrasing" is even more difficult to apply to images and videos, which is why most summarization systems are extractive.

=== Aided summarization ===

Approaches aimed at higher summarization quality rely on combined software and human effort. In Machine Aided Human Summarization, extractive techniques highlight candidate passages for inclusion (to which the human adds or removes text). In Human Aided Machine Summarization, a human post-processes software output, in the same way that one edits the output of automatic translation by Google Translate.

== Applications and systems for summarization ==

There are broadly two types of extractive summarization tasks, depending on what the summarization program focuses on.
The first is generic summarization, which focuses on obtaining a generic summary or abstract of the collection (whether documents, sets of images, videos, or news stories). The second is query-relevant summarization, sometimes called query-based summarization, which summarizes objects specific to a query. Summarization systems are able to create both query-relevant text summaries and generic machine-generated summaries, depending on what the user needs. An example of a summarization problem is document summarization, which attempts to automatically produce an abstract from a given document. Sometimes one might be interested in generating a summary from a single source document, while at other times multiple source documents are used (for example, a cluster of articles on the same topic). This problem is called multi-document summarization. A related application is summarizing news articles. Imagine a system that automatically pulls together news articles on a given topic (from the web) and concisely represents the latest news as a summary. Image collection summarization is another application example of automatic summarization. It consists of selecting a representative set of images from a larger set of images. A summary in this context is useful to show the most representative images of results in an image collection exploration system. Video summarization is a related domain, where the system automatically creates a trailer of a long video. This also has applications in consumer or personal videos, where one might want to skip the boring or repetitive actions. Similarly, in surveillance videos, one would want to extract important and suspicious activity, while ignoring all the boring and redundant frames captured. At a very high level, summarization algorithms try to find subsets of objects (like a set of sentences or a set of images) which cover the information of the entire set. This is also called the core-set.
These algorithms model notions like diversity, coverage, information and representativeness of the summary. Query-based summarization techniques additionally model the relevance of the summary to the query. Some techniques and algorithms which naturally model summarization problems include TextRank and PageRank, submodular set functions, determinantal point processes, and maximal marginal relevance (MMR). === Keyphrase extraction === The task is the following. You are given a piece of text, such as a journal article, and you must produce a list of keywords or keyphrases that capture the primary topics discussed in the text. In the case of research articles, many authors provide manually assigned keywords, but most text lacks pre-existing keyphrases. For example, news articles rarely have keyphrases attached, but it would be useful to be able to assign them automatically for a number of applications discussed below. Consider the example text from a news article: "The Army Corps of Engineers, rushing to meet President Bush's promise to protect New Orleans by the start of the 2006 hurricane season, installed defective flood-control pumps last year despite warnings from its own expert that the equipment would fail during a storm, according to documents obtained by The Associated Press". A keyphrase extractor might select "Army Corps of Engineers", "President Bush", "New Orleans", and "defective flood-control pumps" as keyphrases. These are pulled directly from the text. In contrast, an abstractive keyphrase system would somehow internalize the content and generate keyphrases that do not appear in the text, but more closely resemble what a human might produce, such as "political negligence" or "inadequate protection from floods". Abstraction requires a deep understanding of the text, which makes it difficult for a computer system. Keyphrases have many applications. 
They can enable document browsing by providing a short summary, improve information retrieval (if documents have keyphrases assigned, a user could search by keyphrase to produce more reliable hits than a full-text search), and be employed in generating index entries for a large text corpus. Keyword extraction is a highly related task; the distinction between key terms, words, and phrases varies across the literature. ==== Supervised learning approaches ==== Beginning with the work of Turney, many researchers have approached keyphrase extraction as a supervised machine learning problem. Given a document, we construct an example for each unigram, bigram, and trigram found in the text (though other text units are also possible, as discussed below). We then compute various features describing each example (e.g., does the phrase begin with an upper-case letter?). We assume there are known keyphrases available for a set of training documents. Using the known keyphrases, we can assign positive or negative labels to the examples. Then we learn a classifier that can discriminate between positive and negative examples as a function of the features. Some classifiers make a binary classification for a test example, while others assign a probability of being a keyphrase. For instance, in the above text, we might learn a rule that says phrases with initial capital letters are likely to be keyphrases. After training a learner, we can select keyphrases for test documents in the following manner. We apply the same example-generation strategy to the test documents, then run each example through the learner. We can determine the keyphrases by looking at binary classification decisions or probabilities returned from our learned model. If probabilities are given, a threshold is used to select the keyphrases. Keyphrase extractors are generally evaluated using precision and recall. Precision measures how many of the proposed keyphrases are actually correct. 
Recall measures how many of the true keyphrases your system proposed. The two measures can be combined in an F-score, which is the harmonic mean of the two (F = 2PR/(P + R)). Matches between the proposed keyphrases and the known keyphrases can be checked after stemming or applying some other text normalization. Designing a supervised keyphrase extraction system involves deciding on several choices (some of these apply to unsupervised methods, too). The first choice is exactly how to generate examples. Turney and others have used all possible unigrams, bigrams, and trigrams without intervening punctuation and after removing stopwords. Hulth showed that you can get some improvement by selecting examples to be sequences of tokens that match certain patterns of part-of-speech tags. Ideally, the mechanism for generating examples produces all the known labeled keyphrases as candidates, though this is often not the case. For example, if we use only unigrams, bigrams, and trigrams, then we will never be able to extract a known keyphrase containing four words. Thus, recall may suffer. However, generating too many examples can also lead to low precision. We also need to create features that describe the examples and are informative enough to allow a learning algorithm to discriminate keyphrases from non-keyphrases. Typically features involve various term frequencies (how many times a phrase appears in the current text or in a larger corpus), the length of the example, relative position of the first occurrence, various Boolean syntactic features (e.g., contains all caps), etc. The Turney paper used about 12 such features. Hulth uses a reduced set of features, which were found most successful in the KEA (Keyphrase Extraction Algorithm) work derived from Turney's seminal paper. In the end, the system will need to return a list of keyphrases for a test document, so we need to have a way to limit the number. 
Ensemble methods (i.e., using votes from several classifiers) have been used to produce numeric scores that can be thresholded to provide a user-specified number of keyphrases. This is the technique used by Turney with C4.5 decision trees. Hulth used a single binary classifier so the learning algorithm implicitly determines the appropriate number. Once examples and features are created, we need a way to learn to predict keyphrases. Virtually any supervised learning algorithm could be used, such as decision trees, Naive Bayes, and rule induction. In the case of Turney's GenEx algorithm, a genetic algorithm is used to learn parameters for a domain-specific keyphrase extraction algorithm. The extractor follows a series of heuristics to identify keyphrases. The genetic algorithm optimizes parameters for these heuristics with respect to performance on training documents with known keyphrases. ==== Unsupervised approach: TextRank ==== Another keyphrase extraction algorithm is TextRank. While supervised methods have some nice properties, like being able to produce interpretable rules for what features characterize a keyphrase, they also require a large amount of training data. Many documents with known keyphrases are needed. Furthermore, training on a specific domain tends to customize the extraction process to that domain, so the resulting classifier is not necessarily portable, as some of Turney's results demonstrate. Unsupervised keyphrase extraction removes the need for training data. It approaches the problem from a different angle. Instead of trying to learn explicit features that characterize keyphrases, the TextRank algorithm exploits the structure of the text itself to determine keyphrases that appear "central" to the text in the same way that PageRank selects important Web pages. Recall this is based on the notion of "prestige" or "recommendation" from social networks. 
In this way, TextRank does not rely on any previous training data at all, but rather can be run on any arbitrary piece of text, and it can produce output simply based on the text's intrinsic properties. Thus the algorithm is easily portable to new domains and languages. TextRank is a general purpose graph-based ranking algorithm for NLP. Essentially, it runs PageRank on a graph specially designed for a particular NLP task. For keyphrase extraction, it builds a graph using some set of text units as vertices. Edges are based on some measure of semantic or lexical similarity between the text unit vertices. Unlike PageRank, the edges are typically undirected and can be weighted to reflect a degree of similarity. Once the graph is constructed, it is used to form a stochastic matrix, combined with a damping factor (as in the "random surfer model"), and the ranking over vertices is obtained by finding the eigenvector corresponding to eigenvalue 1 (i.e., the stationary distribution of the random walk on the graph). The vertices should correspond to what we want to rank. Potentially, we could do something similar to the supervised methods and create a vertex for each unigram, bigram, trigram, etc. However, to keep the graph small, the authors decide to rank individual unigrams in a first step, and then include a second step that merges highly ranked adjacent unigrams to form multi-word phrases. This has a nice side effect of allowing us to produce keyphrases of arbitrary length. For example, if we rank unigrams and find that "advanced", "natural", "language", and "processing" all get high ranks, then we would look at the original text and see that these words appear consecutively and create a final keyphrase using all four together. Note that the unigrams placed in the graph can be filtered by part of speech. The authors found that adjectives and nouns were the best to include. Thus, some linguistic knowledge comes into play in this step. 
Edges are created based on word co-occurrence in this application of TextRank. Two vertices are connected by an edge if the unigrams appear within a window of size N in the original text. N is typically around 2–10. Thus, "natural" and "language" might be linked in a text about NLP. "Natural" and "processing" would also be linked because they would both appear in the same string of N words. These edges build on the notion of "text cohesion" and the idea that words that appear near each other are likely related in a meaningful way and "recommend" each other to the reader. Since this method simply ranks the individual vertices, we need a way to threshold or produce a limited number of keyphrases. The technique chosen is to set a count T to be a user-specified fraction of the total number of vertices in the graph. Then the top T vertices/unigrams are selected based on their stationary probabilities. A post-processing step is then applied to merge adjacent instances of these T unigrams. As a result, potentially more or fewer than T final keyphrases will be produced, but the number should be roughly proportional to the length of the original text. It is not initially clear why applying PageRank to a co-occurrence graph would produce useful keyphrases. One way to think about it is the following. A word that appears multiple times throughout a text may have many different co-occurring neighbors. For example, in a text about machine learning, the unigram "learning" might co-occur with "machine", "supervised", "unsupervised", and "semi-supervised" in four different sentences. Thus, the "learning" vertex would be a central "hub" that connects to these other modifying words. Running PageRank/TextRank on the graph is likely to rank "learning" highly. Similarly, if the text contains the phrase "supervised classification", then there would be an edge between "supervised" and "classification". 
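The pipeline described above (co-occurrence edges within a window, PageRank over the resulting graph, then selection of the top T unigrams) can be sketched in plain Python. This is a simplified, illustrative sketch with a hypothetical function name: edges are unweighted, part-of-speech filtering is skipped, and the final merging of adjacent unigrams into multi-word phrases is omitted.

```python
from collections import defaultdict

def textrank_unigrams(tokens, window=2, damping=0.85, iters=50, top_frac=0.33):
    """Rank unigrams by PageRank over a co-occurrence graph and return
    the top T, where T is a fraction of the vertex count."""
    # Connect two words by an undirected edge if they co-occur
    # within `window` positions of each other.
    neighbors = defaultdict(set)
    for i, w in enumerate(tokens):
        for u in tokens[max(0, i - window + 1):i]:
            if u != w:
                neighbors[u].add(w)
                neighbors[w].add(u)
    words = list(neighbors)
    # Power iteration for PageRank with the usual damping factor.
    score = {w: 1.0 / len(words) for w in words}
    for _ in range(iters):
        score = {
            w: (1 - damping) / len(words)
            + damping * sum(score[u] / len(neighbors[u]) for u in neighbors[w])
            for w in words
        }
    t = max(1, int(top_frac * len(words)))
    return sorted(words, key=score.get, reverse=True)[:t]
```

On a toy token stream such as `"natural language processing uses natural language data".split()`, the well-connected unigram "language" ranks first; a full implementation would then merge adjacent top-ranked unigrams back into keyphrases.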
If "classification" appears several other places and thus has many neighbors, its importance would contribute to the importance of "supervised". If it ends up with a high rank, it will be selected as one of the top T unigrams, along with "learning" and probably "classification". In the final post-processing step, we would then end up with keyphrases "supervised learning" and "supervised classification". In short, the co-occurrence graph will contain densely connected regions for terms that appear often and in different contexts. A random walk on this graph will have a stationary distribution that assigns large probabilities to the terms in the centers of the clusters. This is similar to densely connected Web pages getting ranked highly by PageRank. This approach has also been used in document summarization, considered below. === Document summarization === Like keyphrase extraction, document summarization aims to identify the essence of a text. The only real difference is that now we are dealing with larger text units—whole sentences instead of words and phrases. ==== Supervised learning approaches ==== Supervised text summarization is very much like supervised keyphrase extraction. Basically, if you have a collection of documents and human-generated summaries for them, you can learn features of sentences that make them good candidates for inclusion in the summary. Features might include the position in the document (i.e., the first few sentences are probably important), the number of words in the sentence, etc. The main difficulty in supervised extractive summarization is that the known summaries must be manually created by extracting sentences so the sentences in an original training document can be labeled as "in summary" or "not in summary". This is not typically how people create summaries, so simply using journal abstracts or existing summaries is usually not sufficient. 
The sentences in these summaries do not necessarily match up with sentences in the original text, so it would be difficult to assign labels to examples for training. Note, however, that these natural summaries can still be used for evaluation purposes, since ROUGE-1 evaluation only considers unigrams. ==== Maximum entropy-based summarization ==== During the DUC 2001 and 2002 evaluation workshops, TNO developed a sentence extraction system for multi-document summarization in the news domain. The system was based on a hybrid system using a Naive Bayes classifier and statistical language models for modeling salience. Although the system exhibited good results, the researchers wanted to explore the effectiveness of a maximum entropy (ME) classifier for the meeting summarization task, as ME is known to be robust against feature dependencies. Maximum entropy has also been applied successfully for summarization in the broadcast news domain. ==== Adaptive summarization ==== A promising approach is adaptive document/text summarization. It involves first recognizing the text genre and then applying summarization algorithms optimized for this genre. Such software has been created. ==== TextRank and LexRank ==== The unsupervised approach to summarization is also quite similar in spirit to unsupervised keyphrase extraction and gets around the issue of costly training data. Some unsupervised summarization approaches are based on finding a "centroid" sentence, which is the mean word vector of all the sentences in the document. Then the sentences can be ranked with regard to their similarity to this centroid sentence. A more principled way to estimate sentence importance is using random walks and eigenvector centrality. LexRank is an algorithm essentially identical to TextRank, and both use this approach for document summarization. 
The two methods were developed by different groups at the same time, and LexRank simply focused on summarization, but could just as easily be used for keyphrase extraction or any other NLP ranking task. In both LexRank and TextRank, a graph is constructed by creating a vertex for each sentence in the document. The edges between sentences are based on some form of semantic similarity or content overlap. While LexRank uses cosine similarity of TF-IDF vectors, TextRank uses a very similar measure based on the number of words two sentences have in common (normalized by the sentences' lengths). The LexRank paper explored using unweighted edges after applying a threshold to the cosine values, but also experimented with using edges with weights equal to the similarity score. TextRank uses continuous similarity scores as weights. In both algorithms, the sentences are ranked by applying PageRank to the resulting graph. A summary is formed by combining the top ranking sentences, using a threshold or length cutoff to limit the size of the summary. It is worth noting that TextRank was applied to summarization exactly as described here, while LexRank was used as part of a larger summarization system (MEAD) that combines the LexRank score (stationary probability) with other features like sentence position and length using a linear combination with either user-specified or automatically tuned weights. In this case, some training documents might be needed, though the TextRank results show the additional features are not absolutely necessary. Unlike TextRank, LexRank has been applied to multi-document summarization. ==== Multi-document summarization ==== Multi-document summarization is an automatic procedure aimed at extraction of information from multiple texts written about the same topic. The resulting summary report allows individual users, such as professional information consumers, to quickly familiarize themselves with information contained in a large cluster of documents. 
In this way, multi-document summarization systems complement news aggregators, performing the next step down the road of coping with information overload. Multi-document summarization may also be done in response to a question. Multi-document summarization creates information reports that are both concise and comprehensive. With different opinions being put together and outlined, every topic is described from multiple perspectives within a single document. While the goal of a brief summary is to simplify the information search and cut the time by pointing to the most relevant source documents, a comprehensive multi-document summary should itself contain the required information, hence limiting the need for accessing original files to cases when refinement is required. Automatic summaries present information extracted from multiple sources algorithmically, without any editorial touch or subjective human intervention, thus making the process unbiased. ===== Diversity ===== Multi-document extractive summarization faces a problem of redundancy. Ideally, we want to extract sentences that are both "central" (i.e., contain the main ideas) and "diverse" (i.e., they differ from one another). For example, in a set of news articles about some event, each article is likely to have many similar sentences. To address this issue, LexRank applies a heuristic post-processing step that adds sentences in rank order, but discards sentences that are too similar to ones already in the summary. This method is called Cross-Sentence Information Subsumption (CSIS). These methods work based on the idea that sentences "recommend" other similar sentences to the reader. Thus, if one sentence is very similar to many others, it will likely be a sentence of great importance. Its importance also stems from the importance of the sentences "recommending" it. 
Thus, to get ranked highly and placed in a summary, a sentence must be similar to many sentences that are in turn also similar to many other sentences. This makes intuitive sense and allows the algorithms to be applied to an arbitrary new text. The methods are domain-independent and easily portable. One could imagine the features indicating important sentences in the news domain might vary considerably from the biomedical domain. However, the unsupervised "recommendation"-based approach applies to any domain. A related method is Maximal Marginal Relevance (MMR), which greedily trades off the relevance of each candidate sentence against its redundancy with the sentences already selected. Another general-purpose graph-based ranking algorithm, in the spirit of Page/Lex/TextRank, handles both "centrality" and "diversity" in a unified mathematical framework based on absorbing Markov chain random walks (a random walk where certain states end the walk). This algorithm is called GRASSHOPPER. In addition to explicitly promoting diversity during the ranking process, GRASSHOPPER incorporates a prior ranking (based on sentence position in the case of summarization). The state-of-the-art results for multi-document summarization are obtained using mixtures of submodular functions, which have achieved the best results on the DUC 2004–2007 document summarization corpora. Similar results were achieved with the use of determinantal point processes (which are a special case of submodular functions) for DUC-04. A new method for multi-lingual multi-document summarization that avoids redundancy generates ideograms to represent the meaning of each sentence in each document, then evaluates similarity by comparing ideogram shape and position. It does not use word frequency, training or preprocessing. It uses two user-supplied parameters: equivalence (when are two sentences to be considered equivalent?) and relevance (how long is the desired summary?). 
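The greedy, redundancy-penalizing selection that MMR performs can be sketched as follows. This is a minimal sketch with hypothetical inputs: `relevance` stands in for whatever centrality score a real system computes, and `similarity` for any pairwise redundancy measure.

```python
def mmr_select(candidates, relevance, similarity, k, lam=0.7):
    """Greedy MMR: repeatedly pick the candidate with the best trade-off
    between relevance and redundancy with what is already selected."""
    selected, remaining = [], list(range(len(candidates)))
    while remaining and len(selected) < k:
        def mmr_score(i):
            # Redundancy is the worst-case similarity to anything chosen so far.
            redundancy = max((similarity(i, j) for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return [candidates[i] for i in selected]
```

With `lam` close to 1 the ranking is purely by relevance; lowering it trades relevance for diversity, which is how near-duplicate sentences from a cluster of news articles get suppressed.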
=== Submodular functions as generic tools for summarization === The idea of a submodular set function has recently emerged as a powerful modeling tool for various summarization problems. Submodular functions naturally model notions of coverage, information, representation and diversity. Moreover, several important combinatorial optimization problems occur as special instances of submodular optimization. For example, the set cover problem is a special case of submodular optimization, since the set cover function is submodular. The set cover function attempts to find a subset of objects which cover a given set of concepts. For example, in document summarization, one would like the summary to cover all important and relevant concepts in the document. This is an instance of set cover. Similarly, the facility location problem is a special case of submodular optimization. The facility location function also naturally models coverage and diversity. Another example of a submodular optimization problem is using a determinantal point process to model diversity. Similarly, the Maximum-Marginal-Relevance procedure can also be seen as an instance of submodular optimization. All these important models encouraging coverage, diversity and information are submodular. Moreover, submodular functions can be efficiently combined, and the resulting function is still submodular. Hence, one could combine one submodular function which models diversity with another which models coverage, and use human supervision to learn the right mixture for the problem. While submodular functions fit summarization problems well, they also admit very efficient algorithms for optimization. For example, a simple greedy algorithm admits a constant-factor approximation guarantee. Moreover, the greedy algorithm is extremely simple to implement and can scale to large datasets, which is very important for summarization problems. 
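That greedy algorithm is short enough to sketch. Below it maximizes a facility-location objective, one illustrative monotone submodular coverage function; the similarity matrix in the test is a made-up stand-in for pairwise sentence similarities.

```python
def facility_location(sim):
    """Coverage objective: each item i is 'covered' by the most similar
    selected item; summing over i rewards summaries that cover everything."""
    def f(selected):
        return sum(max((sim[i][j] for j in selected), default=0.0)
                   for i in range(len(sim)))
    return f

def greedy_submodular(n, f, budget):
    """Greedy maximization of a monotone submodular f over items 0..n-1
    under a cardinality budget; guarantees a (1 - 1/e) fraction of the
    optimal value for such functions."""
    selected = []
    for _ in range(budget):
        # Evaluate the marginal gain of each remaining item.
        gains = [(f(selected + [j]) - f(selected), j)
                 for j in range(n) if j not in selected]
        if not gains:
            break
        gain, best = max(gains)
        if gain <= 0:
            break
        selected.append(best)
    return selected
```

Because near-duplicate items add almost no marginal coverage once one of them is selected, diversity falls out of the objective itself rather than needing a separate post-processing step.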
Submodular functions have achieved state-of-the-art results for almost all summarization problems. For example, work by Lin and Bilmes, 2012 shows that submodular functions achieve the best results to date on the DUC-04, DUC-05, DUC-06 and DUC-07 document summarization tasks. Similarly, work by Lin and Bilmes, 2011, shows that many existing systems for automatic summarization are instances of submodular functions. This was a breakthrough result establishing submodular functions as the right models for summarization problems. Submodular functions have also been used for other summarization tasks. Tschiatschek et al., 2014 show that mixtures of submodular functions achieve state-of-the-art results for image collection summarization. Similarly, Bairi et al., 2015 show the utility of submodular functions for summarizing multi-document topic hierarchies. Submodular functions have also successfully been used for summarizing machine learning datasets. === Applications === Specific applications of automatic summarization include: The Reddit bot "autotldr", created in 2011, summarizes news articles in the comment section of Reddit posts. It was found to be very useful by the Reddit community, which upvoted its summaries hundreds of thousands of times. The name is a reference to TL;DR − Internet slang for "too long; didn't read". Adversarial stylometry may make use of summaries, if the detail lost is not major and the summary is sufficiently stylistically different from the input. == Evaluation == The most common way to evaluate the informativeness of automatic summaries is to compare them with human-made model summaries. Evaluation can be intrinsic or extrinsic, and inter-textual or intra-textual. === Intrinsic versus extrinsic === Intrinsic evaluation assesses the summaries directly, while extrinsic evaluation evaluates how the summarization system affects the completion of some other task. Intrinsic evaluations have assessed mainly the coherence and informativeness of summaries. 
Extrinsic evaluations, on the other hand, have tested the impact of summarization on tasks like relevance assessment, reading comprehension, etc. === Inter-textual versus intra-textual === Intra-textual evaluation assesses the output of a specific summarization system, while inter-textual evaluation focuses on contrastive analysis of outputs of several summarization systems. Human judgement often varies greatly in what it considers a "good" summary, so creating an automatic evaluation process is particularly difficult. Manual evaluation can be used, but this is both time- and labor-intensive, as it requires humans to read not only the summaries but also the source documents. Other issues are those concerning coherence and coverage. The most common way to evaluate summaries is ROUGE (Recall-Oriented Understudy for Gisting Evaluation). It is very common for summarization and translation systems in NIST's Document Understanding Conferences. ROUGE is a recall-based measure of how well a summary covers the content of human-generated summaries known as references. It calculates n-gram overlaps between automatically generated summaries and previously written human summaries. It is recall-based to encourage inclusion of all important topics in summaries. Recall can be computed with respect to unigram, bigram, trigram, or 4-gram matching. For example, ROUGE-1 is the fraction of unigrams that appear in both the reference summary and the automatic summary out of all unigrams in the reference summary. If there are multiple reference summaries, their scores are averaged. A high level of overlap should indicate a high degree of shared concepts between the two summaries. ROUGE cannot determine if the result is coherent, that is, whether sentences flow together sensibly. High-order n-gram ROUGE measures help to some degree. Another unsolved problem is anaphora resolution. 
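ROUGE-1 recall as just described reduces to a clipped unigram-overlap count. A minimal sketch, ignoring stemming, tokenization subtleties, and averaging over multiple references:

```python
from collections import Counter

def rouge_1_recall(reference, candidate):
    """Fraction of reference unigrams that also appear in the candidate,
    with clipped counts so a repeated candidate word cannot be credited
    more times than it occurs in the reference."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum(min(count, cand[word]) for word, count in ref.items())
    return overlap / sum(ref.values())
```

Being recall-oriented, the score rewards covering the reference's content but does not by itself penalize a verbose candidate; precision-style and F-measure ROUGE variants exist for that.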
Similarly, for image summarization, Tschiatschek et al. developed a Visual-ROUGE score which judges the performance of algorithms for image summarization. === Domain-specific versus domain-independent summarization === Domain-independent summarization techniques apply sets of general features to identify information-rich text segments. Recent research focuses on domain-specific summarization using knowledge specific to the text's domain, such as medical knowledge and ontologies for summarizing medical texts. === Qualitative === The main drawback of the evaluation systems so far is that we need a reference summary (for some methods, more than one) to compare automatic summaries with models. This is a hard and expensive task. Much effort has to be made to create corpora of texts and their corresponding summaries. Furthermore, some methods require manual annotation of the summaries (e.g. SCU in the Pyramid Method). Moreover, they all perform a quantitative evaluation with regard to different similarity metrics. == History == The first publication in the area dates back to 1957 (Hans Peter Luhn), starting with a statistical technique. Research increased significantly in 2015. Term frequency–inverse document frequency had been used by 2016. Pattern-based summarization was the most powerful option for multi-document summarization found by 2016. In the following year it was surpassed by latent semantic analysis (LSA) combined with non-negative matrix factorization (NMF). Although they did not replace other approaches and are often combined with them, by 2019 machine learning methods dominated the extractive summarization of single documents, which was considered to be nearing maturity. By 2020, the field was still very active and research was shifting towards abstractive summarization and real-time summarization. 
=== Recent approaches === Recently, the rise of transformer models replacing more traditional RNNs (LSTMs) has provided flexibility in the mapping of text sequences to text sequences of a different type, which is well suited to automatic summarization. This includes models such as T5 and Pegasus. == See also == Sentence extraction Text mining Multi-document summarization == References == == Works cited == Potthast, Martin; Hagen, Matthias; Stein, Benno (2016). Author Obfuscation: Attacking the State of the Art in Authorship Verification (PDF). Conference and Labs of the Evaluation Forum. == Further reading == Hercules, Dalianis (2003). Porting and evaluation of automatic summarization. Roxana, Angheluta (2002). The Use of Topic Segmentation for Automatic Summarization. Anne, Buist (2004). Automatic Summarization of Meeting Data: A Feasibility Study (PDF). Archived from the original (PDF) on 2021-01-23. Retrieved 2020-07-19. Annie, Louis (2009). Performance Confidence Estimation for Automatic Summarization. Elena, Lloret and Manuel, Palomar (2009). Challenging Issues of Automatic Summarization: Relevance Detection and Quality-based Evaluation. Archived from the original on 2018-10-03. Retrieved 2018-10-03. Andrew, Goldberg (2007). Automatic Summarization. Alrehamy, Hassan (2018). "SemCluster: Unsupervised Automatic Keyphrase Extraction Using Affinity Propagation". Advances in Computational Intelligence Systems. Advances in Intelligent Systems and Computing. Vol. 650. pp. 222–235. doi:10.1007/978-3-319-66939-7_19. ISBN 978-3-319-66938-0. Endres-Niggemeyer, Brigitte (1998). Summarizing Information. Springer. ISBN 978-3-540-63735-6. Marcu, Daniel (2000). The Theory and Practice of Discourse Parsing and Summarization. MIT Press. ISBN 978-0-262-13372-2. Mani, Inderjeet (2001). Automatic Summarization. ISBN 978-1-58811-060-2. Huff, Jason (2010). 
AutoSummarize. Conceptual artwork using automatic summarization software in Microsoft Word 2008. Lehmam, Abderrafih (2010). Essential summarizer: innovative automatic text summarization software in twenty languages. Riao '10. pp. 216–217. Published in Proceeding RIAO'10 Adaptivity, Personalization and Fusion of Heterogeneous Information, CID Paris, France. Xiaojin, Zhu, Andrew Goldberg, Jurgen Van Gael, and David Andrzejewski (2007). Improving diversity in ranking using absorbing random walks (PDF). The GRASSHOPPER algorithm. Miranda-Jiménez, Sabino, Gelbukh, Alexander, and Sidorov, Grigori (2013). "Summarizing Conceptual Graphs for Automatic Summarization Task". Conceptual Structures for STEM Research and Education. Lecture Notes in Computer Science. Vol. 7735. pp. 245–253. doi:10.1007/978-3-642-35786-2_18. ISBN 978-3-642-35785-5.
Bayesian structural time series
Bayesian structural time series (BSTS) model is a statistical technique used for feature selection, time series forecasting, nowcasting, inferring causal impact and other applications. The model is designed to work with time series data. The model also has promising applications in the field of analytical marketing. In particular, it can be used in order to assess how much different marketing campaigns have contributed to the change in web search volumes, product sales, brand popularity and other relevant indicators. Difference-in-differences models and interrupted time series designs are alternatives to this approach. "In contrast to classical difference-in-differences schemes, state-space models make it possible to (i) infer the temporal evolution of attributable impact, (ii) incorporate empirical priors on the parameters in a fully Bayesian treatment, and (iii) flexibly accommodate multiple sources of variation, including the time-varying influence of contemporaneous covariates, i.e., synthetic controls." == General model description == The model consists of three main components: a Kalman filter, used for time series decomposition (in this step, a researcher can add different state variables: trend, seasonality, regression, and others); the spike-and-slab method, which selects the most important regression predictors; and Bayesian model averaging, which combines the results and calculates the prediction. The model can be used to discover causation by comparing its counterfactual prediction with the observed data. A possible drawback of the model can be its relatively complicated mathematical underpinning and difficult implementation as a computer program. However, the programming language R has ready-to-use packages for calculating the BSTS model, which do not require a strong mathematical background from a researcher. 
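As a concrete illustration of the state-space machinery, here is the Kalman filtering recursion for the simplest structural component, a local level (random-walk) trend. This is only a sketch with the two variances assumed known; BSTS proper places priors on them and integrates them out by MCMC, which is what the ready-made R packages automate.

```python
def local_level_filter(y, sigma_obs2=1.0, sigma_level2=0.1):
    """Kalman filter for the local-level model:
        level_t = level_{t-1} + w_t,   w_t ~ N(0, sigma_level2)
        y_t     = level_t + v_t,       v_t ~ N(0, sigma_obs2)
    Returns the filtered level means."""
    mu, P = y[0], 1.0                    # initial state mean and variance
    levels = []
    for obs in y:
        P = P + sigma_level2             # predict: level follows a random walk
        K = P / (P + sigma_obs2)         # Kalman gain
        mu = mu + K * (obs - mu)         # update with the new observation
        P = (1.0 - K) * P
        levels.append(mu)
    return levels
```

The ratio of the two variances controls how quickly the filtered level tracks the data; in the Bayesian treatment this knob is learned from the data rather than fixed by hand.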
== See also == Bayesian inference using Gibbs sampling Correlation does not imply causation Spike-and-slab regression == References == == Further reading == Scott, S. L., & Varian, H. R. 2014a. Bayesian variable selection for nowcasting economic time series. Economic Analysis of the Digital Economy. Scott, S. L., & Varian, H. R. 2014b. Predicting the present with bayesian structural time series. International Journal of Mathematical Modelling and Numerical Optimisation. Varian, H. R. 2014. Big Data: New Tricks for Econometrics. Journal of Economic Perspectives Brodersen, K. H., Gallusser, F., Koehler, J., Remy, N., & Scott, S. L. 2015. Inferring causal impact using Bayesian structural time-series models. The Annals of Applied Statistics. R package "bsts". R package "CausalImpact". O’Hara, R. B., & Sillanpää, M. J. 2009. A review of Bayesian variable selection methods: what, how and which. Bayesian analysis. Hoeting, J. A., Madigan, D., Raftery, A. E., & Volinsky, C. T. 1999. Bayesian model averaging: a tutorial. Statistical science.
Set-theoretic limit
In mathematics, the limit of a sequence of sets A 1 , A 2 , … {\displaystyle A_{1},A_{2},\ldots } (subsets of a common set X {\displaystyle X} ) is a set whose elements are determined by the sequence in either of two equivalent ways: (1) by upper and lower bounds on the sequence that converge monotonically to the same set (analogous to convergence of real-valued sequences) and (2) by convergence of a sequence of indicator functions which are themselves real-valued. As is the case with sequences of other objects, convergence is not necessary or even usual. More generally, again analogous to real-valued sequences, the less restrictive limit infimum and limit supremum of a set sequence always exist and can be used to determine convergence: the limit exists if the limit infimum and limit supremum are identical. (See below). Such set limits are essential in measure theory and probability. It is a common misconception that the limits infimum and supremum described here involve sets of accumulation points, that is, sets of x = lim k → ∞ x k , {\displaystyle x=\lim _{k\to \infty }x_{k},} where each x k {\displaystyle x_{k}} is in some A n k . {\displaystyle A_{n_{k}}.} This is only true if convergence is determined by the discrete metric (that is, x n → x {\displaystyle x_{n}\to x} if there is N {\displaystyle N} such that x n = x {\displaystyle x_{n}=x} for all n ≥ N {\displaystyle n\geq N} ). This article is restricted to that situation as it is the only one relevant for measure theory and probability. See the examples below. (On the other hand, there are more general topological notions of set convergence that do involve accumulation points under different metrics or topologies.) == Definitions == === The two definitions === Suppose that ( A n ) n = 1 ∞ {\displaystyle \left(A_{n}\right)_{n=1}^{\infty }} is a sequence of sets. The two equivalent definitions are as follows. 
Using union and intersection: define lim inf n → ∞ A n = ⋃ n ≥ 1 ⋂ j ≥ n A j {\displaystyle \liminf _{n\to \infty }A_{n}=\bigcup _{n\geq 1}\bigcap _{j\geq n}A_{j}} and lim sup n → ∞ A n = ⋂ n ≥ 1 ⋃ j ≥ n A j {\displaystyle \limsup _{n\to \infty }A_{n}=\bigcap _{n\geq 1}\bigcup _{j\geq n}A_{j}} If these two sets are equal, then the set-theoretic limit of the sequence A n {\displaystyle A_{n}} exists and is equal to that common set. Either set as described above can be used to get the limit, and there may be other means to get the limit as well. Using indicator functions: let 1 A n ( x ) {\displaystyle \mathbb {1} _{A_{n}}(x)} equal 1 {\displaystyle 1} if x ∈ A n , {\displaystyle x\in A_{n},} and 0 {\displaystyle 0} otherwise. Define lim inf n → ∞ A n = { x ∈ X : lim inf n → ∞ 1 A n ( x ) = 1 } {\displaystyle \liminf _{n\to \infty }A_{n}={\Bigl \{}x\in X:\liminf _{n\to \infty }\mathbb {1} _{A_{n}}(x)=1{\Bigr \}}} and lim sup n → ∞ A n = { x ∈ X : lim sup n → ∞ 1 A n ( x ) = 1 } , {\displaystyle \limsup _{n\to \infty }A_{n}={\Bigl \{}x\in X:\limsup _{n\to \infty }\mathbb {1} _{A_{n}}(x)=1{\Bigr \}},} where the expressions inside the brackets on the right are, respectively, the limit infimum and limit supremum of the real-valued sequence 1 A n ( x ) . {\displaystyle \mathbb {1} _{A_{n}}(x).} Again, if these two sets are equal, then the set-theoretic limit of the sequence A n {\displaystyle A_{n}} exists and is equal to that common set, and either set as described above can be used to get the limit. To see the equivalence of the definitions, consider the limit infimum. The use of De Morgan's law below explains why this suffices for the limit supremum. Since indicator functions take only values 0 {\displaystyle 0} and 1 , {\displaystyle 1,} lim inf n → ∞ 1 A n ( x ) = 1 {\displaystyle \liminf _{n\to \infty }\mathbb {1} _{A_{n}}(x)=1} if and only if 1 A n ( x ) {\displaystyle \mathbb {1} _{A_{n}}(x)} takes value 0 {\displaystyle 0} only finitely many times. 
Equivalently, x ∈ ⋃ n ≥ 1 ⋂ j ≥ n A j {\textstyle x\in \bigcup _{n\geq 1}\bigcap _{j\geq n}A_{j}} if and only if there exists n {\displaystyle n} such that the element is in A m {\displaystyle A_{m}} for every m ≥ n , {\displaystyle m\geq n,} which is to say if and only if x ∉ A n {\displaystyle x\not \in A_{n}} for only finitely many n . {\displaystyle n.} Therefore, x {\displaystyle x} is in the lim inf n → ∞ A n {\displaystyle \liminf _{n\to \infty }A_{n}} if and only if x {\displaystyle x} is in all but finitely many A n . {\displaystyle A_{n}.} For this reason, a shorthand phrase for the limit infimum is " x {\displaystyle x} is in A n {\displaystyle A_{n}} all but finitely often", typically expressed by writing " A n {\displaystyle A_{n}} a.b.f.o.". Similarly, an element x {\displaystyle x} is in the limit supremum if, no matter how large n {\displaystyle n} is, there exists m ≥ n {\displaystyle m\geq n} such that the element is in A m . {\displaystyle A_{m}.} That is, x {\displaystyle x} is in the limit supremum if and only if x {\displaystyle x} is in infinitely many A n . {\displaystyle A_{n}.} For this reason, a shorthand phrase for the limit supremum is " x {\displaystyle x} is in A n {\displaystyle A_{n}} infinitely often", typically expressed by writing " A n {\displaystyle A_{n}} i.o.". To put it another way, the limit infimum consists of elements that "eventually stay forever" (are in each set after some n {\displaystyle n} ), while the limit supremum consists of elements that "never leave forever" (are in some set after each n {\displaystyle n} ). Or more formally: === Monotone sequences === The sequence ( A n ) {\displaystyle \left(A_{n}\right)} is said to be nonincreasing if A n + 1 ⊆ A n {\displaystyle A_{n+1}\subseteq A_{n}} for each n , {\displaystyle n,} and nondecreasing if A n ⊆ A n + 1 {\displaystyle A_{n}\subseteq A_{n+1}} for each n . {\displaystyle n.} In each of these cases the set limit exists. 
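When the sequence of sets is periodic, every tail { A_j : j ≥ n } contains exactly the sets of one period, so both definitions above can be evaluated by a finite computation. A minimal Python sketch, using a hypothetical period-2 sequence that is not an example from this article:

```python
# Hypothetical periodic sequence: A_n = {0, 1} for even n, {1, 2} for odd n.
def A(n):
    return {0, 1} if n % 2 == 0 else {1, 2}

PERIOD = 2  # every tail {A_j : j >= n} contains exactly the sets of one period

# Union/intersection definition: the inner intersection (resp. union) over any
# tail equals the intersection (resp. union) over a single period.
tail = [A(j) for j in range(PERIOD)]
liminf_sets = set.intersection(*tail)   # elements in all but finitely many A_n
limsup_sets = set.union(*tail)          # elements in infinitely many A_n

# Indicator-function definition: 1_{A_n}(x) is periodic in n, so its liminf is
# its minimum over one period and its limsup is its maximum over one period.
universe = {0, 1, 2}
liminf_ind = {x for x in universe if min(x in A(n) for n in range(PERIOD))}
limsup_ind = {x for x in universe if max(x in A(n) for n in range(PERIOD))}

# Both definitions agree: liminf = {1}, limsup = {0, 1, 2}.
```

Here 1 is in every A_n ("all but finitely often"), while 0 and 2 each appear in infinitely many A_n but also miss infinitely many, so the limit does not exist.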
Consider, for example, a nonincreasing sequence ( A n ) . {\displaystyle \left(A_{n}\right).} Then ⋂ j ≥ n A j = ⋂ j ≥ 1 A j and ⋃ j ≥ n A j = A n . {\displaystyle \bigcap _{j\geq n}A_{j}=\bigcap _{j\geq 1}A_{j}{\text{ and }}\bigcup _{j\geq n}A_{j}=A_{n}.} From these it follows that lim inf n → ∞ A n = ⋃ n ≥ 1 ⋂ j ≥ n A j = ⋂ j ≥ 1 A j = ⋂ n ≥ 1 ⋃ j ≥ n A j = lim sup n → ∞ A n . {\displaystyle \liminf _{n\to \infty }A_{n}=\bigcup _{n\geq 1}\bigcap _{j\geq n}A_{j}=\bigcap _{j\geq 1}A_{j}=\bigcap _{n\geq 1}\bigcup _{j\geq n}A_{j}=\limsup _{n\to \infty }A_{n}.} Similarly, if ( A n ) {\displaystyle \left(A_{n}\right)} is nondecreasing then lim n → ∞ A n = ⋃ j ≥ 1 A j . {\displaystyle \lim _{n\to \infty }A_{n}=\bigcup _{j\geq 1}A_{j}.} The Cantor set is defined this way. == Properties == If the limit of 1 A n ( x ) , {\displaystyle \mathbb {1} _{A_{n}}(x),} as n {\displaystyle n} goes to infinity, exists for all x {\displaystyle x} then lim n → ∞ A n = { x ∈ X : lim n → ∞ 1 A n ( x ) = 1 } . {\displaystyle \lim _{n\to \infty }A_{n}=\left\{x\in X:\lim _{n\to \infty }\mathbb {1} _{A_{n}}(x)=1\right\}.} Otherwise, the limit for ( A n ) {\displaystyle \left(A_{n}\right)} does not exist. It can be shown that the limit infimum is contained in the limit supremum: lim inf n → ∞ A n ⊆ lim sup n → ∞ A n , {\displaystyle \liminf _{n\to \infty }A_{n}\subseteq \limsup _{n\to \infty }A_{n},} for example, simply by observing that x ∈ A n {\displaystyle x\in A_{n}} all but finitely often implies x ∈ A n {\displaystyle x\in A_{n}} infinitely often. Using the monotonicity of B n = ⋂ j ≥ n A j {\textstyle B_{n}=\bigcap _{j\geq n}A_{j}} and of C n = ⋃ j ≥ n A j , {\textstyle C_{n}=\bigcup _{j\geq n}A_{j},} lim inf n → ∞ A n = lim n → ∞ ⋂ j ≥ n A j and lim sup n → ∞ A n = lim n → ∞ ⋃ j ≥ n A j . 
{\displaystyle \liminf _{n\to \infty }A_{n}=\lim _{n\to \infty }\bigcap _{j\geq n}A_{j}\quad {\text{ and }}\quad \limsup _{n\to \infty }A_{n}=\lim _{n\to \infty }\bigcup _{j\geq n}A_{j}.} By using De Morgan's law twice, with set complement A c := X ∖ A , {\displaystyle A^{c}:=X\setminus A,} lim inf n → ∞ A n = ⋃ n ( ⋃ j ≥ n A j c ) c = ( ⋂ n ⋃ j ≥ n A j c ) c = ( lim sup n → ∞ A n c ) c . {\displaystyle \liminf _{n\to \infty }A_{n}=\bigcup _{n}\left(\bigcup _{j\geq n}A_{j}^{c}\right)^{c}=\left(\bigcap _{n}\bigcup _{j\geq n}A_{j}^{c}\right)^{c}=\left(\limsup _{n\to \infty }A_{n}^{c}\right)^{c}.} That is, x ∈ A n {\displaystyle x\in A_{n}} all but finitely often is the same as x ∉ A n {\displaystyle x\not \in A_{n}} finitely often. From the second definition above and the definitions for limit infimum and limit supremum of a real-valued sequence, 1 lim inf n → ∞ A n ( x ) = lim inf n → ∞ 1 A n ( x ) = sup n ≥ 1 inf j ≥ n 1 A j ( x ) {\displaystyle \mathbb {1} _{\liminf _{n\to \infty }A_{n}}(x)=\liminf _{n\to \infty }\mathbb {1} _{A_{n}}(x)=\sup _{n\geq 1}\inf _{j\geq n}\mathbb {1} _{A_{j}}(x)} and 1 lim sup n → ∞ A n ( x ) = lim sup n → ∞ 1 A n ( x ) = inf n ≥ 1 sup j ≥ n 1 A j ( x ) . {\displaystyle \mathbb {1} _{\limsup _{n\to \infty }A_{n}}(x)=\limsup _{n\to \infty }\mathbb {1} _{A_{n}}(x)=\inf _{n\geq 1}\sup _{j\geq n}\mathbb {1} _{A_{j}}(x).} Suppose F {\displaystyle {\mathcal {F}}} is a 𝜎-algebra of subsets of X . {\displaystyle X.} That is, F {\displaystyle {\mathcal {F}}} is nonempty and is closed under complement and under unions and intersections of countably many sets. Then, by the first definition above, if each A n ∈ F {\displaystyle A_{n}\in {\mathcal {F}}} then both lim inf n → ∞ A n {\displaystyle \liminf _{n\to \infty }A_{n}} and lim sup n → ∞ A n {\displaystyle \limsup _{n\to \infty }A_{n}} are elements of F . {\displaystyle {\mathcal {F}}.} == Examples == Let A n = ( − 1 n , 1 − 1 n ] . 
{\displaystyle A_{n}=\left(-{\tfrac {1}{n}},1-{\tfrac {1}{n}}\right].} Then lim inf n → ∞ A n = ⋃ n ⋂ j ≥ n ( − 1 j , 1 − 1 j ] = ⋃ n [ 0 , 1 − 1 n ] = [ 0 , 1 ) {\displaystyle \liminf _{n\to \infty }A_{n}=\bigcup _{n}\bigcap _{j\geq n}\left(-{\tfrac {1}{j}},1-{\tfrac {1}{j}}\right]=\bigcup _{n}\left[0,1-{\tfrac {1}{n}}\right]=[0,1)} and lim sup n → ∞ A n = ⋂ n ⋃ j ≥ n ( − 1 j , 1 − 1 j ] = ⋂ n ( − 1 n , 1 ) = [ 0 , 1 ) {\displaystyle \limsup _{n\to \infty }A_{n}=\bigcap _{n}\bigcup _{j\geq n}\left(-{\tfrac {1}{j}},1-{\tfrac {1}{j}}\right]=\bigcap _{n}\left(-{\tfrac {1}{n}},1\right)=[0,1)} so lim n → ∞ A n = [ 0 , 1 ) {\displaystyle \lim _{n\to \infty }A_{n}=[0,1)} exists. Change the previous example to A n = ( ( − 1 ) n n , 1 − ( − 1 ) n n ] . {\displaystyle A_{n}=\left({\tfrac {(-1)^{n}}{n}},1-{\tfrac {(-1)^{n}}{n}}\right].} Then lim inf n → ∞ A n = ⋃ n ⋂ j ≥ n ( ( − 1 ) j j , 1 − ( − 1 ) j j ] = ⋃ n ( 1 2 n , 1 − 1 2 n ] = ( 0 , 1 ) {\displaystyle \liminf _{n\to \infty }A_{n}=\bigcup _{n}\bigcap _{j\geq n}\left({\tfrac {(-1)^{j}}{j}},1-{\tfrac {(-1)^{j}}{j}}\right]=\bigcup _{n}\left({\tfrac {1}{2n}},1-{\tfrac {1}{2n}}\right]=(0,1)} and lim sup n → ∞ A n = ⋂ n ⋃ j ≥ n ( ( − 1 ) j j , 1 − ( − 1 ) j j ] = ⋂ n ( − 1 2 n − 1 , 1 + 1 2 n − 1 ] = [ 0 , 1 ] , {\displaystyle \limsup _{n\to \infty }A_{n}=\bigcap _{n}\bigcup _{j\geq n}\left({\tfrac {(-1)^{j}}{j}},1-{\tfrac {(-1)^{j}}{j}}\right]=\bigcap _{n}\left(-{\tfrac {1}{2n-1}},1+{\tfrac {1}{2n-1}}\right]=[0,1],} so lim n → ∞ A n {\displaystyle \lim _{n\to \infty }A_{n}} does not exist, despite the fact that the left and right endpoints of the intervals converge to 0 and 1, respectively. Let A n = { 0 , 1 n , 2 n , … , n − 1 n , 1 } . 
{\displaystyle A_{n}=\left\{0,{\tfrac {1}{n}},{\tfrac {2}{n}},\ldots ,{\tfrac {n-1}{n}},1\right\}.} Then ⋃ j ≥ n A j = Q ∩ [ 0 , 1 ] {\displaystyle \bigcup _{j\geq n}A_{j}=\mathbb {Q} \cap [0,1]} is the set of all rational numbers between 0 and 1 (inclusive), since even for j < n {\displaystyle j<n} and 0 ≤ k ≤ j , {\displaystyle 0\leq k\leq j,} k j = n k n j {\displaystyle {\tfrac {k}{j}}={\tfrac {nk}{nj}}} is an element of the above. Therefore, lim sup n → ∞ A n = Q ∩ [ 0 , 1 ] . {\displaystyle \limsup _{n\to \infty }A_{n}=\mathbb {Q} \cap [0,1].} On the other hand, ⋂ j ≥ n A j = { 0 , 1 } , {\displaystyle \bigcap _{j\geq n}A_{j}=\{0,1\},} which implies lim inf n → ∞ A n = { 0 , 1 } . {\displaystyle \liminf _{n\to \infty }A_{n}=\{0,1\}.} In this case, the sequence A 1 , A 2 , … {\displaystyle A_{1},A_{2},\ldots } does not have a limit. Note that lim n → ∞ A n {\displaystyle \lim _{n\to \infty }A_{n}} is not the set of accumulation points, which would be the entire interval [ 0 , 1 ] {\displaystyle [0,1]} (according to the usual Euclidean metric). == Probability uses == Set limits, particularly the limit infimum and the limit supremum, are essential for probability and measure theory. Such limits are used to calculate (or prove) the probabilities and measures of other, more purposeful, sets. For the following, ( X , F , P ) {\displaystyle (X,{\mathcal {F}},\mathbb {P} )} is a probability space, which means F {\displaystyle {\mathcal {F}}} is a σ-algebra of subsets of X {\displaystyle X} and P {\displaystyle \mathbb {P} } is a probability measure defined on that σ-algebra. Sets in the σ-algebra are known as events. If A 1 , A 2 , … {\displaystyle A_{1},A_{2},\ldots } is a monotone sequence of events in F {\displaystyle {\mathcal {F}}} then lim n → ∞ A n {\displaystyle \lim _{n\to \infty }A_{n}} exists and P ( lim n → ∞ A n ) = lim n → ∞ P ( A n ) . 
{\displaystyle \mathbb {P} \left(\lim _{n\to \infty }A_{n}\right)=\lim _{n\to \infty }\mathbb {P} \left(A_{n}\right).} === Borel–Cantelli lemmas === In probability, the two Borel–Cantelli lemmas can be useful for showing that the limsup of a sequence of events has probability equal to 1 or to 0. The statement of the first (original) Borel–Cantelli lemma is that if ∑ n = 1 ∞ P ( A n ) < ∞ {\textstyle \sum _{n=1}^{\infty }\mathbb {P} \left(A_{n}\right)<\infty } then P ( lim sup n → ∞ A n ) = 0. {\textstyle \mathbb {P} \left(\limsup _{n\to \infty }A_{n}\right)=0.} The second Borel–Cantelli lemma is a partial converse: if the events A 1 , A 2 , … {\displaystyle A_{1},A_{2},\ldots } are independent and ∑ n = 1 ∞ P ( A n ) = ∞ , {\textstyle \sum _{n=1}^{\infty }\mathbb {P} \left(A_{n}\right)=\infty ,} then P ( lim sup n → ∞ A n ) = 1. {\textstyle \mathbb {P} \left(\limsup _{n\to \infty }A_{n}\right)=1.} === Almost sure convergence === One of the most important applications to probability is for demonstrating the almost sure convergence of a sequence of random variables. The event that a sequence of random variables Y 1 , Y 2 , … {\displaystyle Y_{1},Y_{2},\ldots } converges to another random variable Y {\displaystyle Y} is formally expressed as { lim sup n → ∞ | Y n − Y | = 0 } . {\textstyle \left\{\limsup _{n\to \infty }\left|Y_{n}-Y\right|=0\right\}.} It would be a mistake, however, to write this simply as a limsup of events. That is, this is not the event lim sup n → ∞ { | Y n − Y | = 0 } {\textstyle \limsup _{n\to \infty }\left\{\left|Y_{n}-Y\right|=0\right\}} ! Instead, the complement of the event is { lim sup n → ∞ | Y n − Y | ≠ 0 } = { lim sup n → ∞ | Y n − Y | > 1 k for some k } = ⋃ k ≥ 1 ⋂ n ≥ 1 ⋃ j ≥ n { | Y j − Y | > 1 k } = lim k → ∞ lim sup n → ∞ { | Y n − Y | > 1 k } . {\displaystyle {\begin{aligned}\left\{\limsup _{n\to \infty }\left|Y_{n}-Y\right|\neq 0\right\}&=\left\{\limsup _{n\to \infty }\left|Y_{n}-Y\right|>{\frac {1}{k}}{\text{ for some }}k\right\}\\&=\bigcup _{k\geq 1}\bigcap _{n\geq 1}\bigcup _{j\geq n}\left\{\left|Y_{j}-Y\right|>{\tfrac {1}{k}}\right\}\\&=\lim _{k\to \infty }\limsup _{n\to \infty }\left\{\left|Y_{n}-Y\right|>{\tfrac {1}{k}}\right\}.\end{aligned}}} Therefore, P ( { lim sup n → ∞ | Y n − Y | ≠ 0 } ) = lim k → ∞ P ( lim sup n → ∞ { | Y n − Y | > 1 k } ) . 
{\displaystyle \mathbb {P} \left(\left\{\limsup _{n\to \infty }\left|Y_{n}-Y\right|\neq 0\right\}\right)=\lim _{k\to \infty }\mathbb {P} \left(\limsup _{n\to \infty }\left\{\left|Y_{n}-Y\right|>{\tfrac {1}{k}}\right\}\right).} == See also == List of set identities and relations – Equalities for combinations of sets Set theory – Branch of mathematics that studies sets == References ==
Coupling card trick
The Kruskal count (also known as Kruskal's principle, Dynkin–Kruskal count, Dynkin's counting trick, Dynkin's card trick, coupling card trick or shift coupling) is a probabilistic concept originally demonstrated by the Russian mathematician Evgenii Borisovich Dynkin in the 1950s or 1960s in a discussion of coupling effects, and rediscovered as a card trick by the American mathematician Martin David Kruskal in the early 1970s as a side-product of work on another problem. It was published by Kruskal's friend Martin Gardner and magician Karl Fulves in 1975. It is related to a similar trick published by magician Alexander F. Kraus in 1957 as Sum total and later called the Kraus principle. Besides its use as a card trick, the underlying phenomenon has applications in cryptography, code breaking, software tamper protection, code self-synchronization, control-flow resynchronization, design of variable-length codes and variable-length instruction sets, web navigation, object alignment, and others. == Card trick == The trick is performed with cards, but it is more a magical-looking effect than a conventional magic trick. The magician has no access to the cards, which are manipulated by members of the audience, so sleight of hand is not possible. Rather, the effect is based on the mathematical fact that the output of a Markov chain, under certain conditions, is typically independent of the input. A simplified version using the hands of a clock, performed by David Copperfield, is as follows. A volunteer picks a number from one to twelve and does not reveal it to the magician. The volunteer is instructed to start from 12 on the clock and move clockwise by a number of spaces equal to the number of letters that the chosen number has when spelled out. This is then repeated, moving by the number of letters in the new number. The output after three or more moves does not depend on the initially chosen number, and therefore the magician can predict it. 
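The coupling in the clock version can be checked by direct simulation: after three moves, every possible starting number has merged onto the same trajectory. The sketch below hard-codes the letter counts of the English number names (one = 3, two = 3, three = 5, and so on).

```python
# Letter counts of the spelled-out numbers "one" through "twelve".
LETTERS = {1: 3, 2: 3, 3: 5, 4: 4, 5: 4, 6: 3,
           7: 5, 8: 5, 9: 4, 10: 3, 11: 6, 12: 6}

def clock_trick(chosen, moves=3):
    """Start at 12; the first move uses the chosen number's letter count,
    each later move uses the letter count of the current position's name."""
    pos = 12
    step = LETTERS[chosen]
    for _ in range(moves):
        pos = (pos + step - 1) % 12 + 1   # move clockwise on the 12-hour dial
        step = LETTERS[pos]
    return pos

# Every secretly chosen number lands on the same hour after three moves:
final = {clock_trick(c) for c in range((1), 13)}
print(final)   # {1}
```

Once two trajectories occupy the same position at the same move count they remain identical forever, which is exactly the coupling argument behind the card version.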
== See also == Coupling (probability) Discrete logarithm Equifinality Ergodic theory Geometric distribution Overlapping instructions Pollard's kangaroo algorithm Random walk Self-synchronizing code == Notes == == References == == Further reading == Dynkin [Ды́нкин], Evgenii Borisovich [Евге́ний Бори́сович]; Uspenskii [Успе́нский], Vladimir Andreyevich [Влади́мир Андре́евич] (1963). Written at University of Moscow, Moscow, Russia. Putnam, Alfred L.; Wirszup, Izaak (eds.). Random Walks (Mathematical Conversations Part 3). Survey of Recent East European Mathematical Literature. Vol. 3. Translated by Whaland, Jr., Norman D.; Titelbaum, Olga A. (1 ed.). Boston, Massachusetts, US: The University of Chicago / D. C. Heath and Company. LCCN 63-19838. Retrieved 2023-09-03. (1+9+80+9+1 pages) [8] (NB. This is a translation of the first Russian edition published as "Математические беседы: Задачи о многоцветной раскраске / Задачи из теории чисел / Случайные блуждания"[9] by GTTI (ГТТИ) in March 1952 as Number 6 in Library of the Mathematics Circle (Библиотека математического кружка). It is based on seminars held at the School Mathematics Circle in 1945/1946 and 1946/1947 at Moscow State University.) Dynkin [Ды́нкин], Evgenii Borisovich [Евге́ний Бори́сович] (1965) [1963-03-10, 1962-03-31]. Written at University of Moscow, Moscow, Russia. Markov Processes-I. Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen mit besonderer Berücksichtigung der Anwendungsgebiete. Vol. I (121). Translated by Fabius, Jaap [at Wikidata]; Greenberg, Vida Lazarus [at Wikidata]; Maitra, Ashok Prasad [at Wikidata]; Majone, Giandomenico (1 ed.). New York, US / Berlin, Germany: Springer-Verlag (Academic Press, Inc.). doi:10.1007/978-3-662-00031-1. ISBN 978-3-662-00033-5. ISSN 0072-7830. LCCN 64-24812. S2CID 251691119. Title-No. 5104. Retrieved 2023-09-02. [10] (xii+365+1 pages); Dynkin, Evgenii Borisovich (1965) [1963-03-10, 1962-03-31]. 
Written at University of Moscow, Moscow, Russia. Markov Processes-II. Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen mit besonderer Berücksichtigung der Anwendungsgebiete. Vol. II (122). Translated by Fabius, Jaap [at Wikidata]; Greenberg, Vida Lazarus [at Wikidata]; Maitra, Ashok Prasad [at Wikidata]; Majone, Giandomenico (1 ed.). New York, US / Berlin, Germany: Springer-Verlag. doi:10.1007/978-3-662-25360-1. ISBN 978-3-662-23320-7. ISSN 0072-7830. LCCN 64-24812. Title-No. 5105. Retrieved 2023-09-02. (viii+274+2 pages) (NB. This was originally published in Russian as "Markovskie prot︠s︡essy" (Марковские процессы) by Fizmatgiz (Физматгиз) in 1963 and translated to English with the assistance of the author.) Dynkin [Ды́нкин], Evgenii Borisovich [Евге́ний Бори́сович]; Yushkevish [Юшкевич], Aleksandr Adol'fovich [Александр Адольфович] [in German] (1969) [1966-01-22]. Written at University of Moscow, Moscow, Russia. Markov Processes: Theorems and Problems (PDF). Translated by Wood, James S. (1 ed.). New York, US: Plenum Press / Plenum Publishing Corporation. LCCN 69-12529. Archived (PDF) from the original on 2023-09-06. Retrieved 2023-09-03. (x+237 pages) (NB. This is a corrected translation of the first Russian edition published as "Теоремы и задачи о процессах Маркова" by Nauka Press (Наука) in 1967 as part of a series on Probability Theory and Mathematical Statistics (Теория вероятностей и математическая статистика) with the assistance of the authors. It is based on lectures held at the Moscow State University in 1962/1963.) Marlo, Edward "Ed" (1976-12-01). Written at Chicago, Illinois, US. Hudson, Charles (ed.). "Approach & Uses for the "Kruskal Kount" / First Presentation Angle / Second Presentation Angle - Checking the Deck / Third Presentation Angle - The 100% Method / Fourth Presentation Angle - "Disaster"". Card Corner. The Linking Ring. Vol. 56, no. 12. Bluffton, Ohio, US: International Brotherhood of Magicians. pp. 
82, 83, 83, 84, 85–87. ISSN 0024-4023. Hudson, Charles (1977-10-01). Written at Chicago, Illinois, US. "The Kruskal Principle". Card Corner. The Linking Ring. Vol. 57, no. 10. Bluffton, Ohio, US: International Brotherhood of Magicians. p. 85. ISSN 0024-4023. Gardner, Martin (September 1998). "Ten Amazing Mathematical Tricks". Gardner's Gatherings. Math Horizons. Vol. 6, no. 1. Mathematical Association of America / Taylor & Francis, Ltd. pp. 13–15, 26. ISSN 1072-4117. JSTOR 25678174. (4 pages) Haigh, John (1999). "7. Waiting, waiting, waiting: Packs of cards (2)". Taking Chances: Winning with Probability (1 ed.). Oxford, UK: Oxford University Press Inc. pp. 133–136. ISBN 978-0-19-850291-3. Retrieved 2023-09-06. (4 pages); Haigh, John (2009) [2003]. "7. Waiting, waiting, waiting: Packs of cards (2)". Taking Chances: Winning with Probability (Reprint of 2nd ed.). Oxford, UK: Oxford University Press Inc. pp. 139–142. ISBN 978-0-19-852663-6. Retrieved 2023-09-03. (4 of xiv+373+17 pages) Bean, Gordon (2002). "A Labyrinth in a Labyrinth". In Wolfe, David; Rodgers, Tom (eds.). Puzzlers' Tribute: A Feast for the Mind (1 ed.). CRC Press / Taylor & Francis Group, LLC. pp. 103–106. ISBN 978-1-43986410-4. (xvi+421 pages) Ching, Wai-Ki [at Wikidata]; Lee, Yiu-Fai (September 2005) [2004-05-05]. "A Random Walk on a Circular Path". Miscellany. International Journal of Mathematical Education in Science and Technology. 36 (6). Taylor & Francis, Ltd.: 680–683. doi:10.1080/00207390500064254. eISSN 1464-5211. ISSN 0020-739X. S2CID 121692834. (4 pages) Lee, Yiu-Fai; Ching, Wai-Ki [at Wikidata] (2006-03-07) [2005-09-29]. "On Convergent Probability of a Random Walk" (PDF). Classroom notes. International Journal of Mathematical Education in Science and Technology. 37 (7). Advanced Modeling and Applied Computing Laboratory and Department of Mathematics, The University of Hong Kong, Hong Kong: Taylor & Francis, Ltd.: 833–838. doi:10.1080/00207390600712299. eISSN 1464-5211. ISSN 0020-739X. 
S2CID 121242696. Archived (PDF) from the original on 2023-09-02. Retrieved 2023-09-02. (6 pages) Humble, Steve "Dr. Maths" (July 2008). "Magic Card Maths". The Montana Mathematics Enthusiast. 5 (2 & 3). Missoula, Montana, US: University of Montana: 327–336. doi:10.54870/1551-3440.1111. ISSN 1551-3440. S2CID 117632058. Article 14. Archived from the original on 2023-09-03. Retrieved 2023-09-02. (1+10 pages) Montenegro, Ravi [at Wikidata]; Tetali, Prasad V. (2010-11-07) [2009-05-31]. How Long Does it Take to Catch a Wild Kangaroo? (PDF). Proceedings of the forty-first annual ACM symposium on Theory of computing (STOC 2009). pp. 553–560. arXiv:0812.0789. doi:10.1145/1536414.1536490. S2CID 12797847. Archived (PDF) from the original on 2023-08-20. Retrieved 2023-08-20. Grime, James [at Wikidata] (2011). "Kruskal's Count" (PDF). singingbanana.com. Archived (PDF) from the original on 2023-08-19. Retrieved 2023-08-19. (8 pages) Bosko, Lindsey R. (2011). Written at Department of Mathematics, North Carolina State University, Raleigh, North Carolina, US. "Cards, Codes, and Kangaroos" (PDF). The UMAP Journal. Modules and Monographs in Undergraduate Mathematics and its Applications (UMAP) Project. 32 (3). Bedford, Massachusetts, US: Consortium For Mathematics & Its Applications, Inc. (COMAP): 199–236. UMAP Unit 808. Archived (PDF) from the original on 2023-08-19. Retrieved 2023-08-19. West, Bob [at Wikidata] (2011-05-26). "Wikipedia's fixed point". dlab @ EPFL. Lausanne, Switzerland: Data Science Lab, École Polytechnique Fédérale de Lausanne. Archived from the original on 2022-05-23. Retrieved 2023-09-04. [...] it turns out there is a card trick that works exactly the same way. It's called the "Kruskal Count" [...] Humble, Steve "Dr. Maths" (September 2012) [2012-07-02]. Written at Kraków, Poland. Behrends, Ehrhard [in German] (ed.). "Mathematics in the Streets of Kraków" (PDF). EMS Newsletter. No. 85. Zürich, Switzerland: EMS Publishing House / European Mathematical Society. 
pp. 20–21 [21]. ISSN 1027-488X. Archived (PDF) from the original on 2023-09-02. Retrieved 2023-09-02. p. 21: [...] The Kruscal count [...] [11] (2 pages) Andriesse, Dennis; Bos, Herbert [at Wikidata] (2014-07-10). Written at Vrije Universiteit Amsterdam, Amsterdam, Netherlands. Dietrich, Sven (ed.). Instruction-Level Steganography for Covert Trigger-Based Malware (PDF). 11th International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment (DIMVA). Lecture Notes in Computer Science. Egham, UK; Switzerland: Springer International Publishing. pp. 41–50 [45]. doi:10.1007/978-3-319-08509-8_3. eISSN 1611-3349. ISBN 978-3-31908508-1. ISSN 0302-9743. S2CID 4634611. LNCS 8550. Archived (PDF) from the original on 2023-08-26. Retrieved 2023-08-26. (10 pages) Montenegro, Ravi [at Wikidata]; Tetali, Prasad V. (2014-09-07). Kruskal's Principle and Collision Time for Monotone Transitive Walks on the Integers (PDF). Archived (PDF) from the original on 2023-08-22. Retrieved 2023-08-22. (18 pages) Kijima, Shuji; Montenegro, Ravi [at Wikidata] (2015-03-15) [2015-03-30/2015-04-01]. Written at Gaithersburg, Maryland, US. Katz, Jonathan (ed.). Collision of Random Walks and a Refined Analysis of Attacks on the Discrete Logarithm Problem (PDF). Proceedings of the 18th IACR International Conference on Practice and Theory in Public-Key Cryptography. Lecture Notes in Computer Science. Berlin & Heidelberg, Germany: International Association for Cryptologic Research / Springer Science+Business Media. pp. 127–149. doi:10.1007/978-3-662-46447-2_6. ISBN 978-3-662-46446-5. LNCS 9020. Archived (PDF) from the original on 2023-09-03. Retrieved 2023-09-03. (23 pages) Jose, Harish (2016-06-14) [2016-06-02]. "PDCA and the Roads to Rome: Can a lean purist and a Six Sigma purist reach the same answer to a problem?". Lean. Archived from the original on 2023-09-07. Retrieved 2023-09-07. [12][13] Lamprecht, Daniel; Dimitrov, Dimitar; Helic, Denis; Strohmaier, Markus (2016-08-17). 
"Evaluating and Improving Navigability of Wikipedia: A Comparative Study of Eight Language Editions". Proceedings of the 12th International Symposium on Open Collaboration (PDF). OpenSym, Berlin, Germany: Association for Computing Machinery. pp. 1–10. doi:10.1145/2957792.2957813. ISBN 978-1-4503-4451-7. S2CID 13244770. Archived (PDF) from the original on 2023-09-04. Retrieved 2021-03-17. Jämthagen, Christopher (November 2016). On Offensive and Defensive Methods in Software Security (PDF) (Thesis). Lund, Sweden: Department of Electrical and Information Technology, Lund University. p. 96. ISBN 978-91-7623-942-1. ISSN 1654-790X. Archived (PDF) from the original on 2023-08-26. Retrieved 2023-08-26. (1+xvii+1+152 pages) Mannam, Pragna; Volkov, Jr., Alexander; Paolini, Robert; Chirikjian, Gregory Scott; Mason, Matthew Thomas (2019-02-06) [2018-12-04]. "Sensorless Pose Determination Using Randomized Action Sequences". Entropy. 21 (2). Basel, Switzerland: Multidisciplinary Digital Publishing Institute: 154. arXiv:1812.01195. Bibcode:2019Entrp..21..154M. doi:10.3390/e21020154. ISSN 1099-4300. PMC 7514636. PMID 33266870. S2CID 54444590. Article 154. p. 2: [...] The phenomenon, while also reminiscent of contraction mapping, is similar to an interesting card trick called the Kruskal Count [...] so we have dubbed the phenomenon as "Kruskal effect". [...] (13 pages) Blackburn, Simon Robert; Esfahani, Navid Nasr; Kreher, Donald Lawson; Stinson, Douglas "Doug" Robert (2023-08-22) [2022-11-18]. "Constructions and bounds for codes with restricted overlaps". IEEE Transactions on Information Theory. arXiv:2211.10309. (17 pages) (NB. This source does not mention Dynkin or Kruskal specifically.) == External links == Humble, Steve "Dr. Maths" (2010). "Dr. Maths Randomness Show". YouTube (Video). Alchemist Cafe, Dublin, Ireland. Retrieved 2023-09-05. [23:40] "Mathematical Card Trick Source". Close-Up Magic. GeniiForum. 2015–2017. Archived from the original on 2023-09-04. 
Retrieved 2023-09-05. Behr, Denis, ed. (2023). "Kruskal Principle". Conjuring Archive. Archived from the original on 2023-09-10. Retrieved 2023-09-10.
Marginal structural model
Marginal structural models are a class of statistical models used for causal inference in epidemiology. Such models handle the issue of time-dependent confounding in the evaluation of the efficacy of interventions by inverse probability weighting for receipt of treatment, allowing estimation of average causal effects. For instance, in studies of the effect of zidovudine on AIDS-related mortality, the CD4 lymphocyte count is used as an indication for treatment, is influenced by treatment, and affects survival. Time-dependent confounders are typically highly prognostic of health outcomes and are used in dosing decisions or as indications for certain therapies; examples include body weight and laboratory values such as alanine aminotransferase or bilirubin. The first marginal structural models were introduced in 2000. The works of James Robins, Babette Brumback, and Miguel Hernán provided an intuitive theory and easy-to-implement software, which made them popular for the analysis of longitudinal data. == References ==
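The inverse-probability-weighting idea can be illustrated with a deliberately simple single-time-point example (all numbers are synthetic assumptions, only loosely analogous to the zidovudine/CD4 setting): a binary confounder raises both the chance of treatment and the outcome, so the naive comparison of treated and untreated is biased, while weighting each stratum by 1/P(treatment | confounder) recovers the true effect. Marginal structural models proper extend this weighting to time-varying treatments and confounders.

```python
# Exact population calculation (no sampling). True causal effect of A on Y is +2.
P_L1 = 0.5                       # P(confounder L = 1)
P_A1_GIVEN_L = {0: 0.2, 1: 0.8}  # treatment more likely when L = 1 (confounding)

def outcome(L, A):
    return 3 * L + 2 * A         # L also raises Y, so L confounds the A-Y relation

def mean_Y(a, weighted):
    """Population mean of Y among those with treatment A = a,
    optionally reweighted by the inverse probability of treatment."""
    num = den = 0.0
    for L in (0, 1):
        p_L = P_L1 if L == 1 else 1 - P_L1
        p_A = P_A1_GIVEN_L[L] if a == 1 else 1 - P_A1_GIVEN_L[L]
        w = 1.0 / p_A if weighted else 1.0   # inverse probability weight
        mass = p_L * p_A * w
        num += mass * outcome(L, a)
        den += mass
    return num / den

naive = mean_Y(1, weighted=False) - mean_Y(0, weighted=False)
ipw   = mean_Y(1, weighted=True) - mean_Y(0, weighted=True)
print(round(naive, 2), round(ipw, 2))   # 3.8 2.0
```

The naive contrast (3.8) overstates the true effect (2.0) because treated subjects disproportionately have L = 1; the weights create a pseudo-population in which treatment is independent of L.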
Offset filtration
The offset filtration (also called the "union-of-balls" or "union-of-disks" filtration) is a growing sequence of metric balls used to detect the size and scale of topological features of a data set. The offset filtration commonly arises in persistent homology and the field of topological data analysis. Utilizing a union of balls to approximate the shape of geometric objects was first suggested by Frosini in 1992 in the context of submanifolds of Euclidean space. The construction was independently explored by Robins in 1998, and expanded to consider the collection of offsets indexed over a series of increasing scale parameters (i.e., a growing sequence of balls), in order to observe the stability of topological features with respect to attractors. Homological persistence as introduced in these papers by Frosini and Robins was subsequently formalized by Edelsbrunner et al. in their seminal 2002 paper Topological Persistence and Simplification. Since then, the offset filtration has become a primary example in the study of computational topology and data analysis. == Definition == Let X {\displaystyle X} be a finite set in a metric space ( M , d ) {\displaystyle (M,d)} , and for any x ∈ X {\displaystyle x\in X} let B ( x , ε ) = { y ∈ M ∣ d ( x , y ) ≤ ε } {\displaystyle B(x,\varepsilon )=\{y\in M\mid d(x,y)\leq \varepsilon \}} be the closed ball of radius ε {\displaystyle \varepsilon } centered at x {\displaystyle x} . Then the union X ( ε ) := ⋃ x ∈ X B ( x , ε ) {\textstyle X^{(\varepsilon )}:=\bigcup _{x\in X}B(x,\varepsilon )} is known as the offset of X {\displaystyle X} with respect to the parameter ε {\displaystyle \varepsilon } (or simply the ε {\displaystyle \varepsilon } -offset of X {\displaystyle X} ). 
By considering the collection of offsets over all ε ∈ [ 0 , ∞ ) {\displaystyle \varepsilon \in [0,\infty )} we get a family of spaces O ( X ) := { X ( ε ) ∣ ε ∈ [ 0 , ∞ ) } {\displaystyle {\mathcal {O}}(X):=\{X^{(\varepsilon )}\mid \varepsilon \in [0,\infty )\}} where X ( ε ) ⊆ X ( ε ′ ) {\displaystyle X^{(\varepsilon )}\subseteq X^{(\varepsilon ^{\prime })}} whenever ε ≤ ε ′ {\displaystyle \varepsilon \leq \varepsilon ^{\prime }} . So O ( X ) {\displaystyle {\mathcal {O}}(X)} is a family of nested topological spaces indexed over ε {\displaystyle \varepsilon } , which defines a filtration known as the offset filtration on X {\displaystyle X} . Note that it is also possible to view the offset filtration as a functor O ( X ) : [ 0 , ∞ ) → T o p {\displaystyle {\mathcal {O}}(X):[0,\infty )\to \mathbf {Top} } from the poset category of non-negative real numbers to the category of topological spaces and continuous maps. There are some advantages to the categorical viewpoint, as explored by Bubenik and others. == Properties == A standard application of the nerve theorem shows that the union of balls has the same homotopy type as its nerve, since closed balls are convex and the intersection of convex sets is convex. The nerve of the union of balls is also known as the Čech complex, which is a subcomplex of the Vietoris-Rips complex. Therefore the offset filtration is weakly equivalent to the Čech filtration (defined as the nerve of each offset across all scale parameters), so their homology groups are isomorphic. Although the Vietoris-Rips filtration is not identical to the Čech filtration in general, it is an approximation in a sense. 
In particular, for a set X ⊂ R d {\displaystyle X\subset \mathbb {R} ^{d}} we have a chain of inclusions Rips ε ⁡ ( X ) ⊂ Cech ε ′ ⁡ ( X ) ⊂ Rips ε ′ ⁡ ( X ) {\displaystyle \operatorname {Rips} _{\varepsilon }(X)\subset \operatorname {Cech} _{\varepsilon ^{\prime }}(X)\subset \operatorname {Rips} _{\varepsilon ^{\prime }}(X)} between the Rips and Čech complexes on X {\displaystyle X} whenever ε ′ / ε ≥ 2 d / ( d + 1 ) {\displaystyle \varepsilon ^{\prime }/\varepsilon \geq {\sqrt {2d/(d+1)}}} . In general metric spaces, we have that Cech ε ⁡ ( X ) ⊂ Rips 2 ε ⁡ ( X ) ⊂ Cech 2 ε ⁡ ( X ) {\displaystyle \operatorname {Cech} _{\varepsilon }(X)\subset \operatorname {Rips} _{2\varepsilon }(X)\subset \operatorname {Cech} _{2\varepsilon }(X)} for all ε > 0 {\displaystyle \varepsilon >0} , implying that the Rips and Čech filtrations are 2-interleaved with respect to the interleaving distance as introduced by Chazal et al. in 2009. It is a well-known result of Niyogi, Smale, and Weinberger that given a sufficiently dense random point cloud sample of a smooth submanifold in Euclidean space, the union of balls of a certain radius recovers the homology of the object via a deformation retraction of the Čech complex. The offset filtration is also known to be stable with respect to perturbations of the underlying data set. This follows from the fact that the offset filtration can be viewed as a sublevel-set filtration with respect to the distance function of the metric space.
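A minimal sketch of the Vietoris–Rips construction (using the diameter convention, where a simplex is included when all pairwise distances are at most the scale parameter, and a made-up three-point example) makes the nesting of complexes at increasing scales concrete:

```python
import math
from itertools import combinations

def rips_complex(points, eps, max_dim=2):
    """Vietoris-Rips complex at scale eps (diameter convention): include a
    simplex on every subset of points whose pairwise distances are all <= eps."""
    n = len(points)
    simplices = {frozenset([i]) for i in range(n)}   # vertices
    for k in range(2, max_dim + 2):                  # edges, triangles, ...
        for combo in combinations(range(n), k):
            if all(math.dist(points[i], points[j]) <= eps
                   for i, j in combinations(combo, 2)):
                simplices.add(frozenset(combo))
    return simplices

# A made-up triangle of points: at the smaller scale only the shortest edge
# appears; at the larger scale all edges and the filled triangle appear.
pts = [(0, 0), (1, 0), (0.5, 0.9)]
small = rips_complex(pts, 1.0)    # 3 vertices + 1 edge
large = rips_complex(pts, 1.5)    # 3 vertices + 3 edges + 1 triangle
assert small <= large             # filtration property: complexes are nested
```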
The stability of sublevel-set filtrations can be stated as follows: Given any two real-valued functions γ , κ {\displaystyle \gamma ,\kappa } on a topological space T {\displaystyle T} such that for all i ≥ 0 {\displaystyle i\geq 0} , the i th {\displaystyle i{\text{th}}} -dimensional homology modules on the sublevel-set filtrations with respect to γ , κ {\displaystyle \gamma ,\kappa } are point-wise finite dimensional, we have d B ( B i ( γ ) , B i ( κ ) ) ≤ d ∞ ( γ , κ ) {\displaystyle d_{B}({\mathcal {B}}_{i}(\gamma ),{\mathcal {B}}_{i}(\kappa ))\leq d_{\infty }(\gamma ,\kappa )} where d B ( − ) {\displaystyle d_{B}(-)} and d ∞ ( − ) {\displaystyle d_{\infty }(-)} denote the bottleneck and sup-norm distances, respectively, and B i ( − ) {\displaystyle {\mathcal {B}}_{i}(-)} denotes the i th {\displaystyle i{\text{th}}} -dimensional persistent homology barcode. While first stated in 2005, this sublevel stability result also follows directly from an algebraic stability property sometimes known as the "Isometry Theorem," which was proved in one direction in 2009, and the other direction in 2011. A multiparameter extension of the offset filtration defined by considering points covered by multiple balls is given by the multicover bifiltration, and has also been an object of interest in persistent homology and computational geometry. == References ==
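The bottleneck distance appearing in the stability bound above can be computed by brute force for very small barcodes. The following sketch (an illustration only, with exponential running time) matches each interval either to an interval of the other barcode at L-infinity cost, or to the diagonal at cost half its length, and minimizes the largest matching cost:

```python
from itertools import permutations

def bottleneck_distance(bars_a, bars_b):
    """Brute-force bottleneck distance between two small barcodes, given as
    lists of (birth, death) intervals.  Each interval is matched either to an
    interval of the other barcode (at L-infinity cost) or to the diagonal
    (at cost half its length); minimize the largest cost over all matchings."""
    diag = ("diag",)
    left = list(bars_a) + [diag] * len(bars_b)
    right = list(bars_b) + [diag] * len(bars_a)

    def cost(p, q):
        if p == diag and q == diag:
            return 0.0
        if p == diag:
            return (q[1] - q[0]) / 2
        if q == diag:
            return (p[1] - p[0]) / 2
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    return min(max(cost(p, q) for p, q in zip(left, perm))
               for perm in permutations(right))

# Perturbing an interval's endpoints by at most 0.1 moves the barcode by at
# most 0.1 in bottleneck distance, consistent with the stability bound.
d = bottleneck_distance([(0.0, 2.0)], [(0.1, 1.9)])   # approximately 0.1
```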
Large deviations of Gaussian random functions
A random function – of either one variable (a random process), or two or more variables (a random field) – is called Gaussian if every finite-dimensional distribution is a multivariate normal distribution. Gaussian random fields on the sphere are useful (for example) when analysing anomalies in the cosmic microwave background radiation (see, pp. 8–9) or brain images obtained by positron emission tomography (see, pp. 9–10). Sometimes, a value of a Gaussian random function deviates from its expected value by several standard deviations. This is a large deviation. Though rare in a small domain (of space and/or time), large deviations may be quite usual in a large domain. == Basic statement == Let M {\displaystyle M} be the maximal value of a Gaussian random function X {\displaystyle X} on the (two-dimensional) sphere. Assume that the expected value of X {\displaystyle X} is 0 {\displaystyle 0} (at every point of the sphere), and the standard deviation of X {\displaystyle X} is 1 {\displaystyle 1} (at every point of the sphere). Then, for large a > 0 {\displaystyle a>0} , P ( M > a ) {\displaystyle P(M>a)} is close to C a exp ⁡ ( − a 2 / 2 ) + 2 P ( ξ > a ) {\displaystyle Ca\exp(-a^{2}/2)+2P(\xi >a)} , where ξ {\displaystyle \xi } is distributed N ( 0 , 1 ) {\displaystyle N(0,1)} (the standard normal distribution), and C {\displaystyle C} is a constant; it does not depend on a {\displaystyle a} , but depends on the correlation function of X {\displaystyle X} (see below). The relative error of the approximation decays exponentially for large a {\displaystyle a} . The constant C {\displaystyle C} is easy to determine in the important special case described in terms of the directional derivative of X {\displaystyle X} at a given point (of the sphere) in a given direction (tangential to the sphere). The derivative is random, with zero expectation and some standard deviation. The latter may depend on the point and the direction.
However, if it does not depend on the point or the direction, then it is equal to ( π / 2 ) 1 / 4 C 1 / 2 {\displaystyle (\pi /2)^{1/4}C^{1/2}} (for the sphere of radius 1 {\displaystyle 1} ). The coefficient 2 {\displaystyle 2} before P ( ξ > a ) {\displaystyle P(\xi >a)} is in fact the Euler characteristic of the sphere (for the torus it vanishes). It is assumed that X {\displaystyle X} is twice continuously differentiable (almost surely), and reaches its maximum at a single point (almost surely). == The clue: mean Euler characteristic == The clue to the theory sketched above is the Euler characteristic χ a {\displaystyle \chi _{a}} of the set { X > a } {\displaystyle \{X>a\}} of all points t {\displaystyle t} (of the sphere) such that X ( t ) > a {\displaystyle X(t)>a} . Its expected value (in other words, mean value) E ( χ a ) {\displaystyle E(\chi _{a})} can be calculated explicitly: E ( χ a ) = C a exp ⁡ ( − a 2 / 2 ) + 2 P ( ξ > a ) {\displaystyle E(\chi _{a})=Ca\exp(-a^{2}/2)+2P(\xi >a)} (which is far from being trivial, and involves Poincaré–Hopf theorem, Gauss–Bonnet theorem, Rice's formula etc.). The set { X > a } {\displaystyle \{X>a\}} is the empty set whenever M < a {\displaystyle M<a} ; in this case χ a = 0 {\displaystyle \chi _{a}=0} . In the other case, when M > a {\displaystyle M>a} , the set { X > a } {\displaystyle \{X>a\}} is non-empty; its Euler characteristic may take various values, depending on the topology of the set (the number of connected components, and possible holes in these components). However, if a {\displaystyle a} is large and M > a {\displaystyle M>a} then the set { X > a } {\displaystyle \{X>a\}} is usually a small, slightly deformed disk or ellipse (which is easy to guess, but quite difficult to prove). Thus, its Euler characteristic χ a {\displaystyle \chi _{a}} is usually equal to 1 {\displaystyle 1} (given that M > a {\displaystyle M>a} ). This is why E ( χ a ) {\displaystyle E(\chi _{a})} is close to P ( M > a ) {\displaystyle P(M>a)} .
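The basic statement lends itself to a small numerical sketch. Here C is an assumed input (in general it depends on the correlation function, as explained above), and P(ξ > a) is evaluated via the complementary error function:

```python
import math

def gaussian_tail(a):
    """P(xi > a) for a standard normal xi, via the complementary error function."""
    return 0.5 * math.erfc(a / math.sqrt(2))

def max_tail_approx(a, C):
    """Approximation C*a*exp(-a^2/2) + 2*P(xi > a) to P(M > a), where M is the
    maximum of a centered, unit-variance Gaussian random function on the sphere.
    C is an assumed input here; in general it depends on the correlation function."""
    return C * a * math.exp(-a * a / 2) + 2 * gaussian_tail(a)

# Compare the tail of the global maximum with the tail at a single fixed point.
pointwise = gaussian_tail(5.0)
global_max = max_tail_approx(5.0, C=1.0)
```

At a = 5 (with C = 1) the Euler-characteristic term Ca·exp(−a²/2) already dominates the pointwise tail 2P(ξ > a) by more than an order of magnitude.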
== See also == Gaussian process Gaussian random field Large deviations theory == Further reading == The basic statement given above is a simple special case of a much more general (and difficult) theory stated by Adler. For a detailed presentation of this special case see Tsirelson's lectures.
Data philanthropy
Data philanthropy refers to the practice of private companies donating corporate data. This data is usually donated to nonprofits or donation-run organizations that have difficulty keeping up with expensive data collection technology. The concept was introduced through the United Nations Global Pulse initiative in 2011 to explore corporate data assets for humanitarian, academic, and societal causes. For example, anonymized mobile data could be used to track disease outbreaks, or data on consumer actions may be shared with researchers to study public health and economic trends. == Definition == A large portion of data collected from the internet consists of user-generated content, such as blogs, social media posts, and information submitted through lead generation and data forms. Additionally, corporations gather and analyze consumer data to gain insight into customer behavior, identify potential markets, and inform investment decisions. United Nations Global Pulse director Robert Kirkpatrick has referred to this type of data as "massive passive data" or "data exhaust." == Challenges == While data philanthropy can enhance development policies, making users' private data available to various organizations raises concerns regarding privacy, ownership, and the equitable use of data. Different techniques, such as differential privacy and alphanumeric strings of information, can allow access to personal data while ensuring user anonymity. However, even if these algorithms work, re-identification may still be possible. Another challenge is convincing corporations to share their data. The data collected by corporations provides them with market competitiveness and insight regarding consumer behavior. Corporations may fear losing their competitive edge if they share the information they have collected with the public. Numerous moral challenges are also encountered. 
In 2016, Mariarosaria Taddeo, a digital ethics professor at the University of Oxford, proposed an ethical framework to address them. == Sharing strategies == The goal of data philanthropy is to create a global data commons where companies, governments, and individuals can contribute anonymous, aggregated datasets. The United Nations Global Pulse offers four tactics that companies can use to share their data while preserving consumer anonymity: (1) sharing aggregated and derived data sets for analysis under nondisclosure agreements (NDAs); (2) allowing researchers to analyze data within the private company's own network under NDAs; (3) a real-time data commons, in which data is pooled and aggregated among multiple companies of the same industry to protect competitiveness; and (4) a public/private alerting network, in which companies mine data behind their own firewalls and share indicators. == Application in various fields == Many corporations take part in data philanthropy, including social networking platforms (e.g., Facebook, Twitter), telecommunications providers (e.g., Verizon, AT&T), and search engines (e.g., Google, Bing). Anonymized, aggregated user-generated data is made available through data-sharing systems to support research, policy development, and social impact initiatives. By participating in such efforts, these organizations contribute to causes regarded as beneficial to society, allowing these institutions to give back meaningfully. With ongoing technological advances, the sharing of data on a global scale and in-depth analysis of these data sets could mitigate the effects of global issues such as natural disasters and epidemics. Robert Kirkpatrick, the Director of the United Nations Global Pulse, has argued that this aggregated information is beneficial for the common good and can lead to developments in research and data production in a wide range of fields.
=== Digital disease detection === Health researchers use digital disease detection by collecting data from various sources—such as social media platforms (e.g., Twitter, Facebook), mobile devices (e.g., cell phones, smartphones), online search queries, mobile apps, and sensor data from wearables and environmental sensors—to monitor and predict the spread of infectious diseases. This approach allows them to track and anticipate outbreaks of epidemics (e.g., COVID-19, Ebola), pandemics, vector-borne diseases (e.g., malaria, dengue fever), and respiratory illnesses (e.g., influenza, SARS), improving response and intervention strategies. In 2008, the Centers for Disease Control and Prevention (CDC) collaborated with Google to launch Google Flu Trends, a website that used flu-related searches and user locations to track the spread of the flu. Users could visit Google Flu Trends to compare the amount of flu-related search activity with the reported numbers of flu outbreaks on a graphical map. One drawback of this method of tracking was that Google searches are sometimes performed out of curiosity rather than because an individual is suffering from the flu. According to Ashley Fowlkes, an epidemiologist in the CDC Influenza division, "The Google Flu Trends system tries to account for that type of media bias by modeling search terms over time to see which ones remain stable." Google Flu Trends is no longer publishing current flu estimates on the public website; however, visitors to the site can still view and download previous estimates. Current data can be shared with verified researchers. A study from the Harvard School of Public Health (HSPH), published in the October 12, 2012 issue of Science, discussed how phone data helped curb the spread of malaria in Kenya. The researchers mapped phone calls and texts made by 14,816,521 Kenyan mobile phone subscribers.
When individuals left their primary living location, the destination and length of journey were calculated. This data was then compared to a 2009 malaria prevalence map to estimate the disease's commonality in each location. Combining all this information, the researchers could estimate the probability of an individual carrying malaria and map the movement of the disease. This research can be used to track the spread of similar diseases. === Humanitarian aid === Calling patterns of mobile phone users can determine the socioeconomic standings of the populace, which can be used to deduce "its access to housing, education, healthcare, and basic services such as water and electricity." Researchers from Columbia University and Karolinska Institute used daily SIM card location data from both before and after the 2010 Haiti earthquake to estimate the movement of people both in response to the earthquake and during the related 2010 Haiti cholera outbreak. Their research suggests that mobile phone data can provide rapid and accurate estimates of population movements during disasters and outbreaks of infectious disease. Big data can also provide information on looming disasters and can assist relief organizations in rapid response and in locating displaced individuals. By analyzing specific patterns within this 'big data', governments and NGOs can enhance responses to disruptive events such as natural disasters, disease outbreaks, and global economic crises. Leveraging real-time information enables a deeper understanding of individual well-being, allowing for more effective interventions. Corporations utilize digital services, such as human sensor systems, to detect and solve impending problems within communities. This is a strategy used by the private sector to anonymously share customer information for public benefit, while preserving user privacy. === Impoverished areas === Poverty remains a worldwide issue, with over 2.5 billion people currently impoverished.
Statistics indicate the widespread use of mobile phones, even within impoverished communities. Additional data can be collected through Internet access, social media, utility payments and governmental statistics. Data-driven activities can lead to the accumulation of 'big data', which in turn can assist international non-governmental organizations in documenting and evaluating the needs of underprivileged populations. Through data philanthropy, NGOs can distribute information while cooperating with governments and private companies. === Corporate === Data philanthropy incorporates aspects of social philanthropy by allowing corporations to create profound impacts through the act of giving back by dispersing proprietary datasets. The public sector collects and preserves information, considered an essential asset. Companies track and analyze users' online activities to gain insight into their needs related to new products and services. These companies view the welfare of the population as key to business expansion and progression by using their data to highlight global citizens' issues. Experts in the private sector emphasize the importance of integrating diverse data sources—such as retail, mobile, and social media data—to develop essential solutions for global challenges. In Data Philanthropy: New Paradigms for Collaborative Problem Solving (2022), authors Stefaan Verhulst and Andrew Young discuss this approach. Robert Kirkpatrick argues that, although sharing private information carries inherent risks, it ultimately yields public benefits, supporting the common good. The digital revolution has caused extensive production of big data that is user-generated and available on the web. Corporations accumulate information on customer preferences through the digital services customers utilize and the products they purchase, to gain clear insights on their clientele and future market opportunities.
However, the rights of individuals concerning privacy and ownership of data are controversial, as governments and other institutions can use this collective data for unethical purposes. === Academia === Data philanthropy is crucial in the academic field. Researchers face numerous challenges in obtaining data, which is often restricted to a select group of individuals who have exclusive access to certain resources, such as social media feeds. This limited access allows these researchers to generate additional insights and pursue innovative studies. For instance, X Corp. (formerly Twitter Inc.) offers access to its real-time APIs at different price points, such as $5,000 for the ability to read 1,000,000 posts each month, a cost that frequently exceeds the financial capabilities of many researchers. === Human rights === Data philanthropy aids the human rights movement by assisting in dispersing evidence for truth commissions and war crimes tribunals. Advocates for human rights gather data on abuses occurring within countries, which is then used for scientific analysis to raise awareness and drive action. For example, non-profit organizations compile data from human rights monitors in war zones to assist the UN High Commissioner for Human Rights. This data uncovers inconsistencies in the number of war casualties, leading to international attention and influencing global policy discussions. == See also == Big data Open data Freedom of information Data security Public-benefit corporation == References == == External links == Data Philanthropy, where are we now? in UN Global Pulse blog by Andreas Pawelke and Anoush Rima Tatevossian (2013-05-08).
Facebook–Cambridge Analytica data scandal
In the 2010s, personal data belonging to millions of Facebook users was collected by British consulting firm Cambridge Analytica for political advertising without informed consent. The data was collected through an app called "This Is Your Digital Life", developed by data scientist Aleksandr Kogan and his company Global Science Research in 2013. The app consisted of a series of questions to build psychological profiles on users, and collected the personal data of the users’ Facebook friends via Facebook's Open Graph platform. The app harvested the data of up to 87 million Facebook profiles. Cambridge Analytica used the data to analytically assist the 2016 presidential campaigns of Ted Cruz and Donald Trump. Cambridge Analytica was also widely accused of interfering with the Brexit referendum, although the official investigation recognised that the company was not involved "beyond some initial enquiries" and that "no significant breaches" took place. In interviews with The Guardian and The New York Times, information about the data misuse was disclosed in March 2018 by Christopher Wylie, a former Cambridge Analytica employee. In response, Facebook apologized for their role in the data harvesting and their CEO Mark Zuckerberg testified in April 2018 in front of Congress. In July 2019, it was announced that Facebook was to be fined $5 billion by the Federal Trade Commission due to its privacy violations. In October 2019, Facebook agreed to pay a £500,000 fine to the UK Information Commissioner's Office for exposing the data of its users to a "serious risk of harm". In May 2018, Cambridge Analytica filed for Chapter 7 bankruptcy. Other advertising agencies have been implementing various forms of psychological targeting for years and Facebook had patented a similar technology in 2012. 
Nevertheless, Cambridge Analytica's methods and their high-profile clients — including the Trump presidential campaign and the UK's Leave.EU campaign — brought to public awareness the problems of psychological targeting that scholars had been warning against. The scandal sparked an increased public interest in privacy and social media's influence on politics. The online movement #DeleteFacebook trended on Twitter. == Overview == Aleksandr Kogan, a data scientist at the University of Cambridge, was hired by Cambridge Analytica, an offshoot of SCL Group, to develop an app called "This Is Your Digital Life" (sometimes stylized as "thisisyourdigitallife"). Cambridge Analytica then arranged an informed consent process for research in which several hundred thousand Facebook users would agree to complete a survey for payment that was only for academic use. However, Facebook allowed this app not only to collect personal information from survey respondents but also from respondents’ Facebook friends. In this way, Cambridge Analytica acquired data from millions of Facebook users. The collection of personal data by Cambridge Analytica was first reported in December 2015 by Harry Davies, a journalist for The Guardian. He reported that Cambridge Analytica was working for United States Senator Ted Cruz using data harvested from millions of people's Facebook accounts without their consent. Further reports followed in November 2016 by McKenzie Funk for the New York Times Sunday Review, in December 2016 by Hannes Grasseger and Mikael Krogerus for the Swiss publication Das Magazin (later translated and published by Vice), in February 2017 by Carole Cadwalladr for The Guardian, and in March 2017 by Mattathias Schwartz for The Intercept. According to PolitiFact, in his 2016 presidential campaign, Trump paid Cambridge Analytica in September, October, and November for data on Americans and their political preferences.
Matters came to a head in March 2018 with the emergence of a whistleblower, ex-Cambridge Analytica employee Christopher Wylie. He had been an anonymous source for an article in 2017 in The Observer by Cadwalladr, headlined "The Great British Brexit Robbery". Cadwalladr worked with Wylie for a year to coax him to come forward as a whistleblower. She later brought in Channel 4 News in the UK and The New York Times due to legal threats against The Guardian and The Observer by Cambridge Analytica. Kogan's name change to Aleksandr Spectre, which resulted in the ominous "Dr. Spectre", added to the intrigue and popular appeal of the story. The Guardian and The New York Times published articles simultaneously on March 17, 2018. More than $100 billion was knocked off Facebook's market capitalization in days and politicians in the US and UK demanded answers from Facebook CEO Mark Zuckerberg. The negative public response to the media coverage eventually led to him agreeing to testify in front of the United States Congress. Meghan McCain drew an equivalence between the use of data by Cambridge Analytica and Barack Obama's 2012 presidential campaign; PolitiFact, however, argued that this data was not used in an unethical way, since Obama's campaign used this data to "have their supporters contact their most persuadable friends" rather than using this data for highly targeted digital ads on websites such as Facebook. == Data characteristics == === Numbers === Wired, The New York Times, and The Observer reported that the data set had included information on 50 million Facebook users. While Cambridge Analytica claimed it had only collected 30 million Facebook user profiles, Facebook later confirmed that it actually had data on potentially over 87 million users, with 70.6 million of those people from the United States. Facebook estimated that California was the most affected U.S.
state, with 6.7 million impacted users, followed by Texas, with 5.6 million, and Florida, with 4.3 million. Data was collected on at least 30 million users while only 270,000 people downloaded the app. === Information === Facebook sent a message to those users believed to be affected, saying the information likely included one's "public profile, page likes, birthday and current city". Some of the app's users gave the app permission to access their News Feed, timeline, and messages. The data was detailed enough for Cambridge Analytica to create psychographic profiles of the subjects of the data. The data also included the locations of each person. For a given political campaign, each profile's information suggested what type of advertisement would be most effective to persuade a particular person in a particular location for some political event. == Data use == === Ted Cruz campaign === In 2016, American senator Ted Cruz hired Cambridge Analytica to aid his presidential campaign. The Federal Election Commission reported that Cruz paid the company $5.8 million in services. Although Cambridge Analytica was not well known at the time, this is when it started to create individual psychographic profiles. This data was then used to create tailored advertisements for each person to sway them into voting for Cruz. === Donald Trump campaign === Donald Trump's 2016 presidential campaign used the harvested data to build psychographic profiles, determining users' personality traits based on their Facebook activity. The campaign team used this information as a micro-targeting technique, displaying customized messages about Trump to different US voters on various digital platforms. Ads were segmented into different categories, mainly based on whether individuals were Trump supporters or potential swing votes. As described by Cambridge Analytica's CEO, the key was to identify those who might be enticed to vote for their client or be discouraged to vote for their opponent. 
Supporters of Trump received triumphant visuals of him, as well as information regarding polling stations. Swing voters were instead often shown images of Trump's more notable supporters and negative graphics or ideas about his opponent, Hillary Clinton. For example, the collected data was specifically used by "Make America Number 1 Super PAC" to attack Clinton through constructed advertisements that accused Clinton of corruption as a way of propping up Trump as a better candidate for the presidency. However, a former Cambridge Analytica employee, Brittany Kaiser, was asked "Is it absolutely proven that the Trump campaign relied on the data that had been illicitly obtained from Facebook?" She responded: "It has not been proven, because the difficult thing about proving a situation like that is that you need to do a forensic analysis of the database". === Interfering in the elections in Trinidad and Tobago === In the Caribbean country of Trinidad and Tobago, the majority of the population falls into two groups: those of South Asian origin and those of African origin. The South Asian population constitutes the largest ethnic group in the country (approximately 35.4%), mainly descendants of indentured workers from South Asia (mostly India), brought in to replace freed African slaves who refused to continue working on the sugar plantations. Through cultural preservation, many residents of Indian descent continue to maintain the traditions of their ancestral homeland. On the other hand, the African population constitutes the second largest ethnic group in the country, with approximately 34.2% of the population identifying themselves as of African descent. In 2010, Cambridge Analytica designed a campaign called Do So, aimed at increasing abstention among young people of African descent.
This campaign, presented as if it were something that arose spontaneously on social media, was developed as a resistance movement against traditional politics, encouraging young people not to vote as a form of protest, so that, by reducing their electoral participation, it favored the United National Congress (UNC), the party representing the Indian population. The Do So campaign had a significant impact on reducing voter turnout among young people of African descent, which contributed to the UNC's electoral victory in 2010. === Alleged usage === ==== Russia ==== In 2018, the Parliament of the United Kingdom questioned SCL Group director Alexander Nix in a hearing about Cambridge Analytica's connections with Russian oil company Lukoil. Nix stated he had no connections to the two companies despite concerns that the oil company was interested in how the company's data was used to target American voters. Cambridge Analytica had become a point of focus in politics since its involvement in Trump's campaign at this point. Democratic officials made it a point of emphasis for improved investigation over concerns of Russian ties with Cambridge Analytica. It was later confirmed by Christopher Wylie that Lukoil was interested in the company's data regarding political targeting. ==== Brexit ==== Cambridge Analytica was allegedly hired as a consultant company for Leave.EU and the UK Independence Party during 2016, as an effort to convince people to support Brexit. These rumors were the result of the leaked internal emails that were shared with the British parliament. Brittany Kaiser declared that the datasets that Leave.EU used to create databases were provided by Cambridge Analytica. These datasets composed of the data obtained from Facebook were said to be work done as an initial job deliverable for them. 
Although Arron Banks, co-founder of Leave.EU, denied any involvement with the company, he later declared "When we said we'd hired Cambridge Analytica, maybe a better choice of words could have been deployed." The official investigation by the UK Information Commissioner found that Cambridge Analytica was not involved "beyond some initial enquiries" and the regulator did not identify any "significant breaches" of data protection legislation or privacy or marketing regulations "which met the threshold for formal regulatory action". == Responses == === Facebook and other companies === Facebook CEO Mark Zuckerberg first apologized for the situation with Cambridge Analytica on CNN, calling it an "issue", a "mistake" and a "breach of trust". He explained that he was responding to the Facebook community's concerns and that the company's initial focus on data portability had shifted to locking down data; he also reminded the platform's users of their right of access to personal data. Other Facebook officials argued against calling it a "data breach," arguing those who took the personality quiz originally consented to give away their information. Zuckerberg pledged to make changes and reforms in Facebook policy to prevent similar breaches. On March 25, 2018, Zuckerberg published a personal letter in various newspapers apologizing on behalf of Facebook. In April, Facebook decided to implement the EU's General Data Protection Regulation in all areas of operation and not just the EU. In April 2018, Facebook established Social Science One as a response to the event. On April 25, 2018, Facebook released their first earnings report since the scandal was reported. Revenue fell from the previous quarter, but this is usual, as it followed the holiday-season quarter. The quarter's revenue was the highest for any first quarter, and the second-highest overall.
Amazon said that they suspended Cambridge Analytica from using their Amazon Web Services when they learned in 2015 that their service was collecting personal information. The Italian banking company UniCredit stopped advertising and marketing on Facebook in August 2018. === Governmental actions === The governments of India and Brazil demanded that Cambridge Analytica report how anyone used data from the breach in political campaigning, and various regional governments in the United States have lawsuits in their court systems from citizens affected by the data breach. In early July 2018, the United Kingdom's Information Commissioner's Office announced it intended to fine Facebook £500,000 ($663,000) over the data breach, this being the maximum fine allowed at the time of the breach, saying Facebook "contravened the law by failing to safeguard people's information". In March 2019, a court filing by the U.S. Attorney General for the District of Columbia alleged that Facebook knew of Cambridge Analytica's "improper data-gathering practices" months before they were first publicly reported in December 2015. In July 2019, the Federal Trade Commission (FTC) voted 3-2 to approve fining Facebook $5 billion to finally settle the investigation into the data breach. The record-breaking settlement was one of the largest penalties ever assessed by the U.S. government for any violation. In the ruling, the FTC cited Facebook's continued violations of FTC privacy orders from 2012, which included sharing users' data with apps used by their friends, facial recognition being enabled by default, and Facebook's use of user phone numbers for advertising purposes. As a result, Facebook was made subject to a new 20-year settlement order. In July 2019, the FTC sued Cambridge Analytica's CEO Alexander Nix and GSRApp developer Aleksandr Kogan. 
Both defendants agreed to administrative orders restricting their future business dealings and requiring them to destroy both any collected personal data and any work product made from the data. The GSRApp collected information initially on up to 270,000 GSRApp users, then harvested data on up to 65 million Facebook friends. Cambridge Analytica declared bankruptcy. Also in July 2019, Facebook agreed to pay $100 million to settle with the U.S. Securities and Exchange Commission for "misleading investors about the risks it faced from misuse of user data". The SEC's complaint alleged that Facebook did not correct its existing disclosure for more than two years despite discovering the misuse of its users' information in 2015. === Impact on Facebook users and investors === In April 2018, the first full month after the Cambridge Analytica story broke, the number of likes, posts and shares on the site decreased by almost 20%, and the decline continued thereafter, with activity only momentarily increasing during the summer and during the 2018 US midterm elections. Despite this, user growth rose in the period of increased media coverage, increasing by 1.8% during the final quarter of 2018. On March 26, 2018, a little over a week after the story was initially published, Facebook stock fell by about 24%, equivalent to $134 billion. By May 10, Wall Street reported that the company had recovered its losses. === #DeleteFacebook movement === The public reacted to the data privacy breach by initiating the #DeleteFacebook campaign with the aim of starting a movement to boycott Facebook. The co-founder of WhatsApp, which is owned by Facebook, joined the movement by declaring it was time to delete the platform. The hashtag was tweeted almost 400,000 times on Twitter within a 30-day period after news of the data breach. 
93% of the mentions of the hashtag appeared on Twitter, making it the main social media platform used to share the hashtag. However, a survey by investment firm Raymond James found that although approximately 84% of Facebook users were concerned about how the app used their data, about 48% of those surveyed said they would not actually cut back on their usage of the social media network. Additionally, in 2018, Mark Zuckerberg commented that he did not think the company had seen "a meaningful number of people act" on deleting Facebook. An additional campaign and hashtag, #OwnYourData, was coined by Brittany Kaiser. Kaiser created the hashtag as a Facebook campaign that pushed for increased transparency on the platform. #OwnYourData was also used in Kaiser's petition for Facebook to alter its policies and give users increased power and control over their data, which she refers to as users' assets and property. In addition to the hashtag, Kaiser also created the Own Your Data Foundation to promote digital intelligence education. === The Great Hack === The Facebook–Cambridge Analytica data scandal also received media coverage in the form of a 2019 Netflix documentary, The Great Hack, the first feature-length media piece to tie together the various elements of the scandal in a single narrative. The documentary provides background on the events involving Cambridge Analytica, Facebook, and the 2016 election that resulted in the data scandal. The Great Hack conveys the experiences and personal journeys of several individuals who were involved in the event in different ways and through different relationships, including David Carroll and Brittany Kaiser. David Carroll is a New York professor of media studies who attempted to navigate the legal system in order to discover what data Cambridge Analytica held about him. 
Meanwhile, Brittany Kaiser is a former Cambridge Analytica employee who ultimately became a whistleblower for the data scandal. == Witness and expert testimony == The United States Senate Judiciary Committee called witnesses to testify about the data breach and data privacy generally. It held two hearings, one focusing on Facebook's role in the breach and privacy on social media, and the other on Cambridge Analytica's role and its impact on data privacy. The former was held on April 10, 2018, where Mark Zuckerberg testified and Senators Chuck Grassley and Dianne Feinstein gave statements. The latter occurred on May 16, 2018, where Professor Eitan Hersh, Dr. Mark Jamison, and Christopher Wylie testified, while Senators Grassley and Feinstein again made statements. === Mark Zuckerberg === During his testimony before Congress on April 10, 2018, Zuckerberg said it was his personal mistake that he did not do enough to prevent Facebook from being used for harm: "That goes for fake news, foreign interference in elections and hate speech". During the testimony, Zuckerberg publicly apologized for the breach of private data: "It was my mistake, and I'm sorry. I started Facebook, I run it, and I'm responsible for what happens here". Zuckerberg said that in 2013 Aleksandr Kogan had created a personality quiz app, which was installed by 300,000 people. The app was then able to retrieve Facebook information, including that of the users' friends, which Kogan obtained. It was not until 2015 that Zuckerberg learned that Kogan had shared these users' information with Cambridge Analytica. Cambridge Analytica was subsequently asked to delete all the data. It was later discovered by The Guardian, The New York Times and Channel 4 that the data had in fact not been deleted. === Eitan Hersh === In 2015, Eitan Hersh published Hacking the Electorate: How Campaigns Perceive Voters, which analyzed the databases used for campaigns between 2008 and 2014. 
On May 16, 2018, Eitan Hersh, a professor of political science at Tufts University, testified before Congress as an expert on voter targeting. Hersh claimed that the voter targeting by Cambridge Analytica did not significantly affect the outcome of the 2016 election because the techniques Cambridge Analytica used were similar to those of presidential campaigns well before 2016. Further, he claimed that the correlation between user "likes" and personality traits was weak, and thus the psychological profiling of users was also weak. === Mark Jamison === Mark Jamison, the director and Gunter Professor of the Public Utility Research Center at the University of Florida, also testified before Congress on May 16, 2018, as an expert. Jamison reiterated that it was not unusual for presidential campaigns to use data like Facebook's to profile voters; Presidents Barack Obama and George W. Bush also used models to micro-target voters. Jamison criticized Facebook for not being "clear and candid with its users", because users were not aware of the extent to which their data would be used. Jamison finished his testimony by saying that if the federal government were to regulate voter targeting on sites like Facebook, it would harm the users of those sites because it would be too restrictive of the sites and would make things worse for regulators. === Christopher Wylie === On May 16, 2018, Christopher Wylie, who is considered the "whistleblower" on Cambridge Analytica and served as its Director of Research in 2013 and 2014, also testified to the United States Senate Judiciary Committee. He was considered a witness by both British and American authorities, and he claims he decided to blow the whistle to "protect democratic institutions from rogue actors and hostile foreign interference, as well as ensure the safety of Americans online." 
He claimed that at Cambridge Analytica "anything goes" and that Cambridge Analytica was "a corrupting force in the world." He detailed to Congress how Cambridge Analytica used Facebook's data to categorize people into groups based on political ideology. He also claimed that, by saying Facebook's categorization of people was weak, Eitan Hersh contradicted "copious amounts of peer-reviewed literature in top scientific journals, including the Proceedings of the National Academy of Sciences, Psychological Science, and Journal of Personality and Individual Differences". Christopher Wylie also testified about Russian contact with Cambridge Analytica and the campaign, voter disengagement, and his thoughts on Facebook's response. == Aftermath == Following the downfall of Cambridge Analytica, a number of related companies were established by people formerly affiliated with it, including Emerdata Limited and Auspex International. At first, Julian Wheatland, the former CEO of Cambridge Analytica and former director of many SCL-connected firms, stated that they did not plan on reestablishing the two companies. Instead, the directors and owners of Cambridge Analytica and its London-based parent SCL Group strategically positioned themselves to be acquired in the face of bankruptcy procedures and lawsuits. While employees of both companies dispersed to successor firms, Cambridge Analytica and SCL were acquired by Emerdata Limited, a data processing company. Wheatland responded to news of this story by emphasizing that Emerdata would not inherit the SCL companies' existing data or assets, and that this information belongs to the administrators in charge of the SCL companies' bankruptcy. David Carroll, an American professor who sued Cambridge Analytica, stated that Emerdata was aiming to conceal the scandals and minimize further criticism. 
Carroll's lawyers argued that Cambridge Analytica's court administrators were acting unlawfully by liquidating the company's assets before a full investigation had been performed. While these proceedings left SCL Group with a criminal conviction and a $26,000 fine, a U.K. court denied Carroll's lawsuit, allowing SCL to dissolve without turning over his data. In October 2021, after Facebook employee Frances Haugen blew the whistle on the company's practices, NPR revisited the Cambridge Analytica data scandal, observing that Facebook never took responsibility for its behavior and that consumers saw no reforms as a result. In August 2022, Facebook agreed to settle a lawsuit seeking damages in the case for an undisclosed sum. In December 2022, Meta Platforms agreed to pay $725 million to settle a private class-action lawsuit related to the improper sharing of user data with Cambridge Analytica and other third-party companies. == Documentary == The documentary The Great Hack, produced by Netflix, examines how Cambridge Analytica used the personal data of millions of Facebook users without their consent to influence electoral processes, including Donald Trump's 2016 presidential campaign and the Brexit referendum in the UK. It highlights how the exploitation of personal data can compromise free elections in democratic countries, as well as privacy and individual freedoms. The documentary features interviews with several key figures: Brittany Kaiser: former director of business development at Cambridge Analytica, who decided to blow the whistle by providing inside information about the company's operations. David Carroll: professor at Parsons and The New School, who fought in court to obtain a copy of his personal data used by Cambridge Analytica, to help shed light on how the scheme worked. Carole Cadwalladr: British investigative journalist who uncovered the scandal. 
== See also == AggregateIQ BeLeave The Great Hack, 2019 documentary film Russian interference in the 2016 Brexit referendum State-sponsored Internet propaganda Timeline of investigations into Trump and Russia (2019) == Notes == == References == == External links == BBC Coverage The Guardian Coverage Carole Cadwalladr @TED2019: Facebook's role in Brexit — and the threat to democracy New York Times Coverage The Guardian Article; Revealed "How the Facebook-Cambridge Analytica Saga Unfolded". Bloomberg.com. March 21, 2018. "The Facebook Dilemma". FRONTLINE. Season 37. Episode 4–5. October 29–30, 2018. PBS. WGBH. Retrieved November 10, 2022.
Artificial intelligence in hiring
Artificial intelligence can be used to automate aspects of the job recruitment process. Advances in artificial intelligence, such as the advent of machine learning and the growth of big data, enable AI to be used to recruit, screen, and predict the success of applicants. Proponents of artificial intelligence in hiring claim it reduces bias, assists with finding qualified candidates, and frees up human resource workers' time for other tasks, while opponents worry that AI perpetuates inequalities in the workplace and will eliminate jobs. Despite the potential benefits, the ethical implications of AI in hiring remain a subject of debate, with concerns about algorithmic transparency, accountability, and the need for ongoing oversight to ensure fair and unbiased decision-making throughout the recruitment process. == Background == Artificial intelligence has fascinated researchers since the term was coined in the mid-1950s. Researchers have identified four main forms of intelligence that AI would need to possess to truly replace humans in the workplace: mechanical, analytical, intuitive, and empathetic. Automation follows a predictable progression in which it is first able to replace mechanical tasks, then analytical tasks, then intuitive tasks, and finally empathy-based tasks. However, full automation is not the only potential outcome of AI advancements. Humans may instead work alongside machines, enhancing the effectiveness of both. In the hiring context, this means that AI has already replaced many basic human resource tasks in recruitment and screening, while freeing up time for human resource workers to do other, more creative tasks that cannot yet be automated or do not make fiscal sense to automate. It also means that the types of jobs companies recruit and hire for will continue to shift as the most valuable skillsets change. Human resources has been identified as one of the ten industries most affected by AI. 
It is increasingly common for companies to use AI to automate aspects of their hiring process. The hospitality, finance, and tech industries in particular have incorporated AI into their hiring processes to a significant extent. Human resources is fundamentally an industry based around making predictions. Human resource specialists must predict which people would make quality candidates for a job, which marketing strategies would get those people to apply, which applicants would make the best employees, what kinds of compensation would get them to accept an offer, what is needed to retain an employee, which employees should be promoted, and what a company's staffing needs are, among other things. AI is particularly adept at prediction because it can analyze huge amounts of data. This enables AI to make insights many humans would miss and to find connections between seemingly unrelated data points, which provides value to a company and has made it advantageous to use AI to automate or augment many human resource tasks. == Uses == === Screeners === Screeners are tests that allow companies to sift through a large applicant pool and extract applicants who have desirable features. Companies commonly screen through the use of questionnaires, coding tests, interviews, and resume analysis. Artificial intelligence already plays a major role in the screening process. Resumes can be analyzed using AI for desirable characteristics, such as a certain amount of work experience or a relevant degree. Interviews can then be extended to applicants whose resumes contain these characteristics. Which factors are used to screen applicants is a concern to ethicists and civil rights activists. A screener that favors people who have similar characteristics to those already employed at a company may perpetuate inequalities. 
For example, if a company that is predominantly white and male uses its employees' data to train its screener, it may accidentally create a screening process that favors white, male applicants. The automation of screeners also has the potential to reduce biases. Biases against applicants with African American-sounding names have been shown in multiple studies. An AI screener has the potential to limit human bias and error in the hiring process, allowing more minority applicants to be successful. === Recruitment === Recruitment involves the identification of potential applicants and the marketing of positions. AI is commonly utilized in the recruitment process because it can help boost the number of qualified applicants for positions. Companies are able to use AI to target their marketing to applicants who are likely to be good fits for a position. This often involves the use of social media sites' advertising tools, which rely on AI. Facebook allows advertisers to target ads based on demographics, location, interests, behavior, and connections. Facebook also allows companies to target a "look-alike" audience: the company supplies Facebook with a data set, typically the company's current employees, and Facebook targets the ad to profiles that are similar to the profiles in the data set. Additionally, job sites like Indeed, Glassdoor, and ZipRecruiter target job listings to applicants who have certain characteristics employers are looking for. Targeted advertising has many advantages for companies trying to recruit, such as more efficient use of resources, reaching a desired audience, and boosting qualified applicants. This has helped make it a mainstay in modern hiring. Who receives a targeted ad can be controversial. In hiring, the implications of targeted ads have to do with who is able to find out about and then apply to a position. Most targeted ad algorithms are proprietary information. 
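Conceptually, look-alike matching amounts to scoring candidate profiles by their similarity to a seed set. The toy sketch below, using invented feature vectors and a simple cosine similarity to the seed centroid, only illustrates the idea; actual ad-platform look-alike algorithms are proprietary and far more elaborate.

```python
# Toy sketch of "look-alike" audience matching: rank candidate profiles by
# cosine similarity to the centroid of a seed set (e.g. current employees).
# The feature vectors here are made-up illustrative numbers.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def lookalike(seed_profiles, candidates, top_k=2):
    # Average the seed profiles into a single centroid vector,
    # then rank candidates by similarity to that centroid.
    n = len(seed_profiles)
    centroid = [sum(col) / n for col in zip(*seed_profiles)]
    ranked = sorted(candidates, key=lambda c: cosine(c, centroid), reverse=True)
    return ranked[:top_k]

seed = [[1.0, 0.9, 0.1], [0.8, 1.0, 0.0]]            # seed set
pool = [[0.9, 0.95, 0.05], [0.1, 0.2, 1.0], [0.7, 0.8, 0.2]]
print(lookalike(seed, pool))  # most similar candidates first
```

The same centroid-plus-similarity pattern is why such systems can replicate the makeup of the seed set: candidates who resemble existing employees score highest by construction.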
Some platforms, like Facebook and Google, allow users to see why they were shown a specific ad, but users who do not receive the ad likely never know of its existence and also have no way of knowing why they were not shown it. === Interviews === Chatbots were one of the first applications of AI and are commonly used in the hiring process. Interviewees interact with chatbots to answer interview questions, and their responses can then be analyzed by AI, providing prospective employers with a myriad of insights. Chatbots streamline the interview process and reduce the workload of human resource professionals. Video interviews utilizing AI have become increasingly prevalent. Zappyhire, a recruitment automation startup, has developed a recruitment bot that uses AI-powered resume screening to engage the most relevant candidates. HireVue has created technology that analyzes interviewees' responses and gestures during recorded video interviews. Over 12 million interviewees have been screened by the more than 700 companies that utilize the service. == Controversies == Artificial intelligence in hiring confers many benefits, but it also presents challenges that have concerned experts. AI is only as good as the data it is using. Biases can inadvertently be baked into the data used in AI. Often companies will use data from their own employees to decide which people to recruit or hire. This can perpetuate bias and lead to more homogeneous workforces. Facebook Ads was an example of a platform that created such controversy by allowing business owners to specify what type of employee they were looking for. For example, job advertisements for nursing and teaching could be set such that only women of a specific age group would see them. Facebook Ads has since removed this function from its platform, citing the potential for the function to perpetuate biases and stereotypes against minorities. 
The growing use of artificial-intelligence-enabled hiring systems has become an important component of modern talent hiring, particularly through social networks such as LinkedIn and Facebook. However, data overflow embedded in hiring systems based on natural language processing (NLP) methods may result in unconscious gender bias; utilizing data-driven methods may mitigate some of the bias these systems generate. It can also be hard to quantify what makes a good employee. This poses a challenge for training AI to predict which employees will be best. Commonly used metrics like performance reviews can be subjective and have been shown to favor white employees over black employees and men over women. Another challenge is the limited amount of available data. Employers only collect certain details about candidates during the initial stages of the hiring process. This requires AI to make determinations about candidates with very limited information to go on. Additionally, many employers do not hire employees frequently and so have limited firm-specific data. To combat this, many firms will use algorithms and data from other firms in their industry. AI's reliance on the personal data of applicants and current employees raises privacy issues. These issues affect both applicants and current employees, but may also have implications for third parties who are linked through social media to applicants or current employees. For example, a sweep of someone's social media will also show their friends and people they have tagged in photos or posts. AI makes it easier for companies to search applicants' social media accounts. A study conducted by Monash University found that 45% of hiring managers use social media to gain insight on applicants. 
Seventy percent of those surveyed said they had rejected an applicant because of things discovered on the applicant's social media, yet only 17% of hiring managers saw using social media in the hiring process as a violation of applicants' privacy. Using social media in the hiring process is appealing to hiring managers because it offers them a less curated view of applicants' lives. The privacy trade-off is significant. Social media profiles often reveal information about applicants that human resource departments are legally not allowed to require applicants to divulge, such as race, ability status, and sexual orientation. == AI and the future of hiring == Artificial intelligence is changing the recruiting process by gradually replacing routine tasks performed by human recruiters. AI can reduce human involvement in hiring and reduce the human biases that hinder effective hiring decisions. Some platforms go further: Talairo, for example, is an AI-powered talent platform designed to optimize hiring for agencies and enterprises, using patented AI models to match job descriptions with candidates, automate administrative tasks, and provide hiring insights. AI is changing the way work is done. Artificial intelligence, along with other technological advances such as improvements in robotics, has placed 47% of jobs at risk of being eliminated in the near future. Some classify the shifts in labor brought about by AI as a fourth industrial revolution, which they call Industrial Revolution 4.0. According to some scholars, however, the transformative impact of AI on labor has been overstated. The "no-real-change" theory holds that an IT revolution has already occurred, but that the benefits of implementing new technologies do not outweigh the costs associated with adopting them. This theory claims that the result of the IT revolution is thus much less impactful than had originally been forecast. 
Other scholars refute this theory, claiming that AI has already led to significant job loss for unskilled labor and that it will eliminate middle-skill and high-skill jobs in the future. This position is based on the idea that AI is not yet a technology of general use and that any potential fourth industrial revolution has not fully occurred. A third theory holds that the effect of AI and other technological advances is too complicated to yet be understood. This theory is centered on the idea that while AI will likely eliminate jobs in the short term, it will also likely increase the demand for other jobs. The question then becomes whether the new jobs will be accessible to people and whether they will emerge around the time the old jobs are eliminated. Although robots can replace people for some tasks, there are still many tasks that cannot be done by robots alone, even robots that master artificial intelligence. A study analyzed 2,000 work tasks in 800 different occupations globally and concluded that half (totaling US$15 trillion in salaries) could be automated by adapting already existing technologies. Less than 5% of occupations could be fully automated, while 60% have at least 30% automatable tasks. In other words, in most cases artificial intelligence is a tool rather than a substitute for labor. As artificial intelligence has entered the field of human work, people have gradually discovered that it is incapable of unique tasks, and that the advantage of human beings is to understand uniqueness and use tools rationally. From this, reciprocal human–machine work has emerged. Brandão finds that people can form organic partnerships with machines: “Humans enable machines to do what they do best: doing repetitive tasks, analyzing significant volumes of data, and dealing with routine cases. 
Due to reciprocity, machines enable humans to have their potentialities "strengthened" for tasks such as resolving ambiguous information, exercising judgment on difficult cases, and contacting dissatisfied clients.” Daugherty and Wilson have observed successful new types of human–computer interaction in occupations and tasks across various fields. In other words, even in activities and capabilities that are considered simpler, new technologies will not pose an imminent danger to workers. General Electric, for example, notes that buyers of its equipment will always need maintenance workers, and entrepreneurs need workers who can work well with new systems that integrate their skills with advanced technologies in novel ways. Artificial intelligence has sped up the hiring process considerably while dramatically reducing costs. For example, Unilever has reviewed over 250,000 applications using AI and reduced its hiring process from four months to four weeks, saving the company 50,000 hours of labor. The increased efficiency AI promises has sped up its adoption by human resource departments globally. == Regulations on AI in hiring == The Artificial Intelligence Video Interview Act, effective in Illinois since 2020, regulates the use of AI to analyze and evaluate job applicants' video interviews. The law requires employers to follow guidelines to avoid issues when using AI in the hiring process. == References ==
QST (genetics)
In quantitative genetics, QST is a statistic intended to measure the degree of genetic differentiation among populations with regard to a quantitative trait. It was developed by Ken Spitze in 1993. Its name reflects that QST was intended to be analogous to the fixation index for a single genetic locus (FST). QST is often compared with FST of neutral loci to test whether variation in a quantitative trait is a result of divergent selection or genetic drift, an analysis known as QST–FST comparisons. == Calculation of QST == === Equations === QST represents the proportion of variance among subpopulations, and its calculation is analogous to that of FST, developed by Sewall Wright. However, instead of using genetic differentiation, QST is calculated from the variance of a quantitative trait within and among subpopulations, and for the total population. The variance of a quantitative trait among populations (σ2GB) is described as: σ G B 2 = 2 Q S T σ T 2 {\displaystyle \sigma _{GB}^{2}=2Q_{ST}\sigma _{T}^{2}} and the variance of a quantitative trait within populations (σ2GW) is described as: σ G W 2 = ( 1 − Q S T ) σ T 2 {\displaystyle \sigma _{GW}^{2}=(1-Q_{ST})\sigma _{T}^{2}} where σ2T is the total genetic variance over all populations. Therefore, QST can be calculated with the following equation: Q S T = σ G B 2 σ G B 2 + 2 σ G W 2 {\displaystyle Q_{ST}={\frac {\sigma _{GB}^{2}}{\sigma _{GB}^{2}+2\sigma _{GW}^{2}}}} === Assumptions === Calculation of QST is subject to several assumptions: populations must be in Hardy–Weinberg equilibrium, observed variation is assumed to be due to additive genetic effects only, selection and linkage disequilibrium are not present, and the subpopulations exist within an island model. == QST–FST comparisons == QST–FST analyses often involve culturing organisms in consistent environmental conditions, known as common garden experiments, and comparing the phenotypic variance to genetic variance. 
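Given estimated variance components, the QST formula above is a one-line computation. A minimal sketch follows; the variance components used here are made-up illustrative numbers, whereas in practice they would come from a mixed-model analysis of common-garden data.

```python
# Sketch: computing Q_ST from among- and within-population additive genetic
# variance components, per Q_ST = s2_GB / (s2_GB + 2 * s2_GW).
# The numbers below are illustrative, not estimates from any real study.

def q_st(var_between: float, var_within: float) -> float:
    """Proportion of additive genetic variance among populations."""
    return var_between / (var_between + 2 * var_within)

sigma2_gb = 0.8  # hypothetical among-population variance
sigma2_gw = 0.5  # hypothetical within-population variance

print(q_st(sigma2_gb, sigma2_gw))  # 0.8 / (0.8 + 1.0) ≈ 0.444
```

The resulting value would then be compared against FST from neutral markers: a QST well above FST suggests divergent selection, well below suggests balancing selection.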
If QST is found to exceed FST, this is interpreted as evidence of divergent selection, because it indicates more differentiation in the trait than could be produced by genetic drift alone. If QST is less than FST, balancing selection is expected to be present. If the values of QST and FST are equivalent, the observed trait differentiation could be due to genetic drift. Suitable comparison of QST and FST is subject to multiple ecological and evolutionary assumptions, and since the development of QST, multiple studies have examined the limitations and constraints of QST–FST analyses. Leinonen et al. note that FST must be calculated with neutral loci, but that over-filtering of non-neutral loci can artificially reduce FST values. Cubry et al. found that QST is reduced in the presence of dominance, resulting in conservative estimates of divergent selection when QST is high, and inconclusive results of balancing selection when QST is low. Additionally, population structure can significantly impact QST–FST ratios. Stepping-stone models, which can generate more evolutionary noise than island models, are more likely to experience type I errors. If a subset of populations acts as sources, such as during invasion, weighting the genetic contributions of each population can increase detection of adaptation. In order to improve the precision of QST analyses, more populations (>20) should be included in analyses. == QST applications in literature == Multiple studies have incorporated QST to separate the effects of natural selection and genetic drift, and QST is often observed to exceed FST, indicating local adaptation. In an ecological restoration study, Bower and Aitken used QST to evaluate suitable populations for seed transfer of whitebark pine. They found high QST values in many populations, suggesting local adaptation for cold-adapted characteristics. During an assessment of the invasive species Brachypodium sylvaticum, Marchini et al. 
found divergence between native and invasive populations during initial establishment in the invaded range, but minimal divergence during range expansion. In an examination of the common snapdragon (Antirrhinum majus) along an elevation gradient, QST–FST analyses revealed different adaptation trends between two subspecies (A. m. pseudomajus and A. m. striatum). While both subspecies occur at all elevations, A. m. striatum had high QST values for traits associated with altitude adaptation: plant height, number of branches, and internode length. A. m. pseudomajus had lower QST than FST values for germination time.

== See also ==
F-statistics
Quantitative genetics
Conservation genetics
Divergent selection
Genetic diversity

== References ==
Neural scaling law
In machine learning, a neural scaling law is an empirical scaling law that describes how neural network performance changes as key factors are scaled up or down. These factors typically include the number of parameters, training dataset size, and training cost. == Introduction == In general, a deep learning model can be characterized by four parameters: model size, training dataset size, training cost, and the post-training error rate (e.g., the test set error rate). Each of these variables can be defined as a real number, usually written as N , D , C , L {\displaystyle N,D,C,L} (respectively: parameter count, dataset size, computing cost, and loss). A neural scaling law is a theoretical or empirical statistical law between these parameters. There are also other parameters with other scaling laws. === Size of the model === In most cases, the model's size is simply the number of parameters. However, one complication arises with the use of sparse models, such as mixture-of-expert models. With sparse models, during inference, only a fraction of their parameters are used. In comparison, most other kinds of neural networks, such as transformer models, always use all their parameters during inference. === Size of the training dataset === The size of the training dataset is usually quantified by the number of data points within it. Larger training datasets are typically preferred, as they provide a richer and more diverse source of information from which the model can learn. This can lead to improved generalization performance when the model is applied to new, unseen data. However, increasing the size of the training dataset also increases the computational resources and time required for model training. With the "pretrain, then finetune" method used for most large language models, there are two kinds of training dataset: the pretraining dataset and the finetuning dataset. Their sizes have different effects on model performance. 
Generally, the finetuning dataset is less than 1% the size of the pretraining dataset. In some cases, a small amount of high-quality data suffices for finetuning, and more data does not necessarily improve performance. === Cost of training === Training cost is typically measured in terms of time (how long it takes to train the model) and computational resources (how much processing power and memory are required). It is important to note that the cost of training can be significantly reduced with efficient training algorithms, optimized software libraries, and parallel computing on specialized hardware such as GPUs or TPUs. The cost of training a neural network model is a function of several factors, including model size, training dataset size, the training algorithm complexity, and the computational resources available. In particular, doubling the training dataset size does not necessarily double the cost of training, because one may train the model several times over the same dataset (each pass being an "epoch"). === Performance === The performance of a neural network model is evaluated based on its ability to accurately predict the output given some input data. Common metrics for evaluating model performance include: Negative log-likelihood per token (logarithm of perplexity) for language modeling; Accuracy, precision, recall, and F1 score for classification tasks; Mean squared error (MSE) or mean absolute error (MAE) for regression tasks; Elo rating in a competition against other models, such as gameplay or preference by a human judge. Performance can be improved by using more data, larger models, different training algorithms, regularizing the model to prevent overfitting, and early stopping using a validation set. When the performance is a number bounded within the range of [ 0 , 1 ] {\displaystyle [0,1]} , such as accuracy, precision, etc., it often scales as a sigmoid function of cost, as seen in the figures.
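The link between the per-token negative log-likelihood and perplexity mentioned above can be made concrete; a minimal sketch:

```python
import math

def perplexity(nll_per_token: float) -> float:
    """Perplexity is the exponential of the mean negative log-likelihood per token (in nats)."""
    return math.exp(nll_per_token)

# A model with an average loss of 1.0 nat/token has perplexity e ≈ 2.718;
# a loss of 0 nats/token (perfect prediction) has perplexity 1.
print(perplexity(1.0))
```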
== Examples == === (Hestness, Narang, et al, 2017) === The 2017 paper is a common reference point for neural scaling laws fitted by statistical analysis on experimental data. Previous works before the 2000s, as cited in the paper, were either theoretical or orders of magnitude smaller in scale. Whereas previous works generally found the loss to scale like L ∝ D − α {\displaystyle L\propto D^{-\alpha }} , with α ∈ { 0.5 , 1 , 2 } {\displaystyle \alpha \in \{0.5,1,2\}} , the paper found that α ∈ [ 0.07 , 0.35 ] {\displaystyle \alpha \in [0.07,0.35]} . Of the factors they varied, only the task can change the exponent α {\displaystyle \alpha } . Changing the architecture, optimizers, regularizers, or loss functions would only change the proportionality factor, not the exponent. For example, for the same task, one architecture might have L = 1000 D − 0.3 {\displaystyle L=1000D^{-0.3}} while another might have L = 500 D − 0.3 {\displaystyle L=500D^{-0.3}} . They also found that, for a given architecture, the number of parameters necessary to reach the lowest levels of loss, given a fixed dataset size, grows like N ∝ D β {\displaystyle N\propto D^{\beta }} for another exponent β {\displaystyle \beta } . They studied machine translation with LSTM ( α ∼ 0.13 {\displaystyle \alpha \sim 0.13} ), generative language modelling with LSTM ( α ∈ [ 0.06 , 0.09 ] , β ≈ 0.7 {\displaystyle \alpha \in [0.06,0.09],\beta \approx 0.7} ), ImageNet classification with ResNet ( α ∈ [ 0.3 , 0.5 ] , β ≈ 0.6 {\displaystyle \alpha \in [0.3,0.5],\beta \approx 0.6} ), and speech recognition with two hybrid (LSTMs complemented by either CNNs or an attention decoder) architectures ( α ≈ 0.3 {\displaystyle \alpha \approx 0.3} ).
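Exponents such as α are typically estimated by fitting a straight line in log–log space, since L ∝ D^−α implies that log L is linear in log D. A self-contained sketch on synthetic data (all names are illustrative):

```python
import math

# Synthetic losses following L = c * D^(-alpha) with alpha = 0.3, c = 1000.
data = [(10 ** k, 1000.0 * (10 ** k) ** -0.3) for k in range(4, 9)]

# A power law is linear in log-log space: log L = log c - alpha * log D,
# so fit a line to the logs by ordinary least squares.
xs = [math.log(d) for d, _ in data]
ys = [math.log(l) for _, l in data]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
alpha_hat = -slope                      # recovers alpha ≈ 0.3
c_hat = math.exp(ybar - slope * xbar)   # recovers c ≈ 1000
print(alpha_hat, c_hat)
```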
=== (Henighan, Kaplan, et al, 2020) === A 2020 analysis studied statistical relations between C , N , D , L {\displaystyle C,N,D,L} over a wide range of values and found similar scaling laws over the range of N ∈ [ 10 3 , 10 9 ] {\displaystyle N\in [10^{3},10^{9}]} , C ∈ [ 10 12 , 10 21 ] {\displaystyle C\in [10^{12},10^{21}]} , and over multiple modalities (text, video, image, text to image, etc.). In particular, the scaling laws it found are (Table 1 of ): For each modality, they fixed one of the two C , N {\displaystyle C,N} and varied the other ( D {\displaystyle D} is varied along using D = C / 6 N {\displaystyle D=C/6N} ); the achievable test loss satisfies L = L 0 + ( x 0 x ) α {\displaystyle L=L_{0}+\left({\frac {x_{0}}{x}}\right)^{\alpha }} where x {\displaystyle x} is the varied variable, and L 0 , x 0 , α {\displaystyle L_{0},x_{0},\alpha } are parameters to be found by statistical fitting. The parameter α {\displaystyle \alpha } is the most important one. When N {\displaystyle N} is the varied variable, α {\displaystyle \alpha } ranges from 0.037 {\displaystyle 0.037} to 0.24 {\displaystyle 0.24} depending on the model modality. This corresponds to the α = 0.34 {\displaystyle \alpha =0.34} from the Chinchilla scaling paper. When C {\displaystyle C} is the varied variable, α {\displaystyle \alpha } ranges from 0.048 {\displaystyle 0.048} to 0.19 {\displaystyle 0.19} depending on the model modality. This corresponds to the β = 0.28 {\displaystyle \beta =0.28} from the Chinchilla scaling paper. Given a fixed compute budget, the optimal model parameter count is consistently around N o p t ( C ) = ( C 5 × 10 − 12 petaFLOP-day ) 0.7 = 9.0 × 10 − 7 C 0.7 {\displaystyle N_{opt}(C)=\left({\frac {C}{5\times 10^{-12}{\text{petaFLOP-day}}}}\right)^{0.7}=9.0\times 10^{-7}C^{0.7}} The parameter 9.0 × 10 − 7 {\displaystyle 9.0\times 10^{-7}} varies by a factor of up to 10 for different modalities.
The exponent parameter 0.7 {\displaystyle 0.7} varies from 0.64 {\displaystyle 0.64} to 0.75 {\displaystyle 0.75} for different modalities. This exponent corresponds to the ≈ 0.5 {\displaystyle \approx 0.5} from the Chinchilla scaling paper. It's "strongly suggested" (but not statistically checked) that D o p t ( C ) ∝ N o p t ( C ) 0.4 ∝ C 0.28 {\displaystyle D_{opt}(C)\propto N_{opt}(C)^{0.4}\propto C^{0.28}} . This exponent corresponds to the ≈ 0.5 {\displaystyle \approx 0.5} from the Chinchilla scaling paper. The scaling law of L = L 0 + ( C 0 / C ) 0.048 {\displaystyle L=L_{0}+(C_{0}/C)^{0.048}} was confirmed during the training of GPT-3 (Figure 3.1 ). === Chinchilla scaling (Hoffmann, et al, 2022) === One particular scaling law ("Chinchilla scaling") states that, for a large language model (LLM) autoregressively trained for one epoch, with a cosine learning rate schedule, we have: { C = C 0 N D L = A N α + B D β + L 0 {\displaystyle {\begin{cases}C=C_{0}ND\\L={\frac {A}{N^{\alpha }}}+{\frac {B}{D^{\beta }}}+L_{0}\end{cases}}} where the variables are C {\displaystyle C} is the cost of training the model, in FLOPS. N {\displaystyle N} is the number of parameters in the model. D {\displaystyle D} is the number of tokens in the training set. L {\displaystyle L} is the average negative log-likelihood loss per token (nats/token), achieved by the trained LLM on the test dataset. L 0 {\displaystyle L_{0}} represents the loss of an ideal generative process on the test data A N α {\displaystyle {\frac {A}{N^{\alpha }}}} captures the fact that a Transformer language model with N {\displaystyle N} parameters underperforms the ideal generative process B D β {\displaystyle {\frac {B}{D^{\beta }}}} captures the fact that the model trained on D {\displaystyle D} tokens underperforms the ideal generative process and the statistical parameters are C 0 = 6 {\displaystyle C_{0}=6} , meaning that it costs 6 FLOPs per parameter to train on one token. 
This is estimated by Kaplan et al. Note that training cost is much higher than inference cost, as training entails both forward and backward passes, whereas inference costs 1 to 2 FLOPs per parameter to infer on one token. α = 0.34 , β = 0.28 , A = 406.4 , B = 410.7 , L 0 = 1.69 {\displaystyle \alpha =0.34,\beta =0.28,A=406.4,B=410.7,L_{0}=1.69} . However, Besiroglu et al. claim that the statistical estimation is slightly off and should instead be α = 0.35 , β = 0.37 , A = 482.01 , B = 2085.43 , L 0 = 1.82 {\displaystyle \alpha =0.35,\beta =0.37,A=482.01,B=2085.43,L_{0}=1.82} . The statistical laws were fitted over experimental data with N ∈ [ 7 × 10 7 , 1.6 × 10 10 ] , D ∈ [ 5 × 10 9 , 5 × 10 11 ] , C ∈ [ 10 18 , 10 24 ] {\displaystyle N\in [7\times 10^{7},1.6\times 10^{10}],D\in [5\times 10^{9},5\times 10^{11}],C\in [10^{18},10^{24}]} . Since there are 4 variables related by 2 equations, imposing 1 additional constraint and 1 additional optimization objective allows us to solve for all four variables. In particular, for any fixed C {\displaystyle C} , we can uniquely solve for the values of all 4 variables that minimize L {\displaystyle L} . This provides us with the optimal D o p t ( C ) , N o p t ( C ) {\displaystyle D_{opt}(C),N_{opt}(C)} for any fixed C {\displaystyle C} : N o p t ( C ) = G ( C 6 ) a , D o p t ( C ) = G − 1 ( C 6 ) b , where G = ( α A β B ) 1 α + β , a = β α + β , and b = α α + β . {\displaystyle N_{opt}(C)=G\left({\frac {C}{6}}\right)^{a},\quad D_{opt}(C)=G^{-1}\left({\frac {C}{6}}\right)^{b},\quad {\text{ where }}\quad G=\left({\frac {\alpha A}{\beta B}}\right)^{\frac {1}{\alpha +\beta }},\quad a={\frac {\beta }{\alpha +\beta }}{\text{, and }}b={\frac {\alpha }{\alpha +\beta }}{\text{. 
}}} Plugging in the numerical values, we obtain the "Chinchilla efficient" model size and training dataset size, as well as the test loss achievable: { N o p t ( C ) = 0.6 C 0.45 D o p t ( C ) = 0.3 C 0.55 L o p t ( C ) = 1070 C − 0.154 + 1.7 {\displaystyle {\begin{cases}N_{opt}(C)=0.6\;C^{0.45}\\D_{opt}(C)=0.3\;C^{0.55}\\L_{opt}(C)=1070\;C^{-0.154}+1.7\end{cases}}} Similarly, we may find the optimal training dataset size and training compute budget for any fixed model parameter size, and so on. There are other estimates for "Chinchilla efficient" model size and training dataset size. The above is based on a statistical model of L = A N α + B D β + L 0 {\displaystyle L={\frac {A}{N^{\alpha }}}+{\frac {B}{D^{\beta }}}+L_{0}} . One can also directly fit a statistical law for D o p t ( C ) , N o p t ( C ) {\displaystyle D_{opt}(C),N_{opt}(C)} without going through the detour, for which one obtains: { N o p t ( C ) = 0.1 C 0.5 D o p t ( C ) = 1.7 C 0.5 {\displaystyle {\begin{cases}N_{opt}(C)=0.1\;C^{0.5}\\D_{opt}(C)=1.7\;C^{0.5}\end{cases}}} or as tabulated: ==== Discrepancy ==== The Chinchilla scaling law analysis for training transformer language models suggests that for a given training compute budget ( C {\displaystyle C} ), to achieve the minimal pretraining loss for that budget, the number of model parameters ( N {\displaystyle N} ) and the number of training tokens ( D {\displaystyle D} ) should be scaled in equal proportions, N o p t ( C ) ∝ C 0.5 , D o p t ( C ) ∝ C 0.5 {\displaystyle N_{opt}(C)\propto C^{0.5},D_{opt}(C)\propto C^{0.5}} . This conclusion differs from analysis conducted by Kaplan et al., which found that N {\displaystyle N} should be increased more quickly than D {\displaystyle D} , N o p t ( C ) ∝ C 0.73 , D o p t ( C ) ∝ C 0.27 {\displaystyle N_{opt}(C)\propto C^{0.73},D_{opt}(C)\propto C^{0.27}} . This discrepancy can primarily be attributed to the two studies using different methods for measuring model size. 
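The closed-form allocation N_opt(C), D_opt(C) derived above can be evaluated numerically. A minimal sketch using the fitted constants quoted in the text (the function name is illustrative); because a + b = 1, the constraint C = 6ND holds exactly:

```python
# Chinchilla-optimal split of a compute budget C (FLOPs) between
# parameters N and tokens D, using alpha=0.34, beta=0.28, A=406.4, B=410.7.
alpha, beta = 0.34, 0.28
A, B = 406.4, 410.7

G = (alpha * A / (beta * B)) ** (1.0 / (alpha + beta))
a = beta / (alpha + beta)
b = alpha / (alpha + beta)

def chinchilla_optimal(C: float):
    """Return (N_opt, D_opt) minimizing loss subject to C = 6*N*D."""
    n_opt = G * (C / 6.0) ** a
    d_opt = (1.0 / G) * (C / 6.0) ** b
    return n_opt, d_opt

# Sanity check: the allocation must respect the compute constraint C = 6*N*D.
n, d = chinchilla_optimal(1e23)
print(n, d, 6 * n * d)
```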
Kaplan et al.: did not count the parameters in the token embedding layer, which when analyzed at smaller model sizes leads to biased coefficients; studied smaller models than the Chinchilla group, magnifying the effect; assumed that L ∞ = 0 {\displaystyle L_{\infty }=0} . Secondary effects also arise due to differences in hyperparameter tuning and learning rate schedules. Kaplan et al.: used a warmup schedule that was too long for smaller models, making them appear less efficient; did not fully tune optimization hyperparameters. ==== Beyond Chinchilla scaling ==== As Chinchilla scaling has been the reference point for many large-scale training runs, there has been a concurrent effort to go "beyond Chinchilla scaling", meaning to modify some of the training pipeline in order to obtain the same loss with less effort, or deliberately train for longer than what is "Chinchilla optimal". Usually, the goal is to make the scaling law exponent larger, which means the same loss can be trained for much less compute. For instance, filtering data can make the scaling law exponent larger. Another strand of research studies how to deal with limited data, as according to Chinchilla scaling laws, the training dataset size for the largest language models already approaches what is available on the internet. One study found that augmenting the dataset with a mix of "denoising objectives" constructed from the dataset improves performance. Another studies optimal scaling when all available data is already exhausted (such as in rare languages), so one must train multiple epochs over the same dataset (whereas Chinchilla scaling requires only one epoch). The Phi series of small language models were trained on textbook-like data generated by large language models, for which data is only limited by the amount of compute available. Chinchilla optimality was defined as "optimal for training compute", whereas in actual production-quality models, there will be a lot of inference after training is complete.
"Overtraining" during training means better performance during inference. LLaMA models were overtrained for this reason. Subsequent studies discovered scaling laws in the overtraining regime, for dataset sizes up to 32x more than Chinchilla-optimal. === Broken neural scaling laws (BNSL) === A 2022 analysis found that many scaling behaviors of artificial neural networks follow a smoothly broken power law functional form: y = a + ( b x − c 0 ) ∏ i = 1 n ( 1 + ( x d i ) 1 / f i ) − c i ∗ f i {\displaystyle y=a+{\bigg (}bx^{-c_{0}}{\bigg )}\prod _{i=1}^{n}\left(1+\left({\frac {x}{d_{i}}}\right)^{1/f_{i}}\right)^{-c_{i}*f_{i}}} in which x {\displaystyle x} refers to the quantity being scaled (i.e. C {\displaystyle C} , N {\displaystyle N} , D {\displaystyle D} , number of training steps, number of inference steps, or model input size) and y {\displaystyle y} refers to the downstream (or upstream) performance evaluation metric of interest (e.g. prediction error, cross entropy, calibration error, AUROC, BLEU score percentage, F1 score, reward, Elo rating, solve rate, or FID score) in zero-shot, prompted, or fine-tuned settings. The parameters a , b , c 0 , c 1 . . . c n , d 1 . . . d n , f 1 . . . f n {\displaystyle a,b,c_{0},c_{1}...c_{n},d_{1}...d_{n},f_{1}...f_{n}} are found by statistical fitting. On a log–log plot, when f i {\displaystyle f_{i}} is not too large and a {\displaystyle a} is subtracted out from the y-axis, this functional form looks like a series of linear segments connected by arcs; the n {\displaystyle n} transitions between the segments are called "breaks", hence the name broken neural scaling laws (BNSL). 
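The BNSL functional form can be written as a short function; with no breaks it reduces to an ordinary power law. A minimal sketch (names are illustrative):

```python
def bnsl(x, a, b, c0, breaks=()):
    """Broken neural scaling law:
    y = a + b * x**(-c0) * prod_i (1 + (x/d_i)**(1/f_i))**(-c_i * f_i)

    `breaks` is a sequence of (c_i, d_i, f_i) triples, one per break;
    with no breaks this is the plain power law a + b * x**(-c0).
    """
    prod = 1.0
    for c_i, d_i, f_i in breaks:
        prod *= (1.0 + (x / d_i) ** (1.0 / f_i)) ** (-c_i * f_i)
    return a + b * x ** (-c0) * prod

# No breaks: 0.1 + 2.0 * 100**(-0.5) ≈ 0.3.
print(bnsl(100.0, a=0.1, b=2.0, c0=0.5))
```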
The scenarios in which the scaling behaviors of artificial neural networks were found to follow this functional form include large-scale vision, language, audio, video, diffusion, generative modeling, multimodal learning, contrastive learning, AI alignment, AI capabilities, robotics, out-of-distribution (OOD) generalization, continual learning, transfer learning, uncertainty estimation / calibration, out-of-distribution detection, adversarial robustness, distillation, sparsity, retrieval, quantization, pruning, fairness, molecules, computer programming/coding, math word problems, arithmetic, emergent abilities, double descent, supervised learning, unsupervised/self-supervised learning, and reinforcement learning (single agent and multi-agent). The architectures for which the scaling behaviors of artificial neural networks were found to follow this functional form include residual neural networks, transformers, MLPs, MLP-mixers, recurrent neural networks, convolutional neural networks, graph neural networks, U-nets, encoder-decoder (and encoder-only) (and decoder-only) models, ensembles (and non-ensembles), MoE (mixture of experts) (and non-MoE) models, and sparse pruned (and non-sparse unpruned) models. === Inference scaling === Other than scaling up training compute, one can also scale up inference compute (or "test-time compute"). As an example, the Elo rating of AlphaGo improves steadily as it is allowed to spend more time on its Monte Carlo Tree Search per play.: Fig 4  For AlphaGo Zero, increasing Elo by 120 requires either 2x model size and training, or 2x test-time search. Similarly, a language model for solving competition-level coding challenges, AlphaCode, consistently improved (log-linearly) in performance with more search time. For Hex, 10x training-time compute trades for 15x test-time compute. 
For Libratus playing heads-up no-limit Texas hold 'em, Cicero playing Diplomacy, and many other abstract games of partial information, inference-time search improves performance at a similar tradeoff ratio, for up to a 100,000x effective increase in training-time compute. In 2024, the OpenAI o1 report documented that o1's performance consistently improved with both increased train-time compute and test-time compute, and gave numerous examples of test-time compute scaling in mathematics, scientific reasoning, and coding tasks. One method for scaling up test-time compute is process-based supervision, where a model generates a step-by-step reasoning chain to answer a question, and another model (either human or AI) provides a reward score on some of the intermediate steps, not just the final answer. Process-based supervision can be scaled arbitrarily by using synthetic reward scores without another model, for example, by running Monte Carlo rollouts and scoring each step in the reasoning according to how likely it leads to the right answer. Another method uses revision models, which are models trained to solve a problem multiple times, each time revising the previous attempt. === Other examples === ==== Vision transformers ==== Vision transformers, similar to language transformers, exhibit scaling laws. A 2022 study trained vision transformers, with parameter counts N ∈ [ 5 × 10 6 , 2 × 10 9 ] {\displaystyle N\in [5\times 10^{6},2\times 10^{9}]} , on image sets of sizes D ∈ [ 3 × 10 7 , 3 × 10 9 ] {\displaystyle D\in [3\times 10^{7},3\times 10^{9}]} , for compute budgets C ∈ [ 0.2 , 10 4 ] {\displaystyle C\in [0.2,10^{4}]} (in units of TPUv3-core-days). After pretraining, each model is finetuned on the ImageNet training set. Let L {\displaystyle L} be the error probability of the finetuned model on the ImageNet test set. They found min N , D L = 0.09 + 0.26 ( C + 0.01 ) 0.35 {\displaystyle \min _{N,D}L=0.09+{\frac {0.26}{(C+0.01)^{0.35}}}} .
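Because the fitted vision-transformer curve above is monotone in C, it can be inverted to ask how much compute a target error rate would require. A minimal sketch under the assumption that the quoted constants are exact (function names are illustrative):

```python
def vit_loss(C: float) -> float:
    """Fitted finetuned ImageNet error rate as a function of compute C (TPUv3-core-days)."""
    return 0.09 + 0.26 / (C + 0.01) ** 0.35

def compute_for_error(L: float) -> float:
    """Invert the fitted curve: compute budget needed for an error rate L > 0.09."""
    return (0.26 / (L - 0.09)) ** (1.0 / 0.35) - 0.01

# Round trip: inverting the loss at C = 1000 recovers ≈ 1000 core-days.
print(compute_for_error(vit_loss(1000.0)))
```

Note that the curve saturates at an error floor of 0.09, so targets at or below that floor are unreachable at any compute budget under this fit.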
==== Neural machine translation ==== Ghorbani, Behrooz et al. studied scaling laws for neural machine translation (specifically, English as source and German as target) in encoder-decoder Transformer models, trained until convergence on the same datasets (thus they did not fit scaling laws for computing cost C {\displaystyle C} or dataset size D {\displaystyle D} ). They varied N ∈ [ 10 8 , 3.5 × 10 9 ] {\displaystyle N\in [10^{8},3.5\times 10^{9}]} . They found three results: L {\displaystyle L} is a scaling law function of N E , N D {\displaystyle N_{E},N_{D}} , where N E , N D {\displaystyle N_{E},N_{D}} are encoder and decoder parameter count. It is not simply a function of total parameter count N = N E + N D {\displaystyle N=N_{E}+N_{D}} . The function has form L ( N e , N d ) = α ( N ¯ e N e ) p e ( N ¯ d N d ) p d + L ∞ {\displaystyle L\left(N_{e},N_{d}\right)=\alpha \left({\frac {{\bar {N}}_{e}}{N_{e}}}\right)^{p_{e}}\left({\frac {{\bar {N}}_{d}}{N_{d}}}\right)^{p_{d}}+L_{\infty }} , where α , p e , p d , L ∞ , N ¯ e , N ¯ d {\displaystyle \alpha ,p_{e},p_{d},L_{\infty },{\bar {N}}_{e},{\bar {N}}_{d}} are fitted parameters. They found that N d / N ≈ 0.55 {\displaystyle N_{d}/N\approx 0.55} minimizes loss if N {\displaystyle N} is held fixed. L {\displaystyle L} "saturates" (that is, it reaches L ∞ {\displaystyle L_{\infty }} ) for smaller models when the training and testing datasets are "source-natural" rather than "target-natural". A "source-natural" data point is a pair of English–German sentences in which the English sentence is written by a natural English writer and the German sentence is produced from it by a machine translator; the model is asked to translate the English sentence into German. To construct the two kinds of datasets, the authors collected natural English and German sentences online, then used machine translation to generate their translations.
As models grow larger, models trained on source-original datasets can achieve low loss but a bad BLEU score. In contrast, models trained on target-original datasets achieve low loss and a good BLEU score in tandem (Figure 10, 11 ). The authors hypothesize that source-natural datasets have uniform and dull target sentences, and so a model that is trained to predict the target sentences would quickly overfit. Another study trained Transformers for machine translation with sizes N ∈ [ 4 × 10 5 , 5.6 × 10 7 ] {\displaystyle N\in [4\times 10^{5},5.6\times 10^{7}]} on dataset sizes D ∈ [ 6 × 10 5 , 6 × 10 9 ] {\displaystyle D\in [6\times 10^{5},6\times 10^{9}]} . They found the Kaplan et al. (2020) scaling law applied to machine translation: L ( N , D ) = [ ( N C N ) α N α D + D C D ] α D {\displaystyle L(N,D)=\left[\left({\frac {N_{C}}{N}}\right)^{\frac {\alpha _{N}}{\alpha _{D}}}+{\frac {D_{C}}{D}}\right]^{\alpha _{D}}} . They also found the BLEU score scaling as B L E U ≈ C e − k L {\displaystyle BLEU\approx Ce^{-kL}} . ==== Transfer learning ==== Hernandez, Danny et al. studied scaling laws for transfer learning in language models. They trained a family of Transformers in three ways: pretraining on English then finetuning on Python; pretraining on an equal mix of English and Python then finetuning on Python; and training on Python only. The idea is that pretraining on English should help the model achieve low loss on a test set of Python text. Suppose the model has parameter count N {\displaystyle N} , and after being finetuned on D F {\displaystyle D_{F}} Python tokens, it achieves some loss L {\displaystyle L} . We say that its "transferred token count" is D T {\displaystyle D_{T}} , if another model with the same N {\displaystyle N} achieves the same L {\displaystyle L} after training on D F + D T {\displaystyle D_{F}+D_{T}} Python tokens.
They found D T = 1.9 e 4 ( D F ) .18 ( N ) .38 {\displaystyle D_{T}=1.9e4\left(D_{F}\right)^{.18}(N)^{.38}} for pretraining on English text, and D T = 2.1 e 5 ( D F ) .096 ( N ) .38 {\displaystyle D_{T}=2.1e5\left(D_{F}\right)^{.096}(N)^{.38}} for pretraining on English and non-Python code. ==== Precision ==== Kumar et al. study scaling laws for numerical precision in the training of language models. They train a family of language models with weights, activations, and KV cache in varying numerical precision in both integer and floating-point types to measure the effects on loss as a function of precision. For training, their scaling law accounts for lower precision by wrapping the effects of precision into an overall "effective parameter count" that governs loss scaling, using the parameterization N ↦ N eff ( P ) = N ( 1 − e − P / γ ) {\displaystyle N\mapsto N_{\text{eff}}(P)=N(1-e^{-P/\gamma })} . This illustrates how training in lower precision degrades performance by reducing the true capacity of the model in a manner that varies exponentially with bits. For inference, they find that extreme overtraining of language models past Chinchilla-optimality can lead to models being more sensitive to quantization, a standard technique for efficient deep learning. This is demonstrated by observing that the degradation in loss due to weight quantization increases as an approximate power law in the token/parameter ratio D / N {\displaystyle D/N} seen during pretraining, so that models pretrained on extreme token budgets can perform worse in terms of validation loss than those trained on more modest token budgets if post-training quantization is applied. Other work examining the effects of overtraining includes Sardana et al. and Gadre et al. ==== Densing laws ==== Xiao et al. considered the parameter efficiency ("density") of models over time.
The idea is that over time, researchers would discover models that use their parameters more efficiently, in that models with the same performance can have fewer parameters. A model can have an actual parameter count N {\displaystyle N} , defined as the actual number of parameters in the model, and an "effective" parameter count N ^ {\displaystyle {\hat {N}}} , defined as how many parameters it would have taken a previous well-known model to reach the same performance on some benchmarks, such as MMLU. N ^ {\displaystyle {\hat {N}}} is not measured directly, but rather by measuring the actual model performance S {\displaystyle S} , then plugging it back into a previously fitted scaling law, such as the Chinchilla scaling law, to obtain what N ^ {\displaystyle {\hat {N}}} would be required to reach that performance S {\displaystyle S} , according to that previously fitted scaling law. A densing law states that ln ⁡ ( N ^ N ) m a x = A t + B {\displaystyle \ln \left({\frac {\hat {N}}{N}}\right)_{max}=At+B} , where t {\displaystyle t} is real-world time, measured in days. == See also == == References ==
Deep learning in photoacoustic imaging
Photoacoustic imaging (PA) is based on the photoacoustic effect, in which optical absorption causes a rise in temperature, which causes a subsequent rise in pressure via thermo-elastic expansion. This pressure rise propagates through the tissue and is sensed via ultrasonic transducers. Due to the proportionality between the optical absorption, the rise in temperature, and the rise in pressure, the ultrasound pressure wave signal can be used to quantify the original optical energy deposition within the tissue. Deep learning has been applied in both photoacoustic computed tomography (PACT) and photoacoustic microscopy (PAM). PACT utilizes wide-field optical excitation and an array of unfocused ultrasound transducers. Similar to other computed tomography methods, the sample is imaged at multiple view angles, which are then used to perform an inverse reconstruction algorithm based on the detection geometry (typically through universal backprojection, modified delay-and-sum, or time reversal) to recover the initial pressure distribution within the tissue. PAM, on the other hand, uses focused ultrasound detection combined with weakly focused optical excitation (acoustic-resolution PAM, or AR-PAM) or tightly focused optical excitation (optical-resolution PAM, or OR-PAM). PAM typically captures images point-by-point via a mechanical raster-scanning pattern. At each scanned point, the acoustic time-of-flight provides axial resolution while the acoustic focusing yields lateral resolution. == Applications of deep learning in PACT == The first application of deep learning in PACT was by Reiter et al., in which a deep neural network was trained to learn spatial impulse responses and locate photoacoustic point sources. The resulting mean axial and lateral point location errors on 2,412 randomly selected test images were 0.28 mm and 0.37 mm, respectively.
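The time-of-flight principle behind delay-and-sum reconstruction can be illustrated with a toy example; the geometry, sampling rate, and function names here are illustrative assumptions, not from the source:

```python
import math

def delay_and_sum(channel_data, sensor_positions, pixel, c=1.5, fs=40.0):
    """Naive delay-and-sum for PACT: for a candidate pixel, sum each channel's
    sample at the acoustic time-of-flight from the pixel to that sensor.

    channel_data: dict sensor index -> list of samples (sampled at fs samples/us)
    sensor_positions: (x, y) sensor positions in mm; pixel: (x, y) in mm
    c: assumed speed of sound in mm/us
    """
    total = 0.0
    for i, (sx, sy) in enumerate(sensor_positions):
        dist = math.hypot(pixel[0] - sx, pixel[1] - sy)
        idx = int(round(dist / c * fs))   # time-of-flight converted to a sample index
        if idx < len(channel_data[i]):
            total += channel_data[i][idx]
    return total

# Simulate a point source at (3, 4) mm: each channel records a spike at its TOF.
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
data = {}
for i, (sx, sy) in enumerate(sensors):
    ch = [0.0] * 300
    ch[int(round(math.hypot(3 - sx, 4 - sy) / 1.5 * 40.0))] = 1.0
    data[i] = ch

# At the true source location all spikes align coherently.
print(delay_and_sum(data, sensors, (3.0, 4.0)))
```

At the true pixel the three spikes add coherently; at other pixels the delays do not align and the sum is smaller, which is exactly the contrast a beamformed image exploits.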
After this initial implementation, the applications of deep learning in PACT have branched out primarily into removing artifacts caused by acoustic reflections, sparse sampling, limited view, and limited bandwidth. There has also been some recent work in PACT toward using deep learning for wavefront localization. Deep-learning fusion networks, which combine information from two different reconstructions, have also been used to improve the final reconstruction. === Using deep learning to locate photoacoustic point sources === Traditional photoacoustic beamforming techniques modeled photoacoustic wave propagation by using the detector array geometry and the time-of-flight to account for differences in the PA signal arrival time. However, this technique failed to account for reverberant acoustic signals caused by acoustic reflection, resulting in acoustic reflection artifacts that corrupt the true photoacoustic point source location information. In Reiter et al., a convolutional neural network (similar to a simple VGG-16 style architecture) was used that took pre-beamformed photoacoustic data as input and output a classification result specifying the 2-D point source location. ==== Deep learning for PA wavefront localization ==== Johnstonbaugh et al. were able to localize the source of photoacoustic wavefronts with a deep neural network. The network used was an encoder-decoder style convolutional neural network. The encoder-decoder network was made of residual convolution, upsampling, and high field-of-view convolution modules. A Nyquist convolution layer and a differentiable spatial-to-numerical transform layer were also used within the architecture. Simulated PA wavefronts served as the input for training the model. To create the wavefronts, the forward simulation of light propagation was done with the NIRFast toolbox and the light-diffusion approximation, while the forward simulation of sound propagation was done with the k-Wave toolbox.
The simulated wavefronts were subjected to different scattering media and Gaussian noise. The output of the network was an artifact-free heat map of the target's axial and lateral position. The network had a mean error rate of less than 30 microns when localizing targets below 40 mm and a mean error rate of 1.06 mm when localizing targets between 40 mm and 60 mm. With a slight modification, the model was able to accommodate multi-target localization. In a validation experiment, pencil lead was submerged in an intralipid solution at a depth of 32 mm, and the network was able to localize the lead's position for reduced scattering coefficients of 0, 5, 10, and 15 cm−1. The results show improvements over standard delay-and-sum and frequency-domain beamforming algorithms, and Johnstonbaugh proposes that this technology could be used for optical wavefront shaping, circulating melanoma cell detection, and real-time vascular surgeries.

=== Removing acoustic reflection artifacts (in the presence of multiple sources and channel noise) ===

Building on the work of Reiter et al., Allman et al. utilized a full VGG-16 architecture to locate point sources and remove reflection artifacts within raw photoacoustic channel data in the presence of multiple sources and channel noise. The network was trained on simulated data produced with the MATLAB k-Wave library, and the results were later reaffirmed on experimental data.

=== Ill-posed PACT reconstruction ===

In PACT, tomographic reconstruction is performed, in which the projections from multiple solid angles are combined to form an image. When reconstruction methods like filtered backprojection or time reversal are applied to data sampled below the Nyquist-Shannon requirement or acquired with limited bandwidth or limited view, the inverse problem is ill-posed and the resulting reconstruction contains image artifacts.
Traditionally these artifacts were removed with slow iterative methods like total variation minimization, but the advent of deep learning has opened a new avenue that utilizes a priori knowledge from network training to remove artifacts. In the deep learning methods that seek to remove sparse-sampling, limited-bandwidth, and limited-view artifacts, the typical workflow first performs the ill-posed reconstruction technique to transform the pre-beamformed data into a 2-D representation of the initial pressure distribution that contains artifacts. A convolutional neural network (CNN) is then trained to remove the artifacts and produce an artifact-free representation of the ground-truth initial pressure distribution.

==== Using deep learning to remove sparse sampling artifacts ====

When the density of uniform tomographic view angles is below what is prescribed by the Nyquist-Shannon sampling theorem, the imaging system is said to be performing sparse sampling. Sparse sampling typically occurs as a way of keeping production costs low and improving image-acquisition speed. The typical network architectures used to remove these sparse-sampling artifacts are U-net and Fully Dense (FD) U-net. Both architectures contain a compression and a decompression phase. The compression phase learns to compress the image to a latent representation that lacks the imaging artifacts and other details. The decompression phase then combines this representation with information passed by the residual connections in order to add back image details without adding back the details associated with the artifacts. FD U-net modifies the original U-net architecture by including dense blocks that allow layers to utilize information learned by previous layers within the dense block. Another technique was proposed that uses a simple CNN-based architecture to remove artifacts and improve the k-Wave image reconstruction.
==== Removing limited-view artifacts with deep learning ====

When a region of solid angles is not captured, generally due to geometric limitations, the image acquisition is said to have limited view. As illustrated by the experiments of Davoudi et al., limited-view corruptions can be directly observed as missing information in the frequency domain of the reconstructed image. Limited view, similar to sparse sampling, makes the initial reconstruction algorithm ill-posed. Prior to deep learning, the limited-view problem was addressed with complex hardware such as acoustic deflectors and full ring-shaped transducer arrays, as well as with solutions like compressed sensing, weighting factors, and iterative filtered backprojection. The result of this ill-posed reconstruction is imaging artifacts that can be removed by CNNs. The deep learning algorithms used to remove limited-view artifacts include U-net and FD U-net, as well as generative adversarial networks (GANs) and volumetric versions of U-net. One notable GAN implementation improved upon U-net by using U-net as a generator and VGG as a discriminator, with the Wasserstein metric and a gradient penalty to stabilize training (WGAN-GP).

==== Pixel-wise interpolation and deep learning for faster reconstruction of limited-view signals ====

Guan et al. applied an FD U-net to remove artifacts from simulated limited-view reconstructed PA images. PA images reconstructed with the time-reversal process from data collected with either 16, 32, or 64 sensors served as the input to the network, and the ground-truth images served as the desired output. The network was able to remove artifacts created in the time-reversal process from synthetic, mouse-brain, fundus, and lung-vasculature phantoms. This process was similar to the work done by Davoudi et al. on clearing artifacts from sparse and limited-view images.
To improve the speed of reconstruction and to allow the FD U-net to use more information from the sensors, Guan et al. proposed using a pixel-wise interpolation as the input to the network instead of a reconstructed image. A pixel-wise interpolation removes the need to produce an initial image, which may discard small details or make them unrecoverable by obscuring them with artifacts. To create the pixel-wise interpolation, the time-of-flight for each pixel was calculated using the wave-propagation equation. Next, a reconstruction grid was created from pressure measurements sampled at each pixel's time-of-flight. Using the reconstruction grid as input, the FD U-net was able to create artifact-free reconstructed images. This pixel-wise interpolation method was faster and achieved better peak signal-to-noise ratios (PSNR) and structural similarity index measures (SSIM) than artifact-free images created when the time-reversal images served as the input to the FD U-net. It was also significantly faster than, and achieved PSNR and SSIM comparable to, the computationally intensive iterative approach. The pixel-wise method was only demonstrated for in silico experiments with a homogeneous medium, but Guan posits that it could be used for real-time PAT rendering.

==== Limited-bandwidth artifact removal with deep neural networks ====

The limited-bandwidth problem occurs as a result of the ultrasound transducer array's limited detection frequency bandwidth. The transducer array acts like a band-pass filter in the frequency domain, attenuating both high and low frequencies within the photoacoustic signal. This limited bandwidth can cause artifacts and limit the axial resolution of the imaging system. The primary deep neural network architectures used to remove limited-bandwidth artifacts have been WGAN-GP and modified U-net.
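The pixel-wise time-of-flight step described above can be sketched in a toy form. This is an illustrative sketch, not Guan et al.'s implementation; the function names, the single-sensor setup, and the use of linear interpolation are our own assumptions:

```python
import math

def tof_grid(nx, nz, dx, sensor, c=1.5):
    """Time-of-flight from every pixel of an nx-by-nz grid (spacing dx, mm)
    to one sensor position (x, z in mm), for sound speed c in mm/us."""
    sx, sz = sensor
    return [[math.hypot(ix * dx - sx, iz * dx - sz) / c
             for iz in range(nz)]
            for ix in range(nx)]

def sample_at_tof(signal, fs, t):
    """Linearly interpolate a recorded channel at time-of-flight t (us),
    where fs is the sampling rate in samples per microsecond."""
    pos = t * fs
    k = int(pos)
    if k + 1 >= len(signal):
        return 0.0
    frac = pos - k
    return (1.0 - frac) * signal[k] + frac * signal[k + 1]
```

Stacking one such interpolated grid per sensor yields the multi-channel "reconstruction grid" that a network like FD U-net could then take as input, in place of an already-reconstructed image.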
The typical method to remove artifacts and denoise limited-bandwidth reconstructions before deep learning was Wiener filtering, which helps to expand the PA signal's frequency spectrum. The primary advantage of the deep learning method over Wiener filtering is that Wiener filtering requires a high initial signal-to-noise ratio (SNR), which is not always achievable, while the deep learning model has no such restriction.

=== Fusion of information for improving photoacoustic images with deep neural networks ===

Fusion-based architectures exploit complementary information to improve photoacoustic image reconstruction. Different reconstruction techniques promote different characteristics in the output, so the image quality and characteristics vary with the technique used. A fusion-based architecture was proposed that combines the outputs of two different reconstructions and gives better image quality than either reconstruction alone; it includes weight sharing and fusion of characteristics to achieve the desired improvement in output image quality.

=== Deep learning to improve penetration depth of PA images ===

High-energy lasers allow light to reach deep into tissue, making deep structures visible in PA images. They provide a greater penetration depth than low-energy lasers: around 8 mm more for lasers with wavelengths between 690 and 900 nm. The American National Standards Institute has set a maximum permissible exposure (MPE) for different biological tissues; lasers above the MPE can cause mechanical or thermal damage to the tissue being imaged. Manwar et al. were able to increase the penetration depth of low-energy lasers that meet the MPE standard by applying a U-net architecture to the images created by a low-energy laser.
The network was trained with images of an ex vivo sheep brain created by a low-energy laser of 20 mJ as the input and images of the same brain created by a high-energy laser of 100 mJ (20 mJ above the MPE) as the desired output. A perceptually sensitive loss function was used to train the network to increase the low signal-to-noise ratio of PA images created by the low-energy laser. The trained network increased the peak-to-background ratio by 4.19 dB and the penetration depth by 5.88% for images of an in vivo sheep brain created by the low-energy laser. Manwar claims that this technology could be beneficial in neonatal brain imaging, where transfontanelle imaging is possible, to look for any lesions or injury.

== Applications of deep learning in PAM ==

Photoacoustic microscopy differs from other forms of photoacoustic tomography in that it uses focused ultrasound detection to acquire images pixel by pixel. PAM images are acquired as time-resolved volumetric data that is typically mapped to a 2-D projection via a Hilbert transform and maximum amplitude projection (MAP). The first application of deep learning to PAM took the form of a motion-correction algorithm, proposed to correct the PAM artifacts that occur when an in vivo model moves during scanning; this movement creates the appearance of vessel discontinuities.

=== Deep learning to remove motion artifacts in PAM ===

The two primary motion-artifact types addressed by deep learning in PAM are displacements in the vertical and tilted directions. Chen et al. used a simple three-layer convolutional neural network, with each layer represented by a weight matrix and a bias vector, to remove the PAM motion artifacts. Two of the convolutional layers contain ReLU activation functions, while the last has none. Using this architecture, kernel sizes of 3 × 3, 4 × 4, and 5 × 5 were tested, with the largest kernel size of 5 × 5 yielding the best results.
After training, the motion-correction model was tested and performed well on both simulated and in vivo data.

=== Deep learning-assisted frequency-domain PAM ===

Frequency-domain PAM is a powerful, cost-efficient imaging method that uses intensity-modulated laser beams emitted by continuous-wave sources to excite single-frequency PA signals. Nevertheless, this approach generally provides signal-to-noise ratios (SNR) up to two orders of magnitude lower than conventional time-domain systems. To overcome this inherent SNR limitation, a U-net neural network has been utilized to augment the generated images without the need for excessive averaging or the application of high optical power to the sample. In this way, the accessibility of PAM is improved, as the system's cost is dramatically reduced while retaining sufficiently high image quality for demanding biological observations.

== See also ==

Photoacoustic imaging
Photoacoustic microscopy
Photoacoustic effect

== References ==
Typical set
In information theory, the typical set is a set of sequences whose probability is close to two raised to the negative power of the entropy of their source distribution. That this set has total probability close to one is a consequence of the asymptotic equipartition property (AEP), which is a kind of law of large numbers. The notion of typicality is concerned only with the probability of a sequence, not with the actual sequence itself. This has great use in compression theory, as it provides a theoretical means for compressing data, allowing us to represent any sequence Xn using nH(X) bits on average, and hence justifying the use of entropy as a measure of information from a source. The AEP can also be proven for a large class of stationary ergodic processes, allowing the typical set to be defined in more general cases. Additionally, the typical set concept is foundational in understanding the limits of data transmission and error correction in communication systems. By leveraging the properties of typical sequences, efficient coding schemes like Shannon's source coding theorem and channel coding theorem are developed, enabling near-optimal data compression and reliable transmission over noisy channels.

== (Weakly) typical sequences (weak typicality, entropy typicality) ==

If a sequence x1, ..., xn is drawn from an independent identically-distributed (i.i.d.) random variable X defined over a finite alphabet X {\displaystyle {\mathcal {X}}} , then the typical set, Aε(n) ⊆ X (n) {\displaystyle \subseteq {\mathcal {X}}^{(n)}} , is defined as those sequences which satisfy: 2 − n ( H ( X ) + ε ) ⩽ p ( x 1 , x 2 , … , x n ) ⩽ 2 − n ( H ( X ) − ε ) {\displaystyle 2^{-n(H(X)+\varepsilon )}\leqslant p(x_{1},x_{2},\dots ,x_{n})\leqslant 2^{-n(H(X)-\varepsilon )}} where H ( X ) = − ∑ x ∈ X p ( x ) log 2 ⁡ p ( x ) {\displaystyle H(X)=-\sum _{x\in {\mathcal {X}}}p(x)\log _{2}p(x)} is the information entropy of X. The probability above need only be within a factor of 2 n ε {\displaystyle 2^{n\varepsilon }} .
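For a small i.i.d. source, the defining inequality can be checked, and the typical set enumerated by brute force, directly from this definition. The following is an illustrative sketch; the function names are our own:

```python
import itertools
import math

def entropy(p):
    """Shannon entropy H(X) in bits of a distribution {symbol: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def is_weakly_typical(seq, p, eps):
    """Weak typicality test: is -(1/n) log2 p(x1..xn) within eps of H(X)?"""
    avg = -sum(math.log2(p[x]) for x in seq) / len(seq)
    return abs(avg - entropy(p)) <= eps

def typical_set(n, p, eps):
    """Enumerate the weakly typical set for small n by exhaustive search."""
    return [s for s in itertools.product(p, repeat=n)
            if is_weakly_typical(s, p, eps)]
```

For a Bernoulli source with p(0) = 0.1 and p(1) = 0.9 (H(X) ≈ 0.469 bits) and, say, ε = 0.2, the all-1s sequence fails the test while sequences containing a single 0 pass it, illustrating the counter-intuitive property discussed below that the most likely sequence need not be typical.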
Taking the logarithm of all sides and dividing by −n, this definition can be equivalently stated as H ( X ) − ε ≤ − 1 n log 2 ⁡ p ( x 1 , x 2 , … , x n ) ≤ H ( X ) + ε . {\displaystyle H(X)-\varepsilon \leq -{\frac {1}{n}}\log _{2}p(x_{1},x_{2},\ldots ,x_{n})\leq H(X)+\varepsilon .} For an i.i.d. sequence, since p ( x 1 , x 2 , … , x n ) = ∏ i = 1 n p ( x i ) , {\displaystyle p(x_{1},x_{2},\ldots ,x_{n})=\prod _{i=1}^{n}p(x_{i}),} we further have H ( X ) − ε ≤ − 1 n ∑ i = 1 n log 2 ⁡ p ( x i ) ≤ H ( X ) + ε . {\displaystyle H(X)-\varepsilon \leq -{\frac {1}{n}}\sum _{i=1}^{n}\log _{2}p(x_{i})\leq H(X)+\varepsilon .} By the law of large numbers, for sufficiently large n − 1 n ∑ i = 1 n log 2 ⁡ p ( x i ) → H ( X ) . {\displaystyle -{\frac {1}{n}}\sum _{i=1}^{n}\log _{2}p(x_{i})\rightarrow H(X).}

=== Properties ===

An essential characteristic of the typical set is that, if one draws a large number n of independent random samples from the distribution X, the resulting sequence (x1, x2, ..., xn) is very likely to be a member of the typical set, even though the typical set comprises only a small fraction of all the possible sequences. Formally, given any ε > 0 {\displaystyle \varepsilon >0} , one can choose n such that: The probability of a sequence from X(n) being drawn from Aε(n) is greater than 1 − ε, i.e.
P r [ x ( n ) ∈ A ϵ ( n ) ] ≥ 1 − ε {\displaystyle Pr[x^{(n)}\in A_{\epsilon }^{(n)}]\geq 1-\varepsilon } | A ε ( n ) | ⩽ 2 n ( H ( X ) + ε ) {\displaystyle \left|{A_{\varepsilon }}^{(n)}\right|\leqslant 2^{n(H(X)+\varepsilon )}} | A ε ( n ) | ⩾ ( 1 − ε ) 2 n ( H ( X ) − ε ) {\displaystyle \left|{A_{\varepsilon }}^{(n)}\right|\geqslant (1-\varepsilon )2^{n(H(X)-\varepsilon )}} If the distribution over X {\displaystyle {\mathcal {X}}} is not uniform, then the fraction of sequences that are typical is | A ϵ ( n ) | | X ( n ) | ≡ 2 n H ( X ) 2 n log 2 ⁡ | X | = 2 − n ( log 2 ⁡ | X | − H ( X ) ) → 0 {\displaystyle {\frac {|A_{\epsilon }^{(n)}|}{|{\mathcal {X}}^{(n)}|}}\equiv {\frac {2^{nH(X)}}{2^{n\log _{2}|{\mathcal {X}}|}}}=2^{-n(\log _{2}|{\mathcal {X}}|-H(X))}\rightarrow 0} as n becomes very large, since H ( X ) < log 2 ⁡ | X | , {\displaystyle H(X)<\log _{2}|{\mathcal {X}}|,} where | X | {\displaystyle |{\mathcal {X}}|} is the cardinality of X {\displaystyle {\mathcal {X}}} . For a general stochastic process {X(t)} with AEP, the (weakly) typical set can be defined similarly with p(x1, x2, ..., xn) replaced by p(x0τ) (i.e. the probability of the sample limited to the time interval [0, τ]), n being the degree of freedom of the process in the time interval and H(X) being the entropy rate. If the process is continuous valued, differential entropy is used instead. === Example === Counter-intuitively, the most likely sequence is often not a member of the typical set. For example, suppose that X is an i.i.d Bernoulli random variable with p(0)=0.1 and p(1)=0.9. In n independent trials, since p(1)>p(0), the most likely sequence of outcome is the sequence of all 1's, (1,1,...,1). 
Here the entropy of X is H(X)=0.469, while − 1 n log 2 ⁡ p ( x ( n ) = ( 1 , 1 , … , 1 ) ) = − 1 n log 2 ⁡ ( 0.9 n ) = 0.152 {\displaystyle -{\frac {1}{n}}\log _{2}p\left(x^{(n)}=(1,1,\ldots ,1)\right)=-{\frac {1}{n}}\log _{2}(0.9^{n})=0.152} So this sequence is not in the typical set because its average logarithmic probability cannot come arbitrarily close to the entropy of the random variable X no matter how large we take the value of n. For Bernoulli random variables, the typical set consists of sequences whose numbers of 0s and 1s in n independent trials are close to their averages. This is easily demonstrated: if p(1) = p and p(0) = 1 − p, then for n trials with m 1's, we have − 1 n log 2 ⁡ p ( x ( n ) ) = − 1 n log 2 ⁡ p m ( 1 − p ) n − m = − m n log 2 ⁡ p − ( n − m n ) log 2 ⁡ ( 1 − p ) . {\displaystyle -{\frac {1}{n}}\log _{2}p(x^{(n)})=-{\frac {1}{n}}\log _{2}p^{m}(1-p)^{n-m}=-{\frac {m}{n}}\log _{2}p-\left({\frac {n-m}{n}}\right)\log _{2}(1-p).} The average number of 1's in a sequence of Bernoulli trials is m = np. Thus, we have − 1 n log 2 ⁡ p ( x ( n ) ) = − p log 2 ⁡ p − ( 1 − p ) log 2 ⁡ ( 1 − p ) = H ( X ) . {\displaystyle -{\frac {1}{n}}\log _{2}p(x^{(n)})=-p\log _{2}p-(1-p)\log _{2}(1-p)=H(X).} For this example, if n=10, then the typical set consists of all sequences that have a single 0 in the entire sequence. In the case p(0) = p(1) = 0.5, every possible binary sequence belongs to the typical set.

== Strongly typical sequences (strong typicality, letter typicality) ==

If a sequence x1, ..., xn is drawn from some specified joint distribution defined over a finite or an infinite alphabet X {\displaystyle {\mathcal {X}}} , then the strongly typical set, Aε,strong(n) ⊆ X n {\displaystyle \subseteq {\mathcal {X}}^{n}} , is defined as the set of sequences which satisfy | N ( x i ) n − p ( x i ) | < ε ‖ X ‖ .
{\displaystyle \left|{\frac {N(x_{i})}{n}}-p(x_{i})\right|<{\frac {\varepsilon }{\|{\mathcal {X}}\|}}.} where N ( x i ) {\displaystyle {N(x_{i})}} is the number of occurrences of a specific symbol in the sequence. It can be shown that strongly typical sequences are also weakly typical (with a different constant ε), and hence the name. The two forms, however, are not equivalent. Strong typicality is often easier to work with in proving theorems for memoryless channels. However, as is apparent from the definition, this form of typicality is only defined for random variables having finite support. == Jointly typical sequences == Two sequences x n {\displaystyle x^{n}} and y n {\displaystyle y^{n}} are jointly ε-typical if the pair ( x n , y n ) {\displaystyle (x^{n},y^{n})} is ε-typical with respect to the joint distribution p ( x n , y n ) = ∏ i = 1 n p ( x i , y i ) {\displaystyle p(x^{n},y^{n})=\prod _{i=1}^{n}p(x_{i},y_{i})} and both x n {\displaystyle x^{n}} and y n {\displaystyle y^{n}} are ε-typical with respect to their marginal distributions p ( x n ) {\displaystyle p(x^{n})} and p ( y n ) {\displaystyle p(y^{n})} . The set of all such pairs of sequences ( x n , y n ) {\displaystyle (x^{n},y^{n})} is denoted by A ε n ( X , Y ) {\displaystyle A_{\varepsilon }^{n}(X,Y)} . Jointly ε-typical n-tuple sequences are defined similarly. Let X ~ n {\displaystyle {\tilde {X}}^{n}} and Y ~ n {\displaystyle {\tilde {Y}}^{n}} be two independent sequences of random variables with the same marginal distributions p ( x n ) {\displaystyle p(x^{n})} and p ( y n ) {\displaystyle p(y^{n})} . 
Then for any ε>0, for sufficiently large n, jointly typical sequences satisfy the following properties: P [ ( X n , Y n ) ∈ A ε n ( X , Y ) ] ⩾ 1 − ϵ {\displaystyle P\left[(X^{n},Y^{n})\in A_{\varepsilon }^{n}(X,Y)\right]\geqslant 1-\epsilon } | A ε n ( X , Y ) | ⩽ 2 n ( H ( X , Y ) + ϵ ) {\displaystyle \left|A_{\varepsilon }^{n}(X,Y)\right|\leqslant 2^{n(H(X,Y)+\epsilon )}} | A ε n ( X , Y ) | ⩾ ( 1 − ϵ ) 2 n ( H ( X , Y ) − ϵ ) {\displaystyle \left|A_{\varepsilon }^{n}(X,Y)\right|\geqslant (1-\epsilon )2^{n(H(X,Y)-\epsilon )}} P [ ( X ~ n , Y ~ n ) ∈ A ε n ( X , Y ) ] ⩽ 2 − n ( I ( X ; Y ) − 3 ϵ ) {\displaystyle P\left[({\tilde {X}}^{n},{\tilde {Y}}^{n})\in A_{\varepsilon }^{n}(X,Y)\right]\leqslant 2^{-n(I(X;Y)-3\epsilon )}} P [ ( X ~ n , Y ~ n ) ∈ A ε n ( X , Y ) ] ⩾ ( 1 − ϵ ) 2 − n ( I ( X ; Y ) + 3 ϵ ) {\displaystyle P\left[({\tilde {X}}^{n},{\tilde {Y}}^{n})\in A_{\varepsilon }^{n}(X,Y)\right]\geqslant (1-\epsilon )2^{-n(I(X;Y)+3\epsilon )}} == Applications of typicality == === Typical set encoding === In information theory, typical set encoding encodes only the sequences in the typical set of a stochastic source with fixed length block codes. Since the size of the typical set is about 2nH(X), only nH(X) bits are required for the coding, while at the same time ensuring that the chances of encoding error is limited to ε. Asymptotically, it is, by the AEP, lossless and achieves the minimum rate equal to the entropy rate of the source. === Typical set decoding === In information theory, typical set decoding is used in conjunction with random coding to estimate the transmitted message as the one with a codeword that is jointly ε-typical with the observation. i.e. 
w ^ = w ⟺ ( ∃ w ) ( ( x 1 n ( w ) , y 1 n ) ∈ A ε n ( X , Y ) ) {\displaystyle {\hat {w}}=w\iff (\exists w)((x_{1}^{n}(w),y_{1}^{n})\in A_{\varepsilon }^{n}(X,Y))} where w ^ , x 1 n ( w ) , y 1 n {\displaystyle {\hat {w}},x_{1}^{n}(w),y_{1}^{n}} are the message estimate, codeword of message w {\displaystyle w} and the observation respectively. A ε n ( X , Y ) {\displaystyle A_{\varepsilon }^{n}(X,Y)} is defined with respect to the joint distribution p ( x 1 n ) p ( y 1 n | x 1 n ) {\displaystyle p(x_{1}^{n})p(y_{1}^{n}|x_{1}^{n})} where p ( y 1 n | x 1 n ) {\displaystyle p(y_{1}^{n}|x_{1}^{n})} is the transition probability that characterizes the channel statistics, and p ( x 1 n ) {\displaystyle p(x_{1}^{n})} is some input distribution used to generate the codewords in the random codebook. === Universal null-hypothesis testing === === Universal channel code === == See also == Asymptotic equipartition property Source coding theorem Noisy-channel coding theorem == References == C. E. Shannon, "A Mathematical Theory of Communication", Bell System Technical Journal, vol. 27, pp. 379–423, 623-656, July, October, 1948 Cover, Thomas M. (2006). "Chapter 3: Asymptotic Equipartition Property, Chapter 5: Data Compression, Chapter 8: Channel Capacity". Elements of Information Theory. John Wiley & Sons. ISBN 0-471-24195-4. David J. C. MacKay. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003. ISBN 0-521-64298-1
Kronecker sum of discrete Laplacians
In mathematics, the Kronecker sum of discrete Laplacians, named after Leopold Kronecker, is a discrete version of the separation of variables for the continuous Laplacian in a rectangular cuboid domain.

== General form of the Kronecker sum of discrete Laplacians ==

In a general situation of the separation of variables in the discrete case, the multidimensional discrete Laplacian is a Kronecker sum of 1D discrete Laplacians.

=== Example: 2D discrete Laplacian on a regular grid with the homogeneous Dirichlet boundary condition ===

Mathematically, using the Kronecker sum: L = D x x ⊗ I + I ⊗ D y y , {\displaystyle L=\mathbf {D_{xx}} \otimes \mathbf {I} +\mathbf {I} \otimes \mathbf {D_{yy}} ,\,} where D x x {\displaystyle \mathbf {D_{xx}} } and D y y {\displaystyle \mathbf {D_{yy}} } are 1D discrete Laplacians in the x- and y-directions, respectively, and I {\displaystyle \mathbf {I} } are the identities of appropriate sizes. Both D x x {\displaystyle \mathbf {D_{xx}} } and D y y {\displaystyle \mathbf {D_{yy}} } must correspond to the case of the homogeneous Dirichlet boundary condition at the end points of the x- and y-intervals, in order to generate the 2D discrete Laplacian L corresponding to the homogeneous Dirichlet boundary condition everywhere on the boundary of the rectangular domain. Here is a sample OCTAVE/MATLAB code to compute L on the regular 10×15 2D grid:

== Eigenvalues and eigenvectors of multidimensional discrete Laplacian on a regular grid ==

Knowing all eigenvalues and eigenvectors of the factors, all eigenvalues and eigenvectors of the Kronecker product can be explicitly calculated. Based on this, the eigenvalues and eigenvectors of the Kronecker sum can also be explicitly calculated. The eigenvalues and eigenvectors of the standard central difference approximation of the second derivative on an interval for traditional combinations of boundary conditions at the interval end points are well known.
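Independently of the OCTAVE/MATLAB snippet referenced above, the 2D Kronecker-sum construction L = Dxx ⊗ I + I ⊗ Dyy can be sketched in plain Python. Dense lists are used here for clarity and the helper names are our own; a practical implementation would use sparse matrices:

```python
def kron(a, b):
    """Kronecker product of two dense matrices given as lists of lists."""
    return [[x * y for x in a_row for y in b_row]
            for a_row in a for b_row in b]

def eye(n):
    """n-by-n identity matrix."""
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def lap1d(n, h=1.0):
    """1D three-point discrete Laplacian with homogeneous Dirichlet ends."""
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        m[i][i] = -2.0 / h ** 2
        if i > 0:
            m[i][i - 1] = 1.0 / h ** 2
        if i + 1 < n:
            m[i][i + 1] = 1.0 / h ** 2
    return m

def lap2d(nx, ny):
    """Kronecker sum L = Dxx ⊗ I + I ⊗ Dyy (homogeneous Dirichlet)."""
    a = kron(lap1d(nx), eye(ny))
    b = kron(eye(nx), lap1d(ny))
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
```

For the 10×15 grid mentioned above, `lap2d(10, 15)` yields the corresponding 150×150 matrix.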
Combining these expressions with the formulas of eigenvalues and eigenvectors for the Kronecker sum, one can easily obtain the required answer.

=== Example: 3D discrete Laplacian on a regular grid with the homogeneous Dirichlet boundary condition ===

L = D x x ⊗ I ⊗ I + I ⊗ D y y ⊗ I + I ⊗ I ⊗ D z z , {\displaystyle L=\mathbf {D_{xx}} \otimes \mathbf {I} \otimes \mathbf {I} +\mathbf {I} \otimes \mathbf {D_{yy}} \otimes \mathbf {I} +\mathbf {I} \otimes \mathbf {I} \otimes \mathbf {D_{zz}} ,\,} where D x x , D y y {\displaystyle \mathbf {D_{xx}} ,\,\mathbf {D_{yy}} } and D z z {\displaystyle \mathbf {D_{zz}} } are 1D discrete Laplacians in each of the 3 directions, and I {\displaystyle \mathbf {I} } are the identities of appropriate sizes. Each 1D discrete Laplacian must correspond to the case of the homogeneous Dirichlet boundary condition, in order to generate the 3D discrete Laplacian L corresponding to the homogeneous Dirichlet boundary condition everywhere on the boundary. The eigenvalues are λ j x , j y , j z = − 4 h x 2 sin ⁡ ( π j x 2 ( n x + 1 ) ) 2 − 4 h y 2 sin ⁡ ( π j y 2 ( n y + 1 ) ) 2 − 4 h z 2 sin ⁡ ( π j z 2 ( n z + 1 ) ) 2 {\displaystyle \lambda _{jx,jy,jz}=-{\frac {4}{h_{x}^{2}}}\sin \left({\frac {\pi j_{x}}{2(n_{x}+1)}}\right)^{2}-{\frac {4}{h_{y}^{2}}}\sin \left({\frac {\pi j_{y}}{2(n_{y}+1)}}\right)^{2}-{\frac {4}{h_{z}^{2}}}\sin \left({\frac {\pi j_{z}}{2(n_{z}+1)}}\right)^{2}} where j x = 1 , … , n x , j y = 1 , … , n y , j z = 1 , … , n z , {\displaystyle j_{x}=1,\ldots ,n_{x},\,j_{y}=1,\ldots ,n_{y},\,j_{z}=1,\ldots ,n_{z},\,} and the corresponding eigenvectors are v i x , i y , i z , j x , j y , j z = 2 n x + 1 sin ⁡ ( i x j x π n x + 1 ) 2 n y + 1 sin ⁡ ( i y j y π n y + 1 ) 2 n z + 1 sin ⁡ ( i z j z π n z + 1 ) {\displaystyle v_{ix,iy,iz,jx,jy,jz}={\sqrt {\frac {2}{n_{x}+1}}}\sin \left({\frac {i_{x}j_{x}\pi }{n_{x}+1}}\right){\sqrt {\frac {2}{n_{y}+1}}}\sin \left({\frac {i_{y}j_{y}\pi }{n_{y}+1}}\right){\sqrt {\frac {2}{n_{z}+1}}}\sin \left({\frac {i_{z}j_{z}\pi }{n_{z}+1}}\right)} where the multi-index j x , j y , j z {\displaystyle {jx,jy,jz}} pairs the eigenvalues and the eigenvectors, while the multi-index i x , i y , i z {\displaystyle {ix,iy,iz}} determines the location of the value of every eigenvector on the regular grid. The boundary points, where the homogeneous Dirichlet boundary condition is imposed, are just outside the grid.

== Available software ==

An OCTAVE/MATLAB code http://www.mathworks.com/matlabcentral/fileexchange/27279-laplacian-in-1d-2d-or-3d is available under a BSD License, which computes the sparse matrix of the 1D, 2D, and 3D negative Laplacians on a rectangular grid for combinations of Dirichlet, Neumann, and periodic boundary conditions, using Kronecker sums of discrete 1D Laplacians. The code also provides the exact eigenvalues and eigenvectors using the explicit formulas given above.
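Each 1D factor of these Kronecker sums has the well-known eigenpairs λj = −(4/h²) sin²(πj/(2(n+1))) with eigenvector entries √(2/(n+1)) sin(ijπ/(n+1)), which the 3D formulas above combine. A minimal pure-Python check that these 1D formulas really are eigenpairs of the Dirichlet Laplacian (helper names are our own):

```python
import math

def lap1d(n, h=1.0):
    """1D three-point discrete Laplacian with homogeneous Dirichlet ends."""
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        m[i][i] = -2.0 / h ** 2
        if i > 0:
            m[i][i - 1] = 1.0 / h ** 2
        if i + 1 < n:
            m[i][i + 1] = 1.0 / h ** 2
    return m

def eigen_residual(n, h=1.0):
    """Largest entry of |L v - lambda v| over all analytic 1D eigenpairs."""
    L = lap1d(n, h)
    worst = 0.0
    for j in range(1, n + 1):
        lam = -4.0 / h ** 2 * math.sin(math.pi * j / (2 * (n + 1))) ** 2
        v = [math.sqrt(2.0 / (n + 1)) * math.sin(i * j * math.pi / (n + 1))
             for i in range(1, n + 1)]
        Lv = [sum(a * b for a, b in zip(row, v)) for row in L]
        worst = max(worst, max(abs(x - lam * y) for x, y in zip(Lv, v)))
    return worst
```

The residual is at the level of floating-point roundoff for any n, which is exactly the separation-of-variables fact that the multidimensional formulas above rest on.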
Concept drift
In predictive analytics, data science, machine learning and related fields, concept drift or drift is an evolution of data that invalidates the data model. It happens when the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes. Drift detection and drift adaptation are of paramount importance in fields that involve dynamically changing data and data models.

== Predictive model decay ==

In machine learning and predictive analytics this drift phenomenon is called concept drift. In machine learning, a common element of a data model is the statistical properties of the actual data, such as its probability distribution. If these deviate from the statistical properties of the training data set, then the learned predictions may become invalid if the drift is not addressed.

== Data configuration decay ==

Another important area is software engineering, where three types of data drift affecting data fidelity may be recognized. Changes in the software environment ("infrastructure drift") may invalidate the software infrastructure configuration. "Structural drift" happens when the data schema changes, which may invalidate databases. "Semantic drift" is a change in the meaning of data while the structure does not change. In many cases this may happen in complicated applications when many independent developers introduce changes without proper awareness of the effects of their changes on other areas of the software system. For many application systems, the nature of the data on which they operate is subject to change for various reasons, e.g., changes in the business model, system updates, or switching the platform on which the system operates. In the case of cloud computing, infrastructure drift that may affect applications running on the cloud may be caused by updates of the cloud software.
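A naive structural-drift check of the kind described above simply compares an incoming record's field names against the expected schema. This is an illustrative sketch with names of our own choosing, not a prescribed method:

```python
def schema_drift(expected_fields, record):
    """Compare an incoming record's field names against the expected schema
    and report which fields are missing and which are unexpected."""
    got, exp = set(record), set(expected_fields)
    return {"missing": sorted(exp - got), "unexpected": sorted(got - exp)}
```

Running such a check at the boundary of a data pipeline is one simple way to surface structural drift before it silently causes the data loss or squandering described below; semantic drift, by contrast, cannot be caught this way, since the field names and types stay the same.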
There are several types of detrimental effects of data drift on data fidelity. Data corrosion is the passing of drifted data into the system undetected. Data loss happens when valid data are ignored due to non-conformance with the applied schema. Squandering is the phenomenon in which new data fields are introduced upstream in the data processing pipeline, but those data fields are absent somewhere downstream.

== Inconsistent data ==

"Data drift" may refer to the phenomenon in which database records fail to match the real-world data due to changes in the latter over time. This is a common problem with databases involving people, such as customers, employees, citizens, or residents. Human data drift may be caused by unrecorded changes in personal data, such as place of residence or name, as well as by errors during data input. "Data drift" may also refer to inconsistency of data elements between several replicas of a database. The reasons can be difficult to identify. A simple form of drift detection is to run checksums regularly; however, the remedy may not be so easy.

== Examples ==

The behavior of the customers in an online shop may change over time. Suppose, for example, that weekly merchandise sales are to be predicted, and a predictive model has been developed that works satisfactorily. The model may use inputs such as the amount of money spent on advertising, promotions being run, and other metrics that may affect sales. The model is likely to become less and less accurate over time; this is concept drift. In the merchandise sales application, one reason for concept drift may be seasonality, which means that shopping behavior changes seasonally. Perhaps there will be higher sales in the winter holiday season than during the summer, for example.
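The regular checksum comparison between replicas mentioned under "Inconsistent data" can be sketched with standard-library hashing. The order-insensitive digest used here is one illustrative choice, not a prescribed method, and the function names are our own:

```python
import hashlib

def table_checksum(rows):
    """Order-insensitive digest of a table given as an iterable of rows
    (each row a tuple of values with a stable repr)."""
    h = hashlib.sha256()
    for line in sorted(repr(row) for row in rows):
        h.update(line.encode("utf-8"))
    return h.hexdigest()

def replicas_consistent(replica_a, replica_b):
    """True if two replicas hold the same rows, possibly in different order."""
    return table_checksum(replica_a) == table_checksum(replica_b)
```

A mismatch only tells us that the replicas have drifted apart, not which rows differ or which replica is correct, which is exactly why the detection is simple while the remedy may not be.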
Concept drift generally occurs when the covariates that comprise the data set begin to explain the variation of the target variable less accurately: confounding variables may have emerged that one simply cannot account for, causing model accuracy to decrease progressively over time. Generally, it is advised to perform health checks as part of the post-production analysis and to re-train the model with new assumptions upon signs of concept drift. == Possible remedies == To prevent deterioration in prediction accuracy because of concept drift, reactive and tracking solutions can be adopted. Reactive solutions retrain the model in reaction to a triggering mechanism, such as a change-detection test, to explicitly detect concept drift as a change in the statistics of the data-generating process. When concept drift is detected, the current model is no longer up-to-date and must be replaced by a new one to restore prediction accuracy. A shortcoming of reactive approaches is that performance may decay until the change is detected. Tracking solutions seek to track the changes in the concept by continually updating the model. Methods for achieving this include online machine learning, frequent retraining on the most recently observed samples, and maintaining an ensemble of classifiers in which one new classifier is trained on the most recent batch of examples and replaces the oldest classifier in the ensemble. Contextual information, when available, can be used to better explain the causes of the concept drift: for instance, in the sales prediction application, concept drift might be compensated for by adding information about the season to the model. By providing information about the time of the year, the rate of deterioration of the model is likely to decrease, but concept drift is unlikely to be eliminated altogether. This is because actual shopping behavior does not follow any static, finite model.
New factors may arise at any time that influence shopping behavior, and the influence of the known factors or their interactions may change. Concept drift cannot be avoided for complex phenomena that are not governed by fixed laws of nature. All processes that arise from human activity, such as socioeconomic processes, and biological processes are likely to experience concept drift. Therefore, periodic retraining, also known as refreshing, of any model is necessary. == See also == Data stream mining Data mining Snyk, a company whose portfolio includes drift detection in software applications == Further reading == Many papers have been published describing algorithms for concept drift detection. Only reviews, surveys and overviews are listed here: === Reviews === == External links == === Software === Frouros: An open-source Python library for drift detection in machine learning systems. NannyML: An open-source Python library for detecting univariate and multivariate distribution drift and estimating machine learning model performance without ground truth labels. RapidMiner: Formerly Yet Another Learning Environment (YALE): free open-source software for knowledge discovery, data mining, and machine learning, also featuring data stream mining, learning time-varying concepts, and tracking drifting concepts. It is used in combination with its data stream mining plugin (formerly the concept drift plugin). EDDM (Early Drift Detection Method): free open-source implementation of drift detection methods in Weka. MOA (Massive Online Analysis): free open-source software specifically for mining data streams with concept drift. It contains a prequential evaluation method, the EDDM concept drift methods, a reader of ARFF real datasets, and artificial stream generators such as SEA concepts, STAGGER, rotating hyperplane, random tree, and random radius based functions. MOA supports bi-directional interaction with Weka.
=== Datasets === ==== Real ==== USP Data Stream Repository, 27 real-world stream datasets with concept drift compiled by Souza et al. (2020). Access Airline, approximately 116 million flight arrival and departure records (cleaned and sorted) compiled by E. Ikonomovska. Reference: Data Expo 2009 Competition [1]. Access Chess.com (online games) and Luxembourg (social survey) datasets compiled by I. Zliobaite. Access ECUE spam 2 datasets, each consisting of more than 10,000 emails collected over a period of approximately 2 years by an individual. Access from S.J.Delany webpage Elec2, electricity demand, 2 classes, 45,312 instances. Reference: M. Harries, Splice-2 comparative evaluation: Electricity pricing, Technical report, The University of New South Wales, 1999. Access from J.Gama webpage. Comment on applicability. PAKDD'09 competition data represents the credit evaluation task. It is collected over a five-year period. Unfortunately, the true labels are released only for the first part of the data. Access Sensor stream and Power supply stream datasets are available from X. Zhu's Stream Data Mining Repository. Access SMEAR is a benchmark data stream with many missing values. Environment observation data over 7 years. Predict cloudiness. Access Text mining, a collection of text mining datasets with concept drift, maintained by I. Katakis. Access Gas Sensor Array Drift Dataset, a collection of 13,910 measurements from 16 chemical sensors utilized for drift compensation in a discrimination task of 6 gases at various levels of concentrations. Access ==== Other ==== KDD'99 competition data contains simulated intrusions in a military network environment. It is often used as a benchmark to evaluate handling concept drift. Access ==== Synthetic ==== Extreme verification latency benchmark Souza, V.M.A.; Silva, D.F.; Gama, J.; Batista, G.E.A.P.A. (2015). "Data Stream Classification Guided by Clustering on Nonstationary Environments and Extreme Verification Latency". 
Proceedings of the 2015 SIAM International Conference on Data Mining (SDM). SIAM. pp. 873–881. doi:10.1137/1.9781611974010.98. ISBN 9781611974010. S2CID 19198944. Access from Nonstationary Environments – Archive. Sine, Line, Plane, Circle and Boolean Data Sets Minku, L.L.; White, A.P.; Yao, X. (2010). "The Impact of Diversity on On-line Ensemble Learning in the Presence of Concept Drift" (PDF). IEEE Transactions on Knowledge and Data Engineering. 22 (5): 730–742. doi:10.1109/TKDE.2009.156. S2CID 16592739. Access from L.Minku webpage. SEA concepts Street, N.W.; Kim, Y. (2001). "A streaming ensemble algorithm (SEA) for large-scale classification" (PDF). KDD'01: Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 377–382. doi:10.1145/502512.502568. ISBN 978-1-58113-391-2. S2CID 11868540. Access from J.Gama webpage. STAGGER Schlimmer, J.C.; Granger, R.H. (1986). "Incremental Learning from Noisy Data". Mach. Learn. 1 (3): 317–354. doi:10.1007/BF00116895. S2CID 33776987. Mixed Gama, J.; Medas, P.; Castillo, G.; Rodrigues, P. (2004). "Learning with drift detection". Brazilian symposium on artificial intelligence. Springer. pp. 286–295. doi:10.1007/978-3-540-28645-5_29. ISBN 978-3-540-28645-5. S2CID 2606652. ==== Data generation frameworks ==== Minku, White & Yao 2010 Download from L.Minku webpage. Lindstrom, P.; Delany, S.J.; MacNamee, B. (2008). "Autopilot: Simulating Changing Concepts in Real Data" (PDF). Proceedings of the 19th Irish Conference on Artificial Intelligence & Cognitive Science. pp. 272–263. Narasimhamurthy, A.; Kuncheva, L.I. (2007). "A framework for generating data to simulate changing environments". AIAP'07: Proceedings of the 25th IASTED International Multi-Conference: artificial intelligence and applications. pp. 384–389. 
Code === Projects === INFER: Computational Intelligence Platform for Evolving and Robust Predictive Systems (2010–2014), Bournemouth University (UK), Evonik Industries (Germany), Research and Engineering Centre (Poland) HaCDAIS: Handling Concept Drift in Adaptive Information Systems (2008–2012), Eindhoven University of Technology (the Netherlands) KDUS: Knowledge Discovery from Ubiquitous Streams, INESC Porto and Laboratory of Artificial Intelligence and Decision Support (Portugal) ADEPT: Adaptive Dynamic Ensemble Prediction Techniques, University of Manchester (UK), University of Bristol (UK) ALADDIN: autonomous learning agents for decentralised data and information networks (2005–2010) GAENARI: C++ incremental decision tree algorithm that minimizes concept drift damage. (2022) === Benchmarks === NAB: The Numenta Anomaly Benchmark, a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. (2014–2018) === Meetings === 2014 Special Session on "Concept Drift, Domain Adaptation & Learning in Dynamic Environments" @IEEE IJCNN 2014 2013 RealStream Real-World Challenges for Data Stream Mining Workshop-Discussion at the ECML PKDD 2013, Prague, Czech Republic. 
LEAPS 2013 The 1st International Workshop on Learning stratEgies and dAta Processing in nonStationary environments 2011 LEE 2011 Special Session on Learning in evolving environments and its application on real-world problems at ICMLA'11 HaCDAIS 2011 The 2nd International Workshop on Handling Concept Drift in Adaptive Information Systems ICAIS 2011 Track on Incremental Learning IJCNN 2011 Special Session on Concept Drift and Learning Dynamic Environments CIDUE 2011 Symposium on Computational Intelligence in Dynamic and Uncertain Environments 2010 HaCDAIS 2010 International Workshop on Handling Concept Drift in Adaptive Information Systems: Importance, Challenges and Solutions ICMLA10 Special Session on Dynamic learning in non-stationary environments SAC 2010 Data Streams Track at ACM Symposium on Applied Computing SensorKDD 2010 International Workshop on Knowledge Discovery from Sensor Data StreamKDD 2010 Novel Data Stream Pattern Mining Techniques Concept Drift and Learning in Nonstationary Environments at IEEE World Congress on Computational Intelligence MLMDS’2010 Special Session on Machine Learning Methods for Data Streams at the 10th International Conference on Intelligent Design and Applications, ISDA’10 == References ==
Instantaneously trained neural networks
Instantaneously trained neural networks are feedforward artificial neural networks that create a new hidden neuron node for each novel training sample. The weights to this hidden neuron separate out not only this training sample but others that are near it, thus providing generalization. This separation is done using the nearest hyperplane that can be written down instantaneously. In the two most important implementations the neighborhood of generalization either varies with the training sample (CC1 network) or remains constant (CC4 network). These networks use unary coding for an effective representation of the data sets. This type of network was first proposed in a 1993 paper by Subhash Kak. Since then, instantaneously trained neural networks have been proposed as models of short-term learning and used in web search and financial time-series prediction applications. They have also been used in instant classification of documents and for deep learning and data mining. As in other neural networks, their normal use is as software, but they have also been implemented in hardware using FPGAs and by optical implementation. == CC4 network == In the CC4 network, which is a three-stage network, the number of input nodes is one more than the size of the training vector, with the extra node serving as the biasing node whose input is always 1. For binary input vectors, the weights from the input nodes to the hidden neuron (say of index j) corresponding to the trained vector are given by the following formula: w i j = { − 1 , for x i = 0 + 1 , for x i = 1 r − s + 1 , for i = n + 1 {\displaystyle w_{ij}={\begin{cases}-1,&{\mbox{for }}x_{i}=0\\+1,&{\mbox{for }}x_{i}=1\\r-s+1,&{\mbox{for }}i=n+1\end{cases}}} where r {\displaystyle r} is the radius of generalization and s {\displaystyle s} is the Hamming weight (the number of 1s) of the binary sequence. 
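As a concrete illustration of the weight prescription above, the following sketch (Python; the function names are ours) builds the input-to-hidden weights for one stored binary vector and applies the threshold rule, under which a neuron outputs 1 when its weighted sum is zero or positive:

```python
def cc4_hidden_weights(x, r):
    """Input-to-hidden weights for one binary training vector x,
    with radius of generalization r (the CC4 prescription above).
    The last weight is for the bias input, which is always 1."""
    s = sum(x)                          # Hamming weight of x
    w = [1 if xi == 1 else -1 for xi in x]
    w.append(r - s + 1)                 # bias-node weight
    return w

def hidden_fires(w, y):
    """1 if the weighted sum (including the bias input of 1) is >= 0."""
    total = sum(wi * yi for wi, yi in zip(w, y)) + w[-1]
    return 1 if total >= 0 else 0
```

For an input at Hamming distance d from the stored vector, the weighted sum works out to r − d + 1, so with r = 0 the neuron responds only to inputs at distance at most 1 from the stored vector.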
From the hidden layer to the output layer the weights are 1 or −1 depending on whether the vector belongs to a given output class or not. The neurons in the hidden and output layers output 1 if the weighted sum of the inputs is 0 or positive, and 0 if the weighted sum of the inputs is negative: y = { 1 if ∑ x i ≥ 0 0 if ∑ x i < 0 {\displaystyle y=\left\{{\begin{matrix}1&{\mbox{if }}\sum x_{i}\geq 0\\0&{\mbox{if }}\sum x_{i}<0\end{matrix}}\right.} == Other networks == The CC4 network has also been modified to include non-binary input with varying radii of generalization so that it effectively provides a CC1 implementation. In feedback networks, the Willshaw network as well as the Hopfield network are able to learn instantaneously. == References ==
Web intelligence
Web intelligence is the area of scientific research and development that explores the roles and makes use of artificial intelligence and information technology for new products, services and frameworks that are empowered by the World Wide Web. The term was coined in a paper written by Ning Zhong, Jiming Liu, Yiyu Yao and Setsuo Ohsuga at the Computer Software and Applications Conference in 2000. == Research == Research on web intelligence covers many fields – including data mining (in particular web mining), information retrieval, pattern recognition, predictive analytics, the semantic web, web data warehousing – typically with a focus on web personalization and adaptive websites. == References == == External links == Web Intelligence Journal Page Web Intelligence Consortium, an international, non-profit organization dedicated to advancing worldwide scientific research and industrial development in the field of Web Intelligence Web intelligence Research Group at University of Chile == Further reading == Zhong, Ning; Liu, Jiming; Yao, Yiyu (2003). Web Intelligence. Springer. ISBN 978-3-540-44384-1. Shroff, Gautam (January 2014). The Intelligent Web: Search, smart algorithms, and big data. OUP Oxford. ISBN 978-0-19-964671-5. Velasquez, Juan; Palade, Vasile (2008). Adaptive Web Site: A Knowledge Extraction from Web Data Approach (1st ed.). IOS Press. ISBN 978-1-58603-831-1. Chbeir, Richard; Badr, Youakim; Abraham, Ajith; Hassanien, Aboul-Ella (April 2010). Emergent Web Intelligence: Advanced Information Retrieval (Advanced Information and Knowledge Processing) (PDF). Springer. ISBN 978-1-84996-073-1. Archived from the original (PDF) on 2012-11-11. Retrieved 2015-06-13.
Barycentric coordinate system
In geometry, a barycentric coordinate system is a coordinate system in which the location of a point is specified by reference to a simplex (a triangle for points in a plane, a tetrahedron for points in three-dimensional space, etc.). The barycentric coordinates of a point can be interpreted as masses placed at the vertices of the simplex, such that the point is the center of mass (or barycenter) of these masses. These masses can be zero or negative; they are all positive if and only if the point is inside the simplex. Every point has barycentric coordinates, and their sum is never zero. Two tuples of barycentric coordinates specify the same point if and only if they are proportional; that is to say, if one tuple can be obtained by multiplying the elements of the other tuple by the same non-zero number. Therefore, barycentric coordinates are either considered to be defined up to multiplication by a nonzero constant, or normalized so as to sum to unity. Barycentric coordinates were introduced by August Möbius in 1827. They are special homogeneous coordinates. Barycentric coordinates are strongly related to Cartesian coordinates and, more generally, to affine coordinates (see Affine space § Relationship between barycentric and affine coordinates). Barycentric coordinates are particularly useful in triangle geometry for studying properties that do not depend on the angles of the triangle, such as Ceva's theorem, Routh's theorem, and Menelaus's theorem. In computer-aided design, they are useful for defining some kinds of Bézier surfaces. == Definition == Let A 0 , … , A n {\displaystyle A_{0},\ldots ,A_{n}} be n + 1 points in a Euclidean space, a flat or an affine space A {\displaystyle \mathbf {A} } of dimension n that are affinely independent; this means that there is no affine subspace of dimension n − 1 that contains all the points, or, equivalently, that the points define a simplex. 
Given any point P ∈ A , {\displaystyle P\in \mathbf {A} ,} there are scalars a 0 , … , a n {\displaystyle a_{0},\ldots ,a_{n}} that are not all zero, such that ( a 0 + ⋯ + a n ) O P → = a 0 O A 0 → + ⋯ + a n O A n → , {\displaystyle (a_{0}+\cdots +a_{n}){\overset {}{\overrightarrow {OP}}}=a_{0}{\overset {}{\overrightarrow {OA_{0}}}}+\cdots +a_{n}{\overset {}{\overrightarrow {OA_{n}}}},} for any point O. (As usual, the notation A B → {\displaystyle {\overset {}{\overrightarrow {AB}}}} represents the translation vector or free vector that maps the point A to the point B.) The elements of an (n + 1)-tuple ( a 0 : … : a n ) {\displaystyle (a_{0}:\dotsc :a_{n})} that satisfies this equation are called barycentric coordinates of P with respect to A 0 , … , A n . {\displaystyle A_{0},\ldots ,A_{n}.} The use of colons in the notation of the tuple means that barycentric coordinates are a sort of homogeneous coordinates, that is, the point is not changed if all coordinates are multiplied by the same nonzero constant. Moreover, the barycentric coordinates are also not changed if the auxiliary point O, the origin, is changed. The barycentric coordinates of a point are unique up to scaling. That is, two tuples ( a 0 : … : a n ) {\displaystyle (a_{0}:\dotsc :a_{n})} and ( b 0 : … : b n ) {\displaystyle (b_{0}:\dotsc :b_{n})} are barycentric coordinates of the same point if and only if there is a nonzero scalar λ {\displaystyle \lambda } such that b i = λ a i {\displaystyle b_{i}=\lambda a_{i}} for every i. In some contexts, it is useful to constrain the barycentric coordinates of a point so that they are unique. This is usually achieved by imposing the condition ∑ a i = 1 , {\displaystyle \sum a_{i}=1,} or equivalently by dividing every a i {\displaystyle a_{i}} by the sum of all a i . {\displaystyle a_{i}.} These specific barycentric coordinates are called normalized or absolute barycentric coordinates. 
Sometimes, they are also called affine coordinates, although this term refers commonly to a slightly different concept. Sometimes, it is the normalized barycentric coordinates that are called barycentric coordinates. In this case the above defined coordinates are called homogeneous barycentric coordinates. With the above notation, the homogeneous barycentric coordinates of Ai are all zero, except the one of index i. When working over the real numbers (the above definition is also used for affine spaces over an arbitrary field), the points all of whose normalized barycentric coordinates are nonnegative form the convex hull of { A 0 , … , A n } , {\displaystyle \{A_{0},\ldots ,A_{n}\},} which is the simplex that has these points as its vertices. With the above notation, a tuple ( a 0 , … , a n ) {\displaystyle (a_{0},\ldots ,a_{n})} such that ∑ i = 0 n a i = 0 {\displaystyle \sum _{i=0}^{n}a_{i}=0} does not define any point, but the vector a 0 O A 0 → + ⋯ + a n O A n → {\displaystyle a_{0}{\overset {}{\overrightarrow {OA_{0}}}}+\cdots +a_{n}{\overset {}{\overrightarrow {OA_{n}}}}} is independent of the origin O. As the direction of this vector is not changed if all a i {\displaystyle a_{i}} are multiplied by the same scalar, the homogeneous tuple ( a 0 : … : a n ) {\displaystyle (a_{0}:\dotsc :a_{n})} defines a direction of lines, that is, a point at infinity. See below for more details. == Relationship with Cartesian or affine coordinates == Barycentric coordinates are strongly related to Cartesian coordinates and, more generally, affine coordinates. For a space of dimension n, these coordinate systems are defined relative to a point O, the origin, whose coordinates are zero, and n points A 1 , … , A n , {\displaystyle A_{1},\ldots ,A_{n},} whose coordinates are zero except that of index i, which equals one. 
A point has coordinates ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} for such a coordinate system if and only if its normalized barycentric coordinates are ( 1 − x 1 − ⋯ − x n , x 1 , … , x n ) {\displaystyle (1-x_{1}-\cdots -x_{n},x_{1},\ldots ,x_{n})} relative to the points O , A 1 , … , A n . {\displaystyle O,A_{1},\ldots ,A_{n}.} The main advantage of barycentric coordinate systems is that they are symmetric with respect to the n + 1 defining points. They are therefore often useful for studying properties that are symmetric with respect to n + 1 points. On the other hand, distances and angles are difficult to express in general barycentric coordinate systems, and when they are involved, it is generally simpler to use a Cartesian coordinate system. == Relationship with projective coordinates == Homogeneous barycentric coordinates are also strongly related to some projective coordinates. However, this relationship is more subtle than in the case of affine coordinates, and, to be clearly understood, requires a coordinate-free definition of the projective completion of an affine space, and a definition of a projective frame. The projective completion of an affine space of dimension n is a projective space of the same dimension that contains the affine space as the complement of a hyperplane. The projective completion is unique up to an isomorphism. The hyperplane is called the hyperplane at infinity, and its points are the points at infinity of the affine space. Given a projective space of dimension n, a projective frame is an ordered set of n + 2 points that are not contained in the same hyperplane. A projective frame defines a projective coordinate system such that the coordinates of the (n + 2)th point of the frame are all equal, and, otherwise, all coordinates of the ith point are zero, except the ith one. 
When constructing the projective completion from an affine coordinate system, one commonly defines it with respect to a projective frame consisting of the intersections with the hyperplane at infinity of the coordinate axes, the origin of the affine space, and the point that has all its affine coordinates equal to one. This implies that the points at infinity have their last coordinate equal to zero, and that the projective coordinates of a point of the affine space are obtained by completing its affine coordinates by one as (n + 1)th coordinate. When one has n + 1 points in an affine space that define a barycentric coordinate system, this is another projective frame of the projective completion that is convenient to choose. This frame consists of these points and their centroid, that is the point that has all its barycentric coordinates equal. In this case, the homogeneous barycentric coordinates of a point in the affine space are the same as the projective coordinates of this point. A point is at infinity if and only if the sum of its coordinates is zero. This point is in the direction of the vector defined at the end of § Definition. == Barycentric coordinates on triangles == In the context of a triangle, barycentric coordinates are also known as area coordinates or areal coordinates, because the coordinates of P with respect to triangle ABC are equivalent to the (signed) ratios of the areas of PBC, PCA and PAB to the area of the reference triangle ABC. Areal and trilinear coordinates are used for similar purposes in geometry. Barycentric or areal coordinates are extremely useful in engineering applications involving triangular subdomains. These make analytic integrals often easier to evaluate, and Gaussian quadrature tables are often presented in terms of area coordinates. 
Consider a triangle A B C {\displaystyle ABC} with vertices A = ( a 1 , a 2 ) {\displaystyle A=(a_{1},a_{2})} , B = ( b 1 , b 2 ) {\displaystyle B=(b_{1},b_{2})} , C = ( c 1 , c 2 ) {\displaystyle C=(c_{1},c_{2})} in the x,y-plane, R 2 {\displaystyle \mathbb {R} ^{2}} . One may regard points in R 2 {\displaystyle \mathbb {R} ^{2}} as vectors, so it makes sense to add or subtract them and multiply them by scalars. Each triangle A B C {\displaystyle ABC} has a signed area or sarea, which is plus or minus its area: sarea ⁡ ( A B C ) = ± area ⁡ ( A B C ) . {\displaystyle \operatorname {sarea} (ABC)=\pm \operatorname {area} (ABC).} The sign is plus if the path from A {\displaystyle A} to B {\displaystyle B} to C {\displaystyle C} then back to A {\displaystyle A} goes around the triangle in a counterclockwise direction. The sign is minus if the path goes around in a clockwise direction. Let P {\displaystyle P} be a point in the plane, and let ( λ 1 , λ 2 , λ 3 ) {\displaystyle (\lambda _{1},\lambda _{2},\lambda _{3})} be its normalized barycentric coordinates with respect to the triangle A B C {\displaystyle ABC} , so P = λ 1 A + λ 2 B + λ 3 C {\displaystyle P=\lambda _{1}A+\lambda _{2}B+\lambda _{3}C} and 1 = λ 1 + λ 2 + λ 3 . {\displaystyle 1=\lambda _{1}+\lambda _{2}+\lambda _{3}.} Normalized barycentric coordinates ( λ 1 , λ 2 , λ 3 ) {\displaystyle (\lambda _{1},\lambda _{2},\lambda _{3})} are also called areal coordinates because they represent ratios of signed areas of triangles: λ 1 = sarea ⁡ ( P B C ) / sarea ⁡ ( A B C ) λ 2 = sarea ⁡ ( A P C ) / sarea ⁡ ( A B C ) λ 3 = sarea ⁡ ( A B P ) / sarea ⁡ ( A B C ) . 
{\displaystyle {\begin{aligned}\lambda _{1}&=\operatorname {sarea} (PBC)/\operatorname {sarea} (ABC)\\\lambda _{2}&=\operatorname {sarea} (APC)/\operatorname {sarea} (ABC)\\\lambda _{3}&=\operatorname {sarea} (ABP)/\operatorname {sarea} (ABC).\end{aligned}}} One may prove these ratio formulas based on the facts that a triangle is half of a parallelogram, and the area of a parallelogram is easy to compute using a determinant. Specifically, let D = − A + B + C . {\displaystyle D=-A+B+C.} A B D C {\displaystyle ABDC} is a parallelogram because its pairs of opposite sides, represented by the pairs of displacement vectors D − C = B − A {\displaystyle D-C=B-A} , and D − B = C − A {\displaystyle D-B=C-A} , are parallel and congruent. Triangle A B C {\displaystyle ABC} is half of the parallelogram A B D C {\displaystyle ABDC} , so twice its signed area is equal to the signed area of the parallelogram, which is given by the 2 × 2 {\displaystyle 2\times 2} determinant det ( B − A , C − A ) {\displaystyle \det(B-A,C-A)} whose columns are the displacement vectors B − A {\displaystyle B-A} and C − A {\displaystyle C-A} : sarea ⁡ ( A B D C ) = det ( b 1 − a 1 c 1 − a 1 b 2 − a 2 c 2 − a 2 ) {\displaystyle \operatorname {sarea} (ABDC)=\det {\begin{pmatrix}b_{1}-a_{1}&c_{1}-a_{1}\\b_{2}-a_{2}&c_{2}-a_{2}\end{pmatrix}}} Expanding the determinant, using its alternating and multilinear properties, one obtains det ( B − A , C − A ) = det ( B , C ) − det ( A , C ) − det ( B , A ) + det ( A , A ) = det ( A , B ) + det ( B , C ) + det ( C , A ) {\displaystyle {\begin{aligned}\det(B-A,C-A)&=\det(B,C)-\det(A,C)-\det(B,A)+\det(A,A)\\&=\det(A,B)+\det(B,C)+\det(C,A)\end{aligned}}} so 2 sarea ⁡ ( A B C ) = det ( A , B ) + det ( B , C ) + det ( C , A ) . 
{\displaystyle 2\operatorname {sarea} (ABC)=\det(A,B)+\det(B,C)+\det(C,A).} Similarly, 2 sarea ⁡ ( P B C ) = det ( P , B ) + det ( B , C ) + det ( C , P ) {\displaystyle 2\operatorname {sarea} (PBC)=\det(P,B)+\det(B,C)+\det(C,P)} , To obtain the ratio of these signed areas, express P {\displaystyle P} in the second formula in terms of its barycentric coordinates: 2 sarea ⁡ ( P B C ) = det ( λ 1 A + λ 2 B + λ 3 C , B ) + det ( B , C ) + det ( C , λ 1 A + λ 2 B + λ 3 C ) = λ 1 det ( A , B ) + λ 3 det ( C , B ) + det ( B , C ) + λ 1 det ( C , A ) + λ 2 det ( C , B ) = λ 1 det ( A , B ) + λ 1 det ( C , A ) + ( 1 − λ 2 − λ 3 ) det ( B , C ) . {\displaystyle {\begin{aligned}2\operatorname {sarea} (PBC)&=\det(\lambda _{1}A+\lambda _{2}B+\lambda _{3}C,B)+\det(B,C)+\det(C,\lambda _{1}A+\lambda _{2}B+\lambda _{3}C)\\&=\lambda _{1}\det(A,B)+\lambda _{3}\det(C,B)+\det(B,C)+\lambda _{1}\det(C,A)+\lambda _{2}\det(C,B)\\&=\lambda _{1}\det(A,B)+\lambda _{1}\det(C,A)+(1-\lambda _{2}-\lambda _{3})\det(B,C)\end{aligned}}.} The barycentric coordinates are normalized so 1 = λ 1 + λ 2 + λ 3 {\displaystyle 1=\lambda _{1}+\lambda _{2}+\lambda _{3}} , hence λ 1 = ( 1 − λ 2 − λ 3 ) {\displaystyle \lambda _{1}=(1-\lambda _{2}-\lambda _{3})} . Plug that into the previous line to obtain 2 sarea ⁡ ( P B C ) = λ 1 ( det ( A , B ) + det ( B , C ) + det ( C , A ) ) = ( λ 1 ) ( 2 sarea ⁡ ( A B C ) ) . {\displaystyle {\begin{aligned}2\operatorname {sarea} (PBC)&=\lambda _{1}(\det(A,B)+\det(B,C)+\det(C,A))\\&=(\lambda _{1})(2\operatorname {sarea} (ABC)).\end{aligned}}} Therefore λ 1 = sarea ⁡ ( P B C ) / sarea ⁡ ( A B C ) {\displaystyle \lambda _{1}=\operatorname {sarea} (PBC)/\operatorname {sarea} (ABC)} . 
Similar calculations prove the other two formulas λ 2 = sarea ⁡ ( A P C ) / sarea ⁡ ( A B C ) {\displaystyle \lambda _{2}=\operatorname {sarea} (APC)/\operatorname {sarea} (ABC)} λ 3 = sarea ⁡ ( A B P ) / sarea ⁡ ( A B C ) {\displaystyle \lambda _{3}=\operatorname {sarea} (ABP)/\operatorname {sarea} (ABC)} . Trilinear coordinates ( γ 1 , γ 2 , γ 3 ) {\displaystyle (\gamma _{1},\gamma _{2},\gamma _{3})} of P {\displaystyle P} are signed distances from P {\displaystyle P} to the lines BC, AC, and AB, respectively. The sign of γ 1 {\displaystyle \gamma _{1}} is positive if P {\displaystyle P} and A {\displaystyle A} lie on the same side of BC, negative otherwise. The signs of γ 2 {\displaystyle \gamma _{2}} and γ 3 {\displaystyle \gamma _{3}} are assigned similarly. Let a = length ⁡ ( B C ) {\displaystyle a=\operatorname {length} (BC)} , b = length ⁡ ( C A ) {\displaystyle b=\operatorname {length} (CA)} , c = length ⁡ ( A B ) {\displaystyle c=\operatorname {length} (AB)} . Then γ 1 a = ± 2 sarea ⁡ ( P B C ) γ 2 b = ± 2 sarea ⁡ ( A P C ) γ 3 c = ± 2 sarea ⁡ ( A B P ) {\displaystyle {\begin{aligned}\gamma _{1}a&=\pm 2\operatorname {sarea} (PBC)\\\gamma _{2}b&=\pm 2\operatorname {sarea} (APC)\\\gamma _{3}c&=\pm 2\operatorname {sarea} (ABP)\end{aligned}}} where, as above, sarea stands for signed area. All three signs are plus if triangle ABC is positively oriented, minus otherwise. The relations between trilinear and barycentric coordinates are obtained by substituting these formulas into the above formulas that express barycentric coordinates as ratios of areas. Switching back and forth between the barycentric coordinates and other coordinate systems makes some problems much easier to solve. 
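These area ratios translate directly into code. The sketch below (Python; the helper names are ours) evaluates signed areas as half the 2 × 2 determinant det(B − A, C − A) and returns the three ratios as normalized barycentric coordinates:

```python
def sarea(a, b, c):
    """Signed area of triangle abc (counterclockwise positive),
    computed as half the determinant det(b - a, c - a)."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1])
                  - (c[0] - a[0]) * (b[1] - a[1]))

def barycentric(p, a, b, c):
    """Normalized barycentric coordinates of p w.r.t. triangle abc,
    as the area ratios sarea(PBC), sarea(APC), sarea(ABP) over sarea(ABC)."""
    total = sarea(a, b, c)
    return (sarea(p, b, c) / total,
            sarea(a, p, c) / total,
            sarea(a, b, p) / total)
```

For example, the centroid of a triangle yields (1/3, 1/3, 1/3), and a vertex yields a coordinate of 1 at that vertex and 0 elsewhere.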
=== Conversion between barycentric and Cartesian coordinates === ==== Edge approach ==== Given a point r {\displaystyle \mathbf {r} } in a triangle's plane one can obtain the barycentric coordinates λ 1 {\displaystyle \lambda _{1}} , λ 2 {\displaystyle \lambda _{2}} and λ 3 {\displaystyle \lambda _{3}} from the Cartesian coordinates ( x , y ) {\displaystyle (x,y)} or vice versa. We can write the Cartesian coordinates of the point r {\displaystyle \mathbf {r} } in terms of the Cartesian components of the triangle vertices r 1 {\displaystyle \mathbf {r} _{1}} , r 2 {\displaystyle \mathbf {r} _{2}} , r 3 {\displaystyle \mathbf {r} _{3}} where r i = ( x i , y i ) {\displaystyle \mathbf {r} _{i}=(x_{i},y_{i})} and in terms of the barycentric coordinates of r {\displaystyle \mathbf {r} } as x = λ 1 x 1 + λ 2 x 2 + λ 3 x 3 y = λ 1 y 1 + λ 2 y 2 + λ 3 y 3 {\displaystyle {\begin{aligned}x&=\lambda _{1}x_{1}+\lambda _{2}x_{2}+\lambda _{3}x_{3}\\[2pt]y&=\lambda _{1}y_{1}+\lambda _{2}y_{2}+\lambda _{3}y_{3}\end{aligned}}} That is, the Cartesian coordinates of any point are a weighted average of the Cartesian coordinates of the triangle's vertices, with the weights being the point's barycentric coordinates summing to unity. 
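The forward direction is just this weighted average; a minimal sketch (Python; the function name is ours) reads:

```python
def barycentric_to_cartesian(lams, verts):
    """Cartesian point from normalized barycentric coordinates lams
    and triangle vertices verts = [(x1, y1), (x2, y2), (x3, y3)]."""
    x = sum(l * v[0] for l, v in zip(lams, verts))
    y = sum(l * v[1] for l, v in zip(lams, verts))
    return (x, y)
```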
To find the reverse transformation, from Cartesian coordinates to barycentric coordinates, we first substitute λ 3 = 1 − λ 1 − λ 2 {\displaystyle \lambda _{3}=1-\lambda _{1}-\lambda _{2}} into the above to obtain x = λ 1 x 1 + λ 2 x 2 + ( 1 − λ 1 − λ 2 ) x 3 y = λ 1 y 1 + λ 2 y 2 + ( 1 − λ 1 − λ 2 ) y 3 {\displaystyle {\begin{aligned}x&=\lambda _{1}x_{1}+\lambda _{2}x_{2}+(1-\lambda _{1}-\lambda _{2})x_{3}\\[2pt]y&=\lambda _{1}y_{1}+\lambda _{2}y_{2}+(1-\lambda _{1}-\lambda _{2})y_{3}\end{aligned}}} Rearranging, this is λ 1 ( x 1 − x 3 ) + λ 2 ( x 2 − x 3 ) + x 3 − x = 0 λ 1 ( y 1 − y 3 ) + λ 2 ( y 2 − y 3 ) + y 3 − y = 0 {\displaystyle {\begin{aligned}\lambda _{1}(x_{1}-x_{3})+\lambda _{2}(x_{2}-x_{3})+x_{3}-x&=0\\[2pt]\lambda _{1}(y_{1}-y_{3})+\lambda _{2}(y_{2}-\,y_{3})+y_{3}-\,y&=0\end{aligned}}} This linear transformation may be written more succinctly as T ⋅ λ = r − r 3 {\displaystyle \mathbf {T} \cdot \lambda =\mathbf {r} -\mathbf {r} _{3}} where λ {\displaystyle \lambda } is the vector of the first two barycentric coordinates, r {\displaystyle \mathbf {r} } is the vector of Cartesian coordinates, and T {\displaystyle \mathbf {T} } is a matrix given by T = ( x 1 − x 3 x 2 − x 3 y 1 − y 3 y 2 − y 3 ) {\displaystyle \mathbf {T} =\left({\begin{matrix}x_{1}-x_{3}&x_{2}-x_{3}\\y_{1}-y_{3}&y_{2}-y_{3}\end{matrix}}\right)} Now the matrix T {\displaystyle \mathbf {T} } is invertible, since r 1 − r 3 {\displaystyle \mathbf {r} _{1}-\mathbf {r} _{3}} and r 2 − r 3 {\displaystyle \mathbf {r} _{2}-\mathbf {r} _{3}} are linearly independent (if this were not the case, then r 1 {\displaystyle \mathbf {r} _{1}} , r 2 {\displaystyle \mathbf {r} _{2}} , and r 3 {\displaystyle \mathbf {r} _{3}} would be collinear and would not form a triangle). 
Thus, we can rearrange the above equation to get ( λ 1 λ 2 ) = T − 1 ( r − r 3 ) {\displaystyle \left({\begin{matrix}\lambda _{1}\\\lambda _{2}\end{matrix}}\right)=\mathbf {T} ^{-1}(\mathbf {r} -\mathbf {r} _{3})} Finding the barycentric coordinates has thus been reduced to finding the 2×2 inverse matrix of T {\displaystyle \mathbf {T} } , an easy problem. Explicitly, the formulae for the barycentric coordinates of point r {\displaystyle \mathbf {r} } in terms of its Cartesian coordinates (x, y) and in terms of the Cartesian coordinates of the triangle's vertices are: λ 1 = ( y 2 − y 3 ) ( x − x 3 ) + ( x 3 − x 2 ) ( y − y 3 ) det ( T ) = ( y 2 − y 3 ) ( x − x 3 ) + ( x 3 − x 2 ) ( y − y 3 ) ( y 2 − y 3 ) ( x 1 − x 3 ) + ( x 3 − x 2 ) ( y 1 − y 3 ) = ( r − r 3 ) × ( r 2 − r 3 ) ( r 1 − r 3 ) × ( r 2 − r 3 ) λ 2 = ( y 3 − y 1 ) ( x − x 3 ) + ( x 1 − x 3 ) ( y − y 3 ) det ( T ) = ( y 3 − y 1 ) ( x − x 3 ) + ( x 1 − x 3 ) ( y − y 3 ) ( y 2 − y 3 ) ( x 1 − x 3 ) + ( x 3 − x 2 ) ( y 1 − y 3 ) = ( r − r 3 ) × ( r 3 − r 1 ) ( r 1 − r 3 ) × ( r 2 − r 3 ) λ 3 = 1 − λ 1 − λ 2 = 1 − ( r − r 3 ) × ( r 2 − r 1 ) ( r 1 − r 3 ) × ( r 2 − r 3 ) = ( r − r 1 ) × ( r 1 − r 2 ) ( r 1 − r 3 ) × ( r 2 − r 3 ) {\displaystyle {\begin{aligned}\lambda _{1}=&\ {\frac {(y_{2}-y_{3})(x-x_{3})+(x_{3}-x_{2})(y-y_{3})}{\det(\mathbf {T} )}}\\[4pt]&={\frac {(y_{2}-y_{3})(x-x_{3})+(x_{3}-x_{2})(y-y_{3})}{(y_{2}-y_{3})(x_{1}-x_{3})+(x_{3}-x_{2})(y_{1}-y_{3})}}\\[4pt]&={\frac {(\mathbf {r} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf {r_{3}} )}{(\mathbf {r_{1}} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf {r_{3}} )}}\\[12pt]\lambda _{2}=&\ {\frac {(y_{3}-y_{1})(x-x_{3})+(x_{1}-x_{3})(y-y_{3})}{\det(\mathbf {T} )}}\\[4pt]&={\frac {(y_{3}-y_{1})(x-x_{3})+(x_{1}-x_{3})(y-y_{3})}{(y_{2}-y_{3})(x_{1}-x_{3})+(x_{3}-x_{2})(y_{1}-y_{3})}}\\[4pt]&={\frac {(\mathbf {r} -\mathbf {r_{3}} )\times (\mathbf {r_{3}} -\mathbf {r_{1}} )}{(\mathbf {r_{1}} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf 
{r_{3}} )}}\\[12pt]\lambda _{3}=&\ 1-\lambda _{1}-\lambda _{2}\\[4pt]&=1-{\frac {(\mathbf {r} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf {r_{1}} )}{(\mathbf {r_{1}} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf {r_{3}} )}}\\[4pt]&={\frac {(\mathbf {r} -\mathbf {r_{1}} )\times (\mathbf {r_{1}} -\mathbf {r_{2}} )}{(\mathbf {r_{1}} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf {r_{3}} )}}\end{aligned}}} When understanding the last line of equation, note the identity ( r 1 − r 3 ) × ( r 2 − r 3 ) = ( r 3 − r 1 ) × ( r 1 − r 2 ) {\displaystyle (\mathbf {r_{1}} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf {r_{3}} )=(\mathbf {r_{3}} -\mathbf {r_{1}} )\times (\mathbf {r_{1}} -\mathbf {r_{2}} )} . ==== Vertex approach ==== Another way to solve the conversion from Cartesian to barycentric coordinates is to write the relation in the matrix form R λ = r {\displaystyle \mathbf {R} {\boldsymbol {\lambda }}=\mathbf {r} } with R = ( r 1 | r 2 | r 3 ) {\displaystyle \mathbf {R} =\left(\,\mathbf {r} _{1}\,|\,\mathbf {r} _{2}\,|\,\mathbf {r} _{3}\right)} and λ = ( λ 1 , λ 2 , λ 3 ) ⊤ , {\displaystyle {\boldsymbol {\lambda }}=\left(\lambda _{1},\lambda _{2},\lambda _{3}\right)^{\top },} i.e. ( x 1 x 2 x 3 y 1 y 2 y 3 ) ( λ 1 λ 2 λ 3 ) = ( x y ) {\displaystyle {\begin{pmatrix}x_{1}&x_{2}&x_{3}\\y_{1}&y_{2}&y_{3}\end{pmatrix}}{\begin{pmatrix}\lambda _{1}\\\lambda _{2}\\\lambda _{3}\end{pmatrix}}={\begin{pmatrix}x\\y\end{pmatrix}}} To get the unique normalized solution we need to add the condition λ 1 + λ 2 + λ 3 = 1 {\displaystyle \lambda _{1}+\lambda _{2}+\lambda _{3}=1} . 
The barycentric coordinates are thus the solution of the linear system ( 1 1 1 x 1 x 2 x 3 y 1 y 2 y 3 ) ( λ 1 λ 2 λ 3 ) = ( 1 x y ) {\displaystyle \left({\begin{matrix}1&1&1\\x_{1}&x_{2}&x_{3}\\y_{1}&y_{2}&y_{3}\end{matrix}}\right){\begin{pmatrix}\lambda _{1}\\\lambda _{2}\\\lambda _{3}\end{pmatrix}}=\left({\begin{matrix}1\\x\\y\end{matrix}}\right)} which is ( λ 1 λ 2 λ 3 ) = 1 2 A ( x 2 y 3 − x 3 y 2 y 2 − y 3 x 3 − x 2 x 3 y 1 − x 1 y 3 y 3 − y 1 x 1 − x 3 x 1 y 2 − x 2 y 1 y 1 − y 2 x 2 − x 1 ) ( 1 x y ) {\displaystyle {\begin{pmatrix}\lambda _{1}\\\lambda _{2}\\\lambda _{3}\end{pmatrix}}={\frac {1}{2A}}{\begin{pmatrix}x_{2}y_{3}-x_{3}y_{2}&y_{2}-y_{3}&x_{3}-x_{2}\\x_{3}y_{1}-x_{1}y_{3}&y_{3}-y_{1}&x_{1}-x_{3}\\x_{1}y_{2}-x_{2}y_{1}&y_{1}-y_{2}&x_{2}-x_{1}\end{pmatrix}}{\begin{pmatrix}1\\x\\y\end{pmatrix}}} where 2 A = det ( 1 | R ) = x 1 ( y 2 − y 3 ) + x 2 ( y 3 − y 1 ) + x 3 ( y 1 − y 2 ) {\displaystyle 2A=\det(1|R)=x_{1}(y_{2}-y_{3})+x_{2}(y_{3}-y_{1})+x_{3}(y_{1}-y_{2})} is twice the signed area of the triangle. The area interpretation of the barycentric coordinates can be recovered by applying Cramer's rule to this linear system. === Conversion between barycentric and trilinear coordinates === A point with trilinear coordinates x : y : z has barycentric coordinates ax : by : cz where a, b, c are the side lengths of the triangle. Conversely, a point with barycentrics λ 1 : λ 2 : λ 3 {\displaystyle \lambda _{1}:\lambda _{2}:\lambda _{3}} has trilinears λ 1 / a : λ 2 / b : λ 3 / c . {\displaystyle \lambda _{1}/a:\lambda _{2}/b:\lambda _{3}/c.} === Equations in barycentric coordinates === The three sides a, b, c respectively have equations λ 1 = 0 , λ 2 = 0 , λ 3 = 0. {\displaystyle \lambda _{1}=0,\quad \lambda _{2}=0,\quad \lambda _{3}=0.} The equation of a triangle's Euler line is | λ 1 λ 2 λ 3 1 1 1 tan ⁡ A tan ⁡ B tan ⁡ C | = 0. 
{\displaystyle {\begin{vmatrix}\lambda _{1}&\lambda _{2}&\lambda _{3}\\1&1&1\\\tan A&\tan B&\tan C\end{vmatrix}}=0.} Using the previously given conversion between barycentric and trilinear coordinates, the various other equations given in Trilinear coordinates#Formulas can be rewritten in terms of barycentric coordinates. === Distance between points === The displacement vector of two normalized points P = ( p 1 , p 2 , p 3 ) {\displaystyle P=(p_{1},p_{2},p_{3})} and Q = ( q 1 , q 2 , q 3 ) {\displaystyle Q=(q_{1},q_{2},q_{3})} is P Q → = ( p 1 − q 1 , p 2 − q 2 , p 3 − q 3 ) . {\displaystyle {\overset {}{\overrightarrow {PQ}}}=(p_{1}-q_{1},p_{2}-q_{2},p_{3}-q_{3}).} The distance d between P and Q, or the length of the displacement vector P Q → = ( x , y , z ) , {\displaystyle {\overset {}{\overrightarrow {PQ}}}=(x,y,z),} is d 2 = | P Q | 2 = − a 2 y z − b 2 z x − c 2 x y = 1 2 [ x 2 ( b 2 + c 2 − a 2 ) + y 2 ( c 2 + a 2 − b 2 ) + z 2 ( a 2 + b 2 − c 2 ) ] . {\displaystyle {\begin{aligned}d^{2}&=|PQ|^{2}\\[2pt]&=-a^{2}yz-b^{2}zx-c^{2}xy\\[4pt]&={\frac {1}{2}}\left[x^{2}(b^{2}+c^{2}-a^{2})+y^{2}(c^{2}+a^{2}-b^{2})+z^{2}(a^{2}+b^{2}-c^{2})\right].\end{aligned}}} where a, b, c are the sidelengths of the triangle. The equivalence of the last two expressions follows from x + y + z = 0 , {\displaystyle x+y+z=0,} which holds because x + y + z = ( p 1 − q 1 ) + ( p 2 − q 2 ) + ( p 3 − q 3 ) = ( p 1 + p 2 + p 3 ) − ( q 1 + q 2 + q 3 ) = 1 − 1 = 0. {\displaystyle {\begin{aligned}x+y+z&=(p_{1}-q_{1})+(p_{2}-q_{2})+(p_{3}-q_{3})\\[2pt]&=(p_{1}+p_{2}+p_{3})-(q_{1}+q_{2}+q_{3})\\[2pt]&=1-1=0.\end{aligned}}} The barycentric coordinates of a point can be calculated based on distances di to the three triangle vertices by solving the equation ( − c 2 c 2 b 2 − a 2 − b 2 c 2 − a 2 b 2 1 1 1 ) λ = ( d A 2 − d B 2 d A 2 − d C 2 1 ) . 
{\displaystyle \left({\begin{matrix}-c^{2}&c^{2}&b^{2}-a^{2}\\-b^{2}&c^{2}-a^{2}&b^{2}\\1&1&1\end{matrix}}\right){\boldsymbol {\lambda }}=\left({\begin{matrix}d_{A}^{2}-d_{B}^{2}\\d_{A}^{2}-d_{C}^{2}\\1\end{matrix}}\right).} === Applications === ==== Determining location with respect to a triangle ==== Although barycentric coordinates are most commonly used to handle points inside a triangle, they can also be used to describe a point outside the triangle. If the point is not inside the triangle, then we can still use the formulas above to compute the barycentric coordinates. However, since the point is outside the triangle, at least one of the coordinates will violate our original assumption that λ 1...3 ≥ 0 {\displaystyle \lambda _{1...3}\geq 0} . In fact, given any point in Cartesian coordinates, we can use this fact to determine where this point is with respect to a triangle. If a point lies in the interior of the triangle, all of the barycentric coordinates lie in the open interval ( 0 , 1 ) . {\displaystyle (0,1).} If a point lies on an edge of the triangle but not at a vertex, one of the area coordinates λ 1...3 {\displaystyle \lambda _{1...3}} (the one associated with the opposite vertex) is zero, while the other two lie in the open interval ( 0 , 1 ) . {\displaystyle (0,1).} If the point lies at a vertex, the coordinate associated with that vertex equals 1 and the others equal zero. Finally, if the point lies outside the triangle, at least one coordinate is negative. Summarizing: Point r {\displaystyle \mathbf {r} } lies inside the triangle if and only if 0 < λ i < 1 ∀ i in 1 , 2 , 3 {\displaystyle 0<\lambda _{i}<1\;\forall \;i{\text{ in }}{1,2,3}} . r {\displaystyle \mathbf {r} } lies on an edge or at a vertex of the triangle if 0 ≤ λ i ≤ 1 ∀ i in 1 , 2 , 3 {\displaystyle 0\leq \lambda _{i}\leq 1\;\forall \;i{\text{ in }}{1,2,3}} and λ i = 0 , for some i in 1 , 2 , 3 {\displaystyle \lambda _{i}=0\;{\text{, for some i in }}{1,2,3}} . 
Otherwise, r {\displaystyle \mathbf {r} } lies outside the triangle. In particular, if a point lies on the far side of one of the triangle's sidelines, the barycentric coordinate associated with the vertex opposite that sideline is negative. ==== Interpolation on a triangular unstructured grid ==== If f ( r 1 ) , f ( r 2 ) , f ( r 3 ) {\displaystyle f(\mathbf {r} _{1}),f(\mathbf {r} _{2}),f(\mathbf {r} _{3})} are known quantities, but the values of f inside the triangle defined by r 1 , r 2 , r 3 {\displaystyle \mathbf {r} _{1},\mathbf {r} _{2},\mathbf {r} _{3}} are unknown, they can be approximated using linear interpolation. Barycentric coordinates provide a convenient way to compute this interpolation. If r {\displaystyle \mathbf {r} } is a point inside the triangle with barycentric coordinates λ 1 {\displaystyle \lambda _{1}} , λ 2 {\displaystyle \lambda _{2}} , λ 3 {\displaystyle \lambda _{3}} , then f ( r ) ≈ λ 1 f ( r 1 ) + λ 2 f ( r 2 ) + λ 3 f ( r 3 ) {\displaystyle f(\mathbf {r} )\approx \lambda _{1}f(\mathbf {r} _{1})+\lambda _{2}f(\mathbf {r} _{2})+\lambda _{3}f(\mathbf {r} _{3})} In general, given any unstructured grid or polygon mesh, this kind of technique can be used to approximate the value of f at all points, as long as the function's value is known at all vertices of the mesh. In this case, we have many triangles, each corresponding to a different part of the space. To interpolate a function f at a point r {\displaystyle \mathbf {r} } , first a triangle must be found that contains r {\displaystyle \mathbf {r} } . To do so, r {\displaystyle \mathbf {r} } is transformed into the barycentric coordinates of each triangle. If some triangle is found such that the coordinates satisfy 0 ≤ λ i ≤ 1 ∀ i in 1 , 2 , 3 {\displaystyle 0\leq \lambda _{i}\leq 1\;\forall \;i{\text{ in }}1,2,3} , then the point lies in that triangle or on its edge (explained in the previous section). 
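The conversion, containment test, and linear interpolation described above can be sketched together in Python (the function names and sample values are illustrative, not a standard API):

```python
def barycentric_coords(tri, p):
    """Edge-approach conversion: solve T (l1, l2) = p - r3 via the explicit
    2x2 inverse; valid whenever the vertices are not collinear."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    x, y = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    l2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return l1, l2, 1.0 - l1 - l2

def interpolate(tri, f_vals, p):
    """Linear interpolation of the vertex values f_vals at point p;
    returns None when p lies outside the triangle (some coordinate < 0)."""
    lam = barycentric_coords(tri, p)
    if min(lam) < 0:
        return None  # at least one negative coordinate: p is outside
    return sum(l * f for l, f in zip(lam, f_vals))

# The vertex values below sample the plane f(x, y) = x + 2y, so the
# interpolant reproduces it exactly inside the triangle.
tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
print(interpolate(tri, (0.0, 4.0, 8.0), (2.0, 1.0)))  # 4.0
print(interpolate(tri, (0.0, 4.0, 8.0), (5.0, 5.0)))  # None: outside
```

Because barycentric interpolation is exact for affine functions, the plane example above is a convenient sanity check.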
Then the value of f ( r ) {\displaystyle f(\mathbf {r} )} can be interpolated as described above. These methods have many applications, such as the finite element method (FEM). ==== Integration over a triangle or tetrahedron ==== The integral of a function over the domain of the triangle is cumbersome to compute in a Cartesian coordinate system: one generally has to split the triangle into two pieces, and the resulting integration limits are messy. Instead, it is often easier to make a change of variables to any two barycentric coordinates, e.g. λ 1 , λ 2 {\displaystyle \lambda _{1},\lambda _{2}} . Under this change of variables, ∫ T f ( r ) d r = 2 A ∫ 0 1 ∫ 0 1 − λ 2 f ( λ 1 r 1 + λ 2 r 2 + ( 1 − λ 1 − λ 2 ) r 3 ) d λ 1 d λ 2 {\displaystyle \int _{T}f(\mathbf {r} )\ d\mathbf {r} =2A\int _{0}^{1}\int _{0}^{1-\lambda _{2}}f(\lambda _{1}\mathbf {r} _{1}+\lambda _{2}\mathbf {r} _{2}+(1-\lambda _{1}-\lambda _{2})\mathbf {r} _{3})\ d\lambda _{1}\ d\lambda _{2}} where A is the area of the triangle. This result follows from the fact that a rectangle in barycentric coordinates corresponds to a quadrilateral in Cartesian coordinates, and the ratio of the areas of the corresponding shapes in the corresponding coordinate systems is given by 2 A {\displaystyle 2A} . Similarly, for integration over a tetrahedron, instead of breaking up the integral into two or three separate pieces, one could switch to 3D tetrahedral coordinates under the change of variables ∫ T f ( r ) d r = 6 V ∫ 0 1 ∫ 0 1 − λ 3 ∫ 0 1 − λ 2 − λ 3 f ( λ 1 r 1 + λ 2 r 2 + λ 3 r 3 + ( 1 − λ 1 − λ 2 − λ 3 ) r 4 ) d λ 1 d λ 2 d λ 3 {\displaystyle \int _{T}f(\mathbf {r} )\ d\mathbf {r} =6V\int _{0}^{1}\int _{0}^{1-\lambda _{3}}\int _{0}^{1-\lambda _{2}-\lambda _{3}}f(\lambda _{1}\mathbf {r} _{1}+\lambda _{2}\mathbf {r} _{2}+\lambda _{3}\mathbf {r} _{3}+(1-\lambda _{1}-\lambda _{2}-\lambda _{3})\mathbf {r} _{4})\ d\lambda _{1}\ d\lambda _{2}\ d\lambda _{3}} where V is the volume of the tetrahedron. 
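The change of variables above can be checked numerically. The following sketch approximates the right-hand side with a midpoint Riemann sum over the (λ1, λ2) simplex; the resolution n is an arbitrary choice and controls the accuracy:

```python
def integrate_over_triangle(f, tri, n=400):
    """Approximate the integral of f over triangle tri via the barycentric
    change of variables: 2A * integral of f(l1 r1 + l2 r2 + (1-l1-l2) r3)
    over the simplex l1, l2 >= 0, l1 + l2 <= 1 (midpoint Riemann sum)."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    two_a = abs((x1 - x3) * (y2 - y3) - (x2 - x3) * (y1 - y3))  # |det T| = 2A
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        l1 = (i + 0.5) * h
        for j in range(n):
            l2 = (j + 0.5) * h
            if l1 + l2 < 1.0:  # keep only cells inside the coordinate simplex
                x = l1 * x1 + l2 * x2 + (1 - l1 - l2) * x3
                y = l1 * y1 + l2 * y2 + (1 - l1 - l2) * y3
                total += f(x, y) * h * h
    return two_a * total

tri = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
print(integrate_over_triangle(lambda x, y: 1.0, tri))  # ~2.0, the triangle's area
```

Integrating the constant 1 recovers the triangle's area, which makes the 2A Jacobian factor easy to verify.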
=== Examples of special points === In the homogeneous barycentric coordinate system defined with respect to a triangle A B C {\displaystyle ABC} , the following statements about special points of A B C {\displaystyle ABC} hold. The three vertices A, B, and C have coordinates A = 1 : 0 : 0 B = 0 : 1 : 0 C = 0 : 0 : 1 {\displaystyle {\begin{array}{rccccc}A=&1&:&0&:&0\\B=&0&:&1&:&0\\C=&0&:&0&:&1\end{array}}} The centroid has coordinates 1 : 1 : 1. {\displaystyle 1:1:1.} If a, b, c are the edge lengths B C {\displaystyle BC} , C A {\displaystyle CA} , A B {\displaystyle AB} respectively, α {\displaystyle \alpha } , β {\displaystyle \beta } , γ {\displaystyle \gamma } are the angle measures ∠ C A B {\displaystyle \angle CAB} , ∠ A B C {\displaystyle \angle ABC} , and ∠ B C A {\displaystyle \angle BCA} respectively, and s is the semiperimeter of A B C {\displaystyle ABC} , then the following statements about special points of A B C {\displaystyle ABC} hold in addition. The circumcenter has coordinates sin ⁡ 2 α : sin ⁡ 2 β : sin ⁡ 2 γ = 1 − cot ⁡ β cot ⁡ γ : 1 − cot ⁡ γ cot ⁡ α : 1 − cot ⁡ α cot ⁡ β = a 2 ( − a 2 + b 2 + c 2 ) : b 2 ( a 2 − b 2 + c 2 ) : c 2 ( a 2 + b 2 − c 2 ) {\displaystyle {\begin{array}{rccccc}&\sin 2\alpha &:&\sin 2\beta &:&\sin 2\gamma \\[2pt]=&1-\cot \beta \cot \gamma &:&1-\cot \gamma \cot \alpha &:&1-\cot \alpha \cot \beta \\[2pt]=&a^{2}(-a^{2}+b^{2}+c^{2})&:&b^{2}(a^{2}-b^{2}+c^{2})&:&c^{2}(a^{2}+b^{2}-c^{2})\end{array}}} The orthocenter has coordinates tan ⁡ α : tan ⁡ β : tan ⁡ γ = a cos ⁡ β cos ⁡ γ : b cos ⁡ γ cos ⁡ α : c cos ⁡ α cos ⁡ β = ( a 2 + b 2 − c 2 ) ( a 2 − b 2 + c 2 ) : ( − a 2 + b 2 + c 2 ) ( a 2 + b 2 − c 2 ) : ( a 2 − b 2 + c 2 ) ( − a 2 + b 2 + c 2 ) {\displaystyle {\begin{array}{rccccc}&\tan \alpha &:&\tan \beta &:&\tan \gamma \\[2pt]=&a\cos \beta \cos \gamma &:&b\cos \gamma \cos \alpha &:&c\cos \alpha \cos \beta 
\\[2pt]=&(a^{2}+b^{2}-c^{2})(a^{2}-b^{2}+c^{2})&:&(-a^{2}+b^{2}+c^{2})(a^{2}+b^{2}-c^{2})&:&(a^{2}-b^{2}+c^{2})(-a^{2}+b^{2}+c^{2})\end{array}}} The incenter has coordinates a : b : c = sin ⁡ α : sin ⁡ β : sin ⁡ γ . {\displaystyle a:b:c=\sin \alpha :\sin \beta :\sin \gamma .} The excenters have coordinates J A = − a : b : c J B = a : − b : c J C = a : b : − c {\displaystyle {\begin{array}{rrcrcr}J_{A}=&-a&:&b&:&c\\J_{B}=&a&:&-b&:&c\\J_{C}=&a&:&b&:&-c\end{array}}} The nine-point center has coordinates a cos ⁡ ( β − γ ) : b cos ⁡ ( γ − α ) : c cos ⁡ ( α − β ) = 1 + cot ⁡ β cot ⁡ γ : 1 + cot ⁡ γ cot ⁡ α : 1 + cot ⁡ α cot ⁡ β = a 2 ( b 2 + c 2 ) − ( b 2 − c 2 ) 2 : b 2 ( c 2 + a 2 ) − ( c 2 − a 2 ) 2 : c 2 ( a 2 + b 2 ) − ( a 2 − b 2 ) 2 {\displaystyle {\begin{array}{rccccc}&a\cos(\beta -\gamma )&:&b\cos(\gamma -\alpha )&:&c\cos(\alpha -\beta )\\[4pt]=&1+\cot \beta \cot \gamma &:&1+\cot \gamma \cot \alpha &:&1+\cot \alpha \cot \beta \\[4pt]=&a^{2}(b^{2}+c^{2})-(b^{2}-c^{2})^{2}&:&b^{2}(c^{2}+a^{2})-(c^{2}-a^{2})^{2}&:&c^{2}(a^{2}+b^{2})-(a^{2}-b^{2})^{2}\end{array}}} The Gergonne point has coordinates ( s − b ) ( s − c ) : ( s − c ) ( s − a ) : ( s − a ) ( s − b ) {\displaystyle (s-b)(s-c):(s-c)(s-a):(s-a)(s-b)} . The Nagel point has coordinates s − a : s − b : s − c {\displaystyle s-a:s-b:s-c} . The symmedian point has coordinates a 2 : b 2 : c 2 {\displaystyle a^{2}:b^{2}:c^{2}} . == Barycentric coordinates on tetrahedra == Barycentric coordinates may be easily extended to three dimensions. The 3D simplex is a tetrahedron, a polyhedron having four triangular faces and four vertices. Once again, the four barycentric coordinates are defined so that the first vertex r 1 {\displaystyle \mathbf {r} _{1}} maps to barycentric coordinates λ = ( 1 , 0 , 0 , 0 ) {\displaystyle \lambda =(1,0,0,0)} , r 2 → ( 0 , 1 , 0 , 0 ) {\displaystyle \mathbf {r} _{2}\to (0,1,0,0)} , etc. 
This is again a linear transformation, and we may extend the above procedure for triangles to find the barycentric coordinates of a point r {\displaystyle \mathbf {r} } with respect to a tetrahedron: ( λ 1 λ 2 λ 3 ) = T − 1 ( r − r 4 ) {\displaystyle \left({\begin{matrix}\lambda _{1}\\\lambda _{2}\\\lambda _{3}\end{matrix}}\right)=\mathbf {T} ^{-1}(\mathbf {r} -\mathbf {r} _{4})} where T {\displaystyle \mathbf {T} } is now a 3×3 matrix: T = ( x 1 − x 4 x 2 − x 4 x 3 − x 4 y 1 − y 4 y 2 − y 4 y 3 − y 4 z 1 − z 4 z 2 − z 4 z 3 − z 4 ) {\displaystyle \mathbf {T} =\left({\begin{matrix}x_{1}-x_{4}&x_{2}-x_{4}&x_{3}-x_{4}\\y_{1}-y_{4}&y_{2}-y_{4}&y_{3}-y_{4}\\z_{1}-z_{4}&z_{2}-z_{4}&z_{3}-z_{4}\end{matrix}}\right)} and λ 4 = 1 − λ 1 − λ 2 − λ 3 {\displaystyle \lambda _{4}=1-\lambda _{1}-\lambda _{2}-\lambda _{3}} with the corresponding Cartesian coordinates: x = λ 1 x 1 + λ 2 x 2 + λ 3 x 3 + ( 1 − λ 1 − λ 2 − λ 3 ) x 4 y = λ 1 y 1 + λ 2 y 2 + λ 3 y 3 + ( 1 − λ 1 − λ 2 − λ 3 ) y 4 z = λ 1 z 1 + λ 2 z 2 + λ 3 z 3 + ( 1 − λ 1 − λ 2 − λ 3 ) z 4 {\displaystyle {\begin{aligned}x&=\lambda _{1}x_{1}+\lambda _{2}x_{2}+\lambda _{3}x_{3}+(1-\lambda _{1}-\lambda _{2}-\lambda _{3})x_{4}\\y&=\lambda _{1}y_{1}+\,\lambda _{2}y_{2}+\lambda _{3}y_{3}+(1-\lambda _{1}-\lambda _{2}-\lambda _{3})y_{4}\\z&=\lambda _{1}z_{1}+\,\lambda _{2}z_{2}+\lambda _{3}z_{3}+(1-\lambda _{1}-\lambda _{2}-\lambda _{3})z_{4}\end{aligned}}} Once again, the problem of finding the barycentric coordinates has been reduced to inverting a 3×3 matrix. 3D barycentric coordinates may be used to decide if a point lies inside a tetrahedral volume, and to interpolate a function within a tetrahedral mesh, in an analogous manner to the 2D procedure. Tetrahedral meshes are often used in finite element analysis because the use of barycentric coordinates can greatly simplify 3D interpolation. == Generalized barycentric coordinates == Barycentric coordinates ( λ 1 , λ 2 , . . . 
, λ k ) {\displaystyle (\lambda _{1},\lambda _{2},...,\lambda _{k})} of a point p ∈ R n {\displaystyle p\in \mathbb {R} ^{n}} that are defined with respect to a finite set of k points x 1 , x 2 , . . . , x k ∈ R n {\displaystyle x_{1},x_{2},...,x_{k}\in \mathbb {R} ^{n}} instead of a simplex are called generalized barycentric coordinates. For these, the equation ( λ 1 + λ 2 + ⋯ + λ k ) p = λ 1 x 1 + λ 2 x 2 + ⋯ + λ k x k {\displaystyle (\lambda _{1}+\lambda _{2}+\cdots +\lambda _{k})p=\lambda _{1}x_{1}+\lambda _{2}x_{2}+\cdots +\lambda _{k}x_{k}} is still required to hold. Usually one uses normalized coordinates, λ 1 + λ 2 + ⋯ + λ k = 1 {\displaystyle \lambda _{1}+\lambda _{2}+\cdots +\lambda _{k}=1} . As for the case of a simplex, the points with nonnegative normalized generalized coordinates ( 0 ≤ λ i ≤ 1 {\displaystyle 0\leq \lambda _{i}\leq 1} ) form the convex hull of x1, ..., xn. If there are more points than in a full simplex ( k > n + 1 {\displaystyle k>n+1} ) the generalized barycentric coordinates of a point are not unique, as the defining linear system (here for n=2) ( 1 1 1 . . . x 1 x 2 x 3 . . . y 1 y 2 y 3 . . . ) ( λ 1 λ 2 λ 3 ⋮ ) = ( 1 x y ) {\displaystyle \left({\begin{matrix}1&1&1&...\\x_{1}&x_{2}&x_{3}&...\\y_{1}&y_{2}&y_{3}&...\end{matrix}}\right){\begin{pmatrix}\lambda _{1}\\\lambda _{2}\\\lambda _{3}\\\vdots \end{pmatrix}}=\left({\begin{matrix}1\\x\\y\end{matrix}}\right)} is underdetermined. The simplest example is a quadrilateral in the plane. Various kinds of additional restrictions can be used to define unique barycentric coordinates. === Abstraction === More abstractly, generalized barycentric coordinates express a convex polytope with n vertices, regardless of dimension, as the image of the standard ( n − 1 ) {\displaystyle (n-1)} -simplex, which has n vertices – the map is onto: Δ n − 1 ↠ P . 
{\displaystyle \Delta ^{n-1}\twoheadrightarrow P.} The map is one-to-one if and only if the polytope is a simplex, in which case the map is an isomorphism; this corresponds to a point not having unique generalized barycentric coordinates except when P is a simplex. Dual to generalized barycentric coordinates are slack variables, which measure by how much margin a point satisfies the linear constraints, and gives an embedding P ↪ ( R ≥ 0 ) f {\displaystyle P\hookrightarrow (\mathbf {R} _{\geq 0})^{f}} into the f-orthant, where f is the number of faces (dual to the vertices). This map is one-to-one (slack variables are uniquely determined) but not onto (not all combinations can be realized). This use of the standard ( n − 1 ) {\displaystyle (n-1)} -simplex and f-orthant as standard objects that map to a polytope or that a polytope maps into should be contrasted with the use of the standard vector space K n {\displaystyle K^{n}} as the standard object for vector spaces, and the standard affine hyperplane { ( x 0 , … , x n ) ∣ ∑ x i = 1 } ⊂ K n + 1 {\displaystyle \{(x_{0},\ldots ,x_{n})\mid \sum x_{i}=1\}\subset K^{n+1}} as the standard object for affine spaces, where in each case choosing a linear basis or affine basis provides an isomorphism, allowing all vector spaces and affine spaces to be thought of in terms of these standard spaces, rather than an onto or one-to-one map (not every polytope is a simplex). Further, the n-orthant is the standard object that maps to cones. === Applications === Generalized barycentric coordinates have applications in computer graphics and more specifically in geometric modelling. Often, a three-dimensional model can be approximated by a polyhedron such that the generalized barycentric coordinates with respect to that polyhedron have a geometric meaning. In this way, the processing of the model can be simplified by using these meaningful coordinates. Barycentric coordinates are also used in geophysics. 
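For a point inside a quadrilateral, the underdetermined system above admits infinitely many solutions. One easily computed particular choice is the minimum-norm solution, sketched here with NumPy; this is just one possible selection rule (geometry-aware constructions such as Wachspress coordinates make different choices):

```python
import numpy as np

def generalized_barycentric(points, p):
    """Minimum-norm solution of the underdetermined system
    [[1 ... 1], [x1 ... xk], [y1 ... yk]] lam = (1, px, py).
    np.linalg.lstsq returns this solution for rank-deficient systems."""
    pts = np.asarray(points, dtype=float)
    A = np.vstack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    b = np.array([1.0, p[0], p[1]])
    lam, *_ = np.linalg.lstsq(A, b, rcond=None)
    return lam

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
lam = generalized_barycentric(square, (0.5, 0.5))
print(lam)  # the center of the unit square gets equal weights, ~[0.25 0.25 0.25 0.25]
```

By symmetry, the center of the square receives equal weights; off-center points give unequal (possibly negative) weights, reflecting the non-uniqueness discussed above.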
== See also == Ternary plot Convex combination Water pouring puzzle Homogeneous coordinates == References == Scott, J. A. Some examples of the use of areal coordinates in triangle geometry, Mathematical Gazette 83, November 1999, 472–477. Schindler, Max; Chen, Evan (July 13, 2012). Barycentric Coordinates in Olympiad Geometry (PDF). Retrieved 14 January 2016. Clark Kimberling's Encyclopedia of Triangles Encyclopedia of Triangle Centers. Archived from the original on 2012-04-19. Retrieved 2012-06-02. Bradley, Christopher J. (2007). The Algebra of Geometry: Cartesian, Areal and Projective Co-ordinates. Bath: Highperception. ISBN 978-1-906338-00-8. Coxeter, H.S.M. (1969). Introduction to geometry (2nd ed.). John Wiley and Sons. pp. 216–221. ISBN 978-0-471-50458-0. Zbl 0181.48101. Barycentric Calculus In Euclidean And Hyperbolic Geometry: A Comparative Introduction, Abraham Ungar, World Scientific, 2010 Hyperbolic Barycentric Coordinates, Abraham A. Ungar, The Australian Journal of Mathematical Analysis and Applications, Vol.6, No.1, Article 18, pp. 1–35, 2009 Weisstein, Eric W. "Areal Coordinates". MathWorld. Weisstein, Eric W. "Barycentric Coordinates". MathWorld. Barycentric coordinates computation in homogeneous coordinates, Vaclav Skala, Computers and Graphics, Vol.32, No.1, pp. 120–127, 2008 == External links == Law of the lever The uses of homogeneous barycentric coordinates in plane euclidean geometry Barycentric Coordinates – a collection of scientific papers about (generalized) barycentric coordinates Barycentric coordinates: A Curious Application (solving the "three glasses" problem) at cut-the-knot Accurate point in triangle test Barycentric Coordinates in Olympiad Geometry by Evan Chen and Max Schindler Barycenter command and TriangleCurve command at Geogebra.
Instance selection
Instance selection (or dataset reduction, or dataset condensation) is an important data pre-processing step that can be applied in many machine learning (or data mining) tasks. Approaches for instance selection can be applied for reducing the original dataset to a manageable volume, leading to a reduction of the computational resources that are necessary for performing the learning process. Algorithms of instance selection can also be applied for removing noisy instances before applying learning algorithms. This step can improve the accuracy in classification problems. An instance selection algorithm should identify a subset of the total available data that achieves the original purpose of the data mining (or machine learning) application as if the whole data had been used. Considering this, the optimal outcome of instance selection would be the minimum data subset that can accomplish the same task with no performance loss, in comparison with the performance achieved when the task is performed using the whole available data. Therefore, every instance selection strategy should deal with a trade-off between the reduction rate of the dataset and the classification quality. == Instance selection algorithms == The literature provides several different algorithms for instance selection. They can be distinguished from each other according to several different criteria. Considering this, instance selection algorithms can be grouped into two main classes, according to which instances they select: algorithms that preserve the instances at the boundaries of classes and algorithms that preserve the internal instances of the classes. Within the category of algorithms that select instances at the boundaries it is possible to cite DROP3, ICF and LSBo. On the other hand, within the category of algorithms that select internal instances, it is possible to mention ENN and LSSm. In general, algorithms such as ENN and LSSm are used for removing harmful (noisy) instances from the dataset. 
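The core rule of ENN — discard every instance that is misclassified by a majority vote of its k nearest neighbors — can be sketched as follows. The Euclidean metric and k = 3 are illustrative choices, not prescribed by the algorithm's definition:

```python
from collections import Counter

def enn_filter(X, y, k=3):
    """Edited Nearest Neighbor sketch: keep instance i only when the
    majority label among its k nearest neighbors (excluding itself)
    agrees with y[i]. X: list of feature tuples, y: list of labels.
    Returns the indices of the retained instances."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    keep = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        neighbors = sorted((j for j in range(len(X)) if j != i),
                           key=lambda j: dist2(xi, X[j]))[:k]
        vote = Counter(y[j] for j in neighbors).most_common(1)[0][0]
        if vote == yi:
            keep.append(i)
    return keep

# A tight cluster of "A"s with one mislabeled "B" in its middle:
# ENN removes the "B" as noise and keeps the rest.
X = [(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5)]
y = ["A", "A", "A", "A", "B"]
print(enn_filter(X, y))  # [0, 1, 2, 3]
```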
They do not reduce the data as much as the algorithms that select border instances; instead, they remove instances at the boundaries that have a negative impact on the data mining task. They can be used by other instance selection algorithms as a filtering step. For example, the ENN algorithm is used by DROP3 as the first step, and the LSSm algorithm is used by LSBo. There is also another group of algorithms that adopt different selection criteria. For example, the algorithms LDIS, CDIS and XLDIS select the densest instances in a given arbitrary neighborhood. The selected instances can include both border and internal instances. The LDIS and CDIS algorithms are very simple and select subsets that are very representative of the original dataset. Moreover, since they search for representative instances in each class separately, they are faster (in terms of time complexity and effective running time) than other algorithms, such as DROP3 and ICF. Finally, there is a third category of algorithms that, instead of selecting actual instances of the dataset, select prototypes (which can be synthetic instances). In this category it is possible to include PSSA, PSDSP and PSSP. These three algorithms adopt the notion of a spatial partition (a hyperrectangle) for identifying similar instances and extract prototypes for each set of similar instances. In general, these approaches can also be modified for selecting actual instances of the datasets. The algorithm ISDSP adopts a similar approach for selecting actual instances (instead of prototypes). == References ==
Characteristic samples
Characteristic samples is a concept in the field of grammatical inference, related to passive learning. In passive learning, an inference algorithm I {\displaystyle I} is given a set of pairs of strings and labels S {\displaystyle S} , and returns a representation R {\displaystyle R} that is consistent with S {\displaystyle S} . Characteristic samples consider the scenario when the goal is not only finding a representation consistent with S {\displaystyle S} , but finding a representation that recognizes a specific target language. A characteristic sample of language L {\displaystyle L} is a set of pairs of the form ( s , l ( s ) ) {\displaystyle (s,l(s))} where: l ( s ) = 1 {\displaystyle l(s)=1} if and only if s ∈ L {\displaystyle s\in L} l ( s ) = − 1 {\displaystyle l(s)=-1} if and only if s ∉ L {\displaystyle s\notin L} Given the characteristic sample S {\displaystyle S} , I {\displaystyle I} 's output on it is a representation R {\displaystyle R} , e.g. an automaton, that recognizes L {\displaystyle L} . == Formal Definition == === The Learning Paradigm associated with Characteristic Samples === There are three entities in the learning paradigm connected to characteristic samples, the adversary, the teacher and the inference algorithm. 
Given a class of languages C {\displaystyle \mathbb {C} } and a class of representations for the languages R {\displaystyle \mathbb {R} } , the paradigm goes as follows: The adversary A {\displaystyle A} selects a language L ∈ C {\displaystyle L\in \mathbb {C} } and reports it to the teacher. The teacher T {\displaystyle T} then computes a set of strings and labels them correctly according to L {\displaystyle L} , trying to make sure that the inference algorithm will compute L {\displaystyle L} . The adversary can add correctly labeled words to the set in order to confuse the inference algorithm. The inference algorithm I {\displaystyle I} gets the sample and computes a representation R ∈ R {\displaystyle R\in \mathbb {R} } consistent with the sample. The goal is that when the inference algorithm receives a characteristic sample for a language L {\displaystyle L} , or a sample that subsumes a characteristic sample for L {\displaystyle L} , it will return a representation that recognizes exactly the language L {\displaystyle L} . 
=== Sample === A sample S {\displaystyle S} is a set of pairs of the form ( s , l ( s ) ) {\displaystyle (s,l(s))} such that l ( s ) ∈ { − 1 , 1 } {\displaystyle l(s)\in \{-1,1\}} ==== Sample consistent with a language ==== We say that a sample S {\displaystyle S} is consistent with language L {\displaystyle L} if for every pair ( s , l ( s ) ) {\displaystyle (s,l(s))} in S {\displaystyle S} : l ( s ) = 1 if and only if s ∈ L {\displaystyle l(s)=1{\text{ if and only if }}s\in L} l ( s ) = − 1 if and only if s ∉ L {\displaystyle l(s)=-1{\text{ if and only if }}s\notin L} === Characteristic sample === Given an inference algorithm I {\displaystyle I} and a language L {\displaystyle L} , a sample S {\displaystyle S} that is consistent with L {\displaystyle L} is called a characteristic sample of L {\displaystyle L} for I {\displaystyle I} if: I {\displaystyle I} 's output on S {\displaystyle S} is a representation R {\displaystyle R} that recognizes L {\displaystyle L} . For every sample D {\displaystyle D} that is consistent with L {\displaystyle L} and also fulfils S ⊆ D {\displaystyle S\subseteq D} , I {\displaystyle I} 's output on D {\displaystyle D} is a representation R {\displaystyle R} that recognizes L {\displaystyle L} . A class of languages C {\displaystyle \mathbb {C} } is said to have characteristic samples if every L ∈ C {\displaystyle L\in \mathbb {C} } has a characteristic sample. == Related Theorems == === Theorem === If equivalence is undecidable for a class C {\textstyle \mathbb {C} } over Σ {\textstyle \Sigma } of cardinality greater than 1, then C {\textstyle \mathbb {C} } does not have characteristic samples. 
==== Proof ====
Given a class of representations C for which equivalence is undecidable, for every polynomial p and every n ∈ ℕ there exist two representations r1 and r2 of sizes bounded by n that recognize different languages but are inseparable by any string of length bounded by p(n). If this were not the case, we could decide whether r1 and r2 are equivalent by simulating their runs on all strings of length at most p(n), contradicting the assumption that equivalence is undecidable.

=== Theorem ===
If S1 is a characteristic sample for L1 and is also consistent with L2, then every characteristic sample of L2 is inconsistent with L1.

==== Proof ====
Given a class C that has characteristic samples, let R1 and R2 be representations that recognize L1 and L2 respectively. Under the assumption that there is a characteristic sample S1 for L1 that is also consistent with L2, assume for contradiction that there exists a characteristic sample S2 for L2 that is consistent with L1. By the definition of a characteristic sample, the inference algorithm I must return a representation that recognizes the language whenever it is given a sample that subsumes the characteristic sample itself.
But for the sample S1 ∪ S2, the output of the inference algorithm would need to recognize both L1 and L2, a contradiction.

=== Theorem ===
If a class is polynomially learnable by example-based queries, it is learnable with characteristic samples.

== Polynomially characterizable classes ==

=== Regular languages ===
The proof that DFAs are learnable using characteristic samples relies on the fact that every regular language has a finite number of equivalence classes with respect to the right congruence relation ∼L (where x ∼L y for x, y ∈ Σ* if and only if for all z ∈ Σ*: xz ∈ L ↔ yz ∈ L). Note that if x and y are not congruent with respect to ∼L, there exists a string z such that xz ∈ L but yz ∉ L, or vice versa; such a string is called a separating suffix.

==== Constructing a characteristic sample ====
The construction of a characteristic sample for a language L by the teacher goes as follows. First, by running a depth-first search on a deterministic automaton A recognizing L, starting from its initial state, we obtain a prefix-closed set of words W, ordered in shortlex order. From the fact above, we know that for every two inequivalent states of the automaton there exists a separating suffix that separates every two strings whose runs in A end in the respective states. We refer to the set of separating suffixes as S.
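A separating suffix for two states can be found by a breadth-first search over pairs of states. A minimal sketch, with the DFA encoded as a transition dictionary (an assumption of this illustration, not a prescribed format):

```python
from collections import deque

def separating_suffix(delta, accepting, p, q):
    """Return a suffix z such that exactly one of the runs from p and q on z
    ends in an accepting state, or None if p and q are equivalent."""
    seen = {(p, q)}
    queue = deque([(p, q, "")])
    while queue:
        s, t, z = queue.popleft()
        if (s in accepting) != (t in accepting):
            return z
        for (state, symbol), target in delta.items():
            if state == s:
                pair = (target, delta[(t, symbol)])
                if pair not in seen:
                    seen.add(pair)
                    queue.append(pair + (z + symbol,))
    return None

# Two-state DFA for "even number of a's": state 0 accepting, state 1 not.
delta = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}
assert separating_suffix(delta, {0}, 0, 1) == ""  # acceptance already differs
```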
The labeled set (sample) of words the teacher gives the adversary is {(w, l(w)) | w ∈ W·S ∪ W·Σ·S}, where l(w) is the correct label of w (whether or not it is in L). We may assume that ε ∈ S.

==== Constructing a deterministic automaton ====
Given the sample W from the adversary, the construction of the automaton by the inference algorithm I starts with defining P = prefix(W) and S = suffix(W), the sets of prefixes and suffixes of W respectively. The algorithm then constructs a matrix M in which the elements of P serve as the rows and the elements of S serve as the columns, both ordered in shortlex order. Next, the cells of the matrix are filled in the following manner for prefix p_i and suffix s_j:

If p_i s_j ∈ W, then M_ij = l(p_i s_j); otherwise, M_ij = 0.

We then say that rows i and t are distinguishable if there exists an index j such that M_ij = −1 × M_tj with M_ij ≠ 0.
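The matrix construction and the distinguishability test can be sketched as follows (a pure-Python illustration; the helper names are ours, and unobserved cells, marked 0, never distinguish two rows):

```python
def build_matrix(sample):
    """Rows are prefixes and columns suffixes of the sampled words, in
    shortlex order; cell (p, s) holds l(ps) if p·s was sampled, else 0."""
    labels = dict(sample)
    shortlex = lambda w: (len(w), w)
    P = sorted({w[:i] for w, _ in sample for i in range(len(w) + 1)}, key=shortlex)
    S = sorted({w[i:] for w, _ in sample for i in range(len(w) + 1)}, key=shortlex)
    M = [[labels.get(p + s, 0) for s in S] for p in P]
    return P, S, M

def distinguishable(M, i, t):
    """Rows i and t carry opposite observed labels in some column."""
    return any(a == -b and a != 0 for a, b in zip(M[i], M[t]))

P, S, M = build_matrix({("", 1), ("a", -1), ("aa", 1)})
assert distinguishable(M, 0, 1)      # "" and "a" lie in different classes
assert not distinguishable(M, 0, 2)  # "" and "aa" are indistinguishable here
```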
The next stage of the inference algorithm is to construct the set Q of pairwise distinguishable rows of M, by initializing Q with ε and iterating from the first row of M downwards, doing the following for each row r_i:

If r_i is distinguishable from all elements of Q, add it to Q; otherwise, continue to the next row.

From the way the teacher constructed the sample it passed to the adversary, we know that for every s ∈ Q and every σ ∈ Σ, the row sσ exists in M, and from the construction of Q, there exists a row s′ ∈ Q such that s′ and sσ are indistinguishable. The output automaton is defined as follows:

The set of states is Q.
The initial state is the state corresponding to the row ε ∈ Q.
The set of accepting states is {s ∈ Q | l(s) = 1}.
The transition function is defined by δ(s, σ) = s′, where s′ is the element of Q that is indistinguishable from sσ.

=== Other polynomially characterizable classes ===

The class of languages recognizable by multiplicity automata
The class of languages recognizable by tree automata
The class of languages recognizable by multiplicity tree automata
The class of languages recognizable by Fully-Ordered Lattice Automata
The class of languages recognizable by Visibly One-Counter Automata
The class of fully informative omega-regular languages

== Non-polynomially characterizable classes ==

There are some classes that do not have polynomially sized characteristic samples.
For example, from the first theorem in the Related theorems section, it has been shown that the following classes of languages do not have polynomially sized characteristic samples:

CFG: the class of context-free grammar languages over Σ of cardinality larger than 1
LING: the class of linear grammar languages over Σ of cardinality larger than 1
SDG: the class of simple deterministic grammar languages
NFA: the class of nondeterministic finite automaton languages

== Relations to other learning paradigms ==

Classes of representations that have characteristic samples relate to the following learning paradigms:

=== Class of semi-poly teachable languages ===
A representation class C is semi-poly T/L teachable if there exist three polynomials p, q, r, a teacher T, and an inference algorithm I, such that for any adversary A the following holds:

A selects a representation R of size n from C.
T computes a sample consistent with the language that R recognizes, of size bounded by p(n), with the strings in the sample bounded in length by q(n).
A adds correctly labeled strings to the sample computed by T, making the new sample of size m.
I then computes a representation equivalent to R in time bounded by r(m).

The class of languages for which there exists a polynomial algorithm that, given a sample, returns a representation consistent with the sample is called
consistency easy.

=== Polynomially characterizable languages ===
Given a representation class and a set of identification algorithms for it, the class is polynomially characterizable for that set if every representation R in the class has a characteristic sample S of size polynomial in the size of R, such that for every identification algorithm I in the set, I's output on S is R.

=== Relations between the paradigms ===

==== Theorem ====
A consistency-easy class C has characteristic samples if and only if it is semi-poly T/L teachable.

===== Proof =====
Assuming C has characteristic samples, then for every representation R ∈ C, its characteristic sample S satisfies the conditions on the sample computed by the teacher, and by the definition of a characteristic sample, the output of I on every sample S′ such that S ⊆ S′ is equivalent to R. Conversely, assuming C is semi-poly T/L teachable, then for every representation R ∈ C, the sample S computed by the teacher is a characteristic sample for R.

==== Theorem ====
If C has characteristic samples, then C is polynomially characterizable.
===== Proof =====
Assume for contradiction that C is not polynomially characterizable. Then there are two non-equivalent representations R1, R2 ∈ C with characteristic samples S1 and S2 respectively. By the definition of characteristic samples, any inference algorithm I needs to infer from the sample S1 ∪ S2 a representation compatible with both R1 and R2, a contradiction.

== See also ==
Grammar induction
Passive learning
Induction of regular languages
Deterministic finite automaton

== References ==
Weighted majority algorithm (machine learning)
In machine learning, the weighted majority algorithm (WMA) is a meta-learning algorithm used to construct a compound algorithm from a pool of prediction algorithms, which could be any type of learning algorithms, classifiers, or even real human experts. The algorithm assumes that we have no prior knowledge about the accuracy of the algorithms in the pool, but there are sufficient reasons to believe that one or more will perform well. Assume that the problem is a binary decision problem. To construct the compound algorithm, a positive weight is given to each of the algorithms in the pool. The compound algorithm then collects weighted votes from all the algorithms in the pool and gives the prediction that has the higher total vote. If the compound algorithm makes a mistake, the algorithms in the pool that contributed to the wrong prediction are discounted by a certain ratio β, where 0 < β < 1. It can be shown that the upper bound on the number of mistakes made in a given sequence of predictions from a pool of algorithms A is O(log |A| + m) if one algorithm in A makes at most m mistakes. There are many variations of the weighted majority algorithm to handle different situations, like shifting targets, infinite pools, or randomized predictions. The core mechanism remains similar, with the final performance of the compound algorithm bounded by a function of the performance of the specialist (best performing algorithm) in the pool.

== See also ==
Randomized weighted majority algorithm

== References ==
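The weighting scheme described above can be sketched as follows. This is a minimal illustration of the discount-on-compound-mistake variant described in this article; the function name and toy experts are ours:

```python
def weighted_majority(experts, stream, beta=0.5):
    """Binary weighted majority: collect weighted votes, predict the side
    with the larger total weight, and discount (by beta) the experts that
    voted with the losing prediction whenever the compound guess is wrong."""
    weights = [1.0] * len(experts)
    mistakes = 0
    for x, y in stream:
        votes = [expert(x) for expert in experts]
        weight_for = [sum(w for w, v in zip(weights, votes) if v == side)
                      for side in (0, 1)]
        prediction = 1 if weight_for[1] > weight_for[0] else 0
        if prediction != y:
            mistakes += 1
            weights = [w * beta if v == prediction else w
                       for w, v in zip(weights, votes)]
    return mistakes, weights

# One always-right expert among two always-wrong ones: the wrong experts
# are discounted until the good expert outweighs them.
mistakes, weights = weighted_majority(
    [lambda x: 1, lambda x: 0, lambda x: 0], [(i, 1) for i in range(5)])
```

Note that the expert that is never wrong is never discounted, which is what drives the O(log |A| + m) mistake bound.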
Statistical model validation
In statistics, model validation is the task of evaluating whether a chosen statistical model is appropriate or not. Oftentimes in statistical inference, inferences from models that appear to fit their data may be flukes, resulting in a misunderstanding by researchers of the actual relevance of their model. To combat this, model validation is used to test whether a statistical model can hold up to permutations in the data. Model validation is also called model criticism or model evaluation. This topic is not to be confused with the closely related task of model selection, the process of discriminating between multiple candidate models: model validation does not concern so much the conceptual design of models as it tests only the consistency between a chosen model and its stated outputs.

There are many ways to validate a model. Residual plots plot the difference between the actual data and the model's predictions: correlations in the residual plots may indicate a flaw in the model. Cross validation is a method of model validation that iteratively refits the model, each time leaving out just a small sample and comparing whether the samples left out are predicted by the model: there are many kinds of cross validation. Predictive simulation is used to compare simulated data to actual data. External validation involves fitting the model to new data. The Akaike information criterion estimates the quality of a model.

== Overview ==
Model validation comes in many forms and the specific method of model validation a researcher uses is often a constraint of their research design. To emphasize, what this means is that there is no one-size-fits-all method to validating a model. For example, if a researcher is operating with a very limited set of data, but data they have strong prior assumptions about, they may consider validating the fit of their model by using a Bayesian framework and testing the fit of their model using various prior distributions.
However, if a researcher has a lot of data and is testing multiple nested models, these conditions may lend themselves toward cross validation and possibly a leave-one-out test. These are two abstract examples, and any actual model validation will have to consider far more intricacies than described here, but these examples illustrate that model validation methods are always going to be circumstantial. In general, models can be validated using existing data or with new data; both methods are discussed more in the following subsections, and a note of caution is provided, too.

=== Validation with existing data ===
Validation based on existing data involves analyzing the goodness of fit of the model or analyzing whether the residuals seem to be random (i.e. residual diagnostics). This method involves analyzing the model's closeness to the data and trying to understand how well the model predicts its own data. One example of this method is in Figure 1, which shows a polynomial function fit to some data. We see that the polynomial function does not conform well to the data, which appears linear, and might invalidate this polynomial model.

Commonly, statistical models on existing data are validated using a validation set, which may also be referred to as a holdout set. A validation set is a set of data points that the user leaves out when fitting a statistical model. After the statistical model is fitted, the validation set is used as a measure of the model's error. If the model fits well on the initial data but has a large error on the validation set, this is a sign of overfitting.

=== Validation with new data ===
If new data becomes available, an existing model can be validated by assessing whether the new data is predicted by the old model. If the new data is not predicted by the old model, then the model might not be valid for the researcher's goals.
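Validation against new data can be sketched as follows; the linear model and the pure-Python least-squares fit are illustrative assumptions, not a prescribed method:

```python
def fit_line(points):
    """Ordinary least squares for y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def mse(model, points):
    """Mean squared error of the fitted line on a set of (x, y) points."""
    a, b = model
    return sum((y - (a * x + b)) ** 2 for x, y in points) / len(points)

old_data = [(x, 2 * x + 1) for x in range(10)]   # noiseless toy data
model = fit_line(old_data)

# New data from the same regime is predicted well; data from a shifted
# regime is not, suggesting the old model is invalid for the new setting.
new_same = [(x, 2 * x + 1) for x in range(10, 15)]
new_shifted = [(x, x ** 2) for x in range(10, 15)]
assert mse(model, new_same) < 1e-9
assert mse(model, new_shifted) > mse(model, old_data)
```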
With this in mind, a modern approach to validating a neural network is to test its performance on domain-shifted data. This ascertains whether the model has learned domain-invariant features.

=== A note of caution ===
A model can be validated only relative to some application area. A model that is valid for one application might be invalid for some other applications. As an example, consider the curve in Figure 1: if the application only used inputs from the interval [0, 2], then the curve might well be an acceptable model.

== Methods for validating ==
When doing a validation, there are three notable causes of potential difficulty, according to the Encyclopedia of Statistical Sciences. The three causes are these: lack of data; lack of control of the input variables; uncertainty about the underlying probability distributions and correlations. The usual methods for dealing with difficulties in validation include the following: checking the assumptions made in constructing the model; examining the available data and related model outputs; applying expert judgment. Note that expert judgment commonly requires expertise in the application area.

Expert judgment can sometimes be used to assess the validity of a prediction without obtaining real data: e.g. for the curve in Figure 1, an expert might well be able to assess that a substantial extrapolation will be invalid. Additionally, expert judgment can be used in Turing-type tests, where experts are presented with both real data and related model outputs and then asked to distinguish between the two. For some classes of statistical models, specialized methods of performing validation are available. As an example, if the statistical model was obtained via a regression, then specialized analyses for regression model validation exist and are generally employed.

=== Residual diagnostics ===
Residual diagnostics comprise analyses of the residuals to determine whether the residuals seem to be effectively random.
Such analyses typically require estimates of the probability distributions for the residuals. Estimates of the residuals' distributions can often be obtained by repeatedly running the model, i.e. by using repeated stochastic simulations (employing a pseudorandom number generator for random variables in the model). If the statistical model was obtained via a regression, then regression-residual diagnostics exist and may be used; such diagnostics have been well studied.

=== Cross validation ===
Cross validation is a method of sampling that involves leaving some parts of the data out of the fitting process and then seeing whether the data left out are close to or far from where the model predicts they would be. In practice, cross validation techniques fit the model many times, each time with a portion of the data, and compare each model fit to the portion it did not use. If the models very rarely describe the data that they were not trained on, then the model is probably wrong.

== See also ==

== References ==

== Further reading ==
Barlas, Y. (1996), "Formal aspects of model validity and validation in system dynamics", System Dynamics Review, 12 (3): 183–210, doi:10.1002/(SICI)1099-1727(199623)12:3<183::AID-SDR103>3.0.CO;2-4
Good, P. I.; Hardin, J. W. (2012), "Chapter 15: Validation", Common Errors in Statistics (Fourth ed.), John Wiley & Sons, pp. 277–285
Huber, P. J. (2002), "Chapter 3: Approximate models", in Huber-Carol, C.; Balakrishnan, N.; Nikulin, M. S.; Mesbah, M. (eds.), Goodness-of-Fit Tests and Model Validity, Springer, pp. 25–41

== External links ==
How can I tell if a model fits my data? —Handbook of Statistical Methods (NIST)
Hicks, Dan (July 14, 2017). "What are core statistical model validation techniques?". Stack Exchange.
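The refit-and-compare loop behind cross validation can be sketched as follows (leave-one-out, with a deliberately simple mean-only model as an illustrative assumption):

```python
def leave_one_out(points, fit, loss):
    """Refit the model once per data point, each time scoring only the
    point that was left out; return the average held-out loss."""
    errors = []
    for i in range(len(points)):
        rest = points[:i] + points[i + 1:]
        errors.append(loss(fit(rest), points[i]))
    return sum(errors) / len(points)

# Illustrative model: predict the mean of the training targets.
fit_mean = lambda pts: sum(y for _, y in pts) / len(pts)
sq_loss = lambda mean, point: (point[1] - mean) ** 2

data = [(x, 3.0) for x in range(5)]          # constant toy data
assert leave_one_out(data, fit_mean, sq_loss) == 0.0
```

A high average held-out loss, relative to the loss on the data used for fitting, indicates that the model rarely describes the data it was not trained on.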
Developmental robotics
Developmental robotics (DevRob), sometimes called epigenetic robotics, is a scientific field which aims at studying the developmental mechanisms, architectures and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines. As in human children, learning is expected to be cumulative and of progressively increasing complexity, and to result from self-exploration of the world in combination with social interaction. The typical methodological approach consists in starting from theories of human and animal development elaborated in fields such as developmental psychology, neuroscience, developmental and evolutionary biology, and linguistics, and then formalizing and implementing them in robots, sometimes exploring extensions or variants of them. Experimenting with those models in robots allows researchers to confront them with reality, and as a consequence developmental robotics also provides feedback and novel hypotheses on theories of human and animal development.

Developmental robotics is related to but differs from evolutionary robotics (ER). ER uses populations of robots that evolve over time, whereas DevRob is interested in how the organization of a single robot's control system develops through experience, over time. DevRob is also related to work done in the domains of robotics and artificial life.

== Background ==
Can a robot learn like a child? Can it learn a variety of new skills and new knowledge unspecified at design time and in a partially unknown and changing environment? How can it discover its body and its relationships with the physical and social environment? How can its cognitive capacities continuously develop without the intervention of an engineer once it is "out of the factory"? What can it learn through natural social interactions with humans? These are the questions at the center of developmental robotics.
Alan Turing, as well as a number of other pioneers of cybernetics, had already formulated those questions and the general approach in 1950, but it is only since the end of the 20th century that they began to be investigated systematically. Because the concept of adaptive intelligent machines is central to developmental robotics, it has relationships with fields such as artificial intelligence, machine learning, cognitive robotics or computational neuroscience. Yet, while it may reuse some of the techniques elaborated in these fields, it differs from them in many respects. It differs from classical artificial intelligence because it does not assume the capability of advanced symbolic reasoning and focuses on embodied and situated sensorimotor and social skills rather than on abstract symbolic problems. It differs from cognitive robotics because it focuses on the processes that allow the formation of cognitive capabilities rather than on those capabilities themselves. It differs from computational neuroscience because it focuses on functional modeling of integrated architectures of development and learning. More generally, developmental robotics is uniquely characterized by the following three features:

It targets task-independent architectures and learning mechanisms, i.e. the machine/robot has to be able to learn new tasks that are unknown by the engineer.
It emphasizes open-ended development and lifelong learning, i.e. the capacity of an organism to acquire continuously novel skills. This should not be understood as a capacity for learning "anything" or even "everything", but just that the set of skills that is acquired can be infinitely extended at least in some (not all) directions.
The complexity of acquired knowledge and skills shall increase (and the increase be controlled) progressively.
Developmental robotics emerged at the crossroads of several research communities, including embodied artificial intelligence, enactive and dynamical-systems cognitive science, and connectionism. Starting from the essential idea that learning and development happen as the self-organized result of the dynamical interactions among brains, bodies and their physical and social environment, and trying to understand how this self-organization can be harnessed to provide task-independent lifelong learning of skills of increasing complexity, developmental robotics strongly interacts with fields such as developmental psychology, developmental and cognitive neuroscience, developmental biology (embryology), evolutionary biology, and cognitive linguistics. As many of the theories coming from these sciences are verbal and/or descriptive, this implies a crucial formalization and computational modeling activity in developmental robotics. These computational models are then not only used as ways to explore how to build more versatile and adaptive machines but also as a way to evaluate their coherence and possibly explore alternative explanations for understanding biological development.

== Research directions ==

=== Skill domains ===
Due to the general approach and methodology, developmental robotics projects typically focus on having robots develop the same types of skills as human infants. A first important category being investigated is the acquisition of sensorimotor skills. These include the discovery of one's own body, including its structure and dynamics, such as hand-eye coordination, locomotion, and interaction with objects as well as tool use, with a particular focus on the discovery and learning of affordances.
A second category of skills targeted by developmental robots are social and linguistic skills: the acquisition of simple social behavioural games such as turn-taking, coordinated interaction, lexicons, syntax and grammar, and the grounding of these linguistic skills in sensorimotor skills (sometimes referred to as symbol grounding). In parallel, the acquisition of associated cognitive skills is being investigated, such as the emergence of the self/non-self distinction, the development of attentional capabilities, of categorization systems and higher-level representations of affordances or social constructs, and of the emergence of values, empathy, or theories of mind.

=== Mechanisms and constraints ===
The sensorimotor and social spaces in which humans and robots live are so large and complex that only a small part of potentially learnable skills can actually be explored and learnt within a lifetime. Thus, mechanisms and constraints are necessary to guide developmental organisms in their development and control the growth of complexity. There are several important families of these guiding mechanisms and constraints which are studied in developmental robotics, all inspired by human development:

Motivational systems, generating internal reward signals that drive exploration and learning, which can be of two main types: extrinsic motivations push robots/organisms to maintain basic specific internal properties such as food and water level, physical integrity, or light (e.g. in phototropic systems); intrinsic motivations push robots to search for novelty, challenge, compression or learning progress per se, thus generating what is sometimes called curiosity-driven learning and exploration, or alternatively active learning and exploration.
Social guidance: as humans learn a lot by interacting with their peers, developmental robotics investigates mechanisms that can allow robots to participate in human-like social interaction.
By perceiving and interpreting social cues, this may allow robots both to learn from humans (through diverse means such as imitation, emulation, stimulus enhancement, demonstration, etc.) and to trigger natural human pedagogy. Thus, social acceptance of developmental robots is also investigated.
Statistical inference biases and cumulative knowledge/skill reuse: biases characterizing both representations/encodings and inference mechanisms can typically allow considerable improvement of the efficiency of learning and are thus studied. Related to this, mechanisms that allow inferring new knowledge and acquiring new skills by reusing previously learnt structures are also an essential field of study.
The properties of embodiment, including geometry, materials, or innate motor primitives/synergies often encoded as dynamical systems, can considerably simplify the acquisition of sensorimotor or social skills; this is sometimes referred to as morphological computation. The interaction of these constraints with other constraints is an important axis of investigation.
Maturational constraints: in human infants, both the body and the neural system grow progressively, rather than being full-fledged already at birth. This implies, for example, that new degrees of freedom, as well as increases in the volume and resolution of available sensorimotor signals, may appear as learning and development unfold. Transposing these mechanisms to developmental robots, and understanding how they may hinder or, on the contrary, ease the acquisition of novel complex skills, is a central question in developmental robotics.

=== From bio-mimetic development to functional inspiration ===
While most developmental robotics projects interact closely with theories of animal and human development, the degrees of similarity and inspiration between identified biological mechanisms and their counterparts in robots, as well as the abstraction levels of modeling, may vary a lot.
While some projects aim at modeling precisely both the function and the biological implementation (neural or morphological models), such as in Neurorobotics, other projects focus only on functional modeling of the mechanisms and constraints described above, and might for example reuse in their architectures techniques coming from applied mathematics or engineering fields.

== Open questions ==
As developmental robotics is a relatively new research field and at the same time very ambitious, many fundamental open challenges remain to be solved. First of all, existing techniques are far from allowing real-world high-dimensional robots to learn an open-ended repertoire of increasingly complex skills over a lifetime. High-dimensional continuous sensorimotor spaces constitute a significant obstacle to be solved. Lifelong cumulative learning is another one. Actually, no experiments lasting more than a few days have been set up so far, which contrasts severely with the time needed by human infants to learn basic sensorimotor skills while equipped with brains and morphologies that are tremendously more powerful than existing computational mechanisms. Among the strategies to explore to progress towards this target, the interaction between the mechanisms and constraints described in the previous section shall be investigated more systematically. Indeed, they have so far mainly been studied in isolation. For example, the interaction of intrinsically motivated learning and socially guided learning, possibly constrained by maturation, is an essential issue to be investigated.

Another important challenge is to allow robots to perceive, interpret and leverage the diversity of multimodal social cues provided by non-engineer humans during human-robot interaction. These capacities are so far mostly too limited to allow efficient general-purpose teaching by humans.
A fundamental scientific issue to be understood and resolved, which applies equally to human development, is how compositionality, functional hierarchies, primitives, and modularity, at all levels of sensorimotor and social structures, can be formed and leveraged during development. This is deeply linked with the problem of the emergence of symbols, sometimes referred to as the "symbol grounding problem" when it comes to language acquisition. Actually, the very existence and need for symbols in the brain are actively questioned, and alternative concepts, still allowing for compositionality and functional hierarchies, are being investigated. During biological epigenesis, morphology is not fixed but rather develops in constant interaction with the development of sensorimotor and social skills. The development of morphology poses obvious practical problems with robots, but it may be a crucial mechanism that should be further explored, at least in simulation, such as in morphogenetic robotics. Another open problem is the understanding of the relation between the key phenomena investigated by developmental robotics (e.g., hierarchical and modular sensorimotor systems, intrinsic/extrinsic/social motivations, and open-ended learning) and the underlying brain mechanisms. Similarly, in biology, developmental mechanisms (operating at the ontogenetic time scale) interact closely with evolutionary mechanisms (operating at the phylogenetic time scale), as shown in the flourishing "evo-devo" scientific literature. However, the interaction of those mechanisms in artificial organisms, and developmental robots in particular, is still vastly understudied. The interaction of evolutionary mechanisms, unfolding morphologies and developing sensorimotor and social skills will thus be a highly stimulating topic for the future of developmental robotics.
== Main journals == IEEE Transactions on Cognitive and Developmental Systems (previously known as IEEE Transactions on Autonomous Mental Development): https://cis.ieee.org/publications/t-cognitive-and-developmental-systems == Main conferences == International Conference on Development and Learning: http://www.cogsci.ucsd.edu/~triesch/icdl/ Epigenetic Robotics: https://www.lucs.lu.se/epirob/ ICDL-EpiRob: http://www.icdl-epirob.org/ (the two above merged in 2011) Developmental Robotics: http://cs.brynmawr.edu/DevRob05/ The NSF/DARPA funded Workshop on Development and Learning was held April 5–7, 2000 at Michigan State University. It was the first international meeting devoted to computational understanding of mental development by robots and animals. The term "by" was used since the agents are active during development. == See also == Evolutionary developmental robotics Robot learning == References == == External links == === Technical committees === IEEE Technical Committee on Cognitive and Developmental Systems (CDSTC), previously known as IEEE Technical Committee on Autonomous Mental Development IEEE Technical Committee on Cognitive Robotics, https://www.ieee-ras.org/cognitive-robotics IEEE Technical Committee on Robot Learning, https://www.ieee-ras.org/robot-learning/ === Academic institutions and researchers in the field === Lund University Cognitive Science - Robotics Group Cognitive Development Lab, University of Indiana, US Michigan State University – Embodied Intelligence Lab Inria and Ensta ParisTech FLOWERS team, France: Exploration, interaction and learning in developmental robotics University of Tokyo—Intelligent Systems and Informatics Lab Cognitive Robotics Lab of Juergen Schmidhuber at IDSIA and Technical University of Munich LIRA-Lab, University of Genova, Italy CITEC at University of Bielefeld, Germany Vision Lab, Psychology Department, Southern Illinois University Carbondale FIAS (J. Triesch lab.) LPP, CNRS (K. Oregan lab.)
AI Lab, SoftBank Robotics Europe, France Department of Computer Science, University of Aberdeen Asada Laboratory, Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, Japan The University of Texas at Austin, UTCS Intelligent Robotics Lab Bryn Mawr College's Developmental Robotics Project: research projects by faculty and students at Swarthmore and Bryn Mawr Colleges, Philadelphia, PA, USA Jean Project: Information Sciences Institute of the University of Southern California Cognitive Robotics (including Hide and Seek) at the Naval Research Laboratory The Laboratory for Perceptual Robotics, University of Massachusetts Amherst, USA Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Science and Technologies, National Research Council, Rome, Italy Neurocybernetic team, ETIS Lab., ENSEA – University of Cergy-Pontoise – CNRS, France Machine Perception and Cognitive Robotics Lab, Florida Atlantic University, Boca Raton, Florida Adaptive Systems Group, Department of Computer Science, Humboldt University of Berlin, Germany Cognitive Developmental Robotics Lab (Nagai Lab), The University of Tokyo, Japan === Related large-scale projects === RobotDoC Project (funded by European Commission) Italk Project (funded by European Commission) IM-CLeVeR Project (funded by European Commission) ERC Grant EXPLORERS Project (funded by European Research Council) RobotCub Project (funded by European Commission) Feelix Growing Project (funded by European Commission) === Courses === The first undergraduate courses in DevRob were offered at Bryn Mawr College and Swarthmore College in the Spring of 2003 by Douglas Blank and Lisa Meeden, respectively. The first graduate course in DevRob was offered at Iowa State University by Alexander Stoytchev in the Fall of 2005.
Category:Deep learning software
Direct members of this category should be general-purpose software for training or otherwise interacting with deep learning models. Specific deep learning models, or consumer software applications powered by deep learning, should be placed in Category:Deep learning software applications.
Category:Applied data mining
Notable applications and uses of data mining.
Exploration–exploitation dilemma
The exploration–exploitation dilemma, also known as the explore–exploit tradeoff, is a fundamental concept in decision-making that arises in many domains. It captures the balancing act between two opposing strategies. Exploitation involves choosing the best option based on current knowledge of the system (which may be incomplete or misleading), while exploration involves trying out new options that may lead to better outcomes in the future at the expense of an exploitation opportunity. Finding the optimal balance between these two strategies is a crucial challenge in many decision-making problems whose goal is to maximize long-term benefits. == Application in machine learning == In the context of machine learning, the exploration–exploitation tradeoff is fundamental in reinforcement learning (RL), a type of machine learning that involves training agents to make decisions based on feedback from the environment. Crucially, this feedback may be incomplete or delayed. The agent must decide whether to exploit the current best-known policy or explore new policies to improve its performance. === Multi-armed bandit methods === The multi-armed bandit (MAB) problem is a classic example of the tradeoff, and many methods have been developed for it, such as epsilon-greedy, Thompson sampling, and the upper confidence bound (UCB). See the page on MAB for details. In more complex RL situations than the MAB problem, the agent can treat each choice as a MAB, where the payoff is the expected future reward. For example, if the agent follows an epsilon-greedy method, it will usually "pull the best lever" by picking the action with the best predicted expected reward (exploitation), but with probability epsilon it will pick a random action (exploration). Monte Carlo tree search, for example, uses a variant of the UCB method. === Exploration problems === There are several issues that make exploration difficult. Sparse reward.
If rewards occur only once in a long while, then the agent might not persist in exploring. Furthermore, if the space of actions is large, then a sparse reward means the agent is not guided by the reward towards a good direction for deeper exploration. A standard example is Montezuma's Revenge. Deceptive reward. If some early actions give an immediate small reward, but other actions give a later large reward, then the agent might be lured away from exploring the other actions. Noisy TV problem. If certain observations are irreducibly noisy (such as a television showing random images), then the agent might be trapped exploring those observations (watching the television). === Exploration reward === This section is based on a survey of exploration methods in reinforcement learning. Exploration reward (also called exploration bonus) methods convert the exploration–exploitation dilemma into a balance of exploitations. That is, instead of trying to get the agent to balance exploration and exploitation, exploration is simply treated as another form of exploitation, and the agent simply attempts to maximize the sum of rewards from exploration and exploitation. The exploration reward can be treated as a form of intrinsic reward. We write these as r t i , r t e {\displaystyle r_{t}^{i},r_{t}^{e}} , meaning the intrinsic and extrinsic rewards at time step t {\displaystyle t} . However, the exploration reward differs from the exploitation reward in two regards: The reward of exploitation is not freely chosen but given by the environment, whereas the reward of exploration may be designed freely. Indeed, there are many different ways to design r t i {\displaystyle r_{t}^{i}} , described below. The reward of exploitation is usually stationary (i.e. the same action in the same state gives the same reward), but the reward of exploration is non-stationary (i.e. the same action in the same state should give less and less reward).
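As an illustration, the decaying, non-stationary character of an exploration reward can be sketched with a tabular count-based bonus. The bonus form beta / sqrt(N(s)) used here is one simple common choice, and all names are illustrative rather than taken from the survey:

```python
from collections import defaultdict
import math

class CountBasedBonus:
    """Tabular count-based exploration bonus.

    N(s) counts visits to state s; the intrinsic reward beta / sqrt(N(s))
    is non-stationary by design: revisiting the same state yields less and
    less exploration reward, while the extrinsic reward is unchanged.
    """
    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = defaultdict(int)

    def intrinsic_reward(self, state):
        self.counts[state] += 1
        return self.beta / math.sqrt(self.counts[state])

bonus = CountBasedBonus(beta=1.0)
first = bonus.intrinsic_reward("s0")                            # 1.0 on the first visit
fourth = [bonus.intrinsic_reward("s0") for _ in range(3)][-1]   # 0.5 on the fourth visit
```

The agent then simply maximizes the sum r_e + r_i at each step rather than switching between an exploring and an exploiting regime.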
Count-based exploration uses N n ( s ) {\displaystyle N_{n}(s)} , the number of visits to a state s {\displaystyle s} during the time-steps 1 : n {\displaystyle 1:n} , to calculate the exploration reward. This is only possible in small and discrete state spaces. Density-based exploration extends count-based exploration by using a density model ρ n ( s ) {\displaystyle \rho _{n}(s)} . The idea is that, if a state has been visited, then nearby states are also partly visited. In maximum entropy exploration, the entropy of the agent's policy π {\displaystyle \pi } is included as a term in the intrinsic reward. That is, r t i = − ∑ a π ( a | s t ) ln ⁡ π ( a | s t ) + ⋯ {\displaystyle r_{t}^{i}=-\sum _{a}\pi (a|s_{t})\ln \pi (a|s_{t})+\cdots } . === Prediction-based === This section is based on a survey of exploration methods in reinforcement learning. The forward dynamics model is a function for predicting the next state based on the current state and the current action: f : ( s t , a t ) ↦ s t + 1 {\displaystyle f:(s_{t},a_{t})\mapsto s_{t+1}} . The forward dynamics model is trained as the agent plays. The model becomes better at predicting state transitions for state-action pairs that have been performed many times. A forward dynamics model can define an exploration reward by r t i = ‖ f ( s t , a t ) − s t + 1 ‖ 2 2 {\displaystyle r_{t}^{i}=\|f(s_{t},a_{t})-s_{t+1}\|_{2}^{2}} . That is, the reward is the squared error of the prediction compared to reality. This rewards the agent for performing state-action pairs that have not been performed many times. This is, however, susceptible to the noisy TV problem. The dynamics model can also be run in latent space. That is, r t i = ‖ f ( ϕ ( s t ) , a t ) − ϕ ( s t + 1 ) ‖ 2 2 {\displaystyle r_{t}^{i}=\|f(\phi (s_{t}),a_{t})-\phi (s_{t+1})\|_{2}^{2}} for some featurizer ϕ {\displaystyle \phi } . The featurizer can be the identity function (i.e. ϕ ( x ) = x {\displaystyle \phi (x)=x} ), randomly generated, the encoder half of a variational autoencoder, etc. A good featurizer improves forward dynamics exploration.
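A minimal sketch of the prediction-error bonus, with a one-dimensional state and a linear forward model trained by gradient descent. The linear model, learning rate, and toy dynamics are illustrative simplifications; a real agent would use a neural network:

```python
class ForwardDynamicsBonus:
    """Exploration bonus from an online forward dynamics model.

    A linear model s' ~ w*s + u*a is fitted by gradient descent as
    transitions arrive; the intrinsic reward is the squared prediction
    error, so often-seen transitions earn a shrinking bonus.
    """
    def __init__(self, lr=0.1):
        self.w = 0.0  # coefficient on the state
        self.u = 0.0  # coefficient on the action
        self.lr = lr

    def bonus_and_update(self, s, a, s_next):
        pred = self.w * s + self.u * a
        err = pred - s_next
        # one gradient step on the squared error
        self.w -= self.lr * 2 * err * s
        self.u -= self.lr * 2 * err * a
        return err * err  # intrinsic reward r_t^i

# deterministic toy dynamics: s' = 0.9*s + 0.5*a, with the same
# transition repeated; the bonus decays as the model learns it
model = ForwardDynamicsBonus()
bonuses = [model.bonus_and_update(1.0, 1.0, 0.9 + 0.5) for _ in range(100)]
```

Note how the noisy TV problem would manifest here: if s_next were irreducibly random, the squared error, and hence the bonus, would never decay no matter how often the transition is visited.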
The Intrinsic Curiosity Module (ICM) method trains simultaneously a forward dynamics model and a featurizer. The featurizer is trained by an inverse dynamics model, which is a function for predicting the current action based on the features of the current and the next state: g : ( ϕ ( s t ) , ϕ ( s t + 1 ) ) ↦ a t {\displaystyle g:(\phi (s_{t}),\phi (s_{t+1}))\mapsto a_{t}} . By optimizing the inverse dynamics, both the inverse dynamics model and the featurizer are improved. The improved featurizer then improves the forward dynamics model, which improves the exploration of the agent. The Random Network Distillation (RND) method avoids learning a dynamics model altogether by using teacher–student distillation. Instead of a forward dynamics model, it has two models f , f ′ {\displaystyle f,f'} . The teacher model f ′ {\displaystyle f'} is fixed, and the student model f {\displaystyle f} is trained to minimize ‖ f ( s ) − f ′ ( s ) ‖ 2 2 {\displaystyle \|f(s)-f'(s)\|_{2}^{2}} on states s {\displaystyle s} . As a state is visited more and more, the student network becomes better at predicting the teacher. Meanwhile, the prediction error is also an exploration reward for the agent, so the agent learns to perform actions that result in a higher prediction error. Thus, we have a student network attempting to minimize the prediction error, while the agent attempts to maximize it, resulting in exploration. The states are normalized by subtracting a running mean and dividing by a running standard deviation, which is necessary since the teacher model is frozen. The rewards are normalized by dividing by a running standard deviation. Exploration by disagreement trains an ensemble of forward dynamics models, each on a random subset of all ( s t , a t , s t + 1 ) {\displaystyle (s_{t},a_{t},s_{t+1})} tuples. The exploration reward is the variance of the models' predictions. === Noise === For neural network–based agents, the NoisyNet method replaces some of the agent's neural network modules with noisy versions.
That is, some network parameters are random variables from a probability distribution. The parameters of the distribution are themselves learnable. For example, in a linear layer y = W x + b {\displaystyle y=Wx+b} , both W , b {\displaystyle W,b} are sampled from Gaussian distributions N ( μ W , Σ W ) , N ( μ b , Σ b ) {\displaystyle {\mathcal {N}}(\mu _{W},\Sigma _{W}),{\mathcal {N}}(\mu _{b},\Sigma _{b})} at every step, and the parameters μ W , Σ W , μ b , Σ b {\displaystyle \mu _{W},\Sigma _{W},\mu _{b},\Sigma _{b}} are learned via the reparameterization trick. == References == Amin, Susan; Gomrokchi, Maziar; Satija, Harsh; van Hoof, Herke; Precup, Doina (September 1, 2021). "A Survey of Exploration Methods in Reinforcement Learning". arXiv:2109.00157 [cs.LG].
Local time (mathematics)
In the mathematical theory of stochastic processes, local time is a stochastic process associated with semimartingale processes such as Brownian motion, that characterizes the amount of time a particle has spent at a given level. Local time appears in various stochastic integration formulas, such as Tanaka's formula, if the integrand is not sufficiently smooth. It is also studied in statistical mechanics in the context of random fields. == Formal definition == For a continuous real-valued semimartingale ( B s ) s ≥ 0 {\displaystyle (B_{s})_{s\geq 0}} , the local time of B {\displaystyle B} at the point x {\displaystyle x} is the stochastic process which is informally defined by L x ( t ) = ∫ 0 t δ ( x − B s ) d [ B ] s , {\displaystyle L^{x}(t)=\int _{0}^{t}\delta (x-B_{s})\,d[B]_{s},} where δ {\displaystyle \delta } is the Dirac delta function and [ B ] {\displaystyle [B]} is the quadratic variation. It is a notion invented by Paul Lévy. The basic idea is that L x ( t ) {\displaystyle L^{x}(t)} is an (appropriately rescaled and time-parametrized) measure of how much time B s {\displaystyle B_{s}} has spent at x {\displaystyle x} up to time t {\displaystyle t} . More rigorously, it may be written as the almost sure limit L x ( t ) = lim ε ↓ 0 1 2 ε ∫ 0 t 1 { x − ε < B s < x + ε } d [ B ] s , {\displaystyle L^{x}(t)=\lim _{\varepsilon \downarrow 0}{\frac {1}{2\varepsilon }}\int _{0}^{t}1_{\{x-\varepsilon <B_{s}<x+\varepsilon \}}\,d[B]_{s},} which may be shown to always exist. Note that in the special case of Brownian motion (or more generally a real-valued diffusion of the form d B = b ( t , B ) d t + d W {\displaystyle dB=b(t,B)\,dt+dW} where W {\displaystyle W} is a Brownian motion), the term d [ B ] s {\displaystyle d[B]_{s}} simply reduces to d s {\displaystyle ds} , which explains why it is called the local time of B {\displaystyle B} at x {\displaystyle x} . 
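The ε-window limit above can be checked numerically in the Brownian case, where d[B]_s = ds. The sketch below is a rough Monte Carlo discretization (the step count, ε, and path count are arbitrary choices); it estimates L^0(1) for many simulated paths and compares the mean against E[L^0(1)] = E|B_1| = sqrt(2/π), which follows from taking expectations in Tanaka's formula:

```python
import math
import random

def local_time_at_zero(t=1.0, steps=1000, eps=0.05, rng=random):
    """One-path estimate of the Brownian local time L^0(t) via the
    epsilon-window formula (1/(2*eps)) * |{s <= t : |B_s| < eps}|."""
    dt = t / steps
    b = 0.0
    occupation = 0.0
    for _ in range(steps):
        if abs(b) < eps:
            occupation += dt            # time spent in the window (-eps, eps)
        b += rng.gauss(0.0, math.sqrt(dt))  # Euler step of Brownian motion
    return occupation / (2 * eps)

# average over many independent paths and compare with E[L^0(1)] = sqrt(2/pi)
rng = random.Random(42)
paths = 1000
estimate = sum(local_time_at_zero(rng=rng) for _ in range(paths)) / paths
expected = math.sqrt(2 / math.pi)  # ~0.798
```

Both the finite ε and the time discretization introduce a small bias, so the match is only approximate; shrinking ε together with the step size (and increasing the number of paths) tightens it.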
For a discrete state-space process ( X s ) s ≥ 0 {\displaystyle (X_{s})_{s\geq 0}} , the local time can be expressed more simply as L x ( t ) = ∫ 0 t 1 { x } ( X s ) d s . {\displaystyle L^{x}(t)=\int _{0}^{t}1_{\{x\}}(X_{s})\,ds.} == Tanaka's formula == Tanaka's formula also provides a definition of local time for an arbitrary continuous semimartingale ( X s ) s ≥ 0 {\displaystyle (X_{s})_{s\geq 0}} on R : {\displaystyle \mathbb {R} :} L x ( t ) = | X t − x | − | X 0 − x | − ∫ 0 t ( 1 ( 0 , ∞ ) ( X s − x ) − 1 ( − ∞ , 0 ] ( X s − x ) ) d X s , t ≥ 0. {\displaystyle L^{x}(t)=|X_{t}-x|-|X_{0}-x|-\int _{0}^{t}\left(1_{(0,\infty )}(X_{s}-x)-1_{(-\infty ,0]}(X_{s}-x)\right)\,dX_{s},\qquad t\geq 0.} A more general form was proven independently by Meyer and Wang; the formula extends Itô's lemma for twice differentiable functions to a more general class of functions. If F : R → R {\displaystyle F:\mathbb {R} \rightarrow \mathbb {R} } is absolutely continuous with derivative F ′ , {\displaystyle F',} which is of bounded variation, then F ( X t ) = F ( X 0 ) + ∫ 0 t F − ′ ( X s ) d X s + 1 2 ∫ − ∞ ∞ L x ( t ) d F − ′ ( x ) , {\displaystyle F(X_{t})=F(X_{0})+\int _{0}^{t}F'_{-}(X_{s})\,dX_{s}+{\frac {1}{2}}\int _{-\infty }^{\infty }L^{x}(t)\,dF'_{-}(x),} where F − ′ {\displaystyle F'_{-}} is the left derivative. If X {\displaystyle X} is a Brownian motion, then for any α ∈ ( 0 , 1 / 2 ) {\displaystyle \alpha \in (0,1/2)} the field of local times L = ( L x ( t ) ) x ∈ R , t ≥ 0 {\displaystyle L=(L^{x}(t))_{x\in \mathbb {R} ,t\geq 0}} has a modification which is a.s. Hölder continuous in x {\displaystyle x} with exponent α {\displaystyle \alpha } , uniformly for bounded x {\displaystyle x} and t {\displaystyle t} . In general, L {\displaystyle L} has a modification that is a.s. continuous in t {\displaystyle t} and càdlàg in x {\displaystyle x} . 
Tanaka's formula provides the explicit Doob–Meyer decomposition for the one-dimensional reflecting Brownian motion, ( | B s | ) s ≥ 0 {\displaystyle (|B_{s}|)_{s\geq 0}} . == Ray–Knight theorems == The field of local times L t = ( L t x ) x ∈ E {\displaystyle L_{t}=(L_{t}^{x})_{x\in E}} associated to a stochastic process on a space E {\displaystyle E} is a well studied topic in the area of random fields. Ray–Knight type theorems relate the field Lt to an associated Gaussian process. In general Ray–Knight type theorems of the first kind consider the field Lt at a hitting time of the underlying process, whilst theorems of the second kind are in terms of a stopping time at which the field of local times first exceeds a given value. === First Ray–Knight theorem === Let (Bt)t ≥ 0 be a one-dimensional Brownian motion started from B0 = a > 0, and (Wt)t≥0 be a standard two-dimensional Brownian motion started from W0 = 0 ∈ R2. Define the stopping time at which B first hits the origin, T = inf { t ≥ 0 : B t = 0 } {\displaystyle T=\inf\{t\geq 0\colon B_{t}=0\}} . Ray and Knight (independently) showed that ( L T x ) x ∈ [ 0 , a ] = d ( | W x | 2 ) x ∈ [ 0 , a ] {\displaystyle \left(L_{T}^{x}\right)_{x\in [0,a]}\ {\overset {d}{=}}\ \left(|W_{x}|^{2}\right)_{x\in [0,a]}} where (Lt)t ≥ 0 is the field of local times of (Bt)t ≥ 0, and equality is in distribution on C[0, a]. The process |Wx|2 is known as the squared Bessel process. === Second Ray–Knight theorem === Let (Bt)t ≥ 0 be a standard one-dimensional Brownian motion B0 = 0 ∈ R, and let (Lt)t ≥ 0 be the associated field of local times. Let Ta be the first time at which the local time at zero exceeds a > 0 T a = inf { t ≥ 0 : L t 0 > a } . {\displaystyle T_{a}=\inf\{t\geq 0\colon L_{t}^{0}>a\}.} Let (Wt)t ≥ 0 be an independent one-dimensional Brownian motion started from W0 = 0, then ( L T a x + W x 2 ) x ≥ 0 = d ( ( W x + √ a ) 2 ) x ≥ 0 . {\displaystyle \left(L_{T_{a}}^{x}+W_{x}^{2}\right)_{x\geq 0}\ {\overset {d}{=}}\ \left(\left(W_{x}+{\sqrt {a}}\right)^{2}\right)_{x\geq 0}.} Equivalently, the process ( L T a x ) x ≥ 0 {\displaystyle (L_{T_{a}}^{x})_{x\geq 0}} (which is a process in the spatial variable x {\displaystyle x} ) is equal in distribution to a squared Bessel process of dimension 0 started at a {\displaystyle a} , and as such is Markovian.
=== Generalized Ray–Knight theorems === Results of Ray–Knight type for more general stochastic processes have been intensively studied, and analogous statements of both the first and second Ray–Knight theorems are known for strongly symmetric Markov processes. == See also == Tanaka's formula Brownian motion Random field == Notes == == References == K. L. Chung and R. J. Williams, Introduction to Stochastic Integration, 2nd edition, 1990, Birkhäuser, ISBN 978-0-8176-3386-8. M. Marcus and J. Rosen, Markov Processes, Gaussian Processes, and Local Times, 1st edition, 2006, Cambridge University Press, ISBN 978-0-521-86300-1. P. Mörters and Y. Peres, Brownian Motion, 1st edition, 2010, Cambridge University Press, ISBN 978-0-521-76018-8.
Regenerative process
In applied probability, a regenerative process is a stochastic process with the property that certain portions of the process can be treated as being statistically independent of each other. This property can be used in the derivation of theoretical properties of such processes. == History == Regenerative processes were first defined by Walter L. Smith in Proceedings of the Royal Society A in 1955. == Definition == A regenerative process is a stochastic process with time points at which, from a probabilistic point of view, the process restarts itself. These time points may themselves be determined by the evolution of the process. That is to say, the process {X(t), t ≥ 0} is a regenerative process if there exist time points 0 ≤ T0 < T1 < T2 < ... such that, for k ≥ 1, the post-Tk process {X(Tk + t) : t ≥ 0} has the same distribution as the post-T0 process {X(T0 + t) : t ≥ 0}, and is independent of the pre-Tk process {X(t) : 0 ≤ t < Tk}. Intuitively this means a regenerative process can be split into i.i.d. cycles. When T0 = 0, X(t) is called a nondelayed regenerative process. Otherwise, the process is called a delayed regenerative process. == Examples == Renewal processes are regenerative processes, with T1 being the first renewal. Alternating renewal processes, where a system alternates between an 'on' state and an 'off' state. A recurrent Markov chain is a regenerative process, with T1 being the time of first recurrence. This includes Harris chains. Reflected Brownian motion is a regenerative process (where one measures the time it takes particles to leave and come back). == Properties == By the renewal reward theorem, with probability 1, lim t → ∞ 1 t ∫ 0 t X ( s ) d s = E [ R ] E [ τ ] . {\displaystyle \lim _{t\to \infty }{\frac {1}{t}}\int _{0}^{t}X(s)ds={\frac {\mathbb {E} [R]}{\mathbb {E} [\tau ]}}.} where τ {\displaystyle \tau } is the length of the first cycle and R = ∫ 0 τ X ( s ) d s {\displaystyle R=\int _{0}^{\tau }X(s)ds} is the value accumulated over the first cycle.
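For an alternating renewal process with X(t) = 1 in the 'on' state and 0 in the 'off' state, the renewal reward theorem gives a long-run time average of E[on] / (E[on] + E[off]). A quick simulation sketch (the exponential durations and their means are arbitrary illustrative choices):

```python
import random

def long_run_average(n_cycles=100_000, mean_on=1.0, mean_off=0.5, seed=0):
    """Check the renewal reward theorem on an alternating renewal process.

    Each i.i.d. cycle is an 'on' period followed by an 'off' period;
    reward accrues at rate 1 while on, so R = on-duration and
    tau = on + off. The time average of X converges to E[R]/E[tau].
    """
    rng = random.Random(seed)
    total_time = 0.0
    total_reward = 0.0
    for _ in range(n_cycles):
        on = rng.expovariate(1.0 / mean_on)    # exponential 'on' duration
        off = rng.expovariate(1.0 / mean_off)  # exponential 'off' duration
        total_reward += on                     # R = integral of X over the cycle
        total_time += on + off
    return total_reward / total_time

avg = long_run_average()
# theoretical limit: 1.0 / (1.0 + 0.5) = 2/3
```

With 100,000 cycles the simulated average sits very close to the theoretical limit of 2/3.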
A measurable function of a regenerative process is itself a regenerative process, with the same regeneration times. == References ==
Algorithm selection
Algorithm selection (sometimes also called per-instance algorithm selection or offline algorithm selection) is a meta-algorithmic technique to choose an algorithm from a portfolio on an instance-by-instance basis. It is motivated by the observation that on many practical problems, different algorithms have different performance characteristics. That is, while one algorithm performs well in some scenarios, it performs poorly in others and vice versa for another algorithm. If we can identify when to use which algorithm, we can optimize for each scenario and improve overall performance. This is what algorithm selection aims to do. The only prerequisite for applying algorithm selection techniques is that there exists (or that there can be constructed) a set of complementary algorithms. == Definition == Given a portfolio P {\displaystyle {\mathcal {P}}} of algorithms A ∈ P {\displaystyle {\mathcal {A}}\in {\mathcal {P}}} , a set of instances i ∈ I {\displaystyle i\in {\mathcal {I}}} and a cost metric m : P × I → R {\displaystyle m:{\mathcal {P}}\times {\mathcal {I}}\to \mathbb {R} } , the algorithm selection problem consists of finding a mapping s : I → P {\displaystyle s:{\mathcal {I}}\to {\mathcal {P}}} from instances I {\displaystyle {\mathcal {I}}} to algorithms P {\displaystyle {\mathcal {P}}} such that the cost ∑ i ∈ I m ( s ( i ) , i ) {\displaystyle \sum _{i\in {\mathcal {I}}}m(s(i),i)} across all instances is optimized. == Examples == === Boolean satisfiability problem (and other hard combinatorial problems) === A well-known application of algorithm selection is the Boolean satisfiability problem. Here, the portfolio of algorithms is a set of (complementary) SAT solvers, the instances are Boolean formulas, the cost metric is for example average runtime or number of unsolved instances. So, the goal is to select a well-performing SAT solver for each individual instance. 
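The gap between running a single algorithm on every instance and selecting per instance can be made concrete on a toy cost matrix. The solver names and runtimes below are invented for illustration:

```python
def total_cost(cost, selector, n_instances):
    """Summed cost metric m(s(i), i) of a per-instance selector s."""
    return sum(cost[selector(i)][i] for i in range(n_instances))

# toy cost matrix: m(algorithm, instance), e.g. runtimes in seconds,
# for two complementary solvers on four instances
cost = {
    "solver_A": [1.0, 9.0, 1.0, 8.0],
    "solver_B": [7.0, 2.0, 6.0, 1.0],
}
n = 4

# single best solver: the one algorithm minimizing the summed cost
single_best = min(cost, key=lambda algo: sum(cost[algo]))

# oracle ("virtual best") selector: perfect per-instance choice
def oracle(i):
    return min(cost, key=lambda algo: cost[algo][i])

sbs_cost = total_cost(cost, lambda i: single_best, n)  # 16.0
oracle_cost = total_cost(cost, oracle, n)              # 5.0
```

Any learned selection mapping s falls between these two bounds: it can do no better than the oracle and should do no worse than the single best solver if it is to be worthwhile.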
In the same way, algorithm selection can be applied to many other N P {\displaystyle {\mathcal {NP}}} -hard problems (such as mixed integer programming, CSP, AI planning, TSP, MAXSAT, QBF and answer set programming). Competition-winning systems in SAT include SATzilla, 3S and CSHC. === Machine learning === In machine learning, algorithm selection is better known as meta-learning. The portfolio of algorithms consists of machine learning algorithms (e.g., Random Forest, SVM, DNN), the instances are data sets, and the cost metric is for example the error rate. So, the goal is to predict which machine learning algorithm will have a small error on each data set. == Instance features == The algorithm selection problem is mainly solved with machine learning techniques. By representing the problem instances by numerical features f {\displaystyle f} , algorithm selection can be seen as a multi-class classification problem by learning a mapping f i ↦ A {\displaystyle f_{i}\mapsto {\mathcal {A}}} for a given instance i {\displaystyle i} . Instance features are numerical representations of instances. For example, we can count the number of variables, clauses, and the average clause length for Boolean formulas, or the number of samples, features, and the class balance for ML data sets to get an impression of their characteristics. === Static vs. probing features === We distinguish between two kinds of features: Static features are in most cases counts and statistics (e.g., the clauses-to-variables ratio in SAT). These features range from very cheap (e.g., the number of variables) to very complex (e.g., statistics about variable-clause graphs). Probing features (sometimes also called landmarking features) are computed by running some analysis of algorithm behavior on an instance (e.g., the accuracy of a cheap decision tree algorithm on an ML data set, or running a stochastic local search solver for a short time on a Boolean formula).
These features often cost more than simple static features. === Feature costs === Depending on the performance metric m {\displaystyle m} used, feature computation can be associated with costs. For example, if we use running time as the performance metric, we have to include the time to compute our instance features in the performance of an algorithm selection system. SAT solving is a concrete example where such feature costs cannot be neglected, since instance features for CNF formulas can be either very cheap (e.g., getting the number of variables can be done in constant time for CNFs in the DIMACS format) or very expensive (e.g., graph features which can cost tens or hundreds of seconds). It is important to take the overhead of feature computation into account in practice in such scenarios; otherwise a misleading impression of the performance of the algorithm selection approach is created. For example, if the decision of which algorithm to choose can be made with perfect accuracy, but the features are the running times of the portfolio algorithms, there is no benefit to the portfolio approach. This would not be obvious if feature costs were omitted. == Approaches == === Regression approach === One of the first successful algorithm selection approaches predicted the performance of each algorithm m ^ A : I → R {\displaystyle {\hat {m}}_{\mathcal {A}}:{\mathcal {I}}\to \mathbb {R} } and selected the algorithm with the best predicted performance a r g min A ∈ P m ^ A ( i ) {\displaystyle arg\min _{{\mathcal {A}}\in {\mathcal {P}}}{\hat {m}}_{\mathcal {A}}(i)} for an instance i {\displaystyle i} . === Clustering approach === A common assumption is that the given set of instances I {\displaystyle {\mathcal {I}}} can be clustered into homogeneous subsets, and that for each of these subsets there is one algorithm that performs well on all instances in it.
So, the training consists of identifying the homogeneous clusters via an unsupervised clustering approach and associating an algorithm with each cluster. A new instance is assigned to a cluster and the associated algorithm selected. A more modern approach is cost-sensitive hierarchical clustering using supervised learning to identify the homogeneous instance subsets. === Pairwise cost-sensitive classification approach === A common approach for multi-class classification is to learn pairwise models between every pair of classes (here algorithms) and choose the class that was predicted most often by the pairwise models. We can weight the instances of the pairwise prediction problem by the performance difference between the two algorithms. This is motivated by the fact that we care most about getting predictions with large differences correct, but the penalty for an incorrect prediction is small if there is almost no performance difference. Therefore, each instance i {\displaystyle i} for training a classification model A 1 {\displaystyle {\mathcal {A}}_{1}} vs A 2 {\displaystyle {\mathcal {A}}_{2}} is associated with a cost | m ( A 1 , i ) − m ( A 2 , i ) | {\displaystyle |m({\mathcal {A}}_{1},i)-m({\mathcal {A}}_{2},i)|} . == Requirements == The algorithm selection problem can be effectively applied under the following assumptions: The portfolio P {\displaystyle {\mathcal {P}}} of algorithms is complementary with respect to the instance set I {\displaystyle {\mathcal {I}}} , i.e., there is no single algorithm A ∈ P {\displaystyle {\mathcal {A}}\in {\mathcal {P}}} that dominates the performance of all other algorithms over I {\displaystyle {\mathcal {I}}} (see figures to the right for examples on complementary analysis). In some applications, the computation of instance features is associated with a cost. For example, if the cost metric is running time, we also have to consider the time to compute the instance features.
In such cases, the cost to compute features should not be larger than the performance gain through algorithm selection. == Application domains == Algorithm selection is not limited to single domains but can be applied to any kind of algorithm if the above requirements are satisfied. Application domains include: hard combinatorial problems: SAT, Mixed Integer Programming, CSP, AI Planning, TSP, MAXSAT, QBF and Answer Set Programming combinatorial auctions in machine learning, the problem is known as meta-learning software design black-box optimization multi-agent systems numerical optimization linear algebra, differential equations evolutionary algorithms vehicle routing problem power systems For an extensive list of literature about algorithm selection, we refer to a literature overview. == Variants of algorithm selection == === Online selection === Online algorithm selection refers to switching between different algorithms during the solving process. This is useful as a hyper-heuristic. In contrast, offline algorithm selection selects an algorithm for a given instance only once and before the solving process. === Computation of schedules === An extension of algorithm selection is the per-instance algorithm scheduling problem, in which we do not select only one solver, but a time budget for each algorithm on a per-instance basis. This approach improves the performance of selection systems, in particular if the instance features are not very informative and a wrong selection of a single solver is likely. === Selection of parallel portfolios === Given the increasing importance of parallel computation, an extension of algorithm selection for parallel computation is parallel portfolio selection, in which we select a subset of the algorithms to run simultaneously in a parallel portfolio. == External links == Algorithm Selection Library (ASlib) Algorithm selection literature == References ==
Adapted process
In the study of stochastic processes, a stochastic process is adapted (also referred to as a non-anticipating or non-anticipative process) if information about the value of the process at a given time is available at that same time. An informal interpretation is that X is adapted if and only if, for every realisation and every n, Xn is known at time n. The concept of an adapted process is essential, for instance, in the definition of the Itō integral, which only makes sense if the integrand is an adapted process. == Definition == Let ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )} be a probability space; I {\displaystyle I} be an index set with a total order ≤ {\displaystyle \leq } (often, I {\displaystyle I} is N {\displaystyle \mathbb {N} } , N 0 {\displaystyle \mathbb {N} _{0}} , [ 0 , T ] {\displaystyle [0,T]} or [ 0 , + ∞ ) {\displaystyle [0,+\infty )} ); F = ( F i ) i ∈ I {\displaystyle \mathbb {F} =\left({\mathcal {F}}_{i}\right)_{i\in I}} be a filtration of the sigma algebra F {\displaystyle {\mathcal {F}}} ; ( S , Σ ) {\displaystyle (S,\Sigma )} be a measurable space, the state space; X : I × Ω → S {\displaystyle X:I\times \Omega \to S} be a stochastic process. The stochastic process ( X i ) i ∈ I {\displaystyle (X_{i})_{i\in I}} is said to be adapted to the filtration ( F i ) i ∈ I {\displaystyle \left({\mathcal {F}}_{i}\right)_{i\in I}} if the random variable X i : Ω → S {\displaystyle X_{i}:\Omega \to S} is a ( F i , Σ ) {\displaystyle ({\mathcal {F}}_{i},\Sigma )} -measurable function for each i ∈ I {\displaystyle i\in I} . == Examples == Consider a stochastic process X : [0, T] × Ω → R, and equip the real line R with its usual Borel sigma algebra generated by the open sets. If we take the natural filtration F•X, where FtX is the σ-algebra generated by the pre-images Xs−1(B) for Borel subsets B of R and times 0 ≤ s ≤ t, then X is automatically F•X-adapted.
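A minimal discrete-time sketch of the definition above (the coin-flip sample space and the random walk are illustrative assumptions, not part of the article): a simple random walk is adapted to its natural filtration because its value at time k is determined by the first k coin flips alone.

```python
from itertools import product

# Sample space: all sequences of n coin flips taking values -1 or +1.
n = 4
omega = list(product([-1, 1], repeat=n))

def walk(path, k):
    """Value of the random walk S_k = x_1 + ... + x_k along one outcome."""
    return sum(path[:k])

# Adaptedness: outcomes that agree on the first k flips give the same S_k,
# i.e. S_k is measurable with respect to the first k coordinates.
for k in range(n + 1):
    for w1 in omega:
        for w2 in omega:
            if w1[:k] == w2[:k]:
                assert walk(w1, k) == walk(w2, k)

# By contrast, S_2 "looks into the future" of F_1: two outcomes that agree
# on the first flip can still disagree on S_2.
assert walk((1, 1, 1, 1), 2) != walk((1, -1, 1, 1), 2)
```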
Intuitively, the natural filtration F•X contains "total information" about the behaviour of X up to time t. This offers a simple example of a non-adapted process X : [0, 2] × Ω → R: set Ft to be the trivial σ-algebra {∅, Ω} for times 0 ≤ t < 1, and Ft = FtX for times 1 ≤ t ≤ 2. Since the only way that a function can be measurable with respect to the trivial σ-algebra is to be constant, any process X that is non-constant on [0, 1] will fail to be F•-adapted. The non-constant nature of such a process "uses information" from the more refined "future" σ-algebras Ft, 1 ≤ t ≤ 2. == See also == Predictable process Progressively measurable process == References ==
Rasch model
The Rasch model, named after Georg Rasch, is a psychometric model for analyzing categorical data, such as answers to questions on a reading assessment or questionnaire responses, as a function of the trade-off between the respondent's abilities, attitudes, or personality traits, and the item difficulty. For example, it may be used to estimate a student's reading ability or the extremity of a person's attitude to capital punishment from responses on a questionnaire. In addition to psychometrics and educational research, the Rasch model and its extensions are used in other areas, including the health profession, agriculture, and market research. The mathematical theory underlying Rasch models is a special case of item response theory. However, there are important differences in the interpretation of the model parameters and its philosophical implications that separate proponents of the Rasch model from the item response modeling tradition. A central aspect of this divide relates to the role of specific objectivity, a defining property of the Rasch model according to Georg Rasch, as a requirement for successful measurement. == Overview == === The Rasch model for measurement === In the Rasch model, the probability of a specified response (e.g. right/wrong answer) is modeled as a function of person and item parameters. Specifically, in the original Rasch model, the probability of a correct response is modeled as a logistic function of the difference between the person and item parameter. The mathematical form of the model is provided later in this article. In most contexts, the parameters of the model characterize the proficiency of the respondents and the difficulty of the items as locations on a continuous latent variable. For example, in educational tests, item parameters represent the difficulty of items while person parameters represent the ability or attainment level of people who are assessed.
The higher a person's ability relative to the difficulty of an item, the higher the probability of a correct response on that item. When a person's location on the latent trait is equal to the difficulty of the item, there is by definition a 0.5 probability of a correct response in the Rasch model. A Rasch model is a model in one sense in that it represents the structure which data should exhibit in order to obtain measurements from the data; i.e. it provides a criterion for successful measurement. Beyond data, Rasch's equations model relationships we expect to obtain in the real world. For instance, education is intended to prepare children for the entire range of challenges they will face in life, and not just those that appear in textbooks or on tests. By requiring measures to remain the same (invariant) across different tests measuring the same thing, Rasch models make it possible to test the hypothesis that the particular challenges posed in a curriculum and on a test coherently represent the infinite population of all possible challenges in that domain. A Rasch model is therefore a model in the sense of an ideal or standard that provides a heuristic fiction serving as a useful organizing principle even when it is never actually observed in practice. The perspective or paradigm underpinning the Rasch model is distinct from the perspective underpinning statistical modelling. Models are most often used with the intention of describing a set of data. Parameters are modified and accepted or rejected based on how well they fit the data. In contrast, when the Rasch model is employed, the objective is to obtain data which fit the model. The rationale for this perspective is that the Rasch model embodies requirements which must be met in order to obtain measurement, in the sense that measurement is generally understood in the physical sciences. A useful analogy for understanding this rationale is to consider objects measured on a weighing scale. 
Suppose the weight of an object A is measured as being substantially greater than the weight of an object B on one occasion, then immediately afterward the weight of object B is measured as being substantially greater than the weight of object A. A property we require of measurements is that the resulting comparison between objects should be the same, or invariant, irrespective of other factors. This key requirement is embodied within the formal structure of the Rasch model. Consequently, the Rasch model is not altered to suit data. Instead, the method of assessment should be changed so that this requirement is met, in the same way that a weighing scale should be rectified if it gives different comparisons between objects upon separate measurements of the objects. Data analysed using the model are usually responses to conventional items on tests, such as educational tests with right/wrong answers. However, the model is a general one, and can be applied wherever discrete data are obtained with the intention of measuring a quantitative attribute or trait. === Scaling === When all test-takers have an opportunity to attempt all items on a single test, each total score on the test maps to a unique estimate of ability and the greater the total, the greater the ability estimate. Total scores do not have a linear relationship with ability estimates. Rather, the relationship is non-linear as shown in Figure 1. The total score is shown on the vertical axis, while the corresponding person location estimate is shown on the horizontal axis. For the particular test on which the test characteristic curve (TCC) shown in Figure 1 is based, the relationship is approximately linear throughout the range of total scores from about 13 to 31. The shape of the TCC is generally somewhat sigmoid as in this example. However, the precise relationship between total scores and person location estimates depends on the distribution of items on the test. 
The TCC is steeper in ranges on the continuum in which there are more items, such as in the range on either side of 0 in Figures 1 and 2. In applying the Rasch model, item locations are often scaled first, based on methods such as those described below. This part of the process of scaling is often referred to as item calibration. In educational tests, the smaller the proportion of correct responses, the higher the difficulty of an item and hence the higher the item's scale location. Once item locations are scaled, the person locations are measured on the scale. As a result, person and item locations are estimated on a single scale as shown in Figure 2. === Interpreting scale locations === For dichotomous data such as right/wrong answers, by definition, the location of an item on a scale corresponds with the person location at which there is a 0.5 probability of a correct response to the question. In general, the probability of a person responding correctly to a question with difficulty lower than that person's location is greater than 0.5, while the probability of responding correctly to a question with difficulty greater than the person's location is less than 0.5. The Item Characteristic Curve (ICC) or Item Response Function (IRF) shows the probability of a correct response as a function of the ability of persons. A single ICC is shown and explained in more detail in relation to Figure 4 in this article (see also the item response function). The leftmost ICCs in Figure 3 are the easiest items, the rightmost ICCs in the same figure are the most difficult items. When responses of a person are sorted according to item difficulty, from lowest to highest, the most likely pattern is a Guttman pattern or vector; i.e. {1,1,...,1,0,0,0,...,0}. However, while this pattern is the most probable given the structure of the Rasch model, the model requires only probabilistic Guttman response patterns; that is, patterns which tend toward the Guttman pattern. 
It is unusual for responses to conform strictly to the pattern because there are many possible patterns. It is unnecessary for responses to conform strictly to the pattern in order for data to fit the Rasch model. Each ability estimate has an associated standard error of measurement, which quantifies the degree of uncertainty associated with the ability estimate. Item estimates also have standard errors. Generally, the standard errors of item estimates are considerably smaller than the standard errors of person estimates because there are usually more response data for an item than for a person. That is, the number of people attempting a given item is usually greater than the number of items attempted by a given person. Standard errors of person estimates are smaller where the slope of the ICC is steeper, which is generally through the middle range of scores on a test. Thus, there is greater precision in this range since the steeper the slope, the greater the distinction between any two points on the line. Statistical and graphical tests are used to evaluate the correspondence of data with the model. Certain tests are global, while others focus on specific items or people. Certain tests of fit provide information about which items can be used to increase the reliability of a test by omitting or correcting problems with poor items. In Rasch Measurement the person separation index is used instead of reliability indices. However, the person separation index is analogous to a reliability index. The separation index is a summary of the genuine separation as a ratio to separation including measurement error. As mentioned earlier, the level of measurement error is not uniform across the range of a test, but is generally larger for more extreme scores (low and high). 
== Features of the Rasch model == The class of models is named after Georg Rasch, a Danish mathematician and statistician who advanced the epistemological case for the models based on their congruence with a core requirement of measurement in physics; namely the requirement of invariant comparison. This is the defining feature of the class of models, as is elaborated upon in the following section. The Rasch model for dichotomous data has a close conceptual relationship to the law of comparative judgment (LCJ), a model formulated and used extensively by L. L. Thurstone, and therefore also to the Thurstone scale. Prior to introducing the measurement model he is best known for, Rasch had applied the Poisson distribution to reading data as a measurement model, hypothesizing that in the relevant empirical context, the number of errors made by a given individual was governed by the ratio of the text difficulty to the person's reading ability. Rasch referred to this model as the multiplicative Poisson model. Rasch's model for dichotomous data – i.e. where responses are classifiable into two categories – is his most widely known and used model, and is the main focus here. This model has the form of a simple logistic function. The brief outline above highlights certain distinctive and interrelated features of Rasch's perspective on social measurement, which are as follows: He was concerned principally with the measurement of individuals, rather than with distributions among populations. He was concerned with establishing a basis for meeting a priori requirements for measurement deduced from physics and, consequently, did not invoke any assumptions about the distribution of levels of a trait in a population. Rasch's approach explicitly recognizes that it is a scientific hypothesis that a given trait is both quantitative and measurable, as operationalized in a particular experimental context. 
Thus, congruent with the perspective articulated by Thomas Kuhn in his 1961 paper The function of measurement in modern physical science, measurement was regarded both as being founded in theory, and as being instrumental to detecting quantitative anomalies incongruent with hypotheses related to a broader theoretical framework. This perspective is in contrast to that generally prevailing in the social sciences, in which data such as test scores are directly treated as measurements without requiring a theoretical foundation for measurement. Although this contrast exists, Rasch's perspective is actually complementary to the use of statistical analysis or modelling that requires interval-level measurements, because the purpose of applying a Rasch model is to obtain such measurements. Applications of Rasch models are described in a wide variety of sources. === Invariant comparison and sufficiency === The Rasch model for dichotomous data is often regarded as an item response theory (IRT) model with one item parameter. However, rather than being a particular IRT model, proponents of the model: 265  regard it as a model that possesses a property which distinguishes it from other IRT models. Specifically, the defining property of Rasch models is their formal or mathematical embodiment of the principle of invariant comparison. Rasch summarised the principle of invariant comparison as follows: The comparison between two stimuli should be independent of which particular individuals were instrumental for the comparison; and it should also be independent of which other stimuli within the considered class were or might also have been compared. Symmetrically, a comparison between two individuals should be independent of which particular stimuli within the class considered were instrumental for the comparison; and it should also be independent of which other individuals were also compared, on the same or some other occasion. 
Rasch models embody this principle because their formal structure permits algebraic separation of the person and item parameters, in the sense that the person parameter can be eliminated during the process of statistical estimation of item parameters. This result is achieved through the use of conditional maximum likelihood estimation, in which the response space is partitioned according to person total scores. The consequence is that the raw score for an item or person is the sufficient statistic for the item or person parameter. That is to say, the person total score contains all information available within the specified context about the individual, and the item total score contains all information with respect to the item, with regard to the relevant latent trait. The Rasch model requires a specific structure in the response data, namely a probabilistic Guttman structure. In somewhat more familiar terms, Rasch models provide a basis and justification for obtaining person locations on a continuum from total scores on assessments. Although it is not uncommon to treat total scores directly as measurements, they are actually counts of discrete observations rather than measurements. Each observation represents the observable outcome of a comparison between a person and item. Such outcomes are directly analogous to the observation of the tipping of a beam balance in one direction or another. This observation would indicate that one or other object has a greater mass, but counts of such observations cannot be treated directly as measurements. Rasch pointed out that the principle of invariant comparison is characteristic of measurement in physics using, by way of example, a two-way experimental frame of reference in which each instrument exerts a mechanical force upon solid bodies to produce acceleration. 
Rasch: 112–3  stated of this context: "Generally: If for any two objects we find a certain ratio of their accelerations produced by one instrument, then the same ratio will be found for any other of the instruments". It is readily shown that Newton's second law entails that such ratios are inversely proportional to the ratios of the masses of the bodies. == The mathematical form of the Rasch model for dichotomous data == Let X n i = x ∈ { 0 , 1 } {\displaystyle X_{ni}=x\in \{0,1\}} be a dichotomous random variable where, for example, x = 1 {\displaystyle x=1} denotes a correct response and x = 0 {\displaystyle x=0} an incorrect response to a given assessment item. In the Rasch model for dichotomous data, the probability of the outcome X n i = 1 {\displaystyle X_{ni}=1} is given by: Pr { X n i = 1 } = e β n − δ i 1 + e β n − δ i , {\displaystyle \Pr\{X_{ni}=1\}={\frac {e^{{\beta _{n}}-{\delta _{i}}}}{1+e^{{\beta _{n}}-{\delta _{i}}}}},} where β n {\displaystyle \beta _{n}} is the ability of person n {\displaystyle n} and δ i {\displaystyle \delta _{i}} is the difficulty of item i {\displaystyle i} . Thus, in the case of a dichotomous attainment item, Pr { X n i = 1 } {\displaystyle \Pr\{X_{ni}=1\}} is the probability of success upon interaction between the relevant person and assessment item. It is readily shown that the log odds, or logit, of correct response by a person to an item, based on the model, is equal to β n − δ i {\displaystyle \beta _{n}-\delta _{i}} . Given two examinees with different ability parameters β 1 {\displaystyle \beta _{1}} and β 2 {\displaystyle \beta _{2}} and an arbitrary item with difficulty δ i {\displaystyle \delta _{i}} , compute the difference in logits for these two examinees by ( β 1 − δ i ) − ( β 2 − δ i ) {\displaystyle (\beta _{1}-\delta _{i})-(\beta _{2}-\delta _{i})} . This difference becomes β 1 − β 2 {\displaystyle \beta _{1}-\beta _{2}} . 
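The model equation and the logit identities just derived can be checked directly in code (a minimal sketch; the numeric ability and difficulty values are arbitrary):

```python
import math

def rasch_probability(beta, delta):
    """P(X = 1) under the dichotomous Rasch model:
    exp(beta - delta) / (1 + exp(beta - delta))."""
    return math.exp(beta - delta) / (1.0 + math.exp(beta - delta))

def logit(p):
    """Log odds of a probability p."""
    return math.log(p / (1.0 - p))

# When ability equals item difficulty, the success probability is 0.5.
assert rasch_probability(1.2, 1.2) == 0.5

# The logit of a correct response is beta - delta.
assert abs(logit(rasch_probability(2.0, 0.5)) - 1.5) < 1e-9

# For two examinees on the same item, the logit difference is beta1 - beta2;
# the item difficulty cancels.
p1 = rasch_probability(2.0, 0.7)
p2 = rasch_probability(1.0, 0.7)
assert abs((logit(p1) - logit(p2)) - 1.0) < 1e-9
```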
Conversely, it can be shown that the log odds of a correct response by the same person to one item, conditional on a correct response to one of two items, is equal to the difference between the item locations. For example, l o g - o d d s ⁡ { X n 1 = 1 ∣ r n = 1 } = δ 2 − δ 1 , {\displaystyle \operatorname {log-odds} \{X_{n1}=1\mid \ r_{n}=1\}=\delta _{2}-\delta _{1},\,} where r n {\displaystyle r_{n}} is the total score of person n over the two items, which implies a correct response to one or other of the items. Hence, the conditional log odds does not involve the person parameter β n {\displaystyle \beta _{n}} , which can therefore be eliminated by conditioning on the total score r n = 1 {\displaystyle r_{n}=1} . That is, by partitioning the responses according to raw scores and calculating the log odds of a correct response, an estimate δ 2 − δ 1 {\displaystyle \delta _{2}-\delta _{1}} is obtained without involvement of β n {\displaystyle \beta _{n}} . More generally, a number of item parameters can be estimated iteratively through application of a process such as Conditional Maximum Likelihood estimation (see Rasch model estimation). While more involved, the same fundamental principle applies in such estimations. The ICC of the Rasch model for dichotomous data is shown in Figure 4. The grey line maps the probability of the discrete outcome X n i = 1 {\displaystyle X_{ni}=1} (that is, correctly answering the question) for persons with different locations on the latent continuum (that is, their level of abilities). The location of an item is, by definition, that location at which the probability that X n i = 1 {\displaystyle X_{ni}=1} is equal to 0.5. In figure 4, the black circles represent the actual or observed proportions of persons within Class Intervals for which the outcome was observed. 
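The elimination of the person parameter by conditioning on the total score, described above, can be verified numerically under the model; the ability and difficulty values below are arbitrary:

```python
import math

def p_correct(beta, delta):
    """Rasch probability of a correct response."""
    return math.exp(beta - delta) / (1.0 + math.exp(beta - delta))

def conditional_log_odds(beta, delta1, delta2):
    """Log odds that item 1 was the item answered correctly, given that
    exactly one of the two items was answered correctly (r_n = 1)."""
    p10 = p_correct(beta, delta1) * (1.0 - p_correct(beta, delta2))
    p01 = (1.0 - p_correct(beta, delta1)) * p_correct(beta, delta2)
    return math.log(p10 / p01)

delta1, delta2 = -0.5, 1.0
# The conditional log odds equals delta2 - delta1 for every ability level:
# the person parameter beta has been eliminated by the conditioning.
for beta in (-2.0, 0.0, 3.0):
    assert abs(conditional_log_odds(beta, delta1, delta2) - (delta2 - delta1)) < 1e-9
```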
For example, in the case of an assessment item used in the context of educational psychology, these could represent the proportions of persons who answered the item correctly. Persons are ordered by the estimates of their locations on the latent continuum and classified into Class Intervals on this basis in order to graphically inspect the accordance of observations with the model. There is a close conformity of the data with the model. In addition to graphical inspection of data, a range of statistical tests of fit are used to evaluate whether departures of observations from the model can be attributed to random effects alone, as required, or whether there are systematic departures from the model. == Polytomous extensions of the Rasch model == There are multiple polytomous extensions to the Rasch model, which generalize the dichotomous model so that it can be applied in contexts in which successive integer scores represent categories of increasing level or magnitude of a latent trait, such as increasing ability, motor function, endorsement of a statement, and so forth. These polytomous extensions are, for example, applicable to the use of Likert scales, grading in educational assessment, and scoring of performances by judges. == Other considerations == A criticism of the Rasch model is that it is overly restrictive or prescriptive because an assumption of the model is that all items have equal discrimination, whereas in practice, item discriminations vary, and thus no data set will ever show perfect data-model fit. A frequent misunderstanding is that the Rasch model does not permit each item to have a different discrimination, but equal discrimination is an assumption of invariant measurement, so differing item discriminations are not forbidden, but rather indicate that measurement quality does not equal a theoretical ideal.
Just as in physical measurement, real-world datasets will never perfectly match theoretical models, so the relevant question is whether a particular data set provides sufficient quality of measurement for the purpose at hand, not whether it perfectly matches an unattainable standard of perfection. A criticism specific to the use of the Rasch model with response data from multiple choice items is that there is no provision in the model for guessing because the left asymptote always approaches a zero probability in the Rasch model. This implies that a person of low ability will always get an item wrong. However, low-ability individuals completing a multiple-choice exam have a substantially higher probability of choosing the correct answer by chance alone (for a k-option item, the likelihood is around 1/k). The three-parameter logistic model relaxes both these assumptions, and the two-parameter logistic model (2PL) allows varying slopes. However, uniform discrimination and a zero left asymptote are necessary properties of the model in order to sustain sufficiency of the simple, unweighted raw score. In practice, the non-zero lower asymptote found in multiple-choice datasets is less of a threat to measurement than commonly assumed and typically does not result in substantive errors in measurement when well-developed test items are used sensibly. Verhelst & Glas (1995) derive Conditional Maximum Likelihood (CML) equations for a model they refer to as the One Parameter Logistic Model (OPLM). In algebraic form it appears to be identical with the 2PL model, but OPLM contains preset discrimination indexes rather than 2PL's estimated discrimination parameters.
As noted by these authors, though, the problem one faces in estimation with estimated discrimination parameters is that the discriminations are unknown, meaning that the weighted raw score "is not a mere statistic, and hence it is impossible to use CML as an estimation method".: 217  That is, sufficiency of the weighted "score" in the 2PL cannot be used according to the way in which a sufficient statistic is defined. If the weights are imputed instead of being estimated, as in OPLM, conditional estimation is possible and some of the properties of the Rasch model are retained. In OPLM, the values of the discrimination index are restricted to between 1 and 15. A limitation of this approach is that in practice, values of discrimination indexes must be preset as a starting point. This means some type of estimation of discrimination is involved when the purpose is to avoid doing so. The Rasch model for dichotomous data inherently entails a single discrimination parameter which, as noted by Rasch,: 121  constitutes an arbitrary choice of the unit in terms of which magnitudes of the latent trait are expressed or estimated. However, the Rasch model requires that the discrimination is uniform across interactions between persons and items within a specified frame of reference (i.e. the assessment context given conditions for assessment). Application of the model provides diagnostic information regarding how well the criterion is met. Application of the model can also provide information about how well items or questions on assessments work to measure the ability or trait. For instance, knowing the proportion of persons that engage in a given behavior, the Rasch model can be used to derive the relations between difficulty of behaviors, attitudes and behaviors. Prominent advocates of Rasch models include Benjamin Drake Wright, David Andrich and Erling Andersen. == See also == Mokken scale Guttman scale == References == == Further reading == Andrich, D. (1978a). 
"A rating formulation for ordered response categories". Psychometrika. 43 (4): 357–74. doi:10.1007/BF02293814. Andrich, D. (1988). Rasch models for measurement. Beverly Hills: Sage Publications. ISBN 978-1-5063-1937-7. Baker, F. (2001). The Basics of Item Response Theory. ERIC Clearinghouse on Assessment and Evaluation, University of Maryland, College Park, MD. ISBN 1-886047-03-0. Available free with software included from "IRT". Edres.org. Archived from the original on 5 February 2024. Fischer, G.H.; Molenaar, I.W., eds. (1995). Rasch models: foundations, recent developments and applications. New York: Springer-Verlag. ISBN 0-387-94499-0. Goldstein, H; Blinkhorn, S (1977). "Monitoring Educational Standards: an inappropriate model" (PDF). Bull.Br.Psychol.Soc. 30: 309–311. Goldstein, H; Blinkhorn, S (1982). "The Rasch Model Still Does Not Fit" (PDF). BERJ. 82: 167–170. Hambleton, RK; Jones, RW (1993). "Comparison of classical test theory and item response" (PDF). Educational Measurement: Issues and Practice. 12 (3): 38–47. doi:10.1111/j.1745-3992.1993.tb00543.x. Archived from the original (PDF) on 8 October 2006. available in the "ITEMS Series". National Council on Measurement in Education. Archived from the original on 8 October 2006. Harris, D. (1989). "Comparison of 1-, 2-, and 3-parameter IRT models" (PDF). Educational Measurement: Issues and Practice. 8: 35–41. doi:10.1111/j.1745-3992.1989.tb00313.x. Archived from the original (PDF) on 8 October 2006. available in the "ITEMS Series". National Council on Measurement in Education. Archived from the original on 8 October 2006. Linacre, J. M. (1999). "Understanding Rasch measurement: Estimation methods for Rasch measures". Journal of Outcome Measurement. 3 (4): 382–405. PMID 10572388. von Davier, M.; Carstensen, C. H. (2007). Multivariate and Mixture Distribution Rasch Models: Extensions and Applications. New York: Springer. doi:10.1007/978-0-387-49839-3. ISBN 978-0-387-49839-3. von Davier, M. (2016). 
"Rasch Model". In van der Linden, Wim J. (ed.). Handbook of Item Response Theory (PDF). Boca Raton: CRC Press. doi:10.1201/9781315374512. ISBN 9781315374512. Wright, B.D.; Stone, M.H. (1979). Best Test Design. Chicago, IL: MESA Press. {{cite book}}: |work= ignored (help) Wu, M.; Adams, R. (2007). Applying the Rasch model to psycho-social measurement: A practical approach (PDF). Melbourne, Australia: Educational Measurement Solutions. Available free from Educational Measurement Solutions == External links == Institute for Objective Measurement Online Rasch Resources Pearson Psychometrics Laboratory, with information about Rasch models "Journal of Applied Measurement". Journal of Outcome Measurement (all issues available for free downloading) Berkeley Evaluation & Assessment Research Center (ConstructMap software) Directory of Rasch Software – freeware and paid "IRT Modeling Lab". U. Illinois Urbana Champ. Archived from the original on 27 June 2001. National Council on Measurement in Education (NCME) Rasch Measurement Transactions The Standards for Educational and Psychological Testing The Trouble with Rasch
How Data Happened
How Data Happened: A History from the Age of Reason to the Age of Algorithms is a 2023 non-fiction book written by Columbia University professors Chris Wiggins and Matthew L. Jones. The book explores the history of data and statistics from the end of the 18th century to the present day. == Content == The book starts at the end of the 18th century, when European states began tabulating physical resources, and ends at the present day, when algorithms manipulate our personal information as a commodity. It looks at the rise of data and statistics, and how early statistical methods were used to justify eugenics, quantify supposed racial differences, and develop military and industrial applications. The authors also discuss the impact of the internet and e-commerce on data collection, the rise of data science, and the consequences of government-run surveillance systems collecting vast amounts of personal data for customized, targeted advertising. They emphasize the importance of privacy and democracy and propose remedies to the problems caused by mass data collection, including stronger regulation of the tech industry and collective action by its employees. The book is a historical analysis that provides context for understanding the debates surrounding data and its control. The book has 336 pages and was published in 2023 by W. W. Norton & Company. == References == == External links == The wild evolution of data science and how to unpack it, book excerpt on Big Think From Eugenics to Targeted Advertising: The Dark Role of Data in Sorting Humanity, book excerpt on Literary Hub
Health care analytics
Health care analytics is the set of health care analysis activities that can be undertaken as a result of data collected from four areas within healthcare: (1) claims and cost data, (2) pharmaceutical and research and development (R&D) data, (3) clinical data (such as that collected from electronic health records (EHRs)), and (4) patient behavior and preference data (e.g. patient satisfaction or retail purchases, such as data captured in stores selling personal health products). Health care analytics is a growing industry in many countries, including the United States, where it is expected to grow to more than $31 billion by 2022. It is also increasingly important to governments and public health agencies to support health policy and meet public expectations for transparency, as accelerated by the COVID-19 pandemic. Health care analytics allows for the examination of patterns in various healthcare data in order to determine how clinical care can be improved for patients and provider teams, while limiting excessive spending and improving the health of populations. Areas of the industry focus on clinical analysis, financial analysis, supply chain analysis, as well as marketing, fraud and HR analysis. There is increasing demand in many countries to incorporate social indicators of patients and providers within health care analytics, to inform improvements for health equity, such as in terms of addressing racism in healthcare or the health of Indigenous peoples. == Balancing Interests in Healthcare Analytics: innovation, privacy, and patient safety == Healthcare analytics requires access to comprehensive data, but its usefulness must be balanced against the risks of expansive data collection: weakened protection of patient rights, erroneous conclusions or statistical predictions, and misuse of results. Appropriate policies could support gains in process improvements, cost reductions, personalized medicine, and population health.
Additionally, providing incentives to encourage appropriate use may address some concerns but could also inadvertently incentivize the misuse of data. Lastly, creating standards for IT infrastructure may encourage data sharing and use, but those standards would need to be reevaluated on a regular, ongoing basis as the fast pace of technological innovation causes standards and best practices to become quickly outdated. Several areas to improve healthcare analytics through national, regional and local collaborations and legislation have been identified. === Limiting data collection === The needs of healthcare providers, government agencies, health plans, and researchers for quality data must be met to ensure adequate medical care and to make improvements to the healthcare system, while still ensuring the patient's right to privacy. Data collection should be limited to what is necessary for medical care, and governed by patient preference beyond that care. Such limits would protect patient privacy while minimizing infrastructure costs to house data. When possible, patients should be informed about what data is collected prior to engaging in medical services. For instance, in Canada, data collection among Indigenous populations is governed by principles of First Nations ownership, control, access, and possession. === Limiting data use === Expanding availability of big data increases the risk of statistical errors, erroneous conclusions and predictions, and misuse of results. Evidence supports use of data for process improvements, cost reductions, personalized medicine, and public health. Innovative uses for individual health can nonetheless harm underserved populations. 
In the United States, limiting use for denial and exclusion prevents data from being used to determine eligibility for benefits or care, and is harmonized with other federal anti-discrimination laws such as the Fair Credit Reporting Act, the Civil Rights Act and the Genetic Information Nondiscrimination Act. === Providing incentives to encourage appropriate use === Increasing vertical integration in both public and private sector providers has created massive databases of electronic health records. In the United States, the ACA has provided Medicare and Medicaid incentives to providers to adopt EHRs. Large healthcare institutions also have internal motivation to apply healthcare analytics, largely for reducing costs by providing preventative care. Policy could increase data use by incentivizing insurers and providers to increase population tracking, which improves outcomes. === Creating standards for the IT infrastructure === Inappropriate IT infrastructure likely limits healthcare analytics findings and their impact on clinical practice. Establishing standards ensures IT infrastructure capable of housing big data while addressing accessibility, ownership, and privacy. New possibilities could be explored such as private clouds and “a virtual sandbox” consisting of filtered data authorized to the researchers accessing the sandbox. Standards promote easier coordination and information collaboration between different medical and research organizations, significantly improving patient care by improving communication between providers and reducing duplication and costs. Minimum standards are necessary to balance privacy and accessibility. Standardization helps improve patient care by facilitating research collaboration and easier communication between medical providers. The research can yield preventive care concepts that can reduce patient caseload and avoid long-term medical costs. 
== Healthcare analytics in UAE == The Dubai Pharmacy College (DPC) is a pioneer in healthcare data analytics education in the GCC region. DPC offers a post-graduate certificate course in "Healthcare business data analytics" for healthcare professionals, intended to motivate them to explore the concept of healthcare data analytics and apply innovations in healthcare computing technologies. The aim of the certification program is to provide a platform for interprofessional researchers to utilize the fundamental technology, including software applications, for intelligent data acquisition, processing, and analysis of healthcare data. == Healthcare analytics in the United States == === Federal government role in health IT === In the United States, multiple federal entities are heavily involved in health analytics infrastructure. Within the executive branch, the administration itself, the Centers for Medicare and Medicaid Services (CMS), and the Office of the National Coordinator for Health Information Technology (ONC) each have strategic plans and are involved in determining regulation. Within the legislative branch, multiple committees within the House of Representatives and Senate hold hearings and have opinions on using data and technology to reduce costs and improve outcomes in healthcare. The ONC issued the Federal Health IT Strategic Plan 2015-2020. The plan outlines the steps federal agencies will take to achieve widespread use of health information technology (health IT) and electronic health information to enhance the health IT infrastructure, to advance person-centered and self-managed health, to transform health care delivery and community health, and to foster research, scientific knowledge and innovation. 
The plan is intended “to provide clarity in federal policies, programs, and actions and includes strategies to align program requirements, harmonize and simplify regulations, and aims to help health IT users to advance the learning health system to achieve better health.” The Strategic Plan includes several key initiatives employing multiple strategies to meet its goals. These include: (1) finalizing and implementing an interoperability roadmap; (2) protecting the privacy and security of health information; (3) identifying, prioritizing and advancing technical standards; (4) increasing user and market confidence in the safety and safe use of health IT; (5) advancing a national communication infrastructure; and (6) collaborating among all stakeholders. === Challenges to address === ==== Creating an interoperability roadmap ==== Several challenges have been identified to be addressed: (1) variation in how standards are tested and implemented; (2) variation in how health IT stakeholders interpret and implement policies and legal requirements; and (3) reluctance of health IT stakeholders to share and collaborate in ways that might foster consumer engagement. The ONC is working to develop a policy advisory for health information exchange by 2017 that will define and outline basic expectations for trading partners around health information exchange, interoperability and the exchange of information. Current federal and state law only prohibits certain kinds of information blocking in limited and narrow circumstances, for example, under the Health Insurance Portability and Accountability Act (HIPAA) or the Anti-Kickback statute. ==== Protecting privacy and security ==== In addition to HIPAA, many states have their own privacy laws protecting an individual’s health information. State laws that are contrary to HIPAA are generally preempted by the federal requirements unless a specific exception applies. 
For example, if the state law relates to identifiable health information and provides greater privacy protections, then it is not preempted by HIPAA. Since privacy laws may vary from state to state, this may create confusion among health IT stakeholders and make it difficult to ensure privacy compliance. ==== Establishing common technical standards ==== Use of common technical standards is necessary to move electronic health information seamlessly and securely. While some clinical record content, such as laboratory results and clinical measurements, is easily standardized, other content, such as provider notes, may be more difficult to standardize. Methods need to be identified that allow for the standardization of provider notes and other traditionally “free form text” data. The ONC HIT Certification Program certifies that a system meets the technological capability, functionality and security requirements adopted by HHS. ONC will assess the program on an ongoing basis “to ensure it can address and reinforce health IT applications and requirements that support federal value-based and alternative payment models.” ==== Increasing confidence in safety and safe use of health IT ==== Health care consumers, providers and organizations need to feel confident that the health IT products, systems or services they are using are not only secure, safe and useful but that they can switch between products, systems or services without loss of valuable information or undue financial burden. Implementation of the Federal Health IT Strategic Plan 2015-2020, along with the 2013 HHS Health IT Patient Safety Action and Surveillance Plan and the 2012 Food and Drug Administration Safety and Innovation Act, will attempt to address these concerns. 
==== Developing national communications structure ==== A national communications infrastructure is necessary to enable the sharing of electronic health information between stakeholders, including providers, individuals and national emergency first responders. It is also necessary for delivering telehealth services or using mobile health applications. “Expanded, secure, and affordable high-speed wireless and broadband services, choice, and spectrum availability will support electronic health information sharing and use, support the communication required for care delivery, and support the continuity of health care and public health services during disasters and public health emergencies.” ==== Stakeholder collaboration ==== The federal government in its role as contributor, beneficiary and collaborator “aims to encourage private-sector innovators and entrepreneurs, as well as researchers, to use government and government-funded data to create useful applications, products, services, and features that help improve health and health care.” HHS receives funds from the Patient-Centered Outcomes Research Trust Fund to build data capacity for patient-centered outcomes research. It is estimated HHS will receive over $140 million for the period between 2011 and 2019. These funds will be used “to enable a comprehensive, interoperable, and sustainable data network infrastructure to collect, link, and analyze data from multiple sources to facilitate patient-centered outcomes research.” ==== Legislation ==== Meaningful Use, the Patient Protection and Affordable Care Act (ACA) and the declining cost of data storage result in health data being stored, shared, and used by multiple providers, insurance companies, and research institutions. Concerns exist about how organizations gather, store, share, and use personal information, including privacy and confidentiality concerns, as well as concerns over the quality and accuracy of data collected. 
Expansion of existing regulation can ensure patient privacy and guard patient safety, balancing access to data with the ethical impact of exposing that data. == See also == Health information management Health informatics Public health informatics Human resources for health information systems (HRHIS) == References == == Further reading == Adam Tanner (2017). Our Bodies, Our Data: How Companies Make Billions Selling Our Medical Records. Beacon Press. ISBN 978-0807033340.
Durbin test
The Durbin test is a non-parametric statistical test for balanced incomplete block designs that reduces to the Friedman test in the case of a complete block design. In the analysis of designed experiments, the Friedman test is the most common non-parametric test for complete block designs. == Background == In a randomized block design, k treatments are applied to b blocks. In a complete block design, every treatment is run for every block and the data are arranged as follows: For some experiments, it may not be realistic to run all treatments in all blocks, so one may need to run an incomplete block design. In this case, it is strongly recommended to run a balanced incomplete block design. A balanced incomplete block design has the following properties: Every block contains k experimental units. Every treatment appears in r blocks. Every treatment appears with every other treatment an equal number of times. == Test assumptions == The Durbin test is based on the following assumptions: The b blocks are mutually independent. That means the results within one block do not affect the results within other blocks. The data can be meaningfully ranked (i.e., the data have at least an ordinal scale). == Test definition == Let R(Xij) be the rank assigned to Xij within block i (i.e., ranks within a given row). Average ranks are used in the case of ties. 
The ranks are summed to obtain R j = ∑ i = 1 b R ( X i j ) {\displaystyle R_{j}=\sum _{i=1}^{b}R(X_{ij})} The Durbin test is then H0: The treatments have identical effects Ha: At least one treatment is different from at least one other treatment The test statistic is T 2 = T 1 / ( t − 1 ) ( b k − b − T 1 ) / ( b k − b − t + 1 ) {\displaystyle T_{2}={\frac {T_{1}/\left(t-1\right)}{\left(bk-b-T_{1}\right)/\left(bk-b-t+1\right)}}} where T 1 = t − 1 A − C ( ∑ j = 1 t R j 2 − r C ) {\displaystyle T_{1}={\frac {t-1}{A-C}}\left(\sum _{j=1}^{t}R_{j}^{2}-rC\right)} A = ∑ i = 1 b ∑ j = 1 k R ( X i j ) 2 {\displaystyle A=\sum _{i=1}^{b}\sum _{j=1}^{k}R(X_{ij})^{2}} C = 1 4 b k ( k + 1 ) 2 {\displaystyle C={\frac {1}{4}}bk\left(k+1\right)^{2}} where t is the number of treatments, k is the number of treatments per block, b is the number of blocks, and r is the number of times each treatment appears. For significance level α, the critical region is given by T 2 > F α , t − 1 , b k − b − t + 1 {\displaystyle T_{2}>F_{\alpha ,t-1,bk-b-t+1}} where Fα, t − 1, bk − b − t + 1 denotes the upper α critical value of the F distribution with t − 1 numerator degrees of freedom and bk − b − t + 1 denominator degrees of freedom. The null hypothesis is rejected if the test statistic is in the critical region. If the hypothesis of identical treatment effects is rejected, it is often desirable to determine which treatments are different (i.e., multiple comparisons). Treatments i and j are considered different if | R j − R i | > t 1 − α / 2 , b k − b − t + 1 2 ( A − C ) r b k − b − t + 1 ( 1 − T 1 b ( k − 1 ) ) {\displaystyle |R_{j}-R_{i}|>t_{1-\alpha /2,bk-b-t+1}{\sqrt {{\frac {2\left(A-C\right)r}{bk-b-t+1}}\left(1-{\frac {T_{1}}{b\left(k-1\right)}}\right)}}} where Rj and Ri are the column sums of ranks within the blocks, and t1 − α/2, bk − b − t + 1 denotes the 1 − α/2 quantile of the t-distribution with bk − b − t + 1 degrees of freedom. 
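The statistics above can be computed directly from the within-block ranks. The sketch below uses a small hypothetical balanced incomplete block design with t = 4 treatments, b = 4 blocks, k = 3 treatments per block and r = 3 (the data values are made up for illustration and are not from the source):

```python
from scipy.stats import rankdata, f

# Hypothetical BIBD: every pair of treatments appears together in exactly 2 blocks
blocks = {
    1: {"A": 2, "B": 20, "C": 10},
    2: {"A": 4, "B": 14, "D": 7},
    3: {"A": 3, "C": 8, "D": 16},
    4: {"B": 12, "C": 9, "D": 5},
}
treatments = sorted({trt for blk in blocks.values() for trt in blk})
t, b = len(treatments), len(blocks)
k = len(next(iter(blocks.values())))                 # treatments per block
r = sum("A" in blk for blk in blocks.values())       # blocks per treatment

# Rank observations within each block (average ranks would handle ties)
ranks = {}
for i, blk in blocks.items():
    trts = list(blk)
    for x, rx in zip(trts, rankdata([blk[x] for x in trts])):
        ranks[(i, x)] = rx

Rj = {x: sum(rx for (i, y), rx in ranks.items() if y == x) for x in treatments}
A = sum(rx ** 2 for rx in ranks.values())
C = b * k * (k + 1) ** 2 / 4
T1 = (t - 1) / (A - C) * (sum(R ** 2 for R in Rj.values()) - r * C)
T2 = (T1 / (t - 1)) / ((b * k - b - T1) / (b * k - b - t + 1))
p = f.sf(T2, t - 1, b * k - b - t + 1)               # upper-tail F p-value
```

For this made-up data set the statistic is significant at the 5% level, so a multiple-comparison step would follow.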
== Historical note == T1 was the original statistic proposed by James Durbin, which would have an approximate null distribution of χ t − 1 2 {\displaystyle \chi _{t-1}^{2}} (that is, chi-squared with t − 1 {\displaystyle t-1} degrees of freedom). The T2 statistic has slightly more accurate critical regions, so it is now the preferred statistic. The T2 statistic is the two-way analysis of variance statistic computed on the ranks R(Xij). == Related tests == Cochran's Q test is applied for the special case of a binary response variable (i.e., one that can have only one of two possible outcomes). Cochran's Q test is valid for complete block designs only. == See also == Analysis of variance Friedman test Kruskal-Wallis test Van der Waerden test == References == Conover, W. J. (1999). Practical Nonparametric Statistics (Third ed.). Wiley. pp. 388–395. ISBN 0-471-16068-7. This article incorporates public domain material from the National Institute of Standards and Technology
Impossibility of a gambling system
The principle of the impossibility of a gambling system is a concept in probability. It states that in a random sequence, the methodical selection of subsequences does not change the probability of specific elements. The first mathematical demonstration is attributed to Richard von Mises (who used the term collective rather than sequence). The principle states that no method for forming a subsequence of a random sequence (the gambling system) improves the odds for a specific event. For instance, a sequence of fair coin tosses produces equal and independent 50/50 chances for heads and tails. A simple system of betting on heads every 3rd, 7th, or 21st toss, etc., does not change the odds of winning in the long run. As a mathematical consequence of computability theory, more complicated betting strategies (such as a martingale) also cannot alter the odds in the long run. Von Mises' mathematical demonstration defines an infinite sequence of zeros and ones as a random sequence if it has the frequency stability property, i.e., it is not biased. With this property, the frequency of zeroes in the sequence stabilizes at 1/2, and every possible subsequence selected by any systematic method is likewise not biased. The subsequence selection criterion is important, because although the sequence 0101010101... is not biased, selecting the odd positions results in 000000... which is not random. Von Mises did not fully define what constituted a "proper" selection rule for subsequences, but in 1940 Alonzo Church defined it as any recursive function which, having read the first N elements of the sequence, decides if it wants to select element number N+1. Church was a pioneer in the field of computable functions, and the definition he made relied on the Church–Turing thesis for computability. In the mid-1960s, A. N. Kolmogorov and D. W. Loveland independently proposed a more permissive selection rule. 
In their view Church's recursive function definition was too restrictive in that it read the elements in order. Instead they proposed a rule based on a partially computable process which, having read any N elements of the sequence, decides if it wants to select another element which has not been read yet. The principle influenced modern concepts in randomness, e.g. the work by A. N. Kolmogorov in considering a finite sequence random (with respect to a class of computing systems) if any program that can generate the sequence is at least as long as the sequence itself. == See also == Gambler's ruin History of randomness No free lunch theorem == References ==
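As a quick illustrative simulation (not from the source), a simple selection rule such as betting only on every 3rd toss leaves the observed frequency of heads essentially unchanged, in line with the frequency stability property:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
flips = [random.randrange(2) for _ in range(100_000)]  # 1 = heads, 0 = tails

overall = sum(flips) / len(flips)        # frequency of heads over all tosses
selected = flips[2::3]                   # "system": bet only on every 3rd toss
system = sum(selected) / len(selected)   # frequency of heads on selected tosses
```

Both frequencies stay close to 1/2; the selection rule confers no advantage.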
Z-test
A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution. A Z-test tests the mean of a distribution. For each significance level, the Z-test has a single critical value (for example, 1.96 for 5% two-tailed), which makes it more convenient than the Student's t-test, whose critical values are defined by the sample size (through the corresponding degrees of freedom). Both the Z-test and Student's t-test have similarities in that they both help determine the significance of a set of data. However, the Z-test is rarely used in practice because the population standard deviation is difficult to determine. == Applicability == Because of the central limit theorem, many test statistics are approximately normally distributed for large samples. Therefore, many statistical tests can be conveniently performed as approximate Z-tests if the sample size is large or the population variance is known. If the population variance is unknown (and therefore has to be estimated from the sample itself) and the sample size is not large (n < 30), the Student's t-test may be more appropriate (in some cases, n < 50, as described below). == Procedure == How to perform a Z-test when T is a statistic that is approximately normally distributed under the null hypothesis is as follows: First, estimate the expected value μ of T under the null hypothesis and obtain an estimate s of the standard deviation of T. Second, determine the properties of T: one-tailed or two-tailed. For null hypothesis H0: μ ≥ μ0 vs alternative hypothesis H1: μ < μ0, it is lower/left-tailed (one-tailed). For null hypothesis H0: μ ≤ μ0 vs alternative hypothesis H1: μ > μ0, it is upper/right-tailed (one-tailed). For null hypothesis H0: μ = μ0 vs alternative hypothesis H1: μ ≠ μ0, it is two-tailed. 
Third, calculate the standard score: Z = X ¯ − μ 0 s , {\displaystyle Z={\frac {{\bar {X}}-\mu _{0}}{s}},} from which one-tailed and two-tailed p-values can be calculated as Φ(Z) (for lower/left-tailed tests), Φ(−Z) (for upper/right-tailed tests) and 2Φ(−|Z|) (for two-tailed tests), where Φ is the standard normal cumulative distribution function. == Use in location testing == The term "Z-test" is often used to refer specifically to the one-sample location test comparing the mean of a set of measurements to a given constant when the sample variance is known. For example, if the observed data X1, ..., Xn are (i) independent, (ii) have a common mean μ, and (iii) have a common variance σ2, then the sample average X has mean μ and variance σ 2 n {\displaystyle {\frac {\sigma ^{2}}{n}}} . The null hypothesis is that the mean value of X is a given number μ0. We can use X as a test-statistic, rejecting the null hypothesis if X − μ0 is large. To calculate the standardized statistic Z = ( X ¯ − μ 0 ) s {\displaystyle Z={\frac {({\bar {X}}-\mu _{0})}{s}}} , we need to either know or have an approximate value for σ2, from which we can calculate s 2 = σ 2 n {\displaystyle s^{2}={\frac {\sigma ^{2}}{n}}} . In some applications, σ2 is known, but this is uncommon. If the sample size is moderate or large, we can substitute the sample variance for σ2, giving a plug-in test. The resulting test will not be an exact Z-test since the uncertainty in the sample variance is not accounted for—however, it will be a good approximation unless the sample size is small. A t-test can be used to account for the uncertainty in the sample variance when the data are exactly normal. Difference between Z-test and t-test: the Z-test is used when the sample size is large (n > 50) or the population variance is known; the t-test is used when the sample size is small (n < 50) and the population variance is unknown. 
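The procedure above can be sketched as a small function. This is an illustrative implementation (not from the source), for the one-sample case with a known population standard deviation σ:

```python
import math

def phi(v):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(v / math.sqrt(2)))

def one_sample_z(xs, mu0, sigma, tail="two"):
    """One-sample Z-test with known population standard deviation sigma.

    tail: "left" (H1: mu < mu0), "right" (H1: mu > mu0) or "two" (H1: mu != mu0).
    Returns the standard score Z and the corresponding p-value.
    """
    n = len(xs)
    xbar = sum(xs) / n
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    if tail == "left":
        p = phi(z)
    elif tail == "right":
        p = phi(-z)
    else:
        p = 2 * phi(-abs(z))
    return z, p
```

For example, `one_sample_z(data, 100, 12)` tests H0: μ = 100 against a two-sided alternative when σ = 12 is known.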
There is no universal constant at which the sample size is generally considered large enough to justify use of the plug-in test. Typical rules of thumb: the sample size should be 50 observations or more. For large sample sizes, the t-test procedure gives almost identical p-values as the Z-test procedure. Other location tests that can be performed as Z-tests are the two-sample location test and the paired difference test. == Conditions == For the Z-test to be applicable, certain conditions must be met. Nuisance parameters should be known, or estimated with high accuracy (an example of a nuisance parameter would be the standard deviation in a one-sample location test). Z-tests focus on a single parameter, and treat all other unknown parameters as being fixed at their true values. In practice, due to Slutsky's theorem, "plugging in" consistent estimates of nuisance parameters can be justified. However, if the sample size is not large enough for these estimates to be reasonably accurate, the Z-test may not perform well. The test statistic should follow a normal distribution. Generally, one appeals to the central limit theorem to justify assuming that a test statistic varies normally. There is a great deal of statistical research on the question of when a test statistic varies approximately normally. If the variation of the test statistic is strongly non-normal, a Z-test should not be used. If estimates of nuisance parameters are plugged in as discussed above, it is important to use estimates appropriate for the way the data were sampled. In the special case of Z-tests for the one or two sample location problem, the usual sample standard deviation is only appropriate if the data were collected as an independent sample. In some situations, it is possible to devise a test that properly accounts for the variation in plug-in estimates of nuisance parameters. In the case of one and two sample location problems, a t-test does this. 
== Example == Suppose that in a particular geographic region, the mean and standard deviation of scores on a reading test are 100 points, and 12 points, respectively. Our interest is in the scores of 55 students in a particular school who received a mean score of 96. We can ask whether this mean score is significantly lower than the regional mean—that is, are the students in this school comparable to a simple random sample of 55 students from the region as a whole, or are their scores surprisingly low? First calculate the standard error of the mean: S E = σ n = 12 55 = 12 7.42 = 1.62 {\displaystyle \mathrm {SE} ={\frac {\sigma }{\sqrt {n}}}={\frac {12}{\sqrt {55}}}={\frac {12}{7.42}}=1.62} where σ {\displaystyle {\sigma }} is the population standard deviation. Next calculate the z-score, which is the distance from the sample mean to the population mean in units of the standard error: z = M − μ S E = 96 − 100 1.62 = − 2.47 {\displaystyle z={\frac {M-\mu }{\mathrm {SE} }}={\frac {96-100}{1.62}}=-2.47} In this example, we treat the population mean and variance as known, which would be appropriate if all students in the region were tested. When population parameters are unknown, a Student's t-test should be conducted instead. The classroom mean score is 96, which is −2.47 standard error units from the population mean of 100. Looking up the z-score in a table of the standard normal distribution cumulative probability, we find that the probability of observing a standard normal value below −2.47 is approximately 0.5 − 0.4932 = 0.0068. This is the one-sided p-value for the null hypothesis that the 55 students are comparable to a simple random sample from the population of all test-takers. The two-sided p-value is approximately 0.014 (twice the one-sided p-value). Another way of stating things is that with probability 1 − 0.014 = 0.986, a simple random sample of 55 students would have a mean test score within 4 units of the population mean. 
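The arithmetic in this example can be checked in a few lines (an illustrative check, computing the standard normal CDF via the error function):

```python
import math

sigma, n, mu, m = 12, 55, 100, 96
se = sigma / math.sqrt(n)                        # standard error, about 1.62
z = (m - mu) / se                                # about -2.47
p_one = 0.5 * (1 + math.erf(z / math.sqrt(2)))   # one-sided p-value, about 0.007
p_two = 2 * p_one                                # two-sided p-value, about 0.014
```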
We could also say that with 98.6% confidence we reject the null hypothesis that the 55 test takers are comparable to a simple random sample from the population of test-takers. The Z-test tells us that the 55 students of interest have an unusually low mean test score compared to most simple random samples of similar size from the population of test-takers. A deficiency of this analysis is that it does not consider whether the effect size of 4 points is meaningful. If instead of a classroom, we considered a subregion containing 900 students whose mean score was 99, nearly the same z-score and p-value would be observed. This shows that if the sample size is large enough, very small differences from the null value can be highly statistically significant. See statistical hypothesis testing for further discussion of this issue. == Occurrence and applications == === For maximum likelihood estimation of a parameter === Location tests are the most familiar Z-tests. Another class of Z-tests arises in maximum likelihood estimation of the parameters in a parametric statistical model. Maximum likelihood estimates are approximately normal under certain conditions, and their asymptotic variance can be calculated in terms of the Fisher information. The maximum likelihood estimate divided by its standard error can be used as a test statistic for the null hypothesis that the population value of the parameter equals zero. More generally, if θ ^ {\displaystyle {\hat {\theta }}} is the maximum likelihood estimate of a parameter θ, and θ0 is the value of θ under the null hypothesis, θ ^ − θ 0 S E ( θ ^ ) {\displaystyle {\frac {{\hat {\theta }}-\theta _{0}}{{\rm {SE}}({\hat {\theta }})}}} can be used as a Z-test statistic. When using a Z-test for maximum likelihood estimates, it is important to be aware that the normal approximation may be poor if the sample size is not sufficiently large. 
Although there is no simple, universal rule stating how large the sample size must be to use a Z-test, simulation can give a good idea as to whether a Z-test is appropriate in a given situation. Z-tests are employed whenever it can be argued that a test statistic follows a normal distribution under the null hypothesis of interest. Many non-parametric test statistics, such as U statistics, are approximately normal for large enough sample sizes, and hence are often performed as Z-tests. === Comparing the proportions of two binomials === The Z-test for comparing two proportions is a statistical method used to evaluate whether the proportion of a certain characteristic differs significantly between two independent samples. This test leverages the property that the sample proportions (each the average of observations coming from a Bernoulli distribution) are asymptotically normal under the Central Limit Theorem, enabling the construction of a Z-test. The z-statistic for comparing two proportions is computed using: z = p ^ 1 − p ^ 2 p ^ ( 1 − p ^ ) ( 1 n 1 + 1 n 2 ) {\displaystyle z={\frac {{\hat {p}}_{1}-{\hat {p}}_{2}}{\sqrt {{\hat {p}}(1-{\hat {p}})\left({\frac {1}{n_{1}}}+{\frac {1}{n_{2}}}\right)}}}} Where: p ^ 1 {\displaystyle {\hat {p}}_{1}} = sample proportion in the first sample p ^ 2 {\displaystyle {\hat {p}}_{2}} = sample proportion in the second sample n 1 {\displaystyle n_{1}} = size of the first sample n 2 {\displaystyle n_{2}} = size of the second sample p ^ {\displaystyle {\hat {p}}} = pooled proportion, calculated as p ^ = x 1 + x 2 n 1 + n 2 {\displaystyle {\hat {p}}={\frac {x_{1}+x_{2}}{n_{1}+n_{2}}}} , where x 1 {\displaystyle x_{1}} and x 2 {\displaystyle x_{2}} are the counts of successes in the two samples. 
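The z-statistic above, with the pooled proportion under H0: p1 = p2, can be sketched as follows (illustrative code; the success counts in the usage line are made up):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z-test for H0: p1 == p2, using the pooled proportion in the standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_two = 2 * (0.5 * (1 + math.erf(-abs(z) / math.sqrt(2))))
    return z, p_two

# Hypothetical data: 45/100 successes in sample 1 vs 30/100 in sample 2
z, p = two_proportion_z(45, 100, 30, 100)
```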
The confidence interval for the difference between two proportions, based on the definitions above, is: ( p ^ 1 − p ^ 2 ) ± z α / 2 p ^ 1 ( 1 − p ^ 1 ) n 1 + p ^ 2 ( 1 − p ^ 2 ) n 2 {\displaystyle ({\hat {p}}_{1}-{\hat {p}}_{2})\pm z_{\alpha /2}{\sqrt {{\frac {{\hat {p}}_{1}(1-{\hat {p}}_{1})}{n_{1}}}+{\frac {{\hat {p}}_{2}(1-{\hat {p}}_{2})}{n_{2}}}}}} Where: z α / 2 {\displaystyle z_{\alpha /2}} is the critical value of the standard normal distribution (e.g., 1.96 for a 95% confidence level). The minimum detectable effect (MDE) when using the (two-sided) Z-test formula for comparing two proportions, incorporating critical values for α {\displaystyle \alpha } and 1 − β {\displaystyle 1-\beta } and the standard errors of the proportions, is: MDE = | p 1 − p 2 | = z 1 − α / 2 p 0 ( 1 − p 0 ) ( 1 n 1 + 1 n 2 ) + z 1 − β p 1 ( 1 − p 1 ) n 1 + p 2 ( 1 − p 2 ) n 2 {\displaystyle {\text{MDE}}=|p_{1}-p_{2}|=z_{1-\alpha /2}{\sqrt {p_{0}(1-p_{0})\left({\frac {1}{n_{1}}}+{\frac {1}{n_{2}}}\right)}}+z_{1-\beta }{\sqrt {{\frac {p_{1}(1-p_{1})}{n_{1}}}+{\frac {p_{2}(1-p_{2})}{n_{2}}}}}} Where: z 1 − α / 2 {\displaystyle z_{1-\alpha /2}} : Critical value for the significance level. z 1 − β {\displaystyle z_{1-\beta }} : Quantile for the desired power. p 0 = p 1 = p 2 {\displaystyle p_{0}=p_{1}=p_{2}} : The common proportion assumed under the null hypothesis. == See also == Normal distribution Standard normal table Standard score Student's t-test == References == == Further reading == Sprinthall, R. C. (2011). Basic Statistical Analysis (9th ed.). Pearson Education. ISBN 978-0-205-05217-2. Casella, G., Berger, R. L. (2002). Statistical Inference. Duxbury Press. ISBN 0-534-24312-6. Douglas C. Montgomery, George C. Runger (2014). Applied Statistics and Probability for Engineers (6th ed.). John Wiley & Sons. ISBN 9781118539712, 9781118645062.
Bartlett's test
In statistics, Bartlett's test, named after Maurice Stevenson Bartlett, is used to test homoscedasticity, that is, whether multiple samples are from populations with equal variances. Some statistical tests, such as the analysis of variance, assume that variances are equal across groups or samples, which can be checked with Bartlett's test. In a Bartlett test, the null hypothesis of equal variances is tested against the alternative that at least two variances differ. The test is based on a statistic whose sampling distribution is approximately a chi-squared distribution with (k − 1) degrees of freedom, where k is the number of random samples, which may vary in size and are each drawn from independent normal distributions. Bartlett's test is sensitive to departures from normality. That is, if the samples come from non-normal distributions, then Bartlett's test may simply be testing for non-normality. Levene's test and the Brown–Forsythe test are alternatives to the Bartlett test that are less sensitive to departures from normality. == Specification == Bartlett's test is used to test the null hypothesis, H0 that all k population variances are equal against the alternative that at least two are different. If there are k samples with sizes n i {\displaystyle n_{i}} and sample variances S i 2 {\displaystyle S_{i}^{2}} then Bartlett's test statistic is χ 2 = ( N − k ) ln ⁡ ( S p 2 ) − ∑ i = 1 k ( n i − 1 ) ln ⁡ ( S i 2 ) 1 + 1 3 ( k − 1 ) ( ∑ i = 1 k ( 1 n i − 1 ) − 1 N − k ) {\displaystyle \chi ^{2}={\frac {(N-k)\ln(S_{p}^{2})-\sum _{i=1}^{k}(n_{i}-1)\ln(S_{i}^{2})}{1+{\frac {1}{3(k-1)}}\left(\sum _{i=1}^{k}({\frac {1}{n_{i}-1}})-{\frac {1}{N-k}}\right)}}} where N = ∑ i = 1 k n i {\displaystyle N=\sum _{i=1}^{k}n_{i}} and S p 2 = 1 N − k ∑ i ( n i − 1 ) S i 2 {\displaystyle S_{p}^{2}={\frac {1}{N-k}}\sum _{i}(n_{i}-1)S_{i}^{2}} is the pooled estimate for the variance. 
The test statistic has approximately a χ k − 1 2 {\displaystyle \chi _{k-1}^{2}} distribution. Thus, the null hypothesis is rejected if χ 2 > χ k − 1 , α 2 {\displaystyle \chi ^{2}>\chi _{k-1,\alpha }^{2}} (where χ k − 1 , α 2 {\displaystyle \chi _{k-1,\alpha }^{2}} is the upper tail critical value for the χ k − 1 2 {\displaystyle \chi _{k-1}^{2}} distribution). Bartlett's test is a modification of the corresponding likelihood ratio test designed to make the approximation to the χ k − 1 2 {\displaystyle \chi _{k-1}^{2}} distribution better (Bartlett, 1937). == Notes == In some sources, the test statistic is written with base-10 logarithms as: χ 2 = 2.3026 ( N − k ) log 10 ⁡ ( S p 2 ) − ∑ i = 1 k ( n i − 1 ) log 10 ⁡ ( S i 2 ) 1 + 1 3 ( k − 1 ) ( ∑ i = 1 k ( 1 n i − 1 ) − 1 N − k ) {\displaystyle \chi ^{2}=2.3026{\frac {(N-k)\log _{10}(S_{p}^{2})-\sum _{i=1}^{k}(n_{i}-1)\log _{10}(S_{i}^{2})}{1+{\frac {1}{3(k-1)}}\left(\sum _{i=1}^{k}({\frac {1}{n_{i}-1}})-{\frac {1}{N-k}}\right)}}} == See also == Box's M test Levene's test Kaiser–Meyer–Olkin test == References == == External links == NIST page on Bartlett's test
Category:Artificial intelligence conferences
Academic conferences related to artificial intelligence, machine learning and pattern recognition.
Amitsur–Levitzki theorem
In algebra, the Amitsur–Levitzki theorem states that the algebra of n × n matrices over a commutative ring satisfies a certain identity of degree 2n. It was proved by Amitsur and Levitzki (1950). In particular, matrix rings are polynomial identity rings such that the smallest identity they satisfy has degree exactly 2n. == Statement == The standard polynomial of degree n is S n ( x 1 , … , x n ) = ∑ σ ∈ S n sgn ( σ ) x σ ( 1 ) ⋯ x σ ( n ) {\displaystyle S_{n}(x_{1},\dots ,x_{n})=\sum _{\sigma \in S_{n}}{\text{sgn}}(\sigma )x_{\sigma (1)}\cdots x_{\sigma (n)}} in non-commuting variables x1, ..., xn, where the sum is taken over all n! elements of the symmetric group Sn. The Amitsur–Levitzki theorem states that for n × n matrices A1, ..., A2n whose entries are taken from a commutative ring, S 2 n ( A 1 , … , A 2 n ) = 0. {\displaystyle S_{2n}(A_{1},\dots ,A_{2n})=0.} == Proofs == Amitsur and Levitzki (1950) gave the first proof. Kostant (1958) deduced the Amitsur–Levitzki theorem from the Koszul–Samelson theorem about primitive cohomology of Lie algebras. Swan (1963) and Swan (1969) gave a simple combinatorial proof as follows. By linearity it is enough to prove the theorem when each matrix has only one nonzero entry, which is 1. In this case each matrix can be encoded as a directed edge of a graph with n vertices. So all matrices together give a graph on n vertices with 2n directed edges. The identity holds provided that for any two vertices A and B of the graph, the number of odd Eulerian paths from A to B is the same as the number of even ones. (Here a path is called odd or even depending on whether its edges taken in order give an odd or even permutation of the 2n edges.) Swan showed that this was the case provided the number of edges in the graph is at least 2n, thus proving the Amitsur–Levitzki theorem. Razmyslov (1974) gave a proof related to the Cayley–Hamilton theorem.
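The identity can be checked numerically for small cases. Below is a self-contained sketch for n = 2 (so degree 2n = 4), using nested lists as integer matrices; the function names are illustrative:

```python
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation (given as a tuple), via its inversion count."""
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def standard_polynomial(mats):
    """S_m(A_1, ..., A_m): sum over all permutations of sgn(s) * A_s(1) ... A_s(m)."""
    n = len(mats[0])
    total = [[0] * n for _ in range(n)]
    for p in permutations(range(len(mats))):
        prod = mats[p[0]]
        for idx in p[1:]:
            prod = mat_mul(prod, mats[idx])
        if perm_sign(p) == -1:
            prod = [[-x for x in row] for row in prod]
        total = mat_add(total, prod)
    return total

# Four arbitrary 2x2 integer matrices: S_4 must vanish by the theorem (n = 2, 2n = 4).
A = [[[1, 2], [3, 4]], [[0, 1], [1, 0]], [[2, 0], [5, 7]], [[1, 1], [0, 3]]]
result = standard_polynomial(A)
```

With only two matrices, S_2(A, B) = AB − BA, which is generally nonzero; the theorem says the cancellation only becomes unavoidable at degree 2n.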
Rosset (1976) gave a short proof using the exterior algebra of a vector space of dimension 2n. Procesi (2015) gave another proof, showing that the Amitsur–Levitzki theorem is the Cayley–Hamilton identity for the generic Grassman matrix. == References == Amitsur, A. S.; Levitzki, Jakob (1950), "Minimal identities for algebras" (PDF), Proceedings of the American Mathematical Society, 1 (4): 449–463, doi:10.1090/S0002-9939-1950-0036751-9, ISSN 0002-9939, JSTOR 2032312, MR 0036751 Amitsur, A. S.; Levitzki, Jakob (1951), "Remarks on Minimal identities for algebras" (PDF), Proceedings of the American Mathematical Society, 2 (2): 320–327, doi:10.2307/2032509, ISSN 0002-9939, JSTOR 2032509 Formanek, E. (2001) [1994], "Amitsur–Levitzki theorem", Encyclopedia of Mathematics, EMS Press Formanek, Edward (1991), The polynomial identities and invariants of n×n matrices, Regional Conference Series in Mathematics, vol. 78, Providence, RI: American Mathematical Society, ISBN 0-8218-0730-7, Zbl 0714.16001 Kostant, Bertram (1958), "A theorem of Frobenius, a theorem of Amitsur–Levitski and cohomology theory", J. Math. Mech., 7 (2): 237–264, doi:10.1512/iumj.1958.7.07019, MR 0092755 Razmyslov, Ju. P. (1974), "Identities with trace in full matrix algebras over a field of characteristic zero", Mathematics of the USSR-Izvestiya, 8 (4): 727, doi:10.1070/IM1974v008n04ABEH002126, ISSN 0373-2436, MR 0506414 Rosset, Shmuel (1976), "A new proof of the Amitsur–Levitski identity", Israel Journal of Mathematics, 23 (2): 187–188, doi:10.1007/BF02756797, ISSN 0021-2172, MR 0401804, S2CID 121625182 Swan, Richard G. (1963), "An application of graph theory to algebra" (PDF), Proceedings of the American Mathematical Society, 14 (3): 367–373, doi:10.2307/2033801, ISSN 0002-9939, JSTOR 2033801, MR 0149468 Swan, Richard G. 
(1969), "Correction to "An application of graph theory to algebra"" (PDF), Proceedings of the American Mathematical Society, 21 (2): 379–380, doi:10.2307/2037008, ISSN 0002-9939, JSTOR 2037008, MR 0255439 Procesi, Claudio (2015), "On the theorem of Amitsur—Levitzki", Israel Journal of Mathematics, 207: 151–154, arXiv:1308.2421, Bibcode:2013arXiv1308.2421P, doi:10.1007/s11856-014-1118-8
Approximation
An approximation is anything that is intentionally similar but not exactly equal to something else. == Etymology and usage == The word approximation is derived from Latin approximatus, from proximus meaning very near and the prefix ad- (ad- before p becomes ap- by assimilation) meaning to. Words like approximate, approximately and approximation are used especially in technical or scientific contexts. In everyday English, words such as roughly or around are used with a similar meaning. It is often found abbreviated as approx. The term can be applied to various properties (e.g., value, quantity, image, description) that are nearly, but not exactly correct; similar, but not exactly the same (e.g., the approximate time was 10 o'clock). Although approximation is most often applied to numbers, it is also frequently applied to such things as mathematical functions, shapes, and physical laws. In science, approximation can refer to using a simpler process or model when the correct model is difficult to use. An approximate model is used to make calculations easier. Approximations might also be used if incomplete information prevents use of exact representations. The type of approximation used depends on the available information, the degree of accuracy required, the sensitivity of the problem to this data, and the savings (usually in time and effort) that can be achieved by approximation. == Mathematics == Approximation theory is a branch of mathematics, and a quantitative part of functional analysis. Diophantine approximation deals with approximations of real numbers by rational numbers. Approximation usually occurs when an exact form or an exact numerical value is unknown or difficult to obtain. However, a known form may exist that represents the real form closely enough that no significant deviation is found.
For example, 1.5 × 106 means that the true value of something being measured is 1,500,000 to the nearest hundred thousand (so the actual value is somewhere between 1,450,000 and 1,550,000); this is in contrast to the notation 1.500 × 106, which means that the true value is 1,500,000 to the nearest thousand (implying that the true value is somewhere between 1,499,500 and 1,500,500). Numerical approximations sometimes result from using a small number of significant digits. Calculations are likely to involve rounding errors and other approximation errors. Log tables, slide rules and calculators produce approximate answers to all but the simplest calculations. The results of computer calculations are normally an approximation expressed in a limited number of significant digits, although they can be programmed to produce more precise results. Approximation can occur when a decimal number cannot be expressed in a finite number of binary digits. Related to approximation of functions is the asymptotic value of a function, i.e. the value as one or more of a function's parameters becomes arbitrarily large. For example, the sum ⁠ k / 2 + k / 4 + k / 8 + ⋯ + k / 2 n {\displaystyle k/2+k/4+k/8+\cdots +k/2^{n}} ⁠ is asymptotically equal to k. No consistent notation is used throughout mathematics and some texts use ≈ to mean approximately equal and ~ to mean asymptotically equal whereas other texts use the symbols the other way around. === Typography === The approximately equals sign, ≈, was introduced by British mathematician Alfred Greenhill in 1892, in his book Applications of Elliptic Functions. ==== LaTeX symbols ==== Typical meanings of LaTeX symbols. ≈ {\displaystyle \approx } (\approx) : approximate equality, like π ≈ 3.14 {\displaystyle \pi \approx 3.14} . ≉ {\displaystyle \not \approx } (\not\approx) : inequality, despite any approximation ( 1 ≉ 2 {\displaystyle 1\not \approx 2} ). 
≃ {\displaystyle \simeq } (\simeq) : function asymptotic equivalence, like f ( n ) ≃ 3 n 2 {\displaystyle f(n)\simeq 3n^{2}} . Thus, π ≃ 3.14 {\displaystyle \pi \simeq 3.14} is wrong under this definition, despite wide use. ∼ {\displaystyle \sim } (\sim) : function proportionality; the f ( n ) {\displaystyle f(n)} used in \simeq is f ( n ) ∼ n 2 {\displaystyle f(n)\sim n^{2}} . ≅ {\displaystyle \cong } (\cong) : figure congruence, like Δ A B C ≅ Δ A ′ B ′ C ′ {\displaystyle \Delta ABC\cong \Delta A'B'C'} . ≂ {\displaystyle \eqsim } (\eqsim) : equal up to a constant. ⪅ {\displaystyle \lessapprox } (\lessapprox) and ⪆ {\displaystyle \gtrapprox } (\gtrapprox) : either an inequality holds or approximate equality. ==== Unicode ==== Approximate equalities denoted by wavy or dotted symbols. == Science == Approximation arises naturally in scientific experiments. The predictions of a scientific theory can differ from actual measurements. This can be because there are factors in the real situation that are not included in the theory. For example, simple calculations may not include the effect of air resistance. Under these circumstances, the theory is an approximation to reality. Differences may also arise because of limitations in the measuring technique. In this case, the measurement is an approximation to the actual value. The history of science shows that earlier theories and laws can be approximations to some deeper set of laws. Under the correspondence principle, a new scientific theory should reproduce the results of older, well-established, theories in those domains where the old theories work. The old theory becomes an approximation to the new theory. Some problems in physics are too complex to solve by direct analysis, or progress could be limited by available analytical tools. Thus, even when the exact representation is known, an approximation may yield a sufficiently accurate solution while reducing the complexity of the problem significantly. 
Physicists often approximate the shape of the Earth as a sphere even though more accurate representations are possible, because many physical characteristics (e.g., gravity) are much easier to calculate for a sphere than for other shapes. Approximation is also used to analyze the motion of several planets orbiting a star. This is extremely difficult due to the complex interactions of the planets' gravitational effects on each other. An approximate solution is effected by performing iterations. In the first iteration, the planets' gravitational interactions are ignored, and the star is assumed to be fixed. If a more precise solution is desired, another iteration is then performed, using the positions and motions of the planets as identified in the first iteration, but adding a first-order gravity interaction from each planet on the others. This process may be repeated until a satisfactorily precise solution is obtained. The use of perturbations to correct for the errors can yield more accurate solutions. Simulations of the motions of the planets and the star also yield more accurate solutions. The most common versions of philosophy of science accept that empirical measurements are always approximations: they do not perfectly represent what is being measured.
The European Commission describes approximation of law as "a unique obligation of membership in the European Union". == See also == == References == == External links == Media related to Approximation at Wikimedia Commons
Glossary of linear algebra
This glossary of linear algebra is a list of definitions and terms relevant to the field of linear algebra, the branch of mathematics concerned with linear equations and their representations as vector spaces. For a glossary related to the generalization of vector spaces through modules, see glossary of module theory. == A == affine transformation A composition of functions consisting of a linear transformation between vector spaces followed by a translation. Equivalently, a function between vector spaces that preserves affine combinations. affine combination A linear combination in which the sum of the coefficients is 1. == B == basis In a vector space, a linearly independent set of vectors spanning the whole vector space. basis vector An element of a given basis of a vector space. bilinear form On vector space V over field K, a bilinear form is a function B : V × V → K {\displaystyle B:V\times V\to K} that is linear in each variable. == C == column vector A matrix with only one column. complex number An element of a complex plane complex plane A linear algebra over the real numbers with basis {1, i }, where i is an imaginary unit coordinate vector The tuple of the coordinates of a vector on a basis. covector An element of the dual space of a vector space (that is, a linear form), identified with an element of the vector space through an inner product. == D == determinant The unique scalar function over square matrices which is multiplicative with respect to matrix multiplication, multilinear in the rows and columns, and takes the value 1 {\displaystyle 1} for the identity matrix. diagonal matrix A matrix in which only the entries on the main diagonal are non-zero. dimension The number of elements of any basis of a vector space. dot product Given two vectors of the same length, the dot product is the sum of the products of their corresponding entries. dual space The vector space of all linear forms on a given vector space.
== E == elementary matrix Square matrix that differs from the identity matrix by at most one entry == H == hyperbolic unit 1. An operator (x, y) → (y, x), reflecting the plane in the 45° diagonal 2. In a linear algebra, a linear map which when composed with itself yields the identity == I == identity matrix A diagonal matrix all of the diagonal elements of which are equal to 1 {\displaystyle 1} . imaginary unit 1. An operator (x, y) → (y, –x), rotating the plane 90° counterclockwise 2. In a linear algebra, a linear map which when composed with itself produces the negative of the identity inverse matrix Of a matrix A {\displaystyle A} , another matrix B {\displaystyle B} such that A {\displaystyle A} multiplied by B {\displaystyle B} and B {\displaystyle B} multiplied by A {\displaystyle A} both equal the identity matrix. isotropic vector In a vector space with a quadratic form, a non-zero vector for which the form is zero. isotropic quadratic form A vector space with a quadratic form which has a null vector. == L == linear algebra 1. The branch of mathematics that deals with vectors, vector spaces, linear transformations and systems of linear equations. 2. A vector space that has a binary operation making it a ring. This linear algebra is also known as an algebra over a field. linear combination A sum, each of whose summands is an appropriate vector times an appropriate scalar (or ring element). linear dependence A linear dependence of a tuple of vectors v → 1 , … , v → n {\textstyle {\vec {v}}_{1},\ldots ,{\vec {v}}_{n}} is a nonzero tuple of scalar coefficients c 1 , … , c n {\textstyle c_{1},\ldots ,c_{n}} for which the linear combination c 1 v → 1 + ⋯ + c n v → n {\textstyle c_{1}{\vec {v}}_{1}+\cdots +c_{n}{\vec {v}}_{n}} equals 0 → {\textstyle {\vec {0}}} . linear equation A polynomial equation of degree one (such as x = 2 y − 7 {\displaystyle x=2y-7} ). 
linear form A linear map from a vector space to its field of scalars linear independence Property of being not linearly dependent. linear map A function between vector spaces which respects addition and scalar multiplication. linear transformation A linear map whose domain and codomain are equal; it is generally supposed to be invertible. == M == matrix Rectangular arrangement of numbers or other mathematical objects. A matrix is written A = (ai, j), where ai, j is the entry at row i and column j. matrix multiplication If a matrix A has the same number of columns as matrix B has rows, then a product C = AB may be formed with ci, j equal to the dot product of row i of A with column j of B. == N == null vector 1. Another term for an isotropic vector. 2. Another term for a zero vector. == O == orthogonality Two vectors u and v are orthogonal with respect to a bilinear form B when B(u,v) = 0. orthonormality A set of vectors is orthonormal when they are all unit vectors and are pairwise orthogonal. orthogonal matrix A real square matrix with rows (or columns) that form an orthonormal set. == R == row vector A matrix with only one row. == S == scalar A scalar is an element of a field used in the definition of a vector space. singular-value decomposition a factorization of an m × n {\displaystyle m\times n} complex matrix M as U Σ V ∗ {\displaystyle \mathbf {U\Sigma V^{*}} } , where U is an m × m {\displaystyle m\times m} complex unitary matrix, Σ {\displaystyle \mathbf {\Sigma } } is an m × n {\displaystyle m\times n} rectangular diagonal matrix with non-negative real numbers on the diagonal, and V is an n × n {\displaystyle n\times n} complex unitary matrix. spectrum Set of the eigenvalues of a matrix. split-complex number An element of a split-complex plane split-complex plane A linear algebra over the real numbers with basis {1, j }, where j is a hyperbolic unit square matrix A matrix having the same number of rows as columns.
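The dot product and matrix multiplication entries can be illustrated by a short sketch that builds each product entry exactly as in the glossary definitions (function names are illustrative):

```python
def dot(u, v):
    """Dot product: sum of products of corresponding entries."""
    return sum(x * y for x, y in zip(u, v))

def mat_mul(A, B):
    """Matrix product C = AB: c[i][j] is the dot product of row i of A with column j of B."""
    cols_B = list(zip(*B))  # transpose view: the columns of B, each as a tuple
    return [[dot(row, col) for col in cols_B] for row in A]
```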
== T == transpose The transpose of an n × m matrix M is an m × n matrix M T obtained by using the rows of M for the columns of M T. == U == unit vector a vector in a normed vector space whose norm is 1, or a Euclidean vector of length one. == V == vector 1. A directed quantity, one with both magnitude and direction. 2. An element of a vector space. vector space A set, whose elements can be added together, and multiplied by elements of a field (this is scalar multiplication); the set must be an abelian group under addition, and the scalar multiplication must be a linear map. == Z == zero vector The additive identity in a vector space. In a normed vector space, it is the unique vector of norm zero. In a Euclidean vector space, it is the unique vector of length zero. == Notes == == References == Curtis, Charles W. (1968) Linear Algebra: an introductory approach, second edition, Allyn & Bacon Dickson, L. E (1914) Linear Algebras via Internet Archive James, Robert C.; James, Glenn (1992). Mathematics Dictionary (5th ed.). Chapman and Hall. ISBN 978-0442007416. Bourbaki, Nicolas (1989). Algebra I. Springer. ISBN 978-3540193739. Williams, Gareth (2014). Linear algebra with applications (8th ed.). Jones & Bartlett Learning.
Jump process
A jump process is a type of stochastic process that has discrete movements, called jumps, with random arrival times, rather than continuous movement, typically modelled as a simple or compound Poisson process. In finance, various stochastic models are used to model the price movements of financial instruments; for example the Black–Scholes model for pricing options assumes that the underlying instrument follows a traditional diffusion process, with continuous, random movements at all scales, no matter how small. John Carrington Cox and Stephen Ross proposed that prices actually follow a 'jump process'. Robert C. Merton extended this approach to a hybrid model known as jump diffusion, which states that the prices have large jumps interspersed with small continuous movements. == See also == Poisson process, an example of a jump process Continuous-time Markov chain (CTMC), an example of a jump process and a generalization of the Poisson process Counting process, an example of a jump process and a generalization of the Poisson process in a different direction than that of CTMCs Interacting particle system, an example of a jump process Kolmogorov equations (continuous-time Markov chains) == References ==
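The compound Poisson process mentioned above can be simulated by drawing exponential inter-arrival times; a minimal sketch, in which the function name, parameters, and Gaussian jump sizes are illustrative assumptions rather than part of any particular model:

```python
import random

def compound_poisson_jumps(rate, horizon, jump_draw, rng):
    """Jump times and cumulative value of a compound Poisson process on (0, horizon].

    Inter-arrival times are exponential(rate); each jump size is drawn by jump_draw.
    Returns a list of (time, cumulative value) pairs, one per jump.
    """
    t, value, path = 0.0, 0.0, []
    while True:
        t += rng.expovariate(rate)  # next arrival after an exponential wait
        if t > horizon:
            return path
        value += jump_draw(rng)
        path.append((t, value))

rng = random.Random(42)
path = compound_poisson_jumps(rate=2.0, horizon=5.0,
                              jump_draw=lambda r: r.gauss(0.0, 1.0), rng=rng)
```

Between the recorded jump times the path is flat; a jump-diffusion model in the spirit of Merton would add a continuous diffusion term between the jumps.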
Topological data analysis
In applied mathematics, topological data analysis (TDA) is an approach to the analysis of datasets using techniques from topology. Extraction of information from datasets that are high-dimensional, incomplete and noisy is generally challenging. TDA provides a general framework to analyze such data in a manner that is insensitive to the particular metric chosen and provides dimensionality reduction and robustness to noise. Beyond this, it inherits functoriality, a fundamental concept of modern mathematics, from its topological nature, which allows it to adapt to new mathematical tools. The initial motivation is to study the shape of data. TDA has combined algebraic topology and other tools from pure mathematics to allow mathematically rigorous study of "shape". The main tool is persistent homology, an adaptation of homology to point cloud data. Persistent homology has been applied to many types of data across many fields. Moreover, its mathematical foundation is also of theoretical importance. The unique features of TDA make it a promising bridge between topology and geometry. == Basic theory == === Intuition === TDA is premised on the idea that the shape of data sets contains relevant information. Real high-dimensional data is typically sparse, and tends to have relevant low dimensional features. One task of TDA is to provide a precise characterization of this fact. For example, the trajectory of a simple predator-prey system governed by the Lotka–Volterra equations forms a closed circle in state space. TDA provides tools to detect and quantify such recurrent motion. Many algorithms for data analysis, including those used in TDA, require setting various parameters. Without prior domain knowledge, the correct collection of parameters for a data set is difficult to choose. The main insight of persistent homology is to use the information obtained from all parameter values by encoding this huge amount of information into an understandable and easy-to-represent form. 
With TDA, there is a mathematical interpretation when the information is a homology group. In general, the assumption is that features that persist for a wide range of parameters are "true" features. Features persisting for only a narrow range of parameters are presumed to be noise, although the theoretical justification for this is unclear. === Early history === Precursors to the full concept of persistent homology appeared gradually over time. In 1990, Patrizio Frosini introduced a pseudo-distance between submanifolds, and later the size function, which on 1-dim curves is equivalent to the 0th persistent homology. Nearly a decade later, Vanessa Robins studied the images of homomorphisms induced by inclusion. Finally, shortly thereafter, Herbert Edelsbrunner et al. introduced the concept of persistent homology together with an efficient algorithm and its visualization as a persistence diagram. Gunnar Carlsson et al. reformulated the initial definition and gave an equivalent visualization method called persistence barcodes, interpreting persistence in the language of commutative algebra. In algebraic topology, persistent homology emerged through the work of Sergey Barannikov on Morse theory. The set of critical values of a smooth Morse function was canonically partitioned into pairs "birth-death", filtered complexes were classified, and their invariants, equivalent to persistence diagram and persistence barcodes, together with the efficient algorithm for their calculation, were described under the name of canonical forms in 1994 by Barannikov. === Concepts === Some widely used concepts are introduced below. Note that some definitions may vary from author to author. A point cloud is often defined as a finite set of points in some Euclidean space, but may be taken to be any finite metric space. The Čech complex of a point cloud is the nerve of the cover of balls of a fixed radius around each point in the cloud.
A persistence module U {\displaystyle \mathbb {U} } indexed by Z {\displaystyle \mathbb {Z} } is a vector space U t {\displaystyle U_{t}} for each t ∈ Z {\displaystyle t\in \mathbb {Z} } , and a linear map u t s : U s → U t {\displaystyle u_{t}^{s}\colon U_{s}\to U_{t}} whenever s ≤ t {\displaystyle s\leq t} , such that u t t = 1 {\displaystyle u_{t}^{t}=1} for all t {\displaystyle t} and u t s u s r = u t r {\displaystyle u_{t}^{s}u_{s}^{r}=u_{t}^{r}} whenever r ≤ s ≤ t . {\displaystyle r\leq s\leq t.} An equivalent definition is a functor from Z {\displaystyle \mathbb {Z} } considered as a partially ordered set to the category of vector spaces. The persistent homology group P H {\displaystyle PH} of a point cloud is the persistence module defined as P H k ( X ) = ∏ H k ( X r ) {\displaystyle PH_{k}(X)=\prod H_{k}(X_{r})} , where X r {\displaystyle X_{r}} is the Čech complex of radius r {\displaystyle r} of the point cloud X {\displaystyle X} and H k {\displaystyle H_{k}} is the homology group. A persistence barcode is a multiset of intervals in R {\displaystyle \mathbb {R} } , and a persistence diagram is a multiset of points in Δ {\displaystyle \Delta } ( := { ( u , v ) ∈ R 2 ∣ u , v ≥ 0 , u ≤ v } {\displaystyle :=\{(u,v)\in \mathbb {R} ^{2}\mid u,v\geq 0,u\leq v\}} ). The Wasserstein distance between two persistence diagrams X {\displaystyle X} and Y {\displaystyle Y} is defined as W p [ L q ] ( X , Y ) := inf φ : X → Y [ ∑ x ∈ X ( ‖ x − φ ( x ) ‖ q ) p ] 1 / p {\displaystyle W_{p}[L_{q}](X,Y):=\inf _{\varphi :X\to Y}\left[\sum _{x\in X}(\Vert x-\varphi (x)\Vert _{q})^{p}\right]^{1/p}} where 1 ≤ p , q ≤ ∞ {\displaystyle 1\leq p,q\leq \infty } and φ {\displaystyle \varphi } ranges over bijections between X {\displaystyle X} and Y {\displaystyle Y} . Please refer to figure 3.1 in Munch for illustration. The bottleneck distance between X {\displaystyle X} and Y {\displaystyle Y} is W ∞ [ L q ] ( X , Y ) := inf φ : X → Y sup x ∈ X ‖ x − φ ( x ) ‖ q . 
{\displaystyle W_{\infty }[L_{q}](X,Y):=\inf _{\varphi :X\to Y}\sup _{x\in X}\Vert x-\varphi (x)\Vert _{q}.} This is a special case of Wasserstein distance, letting p = ∞ {\displaystyle p=\infty } . === Basic property === ==== Structure theorem ==== The first classification theorem for persistent homology appeared in 1994 via Barannikov's canonical forms. The classification theorem interpreting persistence in the language of commutative algebra appeared in 2005: for a finitely generated persistence module C {\displaystyle C} with field F {\displaystyle F} coefficients, H ( C ; F ) ≃ ⨁ i x t i ⋅ F [ x ] ⊕ ( ⨁ j x r j ⋅ ( F [ x ] / ( x s j ⋅ F [ x ] ) ) ) . {\displaystyle H(C;F)\simeq \bigoplus _{i}x^{t_{i}}\cdot F[x]\oplus \left(\bigoplus _{j}x^{r_{j}}\cdot (F[x]/(x^{s_{j}}\cdot F[x]))\right).} Intuitively, the free parts correspond to the homology generators that appear at filtration level t i {\displaystyle t_{i}} and never disappear, while the torsion parts correspond to those that appear at filtration level r j {\displaystyle r_{j}} and last for s j {\displaystyle s_{j}} steps of the filtration (or equivalently, disappear at filtration level s j + r j {\displaystyle s_{j}+r_{j}} ). Persistent homology is visualized through a barcode or persistence diagram. The barcode has its root in abstract mathematics. Namely, the category of finite filtered complexes over a field is semi-simple. Any filtered complex is isomorphic to its canonical form, a direct sum of one- and two-dimensional simple filtered complexes. ==== Stability ==== Stability is desirable because it provides robustness against noise. 
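For tiny diagrams of equal size, the bottleneck distance defined above can be evaluated by brute force over all bijections; the sketch below implements the raw definition only (the function name is illustrative, and practical implementations also allow matching points to the diagonal, which is omitted here):

```python
from itertools import permutations

def bottleneck_brute(X, Y):
    """W_infinity[L_infinity] between two equal-size diagrams (lists of (birth, death)
    pairs), minimising the largest L-infinity displacement over all bijections.
    """
    assert len(X) == len(Y), "this sketch handles only equal-size diagrams"
    best = float("inf")
    for perm in permutations(Y):
        # Cost of this bijection: the worst matched-pair displacement.
        cost = max(max(abs(x[0] - y[0]), abs(x[1] - y[1])) for x, y in zip(X, perm))
        best = min(best, cost)
    return best
```

Brute force costs O(n!) and is only useful for illustrating the definition; production tools use geometric matching algorithms together with diagonal augmentation.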
If X {\displaystyle X} is any space which is homeomorphic to a simplicial complex, and f , g : X → R {\displaystyle f,g:X\to \mathbb {R} } are continuous tame functions, then the persistence vector spaces { H k ( f − 1 ( [ 0 , r ] ) ) } {\displaystyle \{H_{k}(f^{-1}([0,r]))\}} and { H k ( g − 1 ( [ 0 , r ] ) ) } {\displaystyle \{H_{k}(g^{-1}([0,r]))\}} are finitely presented, and W ∞ ( D ( f ) , D ( g ) ) ≤ ‖ f − g ‖ ∞ {\displaystyle W_{\infty }(D(f),D(g))\leq \lVert f-g\rVert _{\infty }} , where W ∞ {\displaystyle W_{\infty }} refers to the bottleneck distance and D {\displaystyle D} is the map taking a continuous tame function to the persistence diagram of its k {\displaystyle k} -th homology. === Workflow === The basic workflow in TDA is: If X {\displaystyle X} is a point cloud, replace X {\displaystyle X} with a nested family of simplicial complexes X r {\displaystyle X_{r}} (such as the Čech or Vietoris-Rips complex). This process converts the point cloud into a filtration of simplicial complexes. Taking the homology of each complex in this filtration gives a persistence module H i ( X r 0 ) → H i ( X r 1 ) → H i ( X r 2 ) → ⋯ {\displaystyle H_{i}(X_{r_{0}})\to H_{i}(X_{r_{1}})\to H_{i}(X_{r_{2}})\to \cdots } Apply the structure theorem to obtain the persistent Betti numbers, persistence diagram, or equivalently, barcode. Graphically speaking, == Computation == The first algorithm over all fields for persistent homology in algebraic topology setting was described by Barannikov through reduction to the canonical form by upper-triangular matrices. The algorithm for persistent homology over F 2 {\displaystyle F_{2}} was given by Edelsbrunner et al. Afra Zomorodian and Carlsson gave the practical algorithm to compute persistent homology over all fields. Edelsbrunner and Harer's book gives general guidance on computational topology. One issue that arises in computation is the choice of complex. 
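For 0-dimensional homology, the workflow above reduces to tracking connected components as the Vietoris–Rips parameter grows, which a union-find structure handles directly; a minimal sketch assuming Euclidean input points and taking the edge length itself as the filtration value (names and conventions are illustrative):

```python
from itertools import combinations
from math import dist

def zeroth_persistence(points):
    """0-dimensional persistence barcode of a point cloud: every point's component
    is born at parameter 0; a bar dies at the length of the edge that first merges
    its component into another, and one component survives forever.
    """
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    bars = []
    # Process edges in order of increasing length (the filtration parameter).
    edges = sorted((dist(p, q), i, j)
                   for (i, p), (j, q) in combinations(enumerate(points), 2))
    for length, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri              # merge: one component dies here
            bars.append((0.0, length))
    bars.append((0.0, float("inf")))     # the surviving component
    return bars
```

Since all 0-dimensional classes are born at parameter 0, the choice of which component to merge into does not affect the barcode; higher-dimensional persistence requires a full boundary-matrix reduction instead.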
The Čech complex and the Vietoris–Rips complex are most natural at first glance; however, their size grows rapidly with the number of data points. The Vietoris–Rips complex is preferred over the Čech complex because its definition is simpler and the Čech complex requires extra effort to define in a general finite metric space. Efficient ways to lower the computational cost of homology have been studied. For example, the α-complex and witness complex are used to reduce the dimension and size of complexes. Recently, Discrete Morse theory has shown promise for computational homology because it can reduce a given simplicial complex to a much smaller cellular complex which is homotopic to the original one. This reduction can in fact be performed as the complex is constructed by using matroid theory, leading to further performance increases. Another recent algorithm saves time by ignoring the homology classes with low persistence. Various software packages are available, such as javaPlex, Dionysus, Perseus, PHAT, DIPHA, GUDHI, Ripser, and TDAstats. A comparison between these tools is done by Otter et al. Giotto-tda is a Python package dedicated to integrating TDA in the machine learning workflow by means of a scikit-learn API. An R package TDA is capable of calculating recently invented concepts like landscape and the kernel distance estimator. The Topology ToolKit is specialized for continuous data defined on manifolds of low dimension (1, 2 or 3), as typically found in scientific visualization. Cubicle is optimized for large (gigabyte-scale) grayscale image data in dimension 1, 2 or 3 using cubical complexes and discrete Morse theory. Another R package, TDAstats, uses the Ripser library to calculate persistent homology. == Visualization == High-dimensional data is impossible to visualize directly. Many methods have been invented to extract a low-dimensional structure from the data set, such as principal component analysis and multidimensional scaling.
However, it is important to note that the problem itself is ill-posed, since many different topological features can be found in the same data set. Thus, the study of visualization of high-dimensional spaces is of central importance to TDA, although it does not necessarily involve the use of persistent homology. However, recent attempts have been made to use persistent homology in data visualization. Carlsson et al. have proposed a general method called MAPPER. It inherits the idea of Jean-Pierre Serre that a covering preserves homotopy. A generalized formulation of MAPPER is as follows: Let X {\displaystyle X} and Z {\displaystyle Z} be topological spaces and let f : X → Z {\displaystyle f\colon X\to Z} be a continuous map. Let U = { U α } α ∈ A {\displaystyle \mathbb {U} =\{U_{\alpha }\}_{\alpha \in A}} be a finite open covering of Z {\displaystyle Z} . The output of MAPPER is the nerve of the pullback cover M ( U , f ) := N ( f − 1 ( U ) ) {\textstyle M(\mathbb {U} ,f):=N(f^{-1}(\mathbb {U} ))} , where each preimage is split into its connected components. This is a very general concept, of which the Reeb graph and merge trees are special cases. This is not quite the original definition. Carlsson et al. choose Z {\displaystyle Z} to be R {\displaystyle \mathbb {R} } or R 2 {\displaystyle \mathbb {R} ^{2}} , and cover it with open sets such that at most two intersect. This restriction means that the output is in the form of a complex network. Because the topology of a finite point cloud is trivial, clustering methods (such as single linkage) are used to produce the analogue of connected sets in the preimage f − 1 ( U ) {\displaystyle f^{-1}(U)} when MAPPER is applied to actual data. Mathematically speaking, MAPPER is a variation of the Reeb graph. If M ( U , f ) {\textstyle M(\mathbb {U} ,f)} is at most one-dimensional, then for each i ≥ 0 {\displaystyle i\geq 0} , H i ( X ) ≃ H 0 ( N ( U ) ; F ^ i ) ⊕ H 1 ( N ( U ) ; F ^ i − 1 ) . 
{\displaystyle H_{i}(X)\simeq H_{0}(N(\mathbb {U} );{\hat {F}}_{i})\oplus H_{1}(N(\mathbb {U} );{\hat {F}}_{i-1}).} The added flexibility also has disadvantages. One problem is instability, in that a small change in the choice of the cover can lead to a major change in the output of the algorithm. Work has been done to overcome this problem. Three successful applications of MAPPER can be found in Carlsson et al. A comment on the applications in this paper by J. Curry is that "a common feature of interest in applications is the presence of flares or tendrils". A free implementation of MAPPER written by Daniel Müllner and Aravindakshan Babu is available online. MAPPER also forms the basis of Ayasdi's AI platform. == Multidimensional persistence == Multidimensional persistence is important to TDA. The concept arises in both theory and practice. The first investigation of multidimensional persistence came early in the development of TDA. Carlsson and Zomorodian introduced the theory of multidimensional persistence and, in collaboration with Singh, introduced the use of tools from symbolic algebra (Gröbner basis methods) to compute MPH modules. Their definition presents multidimensional persistence with n parameters as a Z n {\displaystyle \mathbb {Z} ^{n}} graded module over a polynomial ring in n variables. Tools from commutative and homological algebra are applied to the study of multidimensional persistence in work of Harrington-Otter-Schenck-Tillman. The first application to appear in the literature is a method for shape comparison, similar to the invention of TDA.
The definition of an n-dimensional persistence module in R n {\displaystyle \mathbb {R} ^{n}} is as follows: a vector space V s {\displaystyle V_{s}} is assigned to each point s = ( s 1 , … , s n ) {\displaystyle s=(s_{1},\ldots ,s_{n})} ; a map ρ s t : V s → V t {\displaystyle \rho _{s}^{t}\colon V_{s}\to V_{t}} is assigned whenever s ≤ t {\displaystyle s\leq t} (meaning s i ≤ t i , i = 1 , … , n {\displaystyle s_{i}\leq t_{i},i=1,\ldots ,n} ); and the maps satisfy ρ r t = ρ s t ∘ ρ r s {\displaystyle \rho _{r}^{t}=\rho _{s}^{t}\circ \rho _{r}^{s}} for all r ≤ s ≤ t {\displaystyle r\leq s\leq t} . It is worth noting that there is some controversy over the definition of multidimensional persistence. One of the advantages of one-dimensional persistence is its representability by a diagram or barcode. However, discrete complete invariants of multidimensional persistence modules do not exist. The main reason for this is that the structure of the collection of indecomposables is extremely complicated by Gabriel's theorem in the theory of quiver representations, although a finitely generated n-dim persistence module can be uniquely decomposed into a direct sum of indecomposables due to the Krull-Schmidt theorem. Nonetheless, many results have been established. Carlsson and Zomorodian introduced the rank invariant ρ M ( u , v ) {\displaystyle \rho _{M}(u,v)} , defined as ρ M ( u , v ) = r a n k ( x v − u : M u → M v ) {\displaystyle \rho _{M}(u,v)=\mathrm {rank} (x^{v-u}\colon M_{u}\to M_{v})} , in which M {\displaystyle M} is a finitely generated n-graded module. In one dimension, it is equivalent to the barcode. In the literature, the rank invariant is often referred to as the persistent Betti numbers (PBNs). In many theoretical works, authors have used a more restricted definition, an analogue from sublevel set persistence.
Specifically, the persistence Betti numbers of a function f : X → R k {\displaystyle f:X\to \mathbb {R} ^{k}} are given by the function β f : Δ + → N {\displaystyle \beta _{f}\colon \Delta ^{+}\to \mathbb {N} } , taking each ( u , v ) ∈ Δ + {\displaystyle (u,v)\in \Delta ^{+}} to β f ( u , v ) := r a n k ( H ( X ( f ≤ u ) ) → H ( X ( f ≤ v ) ) ) {\displaystyle \beta _{f}(u,v):=\mathrm {rank} (H(X(f\leq u))\to H(X(f\leq v)))} , where Δ + := { ( u , v ) ∈ R k × R k : u ≤ v } {\displaystyle \Delta ^{+}:=\{(u,v)\in \mathbb {R} ^{k}\times \mathbb {R} ^{k}:u\leq v\}} and X ( f ≤ u ) := { x ∈ X : f ( x ) ≤ u } {\displaystyle X(f\leq u):=\{x\in X:f(x)\leq u\}} . Some basic properties include monotonicity and diagonal jump. Persistent Betti numbers will be finite if X {\displaystyle X} is a compact and locally contractible subspace of R n {\displaystyle \mathbb {R} ^{n}} . Using a foliation method, the k-dim PBNs can be decomposed into a family of 1-dim PBNs by dimensionality reduction. This method has also led to a proof that multi-dim PBNs are stable. The discontinuities of PBNs only occur at points ( u , v ) ( u ≤ v ) {\displaystyle (u,v)(u\leq v)} where either u {\displaystyle u} is a discontinuous point of ρ M ( ⋆ , v ) {\displaystyle \rho _{M}(\star ,v)} or v {\displaystyle v} is a discontinuous point of ρ M ( u , ⋆ ) {\displaystyle \rho _{M}(u,\star )} , under the assumption that f ∈ C 0 ( X , R k ) {\displaystyle f\in C^{0}(X,\mathbb {R} ^{k})} and X {\displaystyle X} is a compact, triangulable topological space. The persistence space, a generalization of the persistence diagram, is defined as the multiset of all points with multiplicity larger than 0 together with the diagonal. It provides a stable and complete representation of PBNs. Ongoing work by Carlsson et al. attempts to give a geometric interpretation of persistent homology, which might provide insights on how to combine machine learning theory with topological data analysis.
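As a toy illustration of the rank invariant, a hypothetical Z²-indexed module on the grid {0,1} × {0,1} can be written down with explicit matrices over the rationals, and ρ_M(u, v) read off as the rank of the structure map from M_u to M_v. This is only a sketch of the definition — actual MPH software relies on the Gröbner-basis methods mentioned above:

```python
from fractions import Fraction

def rank(mat):
    """Rank of a matrix (list of rows) over the rationals,
    by Gaussian elimination with exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Hypothetical Z^2-indexed module on the grid {0,1} x {0,1}:
#   M(0,0) = k^2,  M(1,0) = M(0,1) = k,  M(1,1) = 0.
# The commutativity condition holds trivially since M(1,1) is zero.
A = [[1, 0]]  # structure map M(0,0) -> M(1,0): kills the second generator
B = [[0, 1]]  # structure map M(0,0) -> M(0,1): kills the first generator

# Rank invariant rho_M(u, v) = rank of the structure map M_u -> M_v.
rho = {
    ((0, 0), (1, 0)): rank(A),
    ((0, 0), (0, 1)): rank(B),
    ((0, 0), (1, 1)): 0,  # the target space is zero
}
```

Here two classes are born at the origin; one dies when moving in the first parameter, the other in the second, and nothing survives to (1, 1) — information that no single one-parameter barcode can express.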
The first practical algorithm to compute multidimensional persistence was invented very early. Since then, many other algorithms have been proposed, based on such concepts as discrete Morse theory and finite-sample estimation. == Other persistences == The standard paradigm in TDA is often referred to as sublevel persistence. Apart from multidimensional persistence, much work has been done to extend this special case. === Zigzag persistence === The nonzero maps in a persistence module are restricted by the preorder relation on the indexing category. However, mathematicians have found that unanimity of direction is not essential to many results. "The philosophical point is that the decomposition theory of graph representations is somewhat independent of the orientation of the graph edges". Zigzag persistence is important to the theoretical side. The examples given in Carlsson's review paper to illustrate the importance of functoriality all share some of its features. === Extended persistence and levelset persistence === There have been some attempts to loosen the strict restrictions on the function. Please refer to the Categorification and cosheaves and Impact on mathematics sections for more information. It is natural to extend persistent homology to other basic concepts in algebraic topology, such as cohomology and relative homology/cohomology. An interesting application is the computation of circular coordinates for a data set via the first persistent cohomology group. === Circular persistence === Standard persistent homology studies real-valued functions. Circle-valued maps might be useful as well; "persistence theory for circle-valued maps promises to play the role for some vector fields as does the standard persistence theory for scalar fields", as commented in Dan Burghelea et al.
The main difference is that Jordan cells (very similar in format to the Jordan blocks in linear algebra), which would be zero in the real-valued case, are nontrivial for circle-valued functions; combined with barcodes, they give the invariants of a tame map under moderate conditions. Two techniques they use are Morse-Novikov theory and graph representation theory. More recent results can be found in D. Burghelea et al. For example, the tameness requirement can be replaced by the much weaker condition of continuity. === Persistence with torsion === The proof of the structure theorem relies on the base domain being a field, so few attempts have been made at persistent homology with torsion. Frosini defined a pseudometric on this specific module and proved its stability. One novelty is that it does not depend on a classification theory to define the metric. == Categorification and cosheaves == One advantage of category theory is its ability to lift concrete results to a higher level, showing relationships between seemingly unconnected objects. Peter Bubenik et al. offer a short introduction to category theory fitted for TDA. Category theory is the language of modern algebra, and has been widely used in the study of algebraic geometry and topology. It has been noted that "the key observation of is that the persistence diagram produced by depends only on the algebraic structure carried by this diagram." The use of category theory in TDA has proved to be fruitful. Following the notation of Bubenik et al., the indexing category P {\textstyle P} is any preordered set (not necessarily N {\displaystyle \mathbb {N} } or R {\displaystyle \mathbb {R} } ), the target category D {\displaystyle D} is any category (instead of the commonly used V e c t F {\textstyle \mathrm {Vect} _{\mathbb {F} }} ), and functors P → D {\textstyle P\to D} are called generalized persistence modules in D {\displaystyle D} , over P {\textstyle P} .
One advantage of using category theory in TDA is a clearer understanding of concepts and the discovery of new relationships between proofs. Two examples serve as illustration. Understanding the correspondence between interleaving and matching is of huge importance, since matching was the method used in the beginning (modified from Morse theory). A summary of works can be found in Vin de Silva et al. Many theorems can be proved much more easily in a more intuitive setting. Another example is the relationship between the construction of different complexes from point clouds. It has long been noticed that Čech and Vietoris–Rips complexes are related. Specifically, V r ( X ) ⊂ C 2 r ( X ) ⊂ V 2 r ( X ) {\displaystyle V_{r}(X)\subset C_{{\sqrt {2}}r}(X)\subset V_{2r}(X)} . The essential relationship between Čech and Rips complexes can be seen much more clearly in categorical language. The language of category theory also helps cast results in terms recognizable to the broader mathematical community. The bottleneck distance is widely used in TDA because of the results on stability with respect to it. In fact, the interleaving distance is the terminal object in a poset category of stable metrics on multidimensional persistence modules over a prime field. Sheaves, a central concept in modern algebraic geometry, are intrinsically related to category theory. Roughly speaking, sheaves are the mathematical tool for understanding how local information determines global information. Justin Curry regards level set persistence as the study of fibers of continuous functions. The objects that he studies are very similar to those produced by MAPPER, but with sheaf theory as the theoretical foundation. Although no breakthrough in the theory of TDA has yet used sheaf theory, it is promising, since there are many beautiful theorems in algebraic geometry relating to sheaf theory.
For example, a natural theoretical question is whether different filtration methods result in the same output. == Stability == Stability is of central importance to data analysis, since real data carry noise. By use of category theory, Bubenik et al. have distinguished between soft and hard stability theorems, and proved that the soft cases are formal. Specifically, in the general workflow of TDA, the soft stability theorem asserts that H F {\displaystyle HF} is Lipschitz continuous, and the hard stability theorem asserts that J {\displaystyle J} is Lipschitz continuous. The bottleneck distance is widely used in TDA. The isometry theorem asserts that the interleaving distance d I {\displaystyle d_{I}} is equal to the bottleneck distance. Bubenik et al. have abstracted the definition to that between functors F , G : P → D {\displaystyle F,G\colon P\to D} when P {\textstyle P} is equipped with a sublinear projection or superlinear family, in which case it still remains a pseudometric. Given the remarkable properties of the interleaving distance, we introduce here its general definition (instead of the one first introduced): Let Γ , K ∈ T r a n s P {\displaystyle \Gamma ,K\in \mathrm {Trans_{P}} } (the set of functions from P {\textstyle P} to P {\textstyle P} which are monotone and satisfy x ≤ Γ ( x ) {\displaystyle x\leq \Gamma (x)} for all x ∈ P {\textstyle x\in P} ). A ( Γ , K ) {\displaystyle (\Gamma ,K)} -interleaving between F and G consists of natural transformations φ : F ⇒ G Γ {\displaystyle \varphi \colon F\Rightarrow G\Gamma } and ψ : G ⇒ F K {\displaystyle \psi \colon G\Rightarrow FK} , such that ( ψ Γ ) φ = F η K Γ {\displaystyle (\psi \Gamma )\varphi =F\eta _{K\Gamma }} and ( φ K ) ψ = G η Γ K {\displaystyle (\varphi K)\psi =G\eta _{\Gamma K}} . The two main results are as follows. Let P {\textstyle P} be a preordered set with a sublinear projection or superlinear family.
Let H : D → E {\textstyle H:D\to E} be a functor between arbitrary categories D , E {\textstyle D,E} . Then for any two functors F , G : P → D {\textstyle F,G\colon P\to D} , we have d I ( H F , H G ) ≤ d I ( F , G ) {\textstyle d_{I}(HF,HG)\leq d_{I}(F,G)} . Let P {\textstyle P} be the poset of a metric space Y {\textstyle Y} , and let X {\textstyle X} be a topological space. Let f , g : X → Y {\textstyle f,g\colon X\to Y} (not necessarily continuous) be functions, and let F , G {\textstyle F,G} be the corresponding persistence diagrams. Then d I ( F , G ) ≤ d ∞ ( f , g ) := sup x ∈ X d Y ( f ( x ) , g ( x ) ) {\displaystyle d_{I}(F,G)\leq d_{\infty }(f,g):=\sup _{x\in X}d_{Y}(f(x),g(x))} . These two results summarize many results on the stability of different models of persistence. For the stability theorem of multidimensional persistence, please refer to the subsection on persistence. == Structure theorem == The structure theorem is of central importance to TDA; as commented by G. Carlsson, "what makes homology useful as a discriminator between topological spaces is the fact that there is a classification theorem for finitely generated abelian groups" (see the fundamental theorem of finitely generated abelian groups). The main argument used in the proof of the original structure theorem is the standard structure theorem for finitely generated modules over a principal ideal domain. However, this argument fails if the indexing set is ( R , ≤ ) {\displaystyle (\mathbb {R} ,\leq )} . In general, not every persistence module can be decomposed into intervals. Many attempts have been made at relaxing the restrictions of the original structure theorem. The case of pointwise finite-dimensional persistence modules indexed by a locally finite subset of R {\displaystyle \mathbb {R} } is solved based on the work of Webb. The most notable result is due to Crawley-Boevey, who solved the case of R {\displaystyle \mathbb {R} } .
Crawley-Boevey's theorem states that any pointwise finite-dimensional persistence module is a direct sum of interval modules. To understand the statement of his theorem, some concepts need to be introduced. An interval in ( R , ≤ ) {\displaystyle (\mathbb {R} ,\leq )} is defined as a subset I ⊂ R {\displaystyle I\subset \mathbb {R} } having the property that if r , t ∈ I {\displaystyle r,t\in I} and if there is an s ∈ R {\displaystyle s\in \mathbb {R} } such that r ≤ s ≤ t {\displaystyle r\leq s\leq t} , then s ∈ I {\displaystyle s\in I} as well. An interval module k I {\displaystyle k_{I}} assigns to each element s ∈ I {\displaystyle s\in I} the vector space k {\displaystyle k} and assigns the zero vector space to elements in R ∖ I {\displaystyle \mathbb {R} \setminus I} . All maps ρ s t {\displaystyle \rho _{s}^{t}} are the zero map, unless s , t ∈ I {\displaystyle s,t\in I} and s ≤ t {\displaystyle s\leq t} , in which case ρ s t {\displaystyle \rho _{s}^{t}} is the identity map. Interval modules are indecomposable. Although the result of Crawley-Boevey is a very powerful theorem, it still does not extend to the q-tame case. A persistence module is q-tame if the rank of ρ s t {\displaystyle \rho _{s}^{t}} is finite for all s < t {\displaystyle s<t} . There are examples of q-tame persistence modules that fail to be pointwise finite. However, it turns out that a similar structure theorem still holds if the features that exist only at one index value are removed. This holds because the infinite-dimensional parts at each index value do not persist, due to the finite-rank condition. Formally, the observable category O b {\displaystyle \mathrm {Ob} } is defined as P e r s / E p h {\displaystyle \mathrm {Pers} /\mathrm {Eph} } , in which E p h {\displaystyle \mathrm {Eph} } denotes the full subcategory of P e r s {\displaystyle \mathrm {Pers} } whose objects are the ephemeral modules ( ρ s t = 0 {\displaystyle \rho _{s}^{t}=0} whenever s < t {\displaystyle s<t} ).
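The decomposition into interval modules has a concrete computational payoff: the rank of ρ_s^t is simply the number of intervals containing both s and t. A minimal sketch, with intervals taken half-open as [a, b) and our own naming:

```python
def betti(barcode, s, t):
    """Rank of rho_s^t for a pointwise finite-dimensional module
    decomposed into interval modules k_[a, b): it equals the number
    of bars whose interval contains both s and t."""
    assert s <= t
    return sum(1 for (a, b) in barcode if a <= s and t < b)

# Two overlapping bars: a class alive on [0, 2) and one on [1, 3).
bars = [(0.0, 2.0), (1.0, 3.0)]
```

For instance, betti(bars, 1.5, 1.5) is 2, while betti(bars, 0.5, 2.5) is 0: no single class survives from index 0.5 all the way to 2.5.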
Note that the extended results listed here do not apply to zigzag persistence, since the analogue of a zigzag persistence module over R {\displaystyle \mathbb {R} } is not immediately obvious. == Statistics == Real data is always finite, and so its study requires us to take stochasticity into account. Statistical analysis gives us the ability to separate the true features of the data from artifacts introduced by random noise. Persistent homology has no inherent mechanism to distinguish between low-probability features and high-probability features. One way to apply statistics to topological data analysis is to study the statistical properties of topological features of point clouds. The study of random simplicial complexes offers some insight into statistical topology. Katharine Turner et al. offer a summary of work in this vein. A second way is to study probability distributions on the persistence space. The persistence space B ∞ {\displaystyle B_{\infty }} is ∐ n B n / ∽ {\displaystyle \coprod _{n}B_{n}/{\backsim }} , where B n {\displaystyle B_{n}} is the space of all barcodes containing exactly n {\displaystyle n} intervals and the equivalences are { [ x 1 , y 1 ] , [ x 2 , y 2 ] , … , [ x n , y n ] } ∽ { [ x 1 , y 1 ] , [ x 2 , y 2 ] , … , [ x n − 1 , y n − 1 ] } {\displaystyle \{[x_{1},y_{1}],[x_{2},y_{2}],\ldots ,[x_{n},y_{n}]\}\backsim \{[x_{1},y_{1}],[x_{2},y_{2}],\ldots ,[x_{n-1},y_{n-1}]\}} if x n = y n {\displaystyle x_{n}=y_{n}} . This space is fairly complicated; for example, it is not complete under the bottleneck metric. The first attempt to study it was made by Yuriy Mileyko et al. The space of persistence diagrams D p {\displaystyle D_{p}} in their paper is defined as D p := { d ∣ ∑ x ∈ d ( 2 inf y ∈ Δ ‖ x − y ‖ ) p < ∞ } {\displaystyle D_{p}:=\left\{d\mid \sum _{x\in d}\left(2\inf _{y\in \Delta }\lVert x-y\rVert \right)^{p}<\infty \right\}} where Δ {\displaystyle \Delta } is the diagonal line in R 2 {\displaystyle \mathbb {R} ^{2}} . 
A nice property is that D p {\displaystyle D_{p}} is complete and separable in the Wasserstein metric W p ( u , v ) = ( inf γ ∈ Γ ( u , v ) ∫ X × X ρ p ( x , y ) d γ ( x , y ) ) 1 / p {\displaystyle W_{p}(u,v)=\left(\inf _{\gamma \in \Gamma (u,v)}\int _{\mathbb {X} \times \mathbb {X} }\rho ^{p}(x,y)\,\mathrm {d} \gamma (x,y)\right)^{1/p}} . Expectation, variance, and conditional probability can be defined in the Fréchet sense. This allows many statistical tools to be ported to TDA. Work on null hypothesis significance tests, confidence intervals, and robust estimators marks notable steps in this direction. A third way is to consider the cohomology of probabilistic spaces or statistical systems directly. These are called information structures and basically consist of the triple ( Ω , Π , P {\displaystyle \Omega ,\Pi ,P} ) of sample space, random variables, and probability laws. Random variables are considered as partitions of the n atomic probabilities (seen as a probability (n-1)-simplex, | Ω | = n {\displaystyle |\Omega |=n} ) on the lattice of partitions ( Π n {\displaystyle \Pi _{n}} ). The random variables, or modules of measurable functions, provide the cochain complexes, while the coboundary is considered as the general homological algebra first discovered by Gerhard Hochschild, with a left action implementing the action of conditioning. The first cocycle condition corresponds to the chain rule of entropy, allowing one to derive, uniquely up to a multiplicative constant, Shannon entropy as the first cohomology class. The consideration of a deformed left action generalises the framework to Tsallis entropies. The information cohomology is an example of a ringed topos. Multivariate k-mutual informations appear in coboundary expressions, and their vanishing, related to the cocycle condition, gives equivalent conditions for statistical independence. Minima of mutual informations, also called synergy, give rise to interesting independence configurations analogous to homotopical links.
Because of its combinatorial complexity, only the simplicial subcase of the cohomology and of information structures has been investigated on data. Applied to data, these cohomological tools quantify statistical dependences and independences, including Markov chains and conditional independence, in the multivariate case. Notably, mutual informations generalize the correlation coefficient and covariance to non-linear statistical dependences. These approaches were developed independently and are only indirectly related to persistence methods, but may be roughly understood in the simplicial case using the theorem of Hu Kuo Ting, which establishes a one-to-one correspondence between mutual-information functions and finite measurable functions of a set with the intersection operator, allowing the construction of the Čech complex skeleton. Information cohomology offers some direct interpretation and application in terms of neuroscience (neural assembly theory and qualitative cognition), statistical physics, and deep neural networks, for which the structure and learning algorithm are imposed by the complex of random variables and the information chain rule. Persistence landscapes, introduced by Peter Bubenik, are a different way to represent barcodes, more amenable to statistical analysis. The persistence landscape of a persistence module M {\displaystyle M} is defined as a function λ : N × R → R ¯ {\displaystyle \lambda :\mathbb {N} \times \mathbb {R} \to {\bar {\mathbb {R} }}} , λ ( k , t ) := sup ( m ≥ 0 ∣ β t − m , t + m ≥ k ) {\displaystyle \lambda (k,t):=\sup(m\geq 0\mid \beta ^{t-m,t+m}\geq k)} , where R ¯ {\displaystyle {\bar {\mathbb {R} }}} denotes the extended real line and β a , b = d i m ( i m ( M ( a ≤ b ) ) ) {\displaystyle \beta ^{a,b}=\mathrm {dim} (\mathrm {im} (M(a\leq b)))} . 
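Concretely, if the module is given by a barcode of pairs (a, b), then λ(k, t) equals the k-th largest value of the "tent" function max(0, min(t − a, b − t)); the sketch below uses this equivalent formulation (the function name is ours):

```python
def landscape(barcode, k, t):
    """Persistence landscape lambda(k, t) of a barcode of pairs (a, b):
    the k-th largest value of the tent function max(0, min(t - a, b - t)),
    i.e. sup{m >= 0 : beta^{t-m, t+m} >= k}."""
    tents = sorted(
        (max(0.0, min(t - a, b - t)) for (a, b) in barcode),
        reverse=True,
    )
    return tents[k - 1] if k <= len(tents) else 0.0
```

For the barcode [(0, 4), (1, 3)], the first landscape function peaks with λ(1, 2) = 2 and the second with λ(2, 2) = 1.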
The space of persistence landscapes is very nice: it inherits all the good properties of the barcode representation (stability, easy representation, etc.), but statistical quantities can be readily defined, and some problems in Y. Mileyko et al.'s work, such as the non-uniqueness of expectations, can be overcome. Effective algorithms for computation with persistence landscapes are available. Another approach is to use variants of persistence: image, kernel, and cokernel persistence. == Applications == === Classification of applications === More than one way exists to classify the applications of TDA. Perhaps the most natural way is by field. A very incomplete list of successful applications includes data skeletonization, shape study, graph reconstruction, image analysis, materials, progression analysis of disease, sensor networks, signal analysis, the cosmic web, complex networks, fractal geometry, viral evolution, propagation of contagions on networks, bacteria classification using molecular spectroscopy, super-resolution microscopy, hyperspectral imaging in physical chemistry, remote sensing, feature selection, and early warning signs of financial crashes. Another way is to distinguish the techniques, as G. Carlsson does: one being the study of homological invariants of data on individual data sets, and the other the use of homological invariants in the study of databases where the data points themselves have geometric structure. === Impact on mathematics === Topological data analysis and persistent homology have had impacts on Morse theory. Morse theory has played a very important role in the theory of TDA, including on computation. Some work in persistent homology has extended results about Morse functions to tame functions or even to continuous functions. A forgotten result of R. Deheuvels, long before the invention of persistent homology, extends Morse theory to all continuous functions.
One recent result is that the category of Reeb graphs is equivalent to a particular class of cosheaves. This is motivated by theoretical work in TDA, since the Reeb graph is related to Morse theory and MAPPER is derived from it. The proof of this theorem relies on the interleaving distance. Persistent homology is closely related to spectral sequences. In particular, the algorithm bringing a filtered complex to its canonical form permits much faster calculation of spectral sequences than the standard procedure of calculating E p , q r {\displaystyle E_{p,q}^{r}} groups page by page. Zigzag persistence may turn out to be of theoretical importance to spectral sequences. === DONUT: A Database of TDA Applications === The Database of Original & Non-Theoretical Uses of Topology (DONUT) is a database of scholarly articles featuring practical applications of topological data analysis to various areas of science. DONUT was started in 2017 by Barbara Giunti, Janis Lazovskis, and Bastian Rieck, and as of October 2023 contains 447 articles. DONUT was featured in the November 2023 issue of the Notices of the American Mathematical Society. === Applications to Adversarial ML === The stability of topological features under small perturbations has been applied to make graph neural networks robust against adversaries. Arafat et al. proposed a robustness framework which systematically integrates both local and global topological graph feature representations, the impact of which is controlled by a robust regularized topological loss. Given the attacker's budget, they derived stability guarantees on the node representations, establishing an important connection between topological stability and adversarial ML.
== See also == Dimensionality reduction Data mining Computer vision Computational topology Discrete Morse theory Shape analysis (digital geometry) Size theory Algebraic topology Topological deep learning == References == == Further reading == === Brief Introductions === Lesnick, Michael (2013). "Studying the Shape of Data Using Topology". Institute for Advanced Study. Source Material for Topological Data Analysis by Mikael Vejdemo-Johansson === Monograph === Oudot, Steve Y. (2015). Persistence Theory: From Quiver Representations to Data Analysis. American Mathematical Society. ISBN 978-1-4704-2545-6. === Textbooks on Topology === Hatcher, Allen (2002). Algebraic Topology. Cambridge University Press. ISBN 0-521-79540-0. Edelsbrunner, Herbert; Harer, John (2010). Computational Topology: An Introduction. American Mathematical Society. ISBN 9780821849255. Elementary Applied Topology, by Robert Ghrist == External links == Database of Original & Non-Theoretical Uses of Topology (DONUT) === Video Lectures === Introduction to Persistent Homology and Topology for Data Analysis, by Matthew Wright The Shape of Data, by Gunnar Carlsson === Other Resources of TDA === Applied Topology, by Stanford Applied algebraic topology research network, by the Institute for Mathematics and its Applications
Coefficient matrix
In linear algebra, a coefficient matrix is a matrix consisting of the coefficients of the variables in a set of linear equations. The matrix is used in solving systems of linear equations. == Coefficient matrix == In general, a system with m linear equations and n unknowns can be written as a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = b 1 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = b 2 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = b m {\displaystyle {\begin{aligned}a_{11}x_{1}+a_{12}x_{2}+\cdots +a_{1n}x_{n}&=b_{1}\\a_{21}x_{1}+a_{22}x_{2}+\cdots +a_{2n}x_{n}&=b_{2}\\&\;\;\vdots \\a_{m1}x_{1}+a_{m2}x_{2}+\cdots +a_{mn}x_{n}&=b_{m}\end{aligned}}} where x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} are the unknowns and the numbers a 11 , a 12 , … , a m n {\displaystyle a_{11},a_{12},\ldots ,a_{mn}} are the coefficients of the system. The coefficient matrix is the m × n matrix with the coefficient aij as the (i, j)th entry: [ a 11 a 12 ⋯ a 1 n a 21 a 22 ⋯ a 2 n ⋮ ⋮ ⋱ ⋮ a m 1 a m 2 ⋯ a m n ] {\displaystyle {\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}}} Then the above set of equations can be expressed more succinctly as A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } where A is the coefficient matrix and b is the column vector of constant terms. == Relation of its properties to properties of the equation system == By the Rouché–Capelli theorem, the system of equations is inconsistent, meaning it has no solutions, if the rank of the augmented matrix (the coefficient matrix augmented with an additional column consisting of the vector b) is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank r equals the number n of variables. 
Otherwise the general solution has n – r free parameters; hence in such a case there is an infinitude of solutions, which can be found by imposing arbitrary values on n – r of the variables and solving the resulting system for its unique solution; different choices of which variables to fix, and different fixed values of them, give different solutions of the system. == Dynamic equations == A first-order matrix difference equation with constant term can be written as y t + 1 = A y t + c , {\displaystyle \mathbf {y} _{t+1}=A\mathbf {y} _{t}+\mathbf {c} ,} where A is n × n and y and c are n × 1. This system converges to its steady-state level of y if and only if the absolute values of all n eigenvalues of A are less than 1. A first-order matrix differential equation with constant term can be written as d y d t = A y ( t ) + c . {\displaystyle {\frac {d\mathbf {y} }{dt}}=A\mathbf {y} (t)+\mathbf {c} .} This system is stable if and only if all n eigenvalues of A have negative real parts. == References ==
Progressively measurable process
In mathematics, progressive measurability is a property in the theory of stochastic processes. A progressively measurable process, while defined quite technically, is important because it implies that the stopped process is measurable. Being progressively measurable is a strictly stronger property than being an adapted process. Progressively measurable processes are important in the theory of Itô integrals. == Definition == Let

( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )} be a probability space;
( X , A ) {\displaystyle (\mathbb {X} ,{\mathcal {A}})} be a measurable space, the state space;
{ F t ∣ t ≥ 0 } {\displaystyle \{{\mathcal {F}}_{t}\mid t\geq 0\}} be a filtration of the sigma algebra F {\displaystyle {\mathcal {F}}} ;
X : [ 0 , ∞ ) × Ω → X {\displaystyle X:[0,\infty )\times \Omega \to \mathbb {X} } be a stochastic process (the index set could be [ 0 , T ] {\displaystyle [0,T]} or N 0 {\displaystyle \mathbb {N} _{0}} instead of [ 0 , ∞ ) {\displaystyle [0,\infty )} );
B o r e l ( [ 0 , t ] ) {\displaystyle \mathrm {Borel} ([0,t])} be the Borel sigma algebra on [ 0 , t ] {\displaystyle [0,t]} .

The process X {\displaystyle X} is said to be progressively measurable (or simply progressive) if, for every time t {\displaystyle t} , the map [ 0 , t ] × Ω → X {\displaystyle [0,t]\times \Omega \to \mathbb {X} } defined by ( s , ω ) ↦ X s ( ω ) {\displaystyle (s,\omega )\mapsto X_{s}(\omega )} is B o r e l ( [ 0 , t ] ) ⊗ F t {\displaystyle \mathrm {Borel} ([0,t])\otimes {\mathcal {F}}_{t}} -measurable. This implies that X {\displaystyle X} is F t {\displaystyle {\mathcal {F}}_{t}} -adapted.
A subset P ⊆ [ 0 , ∞ ) × Ω {\displaystyle P\subseteq [0,\infty )\times \Omega } is said to be progressively measurable if the process X s ( ω ) := χ P ( s , ω ) {\displaystyle X_{s}(\omega ):=\chi _{P}(s,\omega )} is progressively measurable in the sense defined above, where χ P {\displaystyle \chi _{P}} is the indicator function of P {\displaystyle P} . The set of all such subsets P {\displaystyle P} forms a sigma algebra on [ 0 , ∞ ) × Ω {\displaystyle [0,\infty )\times \Omega } , denoted by P r o g {\displaystyle \mathrm {Prog} } , and a process X {\displaystyle X} is progressively measurable in the sense of the previous paragraph if, and only if, it is P r o g {\displaystyle \mathrm {Prog} } -measurable. == Properties == It can be shown that L 2 ( B ) {\displaystyle L^{2}(B)} , the space of stochastic processes X : [ 0 , T ] × Ω → R n {\displaystyle X:[0,T]\times \Omega \to \mathbb {R} ^{n}} for which the Itô integral ∫ 0 T X t d B t {\displaystyle \int _{0}^{T}X_{t}\,\mathrm {d} B_{t}} with respect to Brownian motion B {\displaystyle B} is defined, is the set of equivalence classes of P r o g {\displaystyle \mathrm {Prog} } -measurable processes in L 2 ( [ 0 , T ] × Ω ; R n ) {\displaystyle L^{2}([0,T]\times \Omega ;\mathbb {R} ^{n})} . Every adapted process with left- or right-continuous paths is progressively measurable. Consequently, every adapted process with càdlàg paths is progressively measurable. Every measurable and adapted process has a progressively measurable modification. == References ==
Force control
Force control is the control of the force with which a machine or the manipulator of a robot acts on an object or its environment. By controlling the contact force, damage to the machine as well as to the objects to be processed, and injuries when handling people, can be prevented. In manufacturing tasks, it can compensate for errors and reduce wear by maintaining a uniform contact force. Force control achieves more consistent results than position control, which is also used in machine control. Force control can be used as an alternative to the usual motion control, but is usually used in a complementary way, in the form of hybrid control concepts. The acting force for control is usually measured via force transducers or estimated via the motor current. Force control has been the subject of research for almost three decades and is increasingly opening up further areas of application thanks to advances in sensor and actuator technology and new control concepts. Force control is particularly suitable for contact tasks that serve to mechanically process workpieces, but it is also used in telemedicine, service robotics and the scanning of surfaces. For force measurement, force sensors exist that can measure forces and torques in all three spatial directions. Alternatively, the forces can also be estimated without sensors, e.g. on the basis of the motor currents. Indirect force control, by modeling the robot as a mechanical resistance (impedance), and direct force control, in parallel or hybrid concepts, are used as control concepts. Adaptive approaches, fuzzy controllers and machine learning for force control are currently the subject of research. == General == Controlling the contact force between a manipulator and its environment is an increasingly important task in mechanical manufacturing as well as in industrial and service robotics. One motivation for the use of force control is safety for man and machine.
For various reasons, movements of the robot or machine parts may be blocked by obstacles while the program is running. In service robotics these can be moving objects or people; in industrial robotics, problems can occur with cooperating robots, changing work environments or an inaccurate environmental model. If the trajectory is misaligned in classical motion control and it is thus not possible to approach the programmed robot pose(s), the motion control will increase the manipulated variable - usually the motor current - in order to correct the position error. The increase of the manipulated variable can have the following effects: The obstacle is removed or damaged/destroyed. The machine is damaged or destroyed. The manipulated variable limits are exceeded and the robot controller switches off. A force control system can prevent this by limiting the maximum force of the machine in these cases, thus avoiding damage or making collisions detectable at an early stage. In mechanical manufacturing tasks, unevenness of the workpiece often leads to problems with motion control. As can be seen in the adjacent figure, under position control (red) surface unevenness causes the tool to penetrate too far into the surface at P 1 ′ {\displaystyle P'_{1}} or to lose contact with the workpiece at P 2 ′ {\displaystyle P'_{2}} . This results, for example, in an alternating force effect on the workpiece and tool during grinding and polishing. Force control (green) is useful here, as it ensures uniform material removal through constant contact with the workpiece.
In tasks with potential contact, the primary process variable is the position of the machine or its parts. Larger contact forces between machine and environment can occur due to a dynamic environment or an inaccurate environmental model. In this case, the machine should yield to the environment and avoid large contact forces. The main applications of force control today are mechanical manufacturing operations. This means in particular manufacturing tasks such as grinding, polishing and deburring as well as force-controlled processes such as controlled joining, bending and pressing of bolts into prefabricated bores. Another common use of force control is scanning unknown surfaces. Here, force control is used to set a constant contact pressure in the normal direction of the surface and the scanning head is moved in the surface direction via position control. The surface can then be described in Cartesian coordinates via direct kinematics. Other applications of force control with potential contact can be found in medical technology and cooperating robots. Robots used in telemedicine, i.e. robot-assisted medical operations, can avoid injuries more effectively via force control. In addition, direct feedback of the measured contact forces to the operator by means of a force feedback control device is of great interest here. Possible applications for this extend to internet-based teleoperations. In principle, force control is also useful wherever machines and robots cooperate with each other or with humans, as well as in environments that are dynamic or cannot be described exactly. Here, force control helps to deal with obstacles and deviations in the environmental model and to avoid damage. == History == The first important work on force control was published in 1980 by John Kenneth Salisbury at Stanford University. In it, he describes a method for active stiffness control, a simple form of impedance control.
However, the method does not yet allow a combination with motion control; instead, force control is performed in all spatial directions, so the position of the surface must be known. Because of the lower performance of robot controllers at that time, force control could only be performed on mainframe computers, achieving a controller cycle of ≈100 ms. In 1981, Raibert and Craig presented a paper on hybrid force/position control which is still important today. In this paper, they describe a method in which a matrix (separation matrix) is used to explicitly specify for each spatial direction whether motion or force control is to be used. Raibert and Craig merely sketch the controller concepts and assume them to be feasible. In 1989, Koivo presented an extended exposition of the concepts of Raibert and Craig. Precise knowledge of the surface position is still necessary here, which still does not allow for the typical tasks of force control today, such as scanning surfaces. Force control has been the subject of intense research over the past two decades and has made great strides with the advancement of sensor technology and control algorithms. For some years now, the major automation technology manufacturers have been offering software and hardware packages for their controllers to allow force control. Modern machine controllers are capable of force control in one spatial direction in real time with a cycle time of less than 10 ms. == Force measurement == To close the force control loop in the sense of a closed-loop control, the instantaneous value of the contact force must be known. The contact force can either be measured directly or estimated. === Direct force measurement === The trivial approach to force control is the direct measurement of the occurring contact forces via force/torque sensors at the end effector of the machine or at the wrist of the industrial robot.
Force/torque sensors measure the occurring forces by measuring the deformation at the sensor. The most common way to measure deformation is by means of strain gauges. In addition to the widely used strain gauges made of variable electrical resistances, there are also other versions that use piezoelectric, optical or capacitive principles for measurement. In practice, however, they are only used for special applications. Capacitive strain gauges, for example, can also be used in the high-temperature range above 1000 °C. Strain gauges are designed to have as linear a relationship as possible between strain and electrical resistance within the working range. In addition, several possibilities exist to reduce measurement errors and interference. To exclude temperature influences and increase measurement reliability, two strain gauges can be arranged in a complementary manner. Modern force/torque sensors measure both forces and torques in all three spatial directions and are available for almost any measuring range. The accuracy is usually in the per mil range of the maximum measured value. The sampling rates of the sensors are in the range of about 1 kHz. An extension of the six-axis force/torque sensors are 12- and 18-axis sensors which, in addition to the six force and torque components, are also capable of measuring six velocity and six acceleration components. === Six-axis force/torque sensor === In modern applications, so-called six-axis force/torque sensors are frequently used. These are mounted between the robot hand and the end effector and can record both forces and torques in all three spatial directions. For this purpose, they are equipped with six or more strain gauges (or strain measurement bridges) that record deformations in the micrometer range. These deformations are converted into three force and three torque components via a calibration matrix.
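The conversion from raw strain readings to a force/torque vector described above is a matrix-vector product. A small sketch with an invented calibration matrix (a real sensor's matrix comes from factory calibration of that individual sensor):

```python
import numpy as np

# Hypothetical 6x6 calibration matrix: identity plus small cross-coupling
# terms between channels. Real matrices are determined experimentally.
C = np.eye(6) + 0.01 * np.ones((6, 6))

# Raw readings of the six strain gauges (arbitrary units, made up).
strains = np.array([1.2, -0.4, 0.0, 0.3, 0.0, -0.1])

# Resulting wrench: [Fx, Fy, Fz, Tx, Ty, Tz] at the sensor frame.
wrench = C @ strains
```

The off-diagonal entries model the mechanical cross-talk between gauges that the calibration compensates for.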
Force/torque sensors contain a digital signal processor that continuously acquires and filters the sensor data (strains) in parallel, calculates the measurement data (forces/torques) and makes it available via the sensor's communication interface. The measured values correspond to the forces at the sensor and usually still have to be converted into the forces and torques at the end effector or tool via a suitable transformation. Since force/torque sensors are still relatively expensive (between €4,000 and €15,000) and very sensitive to overloads and disturbances, they - and thus force control - have been adopted only reluctantly in industry. Indirect force measurement or estimation is one solution, allowing force control without costly and disturbance-prone force sensors. === Force estimation === A cost-saving alternative to direct force measurement is force estimation (also known as "indirect force measurement"). This makes it possible to dispense with the use of force/torque sensors. In addition to cost savings, dispensing with these sensors has other advantages: Force sensors are usually the weakest link in the mechanical chain of the machine or robot system, so dispensing with them brings greater stability and less susceptibility to mechanical faults. In addition, dispensing with force/torque sensors brings greater safety, since there is no need for sensor cables to be routed out and protected directly at the manipulator's wrist. A common method for indirect force measurement or force estimation is the measurement of the motor currents applied for motion control. With some restrictions, these are proportional to the torque applied to the driven robot axis. Adjusted for gravitational, inertial and frictional effects, the motor currents are largely proportional to the torques of the individual axes. The contact force at the end effector can be determined from the torques thus known.
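In static contact, joint torques and end-effector force are linked by the transpose of the manipulator Jacobian, τ = JᵀF, so once the motor currents have been converted to torques and adjusted for gravity, friction and inertia, the contact force can be recovered. A sketch for an invented planar two-link arm (link lengths, joint angles and torques are all illustrative):

```python
import numpy as np

# Planar 2-link arm with unit link lengths and made-up joint angles.
l1 = l2 = 1.0
q1, q2 = 0.3, 0.5

# Geometric Jacobian of the end-effector position (standard planar form).
J = np.array([
    [-l1*np.sin(q1) - l2*np.sin(q1 + q2), -l2*np.sin(q1 + q2)],
    [ l1*np.cos(q1) + l2*np.cos(q1 + q2),  l2*np.cos(q1 + q2)],
])

# Joint torques, assumed already adjusted for gravity, friction and inertia.
tau = np.array([2.0, 1.0])

# Static relation tau = J^T F  =>  F = pinv(J^T) tau
F = np.linalg.pinv(J.T) @ tau
```

The pseudoinverse handles the general (possibly non-square) case; near singular configurations the estimate becomes ill-conditioned, which is one of the restrictions mentioned above.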
=== Separation of dynamic and static forces === During force measurement and force estimation, filtering of the sensor signals may be necessary. Numerous side effects and secondary forces can occur which do not correspond to the measurement of the contact force. This is especially true if a larger load mass is mounted on the manipulator. This interferes with the force measurement when the manipulator moves with high accelerations. To be able to adjust the measurement for side effects, both an accurate dynamic model of the machine and a model or estimate of the load must be available. This estimate can be determined via reference movements (free movement without object contact). After estimating the load, the measurement or estimate of the forces can be adjusted for Coriolis, centripetal and centrifugal forces, gravitational and frictional effects, and inertia. Adaptive approaches can also be used here to continuously adjust the estimate of the load. == Control concepts == Various control concepts are used for force control. Depending on the desired behavior of the system, a distinction is made between the concepts of direct force control and indirect control via specification of compliance or mechanical impedance. As a rule, force control is combined with motion control. Concepts for force control have to consider the problem of coupling between force and position: If the manipulator is in contact with the environment, a change of the position also means a change of the contact force. === Impedance control === Impedance control, or compliance control, regulates the compliance of the system, i.e., the link between force and position upon object contact. Compliance is defined in the literature as a "measure of the robot's ability to counteract contact forces." There are passive and active approaches to this. Here, the compliance of the robot system is modeled as mechanical impedance, which describes the relationship between applied force and resulting velocity. 
Here, the robot's machine or manipulator is considered as a mechanical resistance with positional constraints imposed by the environment. Accordingly, the causality of mechanical impedance describes that a movement of the robot results in a force. In mechanical admittance, on the other hand, a force applied to the robot results in a motion. ==== Passive impedance control ==== Passive compliance control (also known as compliance control) does not require force measurement because there is no explicit force control. Instead, the manipulator and/or end effector is designed flexibly in a way that can minimize contact forces that occur during the task to be performed. Typical applications include insertion and gripping operations. The end effector is designed in such a way that it allows translational and rotational deviations orthogonal to the gripping or insertion direction, but has high stiffness in the gripping or insertion direction. The figure opposite shows a so-called Remote Center of Compliance (RCC) that makes this possible. As an alternative to an RCC, the entire machine can also be made structurally elastic. Passive impedance control is a very good solution in terms of system dynamics, since there is no latency due to the control. However, passive compliance control is often limited by the mechanical specification of the end effector in the task and cannot be readily applied to different and changing tasks or environmental conditions. ==== Active impedance control ==== Active compliance control refers to the control of the manipulator based on a deviation of the end effector. This is particularly suitable for guiding robots by an operator, for example as part of a teach-in process. Active compliance control is based on the idea of representing the system of machine and environment as a spring-damper-mass system.
The force F {\displaystyle F} and the motion (position x ( t ) {\displaystyle x(t)\!\,} , velocity x ˙ ( t ) {\displaystyle {\dot {x}}(t)} , and acceleration x ¨ ( t ) {\displaystyle {\ddot {x}}(t)} ) are directly related via the spring-damper-mass equation: F ( t ) = c ⋅ x ( t ) + d ⋅ x ˙ ( t ) + m ⋅ x ¨ ( t ) {\displaystyle F(t)=c\cdot x(t)+d\cdot {\dot {x}}(t)+m\cdot {\ddot {x}}(t)} The compliance or mechanical impedance of the system is determined by the stiffness c {\displaystyle c} , the damping d {\displaystyle d} and the inertia m {\displaystyle m} and can be influenced by these three variables. The control is given a mechanical target impedance via these three variables, which is achieved by the machine control. The figure shows the block diagram of a force-based impedance control. The impedance block in the diagram represents the three components c {\displaystyle c} , d {\displaystyle d} and m {\displaystyle m} mentioned above. A position-based impedance control can be designed analogously with internal position or motion control. Alternatively and analogously, the compliance (admittance) can be controlled instead of the resistance. === Direct force control === The above concepts are so-called indirect force control, since the contact force is not explicitly specified as a command variable, but is determined indirectly via the controller parameters damping, stiffness and (virtual) mass. Direct force control is presented below. Direct force control uses the desired force as a setpoint within a closed control loop. It is implemented as a parallel force/position control in the form of a cascade control or as a hybrid force/position control in which switching takes place between position and force control. ==== Parallel force/position control ==== One possibility for force control is parallel force/position control.
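The spring-damper-mass relation above can be illustrated with a minimal simulation: under a constant external force, the target impedance settles at the static deflection x = F/c. All parameter values below are illustrative, not taken from any particular system:

```python
# Explicit-Euler simulation of the target impedance F = c*x + d*x' + m*x''.
c, d, m = 100.0, 40.0, 1.0     # stiffness, damping, (virtual) mass - made up
F_ext = 10.0                   # constant external force in N

x, v = 0.0, 0.0                # initial position and velocity
dt = 0.001
for _ in range(10000):         # 10 s of simulated time
    a = (F_ext - c*x - d*v) / m   # acceleration from the impedance equation
    v += a * dt
    x += v * dt

# At steady state x'' = x' = 0, so the model predicts x -> F_ext / c = 0.1 m.
```

Choosing a softer stiffness c makes the controlled machine yield further under the same contact force, which is exactly the design freedom impedance control provides.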
The control is designed as a cascade control and has an outer force control loop and an inner position control loop. As shown in the following figure, a corresponding infeed correction is calculated from the difference between the setpoint and actual force. This infeed correction is offset against the position command values, whereby in the fusion of X s o l l {\displaystyle X_{soll}} (the position setpoint) and X k o r r {\displaystyle X_{korr}} (the correction), the position command of the force control ( X k o r r {\displaystyle X_{korr}} ) has a higher priority, i.e. a position error is tolerated in favor of the correct force control. The offset value is the input variable for the inner position control loop. Analogous to an inner position control, an inner velocity control can also be used, which has a higher dynamic. In this case, the inner control loop should have a saturation in order not to generate a (theoretically) arbitrarily increasing velocity in the free movement until contact is made. ==== Hybrid force/position control ==== An improvement over the above concepts is offered by hybrid force/position control, which works with two separate control systems and can also be used with hard, inflexible contact surfaces. In hybrid force/position control, the space is divided into a constrained and an unconstrained space. The constrained space contains restrictions, for example in the form of obstacles, and does not allow free movement; the unconstrained space allows free movement. Each dimension of the space is either constrained or unconstrained. In hybrid force control, force control is used for the constrained space, and position control is used for the unconstrained space. The figure shows such a control. The matrix Σ indicates which spatial directions are constrained and is a diagonal matrix consisting of zeros and ones. Which spatial direction is constrained and which is unconstrained can, for example, be specified statically.
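The parallel force/position cascade described above can be sketched in one dimension: an outer loop integrates the force error into a position correction, which is merged with the position setpoint and tracked by an inner proportional position loop. The environment is modeled as a linear spring; all gains, stiffnesses and setpoints are invented for illustration:

```python
# 1-D sketch of a parallel force/position cascade against a stiff surface.
k_env = 1000.0      # environment stiffness in N/m (assumed)
x_wall = 0.0        # location of the contact surface
F_soll = 5.0        # force setpoint in N
x_soll = 0.01       # nominal position setpoint, deliberately "inside" the wall

Kf = 0.0005         # outer force-loop gain (m/N per cycle), made up
x = -0.005          # start out of contact
x_korr = 0.0        # accumulated infeed correction

for _ in range(2000):
    F_ist = k_env * max(0.0, x - x_wall)   # contact force (0 when not touching)
    x_korr += Kf * (F_soll - F_ist)        # outer loop: integrate force error
    x_cmd = x_soll + x_korr                # correction overrides position goal
    x += 0.5 * (x_cmd - x)                 # inner proportional position loop

F_final = k_env * max(0.0, x - x_wall)     # should converge to F_soll
```

Note how the converged position differs from x_soll: the position error is tolerated, as the text says, in favor of reaching the commanded contact force.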
Force and position control is then explicitly specified for each spatial direction; the matrix Σ is then static. Another possibility is to switch the matrix Σ dynamically on the basis of force measurement. In this way, it is possible to switch from position control to force control for individual spatial directions when contact or a collision is detected. In the case of contact tasks, all spatial directions would be motion-controlled during free movement, and after contact is established, the contact direction would be switched to force control by selecting the appropriate matrix Σ. == Research == In recent years, the subject of research has increasingly been adaptive concepts, the use of fuzzy control systems and machine learning, and force-based whole-body control. === Adaptive force control === The previously mentioned, non-adaptive concepts are based on an exact knowledge of the dynamic process parameters. These are usually determined and adjusted by experiments and calibration. Problems can arise due to measurement errors and variable loads. In adaptive force control, position-dependent and thus time-variable parts of the system are regarded as parameter fluctuations and are constantly adapted in the course of the control. Due to the changing control law, no guarantee can be given for the dynamic stability of the system. Adaptive control is therefore usually first used offline and the results are intensively tested in simulation before being used on the real system. === Fuzzy control and machine learning === A prerequisite for the application of classical design methods is an explicit system model. If this is difficult or impossible to obtain, fuzzy controllers or machine learning can be considered. By means of fuzzy logic, knowledge acquired by humans can be converted into a control behavior in the form of fuzzy control rules. Explicit specification of the controller parameters is thus no longer necessary.
Approaches using machine learning go a step further: humans no longer need to specify the control behavior at all; instead, it is learned from data. === Whole body control === Due to the high complexity of modern robotic systems, such as humanoid robots, a large number of actuated degrees of freedom must be controlled. In addition, such systems are increasingly used in the direct environment of humans. Accordingly, concepts from force and impedance control are specifically used in this area to increase safety, as this allows the robot to interact with the environment and humans in a compliant manner. == References == == Bibliography == Bruno Siciliano, Luigi Villani (2000), Robot Force Control, Springer, ISBN 0-7923-7733-8 Wolfgang Weber (2002), Industrieroboter. Methoden der Steuerung und Regelung, Fachbuchverlag Leipzig, ISBN 3-446-21604-9 Lorenzo Sciavicco, Bruno Siciliano (1999), Modelling and Control of Robot Manipulators, Springer, ISBN 1-85233-221-2 Klaus Richter (1991), Kraftregelung elastischer Roboter, VDI-Verlag, ISBN 3-18-145908-9
Multivariate logistic regression
Multivariate logistic regression is a type of data analysis that predicts any number of outcomes based on multiple independent variables. It is based on the assumption that the natural logarithm of the odds has a linear relationship with the independent variables. == Procedure == First, the baseline odds of a specific outcome compared to not having that outcome are calculated, giving a constant (intercept). Next, the independent variables are incorporated into the model, giving a regression coefficient (beta) and a "P" value for each independent variable. The "P" value indicates how significantly the independent variable impacts the odds of having the outcome or not. It is desirable to use as few variables as necessary, and to have at least 10 to 20 times as many observations as independent variables. === Formula === Multivariate logistic regression uses a formula similar to univariate logistic regression, but with multiple independent variables: p = 1 1 + e − ( β 0 + β 1 x 1 + ⋯ + β v x v ) {\displaystyle p={\frac {1}{1+e^{-(\beta _{0}+\beta _{1}x_{1}+\cdots +\beta _{v}x_{v})}}}} where v is the number of independent variables. The following equivalent formula, written on the log-odds scale, shows that multivariate logistic regression is simply a standard linear regression model for the logarithm of the odds: ln ⁡ p 1 − p = β 0 + β 1 x 1 + ⋯ + β v x v {\displaystyle \ln {\frac {p}{1-p}}=\beta _{0}+\beta _{1}x_{1}+\cdots +\beta _{v}x_{v}} == Types == Two commonly contrasted regression approaches are linear regression and logistic regression. === Linear regression === Linear regression produces results that show a linear relationship with a single independent variable (IV) and can be plotted on a graph as a straight line. === Logistic regression === In contrast, logistic regression produces results that show a nonlinear relationship. As a result, plotting the data on a graph produces a curved line called a sigmoid. Unlike linear regression, logistic regression produces results based on two or more independent variables. The odds ratio associated with a single independent variable can change when other independent variables are accounted for as well. However, these changes are usually small; large changes can indicate problems such as errors or confounding in the model.
==== Assumptions ==== Multivariate logistic regression assumes that the different observations are independent. It also assumes that the natural logarithm of the odds ratio and the independent variables show a linear relationship. However, it does not assume a normal distribution of the dependent variables. ===== Null hypothesis ===== A null hypothesis is an assumption that the independent variables do not have any impact on the dependent variable. ==== Dependent variables ==== There are three main types of logistic regression dependent variables (DVs): binary, multi-class, and ordinal. ===== Binary ===== A binary dependent variable is a variable with only two outcomes, and the possible values must be opposites of each other. ===== Multi-class ===== A multi-class dependent variable is a variable with at least three qualitative (non-numerical) outcomes, usually encoded with constant numerical stand-ins. ===== Ordinal ===== An ordinal dependent variable is a variable with at least three possible outcomes which have a natural numerical order. == Models == Multivariate logistic regression produces the following models: === Logit models === Logit models distinguish independent and dependent variables. === Log-linear models === Unlike logit models, log-linear models do not distinguish between categories of variables. === Probit models === Probit models function similarly to logit models due to the similarities of the normal and logistic distributions. However, since the coefficients are interpreted in terms of standard deviations instead of odds ratios, these models are also more similar to linear models than logit models. == Uses == === Scientists === When scientists use logistic regression, they usually include as many independent variables as necessary.
=== Doctors and physicians === Multivariate logistic regression is used by physicians to: associate certain characteristics with certain outcomes; determine the effects of certain techniques; give people with certain conditions appropriate treatments; and develop appropriate models. === Market research === Multivariate logistic regression is also used to analyze customer preferences for products. == Artificial intelligence == Multivariate logistic regressions are also used in machine learning. == In comparison to multivariable logistic regression == While both multivariate logistic regression and multivariable logistic regression correlate multiple independent variables to outcomes, multivariate logistic regression correlates independent variables to multiple outcomes, while multivariable logistic regression correlates independent variables to a single outcome. == References ==
Linear algebra
Linear algebra is the branch of mathematics concerning linear equations such as a 1 x 1 + ⋯ + a n x n = b , {\displaystyle a_{1}x_{1}+\cdots +a_{n}x_{n}=b,} linear maps such as ( x 1 , … , x n ) ↦ a 1 x 1 + ⋯ + a n x n , {\displaystyle (x_{1},\ldots ,x_{n})\mapsto a_{1}x_{1}+\cdots +a_{n}x_{n},} and their representations in vector spaces and through matrices. Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to function spaces. Linear algebra is also used in most sciences and fields of engineering because it allows modeling many natural phenomena, and computing efficiently with such models. For nonlinear systems, which cannot be modeled with linear algebra, it is often used for dealing with first-order approximations, using the fact that the differential of a multivariate function at a point is the linear map that best approximates the function near that point. == History == The procedure (using counting rods) for solving simultaneous linear equations now called Gaussian elimination appears in the ancient Chinese mathematical text Chapter Eight: Rectangular Arrays of The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations. Systems of linear equations arose in Europe with the introduction in 1637 by René Descartes of coordinates in geometry. In fact, in this new geometry, now called Cartesian geometry, lines and planes are represented by linear equations, and computing their intersections amounts to solving systems of linear equations. The first systematic methods for solving linear systems used determinants and were first considered by Leibniz in 1693. 
In 1750, Gabriel Cramer used them for giving explicit solutions of linear systems, now called Cramer's rule. Later, Gauss further described the method of elimination, which was initially listed as an advancement in geodesy. In 1844 Hermann Grassmann published his "Theory of Extension" which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for womb. Linear algebra grew with ideas noted in the complex plane. For instance, two numbers w and z in C {\displaystyle \mathbb {C} } have a difference w – z, and the line segments wz and 0(w − z) are of the same length and direction. The segments are equipollent. The four-dimensional system H {\displaystyle \mathbb {H} } of quaternions was discovered by W.R. Hamilton in 1843. The term vector was introduced as v = xi + yj + zk representing a point in space. The quaternion difference p – q also produces a segment equipollent to pq. Other hypercomplex number systems also used the idea of a linear space with a basis. Arthur Cayley introduced matrix multiplication and the inverse matrix in 1856, making possible the general linear group. The mechanism of group representation became available for describing complex and hypercomplex numbers. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants and wrote "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants". Benjamin Peirce published his Linear Associative Algebra (1872), and his son Charles Sanders Peirce extended the work later. The telegraph required an explanatory system, and the 1873 publication by James Clerk Maxwell of A Treatise on Electricity and Magnetism instituted a field theory of forces and required differential geometry for expression. 
Linear algebra is flat differential geometry and serves in tangent spaces to manifolds. Electromagnetic symmetries of spacetime are expressed by the Lorentz transformations, and much of the history of linear algebra is the history of Lorentz transformations. The first modern and more precise definition of a vector space was introduced by Peano in 1888; by 1900, a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century when many ideas and methods of previous centuries were generalized as abstract algebra. The development of computers led to increased research in efficient algorithms for Gaussian elimination and matrix decompositions, and linear algebra became an essential tool for modeling and simulations. == Vector spaces == Until the 19th century, linear algebra was introduced through systems of linear equations and matrices. In modern mathematics, the presentation through vector spaces is generally preferred, since it is more synthetic, more general (not limited to the finite-dimensional case), and conceptually simpler, although more abstract. A vector space over a field F (often the field of the real numbers or of the complex numbers) is a set V equipped with two binary operations. Elements of V are called vectors, and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w. The second operation, scalar multiplication, takes any scalar a and any vector v and outputs a new vector av. The axioms that addition and scalar multiplication must satisfy are the following. (In the list below, u, v and w are arbitrary elements of V, and a and b are arbitrary scalars in the field F.)
Associativity of addition: u + (v + w) = (u + v) + w
Commutativity of addition: u + v = v + u
Identity element of addition: there exists an element 0 in V, called the zero vector, such that v + 0 = v for all v in V
Inverse elements of addition: for every v in V, there exists an element −v in V such that v + (−v) = 0
Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v
Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity of F
Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv
The first four axioms mean that V is an abelian group under addition.
The elements of a specific vector space may have various natures; for example, they could be tuples, sequences, functions, polynomials, or matrices. Linear algebra is concerned with the properties of such objects that are common to all vector spaces. === Linear maps === Linear maps are mappings between vector spaces that preserve the vector-space structure. Given two vector spaces V and W over a field F, a linear map (also called, in some contexts, linear transformation or linear mapping) is a map T : V → W {\displaystyle T:V\to W} that is compatible with addition and scalar multiplication, that is T ( u + v ) = T ( u ) + T ( v ) , T ( a v ) = a T ( v ) {\displaystyle T(\mathbf {u} +\mathbf {v} )=T(\mathbf {u} )+T(\mathbf {v} ),\quad T(a\mathbf {v} )=aT(\mathbf {v} )} for any vectors u, v in V and scalar a in F. An equivalent condition is that for any vectors u, v in V and scalars a, b in F, one has T ( a u + b v ) = a T ( u ) + b T ( v ) {\displaystyle T(a\mathbf {u} +b\mathbf {v} )=aT(\mathbf {u} )+bT(\mathbf {v} )} . When V = W are the same vector space, a linear map T : V → V is also known as a linear operator on V. A bijective linear map between two vector spaces (that is, every vector from the second space is associated with exactly one in the first) is an isomorphism. Because an isomorphism preserves linear structure, two isomorphic vector spaces are "essentially the same" from the linear algebra point of view, in the sense that they cannot be distinguished by using vector space properties. An essential question in linear algebra is testing whether a linear map is an isomorphism or not, and, if it is not an isomorphism, finding its range (or image) and the set of elements that are mapped to the zero vector, called the kernel of the map. All these questions can be solved by using Gaussian elimination or some variant of this algorithm.
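As a concrete check, the equivalent condition T(au + bv) = aT(u) + bT(v) can be verified numerically for a map given by a matrix. A minimal sketch in Python; the particular map and the test vectors are arbitrary choices for illustration:

```python
# A linear map R^2 -> R^2, here given by the (hypothetical) matrix [[2, 1], [1, -3]].
def T(v):
    x, y = v
    return (2 * x + y, x - 3 * y)

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(a, v):
    return (a * v[0], a * v[1])

# Check T(a*u + b*v) == a*T(u) + b*T(v) on sample vectors and scalars.
u, v, a, b = (1, 2), (-3, 5), 4, -2
lhs = T(add(scale(a, u), scale(b, v)))
rhs = add(scale(a, T(u)), scale(b, T(v)))
assert lhs == rhs
```

A single numerical check is of course not a proof; for a map defined by a matrix, linearity follows from the distributivity of multiplication over addition.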
=== Subspaces, span, and basis === The study of those subsets of vector spaces that are in themselves vector spaces under the induced operations is fundamental, similarly as for many mathematical structures. These subsets are called linear subspaces. More precisely, a linear subspace of a vector space V over a field F is a subset W of V such that u + v and au are in W, for every u, v in W, and every a in F. (These conditions suffice for implying that W is a vector space.) For example, given a linear map T : V → W, the image T(V) of V, and the inverse image T−1(0) of 0 (called kernel or null space), are linear subspaces of W and V, respectively. Another important way of forming a subspace is to consider linear combinations of a set S of vectors: the set of all sums a 1 v 1 + a 2 v 2 + ⋯ + a k v k , {\displaystyle a_{1}\mathbf {v} _{1}+a_{2}\mathbf {v} _{2}+\cdots +a_{k}\mathbf {v} _{k},} where v1, v2, ..., vk are in S, and a1, a2, ..., ak are in F form a linear subspace called the span of S. The span of S is also the intersection of all linear subspaces containing S. In other words, it is the smallest (for the inclusion relation) linear subspace containing S. A set of vectors is linearly independent if none is in the span of the others. Equivalently, a set S of vectors is linearly independent if the only way to express the zero vector as a linear combination of elements of S is to take zero for every coefficient ai. A set of vectors that spans a vector space is called a spanning set or generating set. If a spanning set S is linearly dependent (that is not linearly independent), then some element w of S is in the span of the other elements of S, and the span would remain the same if one were to remove w from S. One may continue to remove elements of S until getting a linearly independent spanning set. Such a linearly independent set that spans a vector space V is called a basis of V. 
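Linear independence can be tested mechanically: a set of vectors is independent exactly when the rank of the matrix whose rows are those vectors equals the number of vectors. A small sketch using Gaussian elimination with exact rational arithmetic; the example vectors are illustrative (the third is the sum of the first two):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) via Gaussian elimination, exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]   # move pivot row up
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

vs = [[1, 0, 2], [0, 1, 1], [1, 1, 3]]   # third vector = first + second
assert rank(vs) == 2          # dependent: they only span a plane
assert rank(vs[:2]) == 2      # the first two are independent
```

Removing the redundant third vector leaves a linearly independent spanning set of the same span, exactly as described above.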
The importance of bases lies in the fact that they are simultaneously minimal generating sets and maximal independent sets. More precisely, if S is a linearly independent set, and T is a spanning set such that S ⊆ T, then there is a basis B such that S ⊆ B ⊆ T. Any two bases of a vector space V have the same cardinality, which is called the dimension of V; this is the dimension theorem for vector spaces. Moreover, two vector spaces over the same field F are isomorphic if and only if they have the same dimension. If any basis of V (and therefore every basis) has a finite number of elements, V is a finite-dimensional vector space. If U is a subspace of V, then dim U ≤ dim V. In the case where V is finite-dimensional, the equality of the dimensions implies U = V. If U1 and U2 are subspaces of V, then dim ⁡ ( U 1 + U 2 ) = dim ⁡ U 1 + dim ⁡ U 2 − dim ⁡ ( U 1 ∩ U 2 ) , {\displaystyle \dim(U_{1}+U_{2})=\dim U_{1}+\dim U_{2}-\dim(U_{1}\cap U_{2}),} where U1 + U2 denotes the span of U1 ∪ U2. == Matrices == Matrices allow explicit manipulation of finite-dimensional vector spaces and linear maps. Their theory is thus an essential part of linear algebra. Let V be a finite-dimensional vector space over a field F, and (v1, v2, ..., vm) be a basis of V (thus m is the dimension of V). By definition of a basis, the map ( a 1 , … , a m ) ↦ a 1 v 1 + ⋯ + a m v m F m → V {\displaystyle {\begin{aligned}(a_{1},\ldots ,a_{m})&\mapsto a_{1}\mathbf {v} _{1}+\cdots +a_{m}\mathbf {v} _{m}\\F^{m}&\to V\end{aligned}}} is a bijection from Fm, the set of the sequences of m elements of F, onto V. This is an isomorphism of vector spaces, if Fm is equipped with its standard structure of vector space, where vector addition and scalar multiplication are done component by component. This isomorphism allows representing a vector by its inverse image under this isomorphism, that is by the coordinate vector (a1, ..., am) or by the column matrix [ a 1 ⋮ a m ] . 
{\displaystyle {\begin{bmatrix}a_{1}\\\vdots \\a_{m}\end{bmatrix}}.} If W is another finite-dimensional vector space (possibly the same), with a basis (w1, ..., wn), a linear map f from W to V is well defined by its values on the basis elements, that is (f(w1), ..., f(wn)). Thus, f is well represented by the list of the corresponding column matrices. That is, if f ( w j ) = a 1 , j v 1 + ⋯ + a m , j v m , {\displaystyle f(w_{j})=a_{1,j}v_{1}+\cdots +a_{m,j}v_{m},} for j = 1, ..., n, then f is represented by the matrix [ a 1 , 1 ⋯ a 1 , n ⋮ ⋱ ⋮ a m , 1 ⋯ a m , n ] , {\displaystyle {\begin{bmatrix}a_{1,1}&\cdots &a_{1,n}\\\vdots &\ddots &\vdots \\a_{m,1}&\cdots &a_{m,n}\end{bmatrix}},} with m rows and n columns. Matrix multiplication is defined in such a way that the product of two matrices is the matrix of the composition of the corresponding linear maps, and the product of a matrix and a column matrix is the column matrix representing the result of applying the represented linear map to the represented vector. It follows that the theory of finite-dimensional vector spaces and the theory of matrices are two different languages for expressing the same concepts. Two matrices that encode the same linear map with respect to different choices of bases are called equivalent. It can be proved that two matrices are equivalent if and only if one can transform one into the other by elementary row and column operations. For a matrix representing a linear map from W to V, the row operations correspond to changes of basis in V and the column operations correspond to changes of basis in W. Every matrix is equivalent to an identity matrix possibly bordered by zero rows and zero columns. In terms of vector spaces, this means that, for any linear map from W to V, there are bases such that a part of the basis of W is mapped bijectively on a part of the basis of V, and that the remaining basis elements of W, if any, are mapped to zero. 
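The statement that matrix multiplication corresponds to composition of linear maps can be checked directly. A sketch with matrices represented as lists of rows; the particular matrices and vector are arbitrary:

```python
def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def apply(A, v):
    """Apply the linear map represented by A to the column vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2], [0, 1]]   # matrix of a map f
B = [[3, 0], [1, 1]]   # matrix of a map g
v = [1, 4]

# Applying f after g is the same as applying the single matrix A·B.
assert apply(matmul(A, B), v) == apply(A, apply(B, v))
```

This is exactly why the matrix product is defined the way it is: composition of maps on one side, row-by-column products on the other.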
Gaussian elimination is the basic algorithm for finding these elementary operations and proving these results. == Linear systems == A finite set of linear equations in a finite set of variables, for example, x1, x2, ..., xn, or x, y, ..., z, is called a system of linear equations or a linear system. Systems of linear equations form a fundamental part of linear algebra. Historically, linear algebra and matrix theory have been developed for solving such systems. In the modern presentation of linear algebra through vector spaces and matrices, many problems may be interpreted in terms of linear systems. For example, let (S) be the linear system 2 x + y − z = 8 , − 3 x − y + 2 z = − 11 , − 2 x + y + 2 z = − 3. {\displaystyle {\begin{aligned}2x+y-z&=8,\\-3x-y+2z&=-11,\\-2x+y+2z&=-3.\end{aligned}}} To such a system, one may associate its matrix M = [ 2 1 − 1 − 3 − 1 2 − 2 1 2 ] . {\displaystyle M=\left[{\begin{array}{rrr}2&1&-1\\-3&-1&2\\-2&1&2\end{array}}\right].} and its right-hand side vector v = [ 8 − 11 − 3 ] . {\displaystyle \mathbf {v} ={\begin{bmatrix}8\\-11\\-3\end{bmatrix}}.} Let T be the linear transformation associated with the matrix M. A solution of the system (S) is a vector X = [ x y z ] {\displaystyle \mathbf {X} ={\begin{bmatrix}x\\y\\z\end{bmatrix}}} such that T ( X ) = v , {\displaystyle T(\mathbf {X} )=\mathbf {v} ,} that is an element of the preimage of v by T. Let (S′) be the associated homogeneous system, where the right-hand sides of the equations are put to zero: 2 x + y − z = 0 , − 3 x − y + 2 z = 0 , − 2 x + y + 2 z = 0. {\displaystyle {\begin{aligned}2x+y-z&=0,\\-3x-y+2z&=0,\\-2x+y+2z&=0.\end{aligned}}} The solutions of (S′) are exactly the elements of the kernel of T or, equivalently, of M. Gaussian elimination consists of performing elementary row operations on the augmented matrix [ M v ] = [ 2 1 − 1 8 − 3 − 1 2 − 11 − 2 1 2 − 3 ] {\displaystyle \left[\!{\begin{array}{c|c}M&\mathbf {v} \end{array}}\!\right]=\left[{\begin{array}{rrr|r}2&1&-1&8\\-3&-1&2&-11\\-2&1&2&-3\end{array}}\right]} for putting it in reduced row echelon form. These row operations do not change the set of solutions of the system of equations. 
In the example, the reduced echelon form is [ M v ] = [ 1 0 0 2 0 1 0 3 0 0 1 − 1 ] , {\displaystyle \left[\!{\begin{array}{c|c}M&\mathbf {v} \end{array}}\!\right]=\left[{\begin{array}{rrr|r}1&0&0&2\\0&1&0&3\\0&0&1&-1\end{array}}\right],} showing that the system (S) has the unique solution x = 2 y = 3 z = − 1. {\displaystyle {\begin{aligned}x&=2\\y&=3\\z&=-1.\end{aligned}}} It follows from this matrix interpretation of linear systems that the same methods can be applied for solving linear systems and for many operations on matrices and linear transformations, including the computation of ranks, kernels, and matrix inverses. == Endomorphisms and square matrices == A linear endomorphism is a linear map that maps a vector space V to itself. If V has a basis of n elements, such an endomorphism is represented by a square matrix of size n. Compared with general linear maps, linear endomorphisms and square matrices have some specific properties that make their study an important part of linear algebra; they appear in many areas of mathematics, including geometric transformations, coordinate changes, and quadratic forms. === Determinant === The determinant of a square matrix A is defined to be ∑ σ ∈ S n ( − 1 ) σ a 1 σ ( 1 ) ⋯ a n σ ( n ) , {\displaystyle \sum _{\sigma \in S_{n}}(-1)^{\sigma }a_{1\sigma (1)}\cdots a_{n\sigma (n)},} where Sn is the group of all permutations of n elements, σ is a permutation, and (−1)σ the parity of the permutation. A matrix is invertible if and only if the determinant is invertible (i.e., nonzero if the scalars belong to a field). Cramer's rule is a closed-form expression, in terms of determinants, of the solution of a system of n linear equations in n unknowns. Cramer's rule is useful for reasoning about the solution, but, except for n = 2 or 3, it is rarely used for computing a solution, since Gaussian elimination is a faster algorithm. 
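For the 3 × 3 worked example above, Cramer's rule is still perfectly practical. The sketch below computes determinants straight from the permutation (Leibniz) formula just given, then applies Cramer's rule to recover the solution x = 2, y = 3, z = −1 of the system (S):

```python
from fractions import Fraction
from itertools import permutations

def det(A):
    """Leibniz formula: sum over all permutations of signed products."""
    n = len(A)
    total = Fraction(0)
    for sigma in permutations(range(n)):
        # Parity of sigma via its inversion count.
        sign = (-1) ** sum(sigma[i] > sigma[j]
                           for i in range(n) for j in range(i + 1, n))
        prod = Fraction(1)
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += sign * prod
    return total

def cramer(A, b):
    """Solve A x = b by Cramer's rule; A must be square with det(A) != 0."""
    d = det(A)
    sols = []
    for j in range(len(A)):
        # Replace column j of A by b.
        Aj = [row[:j] + [bi] + row[j + 1:] for row, bi in zip(A, b)]
        sols.append(det(Aj) / d)
    return sols

M = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
v = [8, -11, -3]
assert det(M) == -1
assert cramer(M, v) == [2, 3, -1]
```

The permutation formula sums n! terms, which is why, as noted above, Gaussian elimination (with about n³ operations) is preferred beyond small sizes.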
The determinant of an endomorphism is the determinant of the matrix representing the endomorphism in terms of some ordered basis. This definition makes sense since this determinant is independent of the choice of the basis. === Eigenvalues and eigenvectors === If f is a linear endomorphism of a vector space V over a field F, an eigenvector of f is a nonzero vector v of V such that f(v) = av for some scalar a in F. This scalar a is an eigenvalue of f. If the dimension of V is finite, and a basis has been chosen, f and v may be represented, respectively, by a square matrix M and a column matrix z; the equation defining eigenvectors and eigenvalues becomes M z = a z . {\displaystyle Mz=az.} Using the identity matrix I, whose entries are all zero, except those of the main diagonal, which are equal to one, this may be rewritten ( M − a I ) z = 0. {\displaystyle (M-aI)z=0.} As z is supposed to be nonzero, this means that M – aI is a singular matrix, and thus that its determinant det (M − aI) equals zero. The eigenvalues are thus the roots of the polynomial det ( x I − M ) . {\displaystyle \det(xI-M).} If V is of dimension n, this is a monic polynomial of degree n, called the characteristic polynomial of the matrix (or of the endomorphism), and there are, at most, n eigenvalues. If a basis exists that consists only of eigenvectors, the matrix of f on this basis has a very simple structure: it is a diagonal matrix such that the entries on the main diagonal are eigenvalues, and the other entries are zero. In this case, the endomorphism and the matrix are said to be diagonalizable. More generally, an endomorphism and a matrix are also said to be diagonalizable if they become diagonalizable after extending the field of scalars. In this extended sense, if the characteristic polynomial is square-free, then the matrix is diagonalizable. A real symmetric matrix is always diagonalizable. 
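For a 2 × 2 real matrix the characteristic polynomial is x² − (tr M)x + det M, so the eigenvalues follow from the quadratic formula. A minimal sketch, assuming real eigenvalues (the example matrix is symmetric, so this assumption holds):

```python
import math

def eigen_2x2(M):
    """Eigenvalues of a 2x2 real matrix, from the characteristic polynomial
    x^2 - tr(M) x + det(M); assumes a non-negative discriminant."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    s = math.sqrt(tr * tr - 4 * det)
    return (tr - s) / 2, (tr + s) / 2

M = [[2, 1], [1, 2]]
assert eigen_2x2(M) == (1.0, 3.0)

# Check M z = a z for the eigenvector z = (1, 1) with eigenvalue 3.
z = (1, 1)
assert [2 * z[0] + 1 * z[1], 1 * z[0] + 2 * z[1]] == [3 * z[0], 3 * z[1]]
```

In the basis of eigenvectors (1, 1) and (1, −1), this matrix becomes the diagonal matrix with entries 3 and 1, illustrating diagonalizability.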
There are non-diagonalizable matrices, the simplest being [ 0 1 0 0 ] {\displaystyle {\begin{bmatrix}0&1\\0&0\end{bmatrix}}} (it cannot be diagonalized, since its square is the zero matrix, and the square of a nonzero diagonal matrix is never zero). When an endomorphism is not diagonalizable, there are bases in which it has a simple form, although not as simple as the diagonal form. The Frobenius normal form does not need to extend the field of scalars and makes the characteristic polynomial immediately readable on the matrix. The Jordan normal form requires extending the field of scalars so that it contains all eigenvalues, and differs from the diagonal form only by some entries that are just above the main diagonal and are equal to 1. == Duality == A linear form is a linear map from a vector space V over a field F to the field of scalars F, viewed as a vector space over itself. Equipped with pointwise addition and multiplication by a scalar, the linear forms form a vector space, called the dual space of V, and usually denoted V* or V′. If v1, ..., vn is a basis of V (this implies that V is finite-dimensional), then one can define, for i = 1, ..., n, a linear map vi* such that vi*(vi) = 1 and vi*(vj) = 0 if j ≠ i. These linear maps form a basis of V*, called the dual basis of v1, ..., vn. (If V is not finite-dimensional, the vi* may be defined similarly; they are linearly independent, but do not form a basis.) For v in V, the map f → f ( v ) {\displaystyle f\to f(\mathbf {v} )} is a linear form on V*. This defines the canonical linear map from V into (V*)*, the dual of V*, called the double dual or bidual of V. This canonical map is an isomorphism if V is finite-dimensional, and this allows identifying V with its bidual. (In the infinite-dimensional case, the canonical map is injective, but not surjective.) There is thus a complete symmetry between a finite-dimensional vector space and its dual. 
This motivates the frequent use, in this context, of the bra–ket notation ⟨ f , x ⟩ {\displaystyle \langle f,\mathbf {x} \rangle } for denoting f(x). === Dual map === Let f : V → W {\displaystyle f:V\to W} be a linear map. For every linear form h on W, the composite function h ∘ f is a linear form on V. This defines a linear map f ∗ : W ∗ → V ∗ {\displaystyle f^{*}:W^{*}\to V^{*}} between the dual spaces, which is called the dual or the transpose of f. If V and W are finite-dimensional, and M is the matrix of f in terms of some ordered bases, then the matrix of f* over the dual bases is the transpose MT of M, obtained by exchanging rows and columns. If elements of vector spaces and their duals are represented by column vectors, this duality may be expressed in bra–ket notation by ⟨ h T , M v ⟩ = ⟨ h T M , v ⟩ . {\displaystyle \langle h^{\mathsf {T}},M\mathbf {v} \rangle =\langle h^{\mathsf {T}}M,\mathbf {v} \rangle .} To highlight this symmetry, the two members of this equality are sometimes written ⟨ h T ∣ M ∣ v ⟩ . {\displaystyle \langle h^{\mathsf {T}}\mid M\mid \mathbf {v} \rangle .} === Inner-product spaces === Besides these basic concepts, linear algebra also studies vector spaces with additional structure, such as an inner product. The inner product is an example of a bilinear form, and it gives the vector space a geometric structure by allowing for the definition of length and angles. Formally, an inner product is a map. ⟨ ⋅ , ⋅ ⟩ : V × V → F {\displaystyle \langle \cdot ,\cdot \rangle :V\times V\to F} that satisfies the following three axioms for all vectors u, v, w in V and all scalars a in F: Conjugate symmetry: ⟨ u , v ⟩ = ⟨ v , u ⟩ ¯ . {\displaystyle \langle \mathbf {u} ,\mathbf {v} \rangle ={\overline {\langle \mathbf {v} ,\mathbf {u} \rangle }}.} In R {\displaystyle \mathbb {R} } , it is symmetric. Linearity in the first argument: ⟨ a u , v ⟩ = a ⟨ u , v ⟩ . ⟨ u + v , w ⟩ = ⟨ u , w ⟩ + ⟨ v , w ⟩ . 
{\displaystyle {\begin{aligned}\langle a\mathbf {u} ,\mathbf {v} \rangle &=a\langle \mathbf {u} ,\mathbf {v} \rangle .\\\langle \mathbf {u} +\mathbf {v} ,\mathbf {w} \rangle &=\langle \mathbf {u} ,\mathbf {w} \rangle +\langle \mathbf {v} ,\mathbf {w} \rangle .\end{aligned}}} Positive-definiteness: ⟨ v , v ⟩ ≥ 0 {\displaystyle \langle \mathbf {v} ,\mathbf {v} \rangle \geq 0} with equality only for v = 0. We can define the length of a vector v in V by ‖ v ‖ 2 = ⟨ v , v ⟩ , {\displaystyle \|\mathbf {v} \|^{2}=\langle \mathbf {v} ,\mathbf {v} \rangle ,} and we can prove the Cauchy–Schwarz inequality: | ⟨ u , v ⟩ | ≤ ‖ u ‖ ⋅ ‖ v ‖ . {\displaystyle |\langle \mathbf {u} ,\mathbf {v} \rangle |\leq \|\mathbf {u} \|\cdot \|\mathbf {v} \|.} In particular, the quantity | ⟨ u , v ⟩ | ‖ u ‖ ⋅ ‖ v ‖ ≤ 1 , {\displaystyle {\frac {|\langle \mathbf {u} ,\mathbf {v} \rangle |}{\|\mathbf {u} \|\cdot \|\mathbf {v} \|}}\leq 1,} and so we can call this quantity the cosine of the angle between the two vectors. Two vectors are orthogonal if ⟨u, v⟩ = 0. An orthonormal basis is a basis where all basis vectors have length 1 and are orthogonal to each other. Given any finite-dimensional vector space, an orthonormal basis could be found by the Gram–Schmidt procedure. Orthonormal bases are particularly easy to deal with, since if v = a1 v1 + ⋯ + an vn, then a i = ⟨ v , v i ⟩ . {\displaystyle a_{i}=\langle \mathbf {v} ,\mathbf {v} _{i}\rangle .} The inner product facilitates the construction of many useful concepts. For instance, given a transform T, we can define its Hermitian conjugate T* as the linear transform satisfying ⟨ T u , v ⟩ = ⟨ u , T ∗ v ⟩ . {\displaystyle \langle T\mathbf {u} ,\mathbf {v} \rangle =\langle \mathbf {u} ,T^{*}\mathbf {v} \rangle .} If T satisfies TT* = T*T, we call T normal. It turns out that normal matrices are precisely the matrices that have an orthonormal system of eigenvectors that span V. 
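The Gram–Schmidt procedure mentioned above can be sketched in a few lines of Python; vectors are plain lists, the inner product is the standard dot product, and the input vectors are an arbitrary linearly independent pair:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors in R^n."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:                    # subtract projections onto earlier vectors
            c = dot(w, q)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])
    return basis

q1, q2 = gram_schmidt([[3, 1], [2, 2]])
assert abs(dot(q1, q2)) < 1e-12            # orthogonal
assert abs(dot(q1, q1) - 1) < 1e-12        # unit length

# In an orthonormal basis, coordinates are just inner products: a_i = <v, q_i>.
v = [5, 3]
coords = [dot(v, q1), dot(v, q2)]
recon = [coords[0] * q1[i] + coords[1] * q2[i] for i in range(2)]
assert all(abs(a - b) < 1e-12 for a, b in zip(recon, v))
```

The last assertion illustrates the formula ai = ⟨v, vi⟩ stated above: expanding a vector in an orthonormal basis needs no system solving, only inner products.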
== Relationship with geometry == There is a strong relationship between linear algebra and geometry, which started with the introduction by René Descartes, in 1637, of Cartesian coordinates. In this new (at that time) geometry, now called Cartesian geometry, points are represented by Cartesian coordinates, which are sequences of three real numbers (in the case of the usual three-dimensional space). The basic objects of geometry, which are lines and planes are represented by linear equations. Thus, computing intersections of lines and planes amounts to solving systems of linear equations. This was one of the main motivations for developing linear algebra. Most geometric transformation, such as translations, rotations, reflections, rigid motions, isometries, and projections transform lines into lines. It follows that they can be defined, specified, and studied in terms of linear maps. This is also the case of homographies and Möbius transformations when considered as transformations of a projective space. Until the end of the 19th century, geometric spaces were defined by axioms relating points, lines, and planes (synthetic geometry). Around this date, it appeared that one may also define geometric spaces by constructions involving vector spaces (see, for example, Projective space and Affine space). It has been shown that the two approaches are essentially equivalent. In classical geometry, the involved vector spaces are vector spaces over the reals, but the constructions may be extended to vector spaces over any field, allowing considering geometry over arbitrary fields, including finite fields. Presently, most textbooks introduce geometric spaces from linear algebra, and geometry is often presented, at the elementary level, as a subfield of linear algebra. == Usage and applications == Linear algebra is used in almost all areas of mathematics, thus making it relevant in almost all scientific domains that use mathematics. 
These applications may be divided into several wide categories. === Functional analysis === Functional analysis studies function spaces. These are vector spaces with additional structure, such as Hilbert spaces. Linear algebra is thus a fundamental part of functional analysis and its applications, which include, in particular, quantum mechanics (wave functions) and Fourier analysis (orthogonal basis). === Scientific computation === Nearly all scientific computations involve linear algebra. Consequently, linear algebra algorithms have been highly optimized. BLAS and LAPACK are the best known implementations. For improving efficiency, some of them configure the algorithms automatically, at run time, to adapt them to the specificities of the computer (cache size, number of available cores, ...). Since the 1960s there have been processors with specialized instructions for optimizing the operations of linear algebra, optional array processors under the control of a conventional processor, supercomputers designed for array processing and conventional processors augmented with vector registers. Some contemporary processors, typically graphics processing units (GPU), are designed with a matrix structure, for optimizing the operations of linear algebra. === Geometry of ambient space === The modeling of ambient space is based on geometry. Sciences concerned with this space use geometry widely. This is the case with mechanics and robotics, for describing rigid body dynamics; geodesy for describing Earth shape; perspectivity, computer vision, and computer graphics, for describing the relationship between a scene and its plane representation; and many other scientific domains. In all these applications, synthetic geometry is often used for general descriptions and a qualitative approach, but for the study of explicit situations, one must compute with coordinates. This requires the heavy use of linear algebra. 
=== Study of complex systems === Most physical phenomena are modeled by partial differential equations. To solve them, one usually decomposes the space in which the solutions are sought into small, mutually interacting cells. For linear systems this interaction involves linear functions. For nonlinear systems, this interaction is often approximated by linear functions. This is called a linear model or first-order approximation. Linear models are frequently used for complex nonlinear real-world systems because they make parametrization more manageable. In both cases, very large matrices are generally involved. Weather forecasting (or more specifically, parametrization for atmospheric modeling) is a typical example of a real-world application, where the whole Earth's atmosphere is divided into cells of, say, 100 km of width and 100 km of height. === Fluid mechanics, fluid dynamics, and thermal energy systems === Linear algebra, a branch of mathematics dealing with vector spaces and linear mappings between these spaces, plays a critical role in various engineering disciplines, including fluid mechanics, fluid dynamics, and thermal energy systems. Its application in these fields is multifaceted and indispensable for solving complex problems. In fluid mechanics, linear algebra is integral to understanding and solving problems related to the behavior of fluids. It assists in the modeling and simulation of fluid flow, providing essential tools for the analysis of fluid dynamics problems. For instance, linear algebraic techniques are used to solve systems of differential equations that describe fluid motion. These equations, often complex and non-linear, can be linearized using linear algebra methods, allowing for simpler solutions and analyses. In the field of fluid dynamics, linear algebra finds its application in computational fluid dynamics (CFD), a branch that uses numerical analysis and data structures to solve and analyze problems involving fluid flows. 
CFD relies heavily on linear algebra for the computation of fluid flow and heat transfer in various applications. For example, the Navier–Stokes equations, fundamental in fluid dynamics, are often solved using techniques derived from linear algebra. This includes the use of matrices and vectors to represent and manipulate fluid flow fields. Furthermore, linear algebra plays a crucial role in thermal energy systems, particularly in power systems analysis. It is used to model and optimize the generation, transmission, and distribution of electric power. Linear algebraic concepts such as matrix operations and eigenvalue problems are employed to enhance the efficiency, reliability, and economic performance of power systems. The application of linear algebra in this context is vital for the design and operation of modern power systems, including renewable energy sources and smart grids. Overall, the application of linear algebra in fluid mechanics, fluid dynamics, and thermal energy systems is an example of the profound interconnection between mathematics and engineering. It provides engineers with the necessary tools to model, analyze, and solve complex problems in these domains, leading to advancements in technology and industry. == Extensions and generalizations == This section presents several related topics that do not appear generally in elementary textbooks on linear algebra but are commonly considered, in advanced mathematics, as parts of linear algebra. === Module theory === The existence of multiplicative inverses in fields is not involved in the axioms defining a vector space. One may thus replace the field of scalars by a ring R, and this gives the structure called a module over R, or R-module. The concepts of linear independence, span, basis, and linear maps (also called module homomorphisms) are defined for modules exactly as for vector spaces, with the essential difference that, if R is not a field, there are modules that do not have any basis. 
The modules that have a basis are the free modules, and those that are spanned by a finite set are the finitely generated modules. Module homomorphisms between finitely generated free modules may be represented by matrices. The theory of matrices over a ring is similar to that of matrices over a field, except that determinants exist only if the ring is commutative, and that a square matrix over a commutative ring is invertible only if its determinant has a multiplicative inverse in the ring. Vector spaces are completely characterized by their dimension (up to an isomorphism). In general, there is not such a complete classification for modules, even if one restricts oneself to finitely generated modules. However, every module is a cokernel of a homomorphism of free modules. Modules over the integers can be identified with abelian groups, since the multiplication by an integer may be identified as a repeated addition. Most of the theory of abelian groups may be extended to modules over a principal ideal domain. In particular, over a principal ideal domain, every submodule of a free module is free, and the fundamental theorem of finitely generated abelian groups may be extended straightforwardly to finitely generated modules over a principal ring. There are many rings for which there are algorithms for solving linear equations and systems of linear equations. However, these algorithms have generally a computational complexity that is much higher than similar algorithms over a field. For more details, see Linear equation over a ring. === Multilinear algebra and tensors === In multilinear algebra, one considers multivariable linear transformations, that is, mappings that are linear in each of several different variables. This line of inquiry naturally leads to the idea of the dual space, the vector space V* consisting of linear maps f : V → F where F is the field of scalars. Multilinear maps T : Vn → F can be described via tensor products of elements of V*. 
If, in addition to vector addition and scalar multiplication, there is a bilinear vector product V × V → V, the vector space is called an algebra; for instance, associative algebras are algebras with an associative vector product (like the algebra of square matrices, or the algebra of polynomials). === Topological vector spaces === Vector spaces that are not finite-dimensional often require additional structure to be tractable. A normed vector space is a vector space along with a function called a norm, which measures the "size" of elements. The norm induces a metric, which measures the distance between elements, and induces a topology, which allows for a definition of continuous maps. The metric also allows for a definition of limits and completeness – a normed vector space that is complete is known as a Banach space. A complete metric space along with the additional structure of an inner product (a conjugate symmetric sesquilinear form) is known as a Hilbert space, which is in some sense a particularly well-behaved Banach space. Functional analysis applies the methods of linear algebra alongside those of mathematical analysis to study various function spaces; the central objects of study in functional analysis are Lp spaces, which are Banach spaces, and especially the L2 space of square-integrable functions, which is the only Hilbert space among them. Functional analysis is of particular importance to quantum mechanics, the theory of partial differential equations, digital signal processing, and electrical engineering. It also provides the foundation and theoretical framework that underlies the Fourier transform and related methods. 
== See also == Fundamental matrix (computer vision) Geometric algebra Linear programming Linear regression, a statistical estimation method Numerical linear algebra Outline of linear algebra Transformation matrix
Committee machine
A committee machine is a type of artificial neural network using a divide and conquer strategy in which the responses of multiple neural networks (experts) are combined into a single response. The combined response of the committee machine is supposed to be superior to those of its constituent experts. Compare with ensembles of classifiers. == Types == === Static structures === In this class of committee machines, the responses of several predictors (experts) are combined by means of a mechanism that does not involve the input signal, hence the designation static. This category includes the following methods: Ensemble averaging In ensemble averaging, outputs of different predictors are linearly combined to produce an overall output. Boosting In boosting, a weak algorithm is converted into one that achieves arbitrarily high accuracy. === Dynamic structures === In this second class of committee machines, the input signal is directly involved in actuating the mechanism that integrates the outputs of the individual experts into an overall output, hence the designation dynamic. There are two kinds of dynamic structures: Mixture of experts In mixture of experts, the individual responses of the experts are non-linearly combined by means of a single gating network. Hierarchical mixture of experts In hierarchical mixture of experts, the individual responses of the individual experts are non-linearly combined by means of several gating networks arranged in a hierarchical fashion. == References ==
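As a sketch of the two families of committee machines described above (with toy stand-in experts and a toy gating function, not drawn from any particular implementation): a static structure combines the expert outputs with weights fixed in advance, while a dynamic structure derives the weights from the input signal itself.

```python
def ensemble_average(experts, x, weights=None):
    """Static combination: a fixed linear combination of expert outputs."""
    outputs = [f(x) for f in experts]
    if weights is None:
        weights = [1.0 / len(outputs)] * len(outputs)
    return sum(w * o for w, o in zip(weights, outputs))

def mixture_of_experts(experts, gate, x):
    """Dynamic combination: the gating network looks at the input x
    and produces input-dependent weights for the experts."""
    weights = gate(x)
    return sum(w * f(x) for w, f in zip(weights, experts))

# Two toy experts approximating the same target in different ways.
experts = [lambda x: 2 * x, lambda x: 2 * x + 1]

print(ensemble_average(experts, 3))          # (6 + 7) / 2 = 6.5

# A toy gate that trusts the first expert more for small inputs.
gate = lambda x: (0.8, 0.2) if x < 5 else (0.2, 0.8)
print(mixture_of_experts(experts, gate, 3))  # 0.8*6 + 0.2*7 = 6.2
```

A hierarchical mixture of experts would replace the single `gate` with several gating functions arranged in a tree, each combining the outputs of the level below.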
Sequential decision making
Sequential decision making is a concept in control theory and operations research, which involves making a series of decisions over time to optimize an objective function, such as maximizing cumulative rewards or minimizing costs. In this framework, each decision influences subsequent choices and system outcomes, taking into account the current state, available actions, and the probabilistic nature of state transitions. This process is used for modeling and regulation of dynamic systems, especially under uncertainty, and is commonly addressed using methods like Markov decision processes (MDPs) and dynamic programming. == References ==
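A minimal value-iteration sketch for the MDP formulation mentioned above, on a toy two-state problem (the states, actions, rewards, and transition probabilities are invented purely for illustration):

```python
# Toy MDP: P[s][a] is a list of (probability, next_state, reward) triples.
P = {
    "low":  {"wait":   [(1.0, "low", 0.0)],
             "invest": [(0.6, "high", -1.0), (0.4, "low", -1.0)]},
    "high": {"wait":   [(0.9, "high", 2.0), (0.1, "low", 2.0)],
             "invest": [(1.0, "high", 1.0)]},
}
gamma = 0.9  # discount factor for future rewards

# Repeatedly apply the Bellman optimality operator until (approximate)
# convergence of the value function V.
V = {s: 0.0 for s in P}
for _ in range(500):
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s])
         for s in P}

# The greedy policy with respect to V: in each state, pick the action
# maximizing expected reward plus discounted future value.
policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                         for p, s2, r in P[s][a]))
          for s in P}
print(policy)  # {'low': 'invest', 'high': 'wait'}
```

Each decision here influences the next state and therefore all subsequent choices, which is exactly the coupling that distinguishes sequential decision making from a sequence of independent one-shot decisions.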
Corank
In mathematics, corank is complementary to the concept of the rank of a mathematical object, and may refer to the dimension of the left nullspace of a matrix, the dimension of the cokernel of a linear transformation of a vector space, or the number of elements of a matroid minus its rank. == Left nullspace of a matrix == The corank of an m × n {\displaystyle m\times n} matrix is m − r {\displaystyle m-r} where r {\displaystyle r} is the rank of the matrix. It is the dimension of the left nullspace and of the cokernel of the matrix. For a square matrix M {\displaystyle M} , the corank and nullity of M {\displaystyle M} are equal. == Cokernel of a linear transformation == Generalizing matrices to linear transformations of vector spaces, the corank of a linear transformation is the dimension of the cokernel of the transformation, which is the quotient of the codomain by the image of the transformation. == Matroid == For a matroid with n {\displaystyle n} elements and matroid rank r {\displaystyle r} , the corank or nullity of the matroid is n − r {\displaystyle n-r} . In the case of linear matroids this coincides with the matrix corank. In the case of graphic matroids the corank is also known as the circuit rank or cyclomatic number. == References ==
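The matrix corank m − r defined above can be computed with any rank routine; a small sketch using exact rational Gaussian elimination from the standard library (function names are illustrative):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix, via Gaussian elimination over the rationals
    (exact arithmetic avoids floating-point pivot issues)."""
    m = [[Fraction(x) for x in row] for row in rows]
    rk = 0
    for col in range(len(m[0])):
        # Find a pivot row at or below position rk in this column.
        pivot = next((r for r in range(rk, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[rk], m[pivot] = m[pivot], m[rk]
        for r in range(len(m)):
            if r != rk and m[r][col] != 0:
                f = m[r][col] / m[rk][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

def corank(rows):
    """corank = number of rows minus rank (dimension of the left nullspace)."""
    return len(rows) - rank(rows)

A = [[1, 2, 3],
     [2, 4, 6],   # a multiple of the first row
     [0, 1, 1]]
print(rank(A), corank(A))  # 2 1
```

For the square matrix A the corank coincides with the nullity, as stated above: one row is a multiple of another, so exactly one dimension is lost.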
3D projection
A 3D projection (or graphical projection) is a design technique used to display a three-dimensional (3D) object on a two-dimensional (2D) surface. These projections rely on visual perspective and aspect analysis to project a complex object onto a simpler plane for viewing. 3D projections use the primary qualities of an object's basic shape to create a map of points that are then connected to one another to create a visual element. The result is a graphic that is interpreted not as actually flat (2D), but rather as a solid object (3D) being viewed on a 2D display. 3D objects are largely displayed on two-dimensional media (such as paper and computer monitors). As such, graphical projections are a commonly used design element; notably, in engineering drawing, drafting, and computer graphics. Projections can be calculated through mathematical analysis and formulae, or by using various geometric and optical techniques. == Overview == In order to display a three-dimensional (3D) object on a two-dimensional (2D) surface, a projection transformation is applied to the 3D object using a projection matrix. This transformation removes information in the third dimension while preserving it in the first two. See Projective Geometry for more details. If the size and shape of the 3D object should not be distorted by its relative position to the 2D surface, a parallel projection may be used. If the 3D perspective of an object should be preserved on the 2D surface, the transformation must include scaling and translation based on the object's relative position to the 2D surface; this process is called perspective projection. == Parallel projection == In parallel projection, the lines of sight from the object to the projection plane are parallel to each other. 
Thus, lines that are parallel in three-dimensional space remain parallel in the two-dimensional projected image. Parallel projection also corresponds to a perspective projection with an infinite focal length (the distance between a camera's lens and focal point), or "zoom". Images drawn in parallel projection rely upon the technique of axonometry ("to measure along axes"), as described in Pohlke's theorem. In general, the resulting image is oblique (the rays are not perpendicular to the image plane); but in special cases the result is orthographic (the rays are perpendicular to the image plane). Axonometry should not be confused with axonometric projection, as in English literature the latter usually refers only to a specific class of pictorials (see below). === Orthographic projection === The orthographic projection is derived from the principles of descriptive geometry and is a two-dimensional representation of a three-dimensional object. It is a parallel projection (the lines of projection are parallel both in reality and in the projection plane). It is the projection type of choice for working drawings. If the normal of the viewing plane (the camera direction) is parallel to one of the primary axes (which is the x, y, or z axis), the mathematical transformation is as follows. To project the 3D point a x {\displaystyle a_{x}} , a y {\displaystyle a_{y}} , a z {\displaystyle a_{z}} onto the 2D point b x {\displaystyle b_{x}} , b y {\displaystyle b_{y}} using an orthographic projection parallel to the y axis (where positive y represents the forward direction, giving a profile view), the following equations can be used: b x = s x a x + c x {\displaystyle b_{x}=s_{x}a_{x}+c_{x}} b y = s z a z + c z {\displaystyle b_{y}=s_{z}a_{z}+c_{z}} where the vector s is an arbitrary scale factor, and c is an arbitrary offset. These constants are optional, and can be used to properly align the viewport. 
Using matrix multiplication, the equations become: [ b x b y ] = [ s x 0 0 0 0 s z ] [ a x a y a z ] + [ c x c z ] . {\displaystyle {\begin{bmatrix}b_{x}\\b_{y}\end{bmatrix}}={\begin{bmatrix}s_{x}&0&0\\0&0&s_{z}\end{bmatrix}}{\begin{bmatrix}a_{x}\\a_{y}\\a_{z}\end{bmatrix}}+{\begin{bmatrix}c_{x}\\c_{z}\end{bmatrix}}.} While orthographically projected images represent the three dimensional nature of the object projected, they do not represent the object as it would be recorded photographically or perceived by a viewer observing it directly. In particular, parallel lengths at all points in an orthographically projected image are of the same scale regardless of whether they are far away or near to the virtual viewer. As a result, lengths are not foreshortened as they would be in a perspective projection. ==== Multiview projection ==== With multiview projections, up to six pictures (called primary views) of an object are produced, with each projection plane parallel to one of the coordinate axes of the object. The views are positioned relative to each other according to either of two schemes: first-angle or third-angle projection. In each, the appearances of views may be thought of as being projected onto planes that form a 6-sided box around the object. Although six different sides can be drawn, usually three views of a drawing give enough information to make a 3D object. These views are known as front view, top view, and end view. The terms elevation, plan and section are also used. === Oblique projection === In oblique projections the parallel projection rays are not perpendicular to the viewing plane as with orthographic projection, but strike the projection plane at an angle other than ninety degrees. In both orthographic and oblique projection, parallel lines in space appear parallel on the projected image. Because of its simplicity, oblique projection is used exclusively for pictorial purposes rather than for formal, working drawings. 
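The orthographic equations earlier in this section simply discard the depth coordinate after an optional scale and offset; a minimal sketch (the default scale and offset values are illustrative):

```python
def orthographic_project(a, s=(1.0, 1.0), c=(0.0, 0.0)):
    """Orthographic projection along the y axis (profile view):
    b_x = s_x * a_x + c_x,  b_y = s_z * a_z + c_z."""
    ax, ay, az = a
    # The depth coordinate ay is dropped entirely: points at any depth
    # project to the same place, so lengths are never foreshortened.
    return s[0] * ax + c[0], s[1] * az + c[1]

print(orthographic_project((3, 7, 2)))  # (3.0, 2.0)
```

Because depth is discarded rather than divided by, parallel lines stay parallel and equal lengths stay equal regardless of distance from the viewer.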
In an oblique pictorial drawing, the displayed angles among the axes as well as the foreshortening factors (scale) are arbitrary. The distortion created thereby is usually attenuated by aligning one plane of the imaged object to be parallel with the plane of projection, thereby creating a true shape, full-size image of the chosen plane. Special types of oblique projections are: ==== Cavalier projection (45°) ==== In cavalier projection (sometimes cavalier perspective or high view point) a point of the object is represented by three coordinates, x, y and z. On the drawing, it is represented by only two coordinates, x″ and y″. On the flat drawing, two axes, x and z on the figure, are perpendicular and the lengths along these axes are drawn at a 1:1 scale; it is thus similar to the dimetric projections, although it is not an axonometric projection, as the third axis, here y, is drawn diagonally, making an arbitrary angle with the x″ axis, usually 30° or 45°. The length of the third axis is not scaled. ==== Cabinet projection ==== The term cabinet projection (sometimes cabinet perspective) stems from its use in illustrations by the furniture industry. Like cavalier perspective, one face of the projected object is parallel to the viewing plane, and the third axis is projected as going off at an angle (typically 30° or 45° or arctan(2) = 63.4°). Unlike cavalier projection, where the third axis keeps its length, with cabinet projection the length of the receding lines is cut in half. ==== Military projection ==== A variant of oblique projection is called military projection. In this case, the horizontal sections are isometrically drawn so that the floor plans are not distorted and the verticals are drawn at an angle. The military projection is given by rotation in the xy-plane and a vertical translation by an amount z. 
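The oblique projections just described differ only in the angle of the receding axis and its foreshortening factor; a small sketch with both as parameters (the function name and defaults are illustrative):

```python
import math

def oblique_project(x, y, z, angle_deg=45.0, foreshorten=1.0):
    """Project a 3D point onto the drawing plane.

    The x and z axes map to the drawing at true (1:1) scale; the
    receding y axis is drawn at angle_deg and scaled by foreshorten.
    Cavalier projection uses foreshorten = 1.0 (full length kept);
    cabinet projection uses foreshorten = 0.5 (receding lines halved).
    """
    a = math.radians(angle_deg)
    px = x + foreshorten * y * math.cos(a)
    py = z + foreshorten * y * math.sin(a)
    return px, py

# A unit edge along the receding y axis:
print(oblique_project(0, 1, 0, 45, 1.0))  # cavalier: full length at 45 deg
print(oblique_project(0, 1, 0, 45, 0.5))  # cabinet: half length at 45 deg
```

Points in the x–z plane (y = 0) are unchanged by either projection, which is the "true shape, full-size image of the chosen plane" noted above.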
=== Axonometric projection === Axonometric projections show an image of an object as viewed from a skew direction in order to reveal all three directions (axes) of space in one picture. Axonometric projections may be either orthographic or oblique. Axonometric instrument drawings are often used to approximate graphical perspective projections, but there is attendant distortion in the approximation. Because pictorial projections innately contain this distortion, in instrument drawings of pictorials great liberties may then be taken for economy of effort and best effect. Axonometric projection is further subdivided into three categories: isometric projection, dimetric projection, and trimetric projection, depending on the exact angle at which the view deviates from the orthogonal. A typical characteristic of orthographic pictorials is that one axis of space is usually displayed as vertical. ==== Isometric projection ==== In isometric pictorials (for methods, see Isometric projection), the direction of viewing is such that the three axes of space appear equally foreshortened, and there is a common angle of 120° between them. The distortion caused by foreshortening is uniform, therefore the proportionality of all sides and lengths are preserved, and the axes share a common scale. This enables measurements to be read or taken directly from the drawing. ==== Dimetric projection ==== In dimetric pictorials (for methods, see Dimetric projection), the direction of viewing is such that two of the three axes of space appear equally foreshortened, of which the attendant scale and angles of presentation are determined according to the angle of viewing; the scale of the third direction (vertical) is determined separately. Approximations are common in dimetric drawings. ==== Trimetric projection ==== In trimetric pictorials (for methods, see Trimetric projection), the direction of viewing is such that all of the three axes of space appear unequally foreshortened. 
The scale along each of the three axes and the angles among them are determined separately as dictated by the angle of viewing. Approximations in trimetric drawings are common. === Limitations of parallel projection === Objects drawn with parallel projection do not appear larger or smaller as they extend closer to or away from the viewer. While advantageous for architectural drawings, where measurements must be taken directly from the image, the result is a perceived distortion, since unlike perspective projection, this is not how our eyes or photography normally work. It can also easily result in situations where depth and altitude are difficult to gauge, as is shown in the illustration to the right. In this isometric drawing, the blue sphere is two units higher than the red one. However, this difference in elevation is not apparent if one covers the right half of the picture, as the boxes (which serve as clues suggesting height) are then obscured. This visual ambiguity has been exploited in op art, as well as "impossible object" drawings. M. C. Escher's Waterfall (1961), while not strictly utilizing parallel projection, is a well-known example, in which a channel of water seems to travel unaided along a downward path, only to then paradoxically fall once again as it returns to its source. The water thus appears to disobey the law of conservation of energy. An extreme example is depicted in the film Inception, where by a forced perspective trick an immobile stairway changes its connectivity. The video game Fez uses tricks of perspective to determine where a player can and cannot move in a puzzle-like fashion. == Perspective projection == Perspective projection or perspective transformation is a projection where three-dimensional objects are projected on a picture plane. This has the effect that distant objects appear smaller than nearer objects. 
It also means that lines which are parallel in nature (that is, meet at the point at infinity) appear to intersect in the projected image. For example, if railways are pictured with perspective projection, they appear to converge towards a single point, called the vanishing point. Photographic lenses and the human eye work in the same way; therefore, the perspective projection looks the most realistic. Perspective projection is usually categorized into one-point, two-point and three-point perspective, depending on the orientation of the projection plane towards the axes of the depicted object. Graphical projection methods rely on the duality between lines and points, whereby two straight lines determine a point while two points determine a straight line. The orthogonal projection of the eye point onto the picture plane is called the principal vanishing point (P.P. in the scheme on the right, from the Italian term punto principale, coined during the Renaissance). Two relevant points of a line are: its intersection with the picture plane, and its vanishing point, found at the intersection between the parallel line from the eye point and the picture plane. The principal vanishing point is the vanishing point of all horizontal lines perpendicular to the picture plane. The vanishing points of all horizontal lines lie on the horizon line. If, as is often the case, the picture plane is vertical, all vertical lines are drawn vertically, and have no finite vanishing point on the picture plane. Various graphical methods can be easily envisaged for projecting geometrical scenes. For example, lines traced from the eye point at 45° to the picture plane intersect the latter along a circle whose radius is the distance of the eye point from the plane, thus tracing that circle aids the construction of all the vanishing points of 45° lines; in particular, the intersection of that circle with the horizon line consists of two distance points. 
They are useful for drawing chessboard floors which, in turn, serve for locating the base of objects on the scene. In the perspective of a geometric solid on the right, after choosing the principal vanishing point —which determines the horizon line— the 45° vanishing point on the left side of the drawing completes the characterization of the (equally distant) point of view. Two lines are drawn from the orthogonal projection of each vertex, one at 45° and one at 90° to the picture plane. After intersecting the ground line, those lines go toward the distance point (for 45°) or the principal point (for 90°). Their new intersection locates the projection of the map. Natural heights are measured above the ground line and then projected in the same way until they meet the vertical from the map. While orthographic projection ignores perspective to allow accurate measurements, perspective projection shows distant objects as smaller to provide additional realism. === Mathematical formula === The perspective projection requires a more involved definition as compared to orthographic projections. A conceptual aid to understanding the mechanics of this projection is to imagine the 2D projection as though the object(s) are being viewed through a camera viewfinder. The camera's position, orientation, and field of view control the behavior of the projection transformation. 
The following variables are defined to describe this transformation: a x , y , z {\displaystyle \mathbf {a} _{x,y,z}} – the 3D position of a point A that is to be projected c x , y , z {\displaystyle \mathbf {c} _{x,y,z}} – the 3D position of a point C representing the camera θ x , y , z {\displaystyle \mathbf {\theta } _{x,y,z}} – the orientation of the camera (represented by Tait–Bryan angles) e x , y , z {\displaystyle \mathbf {e} _{x,y,z}} – the display surface's position relative to the aforementioned c {\displaystyle \mathbf {c} } Most conventions use positive z values (the plane being in front of the pinhole c {\displaystyle \mathbf {c} } ); negative z values are physically more correct, but then the image is inverted both horizontally and vertically. The projection produces: b x , y {\displaystyle \mathbf {b} _{x,y}} – the 2D projection of a . {\displaystyle \mathbf {a} .} When c x , y , z = ⟨ 0 , 0 , 0 ⟩ , {\displaystyle \mathbf {c} _{x,y,z}=\langle 0,0,0\rangle ,} and θ x , y , z = ⟨ 0 , 0 , 0 ⟩ , {\displaystyle \mathbf {\theta } _{x,y,z}=\langle 0,0,0\rangle ,} the 3D vector ⟨ 1 , 2 , 0 ⟩ {\displaystyle \langle 1,2,0\rangle } is projected to the 2D vector ⟨ 1 , 2 ⟩ {\displaystyle \langle 1,2\rangle } . Otherwise, to compute b x , y {\displaystyle \mathbf {b} _{x,y}} we first define a vector d x , y , z {\displaystyle \mathbf {d} _{x,y,z}} as the position of point A with respect to a coordinate system defined by the camera, with origin in C and rotated by θ {\displaystyle \mathbf {\theta } } with respect to the initial coordinate system. This is achieved by subtracting c {\displaystyle \mathbf {c} } from a {\displaystyle \mathbf {a} } and then applying a rotation by − θ {\displaystyle -\mathbf {\theta } } to the result. 
This transformation is often called a camera transform, and can be expressed as follows, expressing the rotation in terms of rotations about the x, y, and z axes (these calculations assume that the axes are ordered as a left-handed system of axes): [ d x d y d z ] = [ 1 0 0 0 cos ⁡ ( θ x ) sin ⁡ ( θ x ) 0 − sin ⁡ ( θ x ) cos ⁡ ( θ x ) ] [ cos ⁡ ( θ y ) 0 − sin ⁡ ( θ y ) 0 1 0 sin ⁡ ( θ y ) 0 cos ⁡ ( θ y ) ] [ cos ⁡ ( θ z ) sin ⁡ ( θ z ) 0 − sin ⁡ ( θ z ) cos ⁡ ( θ z ) 0 0 0 1 ] ( [ a x a y a z ] − [ c x c y c z ] ) {\displaystyle {\begin{bmatrix}\mathbf {d} _{x}\\\mathbf {d} _{y}\\\mathbf {d} _{z}\end{bmatrix}}={\begin{bmatrix}1&0&0\\0&\cos(\mathbf {\theta } _{x})&\sin(\mathbf {\theta } _{x})\\0&-\sin(\mathbf {\theta } _{x})&\cos(\mathbf {\theta } _{x})\end{bmatrix}}{\begin{bmatrix}\cos(\mathbf {\theta } _{y})&0&-\sin(\mathbf {\theta } _{y})\\0&1&0\\\sin(\mathbf {\theta } _{y})&0&\cos(\mathbf {\theta } _{y})\end{bmatrix}}{\begin{bmatrix}\cos(\mathbf {\theta } _{z})&\sin(\mathbf {\theta } _{z})&0\\-\sin(\mathbf {\theta } _{z})&\cos(\mathbf {\theta } _{z})&0\\0&0&1\end{bmatrix}}\left({{\begin{bmatrix}\mathbf {a} _{x}\\\mathbf {a} _{y}\\\mathbf {a} _{z}\\\end{bmatrix}}-{\begin{bmatrix}\mathbf {c} _{x}\\\mathbf {c} _{y}\\\mathbf {c} _{z}\\\end{bmatrix}}}\right)} This representation corresponds to rotating by three Euler angles (more properly, Tait–Bryan angles), using the xyz convention, which can be interpreted either as "rotate about the extrinsic axes (axes of the scene) in the order z, y, x (reading right-to-left)" or "rotate about the intrinsic axes (axes of the camera) in the order x, y, z (reading left-to-right)". If the camera is not rotated ( θ x , y , z = ⟨ 0 , 0 , 0 ⟩ {\displaystyle \mathbf {\theta } _{x,y,z}=\langle 0,0,0\rangle } ), then the matrices drop out (as identities), and this reduces to simply a shift: d = a − c . 
{\displaystyle \mathbf {d} =\mathbf {a} -\mathbf {c} .} Alternatively, without using matrices (let us replace a x − c x {\displaystyle a_{x}-c_{x}} with x {\displaystyle \mathbf {x} } and so on, and abbreviate cos ⁡ ( θ α ) {\displaystyle \cos \left(\theta _{\alpha }\right)} to c o s α {\displaystyle cos_{\alpha }} and sin ⁡ ( θ α ) {\displaystyle \sin \left(\theta _{\alpha }\right)} to s i n α {\displaystyle sin_{\alpha }} ): d x = c o s y ( s i n z y + c o s z x ) − s i n y z d y = s i n x ( c o s y z + s i n y ( s i n z y + c o s z x ) ) + c o s x ( c o s z y − s i n z x ) d z = c o s x ( c o s y z + s i n y ( s i n z y + c o s z x ) ) − s i n x ( c o s z y − s i n z x ) {\displaystyle {\begin{aligned}\mathbf {d} _{x}&=cos_{y}(sin_{z}\mathbf {y} +cos_{z}\mathbf {x} )-sin_{y}\mathbf {z} \\\mathbf {d} _{y}&=sin_{x}(cos_{y}\mathbf {z} +sin_{y}(sin_{z}\mathbf {y} +cos_{z}\mathbf {x} ))+cos_{x}(cos_{z}\mathbf {y} -sin_{z}\mathbf {x} )\\\mathbf {d} _{z}&=cos_{x}(cos_{y}\mathbf {z} +sin_{y}(sin_{z}\mathbf {y} +cos_{z}\mathbf {x} ))-sin_{x}(cos_{z}\mathbf {y} -sin_{z}\mathbf {x} )\end{aligned}}} This transformed point can then be projected onto the 2D plane using the formula (here, x/y is used as the projection plane; literature also may use x/z): b x = e z d z d x + e x , b y = e z d z d y + e y . 
{\displaystyle {\begin{aligned}\mathbf {b} _{x}&={\frac {\mathbf {e} _{z}}{\mathbf {d} _{z}}}\mathbf {d} _{x}+\mathbf {e} _{x},\\[5pt]\mathbf {b} _{y}&={\frac {\mathbf {e} _{z}}{\mathbf {d} _{z}}}\mathbf {d} _{y}+\mathbf {e} _{y}.\end{aligned}}} Or, in matrix form using homogeneous coordinates, the system [ f x f y f w ] = [ 1 0 e x e z 0 1 e y e z 0 0 1 e z ] [ d x d y d z ] {\displaystyle {\begin{bmatrix}\mathbf {f} _{x}\\\mathbf {f} _{y}\\\mathbf {f} _{w}\end{bmatrix}}={\begin{bmatrix}1&0&{\frac {\mathbf {e} _{x}}{\mathbf {e} _{z}}}\\0&1&{\frac {\mathbf {e} _{y}}{\mathbf {e} _{z}}}\\0&0&{\frac {1}{\mathbf {e} _{z}}}\end{bmatrix}}{\begin{bmatrix}\mathbf {d} _{x}\\\mathbf {d} _{y}\\\mathbf {d} _{z}\end{bmatrix}}} in conjunction with an argument using similar triangles, leads to division by the homogeneous coordinate, giving b x = f x / f w b y = f y / f w {\displaystyle {\begin{aligned}\mathbf {b} _{x}&=\mathbf {f} _{x}/\mathbf {f} _{w}\\\mathbf {b} _{y}&=\mathbf {f} _{y}/\mathbf {f} _{w}\end{aligned}}} The distance of the viewer from the display surface, e z {\displaystyle \mathbf {e} _{z}} , directly relates to the field of view, where α = 2 ⋅ arctan ⁡ ( 1 / e z ) {\displaystyle \alpha =2\cdot \arctan(1/\mathbf {e} _{z})} is the viewed angle. (Note: This assumes that you map the points (-1,-1) and (1,1) to the corners of your viewing surface) The above equations can also be rewritten as: b x = ( d x s x ) / ( d z r x ) r z , b y = ( d y s y ) / ( d z r y ) r z . 
{\displaystyle {\begin{aligned}\mathbf {b} _{x}&=(\mathbf {d} _{x}\mathbf {s} _{x})/(\mathbf {d} _{z}\mathbf {r} _{x})\mathbf {r} _{z},\\\mathbf {b} _{y}&=(\mathbf {d} _{y}\mathbf {s} _{y})/(\mathbf {d} _{z}\mathbf {r} _{y})\mathbf {r} _{z}.\end{aligned}}} Here, s x , y {\displaystyle \mathbf {s} _{x,y}} is the display size, r x , y {\displaystyle \mathbf {r} _{x,y}} is the recording surface size (CCD or photographic film), r z {\displaystyle \mathbf {r} _{z}} is the distance from the recording surface to the entrance pupil (camera center), and d z {\displaystyle \mathbf {d} _{z}} is the distance from the 3D point being projected to the entrance pupil. Subsequent clipping and scaling operations may be necessary to map the 2D plane onto any particular display media. === Weak perspective projection === A "weak" perspective projection uses the same principles as an orthographic projection, but requires the scaling factor to be specified, thus ensuring that closer objects appear bigger in the projection, and vice versa. It can be seen as a hybrid between an orthographic and a perspective projection, and described either as a perspective projection with individual point depths Z i {\displaystyle Z_{i}} replaced by an average constant depth Z ave {\displaystyle Z_{\text{ave}}} , or simply as an orthographic projection plus a scaling. The weak-perspective model thus approximates perspective projection while using a simpler model, similar to the pure (unscaled) orthographic projection. It is a reasonable approximation when the depth of the object along the line of sight is small compared to the distance from the camera, and the field of view is small. With these conditions, it can be assumed that all points on a 3D object are at the same distance Z ave {\displaystyle Z_{\text{ave}}} from the camera without significant errors in the projection (compared to the full perspective model).
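The distinction can be made concrete in a short sketch: full perspective divides each point by its own depth, while weak perspective divides every point by the shared average depth (hypothetical point data, with the focal scale folded into unit size):

```python
def perspective(points):
    """Full perspective: each point is divided by its own depth z."""
    return [(x / z, y / z) for x, y, z in points]

def weak_perspective(points):
    """Weak perspective: every point is divided by the average depth."""
    z_ave = sum(z for _, _, z in points) / len(points)
    return [(x / z_ave, y / z_ave) for x, y, z in points]

# A shallow object far from the camera: depth varies by only 1% around 100,
# so the two projections nearly coincide.
pts = [(1.0, 2.0, 99.0), (1.0, 2.0, 101.0)]
full = perspective(pts)       # x-coordinates 1/99 and 1/101
weak = weak_perspective(pts)  # both x-coordinates exactly 1/100
```

As the depth spread grows relative to the viewing distance, the two outputs diverge, which is exactly the condition stated above for the approximation to hold.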
The projection equations are P x = X Z ave P y = Y Z ave {\displaystyle {\begin{aligned}&P_{x}={\frac {X}{Z_{\text{ave}}}}\\[5pt]&P_{y}={\frac {Y}{Z_{\text{ave}}}}\end{aligned}}} assuming focal length f = 1 {\textstyle f=1} . == Diagram == To determine which screen x-coordinate corresponds to a point at A x , A z {\displaystyle A_{x},A_{z}} , multiply the point coordinates by: B x = A x B z A z {\displaystyle B_{x}=A_{x}{\frac {B_{z}}{A_{z}}}} where B x {\displaystyle B_{x}} is the screen x coordinate, A x {\displaystyle A_{x}} is the model x coordinate, B z {\displaystyle B_{z}} is the focal length (the axial distance from the camera center to the image plane), and A z {\displaystyle A_{z}} is the subject distance. Since the camera operates in 3D, the same principle applies to the screen's y coordinate: one can substitute y for x in the diagram and equation above. Alternatively, clipping techniques can be used. These involve substituting values of a point outside the field of view (FOV) with interpolated values from a corresponding point inside the camera's view matrix. This approach, often referred to as the inverse camera method, involves performing a perspective projection calculation using known values. It determines the last visible point along the viewing frustum by projecting from an out-of-view (invisible) point after all necessary transformations have been applied. == See also == == References == == Further reading == Kenneth C. Finney (2004). 3D Game Programming All in One. Thomson Course. p. 93. ISBN 978-1-59200-136-1. 3D projection. Koehler, Ralph (December 2000). 2D/3D Graphics and Splines with Source Code. Author Solutions Incorporated. ISBN 978-0759611870. == External links == Creating 3D Environments from Digital Photographs
Kneser–Ney smoothing
Kneser–Ney smoothing, also known as Kneser-Essen-Ney smoothing, is a method primarily used to calculate the probability distribution of n-grams in a document based on their histories. It is widely considered the most effective method of smoothing due to its use of absolute discounting, which subtracts a fixed value from each observed n-gram count and redistributes the freed probability mass to lower-order distributions. This approach has been considered equally effective for both higher and lower order n-grams. The method was proposed in a 1994 paper by Reinhard Kneser, Ute Essen and Hermann Ney. A common example that illustrates the concept behind this method is the frequency of the bigram "San Francisco". If it appears several times in a training corpus, the frequency of the unigram "Francisco" will also be high. Relying on only the unigram frequency to predict the frequencies of n-grams leads to skewed results; however, Kneser–Ney smoothing corrects this by considering the frequency of the unigram in relation to possible words preceding it. == Method == Let c ( w , w ′ ) {\displaystyle c(w,w')} be the number of occurrences of the word w {\displaystyle w} followed by the word w ′ {\displaystyle w'} in the corpus.
The equation for bigram probabilities is as follows: p K N ( w i | w i − 1 ) = max ( c ( w i − 1 , w i ) − δ , 0 ) ∑ w ′ c ( w i − 1 , w ′ ) + λ w i − 1 p K N ( w i ) {\displaystyle p_{KN}(w_{i}|w_{i-1})={\frac {\max(c(w_{i-1},w_{i})-\delta ,0)}{\sum _{w'}c(w_{i-1},w')}}+\lambda _{w_{i-1}}p_{KN}(w_{i})} Where the unigram probability p K N ( w i ) {\displaystyle p_{KN}(w_{i})} depends on how likely it is to see the word w i {\displaystyle w_{i}} in an unfamiliar context, which is estimated as the number of times it appears after any other word divided by the number of distinct pairs of consecutive words in the corpus: p K N ( w i ) = | { w ′ : 0 < c ( w ′ , w i ) } | | { ( w ′ , w ″ ) : 0 < c ( w ′ , w ″ ) } | {\displaystyle p_{KN}(w_{i})={\frac {|\{w':0<c(w',w_{i})\}|}{|\{(w',w''):0<c(w',w'')\}|}}} Note that p K N {\displaystyle p_{KN}} is a proper distribution, as the values defined in the above way are non-negative and sum to one. The parameter δ {\displaystyle \delta } is a constant which denotes the discount value subtracted from the count of each n-gram, usually between 0 and 1. The value of the normalizing constant λ w i − 1 {\displaystyle \lambda _{w_{i-1}}} is calculated to make the sum of conditional probabilities p K N ( w i | w i − 1 ) {\displaystyle p_{KN}(w_{i}|w_{i-1})} over all w i {\displaystyle w_{i}} equal to one. Observe that (provided δ < 1 {\displaystyle \delta <1} ) for each w i {\displaystyle w_{i}} which occurs at least once in the context of w i − 1 {\displaystyle w_{i-1}} in the corpus we discount the probability by exactly the same constant amount δ / ( ∑ w ′ c ( w i − 1 , w ′ ) ) {\displaystyle {\delta }/\left(\sum _{w'}c(w_{i-1},w')\right)} , so the total discount depends linearly on the number of unique words w i {\displaystyle w_{i}} that can occur after w i − 1 {\displaystyle w_{i-1}} . 
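The bigram estimate can be sketched directly in code (a toy corpus and the common choice δ = 0.75; the normalizing constant λ is chosen so the conditional probabilities sum to one, and the names here are illustrative only):

```python
from collections import Counter

def kneser_ney_bigram(tokens, delta=0.75):
    """Return p(w | prev) implementing the bigram formula above."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    context_total = Counter()   # sum over w' of c(prev, w')
    followers = Counter()       # |{w' : c(prev, w') > 0}|
    continuations = Counter()   # |{w' : c(w', w) > 0}|
    for (a, b), n in bigrams.items():
        context_total[a] += n
        followers[a] += 1
        continuations[b] += 1
    n_types = len(bigrams)      # |{(w', w'') : c(w', w'') > 0}|

    def p(w, prev):
        p_cont = continuations[w] / n_types            # unigram p_KN(w)
        total = context_total[prev]
        if total == 0:                                 # unseen context
            return p_cont
        lam = delta * followers[prev] / total          # lambda_{prev}
        return max(bigrams[(prev, w)] - delta, 0) / total + lam * p_cont
    return p

p = kneser_ney_bigram("the cat sat on the mat and the cat ran".split())
```

For an observed context such as "the", summing p(w | "the") over the vocabulary gives one (up to rounding), since the discounted mass is redistributed exactly through the continuation probabilities.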
This total discount is a budget we can spread over all p K N ( w i | w i − 1 ) {\displaystyle p_{KN}(w_{i}|w_{i-1})} proportionally to p K N ( w i ) {\displaystyle p_{KN}(w_{i})} . As the values of p K N ( w i ) {\displaystyle p_{KN}(w_{i})} sum to one, we can simply define λ w i − 1 {\displaystyle \lambda _{w_{i-1}}} to be equal to this total discount: λ w i − 1 = δ ∑ w ′ c ( w i − 1 , w ′ ) | { w ′ : 0 < c ( w i − 1 , w ′ ) } | {\displaystyle \lambda _{w_{i-1}}={\frac {\delta }{\sum _{w'}c(w_{i-1},w')}}|\{w':0<c(w_{i-1},w')\}|} This equation can be extended to n-grams. Let w i − n + 1 i − 1 {\displaystyle w_{i-n+1}^{i-1}} be the n − 1 {\displaystyle n-1} words before w i {\displaystyle w_{i}} : p K N ( w i | w i − n + 1 i − 1 ) = max ( c ( w i − n + 1 i − 1 , w i ) − δ , 0 ) ∑ w ′ c ( w i − n + 1 i − 1 , w ′ ) + δ | { w ′ : 0 < c ( w i − n + 1 i − 1 , w ′ ) } | ∑ w ′ c ( w i − n + 1 i − 1 , w ′ ) p K N ( w i | w i − n + 2 i − 1 ) {\displaystyle p_{KN}(w_{i}|w_{i-n+1}^{i-1})={\frac {\max(c(w_{i-n+1}^{i-1},w_{i})-\delta ,0)}{\sum _{w'}c(w_{i-n+1}^{i-1},w')}}+\delta {\frac {|\{w':0<c(w_{i-n+1}^{i-1},w')\}|}{\sum _{w'}c(w_{i-n+1}^{i-1},w')}}p_{KN}(w_{i}|w_{i-n+2}^{i-1})} This model uses the concept of absolute-discounting interpolation, which incorporates information from higher and lower order language models. The addition of the term for lower order n-grams adds more weight to the overall probability when the count for the higher order n-grams is zero. Similarly, the weight of the lower order model decreases when the count of the n-gram is nonzero. == Modified Kneser–Ney smoothing == Modifications of this method also exist. Chen and Goodman's 1998 paper lists and benchmarks several such modifications. Computational efficiency and scaling to multi-core systems are the focus of Chen and Goodman's 1998 modification. This approach was once used for Google Translate under a MapReduce implementation. KenLM is a performant open-source implementation. == References ==
Netvibes
Netvibes is a French subsidiary of Dassault Group that previously ran a web service offering a dashboard and feed reader. == History == === 2005–2012 === Founded in 2005 by Tariq Krim, the company provided software for personalized dashboards for real-time monitoring, social analytics, knowledge sharing, and decision support. === 2012–2025 === On February 9, 2012, Dassault Systèmes announced the acquisition of Netvibes. As of 2024, the Netvibes brand comprises three French software companies acquired by Dassault Systèmes: Exalead: founded in 2000 by François Bourdoncle, the company provided search platforms and search-based applications for consumer and business users. On June 9, 2010, Dassault Systèmes acquired the company. Netvibes: the company provided software for personalized dashboards for real-time monitoring, social analytics, knowledge sharing, and decision support. Proxem: Founded in 2007 by François-Régis Caumartin, the company provided AI-powered semantic processing software and services. On June 23, 2020, Dassault Systèmes acquired Proxem and integrated its technology into the 3DEXPERIENCE® platform to complement its information intelligence applications. === Closure === Dassault Systèmes announced in April 2025 that Netvibes.com would retire its standalone service on June 2, 2025. The company itself continues to operate from within Dassault, but the Netvibes.com service was closed. == Activities == Brand monitoring – to track clients, customers and competitors across media sources all in one place, analyze live results with third party reporting tools, and provide media monitoring dashboards for brand clients. E-reputation management – to visualize real-time online conversations and social activity online feeds, and track new trending topics. Product marketing – to create interactive product microsites, with drag-and-drop publishing interface. 
Community portals – to engage online communities. Personalized workspaces – to gather all essential company updates to support specific divisions (e.g. sales, marketing, human resources) and localizations. The software was a multi-lingual Ajax-based start page or web portal. It was organized into tabs, with each tab containing user-defined modules. Built-in Netvibes modules included an RSS/Atom feed reader, local weather forecasts, a calendar supporting iCal, bookmarks, notes, to-do lists, multiple searches, support for POP3, IMAP4 email as well as several webmail providers including Gmail, Yahoo! Mail, Hotmail, and AOL Mail, Box.net web storage, Delicious, Meebo, Flickr photos, podcast support with a built-in audio player, and several others. A page could be personalized further through the use of existing themes or by creating a personal theme. Customized tabs, feeds and modules could be shared with others individually or via the Netvibes Ecosystem. For privacy reasons, only modules with publicly available content could be shared. == References == == External links == Official website
Matrix semiring
In abstract algebra, a matrix ring is a set of matrices with entries in a ring R that form a ring under matrix addition and matrix multiplication. The set of all n × n matrices with entries in R is a matrix ring denoted Mn(R) (alternative notations: Matn(R) and Rn×n). Some sets of infinite matrices form infinite matrix rings. A subring of a matrix ring is again a matrix ring. Over a rng, one can form matrix rngs. When R is a commutative ring, the matrix ring Mn(R) is an associative algebra over R, and may be called a matrix algebra. In this setting, if M is a matrix and r is in R, then the matrix rM is the matrix M with each of its entries multiplied by r. == Examples == The set of all n × n square matrices over R, denoted Mn(R). This is sometimes called the "full ring of n-by-n matrices". The set of all upper triangular matrices over R. The set of all lower triangular matrices over R. The set of all diagonal matrices over R. This subalgebra of Mn(R) is isomorphic to the direct product of n copies of R. For any index set I, the ring of endomorphisms of the right R-module M = ⨁ i ∈ I R {\textstyle M=\bigoplus _{i\in I}R} is isomorphic to the ring C F M I ( R ) {\displaystyle \mathbb {CFM} _{I}(R)} of column finite matrices whose entries are indexed by I × I and whose columns each contain only finitely many nonzero entries. The ring of endomorphisms of M considered as a left R-module is isomorphic to the ring R F M I ( R ) {\displaystyle \mathbb {RFM} _{I}(R)} of row finite matrices. If R is a Banach algebra, then the condition of row or column finiteness in the previous point can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent sequences form a ring. Analogously of course, the matrices whose row sums are absolutely convergent series also form a ring. This idea can be used to represent operators on Hilbert spaces, for example. 
The intersection of the row-finite and column-finite matrix rings forms a ring R C F M I ( R ) {\displaystyle \mathbb {RCFM} _{I}(R)} . If R is commutative, then Mn(R) has a structure of a *-algebra over R, where the involution * on Mn(R) is matrix transposition. If A is a C*-algebra, then Mn(A) is another C*-algebra. If A is non-unital, then Mn(A) is also non-unital. By the Gelfand–Naimark theorem, there exists a Hilbert space H and an isometric *-isomorphism from A to a norm-closed subalgebra of the algebra B(H) of continuous operators; this identifies Mn(A) with a subalgebra of B(H⊕n). For simplicity, if we further suppose that H is separable and A ⊆ {\displaystyle \subseteq } B(H) is a unital C*-algebra, we can break up A into a matrix ring over a smaller C*-algebra. One can do so by fixing a projection p and hence its orthogonal projection 1 − p; one can identify A with ( p A p p A ( 1 − p ) ( 1 − p ) A p ( 1 − p ) A ( 1 − p ) ) {\textstyle {\begin{pmatrix}pAp&pA(1-p)\\(1-p)Ap&(1-p)A(1-p)\end{pmatrix}}} , where matrix multiplication works as intended because of the orthogonality of the projections. In order to identify A with a matrix ring over a C*-algebra, we require that p and 1 − p have the same "rank"; more precisely, we need that p and 1 − p are Murray–von Neumann equivalent, i.e., there exists a partial isometry u such that p = uu* and 1 − p = u*u. One can easily generalize this to matrices of larger sizes. Complex matrix algebras Mn(C) are, up to isomorphism, the only finite-dimensional simple associative algebras over the field C of complex numbers. Prior to the invention of matrix algebras, Hamilton in 1853 introduced a ring, whose elements he called biquaternions and modern authors would call tensors in C ⊗R H, that was later shown to be isomorphic to M2(C). One basis of M2(C) consists of the four matrix units (matrices with one 1 and all other entries 0); another basis is given by the identity matrix and the three Pauli matrices. 
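The matrix units mentioned above multiply by the rule E_ij E_kl = E_il when j = k and give the zero matrix otherwise, which a few lines of code can check (a sketch with integer entries; the rule holds over any ring):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def unit(i, j, n=2):
    """Matrix unit E_ij: 1 in position (i, j), 0 elsewhere."""
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

# E_01 * E_10 = E_00 (inner indices match), E_01 * E_01 = 0 (they do not)
assert matmul(unit(0, 1), unit(1, 0)) == unit(0, 0)
assert matmul(unit(0, 1), unit(0, 1)) == [[0, 0], [0, 0]]
```

The second assertion also exhibits a nilpotent element of M2(R), anticipating the zero-divisor discussion below.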
A matrix ring over a field is a Frobenius algebra, with Frobenius form given by the trace of the product: σ(A, B) = tr(AB). == Structure == The matrix ring Mn(R) can be identified with the ring of endomorphisms of the free right R-module of rank n; that is, Mn(R) ≅ EndR(Rn). Matrix multiplication corresponds to composition of endomorphisms. The ring Mn(D) over a division ring D is an Artinian simple ring, a special type of semisimple ring. The rings C F M I ( D ) {\displaystyle \mathbb {CFM} _{I}(D)} and R F M I ( D ) {\displaystyle \mathbb {RFM} _{I}(D)} are not simple and not Artinian if the set I is infinite, but they are still full linear rings. The Artin–Wedderburn theorem states that every semisimple ring is isomorphic to a finite direct product ∏ i = 1 r M n i ⁡ ( D i ) {\textstyle \prod _{i=1}^{r}\operatorname {M} _{n_{i}}(D_{i})} , for some nonnegative integer r, positive integers ni, and division rings Di. When we view Mn(C) as the ring of linear endomorphisms of Cn, those matrices which vanish on a given subspace V form a left ideal. Conversely, for a given left ideal I of Mn(C) the intersection of null spaces of all matrices in I gives a subspace of Cn. Under this construction, the left ideals of Mn(C) are in bijection with the subspaces of Cn. There is a bijection between the two-sided ideals of Mn(R) and the two-sided ideals of R. Namely, for each ideal I of R, the set of all n × n matrices with entries in I is an ideal of Mn(R), and each ideal of Mn(R) arises in this way. This implies that Mn(R) is simple if and only if R is simple. For n ≥ 2, not every left ideal or right ideal of Mn(R) arises by the previous construction from a left ideal or a right ideal in R. For example, the set of matrices whose columns with indices 2 through n are all zero forms a left ideal in Mn(R). The previous ideal correspondence actually arises from the fact that the rings R and Mn(R) are Morita equivalent. 
Roughly speaking, this means that the category of left R-modules and the category of left Mn(R)-modules are very similar. Because of this, there is a natural bijective correspondence between the isomorphism classes of left R-modules and left Mn(R)-modules, and between the isomorphism classes of left ideals of R and left ideals of Mn(R). Identical statements hold for right modules and right ideals. Through Morita equivalence, Mn(R) inherits any Morita-invariant properties of R, such as being simple, Artinian, Noetherian, prime. == Properties == If S is a subring of R, then Mn(S) is a subring of Mn(R). For example, Mn(Z) is a subring of Mn(Q). The matrix ring Mn(R) is commutative if and only if n = 0, R = 0, or R is commutative and n = 1. In fact, this is true also for the subring of upper triangular matrices. Here is an example showing two upper triangular 2 × 2 matrices that do not commute, assuming 1 ≠ 0 in R: [ 1 0 0 0 ] [ 1 1 0 0 ] = [ 1 1 0 0 ] {\displaystyle {\begin{bmatrix}1&0\\0&0\end{bmatrix}}{\begin{bmatrix}1&1\\0&0\end{bmatrix}}={\begin{bmatrix}1&1\\0&0\end{bmatrix}}} and [ 1 1 0 0 ] [ 1 0 0 0 ] = [ 1 0 0 0 ] . {\displaystyle {\begin{bmatrix}1&1\\0&0\end{bmatrix}}{\begin{bmatrix}1&0\\0&0\end{bmatrix}}={\begin{bmatrix}1&0\\0&0\end{bmatrix}}.} For n ≥ 2, the matrix ring Mn(R) over a nonzero ring has zero divisors and nilpotent elements; the same holds for the ring of upper triangular matrices. An example in 2 × 2 matrices would be [ 0 1 0 0 ] [ 0 1 0 0 ] = [ 0 0 0 0 ] . {\displaystyle {\begin{bmatrix}0&1\\0&0\end{bmatrix}}{\begin{bmatrix}0&1\\0&0\end{bmatrix}}={\begin{bmatrix}0&0\\0&0\end{bmatrix}}.} The center of Mn(R) consists of the scalar multiples of the identity matrix, In, in which the scalar belongs to the center of R. The unit group of Mn(R), consisting of the invertible matrices under multiplication, is denoted GLn(R). If F is a field, then for any two matrices A and B in Mn(F), the equality AB = In implies BA = In. 
This is not true for every ring R though. A ring R whose matrix rings all have the mentioned property is known as a stably finite ring (Lam 1999, p. 5). == Matrix semiring == In fact, R needs to be only a semiring for Mn(R) to be defined. In this case, Mn(R) is a semiring, called the matrix semiring. Similarly, if R is a commutative semiring, then Mn(R) is a matrix semialgebra. For example, if R is the Boolean semiring (the two-element Boolean algebra R = {0, 1} with 1 + 1 = 1), then Mn(R) is the semiring of binary relations on an n-element set with union as addition, composition of relations as multiplication, the empty relation (zero matrix) as the zero, and the identity relation (identity matrix) as the unity. == See also == Central simple algebra Clifford algebra Hurwitz's theorem (normed division algebras) Generic matrix ring Sylvester's law of inertia == Citations == == References ==
Google Neural Machine Translation
Google Neural Machine Translation (GNMT) was a neural machine translation (NMT) system developed by Google and introduced in November 2016 that used an artificial neural network to increase fluency and accuracy in Google Translate. The neural network consisted of two main blocks, an encoder and a decoder, both of LSTM architecture with 8 1024-wide layers each and a simple 1-layer 1024-wide feedforward attention mechanism connecting them. The total number of parameters has been variously described as over 160 million, approximately 210 million, 278 million or 380 million. It used a WordPiece tokenizer and a beam search decoding strategy. It ran on Tensor Processing Units. By 2020, the system had been replaced by another deep learning system based on a Transformer encoder and an RNN decoder. GNMT improved on the quality of translation by applying an example-based (EBMT) machine translation method in which the system learns from millions of examples of language translation. GNMT's proposed architecture of system learning was first tested on over a hundred languages supported by Google Translate. With the large end-to-end framework, the system learns over time to create better, more natural translations. GNMT attempts to translate whole sentences at a time, rather than just piece by piece. The GNMT network can undertake interlingual machine translation by encoding the semantics of the sentence, rather than by memorizing phrase-to-phrase translations. == History == The Google Brain project was established in 2011 in the "secretive Google X research lab" by Google Fellow Jeff Dean, Google Researcher Greg Corrado, and Stanford University Computer Science professor Andrew Ng. Ng's work has led to some of the biggest breakthroughs at Google and Stanford. In November 2016, the Google Neural Machine Translation system (GNMT) was introduced.
Since then, Google Translate has used neural machine translation (NMT) in preference to its previous statistical methods (SMT), which had been used since October 2007 with proprietary, in-house SMT technology. Training GNMT was a big effort at the time and took, by a 2018 OpenAI estimate, on the order of 79 petaFLOP-days (or 7e21 FLOPs) of compute, which was 1.5 orders of magnitude larger than the seq2seq model of 2014 (but about 2x smaller than GPT-J-6B in 2021). Google Translate's NMT system uses a large artificial neural network capable of deep learning. By using millions of examples, GNMT improves the quality of translation, using broader context to deduce the most relevant translation. The result is then rearranged and adapted to approach grammatically based human language. GNMT's proposed architecture of system learning was first tested on over a hundred languages supported by Google Translate. GNMT did not create its own universal interlingua but rather aimed at finding the commonality between many languages using insights from psychology and linguistics. The new translation engine was first enabled for eight languages: to and from English and French, German, Spanish, Portuguese, Chinese, Japanese, Korean and Turkish in November 2016. In March 2017, three additional languages were enabled: Russian, Hindi and Vietnamese, along with Thai, for which support was added later. Support for Hebrew and Arabic was also added with help from the Google Translate Community in the same month. In mid-April 2017 Google Netherlands announced support for Dutch and other European languages related to English. Further support was added for nine Indian languages: Hindi, Bengali, Marathi, Gujarati, Punjabi, Tamil, Telugu, Malayalam and Kannada at the end of April 2017. By 2020, Google had changed methodology to use a different neural network system based on transformers, and had phased out GNMT.
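The compute figure quoted above follows from one line of arithmetic: a petaFLOP/s-day is 10^15 FLOP/s sustained for a day (86,400 seconds):

```python
pfs_day = 1e15 * 86_400   # FLOPs in one petaFLOP/s-day: 8.64e19
total = 79 * pfs_day      # about 6.8e21, i.e. on the order of 7e21 FLOPs
```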
== Evaluation == The GNMT system was said to represent an improvement over the former Google Translate in that it would be able to handle "zero-shot translation", that is, it could translate directly between language pairs it was never explicitly trained on. For example, it might be trained just for Japanese-English and Korean-English translation, but can perform Japanese-Korean translation. The system appears to have learned to produce a language-independent intermediate representation of language (an "interlingua"), which allows it to perform zero-shot translation by converting from and to the interlingua. Google Translate previously first translated the source language into English and then translated the English into the target language rather than translating directly from one language to another. A July 2019 study in Annals of Internal Medicine found that "Google Translate is a viable, accurate tool for translating non–English-language trials". Only one disagreement between reviewers reading machine-translated trials was due to a translation error. Since many medical studies are excluded from systematic reviews because the reviewers do not understand the language, GNMT has the potential to reduce bias and improve accuracy in such reviews. == Languages supported by GNMT == As of December 2021, all of the languages of Google Translate support GNMT, with Latin being the most recent addition. == See also == == References == == External links == Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation The Advantages and Disadvantages of Machine Translation Statistical Machine Translation International Association for Machine Translation (IAMT) Archived June 24, 2010, at the Wayback Machine Machine Translation Archive Archived April 1, 2019, at the Wayback Machine by John Hutchins.
An electronic repository (and bibliography) of articles, books and papers in the field of machine translation and computer-based translation technology Machine translation (computer-based translation) – Publications by John Hutchins (includes PDFs of several books on machine translation)
Calderón projector
In applied mathematics, the Calderón projector is a pseudo-differential operator used widely in boundary element methods. It is named after Alberto Calderón. == Definition == The interior Calderón projector is defined to be: C = ( ( 1 − σ ) I d − K V W σ I d + K ′ ) , {\displaystyle {\mathcal {C}}=\left({\begin{array}{cc}(1-\sigma ){\mathsf {Id}}-{\mathsf {K}}&{\mathsf {V}}\\{\mathsf {W}}&\sigma {\mathsf {Id}}+{\mathsf {K}}'\end{array}}\right),} where σ {\displaystyle \sigma } is 1 2 {\displaystyle {\tfrac {1}{2}}} almost everywhere, I d {\displaystyle {\mathsf {Id}}} is the identity boundary operator, K {\displaystyle {\mathsf {K}}} is the double layer boundary operator, V {\displaystyle {\mathsf {V}}} is the single layer boundary operator, K ′ {\displaystyle {\mathsf {K}}'} is the adjoint double layer boundary operator and W {\displaystyle {\mathsf {W}}} is the hypersingular boundary operator. The exterior Calderón projector is defined to be: C = ( σ I d + K − V − W ( 1 − σ ) I d − K ′ ) . {\displaystyle {\mathcal {C}}=\left({\begin{array}{cc}\sigma {\mathsf {Id}}+{\mathsf {K}}&-{\mathsf {V}}\\-{\mathsf {W}}&(1-\sigma ){\mathsf {Id}}-{\mathsf {K}}'\end{array}}\right).} == References ==
Google DeepMind
DeepMind Technologies Limited, trading as Google DeepMind or simply DeepMind, is a British–American artificial intelligence research laboratory which serves as a subsidiary of Alphabet Inc. Founded in the UK in 2010, it was acquired by Google in 2014 and merged with Google AI's Google Brain division to become Google DeepMind in April 2023. The company is headquartered in London, with research centres in the United States, Canada, France, Germany, and Switzerland. DeepMind introduced neural Turing machines (neural networks that can access external memory like a conventional Turing machine), resulting in a computer that loosely resembles short-term memory in the human brain. DeepMind has created neural network models to play video games and board games. It made headlines in 2016 after its AlphaGo program beat human professional Go player Lee Sedol, a world champion, in a five-game match, which was the subject of a documentary film. A more general program, AlphaZero, beat the most powerful programs playing go, chess and shogi (Japanese chess) after a few days of play against itself using reinforcement learning. In 2020, DeepMind made significant advances in the problem of protein folding with AlphaFold. In July 2022, it was announced that over 200 million predicted protein structures, representing virtually all known proteins, would be released on the AlphaFold database. AlphaFold's database of predictions achieved state-of-the-art records on benchmark tests for protein folding algorithms, although each individual prediction still requires confirmation by experimental tests. AlphaFold3 was released in May 2024, making structural predictions for the interaction of proteins with various molecules. It achieved new standards on various benchmarks, raising the state-of-the-art accuracies from 28 and 52 percent to 65 and 76 percent. == History == The start-up was founded by Demis Hassabis, Shane Legg and Mustafa Suleyman in November 2010.
Hassabis and Legg first met at the Gatsby Computational Neuroscience Unit at University College London (UCL). Demis Hassabis has said that the start-up began working on artificial intelligence technology by teaching it how to play old games from the seventies and eighties, which are relatively primitive compared to the ones that are available today. Some of those games included Breakout, Pong, and Space Invaders. The AI was introduced to one game at a time, without any prior knowledge of its rules. After spending some time learning the game, the AI would eventually become an expert in it. "The cognitive processes which the AI goes through are said to be very like those a human who had never seen the game would use to understand and attempt to master it." The goal of the founders was to create a general-purpose AI that can be useful and effective for almost anything. Major venture capital firms Horizons Ventures and Founders Fund invested in the company, as well as entrepreneurs Scott Banister, Peter Thiel, and Elon Musk. Jaan Tallinn was an early investor and an adviser to the company. On 26 January 2014, Google confirmed that it had agreed to acquire DeepMind Technologies for a price reportedly ranging between $400 million and $650 million. The sale to Google took place after Facebook reportedly ended negotiations with DeepMind Technologies in 2013. The company was afterwards renamed Google DeepMind and kept that name for about two years. In 2014, DeepMind received the "Company of the Year" award from Cambridge Computer Laboratory. In September 2015, DeepMind and the Royal Free NHS Trust signed their initial information sharing agreement to co-develop a clinical task management app, Streams. After Google's acquisition, the company established an artificial intelligence ethics board. The ethics board for AI research remains a mystery, with both Google and DeepMind declining to reveal who sits on the board.
In October 2017, DeepMind launched DeepMind Ethics and Society, a new research unit focused on the ethical and societal questions raised by artificial intelligence, featuring prominent philosopher Nick Bostrom as an advisor. In December 2019, co-founder Suleyman announced he would be leaving DeepMind to join Google, working in a policy role. In March 2024, Microsoft appointed him as the EVP and CEO of its newly created consumer AI unit, Microsoft AI. In April 2023, DeepMind merged with Google AI's Google Brain division to form Google DeepMind, as part of the company's continued efforts to accelerate work on AI in response to OpenAI's ChatGPT. This marked the end of a years-long struggle by DeepMind executives to secure greater autonomy from Google. == Products and technologies == Google Research released a paper in 2016 regarding AI safety and avoiding undesirable behaviour during the AI learning process. In 2017 DeepMind released GridWorld, an open-source testbed for evaluating whether an algorithm learns to disable its kill switch or otherwise exhibits certain undesirable behaviours. In July 2018, researchers from DeepMind trained one of its systems to play the computer game Quake III Arena. As of 2020, DeepMind had published over a thousand papers, including thirteen papers that were accepted by Nature or Science. DeepMind received media attention during the AlphaGo period; according to a LexisNexis search, 1,842 published news stories mentioned DeepMind in 2016, declining to 1,363 in 2019. === Games === Unlike earlier AIs, such as IBM's Deep Blue or Watson, which were developed for a pre-defined purpose and only function within that scope, DeepMind's initial algorithms were intended to be general. They used reinforcement learning, an approach that learns from experience using only raw pixels as data input. Their initial approach used deep Q-learning with a convolutional neural network. 
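The Q-learning idea behind this approach can be illustrated with a tabular sketch on a toy problem (the chain environment, hyperparameters, and all names below are illustrative assumptions, not DeepMind's code; DeepMind's deep Q-network replaces the table with a convolutional network that maps raw screen pixels to Q-values):

```python
import numpy as np

# Tabular Q-learning sketch on a toy 5-state chain: the agent
# starts in state 0 and is rewarded only for reaching state 4.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action):
    """Deterministic toy dynamics: move one step along the chain."""
    nxt = max(0, state - 1) if action == 0 else state + 1
    return nxt, (1.0 if nxt == n_states - 1 else 0.0)

for _ in range(500):                  # 500 short episodes
    s = 0
    for _ in range(200):              # cap episode length
        if rng.random() < epsilon:    # epsilon-greedy exploration
            a = int(rng.integers(n_actions))
        else:                         # greedy, with random tie-breaking
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s2, r = step(s, a)
        # Q-learning update: bootstrap on the best next-state value
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == n_states - 1:
            break

# After training, the greedy policy moves right in every state.
print([int(np.argmax(Q[s])) for s in range(n_states - 1)])  # → [1, 1, 1, 1]
```

The same update rule drives DQN; the difference is that the network generalises across the enormous state space of raw game frames, where a table would be infeasible.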
They tested the system on video games, notably early arcade games, such as Space Invaders or Breakout. Without altering the code, the same AI was able to play certain games more efficiently than any human ever could. In 2013, DeepMind published research on an AI system that surpassed human abilities in games such as Pong, Breakout and Enduro, while achieving state-of-the-art performance on Seaquest, Beamrider, and Q*bert. This work reportedly led to the company's acquisition by Google. DeepMind's AI had been applied to video games made in the 1970s and 1980s; work was ongoing for more complex 3D games such as Quake, which first appeared in the 1990s. In 2020, DeepMind published Agent57, an AI agent which surpasses human-level performance on all 57 games of the Atari 2600 suite. In July 2022, DeepMind announced the development of DeepNash, a model-free multi-agent reinforcement learning system capable of playing the board game Stratego at the level of a human expert. ==== AlphaGo and successors ==== In October 2015, a computer Go program called AlphaGo, developed by DeepMind, beat the European Go champion Fan Hui, a 2 dan (out of a possible 9 dan) professional, five to zero. This was the first time an artificial intelligence (AI) defeated a professional Go player. Previously, computers were only known to have played Go at "amateur" level. Go is considered much more difficult for computers to win than other games like chess, due to the much larger number of possibilities, which makes it prohibitively difficult for traditional AI methods such as brute-force search. In March 2016 it beat Lee Sedol, one of the highest-ranked players in the world, with a score of 4 to 1 in a five-game match. In the 2017 Future of Go Summit, AlphaGo won a three-game match against Ke Jie, who had been the world's highest-ranked player for two years. In 2017, an improved version, AlphaGo Zero, defeated AlphaGo in a hundred out of a hundred games. 
Later that year, AlphaZero, a modified version of AlphaGo Zero, gained superhuman abilities at chess and shogi. In 2019, DeepMind released a new model named MuZero that mastered the domains of Go, chess, shogi, and Atari 2600 games without human data, domain knowledge, or known rules. AlphaGo technology was developed based on deep reinforcement learning, making it different from the AI technologies then on the market. The data fed into the AlphaGo algorithm consisted of various moves based on historical tournament data. The number of moves was increased gradually until over 30 million of them were processed. The aim was to have the system mimic the human player, as represented by the input data, and eventually become better. It played against itself and learned from the outcomes; thus, it learned to improve itself over time and increased its winning rate as a result. AlphaGo used two deep neural networks: a policy network to evaluate move probabilities and a value network to assess positions. The policy network was trained via supervised learning and subsequently refined by policy-gradient reinforcement learning. The value network learned to predict winners of games played by the policy network against itself. After training, these networks employed a lookahead Monte Carlo tree search, using the policy network to identify candidate high-probability moves, while the value network (in conjunction with Monte Carlo rollouts using a fast rollout policy) evaluated tree positions. In contrast, AlphaGo Zero was trained without being fed data of human-played games. Instead it generated its own data, playing millions of games against itself. It used a single neural network, rather than separate policy and value networks. Its simplified tree search relied upon this neural network to evaluate positions and sample moves. A new reinforcement learning algorithm incorporated lookahead search inside the training loop. 
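The division of labour between the two networks during search can be sketched with the PUCT-style selection rule used by this family of programs (the child statistics below are made-up stand-ins for network outputs, not values from AlphaGo):

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximising Q + U, where Q is the averaged
    value estimate and U is an exploration bonus proportional to
    the policy network's prior and shrinking with visit count."""
    total_visits = sum(ch["visits"] for ch in children)

    def score(ch):
        q = ch["value_sum"] / ch["visits"] if ch["visits"] else 0.0
        u = c_puct * ch["prior"] * math.sqrt(total_visits) / (1 + ch["visits"])
        return q + u

    return max(children, key=score)

# Hypothetical root children: priors from the policy network,
# values accumulated from value-network evaluations and rollouts.
children = [
    {"move": "a", "prior": 0.6, "visits": 10, "value_sum": 4.0},
    {"move": "b", "prior": 0.3, "visits": 2,  "value_sum": 1.4},
    {"move": "c", "prior": 0.1, "visits": 0,  "value_sum": 0.0},
]
best = puct_select(children)
print(best["move"])  # → b
```

Note how move "b" wins despite a lower prior than "a": its averaged value is higher and it has been visited less, so the rule balances the policy network's suggestions against the value estimates gathered so far.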
Developing AlphaGo Zero required around 15 people and millions of dollars in computing resources. Ultimately, it needed much less computing power than AlphaGo, running on four specialized AI processors (Google TPUs), instead of AlphaGo's 48. It also required less training time, being able to beat its predecessor after just three days, compared with the months required for the original AlphaGo. Similarly, AlphaZero also learned via self-play. Researchers applied MuZero to the real-world challenge of compressing video within a set number of bits, a problem relevant to Internet traffic on sites such as YouTube, Twitch, and Google Meet. The goal was to compress the video optimally, so that its quality is maintained while the amount of data is reduced. The final result using MuZero was a 6.28% average reduction in bitrate. ==== AlphaStar ==== In 2016, Hassabis discussed the game StarCraft as a future challenge, since it requires strategic thinking and handling imperfect information. In January 2019, DeepMind introduced AlphaStar, a program playing the real-time strategy game StarCraft II. AlphaStar used reinforcement learning based on replays from human players, and then played against itself to enhance its skills. At the time of the presentation, AlphaStar had knowledge equivalent to 200 years of playing time. It won 10 consecutive matches against two professional players, although it had the unfair advantage of being able to see the entire field, unlike a human player who has to move the camera manually. A preliminary version in which that advantage was fixed lost a subsequent match. In July 2019, AlphaStar began playing against random humans on the public 1v1 European multiplayer ladder. Unlike the first iteration of AlphaStar, which played only Protoss v. Protoss, this one played as all of the game's races, and had earlier unfair advantages fixed. 
By October 2019, AlphaStar had reached Grandmaster level on the StarCraft II ladder on all three StarCraft races, becoming the first AI to reach the top league of a widely popular esport without any game restrictions. === Protein folding === In 2016, DeepMind turned its artificial intelligence to protein folding, a long-standing problem in molecular biology. In December 2018, DeepMind's AlphaFold won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) by successfully predicting the most accurate structure for 25 out of 43 proteins. "This is a lighthouse project, our first major investment in terms of people and resources into a fundamental, very important, real-world scientific problem," Hassabis said to The Guardian. In 2020, in the 14th CASP, AlphaFold's predictions achieved an accuracy score regarded as comparable with lab techniques. Dr Andriy Kryshtafovych, one of the panel of scientific adjudicators, described the achievement as "truly remarkable", and said the problem of predicting how proteins fold had been "largely solved". In July 2021, the open-source RoseTTAFold and AlphaFold2 were released to allow scientists to run their own versions of the tools. A week later DeepMind announced that AlphaFold had completed its prediction of nearly all human proteins as well as the entire proteomes of 20 other widely studied organisms. The structures were released on the AlphaFold Protein Structure Database. In July 2022, it was announced that the predictions of over 200 million proteins, representing virtually all known proteins, would be released on the AlphaFold database. The most recent update, AlphaFold3, was released in May 2024, predicting the interactions of proteins with DNA, RNA, and various other molecules. In a particular benchmark test on the problem of DNA interactions, AlphaFold3 attained an accuracy of 65%, significantly improving on the previous state of the art of 28%. 
In October 2024, Hassabis and John Jumper jointly received half of the 2024 Nobel Prize in Chemistry for protein structure prediction, citing the achievement of AlphaFold2. === Language models === In 2016, DeepMind introduced WaveNet, a text-to-speech system. It was originally too computationally intensive for use in consumer products, but in late 2017 it became ready for use in consumer applications such as Google Assistant. In 2018 Google launched a commercial text-to-speech product, Cloud Text-to-Speech, based on WaveNet. In 2018, DeepMind introduced a more efficient model called WaveRNN, co-developed with Google AI. In 2020 WaveNetEQ, a packet loss concealment method based on a WaveRNN architecture, was presented. In 2019, Google started rolling out WaveRNN with WaveNetEQ to Google Duo users. Released in May 2022, Gato is a polyvalent multimodal model. It was trained on 604 tasks, such as image captioning, dialogue, or stacking blocks. On 450 of these tasks, Gato outperformed human experts at least half of the time, according to DeepMind. Unlike models like MuZero, Gato does not need to be retrained to switch from one task to another. Sparrow is an artificial intelligence-powered chatbot developed by DeepMind to build safer machine learning systems by using a mix of human feedback and Google search suggestions. Chinchilla is a language model developed by DeepMind. In a blog post on 28 April 2022, DeepMind presented a single visual language model (VLM) named Flamingo that can accurately describe a picture of something with just a few training images. ==== AlphaCode ==== In 2022, DeepMind unveiled AlphaCode, an AI-powered coding engine that creates computer programs at a rate comparable to that of an average programmer, with the company testing the system against coding challenges created by Codeforces and utilized in human competitive programming competitions. 
After being trained on GitHub data and Codeforces problems and solutions, AlphaCode earned an estimated rank within the top 54% of Codeforces participants. The program was required to come up with a unique solution and was prevented from duplicating answers. ==== Gemini ==== Gemini is a multimodal large language model which was released on 6 December 2023. It is the successor of Google's LaMDA and PaLM 2 language models and sought to challenge OpenAI's GPT-4. Gemini comes in three sizes: Nano, Pro, and Ultra. Gemini is also the name of the chatbot that integrates Gemini (and which was previously called Bard). On 12 December 2024, Google released Gemini 2.0 Flash, the first model in the Gemini 2.0 series. It notably features expanded multimodality, with the ability to also generate images and audio, and is part of Google's broader plans to integrate advanced AI into autonomous agents. On 25 March 2025, Google released Gemini 2.5, a reasoning model that stops to "think" before giving a response. Google announced that all future models will also have reasoning ability. On 30 March 2025, Google released Gemini 2.5 to all free users. ==== Gemma ==== Gemma is a collection of open-weight large language models. The first ones were released on 21 February 2024 and are available in two distinct sizes: a 7 billion parameter model optimized for GPU and TPU usage, and a 2 billion parameter model designed for CPU and on-device applications. Gemma models were trained on up to 6 trillion tokens of text, employing similar architectures, datasets, and training methodologies as the Gemini model set. In June 2024, Google started releasing Gemma 2 models. In December 2024, Google introduced PaliGemma 2, an upgraded vision-language model. In February 2025, they launched PaliGemma 2 Mix, a version fine-tuned for multiple tasks. It is available in 3B, 10B, and 28B parameter sizes with 224px and 448px resolutions. 
In March 2025, Google released Gemma 3, calling it the most capable model that can be run on a single GPU. It has four available sizes: 1B, 4B, 12B, and 27B. In March 2025, Google introduced TxGemma, an open-source model designed to improve the efficiency of therapeutics development. In April 2025, Google introduced DolphinGemma, a research artificial intelligence model designed to help decode dolphin communication. The aim is to train a foundation model that can learn the structure of dolphin vocalizations and generate novel dolphin-like sound sequences. ==== SIMA ==== In March 2024, DeepMind introduced the Scalable Instructable Multiworld Agent, or SIMA, an AI agent capable of understanding and following natural language instructions to complete tasks across various 3D virtual environments. Trained on nine video games from eight studios and four research environments, SIMA demonstrated adaptability to new tasks and settings without requiring access to game source code or APIs. The agent comprises pre-trained computer vision and language models fine-tuned on gaming data, with language being crucial for understanding and completing given tasks as instructed. DeepMind's research aimed to develop more helpful AI agents by translating advanced AI capabilities into real-world actions through a language interface. ==== Habermas machine ==== In 2024, Google DeepMind published the results of an experiment in which it trained two large language models to help identify and present areas of overlap among a few thousand group members recruited online, using techniques like sortition to get a representative sample of participants. The project is named in honor of Jürgen Habermas. In one experiment, the participants rated the summaries by the AI higher than those of the human moderator 56% of the time. === Video generation === In May 2024, a multimodal video generation model called Veo was announced at Google I/O 2024. 
Google claimed that it could generate 1080p videos beyond a minute long. In December 2024, Google released Veo 2, available via VideoFX. It supports 4K resolution video generation, and has an improved understanding of physics. In April 2025, Google announced that Veo 2 became available for advanced users on Gemini App. In May 2025, Google released Veo 3, which not only generates videos but also creates synchronized audio — including dialogue, sound effects, and ambient noise — to match the visuals. Google also announced Flow, a video-creation tool powered by Veo and Imagen. === Music generation === Google DeepMind developed Lyria, a text-to-music model. As of April 2025, it is available in preview mode on Vertex AI. === Environment generation === In March 2023, DeepMind introduced "Genie" (Generative Interactive Environments), an AI model that can generate game-like, action-controllable virtual worlds based on textual descriptions, images, or sketches. Built as an autoregressive latent diffusion model, Genie enables frame-by-frame interactivity without requiring labeled action data for training. Its successor, Genie 2, released in December 2024, expanded these capabilities to generate diverse and interactive 3D environments. === Robotics === Released in June 2023, RoboCat is an AI model that can control robotic arms. The model can adapt to new models of robotic arms, and to new types of tasks. In March 2025, DeepMind launched two AI models, Gemini Robotics and Gemini Robotics-ER, aimed at improving how robots interact with the physical world. === Sports === DeepMind researchers have applied machine learning models to the sport of football, often referred to as soccer in North America, modelling the behaviour of football players, including the goalkeeper, defenders, and strikers during different scenarios such as penalty kicks. 
The researchers used heat maps and cluster analysis to organize players based on their tendency to behave a certain way during the game when confronted with a decision on how to score or prevent the other team from scoring. The researchers mention that machine learning models could be used to democratize the football industry by automatically selecting interesting video clips of the game that serve as highlights. This can be done by searching videos for certain events, which is possible because video analysis is an established field of machine learning. This is also possible because of extensive sports analytics based on data including annotated passes or shots, sensors that capture data about the players' movements many times over the course of a game, and game theory models. === Archaeology === Google has unveiled a new archaeology document program, named Ithaca after the Greek island in Homer's Odyssey. This deep neural network helps researchers restore the missing text of damaged Greek documents and identify their date and geographical origin. The work builds on another text analysis network that DeepMind released in 2019, named Pythia. Ithaca achieves 62% accuracy in restoring damaged texts and 71% accuracy in identifying their location, and can date texts to within 30 years. The authors claimed that the use of Ithaca by "expert historians" raised the accuracy of their work from 25 to 72 percent. However, Eleanor Dickey noted that this test was actually made up only of students, saying that it wasn't clear how helpful Ithaca would be to "genuinely qualified editors". The team is working on extending the model to other ancient languages, including Demotic, Akkadian, Hebrew, and Mayan. === Materials science === In November 2023, Google DeepMind announced an Open Source Graph Network for Materials Exploration (GNoME). 
The tool proposes millions of materials previously unknown to chemistry, including several hundred thousand stable crystalline structures, of which 736 had been experimentally produced by the Massachusetts Institute of Technology at the time of the release. However, according to Anthony Cheetham, GNoME did not make "a useful, practical contribution to the experimental materials scientists." In a review article, Cheetham and Ram Seshadri were unable to identify any "strikingly novel" materials found by GNoME, with most being minor variants of already-known materials. === Mathematics === ==== AlphaTensor ==== In October 2022, DeepMind released AlphaTensor, which used reinforcement learning techniques similar to those in AlphaGo to find novel algorithms for matrix multiplication. In the special case of multiplying two 4×4 matrices with integer entries, where only the evenness or oddness of the entries is recorded, AlphaTensor found an algorithm requiring only 47 distinct multiplications; the previous optimum, known since 1969, was the more general Strassen algorithm, using 49 multiplications. Computer scientist Josh Alman described AlphaTensor as "a proof of concept for something that could become a breakthrough," while Virginia Vassilevska Williams called it "a little overhyped," despite also acknowledging its basis in reinforcement learning as "something completely different" from previous approaches. ==== AlphaGeometry ==== AlphaGeometry is a neuro-symbolic AI that was able to solve 25 out of 30 geometry problems of the International Mathematical Olympiad, a performance comparable to that of a gold medalist. Traditional geometry programs are symbolic engines that rely exclusively on human-coded rules to generate rigorous proofs, which makes them lack flexibility in unusual situations. AlphaGeometry combines such a symbolic engine with a specialized large language model trained on synthetic data of geometrical proofs. 
When the symbolic engine doesn't manage to find a formal and rigorous proof on its own, it solicits the large language model, which suggests a geometrical construct to move forward. However, it is unclear how applicable this method is to other domains of mathematics or reasoning, because symbolic engines rely on domain-specific rules and because of the need for synthetic data. ==== AlphaProof ==== AlphaProof is an AI model that couples a pre-trained language model with the AlphaZero reinforcement learning algorithm. AlphaZero has previously taught itself how to master games. The pre-trained language model used in this combination is a fine-tuned Gemini model that automatically translates natural language problem statements into formal statements, creating a large library of formal problems of varying difficulty. For this purpose, mathematical statements are defined in the formal language Lean. At the 2024 International Mathematical Olympiad, AlphaProof, together with an adapted version of AlphaGeometry, reached for the first time the problem-solving level of a silver medalist in the combined categories of that competition. === AlphaDev === In June 2023, DeepMind announced that AlphaDev, which searches for improved computer science algorithms using reinforcement learning, discovered a more efficient way of coding a sorting algorithm and a hashing algorithm. The new sorting algorithm was 70% faster for shorter sequences and 1.7% faster for sequences exceeding 250,000 elements, and the new hashing algorithm was 30% faster in some cases. The sorting algorithm was accepted into the C++ Standard Library sorting algorithms, making it the first change to those algorithms in more than a decade and the first update to involve an algorithm discovered using AI. The hashing algorithm was released to an open-source library. Google estimates that these two algorithms are used trillions of times every day. 
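The routines AlphaDev worked on are short, fixed-length sorting networks. A generic three-element network is sketched below for illustration; this is the kind of branch-light code path AlphaDev optimised at the assembly level, not the specific instruction sequence it discovered:

```python
def sort3(a, b, c):
    """Sorting network for three elements: three compare-exchange
    steps in a fixed order, with no loops. At the machine level,
    each step compiles to a comparison plus conditional moves,
    which is where AlphaDev found instructions to eliminate."""
    if a > b:
        a, b = b, a   # compare-exchange (a, b)
    if b > c:
        b, c = c, b   # compare-exchange (b, c)
    if a > b:
        a, b = b, a   # compare-exchange (a, b) again
    return a, b, c

print(sort3(3, 1, 2))  # → (1, 2, 3)
```

Because the sequence of compare-exchanges is fixed regardless of the input, such routines can be searched over and verified exhaustively, which is what makes them amenable to the reinforcement-learning search described above.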
=== AlphaEvolve === In May 2025, Google DeepMind unveiled AlphaEvolve, an evolutionary coding agent using LLMs like Gemini to design optimized algorithms. AlphaEvolve begins each optimization process with an initial algorithm and metrics to evaluate the quality of a solution. At each step, it uses the LLM to generate variations of the algorithms or combine them, and selects the best candidates for further iterations. AlphaEvolve has made several algorithmic discoveries, including in matrix multiplication. According to Google, when tested on 50 open mathematical problems, AlphaEvolve was able to match the efficiency of state-of-the-art algorithms in 75% of cases, and discovered improved solutions 20% of the time, such as with the kissing number problem in 11 dimensions. It also developed a new heuristic for data center scheduling, recovering on average 0.7% of Google's worldwide compute resources. === Chip design === AlphaChip is a reinforcement learning-based neural architecture that guides the task of chip placement. DeepMind claimed that the time needed to create chip layouts fell from weeks to hours. Its chip designs have been used in every Tensor Processing Unit (TPU) iteration since 2020. === Miscellaneous contributions to Google === Google has stated that DeepMind algorithms have greatly increased the efficiency of cooling its data centers by automatically balancing the cost of hardware failures against the cost of cooling. In addition, DeepMind (alongside other Alphabet AI researchers) assists with Google Play's personalized app recommendations. DeepMind has also collaborated with the Android team at Google to create two new features made available to people with devices running Android Pie, the ninth installment of Google's mobile operating system. These features, Adaptive Battery and Adaptive Brightness, use machine learning to conserve energy and make devices running the operating system easier to use. 
It is the first time DeepMind has used these techniques on such a small scale, with typical machine learning applications requiring orders of magnitude more computing power. == DeepMind Health == In July 2016, a collaboration between DeepMind and Moorfields Eye Hospital was announced to develop AI applications for healthcare. DeepMind's technology would be applied to the analysis of anonymised eye scans, searching for early signs of diseases leading to blindness. In August 2016, a research programme with University College London Hospital was announced with the aim of developing an algorithm that can automatically differentiate between healthy and cancerous tissues in head and neck areas. There are also projects with the Royal Free London NHS Foundation Trust and Imperial College Healthcare NHS Trust to develop new clinical mobile apps linked to electronic patient records. Staff at the Royal Free Hospital were reported as saying in December 2017 that access to patient data through the app had saved a 'huge amount of time' and made a 'phenomenal' difference to the management of patients with acute kidney injury. Test result data is sent to staff's mobile phones and alerts them to changes in the patient's condition. It also enables staff to see if someone else has responded, and to show patients their results in visual form. In November 2017, DeepMind announced a research partnership with the Cancer Research UK Centre at Imperial College London with the goal of improving breast cancer detection by applying machine learning to mammography. Additionally, in February 2018, DeepMind announced it was working with the U.S. Department of Veterans Affairs in an attempt to use machine learning to predict the onset of acute kidney injury in patients, and also more broadly the general deterioration of patients during a hospital stay so that doctors and nurses can more quickly treat patients in need. 
DeepMind developed an app called Streams, which sends alerts to doctors about patients at risk of acute kidney injury. On 13 November 2018, DeepMind announced that its health division and the Streams app would be absorbed into Google Health. Privacy advocates said the announcement betrayed patient trust and appeared to contradict previous statements by DeepMind that patient data would not be connected to Google accounts or services. A spokesman for DeepMind said that patient data would still be kept separate from Google services or projects. === NHS data-sharing controversy === In April 2016, New Scientist obtained a copy of a data sharing agreement between DeepMind and the Royal Free London NHS Foundation Trust. The latter operates three London hospitals where an estimated 1.6 million patients are treated annually. The agreement shows DeepMind Health had access to admissions, discharge and transfer data, accident and emergency, pathology and radiology, and critical care at these hospitals. This included personal details such as whether patients had been diagnosed with HIV, suffered from depression or had ever undergone an abortion; the data was shared in order to conduct research seeking better outcomes in various health conditions. A complaint was filed to the Information Commissioner's Office (ICO), arguing that the data should be pseudonymised and encrypted. In May 2016, New Scientist published a further article claiming that the project had failed to secure approval from the Confidentiality Advisory Group of the Medicines and Healthcare products Regulatory Agency. In 2017, the ICO concluded a year-long investigation that focused on how the Royal Free NHS Foundation Trust tested the app, Streams, in late 2015 and 2016. 
The ICO found that the Royal Free failed to comply with the Data Protection Act when it provided patient details to DeepMind, and found several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be used as part of the test. DeepMind published its thoughts on the investigation in July 2017, saying "we need to do better" and highlighting several activities and initiatives they had initiated for transparency, oversight and engagement. This included developing a patient and public involvement strategy and being transparent in its partnerships. In May 2017, Sky News published a leaked letter from the National Data Guardian, Dame Fiona Caldicott, revealing that in her "considered opinion" the data-sharing agreement between DeepMind and the Royal Free took place on an "inappropriate legal basis". The Information Commissioner's Office ruled in July 2017 that the Royal Free hospital failed to comply with the Data Protection Act when it handed over personal data of 1.6 million patients to DeepMind. == DeepMind Ethics and Society == In October 2017, DeepMind announced a new research unit, DeepMind Ethics & Society. Its goal is to fund external research on the following themes: privacy, transparency, and fairness; economic impacts; governance and accountability; managing AI risk; AI morality and values; and how AI can address the world's challenges. As a result, the team hopes to further understand the ethical implications of AI and help society see how AI can be beneficial. This new subdivision of DeepMind is completely separate from the Partnership on Artificial Intelligence to Benefit People and Society, a coalition of leading companies using AI, academia, civil society organizations and nonprofits, of which DeepMind is also a part. The DeepMind Ethics and Society board is also distinct from the mooted AI Ethics Board that Google originally agreed to form when acquiring DeepMind. 
== DeepMind Professors of machine learning == DeepMind sponsors three chairs of machine learning: at the University of Cambridge, held by Neil Lawrence, in the Department of Computer Science and Technology; at the University of Oxford, held by Michael Bronstein, in the Department of Computer Science; and at University College London, held by Marc Deisenroth, in the Department of Computer Science. == See also == Anthropic Cohere Glossary of artificial intelligence Imagen Model Context Protocol OpenAI Robot Constitution == References == == External links == Official website GitHub Repositories
Negative testing
Negative testing is a method of testing an application or system to improve the likelihood that the application works as intended/specified and can handle unexpected input and user behavior. Invalid data is inserted to compare the output against the given input. Negative testing is also known as failure testing or error path testing. When performing negative testing, exceptions are expected. This shows that the application is able to handle improper user behavior. Users input values that do not work in the system to test its ability to handle incorrect values or system failure. == Purpose == The purpose of negative testing is to prevent the application from crashing; it also helps improve the quality of an application by detecting defects. Negative testing helps improve the test coverage of the application and makes the application more stable and reliable. Negative testing together with positive testing allows users to test the application with any valid or invalid input data. == Benefits of negative testing == Negative testing is done to check that the product deals properly with circumstances for which it is not programmed. The fundamental aim of this testing is to check how bad data is handled by the system, and that appropriate errors are shown to the client when bad data is entered. Both positive and negative testing play an important role. Positive testing ensures that the application does what it is meant to do and performs each function as expected. Negative testing is the opposite of positive testing: it discovers diverse approaches to make the application fail and checks that the failure is handled gracefully. Example If there is a text box that can only take numeric values but the user tries to type a letter, the correct behavior would be to display a message such as "(Incorrect data) Please enter a number". 
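The text-box example can be written as a pair of positive and negative test cases (a minimal sketch; the `parse_quantity` function and its error message are hypothetical names used for illustration):

```python
def parse_quantity(text):
    """Accept only a numeric string; reject anything else with a
    user-facing error, mirroring the text-box example above."""
    if not text.isdigit():
        raise ValueError("(Incorrect data) Please enter a number")
    return int(text)

# Positive test: valid input produces the expected value.
assert parse_quantity("42") == 42

# Negative tests: invalid input must raise the expected error,
# not crash the application or silently accept bad data.
for bad in ["abc", "4x", "", "3.5"]:
    try:
        parse_quantity(bad)
    except ValueError as e:
        assert "Please enter a number" in str(e)
    else:
        raise AssertionError(f"{bad!r} was wrongly accepted")

print("all positive and negative cases passed")
```

Note that the negative cases assert not merely that something goes wrong, but that the specific expected error is produced; a crash or a silent acceptance would both count as failures.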
If the user is required to fill the name field, and the ground rules say the name is mandatory and must contain only letters (no numeric values or special characters), then negative test cases would supply names containing numeric values or special characters. The correct behavior of the system is to reject those invalid characters.

== Parameters for writing negative test cases ==
Two basic techniques help write enough test cases to cover most of a system's functionality; both are used in positive testing as well. The two parameters are:

Boundary-value analysis. A boundary indicates a limit to something. Test scenarios are designed to cover the boundary values and validate how the application behaves at them. Example: if an application accepts IDs ranging from 0 to 255, then 0 and 255 form the boundary values. Inputs within the range 0–255 constitute positive testing; any input below 0 or above 255 is invalid and constitutes negative testing.

Equivalence partitioning. The input data is divided into partitions, and values from each partition must be tested at least once. Partitions with valid values are used for positive testing, while partitions with invalid values are used for negative testing. Example: suppose only the values one to ten are valid, and the range from minus ten to ten is divided into two partitions, minus ten to zero and one to ten. The second partition is used for positive testing, and the first (minus ten to zero) for negative testing.

== References ==
Data augmentation
Data augmentation is a statistical technique which allows maximum likelihood estimation from incomplete data. Data augmentation has important applications in Bayesian analysis, and the technique is widely used in machine learning to reduce overfitting when training machine learning models, achieved by training models on several slightly-modified copies of existing data.

== Synthetic oversampling techniques for traditional machine learning ==
Synthetic Minority Over-sampling Technique (SMOTE) is a method used to address imbalanced datasets in machine learning. In such datasets, the number of samples in different classes varies significantly, leading to biased model performance. For example, in a medical diagnosis dataset with 90 samples representing healthy individuals and only 10 samples representing individuals with a particular disease, traditional algorithms may struggle to accurately classify the minority class. SMOTE rebalances the dataset by generating synthetic samples for the minority class. For instance, if there are 100 samples in the majority class and 10 in the minority class, SMOTE can create synthetic samples by randomly selecting a minority-class sample and its nearest minority-class neighbors, then generating new samples along the line segments joining the sample to those neighbors. This process increases the representation of the minority class, improving model performance.

== Data augmentation for image classification ==
When convolutional neural networks grew larger in the mid-1990s, there was a lack of data to use, especially considering that part of the overall dataset should be spared for later testing. It was proposed to perturb existing data with affine transformations to create new examples with the same labels; this was complemented by so-called elastic distortions in 2003, and the technique became widely used in the 2010s. Data augmentation can enhance CNN performance and acts as a countermeasure against CNN profiling attacks.
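The interpolation step at the heart of SMOTE, described above, can be sketched in a few lines of numpy. This is an illustrative simplification, not the full SMOTE algorithm as implemented in libraries such as imbalanced-learn.

```python
import numpy as np

def smote_sample(minority, k=3, n_new=5, rng=None):
    """Sketch of SMOTE's core step: interpolate between a minority-class
    sample and one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        # k nearest neighbours within the minority class (index 0 is x itself)
        d = np.linalg.norm(minority - x, axis=1)
        neighbours = np.argsort(d)[1:k + 1]
        x_nn = minority[rng.choice(neighbours)]
        lam = rng.random()                      # random position on the segment
        synthetic.append(x + lam * (x_nn - x))  # point between x and x_nn
    return np.array(synthetic)

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new_points = smote_sample(minority, k=2, n_new=10, rng=0)
# Interpolated points stay within the convex hull of the minority samples,
# here the unit square.
assert new_points.shape == (10, 2)
assert (new_points >= 0).all() and (new_points <= 1).all()
```

Because each synthetic point lies on a segment between two existing minority samples, the new data stays inside the region the minority class already occupies rather than inventing outliers.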
Data augmentation has become fundamental in image classification, enriching training dataset diversity to improve model generalization and performance. The evolution of this practice has introduced a broad spectrum of techniques, including geometric transformations, color space adjustments, and noise injection.

=== Geometric Transformations ===
Geometric transformations alter the spatial properties of images to simulate different perspectives, orientations, and scales. Common techniques include:
Rotation: rotating images by a specified degree to help models recognize objects at various angles.
Flipping: reflecting images horizontally or vertically to introduce variability in orientation.
Cropping: removing sections of the image to focus on particular features or simulate closer views.
Translation: shifting images in different directions to teach models positional invariance.
Morphing within the same class: generating new samples by applying morphing techniques between two images belonging to the same class, thereby increasing intra-class diversity.

=== Color Space Transformations ===
Color space transformations modify the color properties of images, addressing variations in lighting, color saturation, and contrast. Techniques include:
Brightness adjustment: varying the image's brightness to simulate different lighting conditions.
Contrast adjustment: changing the contrast to help models recognize objects under various clarity levels.
Saturation adjustment: altering saturation to prepare models for images with diverse color intensities.
Color jittering: randomly adjusting brightness, contrast, saturation, and hue to introduce color variability.

=== Noise Injection ===
Injecting noise into images simulates real-world imperfections, teaching models to ignore irrelevant variations. Techniques involve:
Gaussian noise: adding Gaussian noise mimics sensor noise or graininess.
Salt-and-pepper noise: introducing black or white pixels at random simulates sensor dust or dead pixels.

== Data augmentation for signal processing ==
Residual or block bootstrap can be used for time series augmentation.

=== Biological signals ===
Synthetic data augmentation is of paramount importance for machine learning classification, particularly for biological data, which tend to be high dimensional and scarce. The applications of robotic control and augmentation in disabled and able-bodied subjects still rely mainly on subject-specific analyses. Data scarcity is notable in signal processing problems such as Parkinson's disease electromyography (EMG) signals, which are difficult to source. Zanini et al. noted that it is possible to use a generative adversarial network (in particular, a DCGAN) to perform style transfer in order to generate synthetic electromyographic signals corresponding to those exhibited by sufferers of Parkinson's disease. These approaches are also important in electroencephalography (brainwaves). Wang et al. explored the idea of using deep convolutional neural networks for EEG-based emotion recognition; their results show that emotion recognition was improved when data augmentation was used. A common approach is to generate synthetic signals by re-arranging components of real data. Lotte proposed a method of "artificial trial generation based on analogy" where three data examples {\displaystyle x_{1},x_{2},x_{3}} provide examples and an artificial {\displaystyle x_{synthetic}} is formed which is to {\displaystyle x_{3}} what {\displaystyle x_{2}} is to {\displaystyle x_{1}} . A transformation is applied to {\displaystyle x_{1}} to make it more similar to {\displaystyle x_{2}} ; the same transformation is then applied to {\displaystyle x_{3}} , which generates {\displaystyle x_{synthetic}} .
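In its simplest possible reading, the analogy step above applies to the third example the displacement that maps the first example onto the second. This additive form is a deliberate simplification for illustration; Lotte's actual method estimates the transformation more carefully than a plain vector difference.

```python
import numpy as np

def analogy_trial(x1, x2, x3):
    """Generate x_synthetic so that x_synthetic relates to x3
    as x2 relates to x1. Here the 'transformation' is reduced to a
    simple displacement, an illustrative simplification."""
    transform = x2 - x1          # the change that turns x1 into x2
    return x3 + transform        # apply the same change to x3

x1 = np.array([0.0, 0.0])
x2 = np.array([1.0, 2.0])
x3 = np.array([5.0, 5.0])
print(analogy_trial(x1, x2, x3))  # → [6. 7.]
```

The appeal of analogy-based generation is that the synthetic trial inherits realistic structure from real examples instead of being sampled from an assumed noise model.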
This approach was shown to improve the performance of a linear discriminant analysis classifier on three different datasets. Current research shows that great impact can be derived from relatively simple techniques. For example, Freer observed that introducing noise into gathered data to form additional data points improved the learning ability of several models which otherwise performed relatively poorly. Tsinganos et al. studied the approaches of magnitude warping, wavelet decomposition, and synthetic surface EMG models (generative approaches) for hand gesture recognition, finding classification performance increases of up to +16% when augmented data was introduced during training. More recently, data augmentation studies have begun to focus on the field of deep learning, more specifically on the ability of generative models to create artificial data which is then introduced during the classification model training process. In 2018, Luo et al. observed that useful EEG signal data could be generated by conditional Wasserstein generative adversarial networks (GANs), which were then introduced to the training set in a classical train-test learning framework. The authors found that classification performance was improved when such techniques were introduced.

=== Mechanical signals ===
Data augmentation for the prediction of mechanical signals supports a new generation of technological applications, such as new-energy dispatch, 5G communication, and robotics control engineering. In 2022, Yang et al. integrated constraints, optimization and control into a deep network framework based on data augmentation and data pruning with spatio-temporal data correlation, improving the interpretability, safety and controllability of deep learning in real industrial projects through explicit mathematical programming equations and analytical solutions.
== See also == Oversampling and undersampling in data analysis Surrogate data Generative adversarial network Variational autoencoder Data pre-processing Convolutional neural network Regularization (mathematics) Data preparation Data fusion == References ==
Physics-informed neural networks
Physics-informed neural networks (PINNs), also referred to as Theory-Trained Neural Networks (TTNs), are a type of universal function approximator that can embed knowledge of the physical laws governing a given data-set into the learning process; these laws are typically described by partial differential equations (PDEs). Low data availability for some biological and engineering problems limits the robustness of conventional machine learning models used for these applications. The prior knowledge of general physical laws acts in the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the generalizability of the function approximation. Embedding this prior information into a neural network thus enhances the information content of the available data, helping the learning algorithm capture the right solution and generalize well even with a small number of training examples.

== Function approximation ==
Most of the physical laws that govern the dynamics of a system can be described by partial differential equations. For example, the Navier–Stokes equations are a set of partial differential equations derived from the conservation laws (i.e., conservation of mass, momentum, and energy) that govern fluid mechanics. The solution of the Navier–Stokes equations with appropriate initial and boundary conditions allows the quantification of flow dynamics in a precisely defined geometry. However, these equations cannot generally be solved exactly, so numerical methods must be used (such as finite differences, finite elements and finite volumes). In this setting, the governing equations must be solved while accounting for prior assumptions, linearization, and adequate time and space discretization.
Recently, solving the governing partial differential equations of physical phenomena using deep learning has emerged as a new field of scientific machine learning (SciML), leveraging the universal approximation theorem and high expressivity of neural networks. In general, deep neural networks could approximate any high-dimensional function given that sufficient training data are supplied. However, such networks do not consider the physical characteristics underlying the problem, and the level of approximation accuracy provided by them is still heavily dependent on careful specifications of the problem geometry as well as the initial and boundary conditions. Without this preliminary information, the solution is not unique and may lose physical correctness. On the other hand, physics-informed neural networks (PINNs) leverage governing physical equations in neural network training. Namely, PINNs are designed to be trained to satisfy the given training data as well as the imposed governing equations. In this fashion, a neural network can be guided with training data that do not necessarily need to be large and complete. Potentially, an accurate solution of partial differential equations can be found without knowing the boundary conditions. Therefore, with some knowledge about the physical characteristics of the problem and some form of training data (even sparse and incomplete), PINN may be used for finding an optimal solution with high fidelity. PINNs allow for addressing a wide range of problems in computational science and represent a pioneering technology leading to the development of new classes of numerical solvers for PDEs. PINNs can be thought of as a meshfree alternative to traditional approaches (e.g., CFD for fluid dynamics), and new data-driven approaches for model inversion and system identification. Notably, the trained PINN network can be used for predicting the values on simulation grids of different resolutions without the need to be retrained. 
In addition, PINNs exploit automatic differentiation (AD), a class of differentiation techniques widely used in training neural networks and assessed to be superior to numerical or symbolic differentiation, to compute the required derivatives in the partial differential equations.

== Modeling and computation ==
A general nonlinear partial differential equation can be written {\displaystyle u_{t}+N[u;\lambda ]=0,\quad x\in \Omega ,\quad t\in [0,T]} where {\displaystyle u(t,x)} denotes the solution, {\displaystyle N[\cdot ;\lambda ]} is a nonlinear operator parameterized by {\displaystyle \lambda } , and {\displaystyle \Omega } is a subset of {\displaystyle \mathbb {R} ^{D}} . This general form of governing equation summarizes a wide range of problems in mathematical physics, such as conservation laws, diffusion processes, advection-diffusion systems, and kinetic equations. Given noisy measurements of a generic dynamic system described by the equation above, PINNs can be designed to solve two classes of problems:
data-driven solution of partial differential equations, and
data-driven discovery of partial differential equations.

=== Data-driven solution of partial differential equations ===
The data-driven solution of a PDE computes the hidden state {\displaystyle u(t,x)} of the system given boundary data and/or measurements {\displaystyle z} , and fixed model parameters {\displaystyle \lambda } . We solve {\displaystyle u_{t}+N[u]=0,\quad x\in \Omega ,\quad t\in [0,T]} . Defining the residual {\displaystyle f(t,x)} as {\displaystyle f:=u_{t}+N[u]} and approximating {\displaystyle u(t,x)} by a deep neural network yields a physics-informed neural network. This network can be differentiated using automatic differentiation.
The parameters of {\displaystyle u(t,x)} and {\displaystyle f(t,x)} can then be learned by minimizing the loss function {\displaystyle L_{tot}=L_{u}+L_{f}} , where {\displaystyle L_{u}=\Vert u-z\Vert _{\Gamma }} is the error between the PINN {\displaystyle u(t,x)} and the boundary conditions and measured data on the set of points {\displaystyle \Gamma } where these are defined, and {\displaystyle L_{f}=\Vert f\Vert _{\Gamma }} is the mean-squared error of the residual function. This second term encourages the PINN to learn the structural information expressed by the partial differential equation during the training process. This approach has been used to yield computationally efficient physics-informed surrogate models with applications in the forecasting of physical processes, model predictive control, multi-physics and multi-scale modeling, and simulation. It has been shown to converge to the solution of the PDE.
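The two-term loss can be illustrated on a toy problem. The sketch below uses the linear ODE u_t + u = 0 with the single datum u(0) = 1, replaces automatic differentiation with a central finite difference for brevity, and scores two fixed candidate functions instead of training a network; it shows only how L_u and L_f are assembled, not a full PINN.

```python
import numpy as np

def pinn_loss(u, t_data, z_data, t_col, h=1e-3):
    """L_tot = L_u + L_f for the toy equation u_t + N[u] = 0 with N[u] = u.
    A central finite difference stands in for automatic differentiation."""
    L_u = np.mean((u(t_data) - z_data) ** 2)          # data/boundary misfit
    u_t = (u(t_col + h) - u(t_col - h)) / (2.0 * h)   # derivative at collocation points
    L_f = np.mean((u_t + u(t_col)) ** 2)              # mean-squared PDE residual
    return L_u + L_f

t_data, z_data = np.array([0.0]), np.array([1.0])     # measurement: u(0) = 1
t_col = np.linspace(0.0, 1.0, 50)                     # collocation points in [0, T]

loss_exact = pinn_loss(lambda t: np.exp(-t), t_data, z_data, t_col)
loss_wrong = pinn_loss(np.cos, t_data, z_data, t_col)
assert loss_exact < 1e-6 < loss_wrong   # the true solution exp(-t) nearly zeroes L_tot
```

A trained PINN would adjust network parameters by gradient descent until L_tot is small at the collocation points, which is exactly the condition the exact solution satisfies here.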
The parameters of {\displaystyle u(t,x)} and {\displaystyle f(t,x)} , together with the parameter {\displaystyle \lambda } of the differential operator, can then be learned by minimizing the loss function {\displaystyle L_{tot}=L_{u}+L_{f}} , where {\displaystyle L_{u}=\Vert u-z\Vert _{\Gamma }} , with {\displaystyle u} and {\displaystyle z} the state solution and the measurements at the sparse locations {\displaystyle \Gamma } respectively, and {\displaystyle L_{f}=\Vert f\Vert _{\Gamma }} is the residual function. This second term requires the structured information represented by the partial differential equations to be satisfied in the training process. This strategy allows for discovering dynamic models described by nonlinear PDEs, assembling computationally efficient and fully differentiable surrogate models that may find application in predictive forecasting, control, and data assimilation.

== Physics-informed neural networks for piece-wise function approximation ==
PINNs struggle to approximate PDEs with strong non-linearity or the sharp gradients that commonly occur in practical fluid flow problems. Piece-wise approximation is a long-standing practice in the field of numerical approximation. With the capability of approximating strong non-linearity, extremely lightweight PINNs can be used to solve PDEs over a number of discrete subdomains, which increases accuracy substantially and decreases computational load. DPINN (distributed physics-informed neural networks) and DPIELM (distributed physics-informed extreme learning machines) are generalizable space-time domain discretizations for better approximation. DPIELM is an extremely fast and lightweight approximator with competitive accuracy. Domain scaling on top has a notable effect.
Another school of thought is discretization for parallel computation, to leverage available computational resources. XPINN is a generalized space-time domain decomposition approach for physics-informed neural networks (PINNs) to solve nonlinear partial differential equations on arbitrarily complex-geometry domains. XPINN further pushes the boundaries of both PINNs and conservative PINNs (cPINNs), a spatial domain decomposition approach in the PINN framework tailored to conservation laws. Compared to PINN, the XPINN method has greater representation and parallelization capacity due to the inherent deployment of multiple neural networks in smaller subdomains. Unlike cPINN, XPINN can be extended to any type of PDE. Moreover, the domain can be decomposed in any arbitrary way (in space and time), which is not possible in cPINN. Thus, XPINN offers both space and time parallelization, thereby reducing the training cost more effectively. XPINN is particularly effective for large-scale problems (involving large data sets) as well as for high-dimensional problems where a single-network PINN is not adequate. Rigorous bounds on the errors resulting from the approximation of nonlinear PDEs (incompressible Navier–Stokes equations) with PINNs and XPINNs have been proved. However, DPINN results cast doubt on the use of residual (flux) matching at the domain interfaces, as it hardly seems to improve the optimization.

== Physics-informed neural networks and theory of functional connections ==
In the PINN framework, initial and boundary conditions are not analytically satisfied; thus they need to be included in the loss function of the network, to be learned simultaneously with the differential equation (DE) unknown functions.
Having competing objectives during the network's training can lead to unbalanced gradients when using gradient-based techniques, which causes PINNs to often struggle to accurately learn the underlying DE solution. This drawback is overcome by using functional interpolation techniques such as the Theory of functional connections (TFC)'s constrained expression, in the Deep-TFC framework, which reduces the solution search space of constrained problems to the subspace of neural networks that analytically satisfies the constraints. A further improvement of the PINN and functional interpolation approach is given by the Extreme Theory of Functional Connections (X-TFC) framework, where a single-layer neural network and the extreme learning machine training algorithm are employed. X-TFC improves the accuracy and performance of regular PINNs, and its robustness and reliability have been demonstrated for stiff problems, optimal control, aerospace, and rarefied gas dynamics applications.

== Physics-informed PointNet (PIPN) for multiple sets of irregular geometries ==
Regular PINNs are only able to obtain the solution of a forward or inverse problem on a single geometry: for any new geometry (computational domain), one must retrain the PINN. This limitation of regular PINNs imposes high computational costs, specifically for a comprehensive investigation of geometric parameters in industrial designs. Physics-informed PointNet (PIPN) is fundamentally the result of combining PINN's loss function with PointNet. Instead of using a simple fully connected neural network, PIPN uses PointNet as the core of its neural network. PointNet was primarily designed for deep learning of 3D object classification and segmentation by the research group of Leonidas J. Guibas. In PIPN, PointNet extracts the geometric features of the input computational domains.
Thus, PIPN is able to solve the governing equations on multiple computational domains with irregular geometries simultaneously, rather than on only a single domain. The effectiveness of PIPN has been shown for incompressible flow, heat transfer, and linear elasticity.

== Physics-informed neural networks (PINNs) for inverse computations ==
Physics-informed neural networks (PINNs) have proven particularly effective in solving inverse problems within differential equations, demonstrating their applicability across science, engineering, and economics. They have been shown to be useful for solving inverse problems in a variety of fields, including nano-optics, topology optimization/characterization, multiphase flow in porous media, and high-speed fluid flow. PINNs have demonstrated flexibility when dealing with noisy and uncertain observation datasets. They have also demonstrated clear advantages in the inverse calculation of parameters for multi-fidelity datasets, meaning datasets with different quality, quantity, and types of observations. Uncertainties in calculations can be evaluated using ensemble-based or Bayesian-based calculations. PINNs can also be used in connection with symbolic regression for discovering mathematical expressions, in connection with the discovery of parameters and functions. One example of such an application is the study of chemical ageing of cellulose insulation material: there, PINNs are used first to discover a parameter for a set of ordinary differential equations (ODEs) and later a function solution, which is then used to find a better-fitting expression via symbolic regression over a combination of operators.

== Physics-informed neural networks for elasticity problems ==
An ensemble of physics-informed neural networks has been applied to solving plane elasticity problems. Surrogate networks are intended for the unknown functions, namely the components of the strain and stress tensors as well as the unknown displacement field.
The residual network provides the residuals of the partial differential equations (PDEs) and of the boundary conditions. The computational approach is based on principles of artificial intelligence. This approach can be extended to nonlinear elasticity problems, where the constitutive equations are nonlinear. PINNs can also be used for Kirchhoff plate bending problems with transverse distributed loads and for contact models with elastic Winkler foundations.

== Physics-informed neural networks (PINNs) with backward stochastic differential equations ==
The deep backward stochastic differential equation method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs) to solve high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods such as finite difference methods or Monte Carlo simulations, which struggle with the curse of dimensionality. Deep BSDE methods use neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden. Additionally, integrating physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws into the neural network architecture, ensuring that solutions adhere to the governing stochastic differential equations, resulting in more accurate and reliable solutions.

== Physics-informed neural networks for biology ==
An extension or adaptation of PINNs are Biologically-informed neural networks (BINNs).
BINNs introduce two key adaptations to the typical PINN framework: (i) the mechanistic terms of the governing PDE are replaced by neural networks, and (ii) the loss function {\displaystyle L_{tot}} is modified to include {\displaystyle L_{constr}} , a term used to incorporate domain-specific knowledge that helps enforce biological applicability. For (i), this adaptation has the advantage of relaxing the need to specify the governing differential equation a priori, either explicitly or by using a library of candidate terms. Additionally, this approach circumvents the potential issue of misspecifying regularization terms in stricter theory-informed cases. A natural example of BINNs can be found in cell dynamics, where the cell density {\displaystyle u(x,t)} is governed by a reaction-diffusion equation with diffusion and growth functions {\displaystyle D(u)} and {\displaystyle G(u)} , respectively: {\displaystyle u_{t}=\nabla \cdot (D(u)\nabla u)+G(u)u,\quad x\in \Omega ,\quad t\in [0,T]} In this case, a component of {\displaystyle L_{constr}} could be {\displaystyle ||D||_{\Gamma }} for {\displaystyle D<D_{min},D>D_{max}} , which penalizes values of {\displaystyle D} that fall outside a biologically relevant diffusion range defined by {\displaystyle D_{min}\leq D\leq D_{max}} . Furthermore, the BINN architecture, when utilizing multilayer perceptrons (MLPs), functions as follows: an MLP is used to construct {\displaystyle u_{MLP}(x,t)} from model inputs {\displaystyle (x,t)} , serving as a surrogate model for the cell density {\displaystyle u(x,t)} .
This surrogate is then fed into two additional MLPs, {\displaystyle D_{MLP}(u_{MLP})} and {\displaystyle G_{MLP}(u_{MLP})} , which model the diffusion and growth functions. Automatic differentiation can then be applied to compute the necessary derivatives of {\displaystyle u_{MLP}} , {\displaystyle D_{MLP}} and {\displaystyle G_{MLP}} to form the governing reaction-diffusion equation. Note that since {\displaystyle u_{MLP}} is a surrogate for the cell density, it may contain errors, particularly in regions where the PDE is not fully satisfied. Therefore, the reaction-diffusion equation may be solved numerically, for instance using a method-of-lines approach.

== Limitations ==
Translation and discontinuous behavior are hard to approximate using PINNs. PINNs also fail when solving differential equations with even slight advective dominance, whose asymptotic behaviour causes the method to fail; such PDEs can sometimes be handled by scaling the variables. This difficulty in training PINNs on advection-dominated PDEs can be explained by the Kolmogorov n-width of the solution. PINNs also fail to solve systems of dynamical systems, and hence have not been successful in solving chaotic equations. One reason behind the failure of regular PINNs is the soft-constraining of Dirichlet and Neumann boundary conditions, which poses a multi-objective optimization problem that requires manually weighing the loss terms in order to optimize. More generally, posing the solution of a PDE as an optimization problem brings with it all the problems faced in the world of optimization, the major one being getting stuck in local optima.
== References ==
== External links ==
Physics Informed Neural Network (PINN) – repository to implement physics-informed neural networks in Python
XPINN – repository to implement extended physics-informed neural network (XPINN) in Python
PIPN – repository to implement physics-informed PointNet (PIPN) in Python
Isotropic measure
In probability theory, an isotropic measure is any mathematical measure that is invariant under linear isometries. It is a standard simplification and assumption used in probability theory. Generally, it is used in the context of measure theory on {\displaystyle n}-dimensional Euclidean space, for which it can be intuitive to study measures that are unchanged by rotations and reflections. An obvious example of such a measure is the standard way of assigning a measure to subsets of n-dimensional Euclidean space: Lebesgue measure.

== Definition ==
An isotropic measure on {\displaystyle \mathbb {R} ^{d}} is a (Borel) measure that is absolutely continuous on {\displaystyle \mathbb {R} ^{d}\smallsetminus \{0\}} and that is invariant under linear isometries of {\displaystyle \mathbb {R} ^{d}} . Alternatively, an isotropic measure {\displaystyle \mu (dz)} is a measure for which there exists a real density function {\displaystyle \mu _{0}(r)} on {\displaystyle (0,\infty )} such that {\displaystyle \mu (dz)=\mu _{0}\left(|z|\right)dz} for {\displaystyle z\neq 0} .

== Example ==
The Lebesgue measure on {\displaystyle \mathbb {R} ^{d}} is invariant under linear isometries and is hence an isotropic measure. In this case, {\displaystyle \mu (dz)=dz} . For {\displaystyle d=1} , the linear isometries of {\displaystyle \mathbb {R} ^{1}} are the identity {\displaystyle f(x)=x} and the reflection {\displaystyle f(x)=-x} . Hence an isotropic measure on {\displaystyle \mathbb {R} ^{1}} must satisfy {\displaystyle \mu (A)=\mu (-A)} for any {\displaystyle A\subseteq \mathbb {R} ^{1}} .
The measure μ ( d z ) = | z | − 2 d z {\displaystyle \mu (dz)=|z|^{-2}dz} , for z ≠ 0 {\displaystyle z\neq 0} , is one such isotropic measure. == Unimodal measure == In probability theory it is common that another assumption is added to measures in addition to the measure being isotropic. A unimodal measure (or isotropic unimodal measure) is any isotropic measure μ ( d z ) = μ 0 ( | z | ) d z {\displaystyle \mu (dz)=\mu _{0}\left(|z|\right)dz} such that μ 0 ( r ) {\displaystyle \mu _{0}(r)} is nonincreasing on ( 0 , ∞ ) {\displaystyle (0,\infty )} . It is possible that μ ( { 0 } ) > 0 {\displaystyle \mu \left(\left\{0\right\}\right)>0} . == Isotropic and unimodal stochastic processes == In studying stochastic processes, in particular Lévy processes, a reasonable assumption to make is that, for each element of the index set, the probability distributions of the random variables are isotropic or even unimodal measures. More specifically, an isotropic Lévy process is a Lévy process, X = ( X t , t ≥ 0 ) {\displaystyle X=\left(X_{t},t\geq 0\right)} , such that all its distributions, p t ( d x ) {\displaystyle p_{t}(dx)} , are isotropic measures. A unimodal Lévy process (or isotropic unimodal Lévy process) is a Lévy process, X = ( X t , t ≥ 0 ) {\displaystyle X=\left(X_{t},t\geq 0\right)} , such that all its distributions, p t ( d x ) {\displaystyle p_{t}(dx)} , are unimodal measures. == See also == Measure (mathematics) Stochastic process Lévy process == References ==
Dataveillance
Dataveillance is the practice of monitoring and collecting online data as well as metadata. The word is a portmanteau of data and surveillance. Dataveillance is concerned with the continuous monitoring of users' communications and actions across various platforms. For instance, dataveillance refers to the monitoring of data resulting from credit card transactions, GPS coordinates, emails, social networks, etc. Using digital media often leaves traces of data and creates a digital footprint of our activity. Unlike sousveillance, this type of surveillance is not generally known to its subjects and happens discreetly. Dataveillance may involve the surveillance of groups of individuals. There exist three types of dataveillance: personal dataveillance, mass dataveillance, and facilitative mechanisms. Unlike computer and network surveillance, which collects data from computer networks and hard drives, dataveillance monitors and collects data (and metadata) through social networks and various other online platforms. Dataveillance is not to be confused with electronic surveillance. Electronic surveillance refers to the surveillance of oral and audio systems, such as wiretapping. Additionally, electronic surveillance requires that suspects be identified before surveillance can occur. Dataveillance, on the other hand, can use data to identify an individual or a group; oftentimes, these individuals and groups have sparked some form of suspicion with their activity. Dataveillance has significant impacts on advertising theory and practice. These impacts particularly stem from recent infrastructure and technological advancements that increase the extent to which advertisers can gain data about consumers and their behaviours. For example, data collection can extend to consumers' offline behaviors and into places that are considered private.
== Types == The types of dataveillance are distinguished by the way data is collected, as well as the number of individuals associated with it. Personal Dataveillance: Personal dataveillance refers to the collection and monitoring of a person's personal data. Personal dataveillance can occur when an individual's data causes suspicion or has attracted attention in some way. Personal data can include information such as birth date, address, social security (or social insurance) number, as well as other unique identifiers. Mass Dataveillance: Refers to the collection of data on groups of people. The general distinction between mass dataveillance and personal dataveillance is the surveillance and collection of data on a group rather than an individual. Facilitative Mechanisms: Unlike mass dataveillance, no group is targeted. An individual's data is placed into a system or database along with that of various others, where computer matching can unveil distinct patterns. An individual's data is never considered to be part of a group in this instance. == Benefits and concerns == === Pros === There are many concerns and benefits associated with dataveillance. Dataveillance can be useful for collecting and verifying data in ways that are beneficial. For instance, personal dataveillance can be utilized by financial institutions to track fraudulent purchases on credit card accounts. This has the potential to prevent and regulate fraudulent financial claims and resolve the issue. Compared to traditional methods of surveillance, dataveillance tends to be an economical approach, since it can monitor more information in less time. In this case, the responsibility of monitoring is transferred to computers, thereby reducing time and human labor in the process of surveilling. Dataveillance has also been useful in assessing security threats associated with terrorism. Authorities have utilized dataveillance to help them understand and predict potential terrorist or criminal threats.
Dataveillance is also central to the concept of predictive policing, since predictive policing requires a great deal of data to operate effectively, and dataveillance can supply that data. Predictive policing allows police to intervene in potential crimes to create safer communities and better understand potential threats. Businesses also rely on dataveillance to help them understand the online activity of potential clients. By tracking that activity through cookies, as well as various other methods, businesses are able to better understand what sort of advertisements work with their existing and potential clients. While making online transactions, users often give away their information freely, which is later used by the company for corporate or private interests. For businesses, this information can help boost sales and attract attention towards their products to help generate revenue. === Cons === On the other hand, there are many concerns that arise with dataveillance. Dataveillance assumes that our technologies and data are a true reflection of ourselves. This presents itself as a potential concern. It becomes a critical concern when associated with the surveillance of criminal suspects and terrorist groups: authorities who monitor these suspects would then assume that the data they have collected reflects their actions, whether they are assessing potential or past threats. There is also a lack of transparency and privacy regarding companies who collect and share their users' data. This is a critical issue for both trust in the data and belief in its uses. Many social networks have argued that their users forfeit part of their privacy in order to receive the service for free. Several of these companies choose not to fully disclose what data is collected and who it is shared with.
When data is volunteered to companies, it is difficult to know which companies have gained data about you and your online activity. Much of an individual's data is shared with websites and social networks in order to provide a more customized marketing experience. Many of those social networks may share your information with intelligence agencies and authorities without a user's knowledge. Since the scandal involving Edward Snowden and the National Security Agency, it has been revealed that authorities may have access to more data from various devices and platforms. It has become very difficult to know what will happen with your data or what specifically has been collected. It is also important to recognize that while online users are worried about their information, many of those same worries are not applied to their own activities or behavior. With social networks collecting a large amount of personal data, such as birth date, legal name, sex, and photos, there is an issue of dataveillance compromising confidentiality. Ultimately, dataveillance can compromise online anonymity. Despite dataveillance compromising anonymity, anonymity itself presents a crucial issue. Online criminals who steal users' data and information may exploit it for their own gain. Tactics used by online users to conceal their identity make it difficult for others to track criminal behavior and identify those responsible. Having unique identifiers such as IP addresses allows for the identification of a user's actions, which is often used to track illegal online activity such as piracy. While dataveillance may help businesses market their products to existing and potential clients, there are concerns over how, and by whom, customer data is accessed. When visiting a business's website, cookies are often installed onto users' devices. Cookies have been a new way for businesses to obtain data on potential customers, since they allow businesses to track users' online activities.
Companies may also look to sell information they have collected on their clients to third parties. Since clients are not notified about these transactions, it becomes difficult to know where one's data has been sold. Furthermore, since dataveillance is discreet, clients are very unlikely to know the exact nature of the data that has been either collected or sold. Education on tracking tools (such as cookies) presents a critical issue. If businesses or online services are unwilling to explain cookies, or to educate their users as to why they are being used, many may accept them unwittingly. An issue stemming from companies and other agencies which collect personal data and information is that they have now engaged in the practice of data brokering. Data brokers, such as Acxiom, collect users' information, and are known for often selling that information to third parties. While companies may disclose that they are collecting data or online activity from their users, the disclosure is usually not comprehensible to everyday users: it is difficult to spot, since it is hidden behind jargon and writing most often understood by lawyers. This has become a new source of revenue for companies. In terms of predictive policing, the proper use of crime data and the combination of offline practices and technology have also become challenges for police institutions. Too much reliance on results produced by big data may lead to subjective judgement by police. It may also reduce the amount of real-time, on-site communication between local police officers and residents in particular areas, thus decreasing the opportunity for the police to investigate and patrol local communities on a frequent basis. Secondly, data security remains a major dilemma, considering the access to crime data and the potential use of these data for negative purposes.
Last but not least, discrimination towards certain communities might develop as a result of the findings of data analysis, which could lead to improper behaviours or surveillance overreach. One of the major issues with dataveillance is the removal of human actors from the loop. Computer systems oversee data and construct representations, which allows a greater risk of false representations being created, as they are based only on the data that has been surveilled. Computer systems can only use the data they have, and if this is not an accurate depiction of individuals or their situations, then false representations can be created. Dataveillance is highly automated through computer systems which observe our interactions and activities. Highly automated systems and technology eliminate human understanding of our activities. == Resistance == With such an increase in data collection and surveillance, many individuals are now attempting to reduce the concerns which have risen alongside it. Countersurveillance is perhaps the most significant concept focused on tactics to prevent dataveillance. There are various tools associated with the concept of countersurveillance, which disrupt the effectiveness and possibilities of dataveillance. Privacy-enhancing technologies, otherwise known as PETs, have been utilized by individuals to reduce data collection and decrease the possibility of dataveillance. PETs, such as ad blockers, attempt to prevent other actors from collecting users' data. In the case of an ad blocker, the web browser extension is able to prevent the display of advertisements, which disrupts data collection about users' online interactions. For businesses, that may limit their opportunity to provide online users with tailored advertisements. Recently, the European Union has required companies to indicate when their websites use cookies.
This law has become basic practice for many online services and companies; however, public education on tracking tools varies, which can limit the effectiveness of such a ruling. Nonetheless, many companies are launching new PETs initiatives within their products. For example, Mozilla's Firefox Focus comes pre-enabled with customizable privacy features, which allows for better online privacy. A few of the tools featured in Firefox Focus are also mimicked by other web browsers such as Apple's Safari. Some of the various tools featured in these web browsers are the capabilities to block ads and remove cookie data and history. Private browsing, known as Incognito for Google Chrome users, allows users to browse the web without having their history or cookies saved. These tools aid in curbing dataveillance by disrupting the collection and analysis of users' data. While several other web browsers may not pre-enable these PETs within their software, users can download the same tools, like ad blockers, through their browser's web store, such as the Google Chrome Web Store. Many of these extensions help enable better privacy tools. Social networks, such as Facebook, have introduced new security measures to help users protect their online data. Users can restrict access to their posts and to all information on their account other than their name and profile picture. While this doesn't necessarily prevent data tracking, these tools have helped to keep users' data more private and less accessible for online criminals to exploit. == See also == Big data Critical data studies Global surveillance disclosures (2013–present) Mass surveillance Surveillance capitalism Dataism Internet privacy == References ==
Society 5.0
Society 5.0, also known as the "Super Smart Society", is a concept that was first outlined and described in detail in the Report on the Fifth Science and Technology Basic Plan, written by the Council for Science, Technology and Innovation of Japan's Cabinet Office and submitted to the Japanese government on 18 December 2015. It aims to use advanced technologies such as artificial intelligence to address societal challenges and enhance economic productivity across various sectors of everyday life. Building on the Fourth Industrial Revolution, the concept of Society 5.0 was officially made public by the Cabinet Office's Council for Science, Technology and Innovation. The initiative was formally presented by the former Prime Minister Shinzo Abe in 2019 as a part of the Fifth Science and Technology Basic Plan. It emphasizes the integration of cyberspace and physical space. == Objective == Society 5.0 is designed to promote a shift toward a human-centered, knowledge-based, and data-driven society. The Cabinet Office of the Government of Japan describes Society 5.0 as an initiative aimed at ensuring safety, security, comfort and health for individuals, facilitating the pursuit of their preferred lifestyles. == History == The term "Society 5.0" refers to a proposed fifth stage of human society, following the hunter-gatherer society (Society 1.0), the agrarian society (Society 2.0), the industrial society (Society 3.0), and the information society (Society 4.0). The concept envisions a society that uses digital transformation technologies to solve social problems and improve quality of life. === Society 1.0 (Hunting Society) === In anthropology, a hunter-gatherer society is a society dependent on hunting wild animals and gathering fruits and plants for sustenance. Anthropologists propose that all human societies followed a hunter-gatherer lifestyle until the advent of agriculture during the Neolithic period.
=== Society 2.0 (Agricultural Society) === An agrarian society is a societal structure where the economy primarily relies on agriculture. The origins of agrarian societies are associated with the Neolithic Revolution, also known as the First Agricultural Revolution, which occurred during the Neolithic or Stone Age. These societies have existed in various parts of the world for thousands of years. === Society 3.0 (Industrial Society) === An industrial society is one that has undergone significant industrialization. Industrial societies often develop from agrarian societies and are characterized by technological advancements across various fields. === Society 4.0 (Information Society) === An information society is a society in which activities related to the utilization, generation, dissemination, and incorporation of information hold considerable importance. Key factors enabling this phenomenon are information and communication technologies, which have contributed to the development of automated machines and robots impacting industry and information management. == Technological applications == A report by Japan's National Institute of Advanced Industrial Science and Technology lists the following six topics as basic technologies for realizing Society 5.0: Technology for enhancing human capabilities, fostering sensitivity and enabling control within Cyber-Physical Systems (CPS). AI hardware technology and AI application systems. Self-developing security technology for AI applications. Highly efficient network technology along with advanced information input and output devices. Next-generation manufacturing system technology designed to facilitate mass customization. New measurement technology tailored for digital manufacturing processes. The Japan Business Federation (Keidanren) initiated "Society 5.0 for SDGs" in alignment with the United Nations' Sustainable Development Goals (SDGs), citing compatibility between the concepts. 
== See also == Cyber manufacturing List of emerging technologies Digital modelling and fabrication Computer-integrated manufacturing Industrial control system Simulation software Technological singularity Work 4.0 World Economic Forum == References ==
Affine arithmetic
Affine arithmetic (AA) is a model for self-validated numerical analysis. In AA, the quantities of interest are represented as affine combinations (affine forms) of certain primitive variables, which stand for sources of uncertainty in the data or approximations made during the computation. Affine arithmetic is meant to be an improvement on interval arithmetic (IA), and is similar to generalized interval arithmetic, first-order Taylor arithmetic, the center-slope model, and ellipsoid calculus, in the sense that it is an automatic method to derive first-order guaranteed approximations to general formulas. Affine arithmetic is potentially useful in every numeric problem where one needs guaranteed enclosures of smooth functions, such as solving systems of non-linear equations, analyzing dynamical systems, integrating functions, differential equations, etc. Applications include ray tracing, plotting curves, intersecting implicit and parametric surfaces, error analysis (mathematics), process control, worst-case analysis of electric circuits, and more. == Definition == In affine arithmetic, each input or computed quantity x is represented by a formula {\displaystyle x=x_{0}+x_{1}\epsilon _{1}+x_{2}\epsilon _{2}+\cdots +x_{n}\epsilon _{n}} where {\displaystyle x_{0},x_{1},x_{2},\dots ,x_{n}} are known floating-point numbers, and {\displaystyle \epsilon _{1},\epsilon _{2},\dots ,\epsilon _{n}} are symbolic variables whose values are only known to lie in the range [−1, +1]. Thus, for example, a quantity X which is known to lie in the range [3,7] can be represented by the affine form {\displaystyle x=5+2\epsilon _{k}} , for some k. Conversely, the form {\displaystyle x=10+2\epsilon _{3}-5\epsilon _{8}} implies that the corresponding quantity X lies in the range [3,17].
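The representation just described can be sketched in a few lines of Python. This is an illustrative toy (the class and method names are invented for this sketch, and roundoff control is ignored):

```python
# Minimal sketch of an affine form: x0 + sum_j x_j * eps_j, with eps_j in [-1, +1].
# AffineForm and its methods are illustrative names, not any particular AA library.

class AffineForm:
    def __init__(self, center, terms=None):
        self.center = center             # x0, the central value
        self.terms = dict(terms or {})   # symbol index j -> coefficient x_j

    def range(self):
        """Interval implied by the form: center +/- sum of |coefficients|."""
        radius = sum(abs(c) for c in self.terms.values())
        return (self.center - radius, self.center + radius)

# The examples from the text:
x = AffineForm(5, {1: 2})            # 5 + 2*eps_1 represents [3, 7]
y = AffineForm(10, {3: 2, 8: -5})    # 10 + 2*eps_3 - 5*eps_8 represents [3, 17]
print(x.range())  # (3, 7)
print(y.range())  # (3, 17)
```

The implied range is obtained by pushing every noise symbol to whichever endpoint of [−1, +1] maximizes or minimizes the form.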
The sharing of a symbol {\displaystyle \epsilon _{j}} among two affine forms {\displaystyle x} , {\displaystyle y} implies that the corresponding quantities X, Y are partially dependent, in the sense that their joint range is smaller than the Cartesian product of their separate ranges. For example, if {\displaystyle x=10+2\epsilon _{3}-6\epsilon _{8}} and {\displaystyle y=20+3\epsilon _{4}+4\epsilon _{8}} , then the individual ranges of X and Y are [2,18] and [13,27], but the joint range of the pair (X,Y) is the hexagon with corners (2,27), (6,27), (18,19), (18,13), (14,13), (2,21), which is a proper subset of the rectangle [2,18]×[13,27]. == Affine arithmetic operations == Affine forms can be combined with the standard arithmetic operations or elementary functions, to obtain guaranteed approximations to formulas. === Affine operations === For example, given affine forms {\displaystyle x,y} for X and Y, one can obtain an affine form {\displaystyle z} for Z = X + Y simply by adding the forms, that is, setting {\displaystyle z_{j}\gets x_{j}+y_{j}} for every j. Similarly, one can compute an affine form {\displaystyle z} for Z = {\displaystyle \alpha } X, where {\displaystyle \alpha } is a known constant, by setting {\displaystyle z_{j}\gets \alpha x_{j}} for every j. This generalizes to arbitrary affine operations like Z = {\displaystyle \alpha } X + {\displaystyle \beta } Y + {\displaystyle \gamma } .
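With a toy representation of an affine form as a pair (center, dict of coefficients), the affine operations above are exact, coefficient-wise computations. The function names below are invented for this sketch, and roundoff control is omitted:

```python
# Toy affine forms: (center, {symbol index j: coefficient}); each eps_j in [-1, +1].
# Illustrative sketch only; real AA libraries must also bound rounding errors.

def add(x, y):
    """Z = X + Y: add centers and matching coefficients (z_j <- x_j + y_j)."""
    cx, tx = x
    cy, ty = y
    terms = dict(tx)
    for j, c in ty.items():
        terms[j] = terms.get(j, 0.0) + c
    return (cx + cy, terms)

def affine(alpha, x, beta, y, gamma):
    """Z = alpha*X + beta*Y + gamma, computed exactly on the coefficients."""
    cx, tx = x
    cy, ty = y
    terms = {j: alpha * c for j, c in tx.items()}
    for j, c in ty.items():
        terms[j] = terms.get(j, 0.0) + beta * c
    return (alpha * cx + beta * cy + gamma, terms)

# The dependent pair from the text: the shared symbol eps_8 partially cancels.
x = (10.0, {3: 2.0, 8: -6.0})   # X in [2, 18]
y = (20.0, {4: 3.0, 8: 4.0})    # Y in [13, 27]
z = add(x, y)                   # (30.0, {3: 2.0, 8: -2.0, 4: 3.0})
radius = sum(abs(c) for c in z[1].values())
print(z[0] - radius, z[0] + radius)  # 23.0 37.0
```

Note the benefit of tracked dependencies: plain interval arithmetic on [2,18] + [13,27] would give [15,45], while the shared symbol shrinks the enclosure to [23,37].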
=== Non-affine operations === A non-affine operation {\displaystyle Z\gets F(X,Y,\dots )} , like multiplication {\displaystyle Z\gets XY} or {\displaystyle Z\gets \sin(X)} , cannot be performed exactly, since the result would not be an affine form of the {\displaystyle \epsilon _{i}} . In that case, one should take a suitable affine function G that approximates F to first order, in the ranges implied by {\displaystyle x} and {\displaystyle y} ; and compute {\displaystyle z\gets G(x,y,\dots )+z_{k}\epsilon _{k}} , where {\displaystyle z_{k}} is an upper bound for the absolute error {\displaystyle |F-G|} in that range, and {\displaystyle \epsilon _{k}} is a new symbolic variable not occurring in any previous form. The form {\displaystyle z} then gives a guaranteed enclosure for the quantity Z; moreover, the affine forms {\displaystyle x,y,\dots ,z} jointly provide a guaranteed enclosure for the point (X,Y,...,Z), which is often much smaller than the Cartesian product of the ranges of the individual forms. === Chaining operations === Systematic use of this method allows arbitrary computations on given quantities to be replaced by equivalent computations on their affine forms, while preserving first-order correlations between the input and output and guaranteeing the complete enclosure of the joint range. One simply replaces each arithmetic operation or elementary function call in the formula by a call to the corresponding AA library routine. For smooth functions, the approximation errors made at each step are proportional to the square h² of the width h of the input intervals.
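One standard choice for multiplication, sketched below with the same toy (center, coefficient-dict) representation, takes G = x0·y + y0·x − x0·y0 as the affine part and charges the residual, bounded by the product of the two total deviations, to a fresh noise symbol. This is an illustrative sketch with invented names, ignoring roundoff:

```python
def mul(x, y, new_symbol):
    """Z = X*Y: affine part x0*y + y0*x - x0*y0, plus a fresh error term.
    The residual (sum_i x_i*eps_i)*(sum_i y_i*eps_i) is bounded in magnitude
    by rad(x)*rad(y), which becomes the coefficient of the new symbol."""
    cx, tx = x
    cy, ty = y
    terms = {}
    for j, c in tx.items():
        terms[j] = terms.get(j, 0.0) + cy * c   # x0*y contribution
    for j, c in ty.items():
        terms[j] = terms.get(j, 0.0) + cx * c   # y0*x contribution
    rad_x = sum(abs(c) for c in tx.values())
    rad_y = sum(abs(c) for c in ty.values())
    terms[new_symbol] = rad_x * rad_y           # eps_k, unused by earlier forms
    return (cx * cy, terms)

x = (5.0, {1: 2.0})          # X in [3, 7]
z = mul(x, x, new_symbol=2)  # X*X
radius = sum(abs(c) for c in z[1].values())
print(z[0] - radius, z[0] + radius)  # 1.0 49.0, enclosing the true range [9, 49]
```

The coefficient of the new symbol is rad(x)·rad(y), i.e. of second order in the input widths, which is the source of the h² error behaviour of each AA step.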
For this reason, affine arithmetic will often yield much tighter bounds than standard interval arithmetic (whose errors are proportional to h). === Roundoff errors === In order to provide guaranteed enclosure, affine arithmetic operations must account for the roundoff errors in the computation of the resulting coefficients {\displaystyle z_{j}} . This cannot be done by rounding each {\displaystyle z_{j}} in a specific direction, because any such rounding would falsify the dependencies between affine forms that share the symbol {\displaystyle \epsilon _{j}} . Instead, one must compute an upper bound {\displaystyle \delta _{j}} to the roundoff error of each {\displaystyle z_{j}} , and add all those {\displaystyle \delta _{j}} to the coefficient {\displaystyle z_{k}} of the new symbol {\displaystyle \epsilon _{k}} (rounding up). Thus, because of roundoff errors, even affine operations like Z = {\displaystyle \alpha } X and Z = X + Y will add the extra term {\displaystyle z_{k}\epsilon _{k}} . The handling of roundoff errors increases the code complexity and execution time of AA operations. In applications where those errors are known to be unimportant (because they are dominated by uncertainties in the input data and/or by the linearization errors), one may use a simplified AA library that does not implement roundoff error control. == Affine projection model == Affine arithmetic can be viewed in matrix form as follows. Let {\displaystyle X_{1},X_{2},\dots ,X_{m}} be all input and computed quantities in use at some point during a computation. The affine forms for those quantities can be represented by a single coefficient matrix A and a vector b, where element {\displaystyle A_{i,j}} is the coefficient of symbol {\displaystyle \epsilon _{j}} in the affine form of {\displaystyle X_{i}} , and {\displaystyle b_{i}} is the independent term of that form.
Then the joint range of the quantities, that is, the range of the point {\displaystyle (X_{1},X_{2},\dots ,X_{m})} , is the image of the hypercube {\displaystyle U^{n}=[-1,+1]^{n}} under the affine map from {\displaystyle U^{n}} to {\displaystyle R^{m}} defined by {\displaystyle \epsilon \to A\epsilon +b} . The range of this affine map is a zonotope bounding the joint range of the quantities {\displaystyle X_{1},X_{2},\dots ,X_{m}} . Thus one could say that AA is a "zonotope arithmetic". Each step of AA usually entails adding one more row and one more column to the matrix A. == Affine form simplification == Since each AA operation generally creates a new symbol {\displaystyle \epsilon _{k}} , the number of terms in an affine form may be proportional to the number of operations used to compute it. Thus, it is often necessary to apply "symbol condensation" steps, where two or more symbols {\displaystyle \epsilon _{k}} are replaced by a smaller set of new symbols. Geometrically, this means replacing a complicated zonotope P by a simpler zonotope Q that encloses it. This operation can be done without destroying the first-order approximation property of the final zonotope. == Implementation == === Matrix implementation === Affine arithmetic can be implemented by a global array A and a global vector b, as described above. This approach is reasonably adequate when the set of quantities to be computed is small and known in advance. In this approach, the programmer must maintain externally the correspondence between the row indices and the quantities of interest. Global variables hold the number m of affine forms (rows) computed so far, and the number n of symbols (columns) used so far; these are automatically updated at each AA operation.
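The global-matrix view can be made concrete with a small sketch. Using the dependent pair from the earlier example (relabelled to symbols eps_1..eps_3), each row's interval range is its independent term plus or minus the sum of the row's absolute coefficients; the variable names are illustrative only:

```python
# Affine projection model: row i of A holds the coefficients of X_i over
# symbols eps_1..eps_n, and b[i] is the independent term of that form.

A = [[2.0, 0.0, -6.0],    # X_1 = 10 + 2*eps_1 - 6*eps_3
     [0.0, 3.0,  4.0]]    # X_2 = 20 + 3*eps_2 + 4*eps_3
b = [10.0, 20.0]

# Since each eps_j ranges over [-1, +1], the interval of X_i is
# b[i] +/- sum_j |A[i][j]| (the bounding box of the zonotope A*eps + b).
ranges = [(bi - sum(abs(a) for a in row), bi + sum(abs(a) for a in row))
          for row, bi in zip(A, b)]
print(ranges)  # [(2.0, 18.0), (13.0, 27.0)]
```

The zonotope itself (here, the hexagon described earlier) is strictly smaller than this bounding box; the box is just the per-coordinate projection.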
=== Vector implementation === Alternatively, each affine form can be implemented as a separate vector of coefficients. This approach is more convenient for programming, especially when there are calls to library procedures that may use AA internally. Each affine form can be given a mnemonic name; it can be allocated when needed, be passed to procedures, and reclaimed when no longer needed. The AA code then looks much closer to the original formula. A global variable holds the number n of symbols used so far. === Sparse vector implementation === On fairly long computations, the set of "live" quantities (those that will be used in future computations) is much smaller than the set of all computed quantities; and likewise for the set of "live" symbols {\displaystyle \epsilon _{j}} . In this situation, the matrix and vector implementations are too wasteful of time and space, and one should use a sparse implementation. Namely, each affine form is stored as a list of pairs (j, {\displaystyle x_{j}} ), containing only the terms with non-zero coefficient {\displaystyle x_{j}} . For efficiency, the terms should be sorted in order of j. This representation makes the AA operations somewhat more complicated; however, the cost of each operation becomes proportional to the number of nonzero terms appearing in the operands, instead of the total number of symbols used so far. This is the representation used by LibAffa. == References == == External links == Stolfi's page on AA. LibAffa, an LGPL implementation of affine arithmetic. libaffa on GitHub ASOL, a branch-and-prune method to find all solutions to systems of nonlinear equations using affine arithmetic YalAA, an object-oriented C++ based template library for affine arithmetic (AA). kv on GitHub (C++ library which can use affine arithmetic)
Claude (language model)
Claude is a family of large language models developed by Anthropic. The first model was released in March 2023. The Claude 3 family, released in March 2024, consists of three models: Haiku, optimized for speed; Sonnet, which balances capability and performance; and Opus, designed for complex reasoning tasks. These models can process both text and images, with Claude 3 Opus demonstrating enhanced capabilities in areas like mathematics, programming, and logical reasoning compared to previous versions. Claude 4, which includes Opus and Sonnet, was released in May 2025. == Training == Claude models are generative pre-trained transformers. They have been pre-trained to predict the next word in large amounts of text. They have then been fine-tuned, notably using constitutional AI and reinforcement learning from human feedback (RLHF). === Constitutional AI === Constitutional AI is an approach developed by Anthropic for training AI systems, particularly language models like Claude, to be harmless and helpful without relying on extensive human feedback. The method, detailed in the paper "Constitutional AI: Harmlessness from AI Feedback", involves two phases: supervised learning and reinforcement learning. In the supervised learning phase, the model generates responses to prompts, self-critiques these responses based on a set of guiding principles (a "constitution"), and revises the responses. The model is then fine-tuned on these revised responses. For the reinforcement learning from AI feedback (RLAIF) phase, responses are generated, and an AI compares their compliance with this constitution. This dataset of AI feedback is used to train a preference model that evaluates responses based on how well they satisfy the constitution. Claude is then fine-tuned to align with this preference model. This technique is similar to RLHF, except that the comparisons used to train the preference model are AI-generated.
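The two training phases described above can be summarized in pseudocode. This is a schematic reading of the published description only: every helper name (generate, critique, revise, judge, and so on) is an invented stand-in, not Anthropic's actual code or API.

```
# Pseudocode sketch of the Constitutional AI training loop described above.

def supervised_phase(model, prompts, constitution):
    revised_pairs = []
    for prompt in prompts:
        response = model.generate(prompt)
        for principle in constitution:
            critique = model.critique(response, principle)   # self-critique
            response = model.revise(response, critique)      # revised answer
        revised_pairs.append((prompt, response))
    return model.finetune(revised_pairs)   # fine-tune on revised responses

def rlaif_phase(model, prompts, constitution):
    comparisons = []
    for prompt in prompts:
        a, b = model.generate(prompt), model.generate(prompt)
        # An AI judge, not a human, picks the response that better satisfies
        # the constitution; this is what distinguishes RLAIF from RLHF.
        preferred = model.judge(a, b, constitution)
        comparisons.append((prompt, a, b, preferred))
    preference_model = train_preference_model(comparisons)
    return reinforcement_learning(model, preference_model)
```

The only structural difference from standard RLHF is the source of the comparison labels in the second phase.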
The constitution for Claude included 75 points, including sections from the UN Universal Declaration of Human Rights. == Models == Claude is named after Claude Shannon, a pioneer in AI research. === Claude === Claude was the initial version of Anthropic's language model. Released in March 2023, Claude demonstrated proficiency in various tasks but had certain limitations in coding, math, and reasoning capabilities. Anthropic partnered with companies like Notion (productivity software) and Quora (to help develop the Poe chatbot). ==== Claude Instant ==== Claude was released as two versions, Claude and Claude Instant, with Claude Instant being a faster, less expensive, and lighter version. Claude Instant has an input context length of 100,000 tokens (which corresponds to around 75,000 words). === Claude 2 === Claude 2 was the next major iteration of Claude, released in July 2023 and available to the general public, whereas Claude 1 was only available to selected users approved by Anthropic. Claude 2 expanded its context window from 9,000 tokens to 100,000 tokens. Features included the ability to upload PDFs and other documents, enabling Claude to read, summarize, and assist with tasks. ==== Claude 2.1 ==== Claude 2.1 doubled the number of tokens that the chatbot could handle, increasing it to a window of 200,000 tokens, which equals around 500 pages of written material. Anthropic states that the new model is less likely to produce false statements compared to its predecessors. ==== Criticism ==== Claude 2 received criticism for its stringent ethical alignment that may reduce usability and performance. Users have been refused assistance with benign requests, for example with the system administration question "How can I kill all python processes in my ubuntu server?"
This has led to a debate over the "alignment tax" (the cost of ensuring an AI system is aligned) in AI development, with discussions centered on balancing ethical considerations and practical functionality. Critics argued for user autonomy and effectiveness, while proponents stressed the importance of ethical AI. === Claude 3 === Claude 3 was released on March 4, 2024; the press release claimed it had set new industry benchmarks across a wide range of cognitive tasks. The Claude 3 family includes three state-of-the-art models in ascending order of capability: Haiku, Sonnet, and Opus. The default version of Claude 3, Opus, has a context window of 200,000 tokens, but this is being expanded to 1 million for specific use cases. Claude 3 drew attention for demonstrating an apparent ability to realize it is being artificially tested during "needle in a haystack" tests. ==== Claude 3.5 ==== On June 20, 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated significantly improved performance on benchmarks compared to the larger Claude 3 Opus, notably in areas such as coding, multistep workflows, chart interpretation, and text extraction from images. Released alongside 3.5 Sonnet was the new Artifacts capability, in which Claude was able to create code in a dedicated window in the interface and preview the rendered output in real time, such as SVG graphics or websites. Anthropic also announced that Claude 3.5 Opus would be released later that year, and added it to their models page. However, as of February 2025, Claude 3.5 Opus has not been released, and Anthropic has removed mention of it from the models page. An "upgraded Claude 3.5 Sonnet", billed as "Claude 3.5 Sonnet (New)" in the web interface and benchmarks, was introduced on October 22, 2024, along with Claude 3.5 Haiku. A feature, "computer use," was also unveiled in public beta.
This capability enables Claude 3.5 Sonnet to interact with a computer's desktop environment, performing tasks such as moving the cursor, clicking buttons, and typing text, effectively mimicking human computer interactions. This development allows the AI to autonomously execute complex, multi-step tasks across various applications. Upon release, Anthropic claimed Claude 3.5 Haiku would remain the same price as its predecessor, Claude 3 Haiku. However, on November 4, 2024, Anthropic announced that they would be increasing the price of the model "to reflect its increase in intelligence". ==== Claude 3.7 ==== Claude 3.7 Sonnet was released on February 24, 2025. It is a hybrid AI reasoning model that allows users to choose between rapid responses and more thoughtful, step-by-step reasoning. This model integrates both capabilities into a single framework, eliminating the need for multiple models. Users can control how long the model "thinks" about a question, balancing speed and accuracy based on their needs. Anthropic also launched a research preview of Claude Code, an agentic command line tool that enables developers to delegate coding tasks directly from their terminal. === Claude 4 === On May 22, 2025, Anthropic released two more models: Claude Sonnet 4 and Claude Opus 4. Anthropic added API features for developers: a code execution tool, a connector to its Model Context Protocol, and a Files API. It classified Opus 4 as a "Level 3" model on the company's four-point safety scale, meaning the company considers it powerful enough to pose "significantly higher risk". Anthropic reported that during a safety test involving a fictional scenario, Claude attempted to blackmail an engineer in order to prevent its deactivation. == Features == In June 2024, Anthropic released the Artifacts feature, allowing users to generate and interact with code snippets and documents.
In October 2024, Anthropic released the "computer use" feature, allowing Claude to attempt to navigate computers by interpreting screen content and simulating keyboard and mouse input. In March 2025, Anthropic added a web search feature to Claude, starting with only paying users located in the United States. == Criticism == Claude uses a web crawler, ClaudeBot, to search the web for content. It has been criticized for not respecting a site's robots.txt and placing excessive load on sites. == References == == External links == Official website
Hájek projection
In statistics, Hájek projection of a random variable T {\displaystyle T} on a set of independent random vectors X 1 , … , X n {\displaystyle X_{1},\dots ,X_{n}} is a particular measurable function of X 1 , … , X n {\displaystyle X_{1},\dots ,X_{n}} that, loosely speaking, captures the variation of T {\displaystyle T} in an optimal way. It is named after the Czech statistician Jaroslav Hájek . == Definition == Given a random variable T {\displaystyle T} and a set of independent random vectors X 1 , … , X n {\displaystyle X_{1},\dots ,X_{n}} , the Hájek projection T ^ {\displaystyle {\hat {T}}} of T {\displaystyle T} onto { X 1 , … , X n } {\displaystyle \{X_{1},\dots ,X_{n}\}} is given by T ^ = E ⁡ ( T ) + ∑ i = 1 n [ E ⁡ ( T ∣ X i ) − E ⁡ ( T ) ] = ∑ i = 1 n E ⁡ ( T ∣ X i ) − ( n − 1 ) E ⁡ ( T ) {\displaystyle {\hat {T}}=\operatorname {E} (T)+\sum _{i=1}^{n}\left[\operatorname {E} (T\mid X_{i})-\operatorname {E} (T)\right]=\sum _{i=1}^{n}\operatorname {E} (T\mid X_{i})-(n-1)\operatorname {E} (T)} == Properties == Hájek projection T ^ {\displaystyle {\hat {T}}} is an L 2 {\displaystyle L^{2}} projection of T {\displaystyle T} onto a linear subspace of all random variables of the form ∑ i = 1 n g i ( X i ) {\displaystyle \sum _{i=1}^{n}g_{i}(X_{i})} , where g i : R d → R {\displaystyle g_{i}:\mathbb {R} ^{d}\to \mathbb {R} } are arbitrary measurable functions such that E ⁡ ( g i 2 ( X i ) ) < ∞ {\displaystyle \operatorname {E} (g_{i}^{2}(X_{i}))<\infty } for all i = 1 , … , n {\displaystyle i=1,\dots ,n} E ⁡ ( T ^ ∣ X i ) = E ⁡ ( T ∣ X i ) {\displaystyle \operatorname {E} ({\hat {T}}\mid X_{i})=\operatorname {E} (T\mid X_{i})} and hence E ⁡ ( T ^ ) = E ⁡ ( T ) {\displaystyle \operatorname {E} ({\hat {T}})=\operatorname {E} (T)} Under some conditions, asymptotic distributions of the sequence of statistics T n = T n ( X 1 , … , X n ) {\displaystyle T_{n}=T_{n}(X_{1},\dots ,X_{n})} and the sequence of its Hájek projections T ^ n = T ^ n ( X 1 , … , X n ) {\displaystyle 
{\hat {T}}_{n}={\hat {T}}_{n}(X_{1},\dots ,X_{n})} coincide, namely, if Var ⁡ ( T n ) / Var ⁡ ( T ^ n ) → 1 {\displaystyle \operatorname {Var} (T_{n})/\operatorname {Var} ({\hat {T}}_{n})\to 1} , then T n − E ⁡ ( T n ) Var ⁡ ( T n ) − T ^ n − E ⁡ ( T ^ n ) Var ⁡ ( T ^ n ) {\displaystyle {\frac {T_{n}-\operatorname {E} (T_{n})}{\sqrt {\operatorname {Var} (T_{n})}}}-{\frac {{\hat {T}}_{n}-\operatorname {E} ({\hat {T}}_{n})}{\sqrt {\operatorname {Var} ({\hat {T}}_{n})}}}} converges to zero in probability. == References ==
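As an illustration of the definition above and of the property E(T̂ ∣ Xᵢ) = E(T ∣ Xᵢ), both can be verified by exact enumeration on a small example; the choice below (T = X₁X₂ with two independent fair coin flips) is purely illustrative and not from the article.

```python
from itertools import product

# Illustrative example: X1, X2 independent fair {0,1} coin flips, T = X1 * X2.
outcomes = list(product([0, 1], repeat=2))        # each outcome has probability 1/4
T = lambda x: x[0] * x[1]

ET = sum(T(x) for x in outcomes) / len(outcomes)  # E(T) = 1/4

def cond_exp_T(i, value):
    """E(T | X_i = value), by enumeration over the other coordinate."""
    subset = [x for x in outcomes if x[i] == value]
    return sum(T(x) for x in subset) / len(subset)

def T_hat(x):
    """Hajek projection: sum_i E(T | X_i = x_i) - (n - 1) E(T)."""
    n = len(x)
    return sum(cond_exp_T(i, x[i]) for i in range(n)) - (n - 1) * ET

# Property check: E(T_hat | X_i) = E(T | X_i) for every coordinate and value.
for i in range(2):
    for v in [0, 1]:
        subset = [x for x in outcomes if x[i] == v]
        lhs = sum(T_hat(x) for x in subset) / len(subset)
        assert abs(lhs - cond_exp_T(i, v)) < 1e-12
```

Here E(T) = 1/4 and T̂(x) = x₁/2 + x₂/2 − 1/4, so T̂ matches T in all conditional expectations given a single Xᵢ even though T̂ ≠ T pointwise.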
Sparse Fourier transform
The sparse Fourier transform (SFT) is a kind of discrete Fourier transform (DFT) for handling big data signals. Specifically, it is used in GPS synchronization, spectrum sensing and analog-to-digital converters. The fast Fourier transform (FFT) plays an indispensable role in many scientific domains, especially in signal processing. It has been called one of the top ten algorithms of the twentieth century. However, with the advent of the big data era, the FFT still needs to be improved in order to save computing power. Recently, the sparse Fourier transform (SFT) has gained a considerable amount of attention, because it performs well at analyzing long sequences of data with few signal components. == Definition == Consider a sequence xn of complex numbers. By Fourier series, xn can be written as x n = ( F ∗ X ) n = ∑ k = 0 N − 1 X k e j 2 π N k n . {\displaystyle x_{n}=(F^{*}X)_{n}=\sum _{k=0}^{N-1}X_{k}e^{j{\frac {2\pi }{N}}kn}.} Similarly, Xk can be represented as X k = 1 N ( F x ) k = 1 N ∑ n = 0 N − 1 x n e − j 2 π N k n . {\displaystyle X_{k}={\frac {1}{N}}(Fx)_{k}={\frac {1}{N}}\sum _{n=0}^{N-1}x_{n}e^{-j{\frac {2\pi }{N}}kn}.} Hence, from the equations above, the mapping is F : C N → C N {\displaystyle F:\mathbb {C} ^{N}\to \mathbb {C} ^{N}} . === Single frequency recovery === Assume only a single frequency exists in the sequence. In order to recover this frequency from the sequence, it is reasonable to utilize the relationship between adjacent points of the sequence. ==== Phase encoding ==== The phase k can be obtained by dividing adjacent points of the sequence. In other words, x n + 1 x n = e j 2 π N k = cos ⁡ ( 2 π k N ) + j sin ⁡ ( 2 π k N ) . {\displaystyle {\frac {x_{n+1}}{x_{n}}}=e^{j{\frac {2\pi }{N}}k}=\cos \left({\frac {2\pi k}{N}}\right)+j\sin \left({\frac {2\pi k}{N}}\right).} Notice that x ∈ C N {\displaystyle x\in \mathbb {C} ^{N}} . ==== An aliasing-based search ==== Seeking the phase k can be done via the Chinese remainder theorem (CRT).
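A minimal sketch of the CRT combining step in Python (illustrative only; practical SFT implementations obtain the residues k mod mᵢ from aliased, subsampled DFTs rather than from k itself):

```python
def crt(residues, moduli):
    """Reconstruct k modulo the product of pairwise-coprime moduli,
    given the residues k mod m_i (Chinese remainder theorem)."""
    k, m = 0, 1
    for r_i, m_i in zip(residues, moduli):
        # Solve k + m*t ≡ r_i (mod m_i) for t, using the modular inverse of m.
        t = ((r_i - k) * pow(m, -1, m_i)) % m_i
        k += m * t
        m *= m_i
    return k % m

# The frequency k = 104134 is determined by its residues modulo 100, 101, 103:
residues = [104134 % p for p in (100, 101, 103)]
assert crt(residues, [100, 101, 103]) == 104134
```

The three-argument `pow(m, -1, m_i)` (Python 3.8+) computes the modular inverse, which exists because the moduli are pairwise coprime.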
Take k = 104,134 {\displaystyle k=104{,}134} as an example. Now, we have three pairwise relatively prime integers 100, 101, and 103. Thus, the congruences can be written as k = 104,134 ≡ { 34 mod 100, 3 mod 101, 1 mod 103. {\displaystyle k=104{,}134\equiv \left\{{\begin{array}{rl}34&{\bmod {1}}00,\\3&{\bmod {1}}01,\\1&{\bmod {1}}03.\end{array}}\right.} By CRT, k is determined modulo the product: k = 104,134 mod ( 100 ⋅ 101 ⋅ 103 ) = 104,134 mod 1,040,300 {\displaystyle k=104{,}134{\bmod {(}}100\cdot 101\cdot 103)=104{,}134{\bmod {1}}{,}040{,}300} ==== Randomly binning frequencies ==== Now, we wish to handle the case of multiple frequencies, instead of a single frequency. The adjacent frequencies can be separated by the scaling property c and the modulation property b. Namely, by randomly choosing the parameters c and b, the distribution of all frequencies becomes nearly uniform; by randomly binning frequencies in this way, we can utilize the single frequency recovery to seek the main components. x n ′ = X k e j 2 π N ( c ⋅ k + b ) , {\displaystyle x_{n}'=X_{k}e^{j{\frac {2\pi }{N}}(c\cdot k+b)},} where c is the scaling parameter and b is the modulation parameter. By randomly choosing c and b, the whole spectrum looks approximately uniformly distributed. Then, passing the signal through filter banks (such as Gaussians, indicator functions, spike trains, and Dolph-Chebyshev filters) can separate all frequencies, so that each bank contains only a single frequency. == The prototypical SFT == Generally, all SFT algorithms follow three stages: === Identifying frequencies === By randomly binning frequencies, all components can be separated. Then, passing them through filter banks, each band contains only a single frequency, and it is convenient to use the methods mentioned above to recover that frequency. === Estimating coefficients === After identifying frequencies, we will have many frequency components. We can use the Fourier transform to estimate their coefficients.
X k ′ = 1 L ∑ l = 1 L x n ′ e − j 2 π N n ′ ℓ {\displaystyle X_{k}'={\frac {1}{L}}\sum _{l=1}^{L}x_{n}'e^{-j{\frac {2\pi }{N}}n'\ell }} === Repeating === Finally, by repeating these two stages, we can extract the most important components from the original signal. x n − ∑ k ′ = 1 k X k ′ e j 2 π N k ′ n {\displaystyle x_{n}-\sum _{k'=1}^{k}X_{k}'e^{j{\frac {2\pi }{N}}k'n}} == Sparse Fourier transform in the discrete setting == In 2012, Hassanieh, Indyk, Katabi, and Price proposed an algorithm that takes O ( k log ⁡ n log ⁡ ( n / k ) ) {\displaystyle O(k\log n\log(n/k))} samples and runs in time of the same order. == Sparse Fourier transform in the high dimensional setting == In 2014, Indyk and Kapralov proposed an algorithm that takes 2 O ( d log ⁡ d ) k log ⁡ n {\displaystyle 2^{O(d\log d)}k\log n} samples and runs in nearly linear time in n {\displaystyle n} . In 2016, Kapralov proposed an algorithm that uses a sublinear number of samples 2 O ( d 2 ) k log ⁡ n log ⁡ log ⁡ n {\displaystyle 2^{O(d^{2})}k\log n\log \log n} and sublinear decoding time k log O ( d ) ⁡ n {\displaystyle k\log ^{O(d)}n} . In 2019, Nakos, Song, and Wang introduced a new algorithm which uses a nearly optimal number of samples O ( k log ⁡ n log ⁡ k ) {\displaystyle O(k\log n\log k)} and requires nearly linear decoding time. A dimension-incremental algorithm was proposed by Potts and Volkmer, based on sampling along rank-1 lattices. == Sparse Fourier transform in the continuous setting == There are several works about generalizing the discrete setting into the continuous setting. == Implementations == There are several implementations from MIT, MSU, ETH and Chemnitz University of Technology (TUC), all freely available online. MSU implementations ETH implementations MIT implementations GitHub TUC implementations == Further reading == Hassanieh, Haitham (2018). The Sparse Fourier Transform: Theory and Practice. Association for Computing Machinery and Morgan & Claypool. ISBN 978-1-94748-707-9. == References ==
CIML community portal
The computational intelligence and machine learning (CIML) community portal is an international multi-university initiative. Its primary purpose is to help facilitate a virtual scientific community infrastructure for all those involved with, or interested in, computational intelligence and machine learning. This includes CIML research-, education-, and application-oriented resources residing at the portal and others that are linked from the CIML site. == Overview == The CIML community portal was created to facilitate an online virtual scientific community wherein anyone interested in CIML can share research, obtain resources, or simply learn more. The effort is currently led by Jacek Zurada (principal investigator), with Rammohan Ragade and Janusz Wojtusiak, aided by a team of 25 volunteer researchers from 13 different countries. The ultimate goal of the CIML community portal is to accommodate and cater to a broad range of users, including experts, students, the public, and outside researchers interested in using CIML methods and software tools. Each community member and user will be guided through the portal resources and tools based on their respective CIML experience (e.g. expert, student, outside researcher) and goals (e.g. collaboration, education). A preliminary version of the community's portal, with limited capabilities, is now operational and available for users. All electronic resources on the portal are peer-reviewed to ensure high quality and citability in the literature. == Further reading == Jacek M. Zurada, Janusz Wojtusiak, Fahmida Chowdhury, James E. Gentle, Cedric J. Jeannot, and Maciej A. Mazurowski, Computational Intelligence Virtual Community: Framework and Implementation Issues, Proceedings of the IEEE World Congress on Computational Intelligence, Hong Kong, June 1–6, 2008. Jacek M. Zurada, Janusz Wojtusiak, Maciej A.
Mazurowski, Devendra Mehta, Khalid Moidu, Steve Margolis, Toward Multidisciplinary Collaboration in the CIML Virtual Community, Proceedings of the 2008 Workshop on Building Computational Intelligence and Machine Learning Virtual Organizations, pp. 62–66 Chris Boyle, Artur Abdullin, Rammohan Ragade, Maciej A. Mazurowski, Janusz Wojtusiak, Jacek M. Zurada, Workflow considerations in the emerging CI-ML virtual organization, Proceedings of the 2008 Workshop on Building Computational Intelligence and Machine Learning Virtual Organizations, pp. 67–70 == See also == Artificial Intelligence Computational Intelligence Machine Learning National Science Foundation == References == == External links == Official website
Inductive probability
Inductive probability attempts to give the probability of future events based on past events. It is the basis for inductive reasoning, and gives the mathematical basis for learning and the perception of patterns. It is a source of knowledge about the world. There are three sources of knowledge: inference, communication, and deduction. Communication relays information found using other methods. Deduction establishes new facts based on existing facts. Inference establishes new facts from data. Its basis is Bayes' theorem. Information describing the world is written in a language. For example, a simple mathematical language of propositions may be chosen. Sentences may be written down in this language as strings of characters. But in the computer it is possible to encode these sentences as strings of bits (1s and 0s). Then the language may be encoded so that the most commonly used sentences are the shortest. This internal language implicitly represents probabilities of statements. Occam's razor says the "simplest theory consistent with the data is most likely to be correct". The "simplest theory" is interpreted as the representation of the theory written in this internal language. The theory with the shortest encoding in this internal language is most likely to be correct. == History == Probability and statistics were focused on probability distributions and tests of significance. Probability was formal, well defined, but limited in scope. In particular its application was limited to situations that could be defined as an experiment or trial, with a well defined population. Bayes' theorem is named after the Rev. Thomas Bayes (1701–1761). Bayesian inference broadened the application of probability to many situations where a population was not well defined. But Bayes' theorem always depended on prior probabilities to generate new probabilities. It was unclear where these prior probabilities should come from.
Ray Solomonoff developed algorithmic probability circa 1964, giving an explanation for what randomness is and how patterns in the data may be represented by computer programs that give shorter representations of the data. Chris Wallace and D. M. Boulton developed minimum message length circa 1968. Later Jorma Rissanen developed the minimum description length circa 1978. These methods allow information theory to be related to probability, in a way that can be compared to the application of Bayes' theorem, but which gives a source and explanation for the role of prior probabilities. Marcus Hutter combined decision theory with the work of Ray Solomonoff and Andrey Kolmogorov to give a theory for the Pareto optimal behavior of an intelligent agent, circa 1998. === Minimum description/message length === The program with the shortest length that matches the data is the most likely to predict future data. This is the thesis behind the minimum message length and minimum description length methods. At first sight Bayes' theorem appears different from the minimum message/description length principle. On closer inspection it turns out to be the same. Bayes' theorem is about conditional probabilities, and states the probability that event B happens if firstly event A happens: P ( A ∧ B ) = P ( B ) ⋅ P ( A | B ) = P ( A ) ⋅ P ( B | A ) {\displaystyle P(A\land B)=P(B)\cdot P(A|B)=P(A)\cdot P(B|A)} becomes, in terms of message length L, L ( A ∧ B ) = L ( B ) + L ( A | B ) = L ( A ) + L ( B | A ) . {\displaystyle L(A\land B)=L(B)+L(A|B)=L(A)+L(B|A).} This means that if all the information is given describing an event then the length of the information may be used to give the raw probability of the event. So if the information describing the occurrence of A is given, along with the information describing B given A, then all the information describing A and B has been given.
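The correspondence between the product rule and message-length additivity can be checked numerically by taking L(·) = −log₂ P(·); the probabilities below are made up for the example.

```python
from math import log2, isclose

# Illustrative probabilities for two events A and B (any consistent values work).
P_B = 0.4
P_A_given_B = 0.25
P_AB = P_B * P_A_given_B            # product rule: P(A and B) = P(B) P(A|B)

L = lambda p: -log2(p)              # ideal code length in bits

# The product rule for probabilities becomes the sum rule for message lengths:
assert isclose(L(P_AB), L(P_B) + L(P_A_given_B))
```

Multiplying probabilities corresponds exactly to adding the bit lengths of the messages that describe the events.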
==== Overfitting ==== Overfitting occurs when the model matches the random noise and not the pattern in the data. For example, take the situation where a curve is fitted to a set of points. If a polynomial with many terms is fitted then it can more closely represent the data. Then the fit will be better, and the information needed to describe the deviations from the fitted curve will be smaller. Smaller information length means higher probability. However, the information needed to describe the curve must also be considered. The total information for a curve with many terms may be greater than for a curve with fewer terms that has a poorer fit but needs less information to describe the polynomial. === Inference based on program complexity === Suppose a bit string x is observed. Consider all programs that generate strings starting with x. Cast in the form of inductive inference, these programs are theories that imply the observation of the bit string x. The method used here to assign probabilities for inductive inference is based on Solomonoff's theory of inductive inference. ==== Detecting patterns in the data ==== If all the bits of, say, a sequence of coin flips are 1, then people infer that there is a bias in the coin and that the next bit is also more likely to be 1. This is described as learning from, or detecting a pattern in, the data. Such a pattern may be represented by a computer program. A short computer program may be written that produces a series of bits which are all 1. If the length of the program K is L ( K ) {\displaystyle L(K)} bits then its prior probability is, P ( K ) = 2 − L ( K ) {\displaystyle P(K)=2^{-L(K)}} The length of the shortest program that represents the string of bits is called the Kolmogorov complexity. Kolmogorov complexity is not computable. This is related to the halting problem. When searching for the shortest program some programs may go into an infinite loop.
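As a toy illustration of the prior P(K) = 2^(−L(K)): shorter programs consistent with the data receive exponentially more weight in the prediction of the next bit. The hypothetical "programs" and their bit lengths below are invented for the example, not the output of any real machine model.

```python
# Hypothetical programs consistent with the observed prefix "1111".
# Each entry is (description length in bits, bit it predicts next); both invented.
hypotheses = [
    (3, 1),    # "all ones"        - short program, predicts 1
    (9, 0),    # "1111 then 0s"    - longer, predicts 0
    (12, 1),   # some other rule   - longer still, predicts 1
]

# Prior weight of each program: P(K) = 2^{-L(K)}
weights = [2.0 ** -length for length, _ in hypotheses]
total = sum(weights)

# Predicted probability that the next bit is 1: weighted vote of all theories.
p_next_is_1 = sum(w for w, (_, bit) in zip(weights, hypotheses) if bit == 1) / total

# The short "all ones" program dominates, so the next bit is probably 1.
assert p_next_is_1 > 0.9
```

This is the "keep all theories" idea in miniature: every consistent program votes, but with weight decaying exponentially in its length.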
==== Considering all theories ==== The Greek philosopher Epicurus is quoted as saying "If more than one theory is consistent with the observations, keep all theories". As in a crime novel all theories must be considered in determining the likely murderer, so with inductive probability all programs must be considered in determining the likely future bits arising from the stream of bits. Programs that are already longer than n (the length of the observed bit string) have no predictive power. The raw (or prior) probability that the pattern of bits is random (has no pattern) is 2 − n {\displaystyle 2^{-n}} . Each program that produces the sequence of bits, but is shorter than n, is a theory/pattern about the bits with a probability of 2 − k {\displaystyle 2^{-k}} where k is the length of the program. The probability of receiving a sequence of bits y after receiving a series of bits x is then the conditional probability of receiving y given x, which is the probability of x with y appended, divided by the probability of x. ==== Universal priors ==== The programming language affects the predictions of the next bit in the string. The language acts as a prior probability. This is particularly a problem where the programming language codes for numbers and other data types. Intuitively we think that 0 and 1 are simple numbers, and that prime numbers are somehow more complex than numbers that may be composite. Using the Kolmogorov complexity gives an unbiased estimate (a universal prior) of the prior probability of a number. As a thought experiment an intelligent agent may be fitted with a data input device giving a series of numbers, after applying some transformation function to the raw numbers. Another agent might have the same input device with a different transformation function. The agents do not see or know about these transformation functions. Then there appears to be no rational basis for preferring one function over another.
A universal prior ensures that although two agents may have different initial probability distributions for the data input, the difference will be bounded by a constant. So universal priors do not eliminate an initial bias, but they reduce and limit it. Whenever we describe an event in a language, whether a natural language or another, the language has encoded in it our prior expectations. So some reliance on prior probabilities is inevitable. A problem arises where an intelligent agent's prior expectations interact with the environment to form a self-reinforcing feedback loop. This is the problem of bias or prejudice. Universal priors reduce but do not eliminate this problem. === Universal artificial intelligence === The theory of universal artificial intelligence applies decision theory to inductive probabilities. The theory shows how the best actions to optimize a reward function may be chosen. The result is a theoretical model of intelligence. It is a fundamental theory of intelligence, which optimizes the agent's behavior in: exploring the environment (performing actions to get responses that broaden the agent's knowledge); competing or co-operating with another agent (games); and balancing short- and long-term rewards. In general no agent will always provide the best actions in all situations. A particular choice made by an agent may be wrong, and the environment may provide no way for the agent to recover from an initial bad choice. However the agent is Pareto optimal in the sense that no other agent will do better than this agent in this environment, without doing worse in another environment. No other agent may, in this sense, be said to be better. At present the theory is limited by incomputability (the halting problem). Approximations may be used to avoid this. Processing speed and combinatorial explosion remain the primary limiting factors for artificial intelligence.
== Probability == Probability is the representation of uncertain or partial knowledge about the truth of statements. Probabilities are subjective and personal estimates of likely outcomes based on past experience and inferences made from the data. This description of probability may seem strange at first. In natural language we refer to "the probability" that the sun will rise tomorrow. We do not refer to "your probability" that the sun will rise. But in order for inference to be correctly modeled, probability must be personal, and the act of inference generates new posterior probabilities from prior probabilities. Probabilities are personal because they are conditional on the knowledge of the individual. Probabilities are subjective because they always depend, to some extent, on prior probabilities assigned by the individual. Subjective should not be taken here to mean vague or undefined. The term intelligent agent is used to refer to the holder of the probabilities. The intelligent agent may be a human or a machine. If the intelligent agent does not interact with the environment then the probability will converge over time to the frequency of the event. If, however, the agent uses the probability to interact with the environment, there may be feedback, so that two agents in an identical environment, starting with only slightly different priors, end up with completely different probabilities. In this case optimal decision theory, as in Marcus Hutter's Universal Artificial Intelligence, will give Pareto optimal performance for the agent. This means that no other intelligent agent could do better in one environment without doing worse in another environment. === Comparison to deductive probability === In deductive probability theories, probabilities are absolutes, independent of the individual making the assessment. But deductive probabilities are based on shared knowledge and on assumed facts that should be inferred from the data.
For example, in a trial the participants are aware of the outcomes of all previous trials. They also assume that each outcome is equally probable. Together this allows a single unconditional value of probability to be defined. But in reality each individual does not have the same information. And in general the probability of each outcome is not equal. The dice may be loaded, and this loading needs to be inferred from the data. === Probability as estimation === The principle of indifference has played a key role in probability theory. It says that if N statements are symmetric so that one condition cannot be preferred over another then all statements are equally probable. Taken seriously, in evaluating probability this principle leads to contradictions. Suppose there are 3 bags of gold in the distance and one is asked to select one. Then because of the distance one cannot see the bag sizes. One estimates, using the principle of indifference, that each bag has equal amounts of gold: each bag has one third of the gold. Now suppose that, while one is not looking, someone takes one of the bags and divides it into 3 bags. Now there are 5 bags of gold. The principle of indifference now says each bag has one fifth of the gold. A bag that was estimated to have one third of the gold is now estimated to have one fifth of the gold. Taken as values associated with the bags, the two estimates are different and therefore contradictory. But taken as estimates given under particular scenarios, both values are separate estimates given under different circumstances and there is no reason to believe they are equal. Estimates of prior probabilities are particularly suspect. Estimates will be constructed that do not follow any consistent frequency distribution. For this reason prior probabilities are considered as estimates of probabilities rather than probabilities.
A full theoretical treatment would associate with each probability: the statement, the prior knowledge, the prior probabilities, and the estimation procedure used to give the probability. === Combining probability approaches === Inductive probability combines two different approaches to probability: probability and information, and probability and frequency. Each approach gives a slightly different viewpoint. Information theory is used in relating probabilities to quantities of information. This approach is often used in giving estimates of prior probabilities. Frequentist probability defines probabilities as objective statements about how often an event occurs. This approach may be stretched by defining the trials to be over possible worlds. Statements about possible worlds define events. == Probability and information == Whereas logic represents only two values, true and false, as the values of a statement, probability associates a number in [0,1] with each statement. If the probability of a statement is 0, the statement is false. If the probability of a statement is 1 the statement is true. In considering some data as a string of bits, the prior probabilities of a 1 and a 0 are equal. Therefore, each extra bit halves the probability of a sequence of bits. This leads to the conclusion that, P ( x ) = 2 − L ( x ) {\displaystyle P(x)=2^{-L(x)}} where P ( x ) {\displaystyle P(x)} is the probability of the string of bits x {\displaystyle x} and L ( x ) {\displaystyle L(x)} is its length. The prior probability of any statement is calculated from the number of bits needed to state it. See also information theory. === Combining information === Two statements A {\displaystyle A} and B {\displaystyle B} may be represented by two separate encodings.
Then the length of the encoding is, L ( A ∧ B ) = L ( A ) + L ( B ) {\displaystyle L(A\land B)=L(A)+L(B)} or in terms of probability, P ( A ∧ B ) = P ( A ) P ( B ) {\displaystyle P(A\land B)=P(A)P(B)} But this law is not always true, because there may be a shorter method of encoding B {\displaystyle B} if we assume A {\displaystyle A} . So the above probability law applies only if A {\displaystyle A} and B {\displaystyle B} are "independent". === The internal language of information === The primary use of the information approach to probability is to provide estimates of the complexity of statements. Recall that Occam's razor states that "All things being equal, the simplest theory is the most likely to be correct". In order to apply this rule, there first needs to be a definition of what "simplest" means. Information theory defines simplest to mean having the shortest encoding. Knowledge is represented as statements. Each statement is a Boolean expression. Expressions are encoded by a function that takes a description (as against the value) of the expression and encodes it as a bit string. The length of the encoding of a statement gives an estimate of the probability of the statement. This probability estimate will often be used as the prior probability of the statement. Technically this estimate is not a probability, because it is not constructed from a frequency distribution. The probability estimates given by it do not always obey the law of total probability. Applying the law of total probability to various scenarios will usually give a more accurate estimate of the prior probability than the estimate from the length of the statement. ==== Encoding expressions ==== An expression is constructed from sub-expressions:

constants (including function identifiers),
applications of functions, and
quantifiers.

A Huffman code must distinguish the three cases. The length of each code is based on the frequency of each type of sub-expression.
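As a sketch of how such frequency-based code lengths might be assigned, the toy example below builds a Huffman code over the three kinds of sub-expression; the node counts are invented for illustration, and each code length implies a probability estimate of 2 to the minus that length:

```python
import heapq
import itertools

def huffman_lengths(freqs):
    """Code length in bits for each symbol of a Huffman code built from frequencies."""
    uid = itertools.count()                  # tie-breaker so the heap never compares lists
    heap = [(f, next(uid), [s]) for s, f in freqs.items()]
    heapq.heapify(heap)
    lengths = dict.fromkeys(freqs, 0)
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:              # merged symbols sit one level deeper
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, next(uid), syms1 + syms2))
    return lengths

# Hypothetical counts of each kind of sub-expression recorded so far.
lengths = huffman_lengths({"constant": 60, "application": 30, "quantifier": 10})
print(lengths)                               # {'constant': 1, 'application': 2, 'quantifier': 2}
probabilities = {s: 2.0 ** -l for s, l in lengths.items()}
print(probabilities)                         # implied estimates: 0.5, 0.25, 0.25
```

As in the text, the goal here is to estimate probabilities from frequencies, not to compress anything.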
Initially constants are all assigned the same length/probability. Later constants may be assigned a probability using the Huffman code, based on the number of uses of the function identifier in all expressions recorded so far. In using a Huffman code the goal is to estimate probabilities, not to compress the data. The length of a function application is the length of the function identifier constant plus the sum of the sizes of the expressions for each parameter. The length of a quantifier is the length of the expression being quantified over. ==== Distribution of numbers ==== No explicit representation of natural numbers is given. However natural numbers may be constructed by applying the successor function to 0, and then applying other arithmetic functions. A distribution of natural numbers is implied by this, based on the complexity of constructing each number. Rational numbers are constructed by the division of natural numbers. The simplest representation has no common factors between the numerator and the denominator. This allows the probability distribution of natural numbers to be extended to rational numbers. == Probability and frequency == The probability of an event may be interpreted as the frequency of outcomes where the statement is true divided by the total number of outcomes. If the outcomes form a continuum the frequency may need to be replaced with a measure. Events are sets of outcomes. Statements may be related to events. A Boolean statement B about outcomes defines a set of outcomes b, b = { x : B ( x ) } {\displaystyle b=\{x:B(x)\}} === Conditional probability === Each probability is always associated with the state of knowledge at a particular point in the argument. Probabilities before an inference are known as prior probabilities, and probabilities after are known as posterior probabilities. Probability depends on the facts known. The truth of a fact limits the domain of outcomes to the outcomes consistent with the fact.
Prior probabilities are the probabilities before a fact is known. Posterior probabilities are after a fact is known. The posterior probabilities are said to be conditional on the fact. The probability that B {\displaystyle B} is true given that A {\displaystyle A} is true is written as: P ( B | A ) . {\displaystyle P(B|A).} All probabilities are in some sense conditional. The prior probability of B {\displaystyle B} is, P ( B ) = P ( B | ⊤ ) {\displaystyle P(B)=P(B|\top )} === The frequentist approach applied to possible worlds === In the frequentist approach, probabilities are defined as the ratio of the number of outcomes within an event to the total number of outcomes. In the possible world model each possible world is an outcome, and statements about possible worlds define events. The probability of a statement being true is the number of possible worlds where the statement is true divided by the total number of possible worlds. The probability of a statement A {\displaystyle A} being true about possible worlds is then, P ( A ) = | { x : A ( x ) } | | x : ⊤ | {\displaystyle P(A)={\frac {|\{x:A(x)\}|}{|x:\top |}}} For a conditional probability, P ( B | A ) = | { x : A ( x ) ∧ B ( x ) } | | x : A ( x ) | {\displaystyle P(B|A)={\frac {|\{x:A(x)\land B(x)\}|}{|x:A(x)|}}} then P ( A ∧ B ) = | { x : A ( x ) ∧ B ( x ) } | | x : ⊤ | = | { x : A ( x ) ∧ B ( x ) } | | { x : A ( x ) } | | { x : A ( x ) } | | x : ⊤ | = P ( A ) P ( B | A ) {\displaystyle {\begin{aligned}P(A\land B)&={\frac {|\{x:A(x)\land B(x)\}|}{|x:\top |}}\\[8pt]&={\frac {|\{x:A(x)\land B(x)\}|}{|\{x:A(x)\}|}}{\frac {|\{x:A(x)\}|}{|x:\top |}}\\[8pt]&=P(A)P(B|A)\end{aligned}}} Using symmetry this equation may be written out as Bayes' law. P ( A ∧ B ) = P ( A ) P ( B | A ) = P ( B ) P ( A | B ) {\displaystyle P(A\land B)=P(A)P(B|A)=P(B)P(A|B)} This law describes the relationship between prior and posterior probabilities when new facts are learnt.
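The counting definitions above can be checked directly on a toy possible-worlds model; the worlds and the statements A and B below are invented for illustration:

```python
# Toy possible-worlds model: the worlds are all 3-bit strings, and a statement
# is a predicate on worlds; probabilities are obtained by counting worlds.
worlds = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def prob(stmt, given=lambda w: True):
    """P(stmt | given) = |{worlds where both hold}| / |{worlds where given holds}|."""
    domain = [w for w in worlds if given(w)]
    return sum(1 for w in domain if stmt(w)) / len(domain)

A = lambda w: w[0] == 1        # "the first bit is 1"
B = lambda w: sum(w) >= 2      # "at least two bits are 1"

# Check the product rule P(A and B) = P(A) P(B|A) derived above.
print(prob(lambda w: A(w) and B(w)))   # 0.375
print(prob(A) * prob(B, given=A))      # 0.5 * 0.75 = 0.375
```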
Written as quantities of information Bayes' theorem becomes, L ( A ∧ B ) = L ( A ) + L ( B | A ) = L ( B ) + L ( A | B ) {\displaystyle L(A\land B)=L(A)+L(B|A)=L(B)+L(A|B)} Two statements A and B are said to be independent if knowing the truth of A does not change the probability of B. Mathematically this is, P ( B ) = P ( B | A ) {\displaystyle P(B)=P(B|A)} then Bayes' theorem reduces to, P ( A ∧ B ) = P ( A ) P ( B ) {\displaystyle P(A\land B)=P(A)P(B)} === The law of total probability === For a set of mutually exclusive possibilities A i {\displaystyle A_{i}} , the sum of the posterior probabilities must be 1. ∑ i P ( A i | B ) = 1 {\displaystyle \sum _{i}{P(A_{i}|B)}=1} Substituting using Bayes' theorem gives the law of total probability, ∑ i P ( B | A i ) P ( A i ) = ∑ i P ( A i | B ) P ( B ) {\displaystyle \sum _{i}{P(B|A_{i})P(A_{i})}=\sum _{i}{P(A_{i}|B)P(B)}} P ( B ) = ∑ i P ( B | A i ) P ( A i ) {\displaystyle P(B)=\sum _{i}{P(B|A_{i})P(A_{i})}} This result is used to give the extended form of Bayes' theorem, P ( A i | B ) = P ( B | A i ) P ( A i ) ∑ j P ( B | A j ) P ( A j ) {\displaystyle P(A_{i}|B)={\frac {P(B|A_{i})P(A_{i})}{\sum _{j}{P(B|A_{j})P(A_{j})}}}} This is the usual form of Bayes' theorem used in practice, because it guarantees the sum of all the posterior probabilities for A i {\displaystyle A_{i}} is 1. === Alternate possibilities === For mutually exclusive possibilities, the probabilities add. P ( A ∨ B ) = P ( A ) + P ( B ) , if P ( A ∧ B ) = 0 {\displaystyle P(A\lor B)=P(A)+P(B),\qquad {\text{if }}P(A\land B)=0} Using A ∨ B = ( A ∧ ¬ ( A ∧ B ) ) ∨ ( B ∧ ¬ ( A ∧ B ) ) ∨ ( A ∧ B ) {\displaystyle A\lor B=(A\land \neg (A\land B))\lor (B\land \neg (A\land B))\lor (A\land B)} Then the alternatives A ∧ ¬ ( A ∧ B ) , B ∧ ¬ ( A ∧ B ) , A ∧ B {\displaystyle A\land \neg (A\land B),\quad B\land \neg (A\land B),\quad A\land B} are all mutually exclusive.
Also, ( A ∧ ¬ ( A ∧ B ) ) ∨ ( A ∧ B ) = A {\displaystyle (A\land \neg (A\land B))\lor (A\land B)=A} P ( A ∧ ¬ ( A ∧ B ) ) + P ( A ∧ B ) = P ( A ) {\displaystyle P(A\land \neg (A\land B))+P(A\land B)=P(A)} P ( A ∧ ¬ ( A ∧ B ) ) = P ( A ) − P ( A ∧ B ) {\displaystyle P(A\land \neg (A\land B))=P(A)-P(A\land B)} so, putting it all together, P ( A ∨ B ) = P ( ( A ∧ ¬ ( A ∧ B ) ) ∨ ( B ∧ ¬ ( A ∧ B ) ) ∨ ( A ∧ B ) ) = P ( A ∧ ¬ ( A ∧ B ) ) + P ( B ∧ ¬ ( A ∧ B ) ) + P ( A ∧ B ) = P ( A ) − P ( A ∧ B ) + P ( B ) − P ( A ∧ B ) + P ( A ∧ B ) = P ( A ) + P ( B ) − P ( A ∧ B ) {\displaystyle {\begin{aligned}P(A\lor B)&=P((A\land \neg (A\land B))\lor (B\land \neg (A\land B))\lor (A\land B))\\&=P(A\land \neg (A\land B))+P(B\land \neg (A\land B))+P(A\land B)\\&=P(A)-P(A\land B)+P(B)-P(A\land B)+P(A\land B)\\&=P(A)+P(B)-P(A\land B)\end{aligned}}} === Negation === As, A ∨ ¬ A = ⊤ {\displaystyle A\lor \neg A=\top } then P ( A ) + P ( ¬ A ) = 1 {\displaystyle P(A)+P(\neg A)=1} === Implication and conditional probability === Implication is related to conditional probability by the following equation, A → B ⟺ P ( B | A ) = 1 {\displaystyle A\to B\iff P(B|A)=1} Derivation (assuming P ( A ) > 0 {\displaystyle P(A)>0} ), A → B ⟺ P ( A → B ) = 1 ⟺ P ( A ∧ B ∨ ¬ A ) = 1 ⟺ P ( A ∧ B ) + P ( ¬ A ) = 1 ⟺ P ( A ∧ B ) = P ( A ) ⟺ P ( A ) ⋅ P ( B | A ) = P ( A ) ⟺ P ( B | A ) = 1 {\displaystyle {\begin{aligned}A\to B&\iff P(A\to B)=1\\&\iff P(A\land B\lor \neg A)=1\\&\iff P(A\land B)+P(\neg A)=1\\&\iff P(A\land B)=P(A)\\&\iff P(A)\cdot P(B|A)=P(A)\\&\iff P(B|A)=1\end{aligned}}} == Bayesian hypothesis testing == Bayes' theorem may be used to estimate the probability of a hypothesis or theory H, given some facts F.
The posterior probability of H is then P ( H | F ) = P ( H ) P ( F | H ) P ( F ) {\displaystyle P(H|F)={\frac {P(H)P(F|H)}{P(F)}}} or in terms of information, P ( H | F ) = 2 − ( L ( H ) + L ( F | H ) − L ( F ) ) {\displaystyle P(H|F)=2^{-(L(H)+L(F|H)-L(F))}} By assuming the hypothesis is true, a simpler representation of the statement F may be given. The length of the encoding of this simpler representation is L ( F | H ) . {\displaystyle L(F|H).} L ( H ) + L ( F | H ) {\displaystyle L(H)+L(F|H)} represents the amount of information needed to represent the facts F, if H is true. L ( F ) {\displaystyle L(F)} is the amount of information needed to represent F without the hypothesis H. The difference is how much the representation of the facts has been compressed by assuming that H is true. This is the evidence that the hypothesis H is true. If L ( F ) {\displaystyle L(F)} is estimated from encoding length then the probability obtained will not be between 0 and 1. The value obtained is proportional to the probability, without being a good probability estimate. The number obtained is sometimes referred to as a relative probability, being how much more probable the theory is than not holding the theory. If a full set of mutually exclusive hypotheses that provide evidence is known, a proper estimate may be given for the prior probability P ( F ) {\displaystyle P(F)} . === Set of hypotheses === Probabilities may be calculated from the extended form of Bayes' theorem.
Given all mutually exclusive hypotheses H i {\displaystyle H_{i}} which give evidence, such that, L ( H i ) + L ( F | H i ) < L ( F ) {\displaystyle L(H_{i})+L(F|H_{i})<L(F)} and also the hypothesis R, that none of the hypotheses is true, then, P ( H i | F ) = P ( H i ) P ( F | H i ) P ( F | R ) + ∑ j P ( H j ) P ( F | H j ) P ( R | F ) = P ( F | R ) P ( F | R ) + ∑ j P ( H j ) P ( F | H j ) {\displaystyle {\begin{aligned}P(H_{i}|F)&={\frac {P(H_{i})P(F|H_{i})}{P(F|R)+\sum _{j}{P(H_{j})P(F|H_{j})}}}\\[8pt]P(R|F)&={\frac {P(F|R)}{P(F|R)+\sum _{j}{P(H_{j})P(F|H_{j})}}}\end{aligned}}} In terms of information, P ( H i | F ) = 2 − ( L ( H i ) + L ( F | H i ) ) 2 − L ( F | R ) + ∑ j 2 − ( L ( H j ) + L ( F | H j ) ) P ( R | F ) = 2 − L ( F | R ) 2 − L ( F | R ) + ∑ j 2 − ( L ( H j ) + L ( F | H j ) ) {\displaystyle {\begin{aligned}P(H_{i}|F)&={\frac {2^{-(L(H_{i})+L(F|H_{i}))}}{2^{-L(F|R)}+\sum _{j}2^{-(L(H_{j})+L(F|H_{j}))}}}\\[8pt]P(R|F)&={\frac {2^{-L(F|R)}}{2^{-L(F|R)}+\sum _{j}{2^{-(L(H_{j})+L(F|H_{j}))}}}}\end{aligned}}} In most situations it is a good approximation to assume that F {\displaystyle F} is independent of R {\displaystyle R} , which means P ( F | R ) = P ( F ) {\displaystyle P(F|R)=P(F)} giving, P ( H i | F ) ≈ 2 − ( L ( H i ) + L ( F | H i ) ) 2 − L ( F ) + ∑ j 2 − ( L ( H j ) + L ( F | H j ) ) P ( R | F ) ≈ 2 − L ( F ) 2 − L ( F ) + ∑ j 2 − ( L ( H j ) + L ( F | H j ) ) {\displaystyle {\begin{aligned}P(H_{i}|F)&\approx {\frac {2^{-(L(H_{i})+L(F|H_{i}))}}{2^{-L(F)}+\sum _{j}{2^{-(L(H_{j})+L(F|H_{j}))}}}}\\[8pt]P(R|F)&\approx {\frac {2^{-L(F)}}{2^{-L(F)}+\sum _{j}{2^{-(L(H_{j})+L(F|H_{j}))}}}}\end{aligned}}} == Boolean inductive inference == Abductive inference starts with a set of facts F which is a statement (Boolean expression). Abductive reasoning is of the form, A theory T implies the statement F. As the theory T is simpler than F, abduction says that the facts F give a probability that the theory T is true.
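The information form above can be sketched numerically; the encoding lengths below are invented for illustration, and L(F|R) is approximated by L(F) as described above:

```python
def posteriors(L_F, hypotheses):
    """Posterior P(H_i|F) for hypotheses given as (L(H_i), L(F|H_i)) pairs in bits,
    plus P(R|F) for the catch-all hypothesis R that none of them is true."""
    weights = [2.0 ** -(L_H + L_FH) for L_H, L_FH in hypotheses]
    w_random = 2.0 ** -L_F                 # approximates L(F|R) by L(F)
    total = w_random + sum(weights)
    return [w / total for w in weights], w_random / total

# Two hypothetical hypotheses that compress 20 bits of facts to 5+8 and 10+6 bits:
post, p_none = posteriors(L_F=20, hypotheses=[(5, 8), (10, 6)])
print([round(p, 3) for p in post], round(p_none, 6))
assert abs(sum(post) + p_none - 1.0) < 1e-12   # the posteriors are normalized
```

As the text notes, this normalization is what turns raw compression evidence into a proper probability estimate.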
The theory T, also called an explanation of the condition F, is an answer to the ubiquitous factual "why" question. For example, take the condition F to be "apples fall". The answer to "Why do apples fall?" is a theory T that implies that apples fall; F = G m 1 m 2 r 2 {\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}}} Inductive inference is of the form, All observed objects in a class C have a property P. Therefore there is a probability that all objects in a class C have a property P. In terms of abductive inference, the statement "all objects in a class C have a property P" is a theory that implies the observed condition, All observed objects in a class C have a property P. So inductive inference is a special case of abductive inference. In common usage the term inductive inference is often used to refer to both abductive and inductive inference. === Generalization and specialization === Inductive inference is related to generalization. Generalizations may be formed from statements by replacing a specific value with membership of a category, or by replacing membership of a category with membership of a broader category. In deductive logic, generalization is a powerful method of generating new theories that may be true. In inductive inference generalization generates theories that have a probability of being true. The opposite of generalization is specialization. Specialization is used in applying a general rule to a specific case. Specializations are created from generalizations by replacing membership of a category with a specific value, or by replacing a category with a sub-category. The Linnaean classification of living things and objects forms the basis for generalization and specialization. The ability to identify, recognize and classify is the basis for generalization. Perceiving the world as a collection of objects appears to be a key aspect of human intelligence. It is the object-oriented model, in the non-computer-science sense. The object-oriented model is constructed from our perception.
In particular, vision is based on the ability to compare two images and calculate how much information is needed to morph or map one image into another. Computer vision uses this mapping to construct 3D images from stereo image pairs. Inductive logic programming is a means of constructing a theory that implies a condition. Plotkin's "relative least general generalization (rlgg)" approach constructs the simplest generalization consistent with the condition. === Newton's use of induction === Isaac Newton used inductive arguments in constructing his law of universal gravitation. Starting with the statement, The center of an apple falls towards the center of the Earth. Generalizing by replacing "apple" with "object", and "Earth" with another "object", gives, in a two-body system, The center of an object falls towards the center of another object. The theory explains all objects falling, so there is strong evidence for it. The second observation is that the planets appear to follow an elliptical path. After some calculus, it can be seen that if the acceleration follows the inverse square law then objects will follow an ellipse. So induction gives evidence for the inverse square law. Using Galileo's observation that all objects fall with the same acceleration, F 1 = m 1 a 1 = m 1 k 1 r 2 i 1 {\displaystyle F_{1}=m_{1}a_{1}={\frac {m_{1}k_{1}}{r^{2}}}i_{1}} F 2 = m 2 a 2 = m 2 k 2 r 2 i 2 {\displaystyle F_{2}=m_{2}a_{2}={\frac {m_{2}k_{2}}{r^{2}}}i_{2}} where i 1 {\displaystyle i_{1}} and i 2 {\displaystyle i_{2}} are vectors towards the center of the other object.
Then using Newton's third law F 1 = − F 2 {\displaystyle F_{1}=-F_{2}} F = G m 1 m 2 r 2 {\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}}} === Probabilities for inductive inference === Implication determines conditional probability as, T → F ⟺ P ( F | T ) = 1 {\displaystyle T\to F\iff P(F|T)=1} So, P ( F | T ) = 1 {\displaystyle P(F|T)=1} L ( F | T ) = 0 {\displaystyle L(F|T)=0} This result may be used in the probabilities given for Bayesian hypothesis testing. For a single theory, H = T and, P ( T | F ) = P ( T ) P ( F ) {\displaystyle P(T|F)={\frac {P(T)}{P(F)}}} or in terms of information, the relative probability is, P ( T | F ) = 2 − ( L ( T ) − L ( F ) ) {\displaystyle P(T|F)=2^{-(L(T)-L(F))}} Note that this estimate for P(T|F) is not a true probability. If L ( T i ) < L ( F ) {\displaystyle L(T_{i})<L(F)} then the theory has evidence to support it. Then for a set of theories T i = H i {\displaystyle T_{i}=H_{i}} , such that L ( T i ) < L ( F ) {\displaystyle L(T_{i})<L(F)} , P ( T i | F ) = P ( T i ) P ( F | R ) + ∑ j P ( T j ) {\displaystyle P(T_{i}|F)={\frac {P(T_{i})}{P(F|R)+\sum _{j}{P(T_{j})}}}} P ( R | F ) = P ( F | R ) P ( F | R ) + ∑ j P ( T j ) {\displaystyle P(R|F)={\frac {P(F|R)}{P(F|R)+\sum _{j}{P(T_{j})}}}} giving, P ( T i | F ) ≈ 2 − L ( T i ) 2 − L ( F ) + ∑ j 2 − L ( T j ) {\displaystyle P(T_{i}|F)\approx {\frac {2^{-L(T_{i})}}{2^{-L(F)}+\sum _{j}{2^{-L(T_{j})}}}}} P ( R | F ) ≈ 2 − L ( F ) 2 − L ( F ) + ∑ j 2 − L ( T j ) {\displaystyle P(R|F)\approx {\frac {2^{-L(F)}}{2^{-L(F)}+\sum _{j}{2^{-L(T_{j})}}}}} == Derivations == === Derivation of inductive probability === Make a list of all the shortest programs K i {\displaystyle K_{i}} that each produce a distinct infinite string of bits, and satisfy the relation, T n ( R ( K i ) ) = x {\displaystyle T_{n}(R(K_{i}))=x} where R ( K i ) {\displaystyle R(K_{i})} is the result of running the program K i {\displaystyle K_{i}} and T n {\displaystyle T_{n}} truncates the string after n bits.
The problem is to calculate the probability that the source is produced by program K i , {\displaystyle K_{i},} given that the truncated source after n bits is x. This is represented by the conditional probability, P ( s = R ( K i ) | T n ( s ) = x ) {\displaystyle P(s=R(K_{i})|T_{n}(s)=x)} Using the extended form of Bayes' theorem P ( s = R ( K i ) | T n ( s ) = x ) = P ( T n ( s ) = x | s = R ( K i ) ) P ( s = R ( K i ) ) ∑ j P ( T n ( s ) = x | s = R ( K j ) ) P ( s = R ( K j ) ) . {\displaystyle P(s=R(K_{i})|T_{n}(s)=x)={\frac {P(T_{n}(s)=x|s=R(K_{i}))P(s=R(K_{i}))}{\sum _{j}P(T_{n}(s)=x|s=R(K_{j}))P(s=R(K_{j}))}}.} The extended form relies on the law of total probability. This means that the s = R ( K i ) {\displaystyle s=R(K_{i})} must be distinct possibilities, which is given by the condition that each K i {\displaystyle K_{i}} produces a different infinite string. Also one of the conditions s = R ( K i ) {\displaystyle s=R(K_{i})} must be true. This must be true, as in the limit as n → ∞ , {\displaystyle n\to \infty ,} there is always at least one program that produces T n ( s ) {\displaystyle T_{n}(s)} . As K i {\displaystyle K_{i}} are chosen so that T n ( R ( K i ) ) = x , {\displaystyle T_{n}(R(K_{i}))=x,} then, P ( T n ( s ) = x | s = R ( K i ) ) = 1 {\displaystyle P(T_{n}(s)=x|s=R(K_{i}))=1} The a priori probability of the string being produced from the program, given no information about the string, is based on the size of the program, P ( s = R ( K i ) ) = 2 − I ( K i ) {\displaystyle P(s=R(K_{i}))=2^{-I(K_{i})}} giving, P ( s = R ( K i ) | T n ( s ) = x ) = 2 − I ( K i ) ∑ j 2 − I ( K j ) . {\displaystyle P(s=R(K_{i})|T_{n}(s)=x)={\frac {2^{-I(K_{i})}}{\sum _{j}2^{-I(K_{j})}}}.} Programs that are the same or longer than the length of x provide no predictive power. Separate them out giving, P ( s = R ( K i ) | T n ( s ) = x ) = 2 − I ( K i ) ∑ j : I ( K j ) < n 2 − I ( K j ) + ∑ j : I ( K j ) ⩾ n 2 − I ( K j ) .
{\displaystyle P(s=R(K_{i})|T_{n}(s)=x)={\frac {2^{-I(K_{i})}}{\sum _{j:I(K_{j})<n}2^{-I(K_{j})}+\sum _{j:I(K_{j})\geqslant n}2^{-I(K_{j})}}}.} Then identify the two probabilities as, P ( x has pattern ) = ∑ j : I ( K j ) < n 2 − I ( K j ) {\displaystyle P(x{\text{ has pattern}})=\sum _{j:I(K_{j})<n}2^{-I(K_{j})}} P ( x is random ) = ∑ j : I ( K j ) ⩾ n 2 − I ( K j ) {\displaystyle P(x{\text{ is random}})=\sum _{j:I(K_{j})\geqslant n}2^{-I(K_{j})}} But the prior probability that x is a random set of bits is 2 − n {\displaystyle 2^{-n}} . So, P ( s = R ( K i ) | T n ( s ) = x ) = 2 − I ( K i ) 2 − n + ∑ j : I ( K j ) < n 2 − I ( K j ) . {\displaystyle P(s=R(K_{i})|T_{n}(s)=x)={\frac {2^{-I(K_{i})}}{2^{-n}+\sum _{j:I(K_{j})<n}2^{-I(K_{j})}}}.} The probability that the source is random, or unpredictable, is, P ( random ⁡ ( s ) | T n ( s ) = x ) = 2 − n 2 − n + ∑ j : I ( K j ) < n 2 − I ( K j ) . {\displaystyle P(\operatorname {random} (s)|T_{n}(s)=x)={\frac {2^{-n}}{2^{-n}+\sum _{j:I(K_{j})<n}2^{-I(K_{j})}}}.} === A model for inductive inference === A model of how worlds are constructed is used in determining the probabilities of theories:

A random bit string is selected.
A condition is constructed from the bit string.
A world is constructed that is consistent with the condition.

If w is the bit string then the world is created such that R ( w ) {\displaystyle R(w)} is true. An intelligent agent has some facts about the world, represented by the bit string c, which gives the condition, C = R ( c ) {\displaystyle C=R(c)} The set of bit strings identical with any condition x is E ( x ) {\displaystyle E(x)} . ∀ x , E ( x ) = { w : R ( w ) ≡ x } {\displaystyle \forall x,E(x)=\{w:R(w)\equiv x\}} A theory is a simpler condition that explains (or implies) C.
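As a toy illustration of the derivation of inductive probability above, the final formula can be computed directly; the program lengths I(K_i) below are invented for illustration:

```python
def source_posterior(n, program_lengths):
    """Posterior P(s = R(K_i) | T_n(s) = x) for candidate programs of I(K_i) bits
    that reproduce the observed n-bit prefix, plus P(source is random)."""
    # Programs as long as the data or longer carry no predictive power.
    weights = [2.0 ** -I for I in program_lengths if I < n]
    p_random = 2.0 ** -n              # prior that x is a random set of bits
    total = p_random + sum(weights)
    return [w / total for w in weights], p_random / total

# Two hypothetical programs of 3 and 5 bits reproducing a 10-bit observed prefix:
posterior, p_random = source_posterior(10, [3, 5])
print([round(p, 4) for p in posterior], round(p_random, 6))
assert abs(sum(posterior) + p_random - 1.0) < 1e-12
```

The shorter program dominates the mixture, which is the quantitative content of Occam's razor here.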
The set of all such theories is called T, T ( C ) = { t : t → C } {\displaystyle T(C)=\{t:t\to C\}} ==== Applying Bayes' theorem ==== The extended form of Bayes' theorem may be applied, P ( A i | B ) = P ( B | A i ) P ( A i ) ∑ j P ( B | A j ) P ( A j ) , {\displaystyle P(A_{i}|B)={\frac {P(B|A_{i})\,P(A_{i})}{\sum _{j}P(B|A_{j})\,P(A_{j})}},} where, B = E ( C ) {\displaystyle B=E(C)} A i = E ( t ) {\displaystyle A_{i}=E(t)} To apply Bayes' theorem the following must hold: A i {\displaystyle A_{i}} is a partition of the event space. For T ( C ) {\displaystyle T(C)} to be a partition, no bit string n may belong to two theories. To prove this assume they can and derive a contradiction, ( N ∈ T ) ∧ ( M ∈ T ) ∧ ( N ≠ M ) ∧ ( n ∈ E ( N ) ∧ n ∈ E ( M ) ) {\displaystyle (N\in T)\land (M\in T)\land (N\neq M)\land (n\in E(N)\land n\in E(M))} ⟹ ( N ≠ M ) ∧ R ( n ) ≡ N ∧ R ( n ) ≡ M {\displaystyle \implies (N\neq M)\land R(n)\equiv N\land R(n)\equiv M} ⟹ ⊥ {\displaystyle \implies \bot } Secondly, prove that T includes all outcomes consistent with the condition. As all theories consistent with C are included, R ( w ) {\displaystyle R(w)} must be in this set.
So Bayes theorem may be applied as specified giving, ∀ t ∈ T ( C ) , P ( E ( t ) | E ( C ) ) = P ( E ( t ) ) ⋅ P ( E ( C ) | E ( t ) ) ∑ j ∈ T ( C ) P ( E ( j ) ) ⋅ P ( E ( C ) | E ( j ) ) {\displaystyle \forall t\in T(C),P(E(t)|E(C))={\frac {P(E(t))\cdot P(E(C)|E(t))}{\sum _{j\in T(C)}P(E(j))\cdot P(E(C)|E(j))}}} Using the implication and condition probability law, the definition of T ( C ) {\displaystyle T(C)} implies, ∀ t ∈ T ( C ) , P ( E ( C ) | E ( t ) ) = 1 {\displaystyle \forall t\in T(C),P(E(C)|E(t))=1} The probability of each theory in T is given by, ∀ t ∈ T ( C ) , P ( E ( t ) ) = ∑ n : R ( n ) ≡ t 2 − L ( n ) {\displaystyle \forall t\in T(C),P(E(t))=\sum _{n:R(n)\equiv t}2^{-L(n)}} so, ∀ t ∈ T ( C ) , P ( E ( t ) | E ( C ) ) = ∑ n : R ( n ) ≡ t 2 − L ( n ) ∑ j ∈ T ( C ) ∑ m : R ( m ) ≡ j 2 − L ( m ) {\displaystyle \forall t\in T(C),P(E(t)|E(C))={\frac {\sum _{n:R(n)\equiv t}2^{-L(n)}}{\sum _{j\in T(C)}\sum _{m:R(m)\equiv j}2^{-L(m)}}}} Finally the probabilities of the events may be identified with the probabilities of the condition which the outcomes in the event satisfy, ∀ t ∈ T ( C ) , P ( E ( t ) | E ( C ) ) = P ( t | C ) {\displaystyle \forall t\in T(C),P(E(t)|E(C))=P(t|C)} giving ∀ t ∈ T ( C ) , P ( t | C ) = ∑ n : R ( n ) ≡ t 2 − L ( n ) ∑ j ∈ T ( C ) ∑ m : R ( m ) ≡ j 2 − L ( m ) {\displaystyle \forall t\in T(C),P(t|C)={\frac {\sum _{n:R(n)\equiv t}2^{-L(n)}}{\sum _{j\in T(C)}\sum _{m:R(m)\equiv j}2^{-L(m)}}}} This is the probability of the theory t after observing that the condition C holds. ==== Removing theories without predictive power ==== Theories that are less probable than the condition C have no predictive power. 
Separate them out giving, ∀ t ∈ T ( C ) , P ( t | C ) = P ( E ( t ) ) ( ∑ j : j ∈ T ( C ) ∧ P ( E ( j ) ) > P ( E ( C ) ) P ( E ( j ) ) ) + ( ∑ j : j ∈ T ( C ) ∧ P ( E ( j ) ) ≤ P ( E ( C ) ) P ( E ( j ) ) ) {\displaystyle \forall t\in T(C),P(t|C)={\frac {P(E(t))}{(\sum _{j:j\in T(C)\land P(E(j))>P(E(C))}P(E(j)))+(\sum _{j:j\in T(C)\land P(E(j))\leq P(E(C))}P(E(j)))}}} The probability of the theories without predictive power on C is the same as the probability of C. So, P ( E ( C ) ) = ∑ j : j ∈ T ( C ) ∧ P ( E ( j ) ) ≤ P ( E ( C ) ) P ( E ( j ) ) {\displaystyle P(E(C))=\sum _{j:j\in T(C)\land P(E(j))\leq P(E(C))}P(E(j))} So the probability is, ∀ t ∈ T ( C ) , P ( t | C ) = P ( E ( t ) ) P ( E ( C ) ) + ∑ j : j ∈ T ( C ) ∧ P ( E ( j ) ) > P ( E ( C ) ) P ( E ( j ) ) {\displaystyle \forall t\in T(C),P(t|C)={\frac {P(E(t))}{P(E(C))+\sum _{j:j\in T(C)\land P(E(j))>P(E(C))}P(E(j))}}} and the probability of no prediction for C, written as random ⁡ ( C ) {\displaystyle \operatorname {random} (C)} , P ( random ( C ) | C ) = P ( E ( C ) ) P ( E ( C ) ) + ∑ j : j ∈ T ( C ) ∧ P ( E ( j ) ) > P ( E ( C ) ) P ( E ( j ) ) {\displaystyle P({\text{random}}(C)|C)={\frac {P(E(C))}{P(E(C))+\sum _{j:j\in T(C)\land P(E(j))>P(E(C))}P(E(j))}}} The probability of a condition was given as, ∀ t , P ( E ( t ) ) = ∑ n : R ( n ) ≡ t 2 − L ( n ) {\displaystyle \forall t,P(E(t))=\sum _{n:R(n)\equiv t}2^{-L(n)}} Bit strings for theories that are more complex than the bit string given to the agent as input have no predictive power. Their probabilities are better included in the random case.
To implement this, a new definition F is given, ∀ t , P ( F ( t , c ) ) = ∑ n : R ( n ) ≡ t ∧ L ( n ) < L ( c ) 2 − L ( n ) {\displaystyle \forall t,P(F(t,c))=\sum _{n:R(n)\equiv t\land L(n)<L(c)}2^{-L(n)}} Using F, an improved version of the abductive probabilities is, ∀ t ∈ T ( C ) , P ( t | C ) = P ( F ( t , c ) ) P ( F ( C , c ) ) + ∑ j : j ∈ T ( C ) ∧ P ( F ( j , c ) ) > P ( F ( C , c ) ) P ( F ( j , c ) ) {\displaystyle \forall t\in T(C),P(t|C)={\frac {P(F(t,c))}{P(F(C,c))+\sum _{j:j\in T(C)\land P(F(j,c))>P(F(C,c))}P(F(j,c))}}} P ( random ⁡ ( C ) | C ) = P ( F ( C , c ) ) P ( F ( C , c ) ) + ∑ j : j ∈ T ( C ) ∧ P ( F ( j , c ) ) > P ( F ( C , c ) ) P ( F ( j , c ) ) {\displaystyle P(\operatorname {random} (C)|C)={\frac {P(F(C,c))}{P(F(C,c))+\sum _{j:j\in T(C)\land P(F(j,c))>P(F(C,c))}P(F(j,c))}}} == Key people ==

William of Ockham
Thomas Bayes
Ray Solomonoff
Andrey Kolmogorov
Chris Wallace
D. M. Boulton
Jorma Rissanen
Marcus Hutter

== See also ==

Abductive reasoning
Algorithmic probability
Algorithmic information theory
Bayesian inference
Information theory
Inductive inference
Inductive logic programming
Inductive reasoning
Learning
Minimum message length
Minimum description length
Occam's razor
Solomonoff's theory of inductive inference
Universal artificial intelligence

== References == == External links == Rathmanner, S and Hutter, M., "A Philosophical Treatise of Universal Induction" in Entropy 2011, 13, 1076–1136: A very clear philosophical and mathematical analysis of Solomonoff's Theory of Inductive Inference. C.S. Wallace, Statistical and Inductive Inference by Minimum Message Length, Springer-Verlag (Information Science and Statistics), ISBN 0-387-23795-X, May 2005 – chapter headings, table of contents and sample pages.
Predictable process
In stochastic analysis, a part of the mathematical theory of probability, a predictable process is a stochastic process whose value is knowable at a prior time. The predictable processes form the smallest class that is closed under taking limits of sequences and contains all adapted left-continuous processes. == Mathematical definition == === Discrete-time process === Given a filtered probability space ( Ω , F , ( F n ) n ∈ N , P ) {\displaystyle (\Omega ,{\mathcal {F}},({\mathcal {F}}_{n})_{n\in \mathbb {N} },\mathbb {P} )} , then a stochastic process ( X n ) n ∈ N {\displaystyle (X_{n})_{n\in \mathbb {N} }} is predictable if X n + 1 {\displaystyle X_{n+1}} is measurable with respect to the σ-algebra F n {\displaystyle {\mathcal {F}}_{n}} for each n. === Continuous-time process === Given a filtered probability space ( Ω , F , ( F t ) t ≥ 0 , P ) {\displaystyle (\Omega ,{\mathcal {F}},({\mathcal {F}}_{t})_{t\geq 0},\mathbb {P} )} , then a continuous-time stochastic process ( X t ) t ≥ 0 {\displaystyle (X_{t})_{t\geq 0}} is predictable if X {\displaystyle X} , considered as a mapping from Ω × R + {\displaystyle \Omega \times \mathbb {R} _{+}} , is measurable with respect to the σ-algebra generated by all left-continuous adapted processes. This σ-algebra is also called the predictable σ-algebra. == Examples == Every deterministic process is a predictable process. Every continuous-time adapted process that is left continuous is a predictable process. == See also == Adapted process Martingale == References ==
ALOPEX
ALOPEX (an abbreviation of "algorithms of pattern extraction") is a correlation based machine learning algorithm first proposed by Tzanakou and Harth in 1974. == Principle == In machine learning, the goal is to train a system to minimize a cost function or (referring to ALOPEX) a response function. Many training algorithms, such as backpropagation, have an inherent susceptibility to getting "stuck" in local minima or maxima of the response function. ALOPEX uses a cross-correlation of differences and a stochastic process to overcome this in an attempt to reach the absolute minimum (or maximum) of the response function. == Method == ALOPEX, in its simplest form is defined by an updating equation: Δ W i j ( n ) = γ Δ W i j ( n − 1 ) Δ R ( n ) + r i ( n ) {\displaystyle \Delta \ W_{ij}(n)=\gamma \ \Delta \ W_{ij}(n-1)\Delta \ R(n)+r_{i}(n)} where: n ≥ 0 {\displaystyle n\geq 0} is the iteration or time-step. Δ W i j ( n ) {\displaystyle \Delta \ W_{ij}(n)} is the difference between the current and previous value of system variable W i j {\displaystyle \ W_{ij}} at iteration n {\displaystyle n} . Δ R ( n ) {\displaystyle \Delta \ R(n)} is the difference between the current and previous value of the response function R , {\displaystyle \ R,} at iteration n {\displaystyle n} . γ {\displaystyle \gamma } is the learning rate parameter ( γ < 0 {\displaystyle (\gamma \ <0} minimizes R , {\displaystyle R,} and γ > 0 {\displaystyle \gamma \ >0} maximizes R ) {\displaystyle R\ )} r i ( n ) ∼ N ( 0 , σ 2 ) {\displaystyle r_{i}(n)\sim \ N(0,\sigma \ ^{2})} == Discussion == Essentially, ALOPEX changes each system variable W i j ( n ) {\displaystyle W_{ij}(n)} based on a product of: the previous change in the variable Δ {\displaystyle \Delta } W i j ( n − 1 ) {\displaystyle W_{ij}(n-1)} , the resulting change in the cost function Δ {\displaystyle \Delta } R ( n ) {\displaystyle R(n)} , and the learning rate parameter γ {\displaystyle \gamma } . 
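A minimal sketch of this update for a single weight, using an invented quadratic response R(w) = w² and invented parameter values:

```python
import random

random.seed(1)
gamma, sigma = -0.5, 0.02     # gamma < 0 minimizes the response
w, dw = 3.0, 0.1              # weight and its previous change
R_prev = None

for _ in range(5000):
    R = w * w                 # response function evaluated at the current weight
    if R_prev is not None:
        # ALOPEX update: dW(n) = gamma * dW(n-1) * dR(n) + Gaussian noise
        dw = gamma * dw * (R - R_prev) + random.gauss(0.0, sigma)
    R_prev = R
    w += dw

print(w)                      # the correlation term biases the walk towards w = 0
```

The product of the previous change and the resulting change in R gives the downhill bias; the Gaussian term supplies the stochastic exploration discussed next.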
Further, to find the absolute minimum (or maximum), the stochastic process r i j ( n ) {\displaystyle r_{ij}(n)} (Gaussian or other) is added to stochastically "push" the algorithm out of any local minima. == References == Harth, E., & Tzanakou, E. (1974) Alopex: A stochastic method for determining visual receptive fields. Vision Research, 14:1475-1482. Abstract from ScienceDirect
Excursion probability
In probability theory, an excursion probability is the probability that a stochastic process surpasses a given value in a fixed time period. It is the probability P{ sup_{t ∈ T} f(t) ≥ u }. Numerous approximation methods for the situation where u is large and f(t) is a Gaussian process have been proposed, such as Rice's formula. First-excursion probabilities can be used to describe the deflection to a critical point experienced by structures during "random loadings, such as earthquakes, strong gusts, hurricanes, etc." == References ==
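For intuition, a probability of this form can be approximated by Monte Carlo simulation. The sketch below uses a discrete-time Gaussian random walk as the process f (an illustrative choice, not one from the article), and estimates how often its running supremum exceeds the threshold u.

```python
import random

def excursion_probability(u, steps=50, trials=5000, seed=1):
    """Monte Carlo estimate of P{ sup_t f(t) >= u } where f is a
    standard Gaussian random walk observed at `steps` discrete time
    points (f(0) = 0 is included in the supremum)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        f, sup_f = 0.0, 0.0
        for _ in range(steps):
            f += rng.gauss(0.0, 1.0)
            sup_f = max(sup_f, f)
        if sup_f >= u:
            hits += 1
    return hits / trials
```

Because each call replays the same simulated paths (fixed seed), the estimate is exactly non-increasing in the threshold u.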
Statistics
Statistics (from German: Statistik, orig. "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments. When census data (comprising every member of the target population) cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Two main statistical methods are used in data analysis: descriptive statistics, which summarize data from a sample using indexes such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation (e.g., observational errors, sampling variation). Descriptive statistics are most often concerned with two sets of properties of a distribution (sample or population): central tendency (or location) seeks to characterize the distribution's central or typical value, while dispersion (or variability) characterizes the extent to which members of the distribution depart from its center and each other. 
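The two families of descriptive statistics just mentioned, central tendency and dispersion, can be computed directly with Python's standard library; the sample values below are arbitrary.

```python
import statistics

sample = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# Central tendency: the distribution's central or typical value.
mean = statistics.mean(sample)       # arithmetic mean -> 5.0
median = statistics.median(sample)   # middle value    -> 4.5

# Dispersion: how far members depart from the center and each other.
pop_sd = statistics.pstdev(sample)   # population standard deviation -> 2.0
samp_sd = statistics.stdev(sample)   # sample standard deviation (n - 1 divisor)
```

Note the two standard deviations differ: `stdev` uses the n − 1 divisor appropriate when the data are a sample from a larger population.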
Inferences made using mathematical statistics employ the framework of probability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves the collection of data leading to a test of the relationship between two statistical data sets, or a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, as an alternative to an idealized null hypothesis of no relationship between two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false, given the data that are used in the test. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (null hypothesis is rejected when it is in fact true, giving a "false positive") and Type II errors (null hypothesis fails to be rejected when it is in fact false, giving a "false negative"). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Statistical measurement processes are also prone to error with regard to the data that they generate. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also occur. The presence of missing data or censoring may result in biased estimates, and specific techniques have been developed to address these problems. == Introduction == "Statistics is both the science of uncertainty and the technology of extracting information from data." - featured in the International Encyclopedia of Statistical Science. Statistics is the discipline that deals with data, facts and figures with which meaningful information is inferred. Data may represent a numerical value, in the form of quantitative data, or a label, as with qualitative data.
Data may be collected, presented, and summarised; these tasks constitute descriptive statistics. Two elementary summaries of data, each singly called a statistic, are the mean and the dispersion. Inferential statistics, by contrast, interprets data from a population sample to induce statements and predictions about the population. Statistics is regarded as a body of science or a branch of mathematics. It is based on probability, a branch of mathematics that studies random events. Statistics is considered the science of uncertainty. This arises from the ways of coping with measurement and sampling error, as well as from dealing with uncertainties in modelling. Although probability and statistics were once paired together as a single subject, they are conceptually distinct from one another. The former deduces answers to specific situations from a general theory of probability, while statistics induces statements about a population based on a data set. Statistics serves to bridge the gap between probability and applied mathematical fields. Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is generally concerned with the use of data in the context of uncertainty and decision-making in the face of uncertainty. Statistics is indexed at 62, a subclass of probability theory and stochastic processes, in the Mathematics Subject Classification. Mathematical statistics is covered in the range 276-280 of subclass QA (science > mathematics) in the Library of Congress Classification. The word statistics ultimately comes from the Latin word status, meaning "situation" or "condition" in society, which in late Latin adopted the meaning "state". Derived from this, the political scientist Gottfried Achenwall coined the German word Statistik (a summary of how things stand).
In 1770, the term entered the English language through German and referred to the study of political arrangements. The term gained its modern meaning in the 1790s in John Sinclair's works. In modern German, the term Statistik is synonymous with mathematical statistics. The term statistic, in singular form, describes a quantity computed from sample data and, by extension, the function that computes it. == Statistical data == === Data collection === ==== Sampling ==== When full census data cannot be collected, statisticians collect sample data by developing specific experiment designs and survey samples. Statistics itself also provides tools for prediction and forecasting through statistical models. To use a sample as a guide to an entire population, it is important that it truly represents the overall population. Representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. A major problem lies in determining the extent to which the sample chosen is actually representative. Statistics offers methods to estimate and correct for any bias within the sample and data collection procedures. There are also methods of experimental design that can lessen these issues at the outset of a study, strengthening its capability to discern truths about the population. Sampling theory is part of the mathematical discipline of probability theory. Probability is used in mathematical statistics to study the sampling distributions of sample statistics and, more generally, the properties of statistical procedures. The use of any statistical method is valid when the system or population under consideration satisfies the assumptions of the method. The difference in point of view between classic probability theory and sampling theory is, roughly, that probability theory starts from the given parameters of a total population to deduce probabilities that pertain to samples.
Statistical inference, however, moves in the opposite direction—inductively inferring from samples to the parameters of a larger or total population. ==== Experimental and observational studies ==== A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion on the effect of changes in the values of predictors or independent variables on dependent variables. There are two major types of causal statistical studies: experimental studies and observational studies. In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable are observed. The difference between the two types lies in how the study is actually conducted. Each can be very effective. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements with different levels using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Instead, data are gathered and correlations between predictors and response are investigated. While the tools of data analysis work best on data from randomized studies, they are also applied to other kinds of data—like natural experiments and observational studies—for which a statistician would use a modified, more structured estimation method (e.g., difference in differences estimation and instrumental variables, among many others) that produce consistent estimators. ===== Experiments ===== The basic steps of a statistical experiment are: Planning the research, including finding the number of replicates of the study, using the following information: preliminary estimates regarding the size of treatment effects, alternative hypotheses, and the estimated experimental variability. 
Consideration of the selection of experimental subjects and the ethics of research is necessary. Statisticians recommend that experiments compare (at least) one new treatment with a standard treatment or control, to allow an unbiased estimate of the difference in treatment effects. Design of experiments, using blocking to reduce the influence of confounding variables, and randomized assignment of treatments to subjects to allow unbiased estimates of treatment effects and experimental error. At this stage, the experimenters and statisticians write the experimental protocol that will guide the performance of the experiment and which specifies the primary analysis of the experimental data. Performing the experiment following the experimental protocol and analyzing the data following the experimental protocol. Further examining the data set in secondary analyses, to suggest new hypotheses for future study. Documenting and presenting the results of the study. Experiments on human behavior have special concerns. The famous Hawthorne study examined changes to the working environment at the Hawthorne plant of the Western Electric Company. The researchers were interested in determining whether increased illumination would increase the productivity of the assembly line workers. The researchers first measured the productivity in the plant, then modified the illumination in an area of the plant and checked if the changes in illumination affected productivity. It turned out that productivity indeed improved (under the experimental conditions). However, the study is heavily criticized today for errors in experimental procedures, specifically for the lack of a control group and blindness. The Hawthorne effect refers to finding that an outcome (in this case, worker productivity) changed due to observation itself. Those in the Hawthorne study became more productive not because the lighting was changed but because they were being observed. 
===== Observational study ===== An example of an observational study is one that explores the association between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers, perhaps through a cohort study, and then look for the number of cases of lung cancer in each group. A case-control study is another type of observational study in which people with and without the outcome of interest (e.g. lung cancer) are invited to participate and their exposure histories are collected. === Types of data === Various attempts have been made to produce a taxonomy of levels of measurement. The psychophysicist Stanley Smith Stevens defined nominal, ordinal, interval, and ratio scales. Nominal measurements do not have meaningful rank order among values, and permit any one-to-one (injective) transformation. Ordinal measurements have imprecise differences between consecutive values, but have a meaningful order to those values, and permit any order-preserving transformation. Interval measurements have meaningful distances between measurements defined, but the zero value is arbitrary (as in the case with longitude and temperature measurements in Celsius or Fahrenheit), and permit any linear transformation. Ratio measurements have both a meaningful zero value and the distances between different measurements defined, and permit any rescaling transformation. Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, sometimes they are grouped together as categorical variables, whereas ratio and interval measurements are grouped together as quantitative variables, which can be either discrete or continuous, due to their numerical nature. 
Such distinctions can often be loosely correlated with data type in computer science, in that dichotomous categorical variables may be represented with the Boolean data type, polytomous categorical variables with arbitrarily assigned integers in the integral data type, and continuous variables with the real data type involving floating-point arithmetic. But the mapping of computer science data types to statistical data types depends on which categorization of the latter is being implemented. Other categorizations have been proposed. For example, Mosteller and Tukey (1977) distinguished grades, ranks, counted fractions, counts, amounts, and balances. Nelder (1990) described continuous counts, continuous ratios, count ratios, and categorical modes of data. (See also: Chrisman (1998), van den Berg (1991).) The issue of whether or not it is appropriate to apply different kinds of statistical methods to data obtained from different kinds of measurement procedures is complicated by issues concerning the transformation of variables and the precise interpretation of research questions. "The relationship between the data and what they describe merely reflects the fact that certain kinds of statistical statements may have truth values which are not invariant under some transformations. Whether or not a transformation is sensible to contemplate depends on the question one is trying to answer.": 82  == Methods == === Descriptive statistics === A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent. 
=== Inferential statistics === Statistical inference is the process of using data analysis to deduce properties of an underlying probability distribution. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population. Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population. ==== Terminology and theory of inferential statistics ==== ===== Statistics, estimators and pivotal quantities ===== Consider independent identically distributed (IID) random variables with a given probability distribution: standard statistical inference and estimation theory defines a random sample as the random vector given by the column vector of these IID variables. The population being examined is described by a probability distribution that may have unknown parameters. A statistic is a random variable that is a function of the random sample, but not a function of unknown parameters. The probability distribution of the statistic, though, may have unknown parameters. Consider now a function of the unknown parameter: an estimator is a statistic used to estimate such function. Commonly used estimators include sample mean, unbiased sample variance and sample covariance. A random variable that is a function of the random sample and of the unknown parameter, but whose probability distribution does not depend on the unknown parameter is called a pivotal quantity or pivot. Widely used pivots include the z-score, the chi square statistic and Student's t-value. Between two estimators of a given parameter, the one with lower mean squared error is said to be more efficient. 
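A minimal sketch of these notions, the sample mean, the unbiased sample variance, and the z-score pivot, with hypothetical function names:

```python
import math

def sample_mean(xs):
    return sum(xs) / len(xs)

def unbiased_sample_variance(xs):
    # Bessel's correction: divide by n - 1 so the expected value of this
    # estimator equals the true population variance.
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def z_score(xbar, mu, sigma, n):
    # Pivotal quantity: for IID N(mu, sigma^2) data with known sigma,
    # this statistic is N(0, 1) regardless of the unknown mu.
    return (xbar - mu) / (sigma / math.sqrt(n))
```

Note that `z_score` depends on the unknown parameter mu, so it is a pivot rather than a statistic; `sample_mean` and `unbiased_sample_variance` depend only on the data, so they are statistics.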
Furthermore, an estimator is said to be unbiased if its expected value is equal to the true value of the unknown parameter being estimated, and asymptotically unbiased if its expected value converges in the limit to the true value of such parameter. Other desirable properties for estimators include: UMVUE estimators, which have the lowest variance for all possible values of the parameter to be estimated (this is usually an easier property to verify than efficiency), and consistent estimators, which converge in probability to the true value of such parameter. This still leaves the question of how to obtain estimators in a given situation and carry out the computation; several methods have been proposed: the method of moments, the maximum likelihood method, the least squares method and the more recent method of estimating equations. ===== Null hypothesis and alternative hypothesis ===== Interpretation of statistical information can often involve the development of a null hypothesis, which is usually (but not necessarily) that no relationship exists among variables or that no change occurred over time. The alternative hypothesis is the name of the hypothesis that contradicts the null hypothesis. The best illustration for a novice is the predicament encountered by a criminal trial. The null hypothesis, H0, asserts that the defendant is innocent, whereas the alternative hypothesis, H1, asserts that the defendant is guilty. The indictment comes because of suspicion of the guilt. The H0 (the status quo) stands in opposition to H1 and is maintained unless H1 is supported by evidence "beyond a reasonable doubt". However, "failure to reject H0" in this case does not imply innocence, but merely that the evidence was insufficient to convict. So the jury does not necessarily accept H0 but fails to reject H0. While one cannot "prove" a null hypothesis, one can test how close it is to being true with a power test, which tests for type II errors.
===== Error ===== Working from a null hypothesis, two broad categories of error are recognized: Type I errors, where the null hypothesis is falsely rejected, giving a "false positive", and Type II errors, where the null hypothesis fails to be rejected and an actual difference between populations is missed, giving a "false negative". Standard deviation refers to the extent to which individual observations in a sample differ from a central value, such as the sample or population mean, while standard error refers to an estimate of the difference between the sample mean and the population mean. A statistical error is the amount by which an observation differs from its expected value. A residual is the amount an observation differs from the value the estimator of the expected value assumes on a given sample (also called prediction). Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error. Many statistical methods seek to minimize the residual sum of squares, and these are called "methods of least squares" in contrast to least absolute deviations. The latter gives equal weight to small and big errors, while the former gives more weight to large errors. Residual sum of squares is also differentiable, which provides a handy property for doing regression. Least squares applied to linear regression is called the ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also, in a linear regression model, the non-deterministic part of the model is called the error term, disturbance or, more simply, noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve.
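Least squares in its simplest setting, ordinary least squares for a straight line, has a closed form. The sketch below (hypothetical names) minimizes the residual sum of squares for y = a + b·x:

```python
def ols_fit(xs, ys):
    """Ordinary least squares fit of y = a + b*x, minimizing the
    residual sum of squares."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

def residuals(xs, ys, a, b):
    # Residual: observed value minus the fitted (predicted) value.
    return [y - (a + b * x) for x, y in zip(xs, ys)]
```

For data lying exactly on a line, the fit recovers the intercept and slope and every residual is zero; for noisy data, the residuals carry the "error term" discussed above.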
Measurement processes that generate statistical data are also subject to error. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also be important. The presence of missing data or censoring may result in biased estimates and specific techniques have been developed to address these problems. ===== Interval estimation ===== Most studies only sample part of a population, so results do not fully represent the whole population. Any estimates obtained from the sample only approximate the population value. Confidence intervals allow statisticians to express how closely the sample estimate matches the true value in the whole population. Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%. From the frequentist perspective, such a claim does not even make sense, as the true value is not a random variable. Either the true value is or is not within the given interval. However, it is true that, before any data are sampled and given a plan for how to construct the confidence interval, the probability is 95% that the yet-to-be-calculated interval will cover the true value: at this point, the limits of the interval are yet-to-be-observed random variables. One approach that does yield an interval that can be interpreted as having a given probability of containing the true value is to use a credible interval from Bayesian statistics: this approach depends on a different way of interpreting what is meant by "probability", that is as a Bayesian probability. 
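The repeated-sampling interpretation can be checked by simulation. The sketch below builds the textbook 95% z-interval (known sigma) and counts how often it covers the true mean over many repeated experiments; all names and parameter values are illustrative.

```python
import math
import random

def ci95(sample, sigma):
    """95% z-interval for the mean when sigma is known."""
    n = len(sample)
    xbar = sum(sample) / n
    half = 1.96 * sigma / math.sqrt(n)
    return xbar - half, xbar + half

def coverage(mu=10.0, sigma=2.0, n=30, repeats=2000, seed=42):
    """Fraction of repeated experiments whose interval covers the true
    mean mu; it should come out close to 0.95."""
    rng = random.Random(seed)
    covered = 0
    for _ in range(repeats):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        lo, hi = ci95(sample, sigma)
        covered += (lo <= mu <= hi)
    return covered / repeats
```

Each individual interval either does or does not contain mu; it is the long-run fraction of intervals that is close to 95%, exactly as described above.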
In principle confidence intervals can be symmetrical or asymmetrical. An interval can be asymmetrical because it works as lower or upper bound for a parameter (left-sided interval or right sided interval), but it can also be asymmetrical because the two sided interval is built violating symmetry around the estimate. Sometimes the bounds for a confidence interval are reached asymptotically and these are used to approximate the true bounds. ===== Significance ===== Statistics rarely give a simple Yes/No type answer to the question under analysis. Interpretation often comes down to the level of statistical significance applied to the numbers and often refers to the probability of a value accurately rejecting the null hypothesis (sometimes referred to as the p-value). The standard approach is to test a null hypothesis against an alternative hypothesis. A critical region is the set of values of the estimator that leads to refuting the null hypothesis. The probability of type I error is therefore the probability that the estimator belongs to the critical region given that null hypothesis is true (statistical significance) and the probability of type II error is the probability that the estimator does not belong to the critical region given that the alternative hypothesis is true. The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false. Referring to statistical significance does not necessarily mean that the overall result is significant in real world terms. For example, in a large study of a drug it may be shown that the drug has a statistically significant but very small beneficial effect, such that the drug is unlikely to help the patient noticeably. Although in principle the acceptable level of statistical significance may be subject to debate, the significance level is the largest p-value that allows the test to reject the null hypothesis. 
This test is logically equivalent to saying that the p-value is the probability, assuming the null hypothesis is true, of observing a result at least as extreme as the test statistic. Therefore, the smaller the significance level, the lower the probability of committing type I error. Some problems are usually associated with this framework (see criticism of hypothesis testing): A difference that is highly statistically significant can still be of no practical significance, but it is possible to properly formulate tests to account for this. One response involves going beyond reporting only the significance level to include the p-value when reporting whether a hypothesis is rejected or accepted. The p-value, however, does not indicate the size or importance of the observed effect and can also seem to exaggerate the importance of minor differences in large studies. A better and increasingly common approach is to report confidence intervals. Although these are produced from the same calculations as those of hypothesis tests or p-values, they describe both the size of the effect and the uncertainty surrounding it. Fallacy of the transposed conditional, also known as the prosecutor's fallacy: criticisms arise because the hypothesis testing approach forces one hypothesis (the null hypothesis) to be favored, since what is being evaluated is the probability of the observed result given the null hypothesis and not the probability of the null hypothesis given the observed result. An alternative to this approach is offered by Bayesian inference, although it requires establishing a prior probability. Rejecting the null hypothesis does not automatically prove the alternative hypothesis. Like everything in inferential statistics, this relies on sample size, and therefore under fat tails p-values may be seriously mis-computed.
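The relationship between the significance level and the probability of a Type I error can also be checked numerically: simulate many experiments in which the null hypothesis is true and count how often a two-sided z-test rejects it. The names and default values below are illustrative.

```python
import math
import random

def two_sided_p_value(z):
    """p-value for a standard-normal test statistic: the probability,
    under H0, of a result at least as extreme as z."""
    return math.erfc(abs(z) / math.sqrt(2))

def type_i_error_rate(alpha=0.05, n=25, repeats=4000, seed=7):
    """Rejection rate when H0 (mu = 0, sigma = 1) is actually true;
    it should be close to alpha."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(repeats):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        z = (sum(sample) / n) / (1.0 / math.sqrt(n))
        rejections += (two_sided_p_value(z) < alpha)
    return rejections / repeats
```

For example, `two_sided_p_value(1.96)` is very close to 0.05, which is why 1.96 is the familiar critical value for a 5% two-sided test.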
===== Examples ===== Some well-known statistical tests and procedures are: === Bayesian statistics === An alternative paradigm to the popular frequentist paradigm is to use Bayes' theorem to update the prior probability of the hypotheses in consideration based on the relative likelihood of the evidence gathered, to obtain a posterior probability. Bayesian methods have been aided by the increase in available computing power to compute the posterior probability using numerical approximation techniques like Markov chain Monte Carlo. For statistical modelling purposes, Bayesian models tend to be hierarchical; for example, one could model each YouTube channel as having video views distributed as a normal distribution with channel-dependent mean and variance N(μ_i, σ_i), while modeling the channel means as themselves coming from a normal distribution representing the distribution of average video view counts per channel, and the variances as coming from another distribution. The concept of using a likelihood ratio can also be prominently seen in medical diagnostic testing. === Exploratory data analysis === Exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling or hypothesis testing task. === Mathematical statistics === Mathematical statistics is the application of mathematics to statistics. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. All statistical analyses make use of at least some mathematics, and mathematical statistics can therefore be regarded as a fundamental component of general statistics.
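As a minimal illustration of Bayesian updating via Bayes' theorem (a conjugate beta-binomial model, deliberately simpler than the hierarchical example above):

```python
def beta_binomial_update(alpha, beta, successes, failures):
    """Posterior after observing binomial data under a Beta prior.
    Bayes' theorem with a Beta(alpha, beta) prior on the success
    probability gives a Beta(alpha + s, beta + f) posterior."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    # Mean of a Beta(alpha, beta) distribution.
    return alpha / (alpha + beta)

# Uniform prior Beta(1, 1); then observe 7 successes in 10 trials.
a, b = beta_binomial_update(1.0, 1.0, 7, 3)
posterior_mean = beta_mean(a, b)   # (1 + 7) / (1 + 7 + 1 + 3) = 8/12
```

The posterior mean of 8/12 sits between the prior mean (1/2) and the observed frequency (7/10), showing how the prior is updated by, and gradually dominated by, the evidence.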
== History == Formal discussions on inference date back to the mathematicians and cryptographers of the Islamic Golden Age between the 8th and 13th centuries. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains one of the first uses of permutations and combinations, to list all possible Arabic words with and without vowels. Al-Kindi's Manuscript on Deciphering Cryptographic Messages gave a detailed description of how to use frequency analysis to decipher encrypted messages, providing an early example of statistical inference for decoding. Ibn Adlan (1187–1268) later made an important contribution on the use of sample size in frequency analysis. Although the term statistic was introduced by the Italian scholar Girolamo Ghilini in 1589 with reference to a collection of facts and information about a state, it was the German Gottfried Achenwall in 1749 who started using the term as a collection of quantitative information, in the modern use for this science. The earliest writing containing statistics in Europe dates back to 1663, with the publication of Natural and Political Observations upon the Bills of Mortality by John Graunt. Early applications of statistical thinking revolved around the needs of states to base policy on demographic and economic data, hence its stat- etymology. The scope of the discipline of statistics broadened in the early 19th century to include the collection and analysis of data in general. Today, statistics is widely employed in government, business, and natural and social sciences. The mathematical foundations of statistics developed from discussions concerning games of chance among mathematicians such as Gerolamo Cardano, Blaise Pascal, Pierre de Fermat, and Christiaan Huygens. 
Although the idea of probability was already examined in ancient and medieval law and philosophy (such as the work of Juan Caramuel), probability theory as a mathematical discipline only took shape at the very end of the 17th century, particularly in Jacob Bernoulli's posthumous work Ars Conjectandi. This was the first book where the realm of games of chance and the realm of the probable (which concerned opinion, evidence, and argument) were combined and submitted to mathematical analysis. The method of least squares was first described by Adrien-Marie Legendre in 1805, though Carl Friedrich Gauss presumably made use of it a decade earlier in 1795. The modern field of statistics emerged in the late 19th and early 20th century in three stages. The first wave, at the turn of the century, was led by the work of Francis Galton and Karl Pearson, who transformed statistics into a rigorous mathematical discipline used for analysis, not just in science, but in industry and politics as well. Galton's contributions included introducing the concepts of standard deviation, correlation, regression analysis and the application of these methods to the study of the variety of human characteristics—height, weight and eyelash length among others. Pearson developed the Pearson product-moment correlation coefficient, defined as a product-moment, the method of moments for the fitting of distributions to samples and the Pearson distribution, among many other things. Galton and Pearson founded Biometrika as the first journal of mathematical statistics and biostatistics (then called biometry), and the latter founded the world's first university statistics department at University College London. The second wave of the 1910s and 20s was initiated by William Sealy Gosset, and reached its culmination in the insights of Ronald Fisher, who wrote the textbooks that were to define the academic discipline in universities around the world. 
Fisher's most important publications were his 1918 seminal paper The Correlation between Relatives on the Supposition of Mendelian Inheritance (which was the first to use the statistical term variance), his classic 1925 work Statistical Methods for Research Workers and his 1935 The Design of Experiments, where he developed rigorous design of experiments models. He originated the concepts of sufficiency, ancillary statistics, Fisher's linear discriminant and Fisher information. He also coined the term null hypothesis during the Lady tasting tea experiment, which "is never proved or established, but is possibly disproved, in the course of experimentation". In his 1930 book The Genetical Theory of Natural Selection, he applied statistics to various biological concepts such as Fisher's principle (which A. W. F. Edwards called "probably the most celebrated argument in evolutionary biology") and Fisherian runaway, a concept in sexual selection about a positive feedback runaway effect found in evolution. The final wave, which mainly saw the refinement and expansion of earlier developments, emerged from the collaborative work between Egon Pearson and Jerzy Neyman in the 1930s. They introduced the concepts of "Type II" error, power of a test and confidence intervals. Jerzy Neyman in 1934 showed that stratified random sampling was in general a better method of estimation than purposive (quota) sampling. Among the early attempts to measure national economic activity were those of William Petty in the 17th century. In the 20th century the uniform System of National Accounts was developed. Today, statistical methods are applied in all fields that involve decision making, for making accurate inferences from a collated body of data and for making decisions in the face of uncertainty based on statistical methodology. The use of modern computers has expedited large-scale statistical computations and has also made possible new methods that are impractical to perform manually. 
Statistics continues to be an area of active research, for example on the problem of how to analyze big data. == Applications == === Applied statistics, theoretical statistics and mathematical statistics === Applied statistics, sometimes referred to as Statistical science, comprises descriptive statistics and the application of inferential statistics. Theoretical statistics concerns the logical arguments underlying justification of approaches to statistical inference, as well as encompassing mathematical statistics. Mathematical statistics includes not only the manipulation of probability distributions necessary for deriving results related to methods of estimation and inference, but also various aspects of computational statistics and the design of experiments. Statistical consultants can help organizations and companies that do not have in-house expertise relevant to their particular questions. === Machine learning and data mining === Machine learning models are statistical and probabilistic models that capture patterns in the data through use of computational algorithms. === Statistics in academia === Statistics is applicable to a wide variety of academic disciplines, including natural and social sciences, government, and business. Business statistics applies statistical methods in econometrics, auditing and production and operations, including services improvement and marketing research. A study of two journals in tropical biology found that the 12 most frequent statistical tests are: analysis of variance (ANOVA), chi-squared test, Student's t-test, linear regression, Pearson's correlation coefficient, Mann-Whitney U test, Kruskal-Wallis test, Shannon's diversity index, Tukey's range test, cluster analysis, Spearman's rank correlation coefficient and principal component analysis. 
A typical statistics course covers descriptive statistics, probability, binomial and normal distributions, test of hypotheses and confidence intervals, linear regression, and correlation. Modern fundamental statistical courses for undergraduate students focus on correct test selection, results interpretation, and use of free statistics software. === Statistical computing === The rapid and sustained increases in computing power starting from the second half of the 20th century have had a substantial impact on the practice of statistical science. Early statistical models were almost always from the class of linear models, but powerful computers, coupled with suitable numerical algorithms, caused an increased interest in nonlinear models (such as neural networks) as well as the creation of new types, such as generalized linear models and multilevel models. Increased computing power has also led to the growing popularity of computationally intensive methods based on resampling, such as permutation tests and the bootstrap, while techniques such as Gibbs sampling have made use of Bayesian models more feasible. The computer revolution has implications for the future of statistics with a new emphasis on "experimental" and "empirical" statistics. A large number of both general and special purpose statistical software are now available. Examples of available software capable of complex statistical computation include programs such as Mathematica, SAS, SPSS, and R. === Business statistics === In business, "statistics" is a widely used management- and decision support tool. It is particularly applied in financial management, marketing management, and production, services and operations management. Statistics is also heavily used in management accounting and auditing. The discipline of Management Science formalizes the use of statistics, and other mathematics, in business. 
(Econometrics is the application of statistical methods to economic data in order to give empirical content to economic relationships.) A typical "Business Statistics" course is intended for business majors, and covers descriptive statistics (collection, description, analysis, and summary of data), probability (typically the binomial and normal distributions), test of hypotheses and confidence intervals, linear regression, and correlation; (follow-on) courses may include forecasting, time series, decision trees, multiple linear regression, and other topics from business analytics more generally. Professional certification programs, such as the CFA, often include topics in statistics. == Specialized disciplines == Statistical techniques are used in a wide range of types of scientific and social research, including: biostatistics, computational biology, computational sociology, network biology, social science, sociology and social research. Some fields of inquiry use applied statistics so extensively that they have specialized terminology. These disciplines include: In addition, there are particular types of statistical analysis that have also developed their own specialised terminology and methodology: Statistics form a key basis tool in business and manufacturing as well. It is used to understand measurement systems variability, control processes (as in statistical process control or SPC), for summarizing data, and to make data-driven decisions. == Misuse == Misuse of statistics can produce subtle but serious errors in description and interpretation—subtle in the sense that even experienced professionals make such errors, and serious in the sense that they can lead to devastating decision errors. For instance, social policy, medical practice, and the reliability of structures like bridges all rely on the proper use of statistics. Even when statistical techniques are correctly applied, the results can be difficult to interpret for those lacking expertise. 
The statistical significance of a trend in the data—which measures the extent to which a trend could be caused by random variation in the sample—may or may not agree with an intuitive sense of its significance. The set of basic statistical skills (and skepticism) that people need to deal with information in their everyday lives properly is referred to as statistical literacy. There is a general perception that statistical knowledge is all-too-frequently intentionally misused by finding ways to interpret only the data that are favorable to the presenter. A mistrust and misunderstanding of statistics is associated with the quotation, "There are three kinds of lies: lies, damned lies, and statistics". Misuse of statistics can be both inadvertent and intentional, and the book How to Lie with Statistics, by Darrell Huff, outlines a range of considerations. In an attempt to shed light on the use and misuse of statistics, reviews of statistical techniques used in particular fields are conducted (e.g. Warne, Lazo, Ramos, and Ritter (2012)). Ways to avoid misuse of statistics include using proper diagrams and avoiding bias. Misuse can occur when conclusions are overgeneralized and claimed to be representative of more than they really are, often by either deliberately or unconsciously overlooking sampling bias. Bar graphs are arguably the easiest diagrams to use and understand, and they can be made either by hand or with simple computer programs. Most people do not look for bias or errors, so they are not noticed. Thus, people may often believe that something is true even if it is not well represented. To make data gathered from statistics believable and accurate, the sample taken must be representative of the whole. According to Huff, "The dependability of a sample can be destroyed by [bias]... allow yourself some degree of skepticism." To assist in the understanding of statistics Huff proposed a series of questions to be asked in each case: Who says so? 
(Does he/she have an axe to grind?) How does he/she know? (Does he/she have the resources to know the facts?) What's missing? (Does he/she give us a complete picture?) Did someone change the subject? (Does he/she offer us the right answer to the wrong problem?) Does it make sense? (Is his/her conclusion logical and consistent with what we already know?) === Misinterpretation: correlation === The concept of correlation is particularly noteworthy for the potential confusion it can cause. Statistical analysis of a data set often reveals that two variables (properties) of the population under consideration tend to vary together, as if they were connected. For example, a study of annual income that also looks at age of death, might find that poor people tend to have shorter lives than affluent people. The two variables are said to be correlated; however, they may or may not be the cause of one another. The correlation phenomena could be caused by a third, previously unconsidered phenomenon, called a lurking variable or confounding variable. For this reason, there is no way to immediately infer the existence of a causal relationship between the two variables. == See also == Foundations and major areas of statistics == References == == Further reading == Lydia Denworth, "A Significant Problem: Standard scientific methods are under fire. Will anything change?", Scientific American, vol. 321, no. 4 (October 2019), pp. 62–67. "The use of p values for nearly a century [since 1925] to determine statistical significance of experimental results has contributed to an illusion of certainty and [to] reproducibility crises in many scientific fields. There is growing determination to reform statistical analysis... Some [researchers] suggest changing statistical methods, whereas others would do away with a threshold for defining "significant" results". (p. 63.) Barbara Illowsky; Susan Dean (2014). Introductory Statistics. OpenStax CNX. ISBN 978-1938168208. Stockburger, David W. 
"Introductory Statistics: Concepts, Models, and Applications". Missouri State University (3rd Web ed.). Archived from the original on 28 May 2020. OpenIntro Statistics Archived 2019-06-16 at the Wayback Machine, 3rd edition by Diez, Barr, and Cetinkaya-Rundel Stephen Jones, 2010. Statistics in Psychology: Explanations without Equations. Palgrave Macmillan. ISBN 978-1137282392. Cohen, J (1990). "Things I have learned (so far)" (PDF). American Psychologist. 45 (12): 1304–1312. doi:10.1037/0003-066x.45.12.1304. S2CID 7180431. Archived from the original (PDF) on 2017-10-18. Gigerenzer, G (2004). "Mindless statistics". Journal of Socio-Economics. 33 (5): 587–606. doi:10.1016/j.socec.2004.09.033. Ioannidis, J.P.A. (2005). "Why most published research findings are false". PLOS Medicine. 2 (4): 696–701. doi:10.1371/journal.pmed.0040168. PMC 1855693. PMID 17456002. == External links == (Electronic Version): TIBCO Software Inc. (2020). Data Science Textbook. Online Statistics Education: An Interactive Multimedia Course of Study. Developed by Rice University (Lead Developer), University of Houston Clear Lake, Tufts University, and National Science Foundation. UCLA Statistical Computing Resources (archived 17 July 2006) Philosophy of Statistics from the Stanford Encyclopedia of Philosophy
Probability interpretations
The word "probability" has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical, tendency of something to occur, or is it a measure of how strongly one believes it will occur, or does it draw on both these elements? In answering such questions, mathematicians interpret the probability values of probability theory. There are two broad categories of probability interpretations which can be called "physical" and "evidential" probabilities. Physical probabilities, which are also called objective or frequency probabilities, are associated with random physical systems such as roulette wheels, rolling dice and radioactive atoms. In such systems, a given type of event (such as a die yielding a six) tends to occur at a persistent rate, or "relative frequency", in a long run of trials. Physical probabilities either explain, or are invoked to explain, these stable frequencies. The two main kinds of theory of physical probability are frequentist accounts (such as those of Venn, Reichenbach and von Mises) and propensity accounts (such as those of Popper, Miller, Giere and Fetzer). Evidential probability, also called Bayesian probability, can be assigned to any statement whatsoever, even when no random process is involved, as a way to represent its subjective plausibility, or the degree to which the statement is supported by the available evidence. On most accounts, evidential probabilities are considered to be degrees of belief, defined in terms of dispositions to gamble at certain odds. The four main evidential interpretations are the classical (e.g. Laplace's) interpretation, the subjective interpretation (de Finetti and Savage), the epistemic or inductive interpretation (Ramsey, Cox) and the logical interpretation (Keynes and Carnap). 
There are also evidential interpretations of probability covering groups, which are often labelled as 'intersubjective' (proposed by Gillies and Rowbottom). Some interpretations of probability are associated with approaches to statistical inference, including theories of estimation and hypothesis testing. The physical interpretation, for example, is taken by followers of "frequentist" statistical methods, such as Ronald Fisher, Jerzy Neyman and Egon Pearson. Statisticians of the opposing Bayesian school typically accept the frequency interpretation when it makes sense (although not as a definition), but there is less agreement regarding physical probabilities. Bayesians consider the calculation of evidential probabilities to be both valid and necessary in statistics. This article, however, focuses on the interpretations of probability rather than theories of statistical inference. The terminology of this topic is rather confusing, in part because probabilities are studied within a variety of academic fields. The word "frequentist" is especially tricky. To philosophers it refers to a particular theory of physical probability, one that has more or less been abandoned. To scientists, on the other hand, "frequentist probability" is just another name for physical (or objective) probability. Those who promote Bayesian inference view "frequentist statistics" as an approach to statistical inference that is based on the frequency interpretation of probability, usually relying on the law of large numbers and characterized by what is called 'Null Hypothesis Significance Testing' (NHST). Also the word "objective", as applied to probability, sometimes means exactly what "physical" means here, but is also used of evidential probabilities that are fixed by rational constraints, such as logical and epistemic probabilities. It is unanimously agreed that statistics depends somehow on probability. 
But, as to what probability is and how it is connected with statistics, there has seldom been such complete disagreement and breakdown of communication since the Tower of Babel. Doubtless, much of the disagreement is merely terminological and would disappear under sufficiently sharp analysis. == Philosophy == The philosophy of probability presents problems chiefly in matters of epistemology and the uneasy interface between mathematical concepts and ordinary language as it is used by non-mathematicians. Probability theory is an established field of study in mathematics. It has its origins in correspondence discussing the mathematics of games of chance between Blaise Pascal and Pierre de Fermat in the seventeenth century, and was formalized and rendered axiomatic as a distinct branch of mathematics by Andrey Kolmogorov in the twentieth century. In axiomatic form, mathematical statements about probability theory carry the same sort of epistemological confidence within the philosophy of mathematics as is shared by other mathematical statements. The mathematical analysis originated in observations of the behaviour of game equipment such as playing cards and dice, which are designed specifically to introduce random and equalized elements; in mathematical terms, they are subjects of indifference. This is not the only way probabilistic statements are used in ordinary human language: when people say that "it will probably rain", they typically do not mean that the outcome of rain versus not-rain is a random factor that the odds currently favor; instead, such statements are perhaps better understood as qualifying their expectation of rain with a degree of confidence. 
Likewise, when it is written that "the most probable explanation" of the name of Ludlow, Massachusetts "is that it was named after Roger Ludlow", what is meant here is not that Roger Ludlow is favored by a random factor, but rather that this is the most plausible explanation of the evidence, which admits other, less likely explanations. Thomas Bayes attempted to provide a logic that could handle varying degrees of confidence; as such, Bayesian probability is an attempt to recast the representation of probabilistic statements as an expression of the degree of confidence by which the beliefs they express are held. Though probability initially had somewhat mundane motivations, its modern influence and use are widespread, ranging from evidence-based medicine, through six sigma, all the way to the probabilistically checkable proof and the string theory landscape. == Classical definition == The first attempt at mathematical rigour in the field of probability, championed by Pierre-Simon Laplace, is now known as the classical definition. Developed from studies of games of chance (such as rolling dice), it states that probability is shared equally between all the possible outcomes, provided these outcomes can be deemed equally likely. The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible. 
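Laplace's ratio of favorable to possible cases translates directly into code. The sketch below is an illustrative aside, not part of the original text; the die example and the use of exact fractions are assumptions made for clarity:

```python
from fractions import Fraction

def classical_probability(favorable, possible):
    """Laplace's classical definition: the number of favorable cases
    divided by the number of all equally possible cases."""
    return Fraction(favorable, possible)

# A fair six-sided die: 3 of the 6 equally likely outcomes are even.
print(classical_probability(3, 6))  # 1/2
```

Using Fraction rather than floating point keeps the result an exact ratio of case counts, which is all the classical definition asserts.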
This can be represented mathematically as follows: If a random experiment can result in N mutually exclusive and equally likely outcomes and if N_A of these outcomes result in the occurrence of the event A, the probability of A is defined by P(A) = N_A / N. There are two clear limitations to the classical definition. Firstly, it is applicable only to situations in which there is only a 'finite' number of possible outcomes. But some important random experiments, such as tossing a coin until it shows heads, give rise to an infinite set of outcomes. And secondly, it requires an a priori determination that all possible outcomes are equally likely without falling into a trap of circular reasoning by relying on the notion of probability. (In using the terminology "we may be equally undecided", Laplace assumed, by what has been called the "principle of insufficient reason", that all possible outcomes are equally likely if there is no known reason to assume otherwise, for which there is no obvious justification.) == Frequentism == Frequentists posit that the probability of an event is its relative frequency over time, i.e., its relative frequency of occurrence after repeating a process a large number of times under similar conditions. This is also known as aleatory probability. The events are assumed to be governed by some random physical phenomena, which are either phenomena that are predictable, in principle, with sufficient information (see determinism); or phenomena which are essentially unpredictable. Examples of the first kind include tossing dice or spinning a roulette wheel; an example of the second kind is radioactive decay. 
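The stabilisation of relative frequencies that frequentists appeal to can be seen in a short simulation. This sketch is an illustrative assumption, not part of the original text; the success probability of 0.5 and the fixed seed are chosen only to make the run reproducible:

```python
import random

def relative_frequency(trials, p=0.5, seed=42):
    """Empirical relative frequency of an event with probability p
    after `trials` independent repetitions."""
    rng = random.Random(seed)
    successes = sum(rng.random() < p for _ in range(trials))
    return successes / trials

# The observed frequency drifts toward 1/2 as the number of trials grows.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```

No finite run ever fixes the limit exactly, which is precisely the circularity worry raised in the next paragraph.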
In the case of tossing a fair coin, frequentists say that the probability of getting heads is 1/2, not because there are two equally likely outcomes but because repeated series of large numbers of trials demonstrate that the empirical frequency converges to the limit 1/2 as the number of trials goes to infinity. If we denote by n_A the number of occurrences of an event A in n trials, then if lim_{n→+∞} n_A / n = p, we say that P(A) = p. The frequentist view has its own problems. It is of course impossible to actually perform an infinity of repetitions of a random experiment to determine the probability of an event. But if only a finite number of repetitions of the process are performed, different relative frequencies will appear in different series of trials. If these relative frequencies are to define the probability, the probability will be slightly different every time it is measured. But the real probability should be the same every time. If we acknowledge the fact that we can only measure a probability with some error of measurement attached, we still get into problems as the error of measurement can only be expressed as a probability, the very concept we are trying to define. This renders even the frequency definition circular; see for example "What is the Chance of an Earthquake?" == Subjectivism == Subjectivists, also known as Bayesians or followers of epistemic probability, give the notion of probability a subjective status by regarding it as a measure of the 'degree of belief' of the individual assessing the uncertainty of a particular situation. Epistemic or subjective probability is sometimes called credence, as opposed to the term chance for a propensity probability. 
Some examples of epistemic probability are to assign a probability to the proposition that a proposed law of physics is true or to determine how probable it is that a suspect committed a crime, based on the evidence presented. The use of Bayesian probability raises the philosophical debate as to whether it can contribute valid justifications of belief. Bayesians point to the work of Ramsey (p 182) and de Finetti (p 103) as proving that subjective beliefs must follow the laws of probability if they are to be coherent. Empirical evidence, however, casts doubt on whether humans actually hold coherent beliefs. The use of Bayesian probability involves specifying a prior probability. This may be obtained from consideration of whether the required prior probability is greater or lesser than a reference probability associated with an urn model or a thought experiment. The issue is that for a given problem, multiple thought experiments could apply, and choosing one is a matter of judgement: different people may assign different prior probabilities, known as the reference class problem. The "sunrise problem" provides an example. == Propensity == Propensity theorists think of probability as a physical propensity, or disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind or to yield a long run relative frequency of such an outcome. This kind of objective probability is sometimes called 'chance'. Propensities, or chances, are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate given outcome types at persistent rates, which are known as propensities or chances. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives. 
In contrast, a propensitist is able to use the law of large numbers to explain the behaviour of long-run frequencies. This law, which is a consequence of the axioms of probability, says that if (for example) a coin is tossed repeatedly many times, in such a way that its probability of landing heads is the same on each toss, and the outcomes are probabilistically independent, then the relative frequency of heads will be close to the probability of heads on each single toss. This law allows that stable long-run frequencies are a manifestation of invariant single-case probabilities. In addition to explaining the emergence of stable relative frequencies, the idea of propensity is motivated by the desire to make sense of single-case probability attributions in quantum mechanics, such as the probability of decay of a particular atom at a particular time. The main challenge facing propensity theories is to say exactly what propensity means. (And then, of course, to show that propensity thus defined has the required properties.) At present, unfortunately, none of the well-recognised accounts of propensity comes close to meeting this challenge. A propensity theory of probability was given by Charles Sanders Peirce. A later propensity theory was proposed by philosopher Karl Popper, who had only slight acquaintance with the writings of C. S. Peirce, however. Popper noted that the outcome of a physical experiment is produced by a certain set of "generating conditions". When we repeat an experiment, as the saying goes, we really perform another experiment with a (more or less) similar set of generating conditions. To say that a set of generating conditions has propensity p of producing the outcome E means that those exact conditions, if repeated indefinitely, would produce an outcome sequence in which E occurred with limiting relative frequency p. 
For Popper then, a deterministic experiment would have propensity 0 or 1 for each outcome, since those generating conditions would have the same outcome on each trial. In other words, non-trivial propensities (those that differ from 0 and 1) only exist for genuinely nondeterministic experiments. A number of other philosophers, including David Miller and Donald A. Gillies, have proposed propensity theories somewhat similar to Popper's. Other propensity theorists (e.g. Ronald Giere) do not explicitly define propensities at all, but rather see propensity as defined by the theoretical role it plays in science. They argued, for example, that physical magnitudes such as electrical charge cannot be explicitly defined either, in terms of more basic things, but only in terms of what they do (such as attracting and repelling other electrical charges). In a similar way, propensity is whatever fills the various roles that physical probability plays in science. What roles does physical probability play in science? What are its properties? One central property of chance is that, when known, it constrains rational belief to take the same numerical value. David Lewis called this the Principal Principle, a term that philosophers have mostly adopted. For example, suppose you are certain that a particular biased coin has propensity 0.32 to land heads every time it is tossed. What is then the correct price for a gamble that pays $1 if the coin lands heads, and nothing otherwise? According to the Principal Principle, the fair price is 32 cents. == Logical, epistemic, and inductive probability == It is widely recognized that the term "probability" is sometimes used in contexts where it has nothing to do with physical randomness. Consider, for example, the claim that the extinction of the dinosaurs was probably caused by a large meteorite hitting the earth. 
Statements such as "Hypothesis H is probably true" have been interpreted to mean that the (presently available) empirical evidence (E, say) supports H to a high degree. This degree of support of H by E has been called the logical, or epistemic, or inductive probability of H given E. The differences between these interpretations are rather small, and may seem inconsequential. One of the main points of disagreement lies in the relation between probability and belief. Logical probabilities are conceived (for example in Keynes' Treatise on Probability) to be objective, logical relations between propositions (or sentences), and hence not to depend in any way upon belief. They are degrees of (partial) entailment, or degrees of logical consequence, not degrees of belief. (They do, nevertheless, dictate proper degrees of belief, as is discussed below.) Frank P. Ramsey, on the other hand, was skeptical about the existence of such objective logical relations and argued that (evidential) probability is "the logic of partial belief". (p 157) In other words, Ramsey held that epistemic probabilities simply are degrees of rational belief, rather than being logical relations that merely constrain degrees of rational belief. Another point of disagreement concerns the uniqueness of evidential probability, relative to a given state of knowledge. Rudolf Carnap held, for example, that logical principles always determine a unique logical probability for any statement, relative to any body of evidence. Ramsey, by contrast, thought that while degrees of belief are subject to some rational constraints (such as, but not limited to, the axioms of probability) these constraints usually do not determine a unique value. Rational people, in other words, may differ somewhat in their degrees of belief, even if they all have the same information. 
== Prediction == An alternative account of probability emphasizes the role of prediction – predicting future observations on the basis of past observations, not on unobservable parameters. In its modern form, it is mainly in the Bayesian vein. This was the main function of probability before the 20th century, but fell out of favor compared to the parametric approach, which modeled phenomena as a physical system that was observed with error, such as in celestial mechanics. The modern predictive approach was pioneered by Bruno de Finetti, with the central idea of exchangeability – that future observations should behave like past observations. This view came to the attention of the Anglophone world with the 1974 translation of de Finetti's book, and has since been propounded by such statisticians as Seymour Geisser. == Axiomatic probability == The mathematics of probability can be developed on an entirely axiomatic basis that is independent of any interpretation: see the articles on probability theory and probability axioms for a detailed treatment. == See also == Coverage probability Frequency (statistics) Negative probability Philosophy of mathematics Philosophy of statistics Pignistic probability Probability amplitude (quantum mechanics) Sunrise problem Bayesian epistemology == Notes == == References == == Further reading == Cohen, L (1989). An introduction to the philosophy of induction and probability. Oxford New York: Clarendon Press Oxford University Press. ISBN 978-0198750789. Eagle, Antony (2011). Philosophy of probability : contemporary readings. Abingdon, Oxon New York: Routledge. ISBN 978-0415483872. Gillies, Donald (2000). Philosophical theories of probability. London New York: Routledge. ISBN 978-0415182768. A comprehensive monograph covering the four principal current interpretations: logical, subjective, frequency, propensity. Also proposes a novel intersubjective interpretation. Hacking, Ian (2006). 
The emergence of probability : a philosophical study of early ideas about probability, induction and statistical inference. Cambridge New York: Cambridge University Press. ISBN 978-0521685573. Paul Humphreys, ed. (1994) Patrick Suppes: Scientific Philosopher, Synthese Library, Springer-Verlag. Vol. 1: Probability and Probabilistic Causality. Vol. 2: Philosophy of Physics, Theory Structure and Measurement, and Action Theory. Jackson, Frank, and Robert Pargetter (1982) "Physical Probability as a Propensity," Noûs 16(4): 567–583. Khrennikov, Andrei (2009). Interpretations of probability (2nd ed.). Berlin New York: Walter de Gruyter. ISBN 978-3110207484. Covers mostly non-Kolmogorov probability models, particularly with respect to quantum physics. Lewis, David (1983). Philosophical papers. New York: Oxford University Press. ISBN 978-0195036466. Plato, Jan von (1994). Creating modern probability : its mathematics, physics, and philosophy in historical perspective. Cambridge England New York: Cambridge University Press. ISBN 978-0521597357. Rowbottom, Darrell (2015). Probability. Cambridge: Polity. ISBN 978-0745652573. A highly accessible introduction to the interpretation of probability. Covers all the main interpretations, and proposes a novel group level (or 'intersubjective') interpretation. Also covers fallacies and applications of interpretations in the social and natural sciences. Skyrms, Brian (2000). Choice and chance : an introduction to inductive logic. Australia Belmont, CA: Wadsworth/Thomson Learning. ISBN 978-0534557379. == External links == Zalta, Edward N. (ed.). "Interpretations of Probability". Stanford Encyclopedia of Philosophy. Interpretations of Probability at the Indiana Philosophy Ontology Project Interpretation of Probability at PhilPapers
Model compression
Model compression is a machine learning technique for reducing the size of trained models. Large models can achieve high accuracy, but often at the cost of significant resource requirements. Compression techniques aim to reduce model size without significant loss of performance. Smaller models require less storage space, and consume less memory and compute during inference. Compressed models enable deployment on resource-constrained devices such as smartphones, embedded systems, edge computing devices, and consumer electronics. Efficient inference is also valuable for large corporations that serve large model inference over an API, allowing them to reduce computational costs and improve response times for users. Model compression is not to be confused with knowledge distillation, in which a separate, smaller "student" model is trained to imitate the input-output behavior of a larger "teacher" model. == Techniques == Several techniques are employed for model compression. === Pruning === Pruning sparsifies a large model by setting some parameters to exactly zero, effectively reducing the number of parameters. This allows the use of sparse matrix operations, which are faster than dense matrix operations. Pruning criteria can be based on the magnitudes of parameters, the statistical pattern of neural activations, Hessian values, etc. === Quantization === Quantization reduces the numerical precision of weights and activations. For example, instead of storing weights as 32-bit floating-point numbers, they can be represented using 8-bit integers. Low-precision parameters take up less space and require less compute for arithmetic. It is also possible to quantize some parameters more aggressively than others: for example, a less important parameter can have 8-bit precision while another, more important parameter has 16-bit precision. Inference with such models requires mixed-precision arithmetic. 
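The two techniques described above can be sketched in a few lines. This is a minimal illustration on a flat list of weights; the helper functions are made up for this sketch and are not any particular library's API:

```python
# Illustrative sketch: magnitude-based pruning and uniform (affine)
# 8-bit quantization applied to a flat list of weights.

def prune_by_magnitude(weights, threshold):
    """Set every weight whose magnitude is below the threshold to exactly zero."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def quantize_uint8(weights):
    """Map floats linearly onto the integers 0..255 (uniform affine quantization)."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 if hi > lo else 1.0
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo          # integers plus the parameters needed to dequantize

def dequantize(q, scale, lo):
    """Recover approximate float weights from the 8-bit representation."""
    return [v * scale + lo for v in q]

weights = [0.9, -0.02, 0.41, 0.005, -0.78]
pruned = prune_by_magnitude(weights, threshold=0.05)   # → [0.9, 0.0, 0.41, 0.0, -0.78]
q, scale, lo = quantize_uint8(pruned)
restored = dequantize(q, scale, lo)  # each entry within one quantization step of pruned
```

The round trip through `quantize_uint8`/`dequantize` loses at most half a quantization step per weight, which is the trade-off quantization makes for a 4x storage reduction versus 32-bit floats.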
Quantization can also be applied during training rather than only after it. PyTorch implements automatic mixed precision (AMP), which performs autocasting and gradient (loss) scaling. === Low-rank factorization === Weight matrices can be approximated by low-rank matrices. Let W {\displaystyle W} be a weight matrix of shape m × n {\displaystyle m\times n} . A low-rank approximation is W ≈ U V T {\displaystyle W\approx UV^{T}} , where U {\displaystyle U} and V {\displaystyle V} are matrices of shapes m × k , n × k {\displaystyle m\times k,n\times k} . When k {\displaystyle k} is small, this both reduces the number of parameters needed to represent W {\displaystyle W} approximately, and accelerates matrix multiplication by W {\displaystyle W} . Low-rank approximations can be found by singular value decomposition (SVD). The choice of rank for each weight matrix is a hyperparameter, which can be jointly optimized as a mixed discrete-continuous optimization problem. The rank of weight matrices may also be pruned after training, taking into account the effect of activation functions like ReLU on the implicit rank of the weight matrices. == Training == Model compression may be decoupled from training; that is, a model is first trained without regard for how it might be compressed, and then compressed. However, it may also be combined with training. The "train big, then compress" method trains a large model for a small number of training steps (fewer than it would take to train to convergence), then heavily compresses the model. At the same compute budget, this method has been found to produce better models than lightly compressed, small models. In Deep Compression, the compression has three steps. First loop (pruning): prune all weights whose magnitude is below a threshold, then fine-tune the network, then prune again, and so on. 
Second loop (quantization): cluster weights, then enforce weight sharing among all weights in each cluster, then fine-tune the network, then cluster again, and so on. Third step: use Huffman coding to losslessly compress the model. The SqueezeNet paper reported that Deep Compression achieved a compression ratio of 35 on AlexNet, and a ratio of ~10 on SqueezeNet. == References == Review papers Li, Zhuo; Li, Hengyi; Meng, Lin (March 12, 2023). "Model Compression for Deep Neural Networks: A Survey". Computers. 12 (3). MDPI AG: 60. doi:10.3390/computers12030060. ISSN 2073-431X. Deng, Lei; Li, Guoqi; Han, Song; Shi, Luping; Xie, Yuan (March 20, 2020). "Model Compression and Hardware Acceleration for Neural Networks: A Comprehensive Survey". Proceedings of the IEEE. 108 (4): 485–532. doi:10.1109/JPROC.2020.2976475. Retrieved October 18, 2024. Cheng, Yu; Wang, Duo; Zhou, Pan; Zhang, Tao (October 23, 2017). "A Survey of Model Compression and Acceleration for Deep Neural Networks". arXiv:1710.09282 [cs.LG]. Choudhary, Tejalal; Mishra, Vipul; Goswami, Anurag; Sarangapani, Jagannathan (February 8, 2020). "A comprehensive survey on model compression and acceleration". Artificial Intelligence Review. 53 (7). Springer Science and Business Media LLC: 5113–5155. doi:10.1007/s10462-020-09816-7. ISSN 0269-2821.
Delta operator
In mathematics, a delta operator is a shift-equivariant linear operator Q : K [ x ] ⟶ K [ x ] {\displaystyle Q\colon \mathbb {K} [x]\longrightarrow \mathbb {K} [x]} on the vector space of polynomials in a variable x {\displaystyle x} over a field K {\displaystyle \mathbb {K} } that reduces degrees by one. To say that Q {\displaystyle Q} is shift-equivariant means that if g ( x ) = f ( x + a ) {\displaystyle g(x)=f(x+a)} , then ( Q g ) ( x ) = ( Q f ) ( x + a ) . {\displaystyle {(Qg)(x)=(Qf)(x+a)}.\,} In other words, if f {\displaystyle f} is a "shift" of g {\displaystyle g} , then Q f {\displaystyle Qf} is also a shift of Q g {\displaystyle Qg} , and has the same "shifting vector" a {\displaystyle a} . To say that an operator reduces degree by one means that if f {\displaystyle f} is a polynomial of degree n {\displaystyle n} , then Q f {\displaystyle Qf} is either a polynomial of degree n − 1 {\displaystyle n-1} , or, in case n = 0 {\displaystyle n=0} , Q f {\displaystyle Qf} is 0. Sometimes a delta operator is defined to be a shift-equivariant linear transformation on polynomials in x {\displaystyle x} that maps x {\displaystyle x} to a nonzero constant. Seemingly weaker than the definition given above, this latter characterization can be shown to be equivalent to the stated definition when K {\displaystyle \mathbb {K} } has characteristic zero, since shift-equivariance is a fairly strong condition. == Examples == The forward difference operator ( Δ f ) ( x ) = f ( x + 1 ) − f ( x ) {\displaystyle (\Delta f)(x)=f(x+1)-f(x)\,} is a delta operator. Differentiation with respect to x, written as D, is also a delta operator. Any operator of the form ∑ k = 1 ∞ c k D k {\displaystyle \sum _{k=1}^{\infty }c_{k}D^{k}} (where Dn(ƒ) = ƒ(n) is the nth derivative) with c 1 ≠ 0 {\displaystyle c_{1}\neq 0} is a delta operator. It can be shown that all delta operators can be written in this form. 
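The defining properties can be checked concretely for the forward difference operator. In the sketch below, a polynomial is represented by its coefficient list, and the helper names are illustrative rather than standard:

```python
from math import comb

# Polynomials are represented as coefficient lists [c0, c1, c2, ...]
# standing for c0 + c1*x + c2*x^2 + ...

def shift(coeffs, a):
    """Coefficients of f(x + a), expanded via the binomial theorem."""
    out = [0.0] * len(coeffs)
    for k, c in enumerate(coeffs):
        for j in range(k + 1):
            out[j] += c * comb(k, j) * a ** (k - j)
    return out

def forward_difference(coeffs):
    """The delta operator (Δf)(x) = f(x + 1) - f(x), as a coefficient list."""
    return [s - c for s, c in zip(shift(coeffs, 1), coeffs)]

f = [0.0, 0.0, 0.0, 1.0]        # f(x) = x^3, degree 3
df = forward_difference(f)      # Δf(x) = (x+1)^3 - x^3 = 3x^2 + 3x + 1, degree 2
```

Applying `forward_difference` to a cubic yields a quadratic, illustrating the degree-reducing property, and `forward_difference(shift(f, a))` agrees with `shift(forward_difference(f), a)`, which is exactly the shift-equivariance condition above.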
For example, the difference operator given above can be expanded as Δ = e D − 1 = ∑ k = 1 ∞ D k k ! . {\displaystyle \Delta =e^{D}-1=\sum _{k=1}^{\infty }{\frac {D^{k}}{k!}}.} The generalized derivative of time scale calculus, which unifies the forward difference operator with the derivative of standard calculus, is a delta operator. In computer science and cybernetics, the term "discrete-time delta operator" (δ) is generally taken to mean a difference operator ( δ f ) ( x ) = f ( x + Δ t ) − f ( x ) Δ t , {\displaystyle {(\delta f)(x)={{f(x+\Delta t)-f(x)} \over {\Delta t}}},} the Euler approximation of the usual derivative with a discrete sample time Δ t {\displaystyle \Delta t} . The delta formulation offers significant numerical advantages over the shift operator at fast sampling rates. == Basic polynomials == Every delta operator Q {\displaystyle Q} has a unique sequence of "basic polynomials", a polynomial sequence defined by three conditions: p 0 ( x ) = 1 ; {\displaystyle p_{0}(x)=1;} p n ( 0 ) = 0 ; {\displaystyle p_{n}(0)=0;} ( Q p n ) ( x ) = n p n − 1 ( x ) for all n ∈ N . {\displaystyle (Qp_{n})(x)=np_{n-1}(x){\text{ for all }}n\in \mathbb {N} .} For example, the basic polynomials of the derivative D are the monomials x^n, and those of the forward difference Δ are the falling factorials x(x − 1)⋯(x − n + 1). Such a sequence of basic polynomials is always of binomial type, and conversely every sequence of binomial type arises as the basic sequence of some delta operator. If the first two conditions above are dropped, then the third condition says this polynomial sequence is a Sheffer sequence—a more general concept. == See also == Pincherle derivative Shift operator Umbral calculus == References == Nikol'Skii, Nikolai Kapitonovich (1986), Treatise on the shift operator: spectral function theory, Berlin, New York: Springer-Verlag, ISBN 978-0-387-15021-5 == External links == Weisstein, Eric W. "Delta Operator". MathWorld.