A graphical model, probabilistic graphical model (PGM), or structured probabilistic model is a probabilistic model for which a graph expresses the conditional dependence structure between random variables. Graphical models are commonly used in probability theory, statistics (particularly Bayesian statistics) and machine learning.

== Types of graphical models ==

Generally, probabilistic graphical models use a graph-based representation as the foundation for encoding a distribution over a multi-dimensional space and a graph that is a compact or factorized representation of a set of independences that hold in the specific distribution. Two branches of graphical representations of distributions are commonly used, namely Bayesian networks and Markov random fields. Both families encompass the properties of factorization and independences, but they differ in the set of independences they can encode and in the factorization of the distribution that they induce.

=== Undirected Graphical Model ===

The undirected graph shown may have one of several interpretations; the common feature is that the presence of an edge implies some sort of dependence between the corresponding random variables. From this graph, we might deduce that B, C, and D are all conditionally independent given A: if the value of A is known, then the values of B, C, and D provide no further information about each other. Equivalently (in this case), the joint probability distribution can be factorized as

P[A,B,C,D] = f_{AB}[A,B] · f_{AC}[A,C] · f_{AD}[A,D]

for some non-negative functions f_{AB}, f_{AC}, f_{AD}.

=== Bayesian network ===

If the network structure of the model is a directed acyclic graph, the model represents a factorization of the joint probability of all random variables.
More precisely, if the events are X_1, …, X_n, then the joint probability satisfies

P[X_1, …, X_n] = ∏_{i=1}^{n} P[X_i | pa(X_i)],

where pa(X_i) is the set of parents of node X_i (nodes with edges directed towards X_i). In other words, the joint distribution factors into a product of conditional distributions. For example, in the directed acyclic graph shown in the figure this factorization would be

P[A,B,C,D] = P[A] · P[B|A] · P[C|A] · P[D|A,C].

Each node is conditionally independent of its non-descendants given the values of its parents. In general, any two sets of nodes are conditionally independent given a third set if a criterion called d-separation holds in the graph; local and global independences are equivalent in Bayesian networks.

This type of graphical model is known as a directed graphical model, Bayesian network, or belief network. Classic machine learning models like hidden Markov models and neural networks, as well as newer models such as variable-order Markov models, can be considered special cases of Bayesian networks. One of the simplest Bayesian networks is the naive Bayes classifier.

=== Cyclic Directed Graphical Models ===

The next figure depicts a graphical model with a cycle. This may be interpreted in terms of each variable 'depending' on the values of its parents in some manner. The particular graph shown suggests a joint probability density that factors as

P[A,B,C,D] = P[A] · P[B] · P[C,D | A,B],

but other interpretations are possible.
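The directed factorization above can be checked numerically. The following sketch builds a joint distribution over four binary variables from hypothetical conditional probability tables matching P[A,B,C,D] = P[A]·P[B|A]·P[C|A]·P[D|A,C]; all table values are made up for illustration.

```python
import itertools

# Hypothetical CPTs for binary variables (values are illustrative only).
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}   # P_B_given_A[a][b]
P_C_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}
P_D_given_AC = {(a, c): {0: 0.5 + 0.1 * a - 0.2 * c, 1: 0.5 - 0.1 * a + 0.2 * c}
                for a in (0, 1) for c in (0, 1)}

def joint(a, b, c, d):
    # The joint is the product of each node's conditional given its parents.
    return P_A[a] * P_B_given_A[a][b] * P_C_given_A[a][c] * P_D_given_AC[a, c][d]

# Because each CPT row is a proper distribution, the joint sums to 1.
total = sum(joint(*v) for v in itertools.product((0, 1), repeat=4))
assert abs(total - 1.0) < 1e-12
```

Any set of valid conditional tables plugged into this product yields a proper joint distribution, which is the practical content of the factorization theorem.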
=== Other types ===

* Dependency network, where cycles are allowed
* Tree-augmented classifier or TAN model
* Targeted Bayesian network learning (TBNL)
* A factor graph is an undirected bipartite graph connecting variables and factors. Each factor represents a function over the variables it is connected to. This is a helpful representation for understanding and implementing belief propagation.
* A clique tree or junction tree is a tree of cliques, used in the junction tree algorithm.
* A chain graph is a graph which may have both directed and undirected edges, but without any directed cycles (i.e. if we start at any vertex and move along the graph respecting the directions of any arrows, we cannot return to the vertex we started from if we have passed an arrow). Both directed acyclic graphs and undirected graphs are special cases of chain graphs, which can therefore provide a way of unifying and generalizing Bayesian and Markov networks.
* An ancestral graph is a further extension, having directed, bidirected and undirected edges.
* Random field techniques:
** A Markov random field, also known as a Markov network, is a model over an undirected graph.
** A conditional random field is a discriminative model specified over an undirected graph.
** A restricted Boltzmann machine is a bipartite generative model specified over an undirected graph.
* A graphical model with many repeated subunits can be represented with plate notation.

== Applications ==

Graphical models provide algorithms for discovering and analyzing structure in complex distributions, describing them succinctly, and extracting unstructured information, which allows them to be constructed and utilized effectively. Applications of graphical models include causal inference, information extraction, speech recognition, computer vision, decoding of low-density parity-check codes, modeling of gene regulatory networks, gene finding and diagnosis of diseases, and graphical models for protein structure.
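The sum-product idea behind belief propagation on a factor graph, mentioned under "Other types", can be sketched on a tiny chain x1 – f12 – x2 – f23 – x3 of binary variables. All factor values here are illustrative; the point is that passing messages along the chain recovers the same marginal as brute-force summation.

```python
import itertools

# Illustrative factor tables for a chain factor graph over binary variables.
g1 = [0.4, 0.6]                    # unary factor on x1
f12 = [[0.9, 0.1], [0.3, 0.7]]     # pairwise factor f12[x1][x2]
f23 = [[0.5, 0.5], [0.8, 0.2]]     # pairwise factor f23[x2][x3]

# Belief propagation on a chain: forward messages variable -> factor -> variable.
m_x1_f12 = g1                                                  # message from x1 into f12
m_f12_x2 = [sum(m_x1_f12[i] * f12[i][j] for i in (0, 1)) for j in (0, 1)]
m_f23_x3 = [sum(m_f12_x2[j] * f23[j][k] for j in (0, 1)) for k in (0, 1)]
Z = sum(m_f23_x3)
belief_x3 = [m / Z for m in m_f23_x3]          # normalized marginal of x3

# Brute-force marginal for comparison: sum the full factor product.
raw = [sum(g1[i] * f12[i][j] * f23[j][k]
           for i, j in itertools.product((0, 1), repeat=2)) for k in (0, 1)]
brute = [r / sum(raw) for r in raw]
assert all(abs(b - c) < 1e-12 for b, c in zip(belief_x3, brute))
```

On tree-structured factor graphs this message passing is exact; on graphs with cycles the same updates give loopy belief propagation, which is only approximate.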
== See also ==

* Belief propagation
* Structural equation model
Wikipedia/Graphical_models
In electrical engineering, statistical computing and bioinformatics, the Baum–Welch algorithm is a special case of the expectation–maximization algorithm used to find the unknown parameters of a hidden Markov model (HMM). It makes use of the forward-backward algorithm to compute the statistics for the expectation step. The Baum–Welch algorithm, the primary method for inference in hidden Markov models, is numerically unstable due to its recursive calculation of joint probabilities. As the number of variables grows, these joint probabilities become increasingly small, leading to the forward recursions rapidly approaching values below machine precision.

== History ==

The Baum–Welch algorithm was named after its inventors Leonard E. Baum and Lloyd R. Welch. The algorithm and hidden Markov models were first described in a series of articles by Baum and his peers at the IDA Center for Communications Research, Princeton in the late 1960s and early 1970s. One of the first major applications of HMMs was to the field of speech processing. In the 1980s, HMMs were emerging as a useful tool in the analysis of biological systems and information, in particular genetic information. They have since become an important tool in the probabilistic modeling of genomic sequences.

== Description ==

A hidden Markov model describes the joint probability of a collection of "hidden" and observed discrete random variables. It relies on the assumption that the i-th hidden variable, given the (i − 1)-th hidden variable, is independent of previous hidden variables, and that the current observation variables depend only on the current hidden state. The Baum–Welch algorithm uses the well-known EM algorithm to find the maximum likelihood estimate of the parameters of a hidden Markov model given a set of observed feature vectors.

Let X_t be a discrete hidden random variable with N possible values (i.e. we assume there are N states in total).
We assume that P(X_t | X_{t−1}) is independent of time t, which leads to the definition of the time-independent stochastic transition matrix

A = {a_{ij}} = P(X_t = j | X_{t−1} = i).

The initial state distribution (i.e. when t = 1) is given by

π_i = P(X_1 = i).

The observation variables Y_t can take one of K possible values. We also assume the observation given the "hidden" state is time independent. The probability of a certain observation y_i at time t for state X_t = j is given by

b_j(y_i) = P(Y_t = y_i | X_t = j).

Taking into account all the possible values of Y_t and X_t, we obtain the N × K matrix B = {b_j(y_i)}, where b_j ranges over all the possible states and y_i over all the observations.

An observation sequence is given by Y = (Y_1 = y_1, Y_2 = y_2, …, Y_T = y_T).

Thus we can describe a hidden Markov chain by θ = (A, B, π). The Baum–Welch algorithm finds a local maximum for θ* = argmax_θ P(Y | θ) (i.e. the HMM parameters θ that maximize the probability of the observation).

=== Algorithm ===

Set θ = (A, B, π) with random initial conditions.
They can also be set using prior information about the parameters if it is available; this can speed up the algorithm and also steer it toward the desired local maximum.

==== Forward procedure ====

Let α_i(t) = P(Y_1 = y_1, …, Y_t = y_t, X_t = i | θ), the probability of seeing the observations y_1, y_2, …, y_t and being in state i at time t. This is found recursively:

α_i(1) = π_i b_i(y_1),
α_i(t+1) = b_i(y_{t+1}) ∑_{j=1}^{N} α_j(t) a_{ji}.

Since this series converges exponentially to zero, the algorithm will numerically underflow for longer sequences. However, this can be avoided in a slightly modified algorithm by scaling α in the forward procedure and β in the backward procedure below.

==== Backward procedure ====

Let β_i(t) = P(Y_{t+1} = y_{t+1}, …, Y_T = y_T | X_t = i, θ), that is, the probability of the ending partial sequence y_{t+1}, …, y_T given starting state i at time t. We calculate β_i(t) recursively:

β_i(T) = 1,
β_i(t) = ∑_{j=1}^{N} β_j(t+1) a_{ij} b_j(y_{t+1}).
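The forward and backward recursions, including the per-step scaling that prevents underflow, can be sketched in Python. The two-state HMM below is a toy example; all parameter values and the observation sequence are made up.

```python
import math

# Toy HMM parameters (illustrative values only).
A  = [[0.5, 0.5], [0.3, 0.7]]      # transition a[i][j] = P(X_{t+1}=j | X_t=i)
B  = [[0.3, 0.7], [0.8, 0.2]]      # emission   b[i][k] = P(Y_t=k | X_t=i)
pi = [0.2, 0.8]                    # initial    pi[i]   = P(X_1=i)
y  = [0, 1, 1, 0]                  # observed symbol indices
N, T = len(pi), len(y)

# Forward pass, normalizing alpha at every step to avoid underflow.
alpha, scale = [], []
a_t = [pi[i] * B[i][y[0]] for i in range(N)]
for t in range(T):
    if t > 0:
        a_t = [B[i][y[t]] * sum(alpha[t - 1][j] * A[j][i] for j in range(N))
               for i in range(N)]
    c = sum(a_t)                   # scaling constant c_t
    scale.append(c)
    alpha.append([a / c for a in a_t])

# Backward pass, divided by the same scaling constants.
beta = [[1.0] * N for _ in range(T)]
for t in range(T - 2, -1, -1):
    beta[t] = [sum(A[i][j] * B[j][y[t + 1]] * beta[t + 1][j] for j in range(N))
               / scale[t + 1] for i in range(N)]

# The scaling constants recover the likelihood: log P(Y|θ) = Σ_t log c_t.
loglik = sum(math.log(c) for c in scale)
```

With this scaling the scaled α and β stay in a numerically safe range for sequences of any length, while the product of the scale factors still equals P(Y | θ).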
==== Update ====

We can now calculate the temporary variables, according to Bayes' theorem:

γ_i(t) = P(X_t = i | Y, θ) = P(X_t = i, Y | θ) / P(Y | θ) = α_i(t) β_i(t) / ∑_{j=1}^{N} α_j(t) β_j(t),

which is the probability of being in state i at time t given the observed sequence Y and the parameters θ, and

ξ_{ij}(t) = P(X_t = i, X_{t+1} = j | Y, θ) = P(X_t = i, X_{t+1} = j, Y | θ) / P(Y | θ) = α_i(t) a_{ij} β_j(t+1) b_j(y_{t+1}) / ∑_{k=1}^{N} ∑_{w=1}^{N} α_k(t) a_{kw} β_w(t+1) b_w(y_{t+1}),

which is the probability of being in states i and j at times t and t+1 respectively, given the observed sequence Y and parameters θ.

The denominators of γ_i(t) and ξ_{ij}(t) are the same; they represent the probability of making the observation Y given the parameters θ.

The parameters of the hidden Markov model θ can now be updated:

π_i* = γ_i(1),

which is the expected frequency spent in state i at time 1.
a_{ij}* = ∑_{t=1}^{T−1} ξ_{ij}(t) / ∑_{t=1}^{T−1} γ_i(t),

which is the expected number of transitions from state i to state j compared to the expected total number of transitions away from state i. To clarify, "transitions away from state i" does not mean transitions to a different state j, but to any state, including i itself; this is equivalent to the expected number of times state i is occupied in the sequence from t = 1 to t = T − 1.

b_i*(v_k) = ∑_{t=1}^{T} 1_{y_t = v_k} γ_i(t) / ∑_{t=1}^{T} γ_i(t),

where 1_{y_t = v_k} is an indicator function equal to 1 if y_t = v_k and 0 otherwise, so that b_i*(v_k) is the expected number of times the output observations have been equal to v_k while in state i, divided by the expected total number of times in state i.

These steps are now repeated iteratively until a desired level of convergence.

Note: it is possible to over-fit a particular data set; that is, P(Y | θ_final) > P(Y | θ_true). The algorithm also does not guarantee a global maximum.

==== Multiple sequences ====

The algorithm described thus far assumes a single observed sequence Y = y_1, …, y_T. However, in many situations, there are several observed sequences: Y_1, …, Y_R. In this case, the information from all of the observed sequences must be used in the update of the parameters A, π, and b.
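The single-sequence update step above (γ, ξ, and the re-estimation formulas) can be sketched as one full Baum–Welch iteration. The sketch uses unscaled forward/backward recursions, which is fine for a short toy sequence; all parameter values are made up.

```python
# One Baum–Welch iteration on a toy two-state, two-symbol HMM (made-up values).
A  = [[0.5, 0.5], [0.3, 0.7]]
B  = [[0.3, 0.7], [0.8, 0.2]]
pi = [0.2, 0.8]
y  = [0, 1, 1, 0]
N, K, T = 2, 2, len(y)

# Unscaled forward and backward passes.
alpha = [[pi[i] * B[i][y[0]] for i in range(N)]]
for t in range(1, T):
    alpha.append([B[i][y[t]] * sum(alpha[t - 1][j] * A[j][i] for j in range(N))
                  for i in range(N)])
beta = [[1.0] * N for _ in range(T)]
for t in range(T - 2, -1, -1):
    beta[t] = [sum(A[i][j] * B[j][y[t + 1]] * beta[t + 1][j] for j in range(N))
               for i in range(N)]
PY = sum(alpha[T - 1])             # P(Y | θ)

# Temporary variables gamma and xi.
gamma = [[alpha[t][i] * beta[t][i] / PY for i in range(N)] for t in range(T)]
xi = [[[alpha[t][i] * A[i][j] * B[j][y[t + 1]] * beta[t + 1][j] / PY
        for j in range(N)] for i in range(N)] for t in range(T - 1)]

# Re-estimated parameters.
pi_new = gamma[0][:]
A_new = [[sum(xi[t][i][j] for t in range(T - 1)) /
          sum(gamma[t][i] for t in range(T - 1)) for j in range(N)]
         for i in range(N)]
B_new = [[sum(g[i] for t, g in enumerate(gamma) if y[t] == k) /
          sum(g[i] for g in gamma) for k in range(K)] for i in range(N)]
```

By the EM argument, iterating this update never decreases P(Y | θ), though it converges only to a local maximum.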
Assuming that we have computed γ_{ir}(t) and ξ_{ijr}(t) for each sequence y_{1,r}, …, y_{N_r,r}, the parameters can now be updated:

π_i* = (1/R) ∑_{r=1}^{R} γ_{ir}(1),

a_{ij}* = ∑_{r=1}^{R} ∑_{t=1}^{T−1} ξ_{ijr}(t) / ∑_{r=1}^{R} ∑_{t=1}^{T−1} γ_{ir}(t),

b_i*(v_k) = ∑_{r=1}^{R} ∑_{t=1}^{T} 1_{y_{t,r} = v_k} γ_{ir}(t) / ∑_{r=1}^{R} ∑_{t=1}^{T} γ_{ir}(t),

where 1_{y_{t,r} = v_k} is an indicator function equal to 1 if y_{t,r} = v_k and 0 otherwise.

== Example ==

Suppose we have a chicken from which we collect eggs at noon every day. Whether or not the chicken has laid eggs for collection depends on some unknown factors that are hidden. For simplicity, we can assume that the chicken is always in one of two states that influence whether it lays eggs, and that this state only depends on the state on the previous day. We do not know the initial state, the transition probabilities between the two states, or the probability that the chicken lays an egg given a particular state. To start, we first guess the transition and emission matrices.

We then take a set of observations (E = eggs, N = no eggs): N, N, N, N, N, E, E, N, N, N. This gives us a set of observed transitions between days: NN, NN, NN, NN, NE, EE, EN, NN, NN. The next step is to estimate a new transition matrix.
For example, the probability of the sequence NN with the state being S_1 then S_2 is given by

P(S_1) · P(N | S_1) · P(S_1 → S_2) · P(N | S_2).

Thus the new estimate for the S_1 to S_2 transition is now 0.22 / 2.4234 = 0.0908 (referred to as "pseudo probabilities" in the following tables). We then calculate the S_2 to S_1, S_2 to S_2 and S_1 to S_1 transition probabilities and normalize so they add to 1. This gives us the updated transition matrix.

Next, we want to estimate a new emission matrix. The new estimate for the E emission coming from S_1 is now 0.2394 / 0.2730 = 0.8769. This allows us to calculate the emission matrix as described above in the algorithm, by adding up the probabilities for the respective observed sequences. We then repeat for N coming from S_1, and for N and E coming from S_2, and normalize.

To estimate the initial probabilities, we assume all sequences start with the hidden state S_1, calculate the highest probability, and then repeat for S_2. Again we then normalize to give an updated initial vector. Finally, we repeat these steps until the resulting probabilities converge satisfactorily.

== Applications ==

=== Speech recognition ===

Hidden Markov models were first applied to speech recognition by James K. Baker in 1975. Continuous speech recognition occurs by the following steps, modeled by an HMM.
Feature analysis is first undertaken on temporal and/or spectral features of the speech signal, producing an observation vector. The feature is then compared to all sequences of the speech recognition units. These units could be phonemes, syllables, or whole-word units. A lexicon decoding system is applied to constrain the paths investigated, so that only words in the system's lexicon (word dictionary) are investigated. Similar to the lexicon decoding, the system path is further constrained by the rules of grammar and syntax. Finally, semantic analysis is applied and the system outputs the recognized utterance. A limitation of many HMM applications to speech recognition is that the current state only depends on the state at the previous time-step, which is unrealistic for speech, as dependencies often span several time-steps. The Baum–Welch algorithm also has extensive applications in solving HMMs used in the field of speech synthesis.

=== Cryptanalysis ===

The Baum–Welch algorithm is often used to estimate the parameters of HMMs when deciphering hidden or noisy information, and consequently is often used in cryptanalysis. In data security, an observer would like to extract information from a data stream without knowing all the parameters of the transmission. This can involve reverse engineering a channel encoder. HMMs, and as a consequence the Baum–Welch algorithm, have also been used to identify spoken phrases in encrypted VoIP calls. In addition, HMM cryptanalysis is an important tool for automated investigations of cache-timing data. It allows for the automatic discovery of critical algorithm state, for example key values.

=== Applications in bioinformatics ===

==== Finding genes ====

===== Prokaryotic =====

The GLIMMER (Gene Locator and Interpolated Markov ModelER) software was an early gene-finding program used for the identification of coding regions in prokaryotic DNA.
GLIMMER uses interpolated Markov models (IMMs) to identify the coding regions and distinguish them from the noncoding DNA. The latest release (GLIMMER3) has been shown to have increased specificity and accuracy compared with its predecessors with regard to predicting translation initiation sites, demonstrating an average 99% accuracy in locating 3' locations compared to confirmed genes in prokaryotes.

===== Eukaryotic =====

The GENSCAN webserver is a gene locator capable of analyzing eukaryotic sequences up to one million base-pairs (1 Mbp) long. GENSCAN utilizes a general inhomogeneous, three-periodic, fifth-order Markov model of DNA coding regions. Additionally, this model accounts for differences in gene density and structure (such as intron lengths) that occur in different isochores. While most integrated gene-finding software (at the time of GENSCAN's release) assumed input sequences contained exactly one gene, GENSCAN solves the general case where partial, complete, or multiple genes (or even no gene at all) are present. GENSCAN was shown to exactly predict exon location with 90% accuracy and 80% specificity compared to an annotated database.

==== Copy-number variation detection ====

Copy-number variations (CNVs) are an abundant form of genome structure variation in humans. A discrete-valued bivariate HMM (dbHMM) was used, assigning chromosomal regions to seven distinct states: unaffected regions, deletions, duplications and four transition states. Solving this model using Baum–Welch demonstrated the ability to predict the location of CNV breakpoints to approximately 300 bp from micro-array experiments. This magnitude of resolution enables more precise correlations between different CNVs and across populations than previously possible, allowing the study of CNV population frequencies. It also demonstrated a direct inheritance pattern for a particular CNV.
== Implementations ==

* Accord.NET in C#
* ghmm, a C library with Python bindings that supports both discrete and continuous emissions
* hmmlearn, a Python library that implements Baum–Welch on various discrete-time HMMs
* Jajapy, a Python library that implements Baum–Welch on various kinds of Markov models (HMM, MC, MDP, CTMC)
* HiddenMarkovModels.jl, a package for Julia
* HMMFit function in the RHmm package for R
* hmmtrain in MATLAB
* rustbio in Rust

== See also ==

* Viterbi algorithm
* Hidden Markov model
* EM algorithm
* Maximum likelihood
* Speech recognition
* Bioinformatics
* Cryptanalysis
Wikipedia/Baum–Welch_algorithm
In computer science, the precision of a numerical quantity is a measure of the detail in which the quantity is expressed. This is usually measured in bits, but sometimes in decimal digits. It is related to precision in mathematics, which describes the number of digits that are used to express a value.

Some of the standardized precision formats are:

* Half-precision floating-point format
* Single-precision floating-point format
* Double-precision floating-point format
* Quadruple-precision floating-point format
* Octuple-precision floating-point format

Of these, the octuple-precision format is rarely used. The single- and double-precision formats are most widely used and supported on nearly all platforms. The use of half-precision and minifloat formats has been increasing, especially in the field of machine learning, since many machine learning algorithms are inherently error-tolerant.

== Rounding errors ==

Precision is often the source of rounding errors in computation. The number of bits used to store a number will often cause some loss of accuracy. An example would be to store "sin(0.1)" in the IEEE single-precision floating point standard. The error is then often magnified as subsequent computations are made using the data (although it can also be reduced).

== See also ==

* Approximate computing
* Arbitrary-precision arithmetic
* Extended precision
* Granularity
* IEEE 754 (IEEE floating point standard)
* Integer (computer science)
* Minifloat (8-bit and less)
* Significant figures
* Truncation
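The rounding behaviour discussed under "Rounding errors" is easy to observe in Python, whose `float` is an IEEE 754 double. The sketch below shows both the initial representation error (0.1 has no exact binary form) and how limited precision can silently lose information in later computation.

```python
import math

# Storing 0.1 already rounds: 0.1 + 0.2 is not exactly 0.3, only very close.
assert 0.1 + 0.2 != 0.3
assert math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-15)

# Precision limits later arithmetic too: at magnitude 1e16 the spacing between
# adjacent doubles exceeds 1, so adding 1.0 is rounded away entirely.
x = 1e16
assert (x + 1.0) - x == 0.0
```

The second assertion illustrates why such errors are "often magnified as subsequent computations are made": once the low-order bits are rounded off, no later step can recover them.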
Wikipedia/Precision_(computer_science)
Deep Learning Super Sampling (DLSS) is a suite of real-time deep learning image enhancement and upscaling technologies developed by Nvidia that are available in a number of video games. The goal of these technologies is to allow the majority of the graphics pipeline to run at a lower resolution for increased performance, and then infer a higher-resolution image from this that approximates the same level of detail as if the image had been rendered at this higher resolution. This allows for higher graphical settings and/or frame rates for a given output resolution, depending on user preference. All generations of DLSS are available on all RTX-branded cards from Nvidia in supported titles. However, the Frame Generation feature is only supported on 40-series GPUs or newer, and Multi Frame Generation is only available on 50-series GPUs.

== History ==

Nvidia advertised DLSS as a key feature of the GeForce 20 series cards when they launched in September 2018. At that time, the results were limited to a few video games, namely Battlefield V and Metro Exodus, because the algorithm had to be trained specifically on each game to which it was applied, and the results were usually not as good as simple resolution upscaling. In 2019, the video game Control shipped with real-time ray tracing and an improved version of DLSS, which did not use the Tensor Cores. In April 2020, Nvidia advertised and shipped an improved version of DLSS named DLSS 2.0 with driver version 445.75. DLSS 2.0 was available for a few existing games, including Control and Wolfenstein: Youngblood, and would later be added to many newly released games and game engines such as Unreal Engine and Unity. This time Nvidia said that it used the Tensor Cores again, and that the AI did not need to be trained specifically on each game. Despite sharing the DLSS branding, the two iterations of DLSS differ significantly and are not backwards-compatible.
In January 2025, Nvidia stated that there were over 540 games and apps supporting DLSS, and that over 80% of Nvidia RTX users activate DLSS. In March 2025, there were more than 100 games that support DLSS 4, according to Nvidia. By May 2025, over 125 games supported DLSS 4.

=== Release history ===

== Quality presets ==

When using DLSS, depending on the game, users have access to various quality presets, in addition to the option to set the internally rendered, upscaled resolution manually.

== Implementation ==

=== DLSS 1.0 ===

The first iteration of DLSS is a predominantly spatial image upscaler with two stages, both relying on convolutional auto-encoder neural networks. The first stage is an image enhancement network which uses the current frame and motion vectors to perform edge enhancement and spatial anti-aliasing. The second stage is an image upscaling step which uses the single raw, low-resolution frame to upscale the image to the desired output resolution. Using just a single frame for upscaling means the neural network itself must generate a large amount of new information to produce the high-resolution output; this can result in slight hallucinations, such as leaves that differ in style from the source content.

The neural networks are trained on a per-game basis by generating a "perfect frame" using traditional supersampling to 64 samples per pixel, as well as the motion vectors for each frame. The data collected must be as comprehensive as possible, including as many levels, times of day, graphical settings, resolutions, etc. as possible. This data is also augmented using common augmentations such as rotations, colour changes, and random noise to help generalize the test data. Training is performed on Nvidia's Saturn V supercomputer.
This first iteration received a mixed response, with many criticizing the often soft appearance and artifacts in certain situations, likely a side effect of the limited data from using only a single frame as input to neural networks that could not be trained to perform optimally in all scenarios and edge cases. Nvidia also demonstrated the ability of the auto-encoder networks to learn to recreate depth-of-field and motion blur, although this functionality has never been included in a publicly released product.

=== DLSS 2.0 ===

DLSS 2.0 is a temporal anti-aliasing upsampling (TAAU) implementation, using data from previous frames extensively through sub-pixel jittering to resolve fine detail and reduce aliasing. The data DLSS 2.0 collects includes: the raw low-resolution input, motion vectors, depth buffers, and exposure/brightness information. It can also be used as a simpler TAA implementation where the image is rendered at 100% resolution rather than being upsampled by DLSS; Nvidia brands this as DLAA (Deep Learning Anti-Aliasing).

TAA(U) is used in many modern video games and game engines; however, all previous implementations have used some form of manually written heuristics to prevent temporal artifacts such as ghosting and flickering. One example is neighborhood clamping, which forcefully prevents samples collected in previous frames from deviating too much from nearby pixels in newer frames. This helps to identify and fix many temporal artifacts, but deliberately removing fine details in this way is analogous to applying a blur filter, and thus the final image can appear blurry when using this method.

DLSS 2.0 uses a convolutional auto-encoder neural network trained to identify and fix temporal artifacts, instead of the manually programmed heuristics mentioned above. Because of this, DLSS 2.0 can generally resolve detail better than other TAA and TAAU implementations, while also removing most temporal artifacts.
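The neighborhood-clamping heuristic described above can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration on 1-D grayscale values; real TAA implementations clamp color samples (often in a YCoCg-style space) over a 2-D neighborhood.

```python
# Minimal sketch of TAA neighborhood clamping (illustrative only).
def clamp_history(current, history, x):
    """Clamp the accumulated history sample at pixel x to the min/max of the
    current frame's 3-pixel neighborhood around x."""
    neighborhood = current[max(x - 1, 0):x + 2]
    lo, hi = min(neighborhood), max(neighborhood)
    return min(max(history[x], lo), hi)

current = [0.2, 0.25, 0.3, 0.9, 0.85]   # new frame: an edge has appeared
history = [0.2, 0.2, 0.2, 0.2, 0.2]     # stale accumulation from previous frames

# Where the scene changed, the stale history (0.2) is pulled into the new
# neighborhood's range, suppressing ghosting at the cost of fine detail.
resolved = [clamp_history(current, history, x) for x in range(len(current))]
```

This shows the trade-off the article describes: the clamp rejects out-of-range history (reducing ghosting) but also discards legitimate sub-range detail, which is the blurring cost that learned approaches such as DLSS 2.0 aim to avoid.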
This is why DLSS 2.0 can sometimes produce a sharper image than rendering at higher, or even native, resolutions using traditional TAA. However, no temporal solution is perfect, and artifacts (ghosting in particular) are still visible in some scenarios when using DLSS 2.0. Because temporal artifacts occur in most art styles and environments in broadly the same way, the neural network that powers DLSS 2.0 does not need to be retrained when being used in different games. Despite this, Nvidia does frequently ship new minor revisions of DLSS 2.0 with new titles, which could suggest that some minor training optimizations are performed as games are released, although Nvidia does not provide changelogs for these minor revisions to confirm this. The main advancements compared to DLSS 1.0 include: significantly improved detail retention, a generalized neural network that does not need to be re-trained per game, and ~2x less overhead (~1–2 ms vs ~2–4 ms). It should also be noted that forms of TAAU such as DLSS 2.0 are not upscalers in the same sense as techniques such as ESRGAN or DLSS 1.0, which attempt to create new information from a low-resolution source; instead, TAAU works to recover data from previous frames, rather than creating new data. In practice, this means low-resolution textures in games will still appear low-resolution when using current TAAU techniques. This is why Nvidia recommends game developers use higher-resolution textures than they would normally for a given rendering resolution by applying a mip-map bias when DLSS 2.0 is enabled. === DLSS 3.0 === DLSS 3.0 augments DLSS 2.0 with motion interpolation. The DLSS Frame Generation algorithm takes two rendered frames from the rendering pipeline and generates a new frame that smoothly transitions between them, so for every frame rendered, one additional frame is generated. DLSS 3.0 makes use of a new-generation Optical Flow Accelerator (OFA) included in Ada Lovelace generation RTX GPUs.
The new OFA is faster and more accurate than the OFA already available in previous Turing and Ampere RTX GPUs. This results in DLSS 3.0 being exclusive to the RTX 40 series. At release, DLSS 3.0 does not work for VR displays. === DLSS 3.5 === DLSS 3.5 adds Ray Reconstruction, replacing multiple denoising algorithms with a single AI model trained on five times more data than DLSS 3. Ray Reconstruction is available on all RTX GPUs and first targeted games with path tracing (aka "full ray tracing"), including Cyberpunk 2077's Phantom Liberty DLC, Portal with RTX, and Alan Wake 2. === DLSS 4.0 === The fourth generation of Deep Learning Super Sampling (DLSS) was unveiled alongside the GeForce RTX 50 series. DLSS 4 upscaling uses a new vision transformer-based model for enhanced image quality with reduced ghosting and greater image stability in motion compared to the previous convolutional neural network (CNN) model. DLSS 4 allows a greater number of frames to be generated and interpolated based on a single traditionally rendered frame. This form of frame generation, called Multi Frame Generation, is exclusive to the GeForce RTX 50 series, while the GeForce RTX 40 series is limited to one interpolated frame per traditionally rendered frame. According to Nvidia, this technique will increase performance by up to 800% while retaining low latency with Nvidia Reflex. Nvidia claims that the DLSS 4 Frame Generation model uses 30% less video memory, citing Warhammer 40,000: Darktide using 400 MB less memory at 4K resolution with Frame Generation enabled. Nvidia claims that 75 games will integrate DLSS 4 Multi Frame Generation at launch, including Alan Wake 2, Cyberpunk 2077, Indiana Jones and the Great Circle, and Star Wars Outlaws. === Manually upgrading DLSS support === Users can manually replace the DLLs in games to support a newer version of DLSS. DLSS Swapper, an open source utility, can automatically do this for all installed games.
Replacing DLL files cannot add DLSS support or features to games that do not already implement them, though some mods can add frame generation support. == Anti-aliasing == DLSS requires and applies its own anti-aliasing method. Thus, depending on the game and quality setting used, using DLSS may improve image quality even over native resolution rendering. It operates on similar principles to TAA. Like TAA, it uses information from past frames to produce the current frame. Unlike TAA, DLSS does not sample every pixel in every frame. Instead, it samples different pixels in different frames and uses pixels sampled in past frames to fill in the unsampled pixels in the current frame. DLSS uses machine learning to combine samples in the current frame and past frames, and it can be thought of as an advanced and superior TAA implementation made possible by the available tensor cores. Nvidia also offers Deep Learning Anti-Aliasing (DLAA), which provides the same AI-driven anti-aliasing DLSS uses, but without any upscaling or downscaling functionality. == Architecture == With the exception of the shader-core version implemented in Control, DLSS is only available on GeForce RTX 20, GeForce RTX 30, GeForce RTX 40, GeForce RTX 50, and Quadro RTX series of video cards, using dedicated AI accelerators called Tensor Cores. Tensor Cores have been available since the Nvidia Volta GPU microarchitecture, which was first used on the Tesla V100 line of products. They perform fused multiply-add (FMA) operations that are used extensively in neural network calculations for applying a large series of multiplications on weights, followed by the addition of a bias. Tensor cores can operate on FP16, INT8, INT4, and INT1 data types. Each core can do 1024 bits of FMA operations per clock, so 1024 INT1, 256 INT4, 128 INT8, and 64 FP16 operations per clock per tensor core, and most Turing GPUs have a few hundred tensor cores.
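The per-clock throughput figures above follow directly from the 1024-bit FMA datapath: halving the element width doubles the number of operations per clock. A minimal sketch of that arithmetic (illustrative only; the constant and function names are not from any Nvidia API):

```python
# Ops per clock per tensor core, derived from a 1024-bit FMA datapath:
# narrower data types pack more operations into the same width.
FMA_BITS_PER_CLOCK = 1024

def ops_per_clock(bits_per_element: int) -> int:
    """Operations per clock for a given element width in bits."""
    return FMA_BITS_PER_CLOCK // bits_per_element

for name, bits in [("INT1", 1), ("INT4", 4), ("INT8", 8), ("FP16", 16)]:
    print(f"{name}: {ops_per_clock(bits)} ops/clock")
```

Running this reproduces the figures quoted in the text: 1024 INT1, 256 INT4, 128 INT8, and 64 FP16 operations per clock per core.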
The Tensor Cores use CUDA Warp-Level Primitives on 32 parallel threads to take advantage of their parallel architecture. A Warp is a set of 32 threads which are configured to execute the same instruction. Since Windows 10 version 1903, Microsoft Windows has provided DirectML as one part of DirectX to support Tensor Cores. == Reception == Particularly in early versions of DLSS, users reported blurry frames. Andrew Edelsten, an employee at Nvidia, therefore commented on the problem in a blog post in 2019, promising that Nvidia was working on improving the technology, and clarified that the DLSS AI algorithm was mainly trained on 4K image material. The particularly blurred images that DLSS produces at lower resolutions, such as Full HD, result from the algorithm having far less image information available to calculate an appropriate image than at higher resolutions like 4K. The use of DLSS Frame Generation may lead to increased input latency, as well as visual artifacts. It has also been criticized that by implementing DLSS in their games, game developers no longer have an incentive to optimize them so that they also run smoothly at native resolution on modern PC hardware. For example, for the game Alan Wake 2 in 4K resolution at the highest graphics settings with ray tracing enabled, the use of DLSS in Performance mode is recommended even with graphics cards such as the Nvidia GeForce RTX 4080 in order to achieve 60 fps. The transformer-based AI upscaling model introduced with DLSS 4 received praise for its improved image quality with regard to increased stability, reduced ghosting, better anti-aliasing, and higher level of detail, as well as its backward compatibility and higher training scalability regarding future improvements.
== See also == FidelityFX Super Resolution – competing technology from AMD Intel XeSS – competing technology from Intel PlayStation Spectral Super Resolution – similar technology from PlayStation == References == == External links == Official website DLSS on the Nvidia developer website
Wikipedia/Tensor_Core
Thermal design power (TDP), also known as thermal design point, is the maximum amount of heat that a computer component (like a CPU, GPU or system on a chip) can generate and that its cooling system is designed to dissipate during normal operation at a non-turbo clock rate (base frequency). Some sources state that the peak power rating for a microprocessor is usually 1.5 times the TDP rating. == Calculation == The average CPU power (ACP) is the power consumption of central processing units, especially server processors, under "average" daily usage as defined by Advanced Micro Devices (AMD) for use in its line of processors based on the K10 microarchitecture (Opteron 8300 and 2300 series processors). Intel's thermal design power (TDP), used for Pentium and Core 2 processors, measures the power consumption under high workload; it is numerically somewhat higher than the "average" ACP rating of the same processor. According to AMD, the ACP rating includes the power consumption when running several benchmarks, including TPC-C, SPECcpu2006, SPECjbb2005 and STREAM Benchmark (memory bandwidth), which AMD said is an appropriate method of power consumption measurement for data centers and server-intensive workload environments. AMD said that the ACP and TDP values of the processors will both be stated and do not replace one another. Barcelona and later server processors have the two power figures. The TDP of a CPU has been underestimated in some cases, leading to certain real applications (typically strenuous, such as video encoding or games) causing the CPU to exceed its specified TDP and resulting in overloading the computer's cooling system. In this case, CPUs either cause a system failure (a "therm-trip") or throttle their speed down. Most modern processors will cause a therm-trip only upon a catastrophic cooling failure, such as a failed fan or an incorrectly mounted heat sink.
For example, a laptop's CPU cooling system may be designed for a 20 W TDP, which means that it can dissipate up to 20 watts of heat without exceeding the maximum junction temperature for the laptop's CPU. A cooling system can do this using an active cooling method (e.g. conduction coupled with forced convection), such as a heat sink with a fan, or either of the two passive cooling methods: thermal radiation or conduction. Typically, a combination of these methods is used. Since safety margins and the definition of what constitutes a real application vary among manufacturers, TDP values between different manufacturers cannot be accurately compared (a processor with a TDP of, for example, 100 W will almost certainly use more power at full load than processors with a fraction of said TDP, and very probably more than processors with lower TDP from the same manufacturer, but it may or may not use more power than a processor from a different manufacturer with a not excessively lower TDP, such as 90 W). Additionally, TDPs are often specified for families of processors, with the low-end models usually using significantly less power than those at the high end of the family. Until around 2006, AMD reported the maximum power draw of its processors as TDP. Intel changed this practice with the introduction of its Conroe family of processors. Intel calculates a specified chip's TDP according to the amount of power the computer's fan and heatsink need to be able to dissipate while the chip is under sustained load. Actual power usage can be higher or (much) lower than TDP, but the figure is intended to give guidance to engineers designing cooling solutions for their products. In particular, Intel's measurement also does not fully take into account Intel Turbo Boost due to the default time limits, while AMD's does because AMD Turbo Core always tries to push for the maximum power.
== Alternatives == TDP specifications for some processors may allow them to work under multiple different power levels, depending on the usage scenario, available cooling capacities and desired power consumption. Technologies that provide such variable TDPs include Intel's configurable TDP (cTDP) and scenario design power (SDP), and AMD's TDP power cap. Configurable TDP (cTDP), also known as programmable TDP or TDP power cap, is an operating mode of later generations of Intel mobile processors (as of January 2014) and AMD processors (as of June 2012) that allows adjustments in their TDP values. By modifying the processor behavior and its performance levels, power consumption of a processor can be changed, altering its TDP at the same time. That way, a processor can operate at higher or lower performance levels, depending on the available cooling capacities and desired power consumption. cTDP typically provides (but is not limited to) three operating modes: Nominal TDP – the processor's rated frequency and TDP. cTDP down – when a cooler or quieter mode of operation is desired, this mode specifies a lower TDP and lower guaranteed frequency versus the nominal mode. cTDP up – when extra cooling is available, this mode specifies a higher TDP and higher guaranteed frequency versus the nominal mode. For example, some of the mobile Haswell processors support cTDP up, cTDP down, or both modes. As another example, some of the AMD Opteron processors and Kaveri APUs can be configured for lower TDP values. IBM's POWER8 processor implements a similar power-capping functionality through its embedded on-chip controller (OCC). Intel introduced scenario design power (SDP) for some low-power Y-series processors in 2013. It is described as "an additional thermal reference point meant to represent thermally relevant device usage in real-world environmental scenarios."
As a power rating, SDP is not an additional power state of a processor; it states the average power consumption of a processor using a certain mix of benchmark programs to simulate "real-world" scenarios. == Ambiguities of the thermal design power parameter == As some authors and users have observed, the thermal design power (TDP) rating is an ambiguous parameter. In fact, different manufacturers define the TDP using different calculation methods and different operating conditions, keeping these details almost undisclosed (with very few exceptions). This makes it highly problematic (if not impossible) to reasonably compare similar devices made by different manufacturers based on their TDP, and to optimize the design of a cooling system in terms of both heat management and cost. === Thermal management fundamentals === To better understand the problem, we must recall the basic concepts underlying thermal management and computer cooling. Let us consider the thermal conduction path from the CPU case to the ambient air through a heat sink, with: Pd (W) = thermal power generated by the CPU, to be dissipated into the ambient air through a suitable heat sink. It corresponds to the total power drawn from the direct-current supply rails of the CPU. Rca (°C/W) = thermal resistance of the heat sink, between the case of the CPU and the ambient air. Tc (°C) = maximum allowed temperature of the CPU's case (ensuring full performance). Ta (°C) = maximum expected ambient temperature at the inlet of the heat sink fan. All these parameters are linked together by the following equation: ( T c − T a ) = P d ⋅ R c a {\displaystyle (Tc-Ta)=Pd\cdot Rca} Hence, once we know the thermal power to be dissipated (Pd), the maximum allowed case temperature (Tc) of the CPU and the maximum expected ambient temperature (Ta) of the air entering the cooling fans, we can determine the fundamental characteristic of the required heat sink, i.e.
its thermal resistance Rca, as: R c a = ( T c − T a ) P d {\displaystyle Rca={\frac {(Tc-Ta)}{Pd}}} This equation can be rearranged by writing P d = ( T c − T a ) R c a {\displaystyle Pd={\frac {(Tc-Ta)}{Rca}}} wherein Pd can be replaced by the thermal design power (TDP). Note that the heat dissipation path going from the CPU to the ambient air through the printed circuit of the motherboard has a thermal resistance that is orders of magnitude greater than that of the heat sink, and therefore it can be neglected in these computations. === Issues when dealing with the thermal design power (TDP) === Once all the input data is known, the previous formula allows one to choose a CPU heat sink with a suitable thermal resistance Rca between case and ambient air, sufficient to keep the maximum case temperature at or below a predefined value Tc. By contrast, when dealing with the thermal design power (TDP), ambiguities arise because CPU manufacturers usually do not disclose the exact conditions under which this parameter has been defined. The maximum acceptable case temperature Tc needed to obtain the rated performance is usually missing, as is the corresponding ambient temperature Ta and, last but not least, details about the specific computational test workload. For instance, an Intel general support page states briefly that the TDP refers to "the power consumption under the maximum theoretical load". The same page also notes that, starting with the 12th generation of Intel CPUs, the term thermal design power (TDP) has been replaced with processor base power (PBP). In a support page dedicated to the Core i7-7700 processor, Intel defines the TDP as the maximum amount of heat that a processor can produce when running real-life applications, without specifying what these "real-life applications" are.
Another example: in a 2011 white paper where the Xeon processors are compared with AMD's competing devices, Intel defines TDP as the upper point of the thermal profile measured at maximum case temperature, but without specifying what this temperature should be (nor the computing load). It is important to note that all these definitions imply that the CPU is running at the base clock rate (non-turbo). In conclusion: Comparing the TDP between devices of different manufacturers is not very meaningful. The selection of a heat sink may end up with overheating (and reduced CPU performance) or overcooling (an oversized, expensive heat sink), depending on whether one chooses too high or too low a case temperature Tc (respectively, with too low or too high an ambient temperature Ta), or if the CPU operates with different computational loads. A possible approach to ensure a long life of a CPU is to ask the manufacturer for the recommended maximum case temperature Tc and then to oversize the cooling system. For instance, a safety margin taking into account some turbo overclocking could consider a thermal power that is 1.5 times the rated TDP. In any case, the lower the silicon junction temperature, the longer the lifespan of the device, according to an acceleration factor very roughly expressed by means of the Arrhenius equation. === Some disclosed details of AMD's thermal design power (TDP) === In October 2019, the GamersNexus hardware guides showed a table with case and ambient temperature values obtained directly from AMD, describing the TDPs of some Ryzen 5, 7 and 9 CPUs. The formula relating all these parameters, given by AMD, is the usual T D P = ( T c − T a ) / R c a {\displaystyle TDP=(Tc-Ta)/Rca} The declared TDPs of these devices range from 65 W to 105 W; the ambient temperature considered by AMD is +42 °C, and the case temperatures range from +61.8 °C to +69.3 °C, while the case-to-ambient thermal resistances range from 0.189 to 0.420 °C/W.
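The relation between dissipated power, case temperature, ambient temperature and heat sink thermal resistance can be exercised numerically. A small sketch follows: the sizing numbers are made up for illustration (not from any datasheet), and the pairing of the endpoints of AMD's published ranges is inferred from the arithmetic rather than stated explicitly by AMD:

```python
def required_rca(pd_watts, tc, ta):
    """Heat sink sizing: maximum case-to-ambient thermal resistance
    (°C/W) that keeps the case at or below tc when dissipating
    pd_watts into ambient air at ta:  Rca = (Tc - Ta) / Pd."""
    return (tc - ta) / pd_watts

def tdp_from_thermals(tc, ta, rca):
    """The same relation rearranged, as published by AMD:
    TDP = (Tc - Ta) / Rca."""
    return (tc - ta) / rca

# Made-up sizing example: a 95 W part, 70 °C maximum case
# temperature, 40 °C worst-case ambient.
print(f"Rca <= {required_rca(95.0, 70.0, 40.0):.3f} °C/W")

# Cross-check of the AMD Ryzen figures quoted above (Ta = 42 °C),
# pairing the endpoints of the published ranges:
print(round(tdp_from_thermals(61.8, 42.0, 0.189)))  # 105 W class part
print(round(tdp_from_thermals(69.3, 42.0, 0.420)))  # 65 W class part
```

The cross-check reproduces the rated 105 W and 65 W figures, which suggests this pairing of the range endpoints is the one AMD used.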
== See also == Heat generation in integrated circuits Operating temperature Power rating Intel Turbo Boost AMD Turbo Core == References == == External links == Details on AMD Bulldozer: Opterons to Feature Configurable TDP, AnandTech, July 15, 2011, by Johan De Gelas and Kristian Vättö Making x86 Run Cool, April 15, 2001, by Paul DeMone
Wikipedia/Thermal_design_power
Mutation is a genetic operator used to maintain genetic diversity of the chromosomes of a population of an evolutionary algorithm (EA), including genetic algorithms in particular. It is analogous to biological mutation. The classic example of a mutation operator of a binary coded genetic algorithm (GA) involves a probability that an arbitrary bit in a genetic sequence will be flipped from its original state. A common method of implementing the mutation operator involves generating a random variable for each bit in a sequence. This random variable tells whether or not a particular bit will be flipped. This mutation procedure, based on the biological point mutation, is called single point mutation. Other types of mutation operators are commonly used for representations other than binary, such as floating-point encodings or representations for combinatorial problems. The purpose of mutation in EAs is to introduce diversity into the sampled population. Mutation operators are used in an attempt to avoid local minima by preventing the population of chromosomes from becoming too similar to each other, which would slow or even stop convergence to the global optimum. This reasoning also leads most EAs to avoid taking only the fittest of the population in generating the next generation, selecting instead a random (or semi-random) set with a weighting toward those that are fitter. The following requirements apply to all mutation operators used in an EA: every point in the search space must be reachable by one or more mutations; there must be no preference for parts or directions in the search space (no drift); small mutations should be more probable than large ones. For different genome types, different mutation types are suitable. Some mutations are Gaussian, Uniform, Zigzag, Scramble, Insertion, Inversion, Swap, and so on. An overview and more operators than those presented below can be found in the introductory book by Eiben and Smith.
== Bit string mutation == The mutation of bit strings ensues through bit flips at random positions. Example: The probability of a mutation of a bit is 1 l {\displaystyle {\frac {1}{l}}} , where l {\displaystyle l} is the length of the binary vector. Thus, an average of one mutation per individual selected for mutation is obtained. == Mutation of real numbers == Many EAs, such as the evolution strategy or the real-coded genetic algorithms, work with real numbers instead of bit strings. This is due to the good results that have been obtained with this type of coding. The value of a real-valued gene can either be changed or redetermined. A mutation that implements the latter should only ever be used in conjunction with the value-changing mutations, and then only with comparatively low probability, as it can lead to large changes. In practical applications, the value range of each decision variable of the optimisation problem to be solved is usually limited. Accordingly, the values of the associated genes are each restricted to an interval [ x min , x max ] {\displaystyle [x_{\min },x_{\max }]} . Mutations may or may not take these restrictions into account. In the latter case, suitable post-treatment is then required as described below.
=== Mutation without consideration of restrictions === A real number x {\displaystyle x} can be mutated using a normal distribution N ( 0 , σ ) {\displaystyle {\mathcal {N}}(0,\sigma )} by adding the generated random value to the old value of the gene, resulting in the mutated value x ′ {\displaystyle x'} : x ′ = x + N ( 0 , σ ) {\displaystyle x'=x+{\mathcal {N}}(0,\sigma )} In the case of genes with a restricted range of values, it is a good idea to choose the step size of the mutation σ {\displaystyle \sigma } so that it reasonably fits the range [ x min , x max ] {\displaystyle [x_{\min },x_{\max }]} of the gene to be changed, e.g.: σ = x max − x min 6 {\displaystyle \sigma ={\frac {x_{\text{max}}-x_{\text{min}}}{6}}} The step size can also be adjusted to the smaller permissible change range depending on the current value. In any case, however, it is possible that the new value x ′ {\displaystyle x'} of the gene will be outside the permissible range of values. Such a case must be considered a lethal mutation, since the obvious repair of using the respective violated limit as the new value of the gene would lead to a drift. This is because the limit value would then be selected with the entire probability of the values beyond the limit of the value range. The evolution strategy works with real numbers and mutation based on a normal distribution. The step sizes are part of the chromosome and are subject to evolution together with the actual decision variables. === Mutation with consideration of restrictions === One possible form of changing the value of a gene while taking its value range [ x min , x max ] {\displaystyle [x_{\min },x_{\max }]} into account is the mutation relative parameter change of the evolutionary algorithm GLEAM (General Learning Evolutionary Algorithm and Method), in which, as with the mutation presented earlier, small changes are more likely than large ones.
First, a uniformly distributed random decision is made as to whether the current value x {\displaystyle x} should be increased or decreased, and then the corresponding total change interval is determined. Without loss of generality, an increase is assumed for the explanation, and the total change interval is then [ x , x max ] {\displaystyle [x,x_{\max }]} . It is divided into k {\displaystyle k} sub-areas of equal size with the width δ {\displaystyle \delta } , from which k {\displaystyle k} sub-change intervals of different size are formed: i {\displaystyle i} -th sub-change interval: [ x , x + δ ⋅ i ] {\displaystyle [x,x+\delta \cdot i]} with δ = ( x max − x ) k {\displaystyle \delta ={\frac {(x_{\text{max}}-x)}{k}}} and i = 1 , … , k {\displaystyle i=1,\dots ,k} Subsequently, one of the k {\displaystyle k} sub-change intervals is selected uniformly at random, and a random number, also uniformly distributed, is drawn from it as the new value x ′ {\displaystyle x'} of the gene. The resulting summed probabilities of the sub-change intervals yield the probability distribution of the k {\displaystyle k} sub-areas shown in the adjacent figure for the exemplary case of k = 10 {\displaystyle k=10} . This is not a normal distribution as before, but this distribution also clearly favours small changes over larger ones. For larger values of k {\displaystyle k} , such as 10, this mutation is less well suited to tasks where the optimum lies on one of the value range boundaries. This can be remedied by significantly reducing k {\displaystyle k} when a gene value approaches its limits very closely. === Common properties === For both mutation operators for real-valued numbers, the probability of an increase and of a decrease is independent of the current value and is 50% in each case. In addition, small changes are considerably more likely than large ones. For mixed-integer optimization problems, rounding is usually used.
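The bit string mutation and the two real-valued mutation operators described above can be sketched as follows. This is a simplified reading of the text, not a reference implementation; the function names are illustrative and not taken from any library:

```python
import random

def bit_flip_mutation(bits, p=None):
    """Flip each bit independently with probability p (default 1/l),
    so on average one bit per individual is flipped."""
    if p is None:
        p = 1.0 / len(bits)
    return [b ^ 1 if random.random() < p else b for b in bits]

def gaussian_mutation(x, x_min, x_max):
    """Unrestricted mutation: add N(0, sigma) with the rule of thumb
    sigma = (x_max - x_min) / 6.  Out-of-range results are treated as
    lethal (None) rather than clipped, since clipping to the violated
    limit would cause a drift toward the boundaries."""
    sigma = (x_max - x_min) / 6.0
    x_new = x + random.gauss(0.0, sigma)
    return x_new if x_min <= x_new <= x_max else None

def relative_parameter_change(x, x_min, x_max, k=10):
    """Restricted mutation in the style of GLEAM's relative parameter
    change: choose a direction, divide the total change interval into
    k equal parts, pick one of the k nested sub-change intervals
    uniformly, and draw the new value uniformly from it.  Small
    changes are thus more likely than large ones, and the result
    always stays inside [x_min, x_max]."""
    if random.random() < 0.5:                     # increase
        delta = (x_max - x) / k
        i = random.randint(1, k)
        return random.uniform(x, x + delta * i)
    else:                                         # decrease (mirrored)
        delta = (x - x_min) / k
        i = random.randint(1, k)
        return random.uniform(x - delta * i, x)
```

Note how the two real-valued operators differ exactly as the text describes: the Gaussian variant can leave the permissible interval and must then discard the individual, while the GLEAM-style variant respects the interval by construction.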
== Mutation of permutations == Mutations of permutations are specially designed for genomes that are themselves permutations of a set. These are often used to solve combinatorial tasks. In the two mutations presented, parts of the genome are rotated or inverted. === Rotation to the right === The presentation of the procedure is illustrated by an example on the right: === Inversion === The presentation of the procedure is illustrated by an example on the right: === Variants with preference for smaller changes === The requirement raised at the beginning for mutations, according to which small changes should be more probable than large ones, is only inadequately fulfilled by the two permutation mutations presented, since the lengths of the partial lists and the number of shift positions are determined in an equally distributed manner. However, the longer the partial list and the shift, the greater the change in gene order. This can be remedied by the following modifications. The end index j {\displaystyle j} of the partial lists is determined as the distance d {\displaystyle d} to the start index i {\displaystyle i} : j = ( i + d ) mod | P 0 | {\displaystyle j=(i+d){\bmod {\left|P_{0}\right|}}} where d {\displaystyle d} is determined randomly according to one of the two procedures for the mutation of real numbers from the interval [ 0 , | P 0 | − 1 ] {\displaystyle \left[0,\left|P_{0}\right|-1\right]} and rounded. For the rotation, k {\displaystyle k} is determined similarly to the distance d {\displaystyle d} , but the value 0 {\displaystyle 0} is forbidden. For the inversion, note that i ≠ j {\displaystyle i\neq j} must hold, so for d {\displaystyle d} the value 0 {\displaystyle 0} must be excluded. == See also == Evolutionary algorithms Genetic algorithms Evolution strategy Genetic programming Evolutionary programming == References == == Bibliography == John Holland (1975). 
Adaptation in Natural and Artificial Systems, PhD thesis, University of Michigan Press, Ann Arbor, Michigan. ISBN 0-262-58111-6. Schwefel, Hans-Paul (1995). Evolution and Optimum Seeking. New York: John Wiley & Sons. ISBN 0-471-57148-2. Davis, Lawrence (1991). Handbook of genetic algorithms. New York: Van Nostrand Reinhold. ISBN 0-442-00173-8. OCLC 23081440. Eiben, A.E.; Smith, J.E. (2015). Introduction to Evolutionary Computing. Natural Computing Series. Berlin, Heidelberg: Springer. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1. S2CID 20912932. Yu, Xinjie; Gen, Mitsuo (2010). Introduction to Evolutionary Algorithms. Decision Engineering. London: Springer. doi:10.1007/978-1-84996-129-5. ISBN 978-1-84996-128-8. De Jong, Kenneth A. (2006). Evolutionary computation : a unified approach. Cambridge, Mass.: MIT Press. ISBN 978-0-262-25598-1. OCLC 69652176. Fogel, David B.; Bäck, Thomas; Michalewicz, Zbigniew, eds. (1999). Evolutionary computation. Vol. 1, Basic algorithms and operators. Bristol: Institute of Physics Pub. ISBN 0-585-30560-9. OCLC 45730387.
Wikipedia/Mutation_(evolutionary_algorithm)
Ecological interface design (EID) is an approach to interface design that was introduced specifically for complex sociotechnical, real-time, and dynamic systems. It has been applied in a variety of domains including process control (e.g. nuclear power plants, petrochemical plants), aviation, and medicine. EID differs from some interface design methodologies like user-centered design (UCD) in that the focus of the analysis is on the work domain or environment, rather than on the end user or a specific task. The goal of EID is to make constraints and complex relationships in the work environment perceptually evident (e.g. visible, audible) to the user. This allows more of users' cognitive resources to be devoted to higher cognitive processes such as problem solving and decision making. EID is based on two key concepts from cognitive engineering and cognitive systems engineering research: the Abstraction Hierarchy (AH) and the Skills, Rules, Knowledge (SRK) framework. By reducing mental workload and supporting knowledge-based reasoning, EID aims to improve user performance and overall system reliability for both anticipated and unanticipated events in a complex system. == Overview == === Origin and history of EID === Ecological interface design was proposed as a framework for interface design by Kim Vicente and Jens Rasmussen in the late 1980s and early 1990s following extensive research into human-system reliability at the Risø National Laboratory in Denmark (Rasmussen & Vicente et al., 1989; Vicente, 2001). The term ecological in EID originates from a school of psychology developed by James J. Gibson known as ecological psychology. This field of psychology focuses on human-environment relationships, in particular in relation to human perception in actual environments rather than in laboratory environments.
EID borrows from ecological psychology in that the constraints and relationships of the work environment in a complex system are reflected perceptually (through an interface) in order to shape user behaviour. In order to develop ecological designs, analytical tools developed earlier by researchers at the Risø National Laboratory were adopted, including the Abstraction Hierarchy (AH) and the Skills, Rules, Knowledge (SRK) framework. The EID framework was first applied and evaluated in nuclear power plant systems (Vicente & Rasmussen, 1990, 1992). These tools are also used in cognitive work analysis. To date, EID has been applied in a variety of complex systems including computer network management, anaesthesiology, military command and control, and aircraft (Vicente, 2002; Burns & Hajdukiewicz, 2004). === Motivation === Rapid advances in technologies along with economic demands have led to a noticeable increase in the complexity of engineering systems (Vicente, 1999a). As a result, it is becoming more and more difficult for designers to anticipate events that may occur within such systems. Unanticipated events by definition cannot be specified in advance and thus cannot be prevented through training, procedures, or automation. A complex sociotechnical system designed based solely on known scenarios frequently loses the flexibility to support unforeseen events. System safety is often compromised by the operators' inability to adapt to new and unfamiliar situations (Vicente & Rasmussen, 1992). Ecological interface design attempts to provide the operators with the necessary tools and information to become active problem solvers as opposed to passive monitors, particularly during the development of unforeseen events. Interfaces designed following the EID framework aim to lessen mental workload when dealing with unfamiliar and unanticipated events, which are attributed to increased psychological pressure (Vicente, 1999b). 
In doing so, cognitive resources may be freed up to support efficient problem solving. In addition to providing operators with the means to successfully manage unanticipated events, EID is also proposed for systems that require users to become experts (Burns & Hajdukiewicz, 2004). Through the use of the Abstraction Hierarchy (AH) and the Skills, Rules, Knowledge (SRK) framework, EID enables novice users to more easily acquire advanced mental models that generally take many years of experience and training to develop. Likewise, EID provides a basis for continuous learning and distributed, collaborative work (Vicente, 1999b). When faced with complex sociotechnical systems, it is not always possible for designers to ask operators what kinds of information they would like to see, since each person understands the system at a different level (but rarely fully) and will provide very different answers. The EID framework allows designers to determine what kinds of information are required when it is not possible or feasible to ask users (Burns & Hajdukiewicz, 2004). It is not the intention of EID to replace existing design methodologies such as UCD and task analysis, but to complement them. == UCD and EID: Why use EID at all? == As today's window-based interfaces show, user-centered design (UCD) has done an excellent job of identifying user preferences and limitations and incorporating them into interfaces. In the pre-UCD era, interface design was almost an afterthought to a program, left entirely to the programmers, with the end user largely neglected. === Benefits of UCD === UCD adds three key ideas: 1. That interface design is a field of its own, because it bridges between humans and the program/environment. 2. That an understanding of human perception, cognition, and behavior is critical to designing interfaces. 3. 
That much can be learned by getting feedback from the actual users of the interface at the early design stages, and then by testing at various points in the design (Burns & Hajdukiewicz, 2004). But there are some problems with this approach as well. === How is EID relevant? === The UCD approach commonly focuses on single-user interactions between the user and the interface, which is not enough for today's increasingly complex systems, where information must be managed centrally yet displayed on a variety of interfaces at varying levels of detail. EID is a valuable addition to the design process for complex systems because even very experienced users do not have a complete understanding of how the entire complex system (power plant, nuclear plant, petrochemical refinery, etc.) works. Users do not always understand, or even feel the need to understand, all the relationships behind the complex processes that they control via their interfaces. Furthermore, users are not always aware of the constraints that affect the system they work with, and discovering these constraints can take some extra effort (Burns & Hajdukiewicz, 2004). EID incorporates this constraint-based style into the design approach: it examines the constraints of the work domain before getting user input. EID focuses on understanding the complex system (its structure, its architecture, and its original intent) and then relaying this information to the end user, thereby reducing the learning curve and helping users achieve a higher level of expertise. The constraint-based style of interface design also facilitates the handling of unanticipated events: regardless of the event, the broken constraint is visible to the user, who can then proactively work with the interface to restore the constraint and fix the system. 
This does not in any way take away from the usefulness of UCD, but it stresses that EID offers some unique insight into the design process and can be used in conjunction with other cognitive engineering techniques to enhance user interfaces and increase human reliability in human–machine interactions. == The abstraction hierarchy == The abstraction hierarchy (AH) is a five-level functional decomposition used for modelling the work environment, more commonly referred to as the work domain, of complex sociotechnical systems (Rasmussen, 1985). In the EID framework, the AH is used to determine what kinds of information should be displayed on the system interface and how the information should be arranged. The AH describes a system at different levels of abstraction using how and why relationships. Moving down the levels of the model answers how certain elements in the system are achieved, whereas moving up reveals why certain elements exist. Elements at the highest level of the model define the purposes and goals of the system. Elements at the lowest levels of the model indicate and describe the physical components (i.e. equipment) of the system. The how and why relationships are shown on the AH as means-ends links. An AH is typically developed following a systematic approach known as a Work Domain Analysis (Vicente, 1999a). It is not uncommon for a Work Domain Analysis to yield multiple AH models, each examining the system at a different level of physical detail defined using another model called the Part-Whole Hierarchy (Burns & Hajdukiewicz, 2004). Each level in the AH is a complete but unique description of the work domain. === Functional purpose === The functional purpose (FP) level describes the goals and purposes of the system. An AH typically includes more than one system goal, and the goals may conflict with or complement each other (Burns & Hajdukiewicz, 2004). 
The relationships between the goals indicate potential trade-offs and constraints within the work domain of the system. For example, the goals of a refrigerator might be to cool food to a certain temperature while using a minimal amount of electricity. === Abstract function === The abstract function (AF) level describes the underlying laws and principles that govern the goals of the system. These may be empirical laws in a physical system, judicial laws in a social system, or even economic principles in a commercial system. In general, the laws and principles focus on things that need to be conserved or that flow through the system such as mass (Burns & Hajdukiewicz, 2004). The operation of the refrigerator (as a heat pump) is governed by the second law of thermodynamics. === Generalised function === The generalised function (GF) level explains the processes involved in the laws and principles found at the AF level, i.e. how each abstract function is achieved. Causal relationships exist between the elements found at the GF level. The refrigeration cycle in a refrigerator involves pumping heat from an area of low temperature (source) into an area of higher temperature (sink). === Physical function === The physical function (PFn) level reveals the physical components or equipment associated with the processes identified at the GF level. The capabilities and limitations of the components such as maximum capacity are also usually noted in the AH (Burns & Hajdukiewicz, 2004). A refrigerator may consist of heat exchange pipes and a gas compressor that can exert a certain maximum pressure on the cooling medium. === Physical form === The physical form (PFo) level describes the condition, location, and physical appearance of the components shown at the PFn level. In the refrigerator example, the heat exchange pipes and the gas compressor are arranged in a specific manner, basically illustrating the location of the components. 
Physical characteristics may include such things as colour, dimensions, and shape. == Causal Abstraction Hierarchy == The hierarchy described above is a functional Abstraction Hierarchy representation. A functional Abstraction Hierarchy emphasizes the "means-ends" or "how/why" links of the hierarchy. These connections are direct and illustrated across the five levels of the Abstraction Hierarchy. As systems get more complex, we need to follow the flow structure as well as understand how the system works. This is when a causal Abstraction Hierarchy representation becomes necessary. When the flow patterns become complex enough that it is difficult to derive the flows directly from the system diagram, we add causal models to the functional models. The causal models help to detail the flow structure and to understand more complex flow patterns within a specified Abstraction Hierarchy level. A causal Abstraction Hierarchy representation has the same structure as a functional Abstraction Hierarchy representation, but with causal links drawn. Causal links are also known as "within the level" links. These links show how the processes and flows are connected within each level. The two representations are closely related but are usually developed separately, because doing so results in a clearer model that captures most of the system constraints. In very elaborate flow systems, causal models can be used to simplify or abstract the flows. In such a scenario we may find it easier to identify the main feed and product lines first, then control lines, emergency supply lines, or emergency shunting lines (Burns & Hajdukiewicz, 2004). Causal links are most useful at the Generalized Function and the Abstract Function levels, which show flows of materials, processes, mass, or energy. 
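As a concrete and purely illustrative sketch, the five levels of the refrigerator AH described above, together with a few of its means-ends links, can be recorded as plain data. All labels here are hypothetical names for the example, not terminology fixed by the framework:

```python
# Illustrative only: the five AH levels of the refrigerator example encoded
# as plain data, with "means-ends" (how/why) links from lower-level elements
# to the higher-level elements they support. All labels are hypothetical.

abstraction_hierarchy = {
    "functional_purpose":   ["cool food", "minimise electricity use"],
    "abstract_function":    ["second law of thermodynamics (heat pump)"],
    "generalised_function": ["refrigeration cycle (pump heat from source to sink)"],
    "physical_function":    ["heat exchange pipes", "gas compressor"],
    "physical_form":        ["arrangement, location, and appearance of components"],
}

# Means-ends links answer "why" reading upward and "how" reading downward.
means_ends_links = {
    "gas compressor": ["refrigeration cycle (pump heat from source to sink)"],
    "refrigeration cycle (pump heat from source to sink)":
        ["second law of thermodynamics (heat pump)"],
    "second law of thermodynamics (heat pump)":
        ["cool food", "minimise electricity use"],
}

levels = list(abstraction_hierarchy)    # ordered from purposes down to form
```

Traversing `means_ends_links` upward from a component such as the compressor recovers the chain of "why" answers ending at the functional purposes.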
== The Skills, Rules, Knowledge (SRK) framework == The Skills, Rules, Knowledge (SRK) framework or SRK taxonomy defines three types of behaviour or psychological processes present in operator information processing (Vicente, 1999a). The SRK framework was developed by Rasmussen (1983) to help designers combine information requirements for a system and aspects of human cognition. In EID, the SRK framework is used to determine how information should be displayed to take advantage of human perception and psychomotor abilities (Vicente, 1999b). By supporting skill- and rule-based behaviours in familiar tasks, more cognitive resources may be devoted to knowledge-based behaviours, which are important for managing unanticipated events. The three categories essentially describe the possible ways in which information, for example, from a human–machine interface is extracted and understood: === Skill-based level === A skill-based behaviour represents a type of behaviour that requires very little or no conscious control to perform or execute an action once an intention is formed; also known as a sensorimotor behaviour. Performance is smooth, automated, and consists of highly integrated patterns of behaviour in most skill-based control (Rasmussen, 1990). For example, bicycle riding is considered a skill-based behaviour in which very little attention is required for control once the skill is acquired. This automaticity allows operators to free up cognitive resources, which can then be used for higher cognitive functions like problem solving (Wickens & Hollands, 2000). Errors in skills-based behaviour are routine errors. === Rule-based level === A rule-based behaviour is characterised by the use of rules and procedures to select a course of action in a familiar work situation (Rasmussen, 1990). The rules can be a set of instructions acquired by the operator through experience or given by supervisors and former operators. 
Operators are not required to know the underlying principles of a system to perform rule-based control. For example, hospitals have highly proceduralised instructions for fire emergencies. Therefore, when one sees a fire, one can follow the necessary steps to ensure the safety of the patients without any knowledge of fire behaviour. Errors in rule-based behaviour are due to insufficient technical knowledge. === Knowledge-based level === A knowledge-based behaviour represents a more advanced level of reasoning (Wirstad, 1988). This type of control must be employed when the situation is novel and unexpected. Operators are required to know the fundamental principles and laws by which the system is governed. Since operators need to form explicit goals based on their current analysis of the system, cognitive workload is typically greater than when using skill- or rule-based behaviours. == See also == Cognition and applied psychology Ecological psychology Human factors and ergonomics Human–machine interface Usability == References == Bennett, K. B. & Flach, J. M. (2011). Display and Interface Design - Subtle Science, Exact Art. CRC Press. ISBN 978-1-4200-6439-1 Burns, C. M. & Hajdukiewicz, J. R. (2004). Ecological Interface Design. Boca Raton, FL: CRC Press. ISBN 0-415-28374-4 Rasmussen, J. (1983). Skills, rules, knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, 13, 257-266. Rasmussen, J. (1985). The role of hierarchical knowledge representation in decision making and system management. IEEE Transactions on Systems, Man, and Cybernetics, 15, 234-243. Rasmussen, J. (1990). Mental models and the control of action in complex environments. In D. Ackermann & M.J. Tauber (Eds.), Mental Models and Human-Computer Interaction 1 (pp. 41–46). North-Holland: Elsevier Science Publishers. ISBN 0-444-88453-X Rasmussen, J. & Vicente, K. J. (1989). 
Coping with human errors through system design: Implications for ecological interface design. International Journal of Man–Machine Studies, 31, 517-534. Vicente, K. J. (1999a). Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work. Mahwah, NJ: Erlbaum and Associates. ISBN 0-8058-2397-2 Vicente, K. J. (1999b). Ecological Interface Design: Supporting operator adaptation, continuous learning, distributed, collaborative work. Proceedings of the Human Centered Processes Conference, 93-97. Vicente, K. J. (2001). Cognitive engineering research at Risø from 1962-1979. In E. Salas (Ed.), Advances in Human Performance and Cognitive Engineering Research, Volume 1 (pp. 1–57), New York: Elsevier. ISBN 0-7623-0748-X Vicente, K. J. (2002). Ecological Interface Design: Progress and challenges. Human Factors, 44, 62-78. Vicente, K. J. & Rasmussen, J. (1990). The ecology of human–machine systems II: Mediating "direct perception" in complex work domains. Ecological Psychology, 2, 207-249. Vicente, K. J. & Rasmussen, J. (1992). Ecological Interface Design: Theoretical foundations. IEEE Transactions on Systems, Man, and Cybernetics, 22, 589-606. Wickens, C. D. & Hollands, J. G. (2000). Engineering Psychology and Human Performance (3rd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 0-321-04711-7 Wirstad, J. (1988). On knowledge structures for process operators. In L.P. Goodstein, H.B. Andersen, & S.E. Olsen (Eds.), Tasks, Errors, and Mental Models (pp. 50–69). London: Taylor and Francis. ISBN 0-85066-401-2 == External links == === Institutions and organisations === Advanced Interface Design Lab (AIDL), University of Waterloo Cognitive Engineering Lab (CEL), University of Toronto Cognitive Engineering Research Group (CERG), University of Queensland Human Factors and Ergonomics Society IEEE Systems, Man and Cybernetics Society
In numerical linear algebra, the conjugate gradient method is an iterative method for numerically solving the linear system A x = b {\displaystyle {\boldsymbol {Ax}}={\boldsymbol {b}}} where A {\displaystyle {\boldsymbol {A}}} is symmetric positive-definite, without computing A − 1 {\displaystyle {\boldsymbol {A}}^{-1}} explicitly. The conjugate gradient method can be derived from several different perspectives, including specialization of the conjugate direction method for optimization, and variation of the Arnoldi/Lanczos iteration for eigenvalue problems. The intent of this article is to document the important steps in these derivations. == Conjugate direction == The conjugate gradient method can be seen as a special case of the conjugate direction method applied to minimization of the quadratic function f ( x ) = x T A x − 2 b T x . {\displaystyle f({\boldsymbol {x}})={\boldsymbol {x}}^{\mathrm {T} }{\boldsymbol {A}}{\boldsymbol {x}}-2{\boldsymbol {b}}^{\mathrm {T} }{\boldsymbol {x}}{\text{.}}} which allows us to apply geometric intuition. === Line search === Geometrically, the quadratic function can be equivalently presented by writing down its value at every point in space. The points of equal value make up its contour surfaces, which are concentric ellipsoids with the equation x T A x − 2 b T x = C {\displaystyle {\boldsymbol {x}}^{\mathrm {T} }{\boldsymbol {A}}{\boldsymbol {x}}-2{\boldsymbol {b}}^{\mathrm {T} }{\boldsymbol {x}}=C} for varying C {\displaystyle C} . As C {\displaystyle C} decreases, the ellipsoids become smaller and smaller, until at its minimal value, the ellipsoid shrinks to their shared center. Minimizing the quadratic function is then a problem of moving around the plane, searching for that shared center of all those ellipsoids. The center can be found by computing A − 1 {\displaystyle {\boldsymbol {A}}^{-1}} explicitly, but this is precisely what we are trying to avoid. 
The simplest method is greedy line search, where we start at some point x 0 {\displaystyle {\boldsymbol {x}}_{0}} , pick a direction p 0 {\displaystyle {\boldsymbol {p}}_{0}} somehow, then minimize f ( x 0 + p 0 α 0 ) {\displaystyle f({\boldsymbol {x}}_{0}+{\boldsymbol {p}}_{0}\alpha _{0})} . This has a simple closed-form solution that does not involve matrix inversion: α 0 = p 0 T ( b − A x 0 ) p 0 T A p 0 {\displaystyle \alpha _{0}={\frac {{\boldsymbol {p}}_{0}^{\mathrm {T} }({\boldsymbol {b}}-{\boldsymbol {Ax}}_{0})}{{\boldsymbol {p}}_{0}^{\mathrm {T} }{\boldsymbol {A}}{\boldsymbol {p}}_{0}}}} Geometrically, we start at some point x 0 {\displaystyle {\boldsymbol {x}}_{0}} on some ellipsoid, then choose a direction and travel along that direction, until we hit the point where the ellipsoid is minimized in that direction. This is not necessarily the minimum, but it is progress towards it. Visually, it is moving along a line, and stopping as soon as we reach a point tangent to the contour ellipsoid. We can now repeat this procedure, starting at our new point x 1 = x 0 + α 0 p 0 {\displaystyle {\boldsymbol {x}}_{1}={\boldsymbol {x}}_{0}+\alpha _{0}{\boldsymbol {p}}_{0}} , pick a new direction p 1 {\displaystyle {\boldsymbol {p}}_{1}} , compute α 1 {\displaystyle \alpha _{1}} , etc. 
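To make the closed-form step concrete, here is a minimal pure-Python check; the 2×2 system, starting point, and search direction are arbitrary example values, not taken from the text. It confirms that the formula for α₀ does minimize f along the chosen line:

```python
# Numeric check of the closed-form line-search step above. The 2x2 SPD
# system, starting point x_0, and direction p_0 are arbitrary examples.

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def f(A, b, x):
    # the quadratic f(x) = x^T A x - 2 b^T x being minimized
    return dot(x, matvec(A, x)) - 2 * dot(b, x)

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive-definite
b = [1.0, 2.0]
x0 = [0.0, 0.0]
p0 = [1.0, 1.0]

r0 = [bi - ax for bi, ax in zip(b, matvec(A, x0))]   # r_0 = b - A x_0
alpha0 = dot(p0, r0) / dot(p0, matvec(A, p0))        # closed-form step

def along(alpha):
    # value of f at x_0 + alpha * p_0
    return f(A, b, [xi + alpha * pi for xi, pi in zip(x0, p0)])

# along(alpha0) is the minimum of f on the line: nudging alpha in either
# direction can only increase f.
```

For these example values α₀ works out to 1/3, and `along(alpha0)` is smaller than `along(alpha0 ± 0.01)`, matching the tangency picture described above.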
We can summarize this as the following algorithm: Start by picking an initial guess x 0 {\displaystyle {\boldsymbol {x}}_{0}} , and compute the initial residual r 0 = b − A x 0 {\displaystyle {\boldsymbol {r}}_{0}={\boldsymbol {b}}-{\boldsymbol {Ax}}_{0}} , then iterate: α i = p i T r i p i T A p i , x i + 1 = x i + α i p i , r i + 1 = r i − α i A p i {\displaystyle {\begin{aligned}\alpha _{i}&={\frac {{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {r}}_{i}}{{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}{\text{,}}\\{\boldsymbol {x}}_{i+1}&={\boldsymbol {x}}_{i}+\alpha _{i}{\boldsymbol {p}}_{i}{\text{,}}\\{\boldsymbol {r}}_{i+1}&={\boldsymbol {r}}_{i}-\alpha _{i}{\boldsymbol {Ap}}_{i}\end{aligned}}} where p 0 , p 1 , p 2 , … {\displaystyle {\boldsymbol {p}}_{0},{\boldsymbol {p}}_{1},{\boldsymbol {p}}_{2},\ldots } are to be picked. Notice in particular how the residual is calculated iteratively step-by-step, instead of anew every time: r i + 1 = b − A x i + 1 = b − A ( x i + α i p i ) = r i − α i A p i {\displaystyle {\boldsymbol {r}}_{i+1}={\boldsymbol {b}}-{\boldsymbol {Ax}}_{i+1}={\boldsymbol {b}}-{\boldsymbol {A}}({\boldsymbol {x}}_{i}+\alpha _{i}{\boldsymbol {p}}_{i})={\boldsymbol {r}}_{i}-\alpha _{i}{\boldsymbol {A}}{\boldsymbol {p}}_{i}} It is possibly true that α i = 0 {\displaystyle \alpha _{i}=0} prematurely, which would bring numerical problems. However, for particular choices of p 0 , p 1 , p 2 , … {\displaystyle {\boldsymbol {p}}_{0},{\boldsymbol {p}}_{1},{\boldsymbol {p}}_{2},\ldots } , this will not occur before convergence, as we will prove below. === Conjugate directions === If the directions p 0 , p 1 , p 2 , … {\displaystyle {\boldsymbol {p}}_{0},{\boldsymbol {p}}_{1},{\boldsymbol {p}}_{2},\ldots } are not picked well, then progress will be slow. In particular, the gradient descent method would be slow. This can be seen in the diagram, where the green line is the result of always picking the local gradient direction. 
It zig-zags towards the minimum, but repeatedly overshoots. In contrast, if we pick the directions to be a set of mutually conjugate directions, then there will be no overshoot, and we would obtain the global minimum after n {\displaystyle n} steps, where n {\displaystyle n} is the number of dimensions. The concept of conjugate directions comes from the classical geometry of the ellipse. For an ellipse, two semi-axes are mutually conjugate with respect to the ellipse iff the lines are parallel to the tangent bounding parallelogram, as pictured. The concept generalizes to n-dimensional ellipsoids, where n semi-axes t 0 p 0 , … , t n − 1 p n − 1 {\displaystyle t_{0}{\boldsymbol {p}}_{0},\dots ,t_{n-1}{\boldsymbol {p}}_{n-1}} are mutually conjugate with respect to the ellipsoid iff each axis is parallel to the tangent bounding parallelepiped. In other words, for any i {\displaystyle i} , the tangent plane to the ellipsoid at c + t i p i {\displaystyle {\boldsymbol {c}}+t_{i}{\boldsymbol {p}}_{i}} is a hyperplane spanned by the vectors { p j : j ≠ i } {\displaystyle \{{\boldsymbol {p}}_{j}:j\neq i\}} , where c {\displaystyle {\boldsymbol {c}}} is the center of the ellipsoid. Note that we need to scale each directional vector p i {\displaystyle {\boldsymbol {p}}_{i}} by a scalar t i {\displaystyle t_{i}} , so that c + t i p i {\displaystyle {\boldsymbol {c}}+t_{i}{\boldsymbol {p}}_{i}} falls exactly on the ellipsoid. Given an ellipsoid with equation x T A x − 2 b T x = C {\displaystyle {\boldsymbol {x}}^{\mathrm {T} }{\boldsymbol {A}}{\boldsymbol {x}}-2{\boldsymbol {b}}^{\mathrm {T} }{\boldsymbol {x}}=C} for some constant C {\displaystyle C} , we can translate it so that its center is at the origin. This changes the equation to x T A x = C ′ {\displaystyle {\boldsymbol {x}}^{\mathrm {T} }{\boldsymbol {A}}{\boldsymbol {x}}=C'} for some other constant C ′ {\displaystyle C'} . 
The condition of tangency is then: ( t i p i + p j d t j ) T A ( t i p i + p j d t j ) = C ′ + O ( d t j 2 ) , ∀ i ≠ j {\displaystyle (t_{i}{\boldsymbol {p}}_{i}+{\boldsymbol {p}}_{j}dt_{j})^{\mathrm {T} }{\boldsymbol {A}}(t_{i}{\boldsymbol {p}}_{i}+{\boldsymbol {p}}_{j}dt_{j})=C'+O(dt_{j}^{2}),\quad \forall i\neq j} that is, p i T A p j = 0 {\displaystyle {\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{j}=0} for any i ≠ j {\displaystyle i\neq j} . The conjugate direction method is imprecise in the sense that no formulae are given for selection of the directions p 0 , p 1 , p 2 , … {\displaystyle {\boldsymbol {p}}_{0},{\boldsymbol {p}}_{1},{\boldsymbol {p}}_{2},\ldots } . Specific choices lead to various methods including the conjugate gradient method and Gaussian elimination. === Gram–Schmidt process === We can tabulate the equations that we need to set to zero: This resembles the problem of orthogonalization, which requires p i T p j = 0 {\displaystyle {\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {p}}_{j}=0} for any i ≠ j {\displaystyle i\neq j} , and p i T p j = 1 {\displaystyle {\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {p}}_{j}=1} for any i = j {\displaystyle i=j} . Thus the problem of finding conjugate axes is less constrained than the problem of orthogonalization, so the Gram–Schmidt process works, with additional degrees of freedom that we can later use to pick the ones that would simplify the computation: Arbitrarily set p 0 {\displaystyle {\boldsymbol {p}}_{0}} . Arbitrarily set p 10 {\displaystyle {\boldsymbol {p}}_{10}} , then modify it to p 1 = p 10 − p 0 T A p 10 p 0 T A p 0 p 0 {\displaystyle {\boldsymbol {p}}_{1}={\boldsymbol {p}}_{10}-{\frac {{\boldsymbol {p}}_{0}^{\mathrm {T} }{\boldsymbol {Ap}}_{10}}{{\boldsymbol {p}}_{0}^{\mathrm {T} }{\boldsymbol {Ap}}_{0}}}{\boldsymbol {p}}_{0}} . 
Arbitrarily set p 20 {\displaystyle {\boldsymbol {p}}_{20}} , then modify it to p 2 = p 20 − ∑ i = 0 1 p i T A p 20 p i T A p i p i {\displaystyle {\boldsymbol {p}}_{2}={\boldsymbol {p}}_{20}-\sum _{i=0}^{1}{\frac {{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{20}}{{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}{\boldsymbol {p}}_{i}} . ... Arbitrarily set p n − 1 , 0 {\displaystyle {\boldsymbol {p}}_{n-1,0}} , then modify it to p n − 1 = p n − 1 , 0 − ∑ i = 0 n − 2 p i T A p n − 1 , 0 p i T A p i p i {\displaystyle {\boldsymbol {p}}_{n-1}={\boldsymbol {p}}_{n-1,0}-\sum _{i=0}^{n-2}{\frac {{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{n-1,0}}{{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}{\boldsymbol {p}}_{i}} . The most natural choice of p k , 0 {\displaystyle {\boldsymbol {p}}_{k,0}} is the gradient. That is, p k , 0 = ∇ f ( x k ) {\displaystyle {\boldsymbol {p}}_{k,0}=\nabla f({\boldsymbol {x}}_{k})} . Since conjugate directions can be scaled by a nonzero value, we scale it by − 1 / 2 {\displaystyle -1/2} for notational cleanness, obtaining p k , 0 = r k = b − A x k {\displaystyle {\boldsymbol {p}}_{k,0}=\mathbf {r} _{k}=\mathbf {b} -\mathbf {Ax} _{k}} Thus, we have p k = r k − ∑ i = 0 k − 1 p i T A r k p i T A p i p i {\displaystyle {\boldsymbol {p}}_{k}={\boldsymbol {r}}_{k}-\sum _{i=0}^{k-1}{\frac {{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ar}}_{k}}{{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}{\boldsymbol {p}}_{i}} . 
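The conjugation formula just derived can be spot-checked numerically. The sketch below, in plain Python on an arbitrary 3×3 symmetric positive-definite example (not taken from the text), builds each direction from the current residual via the Gram–Schmidt formula and confirms that the resulting directions are mutually conjugate and that the iteration reaches the solution in n steps:

```python
# Spot-check of the Gram-Schmidt conjugation formula above.
# The 3x3 SPD system is an arbitrary example.

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
n = len(b)

x = [0.0] * n                    # initial guess x_0
r = b[:]                         # r_0 = b - A x_0 = b since x_0 = 0
ps = []                          # all directions p_0 ... p_{n-1}

for k in range(n):
    # p_k = r_k - sum_i (p_i^T A r_k)/(p_i^T A p_i) p_i
    p = r[:]
    for pi in ps:
        Api = matvec(A, pi)
        coeff = dot(Api, r) / dot(pi, Api)   # (A p_i)^T r = p_i^T A r by symmetry
        p = [pj - coeff * pij for pj, pij in zip(p, pi)]
    ps.append(p)
    # one line-search step along p_k
    Ap = matvec(A, p)
    alpha = dot(p, r) / dot(p, Ap)
    x = [xi + alpha * pj for xi, pj in zip(x, p)]
    r = [ri - alpha * apj for ri, apj in zip(r, Ap)]

# The directions are mutually conjugate (p_i^T A p_j ~ 0 for i != j), and
# after n steps x solves A x = b; here the exact solution is (2/9, 1/9, 13/9).
worst = max(abs(dot(ps[i], matvec(A, ps[j])))
            for i in range(n) for j in range(n) if i != j)
```

The quantity `worst` comes out at roundoff level, and `x` matches the direct solution of the system, illustrating the n-step convergence claimed above.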
Plugging it in, we have the conjugate gradient algorithm: r 0 := b − A x 0 p 0 := r 0 k := 0 do while k < n α k := p k T r k p k T A p k x k + 1 := x k + α k p k if | α k | is sufficiently small, then exit loop r k + 1 := r k − α k A p k p k + 1 := r k + 1 − ∑ i = 0 k p i T A r k + 1 p i T A p i p i k := k + 1 return x k + 1 as the result {\displaystyle {\begin{aligned}&\mathbf {r} _{0}:=\mathbf {b} -\mathbf {Ax} _{0}\\&\mathbf {p} _{0}:=\mathbf {r} _{0}\\&k:=0\\&{\text{do while }}k<n\\&\qquad \alpha _{k}:={\frac {\mathbf {p} _{k}^{\mathsf {T}}\mathbf {r} _{k}}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {Ap} _{k}}}\\&\qquad \mathbf {x} _{k+1}:=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}\\&\qquad {\text{if }}|\alpha _{k}|{\text{ is sufficiently small, then exit loop}}\\&\qquad \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}\\&\qquad \mathbf {p} _{k+1}:={\boldsymbol {r}}_{k+1}-\sum _{i=0}^{k}{\frac {{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ar}}_{k+1}}{{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}{\boldsymbol {p}}_{i}\\&\qquad k:=k+1\\&{\text{return }}\mathbf {x} _{k+1}{\text{ as the result}}\end{aligned}}} Proposition. If at some point, α k = 0 {\displaystyle \alpha _{k}=0} , then the algorithm has converged, that is, ∇ f ( x k + 1 ) = 0 {\displaystyle \nabla f(\mathrm {x} _{k+1})=0} . Proof. By construction, it would mean that x k + 1 = x k {\displaystyle \mathbf {x} _{k+1}=\mathbf {x} _{k}} , that is, taking a conjugate gradient step gets us exactly back to where we were. This is only possible if the local gradient is already zero. === Simplification === This algorithm can be significantly simplified by some lemmas, resulting in the conjugate gradient algorithm. Lemma 1. p i T r j = 0 , ∀ i < j {\displaystyle \mathbf {p} _{i}^{T}\mathbf {r} _{j}=0,\;\forall i<j} and r i T r j = 0 , ∀ i < j {\displaystyle \mathbf {r} _{i}^{T}\mathbf {r} _{j}=0,\;\forall i<j} . Proof. 
By the geometric construction, the tangent plane to the ellipsoid at x j {\displaystyle \mathbf {x} _{j}} contains each of the previous conjugate direction vectors p 0 , p 1 , … , p j − 1 {\displaystyle \mathbf {p} _{0},\mathbf {p} _{1},\dots ,\mathbf {p} _{j-1}} . Further, r j {\displaystyle \mathbf {r} _{j}} is perpendicular to the tangent, thus p i T r j = 0 , ∀ i < j {\displaystyle \mathbf {p} _{i}^{T}\mathbf {r} _{j}=0,\;\forall i<j} . The second equation is true since by construction, r 0 , r 1 , … , r j − 1 {\displaystyle \mathbf {r} _{0},\mathbf {r} _{1},\dots ,\mathbf {r} _{j-1}} is a linear transform of p 0 , p 1 , … , p j − 1 {\displaystyle \mathbf {p} _{0},\mathbf {p} _{1},\dots ,\mathbf {p} _{j-1}} . Lemma 2. p k T r k = r k T r k {\displaystyle \mathbf {p} _{k}^{T}\mathbf {r} _{k}=\mathbf {r} _{k}^{T}\mathbf {r} _{k}} . Proof. By construction, p k := r k − ∑ i = 0 k − 1 p i T A r k − 1 p i T A p i p i {\displaystyle \mathbf {p} _{k}:={\boldsymbol {r}}_{k}-\sum _{i=0}^{k-1}{\frac {{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ar}}_{k-1}}{{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}{\boldsymbol {p}}_{i}} , now apply lemma 1. Lemma 3. p i T A r k + 1 = { 0 , i < k − r k + 1 T r k + 1 / α k , i = k {\displaystyle {\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ar}}_{k+1}={\begin{cases}0,\;i<k\\-{\boldsymbol {r}}_{k+1}^{T}{\boldsymbol {r}}_{k+1}/\alpha _{k},\;i=k\end{cases}}} . Proof. By construction, we have r i + 1 = r i − α i A p i {\displaystyle \mathbf {r} _{i+1}=\mathbf {r} _{i}-\alpha _{i}\mathbf {Ap} _{i}} , thus r k + 1 T A p i = r k + 1 T r i − r i + 1 α i {\displaystyle {\boldsymbol {r}}_{k+1}^{T}{\boldsymbol {A}}{\boldsymbol {p}}_{i}={\boldsymbol {r}}_{k+1}^{T}{\frac {{\boldsymbol {r}}_{i}-{\boldsymbol {r}}_{i+1}}{\alpha _{i}}}} Now apply lemma 1. 
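Lemmas 1-3 imply that all but the latest term of the Gram–Schmidt sum vanish, so only the most recent direction needs to be kept. As a sanity check, here is a minimal pure-Python sketch of the resulting short-recurrence method, run on an arbitrary 2×2 symmetric positive-definite example (the system and tolerance are illustrative choices, not from the text):

```python
# Minimal sketch of the short-recurrence conjugate gradient method that the
# lemmas make possible: only the latest direction p_k is kept.
# The 2x2 SPD system below is an arbitrary example.

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradient(A, b, tol=1e-12):
    n = len(b)
    x = [0.0] * n
    r = b[:]                     # r_0 = b - A x_0 with x_0 = 0
    p = r[:]
    rs_old = dot(r, r)
    for _ in range(n):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)   # alpha_k = r_k.r_k / p_k.A p_k (by Lemma 2)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol * tol:
            break
        # only the i = k term of the Gram-Schmidt sum survives (Lemma 3)
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)     # exact solution is (1/11, 7/11)
```

On this example the method terminates after n = 2 steps with the exact solution, as the conjugate direction theory predicts.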
Plugging lemmas 1-3 in, we have α k = r k ⊤ r k p k ⊤ A p k {\displaystyle \alpha _{k}={\frac {\mathbf {r} _{k}^{\top }\mathbf {r} _{k}}{\mathbf {p} _{k}^{\top }\mathbf {A} \mathbf {p} _{k}}}} and p k + 1 := r k + 1 + r k + 1 ⊤ r k + 1 r k ⊤ r k p k {\displaystyle \mathbf {p} _{k+1}:={\boldsymbol {r}}_{k+1}+{\frac {\mathbf {r} _{k+1}^{\top }\mathbf {r} _{k+1}}{\mathbf {r} _{k}^{\top }\mathbf {r} _{k}}}\mathbf {p} _{k}} , which is the proper conjugate gradient algorithm. == Arnoldi/Lanczos iteration == The conjugate gradient method can also be seen as a variant of the Arnoldi/Lanczos iteration applied to solving linear systems. === The general Arnoldi method === In the Arnoldi iteration, one starts with a vector r 0 {\displaystyle {\boldsymbol {r}}_{0}} and gradually builds an orthonormal basis { v 1 , v 2 , v 3 , … } {\displaystyle \{{\boldsymbol {v}}_{1},{\boldsymbol {v}}_{2},{\boldsymbol {v}}_{3},\ldots \}} of the Krylov subspace K ( A , r 0 ) = s p a n { r 0 , A r 0 , A 2 r 0 , … } {\displaystyle {\mathcal {K}}({\boldsymbol {A}},{\boldsymbol {r}}_{0})=\mathrm {span} \{{\boldsymbol {r}}_{0},{\boldsymbol {Ar}}_{0},{\boldsymbol {A}}^{2}{\boldsymbol {r}}_{0},\ldots \}} by defining v i = w i / ‖ w i ‖ 2 {\displaystyle {\boldsymbol {v}}_{i}={\boldsymbol {w}}_{i}/\lVert {\boldsymbol {w}}_{i}\rVert _{2}} where v i = { r 0 if i = 1 , A v i − 1 − ∑ j = 1 i − 1 ( v j T A v i − 1 ) v j if i > 1 . 
{\displaystyle {\boldsymbol {v}}_{i}={\begin{cases}{\boldsymbol {r}}_{0}&{\text{if }}i=1{\text{,}}\\{\boldsymbol {Av}}_{i-1}-\sum _{j=1}^{i-1}({\boldsymbol {v}}_{j}^{\mathrm {T} }{\boldsymbol {Av}}_{i-1}){\boldsymbol {v}}_{j}&{\text{if }}i>1{\text{.}}\end{cases}}} In other words, for i > 1 {\displaystyle i>1} , v i {\displaystyle {\boldsymbol {v}}_{i}} is found by Gram-Schmidt orthogonalizing A v i − 1 {\displaystyle {\boldsymbol {Av}}_{i-1}} against { v 1 , v 2 , … , v i − 1 } {\displaystyle \{{\boldsymbol {v}}_{1},{\boldsymbol {v}}_{2},\ldots ,{\boldsymbol {v}}_{i-1}\}} followed by normalization. Put in matrix form, the iteration is captured by the equation A V i = V i + 1 H ~ i {\displaystyle {\boldsymbol {AV}}_{i}={\boldsymbol {V}}_{i+1}{\boldsymbol {\tilde {H}}}_{i}} where V i = [ v 1 v 2 ⋯ v i ] , H ~ i = [ h 11 h 12 h 13 ⋯ h 1 , i h 21 h 22 h 23 ⋯ h 2 , i h 32 h 33 ⋯ h 3 , i ⋱ ⋱ ⋮ h i , i − 1 h i , i h i + 1 , i ] = [ H i h i + 1 , i e i T ] {\displaystyle {\begin{aligned}{\boldsymbol {V}}_{i}&={\begin{bmatrix}{\boldsymbol {v}}_{1}&{\boldsymbol {v}}_{2}&\cdots &{\boldsymbol {v}}_{i}\end{bmatrix}}{\text{,}}\\{\boldsymbol {\tilde {H}}}_{i}&={\begin{bmatrix}h_{11}&h_{12}&h_{13}&\cdots &h_{1,i}\\h_{21}&h_{22}&h_{23}&\cdots &h_{2,i}\\&h_{32}&h_{33}&\cdots &h_{3,i}\\&&\ddots &\ddots &\vdots \\&&&h_{i,i-1}&h_{i,i}\\&&&&h_{i+1,i}\end{bmatrix}}={\begin{bmatrix}{\boldsymbol {H}}_{i}\\h_{i+1,i}{\boldsymbol {e}}_{i}^{\mathrm {T} }\end{bmatrix}}\end{aligned}}} with h j i = { v j T A v i if j ≤ i , ‖ w i + 1 ‖ 2 if j = i + 1 , 0 if j > i + 1 . 
{\displaystyle h_{ji}={\begin{cases}{\boldsymbol {v}}_{j}^{\mathrm {T} }{\boldsymbol {Av}}_{i}&{\text{if }}j\leq i{\text{,}}\\\lVert {\boldsymbol {w}}_{i+1}\rVert _{2}&{\text{if }}j=i+1{\text{,}}\\0&{\text{if }}j>i+1{\text{.}}\end{cases}}} When applying the Arnoldi iteration to solving linear systems, one starts with r 0 = b − A x 0 {\displaystyle {\boldsymbol {r}}_{0}={\boldsymbol {b}}-{\boldsymbol {Ax}}_{0}} , the residual corresponding to an initial guess x 0 {\displaystyle {\boldsymbol {x}}_{0}} . After each step of iteration, one computes y i = H i − 1 ( ‖ r 0 ‖ 2 e 1 ) {\displaystyle {\boldsymbol {y}}_{i}={\boldsymbol {H}}_{i}^{-1}(\lVert {\boldsymbol {r}}_{0}\rVert _{2}{\boldsymbol {e}}_{1})} and the new iterate x i = x 0 + V i y i {\displaystyle {\boldsymbol {x}}_{i}={\boldsymbol {x}}_{0}+{\boldsymbol {V}}_{i}{\boldsymbol {y}}_{i}} . === The direct Lanczos method === For the rest of discussion, we assume that A {\displaystyle {\boldsymbol {A}}} is symmetric positive-definite. With symmetry of A {\displaystyle {\boldsymbol {A}}} , the upper Hessenberg matrix H i = V i T A V i {\displaystyle {\boldsymbol {H}}_{i}={\boldsymbol {V}}_{i}^{\mathrm {T} }{\boldsymbol {AV}}_{i}} becomes symmetric and thus tridiagonal. It then can be more clearly denoted by H i = [ a 1 b 2 b 2 a 2 b 3 ⋱ ⋱ ⋱ b i − 1 a i − 1 b i b i a i ] . {\displaystyle {\boldsymbol {H}}_{i}={\begin{bmatrix}a_{1}&b_{2}\\b_{2}&a_{2}&b_{3}\\&\ddots &\ddots &\ddots \\&&b_{i-1}&a_{i-1}&b_{i}\\&&&b_{i}&a_{i}\end{bmatrix}}{\text{.}}} This enables a short three-term recurrence for v i {\displaystyle {\boldsymbol {v}}_{i}} in the iteration, and the Arnoldi iteration is reduced to the Lanczos iteration. Since A {\displaystyle {\boldsymbol {A}}} is symmetric positive-definite, so is H i {\displaystyle {\boldsymbol {H}}_{i}} . 
Hence, H i {\displaystyle {\boldsymbol {H}}_{i}} can be LU factorized without partial pivoting into H i = L i U i = [ 1 c 2 1 ⋱ ⋱ c i − 1 1 c i 1 ] [ d 1 b 2 d 2 b 3 ⋱ ⋱ d i − 1 b i d i ] {\displaystyle {\boldsymbol {H}}_{i}={\boldsymbol {L}}_{i}{\boldsymbol {U}}_{i}={\begin{bmatrix}1\\c_{2}&1\\&\ddots &\ddots \\&&c_{i-1}&1\\&&&c_{i}&1\end{bmatrix}}{\begin{bmatrix}d_{1}&b_{2}\\&d_{2}&b_{3}\\&&\ddots &\ddots \\&&&d_{i-1}&b_{i}\\&&&&d_{i}\end{bmatrix}}} with convenient recurrences for c i {\displaystyle c_{i}} and d i {\displaystyle d_{i}} : c i = b i / d i − 1 , d i = { a 1 if i = 1 , a i − c i b i if i > 1 . {\displaystyle {\begin{aligned}c_{i}&=b_{i}/d_{i-1}{\text{,}}\\d_{i}&={\begin{cases}a_{1}&{\text{if }}i=1{\text{,}}\\a_{i}-c_{i}b_{i}&{\text{if }}i>1{\text{.}}\end{cases}}\end{aligned}}} Rewrite x i = x 0 + V i y i {\displaystyle {\boldsymbol {x}}_{i}={\boldsymbol {x}}_{0}+{\boldsymbol {V}}_{i}{\boldsymbol {y}}_{i}} as x i = x 0 + V i H i − 1 ( ‖ r 0 ‖ 2 e 1 ) = x 0 + V i U i − 1 L i − 1 ( ‖ r 0 ‖ 2 e 1 ) = x 0 + P i z i {\displaystyle {\begin{aligned}{\boldsymbol {x}}_{i}&={\boldsymbol {x}}_{0}+{\boldsymbol {V}}_{i}{\boldsymbol {H}}_{i}^{-1}(\lVert {\boldsymbol {r}}_{0}\rVert _{2}{\boldsymbol {e}}_{1})\\&={\boldsymbol {x}}_{0}+{\boldsymbol {V}}_{i}{\boldsymbol {U}}_{i}^{-1}{\boldsymbol {L}}_{i}^{-1}(\lVert {\boldsymbol {r}}_{0}\rVert _{2}{\boldsymbol {e}}_{1})\\&={\boldsymbol {x}}_{0}+{\boldsymbol {P}}_{i}{\boldsymbol {z}}_{i}\end{aligned}}} with P i = V i U i − 1 , z i = L i − 1 ( ‖ r 0 ‖ 2 e 1 ) . {\displaystyle {\begin{aligned}{\boldsymbol {P}}_{i}&={\boldsymbol {V}}_{i}{\boldsymbol {U}}_{i}^{-1}{\text{,}}\\{\boldsymbol {z}}_{i}&={\boldsymbol {L}}_{i}^{-1}(\lVert {\boldsymbol {r}}_{0}\rVert _{2}{\boldsymbol {e}}_{1}){\text{.}}\end{aligned}}} It is now important to observe that P i = [ P i − 1 p i ] , z i = [ z i − 1 ζ i ] . 
{\displaystyle {\begin{aligned}{\boldsymbol {P}}_{i}&={\begin{bmatrix}{\boldsymbol {P}}_{i-1}&{\boldsymbol {p}}_{i}\end{bmatrix}}{\text{,}}\\{\boldsymbol {z}}_{i}&={\begin{bmatrix}{\boldsymbol {z}}_{i-1}\\\zeta _{i}\end{bmatrix}}{\text{.}}\end{aligned}}} In fact, there are short recurrences for p i {\displaystyle {\boldsymbol {p}}_{i}} and ζ i {\displaystyle \zeta _{i}} as well: p i = 1 d i ( v i − b i p i − 1 ) , ζ i = − c i ζ i − 1 . {\displaystyle {\begin{aligned}{\boldsymbol {p}}_{i}&={\frac {1}{d_{i}}}({\boldsymbol {v}}_{i}-b_{i}{\boldsymbol {p}}_{i-1}){\text{,}}\\\zeta _{i}&=-c_{i}\zeta _{i-1}{\text{.}}\end{aligned}}} With this formulation, we arrive at a simple recurrence for x i {\displaystyle {\boldsymbol {x}}_{i}} : x i = x 0 + P i z i = x 0 + P i − 1 z i − 1 + ζ i p i = x i − 1 + ζ i p i . {\displaystyle {\begin{aligned}{\boldsymbol {x}}_{i}&={\boldsymbol {x}}_{0}+{\boldsymbol {P}}_{i}{\boldsymbol {z}}_{i}\\&={\boldsymbol {x}}_{0}+{\boldsymbol {P}}_{i-1}{\boldsymbol {z}}_{i-1}+\zeta _{i}{\boldsymbol {p}}_{i}\\&={\boldsymbol {x}}_{i-1}+\zeta _{i}{\boldsymbol {p}}_{i}{\text{.}}\end{aligned}}} The relations above straightforwardly lead to the direct Lanczos method, which turns out to be slightly more complex. === The conjugate gradient method from imposing orthogonality and conjugacy === If we allow p i {\displaystyle {\boldsymbol {p}}_{i}} to scale and compensate for the scaling in the constant factor, we potentially can have simpler recurrences of the form: x i = x i − 1 + α i − 1 p i − 1 , r i = r i − 1 − α i − 1 A p i − 1 , p i = r i + β i − 1 p i − 1 . 
{\displaystyle {\begin{aligned}{\boldsymbol {x}}_{i}&={\boldsymbol {x}}_{i-1}+\alpha _{i-1}{\boldsymbol {p}}_{i-1}{\text{,}}\\{\boldsymbol {r}}_{i}&={\boldsymbol {r}}_{i-1}-\alpha _{i-1}{\boldsymbol {Ap}}_{i-1}{\text{,}}\\{\boldsymbol {p}}_{i}&={\boldsymbol {r}}_{i}+\beta _{i-1}{\boldsymbol {p}}_{i-1}{\text{.}}\end{aligned}}} As premises for the simplification, we now derive the orthogonality of r i {\displaystyle {\boldsymbol {r}}_{i}} and conjugacy of p i {\displaystyle {\boldsymbol {p}}_{i}} , i.e., for i ≠ j {\displaystyle i\neq j} , r i T r j = 0 , p i T A p j = 0 . {\displaystyle {\begin{aligned}{\boldsymbol {r}}_{i}^{\mathrm {T} }{\boldsymbol {r}}_{j}&=0{\text{,}}\\{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{j}&=0{\text{.}}\end{aligned}}} The residuals are mutually orthogonal because r i {\displaystyle {\boldsymbol {r}}_{i}} is essentially a multiple of v i + 1 {\displaystyle {\boldsymbol {v}}_{i+1}} since for i = 0 {\displaystyle i=0} , r 0 = ‖ r 0 ‖ 2 v 1 {\displaystyle {\boldsymbol {r}}_{0}=\lVert {\boldsymbol {r}}_{0}\rVert _{2}{\boldsymbol {v}}_{1}} , for i > 0 {\displaystyle i>0} , r i = b − A x i = b − A ( x 0 + V i y i ) = r 0 − A V i y i = r 0 − V i + 1 H ~ i y i = r 0 − V i H i y i − h i + 1 , i ( e i T y i ) v i + 1 = ‖ r 0 ‖ 2 v 1 − V i ( ‖ r 0 ‖ 2 e 1 ) − h i + 1 , i ( e i T y i ) v i + 1 = − h i + 1 , i ( e i T y i ) v i + 1 . 
{\displaystyle {\begin{aligned}{\boldsymbol {r}}_{i}&={\boldsymbol {b}}-{\boldsymbol {Ax}}_{i}\\&={\boldsymbol {b}}-{\boldsymbol {A}}({\boldsymbol {x}}_{0}+{\boldsymbol {V}}_{i}{\boldsymbol {y}}_{i})\\&={\boldsymbol {r}}_{0}-{\boldsymbol {AV}}_{i}{\boldsymbol {y}}_{i}\\&={\boldsymbol {r}}_{0}-{\boldsymbol {V}}_{i+1}{\boldsymbol {\tilde {H}}}_{i}{\boldsymbol {y}}_{i}\\&={\boldsymbol {r}}_{0}-{\boldsymbol {V}}_{i}{\boldsymbol {H}}_{i}{\boldsymbol {y}}_{i}-h_{i+1,i}({\boldsymbol {e}}_{i}^{\mathrm {T} }{\boldsymbol {y}}_{i}){\boldsymbol {v}}_{i+1}\\&=\lVert {\boldsymbol {r}}_{0}\rVert _{2}{\boldsymbol {v}}_{1}-{\boldsymbol {V}}_{i}(\lVert {\boldsymbol {r}}_{0}\rVert _{2}{\boldsymbol {e}}_{1})-h_{i+1,i}({\boldsymbol {e}}_{i}^{\mathrm {T} }{\boldsymbol {y}}_{i}){\boldsymbol {v}}_{i+1}\\&=-h_{i+1,i}({\boldsymbol {e}}_{i}^{\mathrm {T} }{\boldsymbol {y}}_{i}){\boldsymbol {v}}_{i+1}{\text{.}}\end{aligned}}} To see the conjugacy of p i {\displaystyle {\boldsymbol {p}}_{i}} , it suffices to show that P i T A P i {\displaystyle {\boldsymbol {P}}_{i}^{\mathrm {T} }{\boldsymbol {AP}}_{i}} is diagonal: P i T A P i = U i − T V i T A V i U i − 1 = U i − T H i U i − 1 = U i − T L i U i U i − 1 = U i − T L i {\displaystyle {\begin{aligned}{\boldsymbol {P}}_{i}^{\mathrm {T} }{\boldsymbol {AP}}_{i}&={\boldsymbol {U}}_{i}^{-\mathrm {T} }{\boldsymbol {V}}_{i}^{\mathrm {T} }{\boldsymbol {AV}}_{i}{\boldsymbol {U}}_{i}^{-1}\\&={\boldsymbol {U}}_{i}^{-\mathrm {T} }{\boldsymbol {H}}_{i}{\boldsymbol {U}}_{i}^{-1}\\&={\boldsymbol {U}}_{i}^{-\mathrm {T} }{\boldsymbol {L}}_{i}{\boldsymbol {U}}_{i}{\boldsymbol {U}}_{i}^{-1}\\&={\boldsymbol {U}}_{i}^{-\mathrm {T} }{\boldsymbol {L}}_{i}\end{aligned}}} is symmetric and lower triangular simultaneously and thus must be diagonal. 
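The orthogonality of the residuals and the conjugacy of the search directions shown above can be checked numerically. The sketch below runs the conjugate gradient recurrences from the beginning of this article in pure Python on a small symmetric positive-definite system (the 2×2 test matrix is an illustrative assumption, not from the text), keeping every residual and direction so that both relations can be verified.

```python
# Pure-Python conjugate gradient on a small SPD system, retaining all
# residuals r_i and directions p_i so that the mutual orthogonality of the
# residuals and the A-conjugacy of the directions can be verified.
# The 2x2 test system is an illustrative assumption.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def conjugate_gradient(A, b, x0, max_iter=50, tol=1e-24):
    x = list(x0)
    r = [bi - Axi for bi, Axi in zip(b, matvec(A, x))]   # r_0 = b - A x_0
    p = list(r)                                          # p_0 = r_0
    residuals, directions = [list(r)], [list(p)]
    for _ in range(max_iter):
        rr = dot(r, r)
        if rr < tol:                                     # ||r||^2 small enough
            break
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)                          # alpha_k = r.r / p.Ap
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        beta = dot(r, r) / rr                            # beta_k = r'.r' / r.r
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        residuals.append(list(r))
        directions.append(list(p))
    return x, residuals, directions

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive-definite
b = [1.0, 2.0]
x, res, dirs = conjugate_gradient(A, b, [0.0, 0.0])
```

For this 2×2 system the exact solution is (1/11, 7/11), and the iteration terminates in two steps; successive residuals come out orthogonal and successive directions A-conjugate, as the derivation requires.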
Now we can derive the constant factors α i {\displaystyle \alpha _{i}} and β i {\displaystyle \beta _{i}} with respect to the scaled p i {\displaystyle {\boldsymbol {p}}_{i}} by solely imposing the orthogonality of r i {\displaystyle {\boldsymbol {r}}_{i}} and conjugacy of p i {\displaystyle {\boldsymbol {p}}_{i}} . Due to the orthogonality of r i {\displaystyle {\boldsymbol {r}}_{i}} , it is necessary that r i + 1 T r i = ( r i − α i A p i ) T r i = 0 {\displaystyle {\boldsymbol {r}}_{i+1}^{\mathrm {T} }{\boldsymbol {r}}_{i}=({\boldsymbol {r}}_{i}-\alpha _{i}{\boldsymbol {Ap}}_{i})^{\mathrm {T} }{\boldsymbol {r}}_{i}=0} . As a result, α i = r i T r i r i T A p i = r i T r i ( p i − β i − 1 p i − 1 ) T A p i = r i T r i p i T A p i . {\displaystyle {\begin{aligned}\alpha _{i}&={\frac {{\boldsymbol {r}}_{i}^{\mathrm {T} }{\boldsymbol {r}}_{i}}{{\boldsymbol {r}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}\\&={\frac {{\boldsymbol {r}}_{i}^{\mathrm {T} }{\boldsymbol {r}}_{i}}{({\boldsymbol {p}}_{i}-\beta _{i-1}{\boldsymbol {p}}_{i-1})^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}\\&={\frac {{\boldsymbol {r}}_{i}^{\mathrm {T} }{\boldsymbol {r}}_{i}}{{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}{\text{.}}\end{aligned}}} Similarly, due to the conjugacy of p i {\displaystyle {\boldsymbol {p}}_{i}} , it is necessary that p i + 1 T A p i = ( r i + 1 + β i p i ) T A p i = 0 {\displaystyle {\boldsymbol {p}}_{i+1}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}=({\boldsymbol {r}}_{i+1}+\beta _{i}{\boldsymbol {p}}_{i})^{\mathrm {T} }{\boldsymbol {Ap}}_{i}=0} . As a result, β i = − r i + 1 T A p i p i T A p i = − r i + 1 T ( r i − r i + 1 ) α i p i T A p i = r i + 1 T r i + 1 r i T r i . 
{\displaystyle {\begin{aligned}\beta _{i}&=-{\frac {{\boldsymbol {r}}_{i+1}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}{{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}\\&=-{\frac {{\boldsymbol {r}}_{i+1}^{\mathrm {T} }({\boldsymbol {r}}_{i}-{\boldsymbol {r}}_{i+1})}{\alpha _{i}{\boldsymbol {p}}_{i}^{\mathrm {T} }{\boldsymbol {Ap}}_{i}}}\\&={\frac {{\boldsymbol {r}}_{i+1}^{\mathrm {T} }{\boldsymbol {r}}_{i+1}}{{\boldsymbol {r}}_{i}^{\mathrm {T} }{\boldsymbol {r}}_{i}}}{\text{.}}\end{aligned}}} This completes the derivation. == References == Hestenes, M. R.; Stiefel, E. (December 1952). "Methods of conjugate gradients for solving linear systems" (PDF). Journal of Research of the National Bureau of Standards. 49 (6): 409. doi:10.6028/jres.049.044. Shewchuk, Jonathan Richard. "An introduction to the conjugate gradient method without the agonizing pain." (1994) Saad, Y. (2003). "Chapter 6: Krylov Subspace Methods, Part I". Iterative methods for sparse linear systems (2nd ed.). SIAM. ISBN 978-0-89871-534-7.
Wikipedia/Derivation_of_the_conjugate_gradient_method
In mathematics, more specifically in numerical linear algebra, the biconjugate gradient method is an algorithm to solve systems of linear equations A x = b . {\displaystyle Ax=b.\,} Unlike the conjugate gradient method, this algorithm does not require the matrix A {\displaystyle A} to be self-adjoint, but instead one needs to perform multiplications by the conjugate transpose A*. == The Algorithm == Choose initial guess x 0 {\displaystyle x_{0}\,} , two other vectors x 0 ∗ {\displaystyle x_{0}^{*}} and b ∗ {\displaystyle b^{*}\,} and a preconditioner M {\displaystyle M\,} r 0 ← b − A x 0 {\displaystyle r_{0}\leftarrow b-A\,x_{0}\,} r 0 ∗ ← b ∗ − x 0 ∗ A ∗ {\displaystyle r_{0}^{*}\leftarrow b^{*}-x_{0}^{*}\,A^{*}} p 0 ← M − 1 r 0 {\displaystyle p_{0}\leftarrow M^{-1}r_{0}\,} p 0 ∗ ← r 0 ∗ M − 1 {\displaystyle p_{0}^{*}\leftarrow r_{0}^{*}M^{-1}\,} for k = 0 , 1 , … {\displaystyle k=0,1,\ldots } do α k ← r k ∗ M − 1 r k p k ∗ A p k {\displaystyle \alpha _{k}\leftarrow {r_{k}^{*}M^{-1}r_{k} \over p_{k}^{*}Ap_{k}}\,} x k + 1 ← x k + α k ⋅ p k {\displaystyle x_{k+1}\leftarrow x_{k}+\alpha _{k}\cdot p_{k}\,} x k + 1 ∗ ← x k ∗ + α k ¯ ⋅ p k ∗ {\displaystyle x_{k+1}^{*}\leftarrow x_{k}^{*}+{\overline {\alpha _{k}}}\cdot p_{k}^{*}\,} r k + 1 ← r k − α k ⋅ A p k {\displaystyle r_{k+1}\leftarrow r_{k}-\alpha _{k}\cdot Ap_{k}\,} r k + 1 ∗ ← r k ∗ − α k ¯ ⋅ p k ∗ A ∗ {\displaystyle r_{k+1}^{*}\leftarrow r_{k}^{*}-{\overline {\alpha _{k}}}\cdot p_{k}^{*}\,A^{*}} β k ← r k + 1 ∗ M − 1 r k + 1 r k ∗ M − 1 r k {\displaystyle \beta _{k}\leftarrow {r_{k+1}^{*}M^{-1}r_{k+1} \over r_{k}^{*}M^{-1}r_{k}}\,} p k + 1 ← M − 1 r k + 1 + β k ⋅ p k {\displaystyle p_{k+1}\leftarrow M^{-1}r_{k+1}+\beta _{k}\cdot p_{k}\,} p k + 1 ∗ ← r k + 1 ∗ M − 1 + β k ¯ ⋅ p k ∗ {\displaystyle p_{k+1}^{*}\leftarrow r_{k+1}^{*}M^{-1}+{\overline {\beta _{k}}}\cdot p_{k}^{*}\,} In the above formulation, the computed r k {\displaystyle r_{k}\,} and r k ∗ {\displaystyle r_{k}^{*}} satisfy r k = b − A x k , 
{\displaystyle r_{k}=b-Ax_{k},\,} r k ∗ = b ∗ − x k ∗ A ∗ {\displaystyle r_{k}^{*}=b^{*}-x_{k}^{*}\,A^{*}} and thus are the respective residuals corresponding to x k {\displaystyle x_{k}\,} and x k ∗ {\displaystyle x_{k}^{*}} , as approximate solutions to the systems A x = b , {\displaystyle Ax=b,\,} x ∗ A ∗ = b ∗ ; {\displaystyle x^{*}\,A^{*}=b^{*}\,;} x ∗ {\displaystyle x^{*}} is the adjoint, and α ¯ {\displaystyle {\overline {\alpha }}} is the complex conjugate. === Unpreconditioned version of the algorithm === Choose initial guess x 0 {\displaystyle x_{0}\,} , r 0 ← b − A x 0 {\displaystyle r_{0}\leftarrow b-A\,x_{0}\,} r ^ 0 ← b ^ − x ^ 0 A ∗ {\displaystyle {\hat {r}}_{0}\leftarrow {\hat {b}}-{\hat {x}}_{0}A^{*}} p 0 ← r 0 {\displaystyle p_{0}\leftarrow r_{0}\,} p ^ 0 ← r ^ 0 {\displaystyle {\hat {p}}_{0}\leftarrow {\hat {r}}_{0}\,} for k = 0 , 1 , … {\displaystyle k=0,1,\ldots } do α k ← r ^ k r k p ^ k A p k {\displaystyle \alpha _{k}\leftarrow {{\hat {r}}_{k}r_{k} \over {\hat {p}}_{k}Ap_{k}}\,} x k + 1 ← x k + α k ⋅ p k {\displaystyle x_{k+1}\leftarrow x_{k}+\alpha _{k}\cdot p_{k}\,} x ^ k + 1 ← x ^ k + α k ⋅ p ^ k {\displaystyle {\hat {x}}_{k+1}\leftarrow {\hat {x}}_{k}+\alpha _{k}\cdot {\hat {p}}_{k}\,} r k + 1 ← r k − α k ⋅ A p k {\displaystyle r_{k+1}\leftarrow r_{k}-\alpha _{k}\cdot Ap_{k}\,} r ^ k + 1 ← r ^ k − α k ⋅ p ^ k A ∗ {\displaystyle {\hat {r}}_{k+1}\leftarrow {\hat {r}}_{k}-\alpha _{k}\cdot {\hat {p}}_{k}A^{*}} β k ← r ^ k + 1 r k + 1 r ^ k r k {\displaystyle \beta _{k}\leftarrow {{\hat {r}}_{k+1}r_{k+1} \over {\hat {r}}_{k}r_{k}}\,} p k + 1 ← r k + 1 + β k ⋅ p k {\displaystyle p_{k+1}\leftarrow r_{k+1}+\beta _{k}\cdot p_{k}\,} p ^ k + 1 ← r ^ k + 1 + β k ⋅ p ^ k {\displaystyle {\hat {p}}_{k+1}\leftarrow {\hat {r}}_{k+1}+\beta _{k}\cdot {\hat {p}}_{k}\,} == Discussion == The biconjugate gradient method is numerically unstable (compare to the biconjugate gradient stabilized method), but very important from a theoretical point of view. 
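The unpreconditioned iteration above can be sketched in pure Python for a real matrix, in which case the conjugate transpose A* reduces to the ordinary transpose and the complex conjugates of the scalars drop out. The 2×2 nonsymmetric test system below is an illustrative assumption.

```python
# Pure-Python sketch of unpreconditioned BiCG for a real matrix A, where the
# shadow quantities are updated with the transpose A^T in place of A*.
# The 2x2 nonsymmetric test system is an illustrative assumption.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def bicg(A, b, x0, tol=1e-12, max_iter=100):
    At = transpose(A)
    x = list(x0)
    r = [bi - Axi for bi, Axi in zip(b, matvec(A, x))]   # r_0 = b - A x_0
    rhat = list(r)                                       # shadow residual
    p, phat = list(r), list(rhat)
    for _ in range(max_iter):
        if dot(r, r) < tol:
            break
        Ap = matvec(A, p)
        alpha = dot(rhat, r) / dot(phat, Ap)             # alpha_k
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r_new = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        Atphat = matvec(At, phat)                        # shadow step uses A^T
        rhat_new = [ri - alpha * Ai for ri, Ai in zip(rhat, Atphat)]
        beta = dot(rhat_new, r_new) / dot(rhat, r)       # beta_k
        p = [ri + beta * pi for ri, pi in zip(r_new, p)]
        phat = [ri + beta * pi for ri, pi in zip(rhat_new, phat)]
        r, rhat = r_new, rhat_new
    return x

A = [[3.0, 2.0], [1.0, 4.0]]   # nonsymmetric, so plain CG does not apply
b = [5.0, 6.0]
x = bicg(A, b, [0.0, 0.0])
```

On this 2×2 example the method terminates in two iterations at the exact solution (0.8, 1.3); no guard against the breakdown case r̂ₖᵀrₖ = 0 is included in this sketch.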
Define the iteration steps by x k := x j + P k A − 1 ( b − A x j ) , {\displaystyle x_{k}:=x_{j}+P_{k}A^{-1}\left(b-Ax_{j}\right),} x k ∗ := x j ∗ + ( b ∗ − x j ∗ A ) P k A − 1 , {\displaystyle x_{k}^{*}:=x_{j}^{*}+\left(b^{*}-x_{j}^{*}A\right)P_{k}A^{-1},} where j < k {\displaystyle j<k} using the related projection P k := u k ( v k ∗ A u k ) − 1 v k ∗ A , {\displaystyle P_{k}:=\mathbf {u} _{k}\left(\mathbf {v} _{k}^{*}A\mathbf {u} _{k}\right)^{-1}\mathbf {v} _{k}^{*}A,} with u k = [ u 0 , u 1 , … , u k − 1 ] , {\displaystyle \mathbf {u} _{k}=\left[u_{0},u_{1},\dots ,u_{k-1}\right],} v k = [ v 0 , v 1 , … , v k − 1 ] . {\displaystyle \mathbf {v} _{k}=\left[v_{0},v_{1},\dots ,v_{k-1}\right].} These related projections may be iterated themselves as P k + 1 = P k + ( 1 − P k ) u k ⊗ v k ∗ A ( 1 − P k ) v k ∗ A ( 1 − P k ) u k . {\displaystyle P_{k+1}=P_{k}+\left(1-P_{k}\right)u_{k}\otimes {v_{k}^{*}A\left(1-P_{k}\right) \over v_{k}^{*}A\left(1-P_{k}\right)u_{k}}.} A relation to Quasi-Newton methods is given by P k = A k − 1 A {\displaystyle P_{k}=A_{k}^{-1}A} and x k + 1 = x k − A k + 1 − 1 ( A x k − b ) {\displaystyle x_{k+1}=x_{k}-A_{k+1}^{-1}\left(Ax_{k}-b\right)} , where A k + 1 − 1 = A k − 1 + ( 1 − A k − 1 A ) u k ⊗ v k ∗ ( 1 − A A k − 1 ) v k ∗ A ( 1 − A k − 1 A ) u k . 
{\displaystyle A_{k+1}^{-1}=A_{k}^{-1}+\left(1-A_{k}^{-1}A\right)u_{k}\otimes {v_{k}^{*}\left(1-AA_{k}^{-1}\right) \over v_{k}^{*}A\left(1-A_{k}^{-1}A\right)u_{k}}.} The new directions p k = ( 1 − P k ) u k , {\displaystyle p_{k}=\left(1-P_{k}\right)u_{k},} p k ∗ = v k ∗ A ( 1 − P k ) A − 1 {\displaystyle p_{k}^{*}=v_{k}^{*}A\left(1-P_{k}\right)A^{-1}} are then orthogonal to the residuals: v i ∗ r k = p i ∗ r k = 0 , {\displaystyle v_{i}^{*}r_{k}=p_{i}^{*}r_{k}=0,} r k ∗ u j = r k ∗ p j = 0 , {\displaystyle r_{k}^{*}u_{j}=r_{k}^{*}p_{j}=0,} which themselves satisfy r k = A ( 1 − P k ) A − 1 r j , {\displaystyle r_{k}=A\left(1-P_{k}\right)A^{-1}r_{j},} r k ∗ = r j ∗ ( 1 − P k ) {\displaystyle r_{k}^{*}=r_{j}^{*}\left(1-P_{k}\right)} where i , j < k {\displaystyle i,j<k} . The biconjugate gradient method now makes a special choice and uses the setting u k = M − 1 r k , {\displaystyle u_{k}=M^{-1}r_{k},\,} v k ∗ = r k ∗ M − 1 . {\displaystyle v_{k}^{*}=r_{k}^{*}\,M^{-1}.\,} With this particular choice, explicit evaluations of P k {\displaystyle P_{k}} and A−1 are avoided, and the algorithm takes the form stated above. == Properties == If A = A ∗ {\displaystyle A=A^{*}\,} is self-adjoint, x 0 ∗ = x 0 {\displaystyle x_{0}^{*}=x_{0}} and b ∗ = b {\displaystyle b^{*}=b} , then r k = r k ∗ {\displaystyle r_{k}=r_{k}^{*}} , p k = p k ∗ {\displaystyle p_{k}=p_{k}^{*}} , and the conjugate gradient method produces the same sequence x k = x k ∗ {\displaystyle x_{k}=x_{k}^{*}} at half the computational cost. The sequences produced by the algorithm are biorthogonal, i.e., p i ∗ A p j = r i ∗ M − 1 r j = 0 {\displaystyle p_{i}^{*}Ap_{j}=r_{i}^{*}M^{-1}r_{j}=0} for i ≠ j {\displaystyle i\neq j} . if P j ′ {\displaystyle P_{j'}\,} is a polynomial with deg ⁡ ( P j ′ ) + j < k {\displaystyle \deg \left(P_{j'}\right)+j<k} , then r k ∗ P j ′ ( M − 1 A ) u j = 0 {\displaystyle r_{k}^{*}P_{j'}\left(M^{-1}A\right)u_{j}=0} . The algorithm thus produces projections onto the Krylov subspace. 
if P i ′ {\displaystyle P_{i'}\,} is a polynomial with i + deg ⁡ ( P i ′ ) < k {\displaystyle i+\deg \left(P_{i'}\right)<k} , then v i ∗ P i ′ ( A M − 1 ) r k = 0 {\displaystyle v_{i}^{*}P_{i'}\left(AM^{-1}\right)r_{k}=0} . == See also == Biconjugate gradient stabilized method (BiCG-Stab) Conjugate gradient method (CG) Conjugate gradient squared method (CGS) == References == Fletcher, R. (1976). "Conjugate gradient methods for indefinite systems". In Watson, G. Alistair (ed.). Numerical analysis : proceedings of the Dundee Conference on Numerical Analysis. Lecture Notes in Mathematics. Vol. 506. Springer. pp. 73–89. doi:10.1007/BFb0080116. ISBN 978-3-540-07610-0. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 2.7.6". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
Wikipedia/Biconjugate_gradient_method
Feedback occurs when outputs of a system are routed back as inputs as part of a chain of cause and effect that forms a circuit or loop. The system can then be said to feed back into itself. The notion of cause-and-effect has to be handled carefully when applied to feedback systems: Simple causal reasoning about a feedback system is difficult because the first system influences the second and the second system influences the first, leading to a circular argument. This makes reasoning based upon cause and effect tricky, and it is necessary to analyze the system as a whole. As provided by Webster, feedback in business is the transmission of evaluative or corrective information about an action, event, or process to the original or controlling source. == History == Self-regulating mechanisms have existed since antiquity, and the idea of feedback started to enter economic theory in Britain by the 18th century, but it was not at that time recognized as a universal abstraction and so did not have a name. The first known artificial feedback device was a float valve, for maintaining water at a constant level, invented in 270 BC in Alexandria, Egypt. This device illustrated the principle of feedback: a low water level opens the valve, the rising water then provides feedback into the system, closing the valve when the required level is reached. This then recurs in a circular fashion as the water level fluctuates. Centrifugal governors had been used to regulate the distance and pressure between millstones in windmills since the 17th century. In 1788, James Watt designed his first centrifugal governor following a suggestion from his business partner Matthew Boulton, for use in the steam engines they produced. Early steam engines employed a purely reciprocating motion, and were used for pumping water – an application that could tolerate variations in the working speed, but the use of steam engines for other applications called for more precise control of the speed.
In 1868, James Clerk Maxwell wrote a famous paper, "On governors", that is widely considered a classic in feedback control theory. This was a landmark paper on control theory and the mathematics of feedback. The verb phrase to feed back, in the sense of returning to an earlier position in a mechanical process, was in use in the US by the 1860s, and in 1909, Nobel laureate Karl Ferdinand Braun used the term "feed-back" as a noun to refer to (undesired) coupling between components of an electronic circuit. By the end of 1912, researchers using early electronic amplifiers (audions) had discovered that deliberately coupling part of the output signal back to the input circuit would boost the amplification (through regeneration), but would also cause the audion to howl or sing. This action of feeding back of the signal from output to input gave rise to the use of the term "feedback" as a distinct word by 1920. The development of cybernetics from the 1940s onwards was centred around the study of circular causal feedback mechanisms. Over the years there has been some dispute as to the best definition of feedback. According to cybernetician Ashby (1956), mathematicians and theorists interested in the principles of feedback mechanisms prefer the definition of "circularity of action", which keeps the theory simple and consistent. For those with more practical aims, feedback should be a deliberate effect via some more tangible connection. [Practical experimenters] object to the mathematician's definition, pointing out that this would force them to say that feedback was present in the ordinary pendulum ... between its position and its momentum—a "feedback" that, from the practical point of view, is somewhat mystical. 
To this the mathematician retorts that if feedback is to be considered present only when there is an actual wire or nerve to represent it, then the theory becomes chaotic and riddled with irrelevancies.: 54  Focusing on uses in management theory, Ramaprasad (1983) defines feedback generally as "...information about the gap between the actual level and the reference level of a system parameter" that is used to "alter the gap in some way". He emphasizes that the information by itself is not feedback unless translated into action. == Types == === Positive and negative feedback === Positive feedback: If the signal fed back from the output is in phase with the input signal, the feedback is called positive feedback. Negative feedback: If the signal fed back is out of phase by 180° with respect to the input signal, the feedback is called negative feedback. As an example of negative feedback, the diagram might represent a cruise control system in a car that matches a target speed such as the speed limit. The controlled system is the car; its input includes the combined torque from the engine and from the changing slope of the road (the disturbance). The car's speed (status) is measured by a speedometer. The error signal is the difference between the speed as measured by the speedometer and the target speed (set point). The controller interprets the speed to adjust the accelerator, commanding the fuel flow to the engine (the effector). The resulting change in engine torque, the feedback, combines with the torque exerted by the change of road grade to reduce the error in speed, minimising the effect of the changing slope. The terms "positive" and "negative" were first applied to feedback prior to WWII. The idea of positive feedback already existed in the 1920s when the regenerative circuit was made.
Friis and Jensen (1924) described this circuit in a set of electronic amplifiers as a case where the "feed-back" action is positive in contrast to negative feed-back action, which they mentioned only in passing. Harold Stephen Black's classic 1934 paper first details the use of negative feedback in electronic amplifiers. According to Black: Positive feed-back increases the gain of the amplifier, negative feed-back reduces it. According to Mindell (2002) confusion in the terms arose shortly after this: ... Friis and Jensen had made the same distinction Black used between "positive feed-back" and "negative feed-back", based not on the sign of the feedback itself but rather on its effect on the amplifier's gain. In contrast, Nyquist and Bode, when they built on Black's work, referred to negative feedback as that with the sign reversed. Black had trouble convincing others of the utility of his invention in part because confusion existed over basic matters of definition.: 121  Even before these terms were being used, James Clerk Maxwell had described their concept through several kinds of "component motions" associated with the centrifugal governors used in steam engines. He distinguished those that lead to a continued increase in a disturbance or the amplitude of a wave or oscillation, from those that lead to a decrease of the same quality. ==== Terminology ==== The terms positive and negative feedback are defined in different ways within different disciplines. the change of the gap between reference and actual values of a parameter or trait, based on whether the gap is widening (positive) or narrowing (negative). the valence of the action or effect that alters the gap, based on whether it makes the recipient or observer happy (positive) or unhappy (negative). The two definitions may be confusing, like when an incentive (reward) is used to boost poor performance (narrow a gap). 
Referring to definition 1, some authors use alternative terms, replacing positive and negative with self-reinforcing and self-correcting, reinforcing and balancing, discrepancy-enhancing and discrepancy-reducing or regenerative and degenerative respectively. And for definition 2, some authors promote describing the action or effect as positive and negative reinforcement or punishment rather than feedback. Yet even within a single discipline an example of feedback can be called either positive or negative, depending on how values are measured or referenced. This confusion may arise because feedback can be used to provide information or motivate, and often has both a qualitative and a quantitative component. As Connellan and Zemke (1993) put it: Quantitative feedback tells us how much and how many. Qualitative feedback tells us how good, bad or indifferent.: 102  ==== Limitations of negative and positive feedback ==== While simple systems can sometimes be described as one or the other type, many systems with feedback loops cannot be shoehorned into either type, and this is especially true when multiple loops are present. When there are only two parts joined so that each affects the other, the properties of the feedback give important and useful information about the properties of the whole. But when the parts rise to even as few as four, if every one affects the other three, then twenty circuits can be traced through them; and knowing the properties of all the twenty circuits does not give complete information about the system.: 54  === Other types of feedback === In general, feedback systems can have many signals fed back, and the feedback loops frequently contain mixtures of positive and negative feedback, where positive and negative feedback can dominate at different frequencies or at different points in the state space of a system.
The term bipolar feedback has been coined to refer to biological systems where positive and negative feedback systems can interact, the output of one affecting the input of another, and vice versa. Some systems with feedback can have very complex behaviors, such as chaotic behaviors in non-linear systems, while others have much more predictable behaviors, such as those that are used to make and design digital systems. Feedback is used extensively in digital systems. For example, binary counters and similar devices employ feedback where the current state and inputs are used to calculate a new state, which is then fed back and clocked into the device to update it. == Applications == === Mathematics and dynamical systems === By using feedback properties, the behavior of a system can be altered to meet the needs of an application; systems can be made stable, responsive or held constant. It has been shown that dynamical systems with feedback experience an adaptation to the edge of chaos. === Physics === Physical systems present feedback through the mutual interactions of their parts. Feedback is also relevant for the regulation of experimental conditions, noise reduction, and signal control. The thermodynamics of feedback-controlled systems has intrigued physicists since Maxwell's demon, with recent advances on the consequences for entropy reduction and performance increase. === Biology === In biological systems such as organisms, ecosystems, or the biosphere, most parameters must stay under control within a narrow range around a certain optimal level under certain environmental conditions. Deviation from the optimal value of the controlled parameter can result from changes in the internal and external environments. A change in some environmental conditions may also require that range to change for the system to function.
The value of the parameter to maintain is recorded by a reception system and conveyed to a regulation module via an information channel. An example of this is insulin oscillations. Biological systems contain many types of regulatory circuits, both positive and negative. As in other contexts, positive and negative do not imply that the feedback causes good or bad effects. A negative feedback loop is one that tends to slow down a process, whereas the positive feedback loop tends to accelerate it. The mirror neurons are part of a social feedback system, when an observed action is "mirrored" by the brain—like a self-performed action. Normal tissue integrity is preserved by feedback interactions between diverse cell types mediated by adhesion molecules and secreted molecules that act as mediators; failure of key feedback mechanisms in cancer disrupts tissue function. In an injured or infected tissue, inflammatory mediators elicit feedback responses in cells, which alter gene expression, and change the groups of molecules expressed and secreted, including molecules that induce diverse cells to cooperate and restore tissue structure and function. This type of feedback is important because it enables coordination of immune responses and recovery from infections and injuries. During cancer, key elements of this feedback fail. This disrupts tissue function and immunity. Mechanisms of feedback were first elucidated in bacteria, where a nutrient elicits changes in some of their metabolic functions. Feedback is also central to the operations of genes and gene regulatory networks. Repressor (see Lac repressor) and activator proteins are used to create genetic operons, which were identified by François Jacob and Jacques Monod in 1961 as feedback loops. These feedback loops may be positive (as in the case of the coupling between a sugar molecule and the proteins that import sugar into a bacterial cell), or negative (as is often the case in metabolic consumption). 
On a larger scale, feedback can have a stabilizing effect on animal populations even when profoundly affected by external changes, although time lags in feedback response can give rise to predator-prey cycles. In zymology, feedback regulates the activity of an enzyme through its direct product(s) or downstream metabolite(s) in the metabolic pathway (see Allosteric regulation). The hypothalamic–pituitary–adrenal axis is largely controlled by positive and negative feedback, much of which is still unknown. In psychology, the body receives a stimulus from the environment or from within the body that causes the release of hormones. Release of hormones then may cause more of those hormones to be released, causing a positive feedback loop. This cycle is also found in certain behaviour. For example, "shame loops" occur in people who blush easily. When they realize that they are blushing, they become even more embarrassed, which leads to further blushing, and so on. === Climate science === The climate system is characterized by strong positive and negative feedback loops between processes that affect the state of the atmosphere, ocean, and land. A simple example is the ice–albedo positive feedback loop whereby melting snow exposes more dark ground (of lower albedo), which in turn absorbs heat and causes more snow to melt. === Control theory === Feedback is extensively used in control theory, using a variety of methods including state space (controls), full state feedback, and so forth. In the context of control theory, "feedback" is traditionally assumed to specify "negative feedback". The most common general-purpose controller using a control-loop feedback mechanism is a proportional-integral-derivative (PID) controller.
Heuristically, the terms of a PID controller can be interpreted as corresponding to time: the proportional term depends on the present error, the integral term on the accumulation of past errors, and the derivative term is a prediction of future error, based on current rate of change. === Education === For feedback in the educational context, see corrective feedback. === Mechanical engineering === In ancient times, the float valve was used to regulate the flow of water in Greek and Roman water clocks; similar float valves are used to regulate fuel in a carburettor and also used to regulate tank water level in the flush toilet. The Dutch inventor Cornelius Drebbel (1572–1633) built thermostats (c. 1620) to control the temperature of chicken incubators and chemical furnaces. In 1745, the windmill was improved by blacksmith Edmund Lee, who added a fantail to keep the face of the windmill pointing into the wind. In 1787, Tom Mead regulated the rotation speed of a windmill by using a centrifugal pendulum to adjust the distance between the bedstone and the runner stone (i.e., to adjust the load). The use of the centrifugal governor by James Watt in 1788 to regulate the speed of his steam engine was one factor leading to the Industrial Revolution. Steam engines also use float valves and pressure release valves as mechanical regulation devices. A mathematical analysis of Watt's governor was done by James Clerk Maxwell in 1868. The Great Eastern was one of the largest steamships of its time and employed a steam-powered rudder with feedback mechanism designed in 1866 by John McFarlane Gray. Joseph Farcot coined the word servo in 1873 to describe steam-powered steering systems. Hydraulic servos were later used to position guns. Elmer Ambrose Sperry of the Sperry Corporation designed the first autopilot in 1912. Nicolas Minorsky published a theoretical analysis of automatic ship steering in 1922 and described the PID controller.
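The heuristic description of the three PID terms can be sketched as a discrete control loop. The code below is illustrative only: the gains and the first-order "plant" model are arbitrary assumptions, not taken from the text.

```python
# Minimal discrete PID loop (hypothetical gains and plant).
# Proportional term: present error; integral term: accumulated past
# error; derivative term: current rate of change of the error.

def simulate_pid(setpoint=1.0, kp=2.0, ki=1.0, kd=0.1, dt=0.01, steps=3000):
    y = 0.0                     # plant output, starts at rest
    integral = 0.0
    prev_error = setpoint - y
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative
        y += dt * (u - y)       # simple first-order plant: y' = u - y
        prev_error = error
    return y

print(round(simulate_pid(), 3))  # the output settles near the setpoint 1.0
```

With the integral term present, the steady-state error is driven to zero; with only a proportional term, the output would settle below the setpoint.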
Internal combustion engines of the late 20th century employed mechanical feedback mechanisms such as the vacuum timing advance but mechanical feedback was replaced by electronic engine management systems once small, robust and powerful single-chip microcontrollers became affordable. === Electronic engineering === The use of feedback is widespread in the design of electronic components such as amplifiers, oscillators, and stateful logic circuit elements such as flip-flops and counters. Electronic feedback systems are also very commonly used to control mechanical, thermal and other physical processes. If the signal is inverted on its way round the control loop, the system is said to have negative feedback; otherwise, the feedback is said to be positive. Negative feedback is often deliberately introduced to increase the stability and accuracy of a system by correcting or reducing the influence of unwanted changes. This scheme can fail if the input changes faster than the system can respond to it. When this happens, the lag in arrival of the correcting signal can result in over-correction, causing the output to oscillate or "hunt". While often an unwanted consequence of system behaviour, this effect is used deliberately in electronic oscillators. Harry Nyquist at Bell Labs derived the Nyquist stability criterion for determining the stability of feedback systems. An easier method, but less general, is to use Bode plots developed by Hendrik Bode to determine the gain margin and phase margin. Design to ensure stability often involves frequency compensation to control the location of the poles of the amplifier. Electronic feedback loops are used to control the output of electronic devices, such as amplifiers. A feedback loop is created when all or some portion of the output is fed back to the input. A device is said to be operating open loop if no output feedback is being employed and closed loop if feedback is being used. 
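The over-correction ("hunting") caused by lag can be sketched numerically. The model below is invented for illustration: the same negative feedback loop settles when the correction is based on the current output, but oscillates when the correcting signal arrives with a delay.

```python
# Hypothetical sketch of "hunting": a negative feedback loop whose
# correction is based on a stale (delayed) measurement over-corrects
# and oscillates, while the undelayed loop settles smoothly.

def simulate(delay, gain=1.5, dt=0.1, steps=200):
    setpoint = 1.0
    y = [0.0] * (delay + 1)               # output history, starts at rest
    for t in range(delay, delay + steps):
        error = setpoint - y[t - delay]   # correction uses a delayed reading
        y.append(y[t] + dt * gain * error)
    return y

prompt_loop = simulate(delay=0)    # settles near the setpoint
lagged_loop = simulate(delay=12)   # over-corrects and oscillates
print(round(prompt_loop[-1], 3), max(lagged_loop) > 1.2)
```

The delayed loop overshoots well past the setpoint before the correction reverses, which is exactly the over-correction described above.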
When two or more amplifiers are cross-coupled using positive feedback, complex behaviors can be created. These multivibrators are widely used and include: astable circuits, which act as oscillators; monostable circuits, which can be pushed into a state and will return to the stable state after some time; and bistable circuits, which have two stable states between which the circuit can be switched. ==== Negative feedback ==== Negative feedback occurs when the fed-back output signal has a relative phase of 180° with respect to the input signal (upside down). This situation is sometimes referred to as being out of phase, but that term also is used to indicate other phase separations, as in "90° out of phase". Negative feedback can be used to correct output errors or to desensitize a system to unwanted fluctuations. In feedback amplifiers, this correction is generally for waveform distortion reduction or to establish a specified gain level. A general expression for the gain of a negative feedback amplifier is the asymptotic gain model. ==== Positive feedback ==== Positive feedback occurs when the fed-back signal is in phase with the input signal. Under certain gain conditions, positive feedback reinforces the input signal to the point where the output of the device oscillates between its maximum and minimum possible states. Positive feedback may also introduce hysteresis into a circuit. This can cause the circuit to ignore small signals and respond only to large ones. It is sometimes used to eliminate noise from a digital signal. Under some circumstances, positive feedback may cause a device to latch, i.e., to reach a condition in which the output is locked to its maximum or minimum state. This fact is very widely used in digital electronics to make bistable circuits for volatile storage of information. The loud squeals that sometimes occur in audio systems, PA systems, and rock music are known as audio feedback.
If a microphone is in front of a loudspeaker that it is connected to, sound that the microphone picks up comes out of the speaker, and is picked up by the microphone and re-amplified. If the loop gain is sufficient, howling or squealing at the maximum power of the amplifier is possible. ==== Oscillator ==== An electronic oscillator is an electronic circuit that produces a periodic, oscillating electronic signal, often a sine wave or a square wave. Oscillators convert direct current (DC) from a power supply to an alternating current signal. They are widely used in many electronic devices. Common examples of signals generated by oscillators include signals broadcast by radio and television transmitters, clock signals that regulate computers and quartz clocks, and the sounds produced by electronic beepers and video games. Oscillators are often characterized by the frequency of their output signal: A low-frequency oscillator (LFO) is an electronic oscillator that generates a frequency below ≈20 Hz. This term is typically used in the field of audio synthesizers, to distinguish it from an audio frequency oscillator. An audio oscillator produces frequencies in the audio range, about 16 Hz to 20 kHz. An RF oscillator produces signals in the radio frequency (RF) range of about 100 kHz to 100 GHz. Oscillators designed to produce a high-power AC output from a DC supply are usually called inverters. There are two main types of electronic oscillator: the linear or harmonic oscillator and the nonlinear or relaxation oscillator. ==== Latches and flip-flops ==== A latch or a flip-flop is a circuit that has two stable states and can be used to store state information. They are typically constructed using feedback that crosses over between two arms of the circuit, to provide the circuit with a state. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. It is the basic storage element in sequential logic.
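The cross-coupled feedback that gives a latch its two stable states can be mimicked with a toy gate-level model. The sketch below is hypothetical: an SR latch built from two NOR gates whose outputs feed each other's inputs, iterated until the outputs settle.

```python
# Toy model of a cross-coupled NOR SR latch: each gate's output is fed
# back to the other gate's input, giving the circuit two stable states.

def nor(a, b):
    return int(not (a or b))

def sr_latch(s, r, q, qbar, iterations=4):
    """Iterate the feedback loop a few times until the outputs settle."""
    for _ in range(iterations):
        q, qbar = nor(r, qbar), nor(s, q)
    return q, qbar

q, qbar = sr_latch(s=1, r=0, q=0, qbar=1)     # set
print(q, qbar)                                 # -> 1 0
q, qbar = sr_latch(s=0, r=0, q=q, qbar=qbar)   # hold: state is remembered
print(q, qbar)                                 # -> 1 0
q, qbar = sr_latch(s=0, r=1, q=q, qbar=qbar)   # reset
print(q, qbar)                                 # -> 0 1
```

With both control inputs low, the feedback alone preserves the stored bit, which is the volatile storage described above.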
Latches and flip-flops are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems. Latches and flip-flops are used as data storage elements. Such data storage can be used for storage of state, and such a circuit is described as sequential logic. When used in a finite-state machine, the output and next state depend not only on its current input, but also on its current state (and hence, previous inputs). It can also be used for counting pulses, and for synchronizing variably-timed input signals to some reference timing signal. Flip-flops can be either simple (transparent or opaque) or clocked (synchronous or edge-triggered). Although the term flip-flop has historically referred generically to both simple and clocked circuits, in modern usage it is common to reserve the term flip-flop exclusively for discussing clocked circuits; the simple ones are commonly called latches. Using this terminology, a latch is level-sensitive, whereas a flip-flop is edge-sensitive. That is, when a latch is enabled it becomes transparent, while a flip-flop's output only changes on a single type (positive going or negative going) of clock edge. === Software === Feedback loops provide generic mechanisms for controlling the running, maintenance, and evolution of software and computing systems. Feedback loops are important models in the engineering of adaptive software, as they define the behaviour of the interactions among the control elements over the adaptation process, to guarantee system properties at run-time. Feedback loops and foundations of control theory have been successfully applied to computing systems. In particular, they have been applied to the development of products such as IBM Db2 and IBM Tivoli.
From a software perspective, the autonomic (MAPE, monitor analyze plan execute) loop proposed by researchers of IBM is another valuable contribution to the application of feedback loops to the control of dynamic properties and the design and evolution of autonomic software systems. ==== Software Development ==== ==== User interface design ==== Feedback is also a useful design principle for designing user interfaces. === Video feedback === Video feedback is the video equivalent of acoustic feedback. It involves a loop between a video camera input and a video output, e.g., a television screen or monitor. Aiming the camera at the display produces a complex video image based on the feedback. === Human resource management === == See also == == References == == Further reading == Katie Salen and Eric Zimmerman. Rules of Play. MIT Press. 2004. ISBN 0-262-24045-9. Chapter 18: Games as Cybernetic Systems. Korotayev A., Malkov A., Khaltourina D. Introduction to Social Macrodynamics: Secular Cycles and Millennial Trends. Moscow: URSS, 2006. ISBN 5-484-00559-0 Dijk, E., Cremer, D.D., Mulder, L.B., and Stouten, J. "How Do We React to Feedback in Social Dilemmas?" In Biel, Eek, Garling & Gustafsson, (eds.), New Issues and Paradigms in Research on Social Dilemmas, New York: Springer, 2008. == External links == Media related to Feedback at Wikimedia Commons
In statistics, ordinary least squares (OLS) is a type of linear least squares method for choosing the unknown parameters in a linear regression model (with fixed level-one effects of a linear function of a set of explanatory variables) by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable (values of the variable being observed) in the input dataset and the output of the (linear) function of the independent variable. Some sources consider OLS to be linear regression. Geometrically, this is seen as the sum of the squared distances, parallel to the axis of the dependent variable, between each data point in the set and the corresponding point on the regression surface—the smaller the differences, the better the model fits the data. The resulting estimator can be expressed by a simple formula, especially in the case of a simple linear regression, in which there is a single regressor on the right side of the regression equation. The OLS estimator is consistent for the level-one fixed effects when the regressors are exogenous and there is no perfect multicollinearity (rank condition), consistent for the variance estimate of the residuals when the regressors have finite fourth moments and—by the Gauss–Markov theorem—optimal in the class of linear unbiased estimators when the errors are homoscedastic and serially uncorrelated. Under these conditions, the method of OLS provides minimum-variance mean-unbiased estimation when the errors have finite variances. Under the additional assumption that the errors are normally distributed with zero mean, OLS is the maximum likelihood estimator that outperforms any non-linear unbiased estimator. == Linear model == Suppose the data consists of n {\displaystyle n} observations { x i , y i } i = 1 n {\displaystyle \left\{\mathbf {x} _{i},y_{i}\right\}_{i=1}^{n}} .
Each observation i {\displaystyle i} includes a scalar response y i {\displaystyle y_{i}} and a column vector x i {\displaystyle \mathbf {x} _{i}} of p {\displaystyle p} parameters (regressors), i.e., x i = [ x i 1 , x i 2 , … , x i p ] T {\displaystyle \mathbf {x} _{i}=\left[x_{i1},x_{i2},\dots ,x_{ip}\right]^{\operatorname {T} }} . In a linear regression model, the response variable, y i {\displaystyle y_{i}} , is a linear function of the regressors: y i = β 1 x i 1 + β 2 x i 2 + ⋯ + β p x i p + ε i , {\displaystyle y_{i}=\beta _{1}\ x_{i1}+\beta _{2}\ x_{i2}+\cdots +\beta _{p}\ x_{ip}+\varepsilon _{i},} or in vector form, y i = x i T β + ε i , {\displaystyle y_{i}=\mathbf {x} _{i}^{\operatorname {T} }{\boldsymbol {\beta }}+\varepsilon _{i},\,} where x i {\displaystyle \mathbf {x} _{i}} , as introduced previously, is a column vector of the i {\displaystyle i} -th observation of all the explanatory variables; β {\displaystyle {\boldsymbol {\beta }}} is a p × 1 {\displaystyle p\times 1} vector of unknown parameters; and the scalar ε i {\displaystyle \varepsilon _{i}} represents unobserved random variables (errors) of the i {\displaystyle i} -th observation. ε i {\displaystyle \varepsilon _{i}} accounts for the influences upon the responses y i {\displaystyle y_{i}} from sources other than the explanatory variables x i {\displaystyle \mathbf {x} _{i}} . 
This model can also be written in matrix notation as y = X β + ε , {\displaystyle \mathbf {y} =\mathbf {X} {\boldsymbol {\beta }}+{\boldsymbol {\varepsilon }},\,} where y {\displaystyle \mathbf {y} } and ε {\displaystyle {\boldsymbol {\varepsilon }}} are n × 1 {\displaystyle n\times 1} vectors of the response variables and the errors of the n {\displaystyle n} observations, and X {\displaystyle \mathbf {X} } is an n × p {\displaystyle n\times p} matrix of regressors, also sometimes called the design matrix, whose row i {\displaystyle i} is x i T {\displaystyle \mathbf {x} _{i}^{\operatorname {T} }} and contains the i {\displaystyle i} -th observations on all the explanatory variables. Typically, a constant term is included in the set of regressors X {\displaystyle \mathbf {X} } , say, by taking x i 1 = 1 {\displaystyle x_{i1}=1} for all i = 1 , … , n {\displaystyle i=1,\dots ,n} . The coefficient β 1 {\displaystyle \beta _{1}} corresponding to this regressor is called the intercept. Without the intercept, the fitted line is forced to cross the origin when x i = 0 → {\displaystyle x_{i}={\vec {0}}} . Regressors do not have to be independent for estimation to be consistent e.g. they may be non-linearly dependent. Short of perfect multicollinearity, parameter estimates may still be consistent; however, as multicollinearity rises the standard error around such estimates increases and reduces the precision of such estimates. When there is perfect multicollinearity, it is no longer possible to obtain unique estimates for the coefficients to the related regressors; estimation for these parameters cannot converge (thus, it cannot be consistent). As a concrete example where regressors are non-linearly dependent yet estimation may still be consistent, we might suspect the response depends linearly both on a value and its square; in which case we would include one regressor whose value is just the square of another regressor. 
In that case, the model would be quadratic in the second regressor, but nonetheless is still considered a linear model because the model is still linear in the parameters ( β {\displaystyle {\boldsymbol {\beta }}} ). === Matrix/vector formulation === Consider an overdetermined system ∑ j = 1 p x i j β j = y i , ( i = 1 , 2 , … , n ) , {\displaystyle \sum _{j=1}^{p}x_{ij}\beta _{j}=y_{i},\ (i=1,2,\dots ,n),} of n {\displaystyle n} linear equations in p {\displaystyle p} unknown coefficients, β 1 , β 2 , … , β p {\displaystyle \beta _{1},\beta _{2},\dots ,\beta _{p}} , with n > p {\displaystyle n>p} . This can be written in matrix form as X β = y , {\displaystyle \mathbf {X} {\boldsymbol {\beta }}=\mathbf {y} ,} where X = [ X 11 X 12 ⋯ X 1 p X 21 X 22 ⋯ X 2 p ⋮ ⋮ ⋱ ⋮ X n 1 X n 2 ⋯ X n p ] , β = [ β 1 β 2 ⋮ β p ] , y = [ y 1 y 2 ⋮ y n ] . {\displaystyle \mathbf {X} ={\begin{bmatrix}X_{11}&X_{12}&\cdots &X_{1p}\\X_{21}&X_{22}&\cdots &X_{2p}\\\vdots &\vdots &\ddots &\vdots \\X_{n1}&X_{n2}&\cdots &X_{np}\end{bmatrix}},\qquad {\boldsymbol {\beta }}={\begin{bmatrix}\beta _{1}\\\beta _{2}\\\vdots \\\beta _{p}\end{bmatrix}},\qquad \mathbf {y} ={\begin{bmatrix}y_{1}\\y_{2}\\\vdots \\y_{n}\end{bmatrix}}.} (Note: for a linear model as above, not all elements in X {\displaystyle \mathbf {X} } contain information on the data points. The first column is populated with ones, X i 1 = 1 {\displaystyle X_{i1}=1} . Only the other columns contain actual data. So here p {\displaystyle p} is equal to the number of regressors plus one).
Such a system usually has no exact solution, so the goal is instead to find the coefficients β {\displaystyle {\boldsymbol {\beta }}} which fit the equations "best", in the sense of solving the quadratic minimization problem β ^ = a r g m i n β S ( β ) , {\displaystyle {\hat {\boldsymbol {\beta }}}={\underset {\boldsymbol {\beta }}{\operatorname {arg\,min} }}\,S({\boldsymbol {\beta }}),} where the objective function S {\displaystyle S} is given by S ( β ) = ∑ i = 1 n | y i − ∑ j = 1 p X i j β j | 2 = ‖ y − X β ‖ 2 . {\displaystyle S({\boldsymbol {\beta }})=\sum _{i=1}^{n}\left|y_{i}-\sum _{j=1}^{p}X_{ij}\beta _{j}\right|^{2}=\left\|\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}\right\|^{2}.} A justification for choosing this criterion is given in Properties below. This minimization problem has a unique solution, provided that the p {\displaystyle p} columns of the matrix X {\displaystyle \mathbf {X} } are linearly independent, given by solving the so-called normal equations: ( X T X ) β ^ = X T y . {\displaystyle \left(\mathbf {X} ^{\operatorname {T} }\mathbf {X} \right){\hat {\boldsymbol {\beta }}}=\mathbf {X} ^{\operatorname {T} }\mathbf {y} \ .} The matrix X T X {\displaystyle \mathbf {X} ^{\operatorname {T} }\mathbf {X} } is known as the normal matrix or Gram matrix and the matrix X T y {\displaystyle \mathbf {X} ^{\operatorname {T} }\mathbf {y} } is known as the moment matrix of regressand by regressors. Finally, β ^ {\displaystyle {\hat {\boldsymbol {\beta }}}} is the coefficient vector of the least-squares hyperplane, expressed as β ^ = ( X ⊤ X ) − 1 X ⊤ y . {\displaystyle {\hat {\boldsymbol {\beta }}}=\left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} .} or β ^ = β + ( X ⊤ X ) − 1 X ⊤ ε . 
{\displaystyle {\hat {\boldsymbol {\beta }}}={\boldsymbol {\beta }}+\left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }{\boldsymbol {\varepsilon }}.} == Estimation == Suppose b is a "candidate" value for the parameter vector β. The quantity yi − xiTb, called the residual for the i-th observation, measures the vertical distance between the data point (xi, yi) and the hyperplane y = xTb, and thus assesses the degree of fit between the actual data and the model. The sum of squared residuals (SSR) (also called the error sum of squares (ESS) or residual sum of squares (RSS)) is a measure of the overall model fit: S ( b ) = ∑ i = 1 n ( y i − x i T b ) 2 = ( y − X b ) T ( y − X b ) , {\displaystyle S(b)=\sum _{i=1}^{n}(y_{i}-x_{i}^{\operatorname {T} }b)^{2}=(y-Xb)^{\operatorname {T} }(y-Xb),} where T denotes the matrix transpose, and the rows of X, denoting the values of all the independent variables associated with a particular value of the dependent variable, are Xi = xiT. The value of b which minimizes this sum is called the OLS estimator for β. The function S(b) is quadratic in b with positive-definite Hessian, and therefore this function possesses a unique global minimum at b = β ^ {\displaystyle b={\hat {\beta }}} , which can be given by the explicit formula[proof] β ^ = argmin b ∈ R p ⁡ S ( b ) = ( X T X ) − 1 X T y . {\displaystyle {\hat {\beta }}=\operatorname {argmin} _{b\in \mathbb {R} ^{p}}S(b)=(X^{\operatorname {T} }X)^{-1}X^{\operatorname {T} }y\ .} The product N = XT X is a Gram matrix, and its inverse, Q = N−1, is the cofactor matrix of β, closely related to its covariance matrix, Cβ. The matrix (XT X)−1 XT = Q XT is called the Moore–Penrose pseudoinverse matrix of X. This formulation highlights the point that estimation can be carried out if, and only if, there is no perfect multicollinearity between the explanatory variables (which would cause the Gram matrix to have no inverse). 
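As a numerical sketch (all data below is invented for illustration), solving the normal equations reproduces the answer from a generic least-squares routine, and appending a duplicate column shows how perfect multicollinearity makes the Gram matrix singular.

```python
import numpy as np

# Hypothetical data: design matrix, true coefficients, and noise level
# are invented for illustration.
rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Solve the normal equations (X'X) b = X'y directly, without forming
# the inverse explicitly.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_ls))   # both routes give the same answer

# Perfect multicollinearity: duplicating a column makes the Gram matrix
# X'X singular, so the normal equations have no unique solution.
X_bad = np.column_stack([X, X[:, 1]])
print(np.linalg.matrix_rank(X_bad.T @ X_bad))  # 3, not 4
```

Solving the linear system is preferred over computing the inverse in practice, both for speed and numerical stability.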
== Prediction == After we have estimated β, the fitted values (or predicted values) from the regression will be y ^ = X β ^ = P y , {\displaystyle {\hat {y}}=X{\hat {\beta }}=Py,} where P = X(XTX)−1XT is the projection matrix onto the space V spanned by the columns of X. This matrix P is also sometimes called the hat matrix because it "puts a hat" onto the variable y. Another matrix, closely related to P is the annihilator matrix M = In − P; this is a projection matrix onto the space orthogonal to V. Both matrices P and M are symmetric and idempotent (meaning that P2 = P and M2 = M), and relate to the data matrix X via identities PX = X and MX = 0. Matrix M creates the residuals from the regression: ε ^ = y − y ^ = y − X β ^ = M y = M ( X β + ε ) = ( M X ) β + M ε = M ε . {\displaystyle {\hat {\varepsilon }}=y-{\hat {y}}=y-X{\hat {\beta }}=My=M(X\beta +\varepsilon )=(MX)\beta +M\varepsilon =M\varepsilon .} The variances of the predicted values s y ^ i 2 {\displaystyle s_{{\hat {y}}_{i}}^{2}} are found in the main diagonal of the variance-covariance matrix of predicted values: C y ^ = s 2 P , {\displaystyle C_{\hat {y}}=s^{2}P,} where P is the projection matrix and s2 is the sample variance. The full matrix is very large; its diagonal elements can be calculated individually as: s y ^ i 2 = s 2 X i ( X T X ) − 1 X i T , {\displaystyle s_{{\hat {y}}_{i}}^{2}=s^{2}X_{i}(X^{T}X)^{-1}X_{i}^{T},} where Xi is the i-th row of matrix X. 
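These identities (symmetry, idempotence, PX = X, MX = 0) are easy to verify numerically; the data below is random and purely illustrative.

```python
import numpy as np

# Numerical check of the projection identities: P = X (X'X)^{-1} X'
# projects onto the column space of X, and M = I - P onto its
# orthogonal complement.

rng = np.random.default_rng(2)
n, p = 30, 3
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - P

print(np.allclose(P @ P, P), np.allclose(M @ M, M))   # idempotent
print(np.allclose(P @ X, X), np.allclose(M @ X, 0))   # PX = X, MX = 0
print(np.allclose(P @ y + M @ y, y))                  # fitted + residual = y
```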
== Sample statistics == Using these residuals we can estimate the sample variance s2 using the reduced chi-squared statistic: s 2 = ε ^ T ε ^ n − p = ( M y ) T M y n − p = y T M T M y n − p = y T M y n − p = S ( β ^ ) n − p , σ ^ 2 = n − p n s 2 {\displaystyle s^{2}={\frac {{\hat {\varepsilon }}^{\mathrm {T} }{\hat {\varepsilon }}}{n-p}}={\frac {(My)^{\mathrm {T} }My}{n-p}}={\frac {y^{\mathrm {T} }M^{\mathrm {T} }My}{n-p}}={\frac {y^{\mathrm {T} }My}{n-p}}={\frac {S({\hat {\beta }})}{n-p}},\qquad {\hat {\sigma }}^{2}={\frac {n-p}{n}}\;s^{2}} The denominator, n−p, is the statistical degrees of freedom. The first quantity, s2, is the OLS estimate for σ2, whereas the second, σ ^ 2 {\displaystyle \scriptstyle {\hat {\sigma }}^{2}} , is the MLE estimate for σ2. The two estimators are quite similar in large samples; the first estimator is always unbiased, while the second estimator is biased but has a smaller mean squared error. In practice s2 is used more often, since it is more convenient for the hypothesis testing. The square root of s2 is called the regression standard error, standard error of the regression, or standard error of the equation. It is common to assess the goodness-of-fit of the OLS regression by comparing how much the initial variation in the sample can be reduced by regressing onto X. 
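The relationship between the two variance estimates above can be checked on synthetic data (everything below is invented for illustration): the MLE divides the residual sum of squares by n, while s² divides by the degrees of freedom n − p.

```python
import numpy as np

# Sketch of the two variance estimates on invented data.
rng = np.random.default_rng(3)
n, p = 50, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.ones(p) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat

s2 = resid @ resid / (n - p)      # unbiased OLS estimate of sigma^2
sigma2_mle = resid @ resid / n    # biased but lower-MSE MLE estimate
print(np.isclose(sigma2_mle, (n - p) / n * s2))  # True
```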
The coefficient of determination R2 is defined as a ratio of "explained" variance to the "total" variance of the dependent variable y, in the cases where the regression sum of squares equals the sum of squares of residuals: R 2 = ∑ ( y ^ i − y ¯ ) 2 ∑ ( y i − y ¯ ) 2 = y T P T L P y y T L y = 1 − y T M y y T L y = 1 − R S S T S S {\displaystyle R^{2}={\frac {\sum ({\hat {y}}_{i}-{\overline {y}})^{2}}{\sum (y_{i}-{\overline {y}})^{2}}}={\frac {y^{\mathrm {T} }P^{\mathrm {T} }LPy}{y^{\mathrm {T} }Ly}}=1-{\frac {y^{\mathrm {T} }My}{y^{\mathrm {T} }Ly}}=1-{\frac {\rm {RSS}}{\rm {TSS}}}} where TSS is the total sum of squares for the dependent variable, L = I n − 1 n J n {\textstyle L=I_{n}-{\frac {1}{n}}J_{n}} , and J n {\textstyle J_{n}} is an n×n matrix of ones. ( L {\displaystyle L} is a centering matrix which is equivalent to regression on a constant; it simply subtracts the mean from a variable.) In order for R2 to be meaningful, the matrix X of data on regressors must contain a column vector of ones to represent the constant whose coefficient is the regression intercept. In that case, R2 will always be a number between 0 and 1, with values close to 1 indicating a good degree of fit. === Simple linear regression model === If the data matrix X contains only two variables, a constant and a scalar regressor xi, then this is called the "simple regression model". This case is often considered in the beginner statistics classes, as it provides much simpler formulas even suitable for manual calculation. The parameters are commonly denoted as (α, β): y i = α + β x i + ε i . 
{\displaystyle y_{i}=\alpha +\beta x_{i}+\varepsilon _{i}.} The least squares estimates in this case are given by simple formulas β ^ = ∑ i = 1 n ( x i − x ¯ ) ( y i − y ¯ ) ∑ i = 1 n ( x i − x ¯ ) 2 α ^ = y ¯ − β ^ x ¯ , {\displaystyle {\begin{aligned}{\widehat {\beta }}&={\frac {\sum _{i=1}^{n}{(x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}}{\sum _{i=1}^{n}{(x_{i}-{\bar {x}})^{2}}}}\\[2pt]{\widehat {\alpha }}&={\bar {y}}-{\widehat {\beta }}\,{\bar {x}}\ ,\end{aligned}}} == Alternative derivations == In the previous section the least squares estimator β ^ {\displaystyle {\hat {\beta }}} was obtained as a value that minimizes the sum of squared residuals of the model. However it is also possible to derive the same estimator from other approaches. In all cases the formula for OLS estimator remains the same: ^β = (XTX)−1XTy; the only difference is in how we interpret this result. === Projection === For mathematicians, OLS is an approximate solution to an overdetermined system of linear equations Xβ ≈ y, where β is the unknown. Assuming the system cannot be solved exactly (the number of equations n is much larger than the number of unknowns p), we are looking for a solution that could provide the smallest discrepancy between the right- and left- hand sides. In other words, we are looking for the solution that satisfies β ^ = a r g min β ‖ y − X β ‖ 2 , {\displaystyle {\hat {\beta }}={\rm {arg}}\min _{\beta }\,\lVert \mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}\rVert ^{2},} where ‖·‖ is the standard L2 norm in the n-dimensional Euclidean space Rn. The predicted quantity Xβ is just a certain linear combination of the vectors of regressors. Thus, the residual vector y − Xβ will have the smallest length when y is projected orthogonally onto the linear subspace spanned by the columns of X. The OLS estimator β ^ {\displaystyle {\hat {\beta }}} in this case can be interpreted as the coefficients of vector decomposition of ^y = Py along the basis of X. 
In other words, the gradient equations at the minimum can be written as: ( y − X β ^ ) ⊤ X = 0. {\displaystyle (\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}})^{\top }\mathbf {X} =0.} A geometrical interpretation of these equations is that the vector of residuals, y − X β ^ {\displaystyle \mathbf {y} -X{\hat {\boldsymbol {\beta }}}} is orthogonal to the column space of X, since the dot product ( y − X β ^ ) ⋅ X v {\displaystyle (\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}})\cdot \mathbf {X} \mathbf {v} } is equal to zero for any conformal vector, v. This means that y − X β ^ {\displaystyle \mathbf {y} -\mathbf {X} {\boldsymbol {\hat {\beta }}}} is the shortest of all possible vectors y − X β {\displaystyle \mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}} , that is, the variance of the residuals is the minimum possible. This is illustrated at the right. Introducing γ ^ {\displaystyle {\hat {\boldsymbol {\gamma }}}} and a matrix K with the assumption that a matrix [ X K ] {\displaystyle [\mathbf {X} \ \mathbf {K} ]} is non-singular and KT X = 0 (cf. Orthogonal projections), the residual vector should satisfy the following equation: r ^ := y − X β ^ = K γ ^ . {\displaystyle {\hat {\mathbf {r} }}:=\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}}=\mathbf {K} {\hat {\boldsymbol {\gamma }}}.} The equation and solution of linear least squares are thus described as follows: y = [ X K ] [ β ^ γ ^ ] , ⇒ [ β ^ γ ^ ] = [ X K ] − 1 y = [ ( X ⊤ X ) − 1 X ⊤ ( K ⊤ K ) − 1 K ⊤ ] y . 
{\displaystyle {\begin{aligned}\mathbf {y} &={\begin{bmatrix}\mathbf {X} &\mathbf {K} \end{bmatrix}}{\begin{bmatrix}{\hat {\boldsymbol {\beta }}}\\{\hat {\boldsymbol {\gamma }}}\end{bmatrix}},\\{}\Rightarrow {\begin{bmatrix}{\hat {\boldsymbol {\beta }}}\\{\hat {\boldsymbol {\gamma }}}\end{bmatrix}}&={\begin{bmatrix}\mathbf {X} &\mathbf {K} \end{bmatrix}}^{-1}\mathbf {y} ={\begin{bmatrix}\left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\\\left(\mathbf {K} ^{\top }\mathbf {K} \right)^{-1}\mathbf {K} ^{\top }\end{bmatrix}}\mathbf {y} .\end{aligned}}} Another way of looking at it is to consider the regression line to be a weighted average of the lines passing through the combination of any two points in the dataset. Although this way of calculation is more computationally expensive, it provides a better intuition on OLS. === Maximum likelihood === The OLS estimator is identical to the maximum likelihood estimator (MLE) under the normality assumption for the error terms.[proof] This normality assumption has historical importance, as it provided the basis for the early work in linear regression analysis by Yule and Pearson. From the properties of MLE, we can infer that the OLS estimator is asymptotically efficient (in the sense of attaining the Cramér–Rao bound for variance) if the normality assumption is satisfied. === Generalized method of moments === In iid case the OLS estimator can also be viewed as a GMM estimator arising from the moment conditions E [ x i ( y i − x i T β ) ] = 0. {\displaystyle \mathrm {E} {\big [}\,x_{i}\left(y_{i}-x_{i}^{\operatorname {T} }\beta \right)\,{\big ]}=0.} These moment conditions state that the regressors should be uncorrelated with the errors. Since xi is a p-vector, the number of moment conditions is equal to the dimension of the parameter vector β, and thus the system is exactly identified. This is the so-called classical GMM case, when the estimator does not depend on the choice of the weighting matrix. 
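At the OLS solution, the sample analogue of these moment conditions holds exactly, since the normal equations state that X'(y − Xβ̂) = 0. A quick numerical check on invented data:

```python
import numpy as np

# The sample moment conditions (1/n) X'(y - X beta_hat) vanish exactly
# at the OLS solution (data below is invented for illustration).

rng = np.random.default_rng(6)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
moments = X.T @ (y - X @ beta_hat) / n
print(np.allclose(moments, 0))  # True: regressors uncorrelated with residuals
```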
Note that the original strict exogeneity assumption E[εi | xi] = 0 implies a far richer set of moment conditions than stated above. In particular, this assumption implies that for any vector-function ƒ, the moment condition E[ƒ(xi)·εi] = 0 will hold. However it can be shown using the Gauss–Markov theorem that the optimal choice of function ƒ is to take ƒ(x) = x, which results in the moment equation posted above. == Properties == === Assumptions === There are several different frameworks in which the linear regression model can be cast in order to make the OLS technique applicable. Each of these settings produces the same formulas and same results. The only difference is the interpretation and the assumptions which have to be imposed in order for the method to give meaningful results. The choice of the applicable framework depends mostly on the nature of data in hand, and on the inference task which has to be performed. One of the lines of difference in interpretation is whether to treat the regressors as random variables, or as predefined constants. In the first case (random design) the regressors xi are random and sampled together with the yi's from some population, as in an observational study. This approach allows for more natural study of the asymptotic properties of the estimators. In the other interpretation (fixed design), the regressors X are treated as known constants set by a design, and y is sampled conditionally on the values of X as in an experiment. For practical purposes, this distinction is often unimportant, since estimation and inference is carried out while conditioning on X. All results stated in this article are within the random design framework. ==== Classical linear regression model ==== The classical model focuses on the "finite sample" estimation and inference, meaning that the number of observations n is fixed. 
This contrasts with the other approaches, which study the asymptotic behavior of OLS, and in which the behavior at a large number of samples is studied. Correct specification. The linear functional form must coincide with the form of the actual data-generating process. Strict exogeneity. The errors in the regression should have conditional mean zero: E ⁡ [ ε ∣ X ] = 0. {\displaystyle \operatorname {E} [\,\varepsilon \mid X\,]=0.} The immediate consequence of the exogeneity assumption is that the errors have mean zero: E[ε] = 0 (for the law of total expectation), and that the regressors are uncorrelated with the errors: E[XTε] = 0. The exogeneity assumption is critical for the OLS theory. If it holds then the regressor variables are called exogenous. If it does not, then those regressors that are correlated with the error term are called endogenous, and the OLS estimator becomes biased. In such case the method of instrumental variables may be used to carry out inference. No linear dependence. The regressors in X must all be linearly independent. Mathematically, this means that the matrix X must have full column rank almost surely: Pr [ rank ⁡ ( X ) = p ] = 1. {\displaystyle \Pr \!{\big [}\,\operatorname {rank} (X)=p\,{\big ]}=1.} Usually, it is also assumed that the regressors have finite moments up to at least the second moment. Then the matrix Qxx = E[XTX / n] is finite and positive semi-definite. When this assumption is violated the regressors are called linearly dependent or perfectly multicollinear. In such case the value of the regression coefficient β cannot be learned, although prediction of y values is still possible for new values of the regressors that lie in the same linearly dependent subspace. Spherical errors: Var ⁡ [ ε ∣ X ] = σ 2 I n , {\displaystyle \operatorname {Var} [\,\varepsilon \mid X\,]=\sigma ^{2}I_{n},} where In is the identity matrix in dimension n, and σ2 is a parameter which determines the variance of each observation. 
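Among these assumptions, the no-linear-dependence condition is the easiest to check mechanically, via the rank of X. A small sketch (the matrix is illustrative, chosen so that one column duplicates another up to a scalar):

```python
import numpy as np

X = np.array([[1.0, 2.0, 4.0],
              [1.0, 3.0, 6.0],
              [1.0, 5.0, 10.0],
              [1.0, 7.0, 14.0]])  # third column = 2 * second column

rank = np.linalg.matrix_rank(X)
full_rank = rank == X.shape[1]
# rank is 2 < p = 3: X'X is singular and beta is not identifiable
print(rank, full_rank)
```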
This σ2 is considered a nuisance parameter in the model, although usually it is also estimated. If this assumption is violated then the OLS estimates are still valid, but no longer efficient. It is customary to split this assumption into two parts: Homoscedasticity: E[ εi2 | X ] = σ2, which means that the error term has the same variance σ2 in each observation. When this requirement is violated this is called heteroscedasticity, in such case a more efficient estimator would be weighted least squares. If the errors have infinite variance then the OLS estimates will also have infinite variance (although by the law of large numbers they will nonetheless tend toward the true values so long as the errors have zero mean). In this case, robust estimation techniques are recommended. No autocorrelation: the errors are uncorrelated between observations: E[ εiεj | X ] = 0 for i ≠ j. This assumption may be violated in the context of time series data, panel data, cluster samples, hierarchical data, repeated measures data, longitudinal data, and other data with dependencies. In such cases generalized least squares provides a better alternative than the OLS. Another expression for autocorrelation is serial correlation. Normality. It is sometimes additionally assumed that the errors have normal distribution conditional on the regressors: ε ∣ X ∼ N ( 0 , σ 2 I n ) . {\displaystyle \varepsilon \mid X\sim {\mathcal {N}}(0,\sigma ^{2}I_{n}).} This assumption is not needed for the validity of the OLS method, although certain additional finite-sample properties can be established in case when it does (especially in the area of hypotheses testing). Also when the errors are normal, the OLS estimator is equivalent to the maximum likelihood estimator (MLE), and therefore it is asymptotically efficient in the class of all regular estimators. 
Importantly, the normality assumption applies only to the error terms; contrary to a popular misconception, the response (dependent) variable is not required to be normally distributed. ==== Independent and identically distributed (iid) ==== In some applications, especially with cross-sectional data, an additional assumption is imposed — that all observations are independent and identically distributed. This means that all observations are taken from a random sample which makes all the assumptions listed earlier simpler and easier to interpret. Also this framework allows one to state asymptotic results (as the sample size n → ∞), which are understood as a theoretical possibility of fetching new independent observations from the data generating process. The list of assumptions in this case is: IID observations: (xi, yi) is independent from, and has the same distribution as, (xj, yj) for all i ≠ j; No perfect multicollinearity: Qxx = E[ xi xiT ] is a positive-definite matrix; Exogeneity: E[ εi | xi ] = 0; Homoscedasticity: Var[ εi | xi ] = σ2. ==== Time series model ==== The stochastic process {xi, yi} is stationary and ergodic; if {xi, yi} is nonstationary, OLS results are often spurious unless {xi, yi} is co-integrating. The regressors are predetermined: E[xiεi] = 0 for all i = 1, ..., n; The p×p matrix Qxx = E[ xi xiT ] is of full rank, and hence positive-definite; {xiεi} is a martingale difference sequence, with a finite matrix of second moments Qxxε² = E[ εi2xi xiT ]. === Finite sample properties === First of all, under the strict exogeneity assumption the OLS estimators β ^ {\displaystyle \scriptstyle {\hat {\beta }}} and s2 are unbiased, meaning that their expected values coincide with the true values of the parameters:[proof] E ⁡ [ β ^ ∣ X ] = β , E ⁡ [ s 2 ∣ X ] = σ 2 . 
{\displaystyle \operatorname {E} [\,{\hat {\beta }}\mid X\,]=\beta ,\quad \operatorname {E} [\,s^{2}\mid X\,]=\sigma ^{2}.} If the strict exogeneity does not hold (as is the case with many time series models, where exogeneity is assumed only with respect to the past shocks but not the future ones), then these estimators will be biased in finite samples. The variance-covariance matrix (or simply covariance matrix) of β ^ {\displaystyle \scriptstyle {\hat {\beta }}} is equal to Var ⁡ [ β ^ ∣ X ] = σ 2 ( X T X ) − 1 = σ 2 Q . {\displaystyle \operatorname {Var} [\,{\hat {\beta }}\mid X\,]=\sigma ^{2}\left(X^{\operatorname {T} }X\right)^{-1}=\sigma ^{2}Q.} In particular, the standard error of each coefficient β ^ j {\displaystyle \scriptstyle {\hat {\beta }}_{j}} is equal to square root of the j-th diagonal element of this matrix. The estimate of this standard error is obtained by replacing the unknown quantity σ2 with its estimate s2. Thus, s . e . ^ ( β ^ j ) = s 2 ( X T X ) j j − 1 {\displaystyle {\widehat {\operatorname {s.\!e.} }}({\hat {\beta }}_{j})={\sqrt {s^{2}\left(X^{\operatorname {T} }X\right)_{jj}^{-1}}}} It can also be easily shown that the estimator β ^ {\displaystyle \scriptstyle {\hat {\beta }}} is uncorrelated with the residuals from the model: Cov ⁡ [ β ^ , ε ^ ∣ X ] = 0. {\displaystyle \operatorname {Cov} [\,{\hat {\beta }},{\hat {\varepsilon }}\mid X\,]=0.} The Gauss–Markov theorem states that under the spherical errors assumption (that is, the errors should be uncorrelated and homoscedastic) the estimator β ^ {\displaystyle \scriptstyle {\hat {\beta }}} is efficient in the class of linear unbiased estimators. This is called the best linear unbiased estimator (BLUE). 
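The covariance matrix and coefficient standard errors above translate directly into code. A sketch with simulated data (the design, coefficients, and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([2.0, 3.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

s2 = resid @ resid / (n - p)          # unbiased estimate of sigma^2
cov_beta = s2 * XtX_inv               # estimate of Var[beta_hat | X]
std_err = np.sqrt(np.diag(cov_beta))  # s.e. of each coefficient
print(beta_hat, std_err)
```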
Efficiency should be understood as if we were to find some other estimator β ~ {\displaystyle \scriptstyle {\tilde {\beta }}} which would be linear in y and unbiased, then Var ⁡ [ β ~ ∣ X ] − Var ⁡ [ β ^ ∣ X ] ≥ 0 {\displaystyle \operatorname {Var} [\,{\tilde {\beta }}\mid X\,]-\operatorname {Var} [\,{\hat {\beta }}\mid X\,]\geq 0} in the sense that this is a nonnegative-definite matrix. This theorem establishes optimality only in the class of linear unbiased estimators, which is quite restrictive. Depending on the distribution of the error terms ε, other, non-linear estimators may provide better results than OLS. ==== Assuming normality ==== The properties listed so far are all valid regardless of the underlying distribution of the error terms. However, if you are willing to assume that the normality assumption holds (that is, that ε ~ N(0, σ2In)), then additional properties of the OLS estimators can be stated. The estimator β ^ {\displaystyle \scriptstyle {\hat {\beta }}} is normally distributed, with mean and variance as given before: β ^ ∼ N ( β , σ 2 ( X T X ) − 1 ) . {\displaystyle {\hat {\beta }}\ \sim \ {\mathcal {N}}{\big (}\beta ,\ \sigma ^{2}(X^{\mathrm {T} }X)^{-1}{\big )}.} This estimator reaches the Cramér–Rao bound for the model, and thus is optimal in the class of all unbiased estimators. Note that unlike the Gauss–Markov theorem, this result establishes optimality among both linear and non-linear estimators, but only in the case of normally distributed error terms. The estimator s2 will be proportional to the chi-squared distribution: s 2 ∼ σ 2 n − p ⋅ χ n − p 2 {\displaystyle s^{2}\ \sim \ {\frac {\sigma ^{2}}{n-p}}\cdot \chi _{n-p}^{2}} The variance of this estimator is equal to 2σ4/(n − p), which does not attain the Cramér–Rao bound of 2σ4/n. However it was shown that there are no unbiased estimators of σ2 with variance smaller than that of the estimator s2. 
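These distributional claims are easy to probe by simulation: across repeated samples, β̂ should be centred at β and s² at σ². A Monte Carlo sketch with an illustrative design:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma = 30, 2, 1.5
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
beta = np.array([1.0, 2.0])
XtX_inv = np.linalg.inv(X.T @ X)

betas, s2s = [], []
for _ in range(5000):
    y = X @ beta + sigma * rng.normal(size=n)
    b = XtX_inv @ X.T @ y
    r = y - X @ b
    betas.append(b)
    s2s.append(r @ r / (n - p))

betas = np.array(betas)
# Consistent with beta_hat ~ N(beta, sigma^2 (X'X)^{-1}) and
# s^2 ~ sigma^2/(n-p) * chi^2_{n-p}: both are centred at the truth
print(betas.mean(axis=0), np.mean(s2s))
```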
If we are willing to allow biased estimators, and consider the class of estimators that are proportional to the sum of squared residuals (SSR) of the model, then the best (in the sense of the mean squared error) estimator in this class will be ~σ2 = SSR / (n − p + 2), which even beats the Cramér–Rao bound in case when there is only one regressor (p = 1). Moreover, the estimators β ^ {\displaystyle \scriptstyle {\hat {\beta }}} and s2 are independent, the fact which comes in useful when constructing the t- and F-tests for the regression. ==== Influential observations ==== As was mentioned before, the estimator β ^ {\displaystyle {\hat {\beta }}} is linear in y, meaning that it represents a linear combination of the dependent variables yi. The weights in this linear combination are functions of the regressors X, and generally are unequal. The observations with high weights are called influential because they have a more pronounced effect on the value of the estimator. To analyze which observations are influential we remove a specific j-th observation and consider how much the estimated quantities are going to change (similarly to the jackknife method). It can be shown that the change in the OLS estimator for β will be equal to β ^ ( j ) − β ^ = − 1 1 − h j ( X T X ) − 1 x j T ε ^ j , {\displaystyle {\hat {\beta }}^{(j)}-{\hat {\beta }}=-{\frac {1}{1-h_{j}}}(X^{\mathrm {T} }X)^{-1}x_{j}^{\mathrm {T} }{\hat {\varepsilon }}_{j}\,,} where hj = xjT (XTX)−1xj is the j-th diagonal element of the hat matrix P, and xj is the vector of regressors corresponding to the j-th observation. 
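The deletion formula above can be verified against a direct refit with the j-th observation removed (treating xⱼ as a column vector; data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

j = 5                                # observation to delete
x_j = X[j]
h_j = x_j @ XtX_inv @ x_j            # leverage of observation j
delta = -XtX_inv @ x_j * resid[j] / (1.0 - h_j)

# Direct refit with observation j removed
mask = np.arange(n) != j
beta_hat_j = np.linalg.solve(X[mask].T @ X[mask], X[mask].T @ y[mask])

print(np.allclose(beta_hat_j - beta_hat, delta))  # True
```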
Similarly, the change in the predicted value for j-th observation resulting from omitting that observation from the dataset will be equal to y ^ j ( j ) − y ^ j = x j T β ^ ( j ) − x j T β ^ = − h j 1 − h j ε ^ j {\displaystyle {\hat {y}}_{j}^{(j)}-{\hat {y}}_{j}=x_{j}^{\mathrm {T} }{\hat {\beta }}^{(j)}-x_{j}^{\operatorname {T} }{\hat {\beta }}=-{\frac {h_{j}}{1-h_{j}}}\,{\hat {\varepsilon }}_{j}} From the properties of the hat matrix, 0 ≤ hj ≤ 1, and they sum up to p, so that on average hj ≈ p/n. These quantities hj are called the leverages, and observations with high hj are called leverage points. Usually the observations with high leverage ought to be scrutinized more carefully, in case they are erroneous, or outliers, or in some other way atypical of the rest of the dataset. ==== Partitioned regression ==== Sometimes the variables and corresponding parameters in the regression can be logically split into two groups, so that the regression takes form y = X 1 β 1 + X 2 β 2 + ε , {\displaystyle y=X_{1}\beta _{1}+X_{2}\beta _{2}+\varepsilon ,} where X1 and X2 have dimensions n×p1, n×p2, and β1, β2 are p1×1 and p2×1 vectors, with p1 + p2 = p. The Frisch–Waugh–Lovell theorem states that in this regression the residuals ε ^ {\displaystyle {\hat {\varepsilon }}} and the OLS estimate β ^ 2 {\displaystyle \scriptstyle {\hat {\beta }}_{2}} will be numerically identical to the residuals and the OLS estimate for β2 in the following regression: M 1 y = M 1 X 2 β 2 + η , {\displaystyle M_{1}y=M_{1}X_{2}\beta _{2}+\eta \,,} where M1 is the annihilator matrix for regressors X1. The theorem can be used to establish a number of theoretical results. For example, having a regression with a constant and another regressor is equivalent to subtracting the means from the dependent variable and the regressor and then running the regression for the de-meaned variables but without the constant term. 
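The demeaning consequence of the Frisch–Waugh–Lovell theorem can be confirmed in a few lines (synthetic data):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40
x = rng.normal(size=n)
y = 3.0 + 2.0 * x + rng.normal(size=n)

# Full regression on [1, x]
X = np.column_stack([np.ones(n), x])
slope_full = np.linalg.solve(X.T @ X, X.T @ y)[1]

# Frisch-Waugh-Lovell: demean y and x, then regress without a constant
x_c, y_c = x - x.mean(), y - y.mean()
slope_fwl = (x_c @ y_c) / (x_c @ x_c)

print(np.isclose(slope_full, slope_fwl))  # True
```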
==== Constrained estimation ==== Suppose it is known that the coefficients in the regression satisfy a system of linear equations A : Q T β = c , {\displaystyle A\colon \quad Q^{\operatorname {T} }\beta =c,\,} where Q is a p×q matrix of full rank, and c is a q×1 vector of known constants, where q < p. In this case least squares estimation is equivalent to minimizing the sum of squared residuals of the model subject to the constraint A. The constrained least squares (CLS) estimator can be given by an explicit formula: β ^ c = β ^ − ( X T X ) − 1 Q ( Q T ( X T X ) − 1 Q ) − 1 ( Q T β ^ − c ) . {\displaystyle {\hat {\beta }}^{c}={\hat {\beta }}-(X^{\operatorname {T} }X)^{-1}Q{\Big (}Q^{\operatorname {T} }(X^{\operatorname {T} }X)^{-1}Q{\Big )}^{-1}(Q^{\operatorname {T} }{\hat {\beta }}-c).} This expression for the constrained estimator is valid as long as the matrix XTX is invertible. It was assumed from the beginning of this article that this matrix is of full rank, and it was noted that when the rank condition fails, β will not be identifiable. However it may happen that adding the restriction A makes β identifiable, in which case one would like to find the formula for the estimator. The estimator is equal to β ^ c = R ( R T X T X R ) − 1 R T X T y + ( I p − R ( R T X T X R ) − 1 R T X T X ) Q ( Q T Q ) − 1 c , {\displaystyle {\hat {\beta }}^{c}=R(R^{\operatorname {T} }X^{\operatorname {T} }XR)^{-1}R^{\operatorname {T} }X^{\operatorname {T} }y+{\Big (}I_{p}-R(R^{\operatorname {T} }X^{\operatorname {T} }XR)^{-1}R^{\operatorname {T} }X^{\operatorname {T} }X{\Big )}Q(Q^{\operatorname {T} }Q)^{-1}c,} where R is a p×(p − q) matrix such that the matrix [Q R] is non-singular, and RTQ = 0. Such a matrix can always be found, although generally it is not unique. The second formula coincides with the first in case when XTX is invertible. === Large sample properties === The least squares estimators are point estimates of the linear regression model parameters β. 
However, generally we also want to know how close those estimates might be to the true values of parameters. In other words, we want to construct the interval estimates. Since we have not made any assumption about the distribution of error term εi, it is impossible to infer the distribution of the estimators β ^ {\displaystyle {\hat {\beta }}} and σ ^ 2 {\displaystyle {\hat {\sigma }}^{2}} . Nevertheless, we can apply the central limit theorem to derive their asymptotic properties as sample size n goes to infinity. While the sample size is necessarily finite, it is customary to assume that n is "large enough" so that the true distribution of the OLS estimator is close to its asymptotic limit. We can show that under the model assumptions, the least squares estimator for β is consistent (that is β ^ {\displaystyle {\hat {\beta }}} converges in probability to β) and asymptotically normal:[proof] ( β ^ − β ) → d N ( 0 , σ 2 Q x x − 1 ) , {\displaystyle ({\hat {\beta }}-\beta )\ {\xrightarrow {d}}\ {\mathcal {N}}{\big (}0,\;\sigma ^{2}Q_{xx}^{-1}{\big )},} where Q x x = X T X . {\displaystyle Q_{xx}=X^{\operatorname {T} }X.} ==== Intervals ==== Using this asymptotic distribution, approximate two-sided confidence intervals for the j-th component of the vector β ^ {\displaystyle {\hat {\beta }}} can be constructed as β j ∈ [ β ^ j ± q 1 − α 2 N ( 0 , 1 ) σ ^ 2 [ Q x x − 1 ] j j ] {\displaystyle \beta _{j}\in {\bigg [}\ {\hat {\beta }}_{j}\pm q_{1-{\frac {\alpha }{2}}}^{{\mathcal {N}}(0,1)}\!{\sqrt {{\hat {\sigma }}^{2}\left[Q_{xx}^{-1}\right]_{jj}}}\ {\bigg ]}} at the 1 − α confidence level, where q denotes the quantile function of standard normal distribution, and [·]jj is the j-th diagonal element of a matrix. Similarly, the least squares estimator for σ2 is also consistent and asymptotically normal (provided that the fourth moment of εi exists) with limiting distribution ( σ ^ 2 − σ 2 ) → d N ( 0 , E ⁡ [ ε i 4 ] − σ 4 ) . 
{\displaystyle ({\hat {\sigma }}^{2}-\sigma ^{2})\ {\xrightarrow {d}}\ {\mathcal {N}}\left(0,\;\operatorname {E} \left[\varepsilon _{i}^{4}\right]-\sigma ^{4}\right).} These asymptotic distributions can be used for prediction, testing hypotheses, constructing other estimators, etc. As an example, consider the problem of prediction. Suppose x 0 {\displaystyle x_{0}} is some point within the domain of distribution of the regressors, and one wants to know what the response variable would have been at that point. The mean response is the quantity y 0 = x 0 T β {\displaystyle y_{0}=x_{0}^{\mathrm {T} }\beta } , whereas the predicted response is y ^ 0 = x 0 T β ^ {\displaystyle {\hat {y}}_{0}=x_{0}^{\mathrm {T} }{\hat {\beta }}} . Clearly the predicted response is a random variable; its distribution can be derived from that of β ^ {\displaystyle {\hat {\beta }}} : ( y ^ 0 − y 0 ) → d N ( 0 , σ 2 x 0 T Q x x − 1 x 0 ) , {\displaystyle \left({\hat {y}}_{0}-y_{0}\right)\ {\xrightarrow {d}}\ {\mathcal {N}}\left(0,\;\sigma ^{2}x_{0}^{\mathrm {T} }Q_{xx}^{-1}x_{0}\right),} which allows confidence intervals for the mean response y 0 {\displaystyle y_{0}} to be constructed: y 0 ∈ [ x 0 T β ^ ± q 1 − α 2 N ( 0 , 1 ) σ ^ 2 x 0 T Q x x − 1 x 0 ] {\displaystyle y_{0}\in \left[\ x_{0}^{\mathrm {T} }{\hat {\beta }}\pm q_{1-{\frac {\alpha }{2}}}^{{\mathcal {N}}(0,1)}\!{\sqrt {{\hat {\sigma }}^{2}x_{0}^{\mathrm {T} }Q_{xx}^{-1}x_{0}}}\ \right]} at the 1 − α confidence level. ==== Hypothesis testing ==== Two hypothesis tests are particularly widely used. First, one wants to know if the estimated regression equation is any better than simply predicting that all values of the response variable equal its sample mean (if not, it is said to have no explanatory power). The null hypothesis of no explanatory value of the estimated regression is tested using an F-test.
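The interval and test statistics just described can be computed directly. A sketch on simulated data (design, coefficients, and the hard-coded normal quantile are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.0, 2.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
s2 = resid @ resid / (n - p)
se = np.sqrt(s2 * np.diag(XtX_inv))

# Asymptotic 95% confidence intervals: beta_hat_j +/- z_{0.975} * se_j
z = 1.959964                          # 0.975 quantile of N(0, 1)
ci = np.column_stack([beta_hat - z * se, beta_hat + z * se])

# t-statistic per coefficient, and F-statistic for "all slopes are zero"
t_stats = beta_hat / se
rss = resid @ resid                   # residual sum of squares
tss = np.sum((y - y.mean()) ** 2)     # total sum of squares
f_stat = ((tss - rss) / (p - 1)) / (rss / (n - p))
print(ci, t_stats, f_stat)
```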
If the calculated F-value is found to be large enough to exceed its critical value for the pre-chosen level of significance, the null hypothesis is rejected and the alternative hypothesis, that the regression has explanatory power, is accepted. Otherwise, the null hypothesis of no explanatory power is accepted. Second, for each explanatory variable of interest, one wants to know whether its estimated coefficient differs significantly from zero—that is, whether this particular explanatory variable in fact has explanatory power in predicting the response variable. Here the null hypothesis is that the true coefficient is zero. This hypothesis is tested by computing the coefficient's t-statistic, as the ratio of the coefficient estimate to its standard error. If the t-statistic is larger than a predetermined value, the null hypothesis is rejected and the variable is found to have explanatory power, with its coefficient significantly different from zero. Otherwise, the null hypothesis of a zero value of the true coefficient is accepted. In addition, the Chow test is used to test whether two subsamples both have the same underlying true coefficient values. The sum of squared residuals of regressions on each of the subsets and on the combined data set are compared by computing an F-statistic; if this exceeds a critical value, the null hypothesis of no difference between the two subsets is rejected; otherwise, it is accepted. == Example with real data == The following data set gives average heights and weights for American women aged 30–39 (source: The World Almanac and Book of Facts, 1975). When only one dependent variable is being modeled, a scatterplot will suggest the form and strength of the relationship between the dependent variable and regressors. It might also reveal outliers, heteroscedasticity, and other aspects of the data that may complicate the interpretation of a fitted regression model. 
The scatterplot suggests that the relationship is strong and can be approximated as a quadratic function. OLS can handle non-linear relationships by introducing the regressor HEIGHT2. The regression model then becomes a multiple linear model: w i = β 1 + β 2 h i + β 3 h i 2 + ε i . {\displaystyle w_{i}=\beta _{1}+\beta _{2}h_{i}+\beta _{3}h_{i}^{2}+\varepsilon _{i}.} The output from most popular statistical packages will look similar to this: In this table: The Value column gives the least squares estimates of parameters βj The Std error column shows standard errors of each coefficient estimate: σ ^ j = ( σ ^ 2 [ Q x x − 1 ] j j ) 1 2 {\displaystyle {\hat {\sigma }}_{j}=\left({\hat {\sigma }}^{2}\left[Q_{xx}^{-1}\right]_{jj}\right)^{\frac {1}{2}}} The t-statistic and p-value columns are testing whether any of the coefficients might be equal to zero. The t-statistic is calculated simply as t = β ^ j / σ ^ j {\displaystyle t={\hat {\beta }}_{j}/{\hat {\sigma }}_{j}} . If the errors ε follow a normal distribution, t follows a Student-t distribution. Under weaker conditions, t is asymptotically normal. Large values of t indicate that the null hypothesis can be rejected and that the corresponding coefficient is not zero. The second column, p-value, expresses the results of the hypothesis test as a significance level. Conventionally, p-values smaller than 0.05 are taken as evidence that the population coefficient is nonzero. R-squared is the coefficient of determination indicating goodness-of-fit of the regression. This statistic will be equal to one if fit is perfect, and to zero when regressors X have no explanatory power whatsoever. This is a biased estimate of the population R-squared, and will never decrease if additional regressors are added, even if they are irrelevant. 
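Adding the squared regressor keeps the model linear in the parameters, so ordinary least squares applies unchanged. A sketch on synthetic data standing in for the almanac table (heights, generating coefficients, and noise level are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
h = np.linspace(1.47, 1.83, 15)       # heights in metres (illustrative)
w = 130.0 - 140.0 * h + 60.0 * h**2 + rng.normal(scale=0.3, size=h.size)

# Quadratic model: w_i = b1 + b2*h_i + b3*h_i^2 + eps_i
X = np.column_stack([np.ones_like(h), h, h**2])
beta_hat = np.linalg.lstsq(X, w, rcond=None)[0]
resid = w - X @ beta_hat

n, p = len(w), X.shape[1]
r2 = 1.0 - (resid @ resid) / np.sum((w - w.mean()) ** 2)
adj_r2 = 1.0 - (n - 1) / (n - p) * (1.0 - r2)   # penalized version of R^2
print(beta_hat, r2, adj_r2)
```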
Adjusted R-squared is a slightly modified version of R 2 {\displaystyle R^{2}} , designed to penalize for the excess number of regressors which do not add to the explanatory power of the regression. This statistic is always smaller than R 2 {\displaystyle R^{2}} , can decrease as new regressors are added, and even be negative for poorly fitting models: R ¯ 2 = 1 − n − 1 n − p ( 1 − R 2 ) {\displaystyle {\overline {R}}^{2}=1-{\frac {n-1}{n-p}}(1-R^{2})} Log-likelihood is calculated under the assumption that the errors follow a normal distribution. Even though the assumption is not very reasonable, this statistic may still find its use in conducting LR tests. Durbin–Watson statistic tests whether there is any evidence of serial correlation between the residuals. As a rule of thumb, a value smaller than 2 is evidence of positive correlation. Akaike information criterion and Schwarz criterion are both used for model selection. Generally, when comparing two alternative models, smaller values of one of these criteria will indicate a better model. Standard error of regression is an estimate of σ, the standard error of the error term. Total sum of squares, model sum of squares, and residual sum of squares tell us how much of the initial variation in the sample was explained by the regression. F-statistic tries to test the hypothesis that all coefficients (except the intercept) are equal to zero. This statistic has F(p–1,n–p) distribution under the null hypothesis and normality assumption, and its p-value indicates the probability of observing a statistic at least as extreme when the null hypothesis is in fact true. Note that when errors are not normal this statistic becomes invalid, and other tests such as Wald test or LR test should be used. Ordinary least squares analysis often includes the use of diagnostic plots designed to detect departures of the data from the assumed form of the model. These are some of the common diagnostic plots: Residuals against the explanatory variables in the model.
A non-linear relation between these variables suggests that the linearity of the conditional mean function may not hold. Different levels of variability in the residuals for different levels of the explanatory variables suggests possible heteroscedasticity. Residuals against explanatory variables not in the model. Any relation of the residuals to these variables would suggest considering these variables for inclusion in the model. Residuals against the fitted values, y ^ {\displaystyle {\hat {y}}} . Residuals against the preceding residual. This plot may identify serial correlations in the residuals. An important consideration when carrying out statistical inference using regression models is how the data were sampled. In this example, the data are averages rather than measurements on individual women. The fit of the model is very good, but this does not imply that the weight of an individual woman can be predicted with high accuracy based only on her height. === Sensitivity to rounding === This example also demonstrates that coefficients determined by these calculations are sensitive to how the data is prepared. The heights were originally given rounded to the nearest inch and have been converted and rounded to the nearest centimetre. Since the conversion factor is one inch to 2.54 cm this is not an exact conversion. The original inches can be recovered by Round(x/0.0254) and then re-converted to metric without rounding. If this is done the results become: Using either of these equations to predict the weight of a 5' 6" (1.6764 m) woman gives similar values: 62.94 kg with rounding vs. 62.98 kg without rounding. Thus a seemingly small variation in the data has a real effect on the coefficients but a small effect on the results of the equation. While this may look innocuous in the middle of the data range it could become significant at the extremes or in the case where the fitted model is used to project outside the data range (extrapolation). 
This highlights a common error: this example is an abuse of OLS, which inherently requires that the errors in the independent variable (in this case height) are zero or at least negligible. The initial rounding to the nearest inch plus any actual measurement errors constitute a finite and non-negligible error. As a result, the fitted parameters are not the best estimates they are presumed to be. Though not totally spurious, the error in the estimation will depend upon the relative sizes of the x and y errors. == Another example with less real data == === Problem statement === We can use the least squares method to determine the equation of a two-body orbit in polar coordinates. The equation typically used is r ( θ ) = p 1 − e cos ⁡ ( θ ) {\displaystyle r(\theta )={\frac {p}{1-e\cos(\theta )}}} where r ( θ ) {\displaystyle r(\theta )} is the distance of the object from one of the bodies. In this equation, the parameters p {\displaystyle p} and e {\displaystyle e} determine the path of the orbit. We have measured the following data. We need to find the least-squares approximation of e {\displaystyle e} and p {\displaystyle p} for the given data. === Solution === First we need to represent e and p in a linear form. To that end we rewrite the equation r ( θ ) {\displaystyle r(\theta )} as 1 r ( θ ) = 1 p − e p cos ⁡ ( θ ) {\displaystyle {\frac {1}{r(\theta )}}={\frac {1}{p}}-{\frac {e}{p}}\cos(\theta )} . Furthermore, one could fit for apsides by expanding cos ⁡ ( θ ) {\displaystyle \cos(\theta )} with an extra parameter as cos ⁡ ( θ − θ 0 ) = cos ⁡ ( θ ) cos ⁡ ( θ 0 ) + sin ⁡ ( θ ) sin ⁡ ( θ 0 ) {\displaystyle \cos(\theta -\theta _{0})=\cos(\theta )\cos(\theta _{0})+\sin(\theta )\sin(\theta _{0})} , which is linear in both cos ⁡ ( θ ) {\displaystyle \cos(\theta )} and in the extra basis function sin ⁡ ( θ ) {\displaystyle \sin(\theta )} .
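The resulting linear system can be solved in a few lines, using the six observations tabulated below (the second column of A and the vector b are taken verbatim from the worked example):

```python
import numpy as np

# A's second column holds the tabulated coefficients multiplying e/p;
# b holds the corresponding values of 1/r(theta)
A = np.array([[1.0, -0.731354],
              [1.0, -0.707107],
              [1.0, -0.615661],
              [1.0,  0.052336],
              [1.0,  0.309017],
              [1.0,  0.438371]])
b = np.array([0.21220, 0.21958, 0.24741, 0.45071, 0.52883, 0.56820])

# Solve the normal equations A^T A [x, y]^T = A^T b
x, y = np.linalg.solve(A.T @ A, A.T @ b)

p = 1.0 / x   # with x = 1/p
e = p * y     # with y = e/p
print(round(p, 4), round(e, 5))  # approximately 2.3000 and 0.70001
```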
We use the original two-parameter form to represent our observational data as: A T A ( x y ) = A T b , {\displaystyle A^{T}A{\binom {x}{y}}=A^{T}b,} where: x = 1 / p {\displaystyle x=1/p\,} ; y = e / p {\displaystyle y=e/p\,} ; A {\displaystyle A} contains the coefficients of 1 / p {\displaystyle 1/p} in the first column, which are all 1, and the coefficients of e / p {\displaystyle e/p} in the second column, given by cos ⁡ ( θ ) {\displaystyle \cos(\theta )\,} ; and b = 1 / r ( θ ) {\displaystyle b=1/r(\theta )} , such that: A = [ 1 − 0.731354 1 − 0.707107 1 − 0.615661 1 0.052336 1 0.309017 1 0.438371 ] , b = [ 0.21220 0.21958 0.24741 0.45071 0.52883 0.56820 ] . {\displaystyle A={\begin{bmatrix}1&-0.731354\\1&-0.707107\\1&-0.615661\\1&\ 0.052336\\1&0.309017\\1&0.438371\end{bmatrix}},\quad b={\begin{bmatrix}0.21220\\0.21958\\0.24741\\0.45071\\0.52883\\0.56820\end{bmatrix}}.} On solving we get ( x y ) = ( 0.43478 0.30435 ) {\displaystyle {\binom {x}{y}}={\binom {0.43478}{0.30435}}\,} , so p = 1 x = 2.3000 {\displaystyle p={\frac {1}{x}}=2.3000} and e = p ⋅ y = 0.70001 {\displaystyle e=p\cdot y=0.70001} == See also == Bayesian least squares Fama–MacBeth regression Nonlinear least squares Numerical methods for linear least squares Nonlinear system identification == References == == Further reading == Dougherty, Christopher (2002). Introduction to Econometrics (2nd ed.). New York: Oxford University Press. pp. 48–113. ISBN 0-19-877643-8. Gujarati, Damodar N.; Porter, Dawn C. (2009). Basic Econometics (Fifth ed.). Boston: McGraw-Hill Irwin. pp. 55–96. ISBN 978-0-07-337577-9. Heij, Christiaan; Boer, Paul; Franses, Philip H.; Kloek, Teun; van Dijk, Herman K. (2004). Econometric Methods with Applications in Business and Economics (1st ed.). Oxford: Oxford University Press. pp. 76–115. ISBN 978-0-19-926801-6. Hill, R. Carter; Griffiths, William E.; Lim, Guay C. (2008). Principles of Econometrics (3rd ed.). Hoboken, NJ: John Wiley & Sons. pp. 8–47. ISBN 978-0-471-72360-8. 
Wooldridge, Jeffrey (2008). "The Simple Regression Model". Introductory Econometrics: A Modern Approach (4th ed.). Mason, OH: Cengage Learning. pp. 22–67. ISBN 978-0-324-58162-1.
Wikipedia/Normal_equations
In numerical linear algebra, the conjugate gradient squared method (CGS) is an iterative algorithm for solving systems of linear equations of the form A x = b {\displaystyle A{\mathbf {x}}={\mathbf {b}}} , particularly in cases where computing the transpose A T {\displaystyle A^{T}} is impractical. The CGS method was developed as an improvement to the biconjugate gradient method. == Background == A system of linear equations A x = b {\displaystyle A{\mathbf {x}}={\mathbf {b}}} consists of a known matrix A {\displaystyle A} and a known vector b {\displaystyle {\mathbf {b}}} . To solve the system is to find the value of the unknown vector x {\displaystyle {\mathbf {x}}} . A direct method for solving a system of linear equations is to take the inverse of the matrix A {\displaystyle A} , then calculate x = A − 1 b {\displaystyle {\mathbf {x}}=A^{-1}{\mathbf {b}}} . However, computing the inverse is computationally expensive. Hence, iterative methods are commonly used. Iterative methods begin with a guess x ( 0 ) {\displaystyle {\mathbf {x}}^{(0)}} , and on each iteration the guess is improved. Once the difference between successive guesses is sufficiently small, the method has converged to a solution. As with the conjugate gradient method, biconjugate gradient method, and similar iterative methods for solving systems of linear equations, the CGS method can be used to find solutions to multi-variable optimisation problems, such as power-flow analysis, hyperparameter optimisation, and facial recognition. 
== Algorithm == The algorithm is as follows: Choose an initial guess x ( 0 ) {\displaystyle {\mathbf {x}}^{(0)}} Compute the residual r ( 0 ) = b − A x ( 0 ) {\displaystyle {\mathbf {r}}^{(0)}={\mathbf {b}}-A{\mathbf {x}}^{(0)}} Choose r ~ ( 0 ) = r ( 0 ) {\displaystyle {\tilde {\mathbf {r}}}^{(0)}={\mathbf {r}}^{(0)}} For i = 1 , 2 , 3 , … {\displaystyle i=1,2,3,\dots } do: ρ ( i − 1 ) = r ~ T ( i − 1 ) r ( i − 1 ) {\displaystyle \rho ^{(i-1)}={\tilde {\mathbf {r}}}^{T(i-1)}{\mathbf {r}}^{(i-1)}} If ρ ( i − 1 ) = 0 {\displaystyle \rho ^{(i-1)}=0} , the method fails. If i = 1 {\displaystyle i=1} : p ( 1 ) = u ( 1 ) = r ( 0 ) {\displaystyle {\mathbf {p}}^{(1)}={\mathbf {u}}^{(1)}={\mathbf {r}}^{(0)}} Else: β ( i − 1 ) = ρ ( i − 1 ) / ρ ( i − 2 ) {\displaystyle \beta ^{(i-1)}=\rho ^{(i-1)}/\rho ^{(i-2)}} u ( i ) = r ( i − 1 ) + β ( i − 1 ) q ( i − 1 ) {\displaystyle {\mathbf {u}}^{(i)}={\mathbf {r}}^{(i-1)}+\beta ^{(i-1)}{\mathbf {q}}^{(i-1)}} p ( i ) = u ( i ) + β ( i − 1 ) ( q ( i − 1 ) + β ( i − 1 ) p ( i − 1 ) ) {\displaystyle {\mathbf {p}}^{(i)}={\mathbf {u}}^{(i)}+\beta ^{(i-1)}({\mathbf {q}}^{(i-1)}+\beta ^{(i-1)}{\mathbf {p}}^{(i-1)})} Solve M p ^ = p ( i ) {\displaystyle M{\hat {\mathbf {p}}}={\mathbf {p}}^{(i)}} , where M {\displaystyle M} is a pre-conditioner.
v ^ = A p ^ {\displaystyle {\hat {\mathbf {v}}}=A{\hat {\mathbf {p}}}} α ( i ) = ρ ( i − 1 ) / r ~ T v ^ {\displaystyle \alpha ^{(i)}=\rho ^{(i-1)}/{\tilde {\mathbf {r}}}^{T}{\hat {\mathbf {v}}}} q ( i ) = u ( i ) − α ( i ) v ^ {\displaystyle {\mathbf {q}}^{(i)}={\mathbf {u}}^{(i)}-\alpha ^{(i)}{\hat {\mathbf {v}}}} Solve M u ^ = u ( i ) + q ( i ) {\displaystyle M{\hat {\mathbf {u}}}={\mathbf {u}}^{(i)}+{\mathbf {q}}^{(i)}} x ( i ) = x ( i − 1 ) + α ( i ) u ^ {\displaystyle {\mathbf {x}}^{(i)}={\mathbf {x}}^{(i-1)}+\alpha ^{(i)}{\hat {\mathbf {u}}}} q ^ = A u ^ {\displaystyle {\hat {\mathbf {q}}}=A{\hat {\mathbf {u}}}} r ( i ) = r ( i − 1 ) − α ( i ) q ^ {\displaystyle {\mathbf {r}}^{(i)}={\mathbf {r}}^{(i-1)}-\alpha ^{(i)}{\hat {\mathbf {q}}}} Check for convergence: if there is convergence, end the loop and return the result == See also == Biconjugate gradient method Biconjugate gradient stabilized method Generalized minimal residual method == References ==
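The steps above can be sketched in NumPy. For simplicity this sketch assumes no preconditioning (M is the identity, so the two "Solve M…" steps are trivial); variable names mirror the notation of the algorithm:

```python
import numpy as np

def cgs(A, b, x0=None, tol=1e-10, maxiter=100):
    """Unpreconditioned conjugate gradient squared (M = I)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x
    r_tilde = r.copy()                 # fixed "shadow" residual
    p = q = np.zeros(n)
    rho_prev = 1.0
    for i in range(1, maxiter + 1):
        rho = r_tilde @ r
        if rho == 0.0:
            raise RuntimeError("CGS breakdown: rho = 0")
        if i == 1:
            u = r.copy()
            p = u.copy()
        else:
            beta = rho / rho_prev
            u = r + beta * q
            p = u + beta * (q + beta * p)
        v = A @ p                      # p_hat = p since M = I
        alpha = rho / (r_tilde @ v)
        q = u - alpha * v
        u_hat = u + q                  # u_hat = u + q since M = I
        x = x + alpha * u_hat
        r = r - alpha * (A @ u_hat)
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        rho_prev = rho
    return x

# Usage on a small nonsymmetric system (illustrative values):
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
x = cgs(A, b)
print(np.linalg.norm(b - A @ x))   # residual is numerically zero
```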
Wikipedia/Conjugate_gradient_squared_method
A flow-based generative model is a generative model used in machine learning that explicitly models a probability distribution by leveraging normalizing flow, which is a statistical method using the change-of-variable law of probabilities to transform a simple distribution into a complex one. The direct modeling of likelihood provides many advantages. For example, the negative log-likelihood can be directly computed and minimized as the loss function. Additionally, novel samples can be generated by sampling from the initial distribution, and applying the flow transformation. In contrast, many alternative generative modeling methods such as variational autoencoder (VAE) and generative adversarial network do not explicitly represent the likelihood function. == Method == Let z 0 {\displaystyle z_{0}} be a (possibly multivariate) random variable with distribution p 0 ( z 0 ) {\displaystyle p_{0}(z_{0})} . For i = 1 , . . . , K {\displaystyle i=1,...,K} , let z i = f i ( z i − 1 ) {\displaystyle z_{i}=f_{i}(z_{i-1})} be a sequence of random variables transformed from z 0 {\displaystyle z_{0}} . The functions f 1 , . . . , f K {\displaystyle f_{1},...,f_{K}} should be invertible, i.e. the inverse function f i − 1 {\displaystyle f_{i}^{-1}} exists. The final output z K {\displaystyle z_{K}} models the target distribution. The log likelihood of z K {\displaystyle z_{K}} is (see derivation): log ⁡ p K ( z K ) = log ⁡ p 0 ( z 0 ) − ∑ i = 1 K log ⁡ | det d f i ( z i − 1 ) d z i − 1 | {\displaystyle \log p_{K}(z_{K})=\log p_{0}(z_{0})-\sum _{i=1}^{K}\log \left|\det {\frac {df_{i}(z_{i-1})}{dz_{i-1}}}\right|} To efficiently compute the log likelihood, the functions f 1 , . . . , f K {\displaystyle f_{1},...,f_{K}} should be easily invertible, and the determinants of their Jacobians should be simple to compute. In practice, the functions f 1 , . . . 
, f K {\displaystyle f_{1},...,f_{K}} are modeled using deep neural networks, and are trained to minimize the negative log-likelihood of data samples from the target distribution. These architectures are usually designed such that only the forward pass of the neural network is required in both the inverse and the Jacobian determinant calculations. Examples of such architectures include NICE, RealNVP, and Glow. === Derivation of log likelihood === Consider z 1 {\displaystyle z_{1}} and z 0 {\displaystyle z_{0}} . Note that z 0 = f 1 − 1 ( z 1 ) {\displaystyle z_{0}=f_{1}^{-1}(z_{1})} . By the change of variable formula, the distribution of z 1 {\displaystyle z_{1}} is: p 1 ( z 1 ) = p 0 ( z 0 ) | det d f 1 − 1 ( z 1 ) d z 1 | {\displaystyle p_{1}(z_{1})=p_{0}(z_{0})\left|\det {\frac {df_{1}^{-1}(z_{1})}{dz_{1}}}\right|} Where det d f 1 − 1 ( z 1 ) d z 1 {\displaystyle \det {\frac {df_{1}^{-1}(z_{1})}{dz_{1}}}} is the determinant of the Jacobian matrix of f 1 − 1 {\displaystyle f_{1}^{-1}} . By the inverse function theorem: p 1 ( z 1 ) = p 0 ( z 0 ) | det ( d f 1 ( z 0 ) d z 0 ) − 1 | {\displaystyle p_{1}(z_{1})=p_{0}(z_{0})\left|\det \left({\frac {df_{1}(z_{0})}{dz_{0}}}\right)^{-1}\right|} By the identity det ( A − 1 ) = det ( A ) − 1 {\displaystyle \det(A^{-1})=\det(A)^{-1}} (where A {\displaystyle A} is an invertible matrix), we have: p 1 ( z 1 ) = p 0 ( z 0 ) | det d f 1 ( z 0 ) d z 0 | − 1 {\displaystyle p_{1}(z_{1})=p_{0}(z_{0})\left|\det {\frac {df_{1}(z_{0})}{dz_{0}}}\right|^{-1}} The log likelihood is thus: log ⁡ p 1 ( z 1 ) = log ⁡ p 0 ( z 0 ) − log ⁡ | det d f 1 ( z 0 ) d z 0 | {\displaystyle \log p_{1}(z_{1})=\log p_{0}(z_{0})-\log \left|\det {\frac {df_{1}(z_{0})}{dz_{0}}}\right|} In general, the above applies to any z i {\displaystyle z_{i}} and z i − 1 {\displaystyle z_{i-1}} . 
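The one-step identity above is easy to verify numerically for a toy invertible map. The sketch below assumes the affine map f(z) = a·z + b with z₀ ~ N(0, 1), for which z₁ = f(z₀) is again Gaussian, so both sides of the formula can be evaluated in closed form:

```python
import numpy as np

def log_normal(x, mean=0.0, var=1.0):
    # log-density of N(mean, var) at x
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (x - mean) ** 2 / var

a, b = 2.0, 1.0        # illustrative flow parameters
z0 = 0.7
z1 = a * z0 + b        # z1 ~ N(b, a^2) when z0 ~ N(0, 1)

lhs = log_normal(z1, mean=b, var=a * a)   # density of z1 evaluated directly
rhs = log_normal(z0) - np.log(abs(a))     # change-of-variables formula
print(lhs, rhs)                            # the two values agree
```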
Since log ⁡ p i ( z i ) {\displaystyle \log p_{i}(z_{i})} differs from log ⁡ p i − 1 ( z i − 1 ) {\displaystyle \log p_{i-1}(z_{i-1})} only by the log-determinant term for f i {\displaystyle f_{i}} , we can infer by induction that: log ⁡ p K ( z K ) = log ⁡ p 0 ( z 0 ) − ∑ i = 1 K log ⁡ | det d f i ( z i − 1 ) d z i − 1 | {\displaystyle \log p_{K}(z_{K})=\log p_{0}(z_{0})-\sum _{i=1}^{K}\log \left|\det {\frac {df_{i}(z_{i-1})}{dz_{i-1}}}\right|} == Training method == As is generally done when training a deep learning model, the goal with normalizing flows is to minimize the Kullback–Leibler divergence between the model's likelihood and the target distribution to be estimated. Denoting p θ {\displaystyle p_{\theta }} the model's likelihood and p ∗ {\displaystyle p^{*}} the target distribution to learn, the (forward) KL-divergence is: D KL [ p ∗ ( x ) ‖ p θ ( x ) ] = − E p ∗ ( x ) ⁡ [ log ⁡ p θ ( x ) ] + E p ∗ ( x ) ⁡ [ log ⁡ p ∗ ( x ) ] {\displaystyle D_{\text{KL}}[p^{*}(x)\|p_{\theta }(x)]=-\mathop {\mathbb {E} } _{p^{*}(x)}[\log p_{\theta }(x)]+\mathop {\mathbb {E} } _{p^{*}(x)}[\log p^{*}(x)]} The second term on the right-hand side of the equation corresponds to the entropy of the target distribution and is independent of the parameter θ {\displaystyle \theta } we want the model to learn, which only leaves the expectation of the negative log-likelihood to minimize under the target distribution. This intractable term can be approximated with a Monte Carlo method by importance sampling.
Indeed, if we have a dataset { x i } i = 1 N {\displaystyle \{x_{i}\}_{i=1}^{N}} of samples each independently drawn from the target distribution p ∗ ( x ) {\displaystyle p^{*}(x)} , then this term can be estimated as: − E ^ p ∗ ( x ) [ log ⁡ p θ ( x ) ] = − 1 N ∑ i = 1 N log ⁡ p θ ( x i ) {\displaystyle -{\hat {\mathop {\mathbb {E} } }}_{p^{*}(x)}[\log p_{\theta }(x)]=-{\frac {1}{N}}\sum _{i=1}^{N}\log p_{\theta }(x_{i})} Therefore, the learning objective a r g m i n θ D KL [ p ∗ ( x ) ‖ p θ ( x ) ] {\displaystyle {\underset {\theta }{\operatorname {arg\,min} }}\ D_{\text{KL}}[p^{*}(x)\|p_{\theta }(x)]} is replaced by a r g m a x θ ∑ i = 1 N log ⁡ p θ ( x i ) {\displaystyle {\underset {\theta }{\operatorname {arg\,max} }}\ \sum _{i=1}^{N}\log p_{\theta }(x_{i})} In other words, minimizing the Kullback–Leibler divergence between the model's likelihood and the target distribution is equivalent to maximizing the model likelihood under observed samples of the target distribution. Pseudocode for training a normalizing flow is as follows: INPUT. dataset x 1 : n {\displaystyle x_{1:n}} , normalizing flow model f θ ( ⋅ ) , p 0 {\displaystyle f_{\theta }(\cdot ),p_{0}} . SOLVE. max θ ∑ j ln ⁡ p θ ( x j ) {\displaystyle \max _{\theta }\sum _{j}\ln p_{\theta }(x_{j})} by gradient descent RETURN. θ ^ {\displaystyle {\hat {\theta }}} == Variants == === Planar Flow === Planar flow is the earliest example. Fix some activation function h {\displaystyle h} , and let θ = ( u , w , b ) {\displaystyle \theta =(u,w,b)} with the appropriate dimensions, then x = f θ ( z ) = z + u h ( ⟨ w , z ⟩ + b ) {\displaystyle x=f_{\theta }(z)=z+uh(\langle w,z\rangle +b)} The inverse f θ − 1 {\displaystyle f_{\theta }^{-1}} has no closed-form solution in general. The Jacobian determinant is | det ( I + h ′ ( ⟨ w , z ⟩ + b ) u w T ) | = | 1 + h ′ ( ⟨ w , z ⟩ + b ) ⟨ u , w ⟩ | {\displaystyle |\det(I+h'(\langle w,z\rangle +b)uw^{T})|=|1+h'(\langle w,z\rangle +b)\langle u,w\rangle |} .
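The planar-flow determinant formula can be checked against a finite-difference Jacobian; the parameter values in the sketch below are arbitrary illustrations, with h = tanh:

```python
import numpy as np

# Planar flow f(z) = z + u * tanh(<w, z> + b) in 2-D (illustrative params).
u = np.array([0.5, -0.3])
w = np.array([1.0, 2.0])
b = 0.1
z = np.array([0.4, -0.7])

def f(z):
    return z + u * np.tanh(w @ z + b)

# Analytic |det J| = |1 + h'(<w,z> + b) <u, w>|, with h' = 1 - tanh^2.
a = w @ z + b
analytic = abs(1 + (1 - np.tanh(a) ** 2) * (u @ w))

# Central finite differences: column j of J is df/dz_j.
eps = 1e-6
J = np.stack([(f(z + eps * e) - f(z - eps * e)) / (2 * eps)
              for e in np.eye(2)], axis=1)
numeric = abs(np.linalg.det(J))

print(analytic, numeric)   # the two agree
```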
For f θ {\displaystyle f_{\theta }} to be invertible everywhere, the Jacobian determinant must be nonzero everywhere. For example, h = tanh {\displaystyle h=\tanh } and ⟨ u , w ⟩ > − 1 {\displaystyle \langle u,w\rangle >-1} satisfy the requirement. === Nonlinear Independent Components Estimation (NICE) === Let x , z ∈ R 2 n {\displaystyle x,z\in \mathbb {R} ^{2n}} be even-dimensional, and split them in the middle. Then the normalizing flow functions are x = [ x 1 x 2 ] = f θ ( z ) = [ z 1 z 2 ] + [ 0 m θ ( z 1 ) ] {\displaystyle x={\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}=f_{\theta }(z)={\begin{bmatrix}z_{1}\\z_{2}\end{bmatrix}}+{\begin{bmatrix}0\\m_{\theta }(z_{1})\end{bmatrix}}} where m θ {\displaystyle m_{\theta }} is any neural network with weights θ {\displaystyle \theta } . f θ − 1 {\displaystyle f_{\theta }^{-1}} is just z 1 = x 1 , z 2 = x 2 − m θ ( x 1 ) {\displaystyle z_{1}=x_{1},z_{2}=x_{2}-m_{\theta }(x_{1})} , and the Jacobian determinant is just 1; that is, the flow is volume-preserving. When n = 1 {\displaystyle n=1} , this is seen as a curvy shearing along the x 2 {\displaystyle x_{2}} direction. === Real Non-Volume Preserving (Real NVP) === The Real Non-Volume Preserving model generalizes the NICE model by: x = [ x 1 x 2 ] = f θ ( z ) = [ z 1 e s θ ( z 1 ) ⊙ z 2 ] + [ 0 m θ ( z 1 ) ] {\displaystyle x={\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}=f_{\theta }(z)={\begin{bmatrix}z_{1}\\e^{s_{\theta }(z_{1})}\odot z_{2}\end{bmatrix}}+{\begin{bmatrix}0\\m_{\theta }(z_{1})\end{bmatrix}}} Its inverse is z 1 = x 1 , z 2 = e − s θ ( x 1 ) ⊙ ( x 2 − m θ ( x 1 ) ) {\displaystyle z_{1}=x_{1},z_{2}=e^{-s_{\theta }(x_{1})}\odot (x_{2}-m_{\theta }(x_{1}))} , and its Jacobian determinant is ∏ i = 1 n e s θ ( z 1 ) i {\displaystyle \prod _{i=1}^{n}e^{s_{\theta }(z_{1})_{i}}} . The NICE model is recovered by setting s θ = 0 {\displaystyle s_{\theta }=0} .
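A single Real NVP coupling layer and its inverse can be sketched as follows (NICE is the special case s = 0). The "networks" m and s here are small fixed functions standing in for trained neural networks, for illustration only:

```python
import numpy as np

def m(z1):                       # stand-in "translation" network
    return np.tanh(z1) + 0.5

def s(z1):                       # stand-in "scale" network
    return 0.3 * np.tanh(z1)

def forward(z):
    z1, z2 = np.split(z, 2)
    x2 = np.exp(s(z1)) * z2 + m(z1)
    return np.concatenate([z1, x2])

def inverse(x):
    x1, x2 = np.split(x, 2)
    z2 = np.exp(-s(x1)) * (x2 - m(x1))
    return np.concatenate([x1, z2])

def log_det_jacobian(z):
    z1, _ = np.split(z, 2)
    return np.sum(s(z1))         # log of prod_i exp(s(z1)_i)

z = np.array([0.2, -1.0, 0.5, 2.0])
x = forward(z)
print(inverse(x))                # recovers z exactly
print(log_det_jacobian(z))       # would be 0 if s were 0 (NICE case)
```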
Since the Real NVP map keeps the first and second halves of the vector x {\displaystyle x} separate, it's usually required to add a permutation ( x 1 , x 2 ) ↦ ( x 2 , x 1 ) {\displaystyle (x_{1},x_{2})\mapsto (x_{2},x_{1})} after every Real NVP layer. === Generative Flow (Glow) === In the generative flow (Glow) model, each layer has three parts: channel-wise affine transform y c i j = s c ( x c i j + b c ) {\displaystyle y_{cij}=s_{c}(x_{cij}+b_{c})} with Jacobian determinant ∏ c s c H W {\displaystyle \prod _{c}s_{c}^{HW}} . invertible 1x1 convolution z c i j = ∑ c ′ K c c ′ y c ′ i j {\displaystyle z_{cij}=\sum _{c'}K_{cc'}y_{c'ij}} with Jacobian determinant det ( K ) H W {\displaystyle \det(K)^{HW}} . Here K {\displaystyle K} is any invertible matrix. Real NVP, with Jacobian determinant as described in Real NVP. The idea of using the invertible 1x1 convolution is to permute all channels in general, instead of merely permuting the first and second half, as in Real NVP. === Masked Autoregressive Flow (MAF) === An autoregressive model of a distribution on R n {\displaystyle \mathbb {R} ^{n}} is defined as the following stochastic process: x 1 ∼ N ( μ 1 , σ 1 2 ) x 2 ∼ N ( μ 2 ( x 1 ) , σ 2 ( x 1 ) 2 ) ⋯ x n ∼ N ( μ n ( x 1 : n − 1 ) , σ n ( x 1 : n − 1 ) 2 ) {\displaystyle {\begin{aligned}x_{1}\sim &N(\mu _{1},\sigma _{1}^{2})\\x_{2}\sim &N(\mu _{2}(x_{1}),\sigma _{2}(x_{1})^{2})\\&\cdots \\x_{n}\sim &N(\mu _{n}(x_{1:n-1}),\sigma _{n}(x_{1:n-1})^{2})\\\end{aligned}}} where μ i : R i − 1 → R {\displaystyle \mu _{i}:\mathbb {R} ^{i-1}\to \mathbb {R} } and σ i : R i − 1 → ( 0 , ∞ ) {\displaystyle \sigma _{i}:\mathbb {R} ^{i-1}\to (0,\infty )} are fixed functions that define the autoregressive model.
By the reparameterization trick, the autoregressive model is generalized to a normalizing flow: x 1 = μ 1 + σ 1 z 1 x 2 = μ 2 ( x 1 ) + σ 2 ( x 1 ) z 2 ⋯ x n = μ n ( x 1 : n − 1 ) + σ n ( x 1 : n − 1 ) z n {\displaystyle {\begin{aligned}x_{1}=&\mu _{1}+\sigma _{1}z_{1}\\x_{2}=&\mu _{2}(x_{1})+\sigma _{2}(x_{1})z_{2}\\&\cdots \\x_{n}=&\mu _{n}(x_{1:n-1})+\sigma _{n}(x_{1:n-1})z_{n}\\\end{aligned}}} The autoregressive model is recovered by setting z ∼ N ( 0 , I n ) {\displaystyle z\sim N(0,I_{n})} . The forward mapping is slow (because it's sequential), but the backward mapping is fast (because it's parallel). The Jacobian matrix is lower-triangular, so its determinant is σ 1 σ 2 ( x 1 ) ⋯ σ n ( x 1 : n − 1 ) {\displaystyle \sigma _{1}\sigma _{2}(x_{1})\cdots \sigma _{n}(x_{1:n-1})} . Reversing the two maps f θ {\displaystyle f_{\theta }} and f θ − 1 {\displaystyle f_{\theta }^{-1}} of MAF results in Inverse Autoregressive Flow (IAF), which has fast forward mapping and slow backward mapping. === Continuous Normalizing Flow (CNF) === Instead of constructing flow by function composition, another approach is to formulate the flow as a continuous-time dynamic. Let z 0 {\displaystyle z_{0}} be the latent variable with distribution p ( z 0 ) {\displaystyle p(z_{0})} . Map this latent variable to data space with the following flow function: x = F ( z 0 ) = z T = z 0 + ∫ 0 T f ( z t , t ) d t {\displaystyle x=F(z_{0})=z_{T}=z_{0}+\int _{0}^{T}f(z_{t},t)dt} where f {\displaystyle f} is an arbitrary function and can be modeled with e.g. neural networks.
The inverse function is then naturally: z 0 = F − 1 ( x ) = z T + ∫ T 0 f ( z t , t ) d t = z T − ∫ 0 T f ( z t , t ) d t {\displaystyle z_{0}=F^{-1}(x)=z_{T}+\int _{T}^{0}f(z_{t},t)dt=z_{T}-\int _{0}^{T}f(z_{t},t)dt} And the log-likelihood of x {\displaystyle x} can be found as: log ⁡ ( p ( x ) ) = log ⁡ ( p ( z 0 ) ) − ∫ 0 T Tr [ ∂ f ∂ z t ] d t {\displaystyle \log(p(x))=\log(p(z_{0}))-\int _{0}^{T}{\text{Tr}}\left[{\frac {\partial f}{\partial z_{t}}}\right]dt} Since the trace depends only on the diagonal of the Jacobian ∂ z t f {\displaystyle \partial _{z_{t}}f} , this allows "free-form" Jacobian. Here, "free-form" means that there is no restriction on the Jacobian's form. It is contrasted with previous discrete models of normalizing flow, where the Jacobian is carefully designed to be only upper- or lower-triangular, so that the Jacobian can be evaluated efficiently. The trace can be estimated by "Hutchinson's trick": Given any matrix W ∈ R n × n {\displaystyle W\in \mathbb {R} ^{n\times n}} , and any random u ∈ R n {\displaystyle u\in \mathbb {R} ^{n}} with E [ u u T ] = I {\displaystyle E[uu^{T}]=I} , we have E [ u T W u ] = t r ( W ) {\displaystyle E[u^{T}Wu]=tr(W)} . (Proof: expand the expectation directly.) Usually, the random vector is sampled from N ( 0 , I ) {\displaystyle N(0,I)} (normal distribution) or { ± n − 1 / 2 } n {\displaystyle \{\pm n^{-1/2}\}^{n}} (Rademacher distribution). When f {\displaystyle f} is implemented as a neural network, neural ODE methods would be needed. Indeed, CNF was first proposed in the same paper that proposed neural ODE.
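Hutchinson's trick is easy to check numerically. The sketch below uses Rademacher-distributed probe vectors (entries ±1, so E[uuᵀ] = I) against an arbitrary random matrix:

```python
import numpy as np

# Hutchinson's trace estimator: E[u^T W u] = tr(W) when E[u u^T] = I.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 10))

n_samples = 100_000
u = rng.choice([-1.0, 1.0], size=(n_samples, 10))   # Rademacher probes
estimates = np.einsum("si,ij,sj->s", u, W, u)       # u^T W u, per sample

print(estimates.mean(), np.trace(W))   # sample mean is close to the trace
```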
There are two main deficiencies of CNF: one is that a continuous flow must be a homeomorphism, and thus preserves orientation and ambient isotopy (for example, it's impossible to flip a left hand into a right hand by continuously deforming space, and it's impossible to turn a sphere inside out, or undo a knot), and the other is that the learned flow f {\displaystyle f} might be ill-behaved, due to degeneracy (that is, there are an infinite number of possible f {\displaystyle f} that all solve the same problem). By adding extra dimensions, the CNF gains enough freedom to reverse orientation and go beyond ambient isotopy (just like how one can pick up a polygon from a desk and flip it around in 3-space, or unknot a knot in 4-space), yielding the "augmented neural ODE". Any homeomorphism of R n {\displaystyle \mathbb {R} ^{n}} can be approximated by a neural ODE operating on R 2 n + 1 {\displaystyle \mathbb {R} ^{2n+1}} , as proved by combining the Whitney embedding theorem for manifolds and the universal approximation theorem for neural networks. To regularize the flow f {\displaystyle f} , one can impose regularization losses. The paper proposed the following regularization loss based on optimal transport theory: λ K ∫ 0 T ‖ f ( z t , t ) ‖ 2 d t + λ J ∫ 0 T ‖ ∇ z f ( z t , t ) ‖ F 2 d t {\displaystyle \lambda _{K}\int _{0}^{T}\left\|f(z_{t},t)\right\|^{2}dt+\lambda _{J}\int _{0}^{T}\left\|\nabla _{z}f(z_{t},t)\right\|_{F}^{2}dt} where λ K , λ J > 0 {\displaystyle \lambda _{K},\lambda _{J}>0} are hyperparameters. The first term punishes the model for oscillating the flow field over time, and the second term punishes it for oscillating the flow field over space. Both terms together guide the model into a flow that is smooth (not "bumpy") over space and time. == Downsides == Despite the success of normalizing flows in estimating high-dimensional densities, some downsides still exist in their designs.
First of all, the latent space onto which input data is projected is not a lower-dimensional space; therefore, flow-based models do not allow for compression of data by default and require a lot of computation. However, it is still possible to perform image compression with them. Flow-based models are also notorious for failing to estimate the likelihood of out-of-distribution samples (i.e., samples that were not drawn from the same distribution as the training set). Some hypotheses have been formulated to explain this phenomenon, among which the typical set hypothesis, estimation issues when training models, or fundamental issues due to the entropy of the data distributions. One of the most interesting properties of normalizing flows is the invertibility of their learned bijective map. This property is given by constraints in the design of the models (cf. RealNVP, Glow) which guarantee theoretical invertibility. The integrity of the inverse is important in order to ensure the applicability of the change-of-variable theorem, the computation of the Jacobian of the map, as well as sampling with the model. However, in practice this invertibility is violated and the inverse map explodes because of numerical imprecision. == Applications == Flow-based generative models have been applied on a variety of modeling tasks, including: Audio generation Image generation Molecular graph generation Point-cloud modeling Video generation Lossy image compression Anomaly detection == References == == External links == Flow-based Deep Generative Models Normalizing flow models
Wikipedia/Flow-based_generative_model
A convolutional neural network (CNN) is a type of feedforward neural network that learns features via filter (or kernel) optimization. This type of deep learning network has been applied to process and make predictions from many different types of data including text, images and audio. Convolution-based networks are the de facto standard in deep learning-based approaches to computer vision and image processing, and have only recently been replaced in some cases by newer deep learning architectures such as the transformer. Vanishing gradients and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by the regularization that comes from using shared weights over fewer connections. For example, for each neuron in a fully connected layer, 10,000 weights would be required for processing an image sized 100 × 100 pixels. However, by applying cascaded convolution (or cross-correlation) kernels, only 25 weights per kernel are required to process 5x5-sized tiles. Higher-layer features are extracted from wider context windows, compared to lower-layer features. Some applications of CNNs include: image and video recognition, recommender systems, image classification, image segmentation, medical image analysis, natural language processing, brain–computer interfaces, and financial time series. CNNs are also known as shift invariant or space invariant artificial neural networks, based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation-equivariant responses known as feature maps. Counter-intuitively, most convolutional neural networks are not invariant to translation, due to the downsampling operation they apply to the input. Feedforward neural networks are usually fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. The "full connectivity" of these networks makes them prone to overfitting data.
Typical ways of regularization, or preventing overfitting, include: penalizing parameters during training (such as weight decay) or trimming connectivity (skipped connections, dropout, etc.) Robust datasets also increase the probability that CNNs will learn the generalized principles that characterize a given dataset rather than the biases of a poorly-populated set. Convolutional networks were inspired by biological processes in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field. CNNs use relatively little pre-processing compared to other image classification algorithms. This means that the network learns to optimize the filters (or kernels) through automated learning, whereas in traditional algorithms these filters are hand-engineered. This simplifies and automates the process, enhancing efficiency and scalability by removing human-intervention bottlenecks. == Architecture == A convolutional neural network consists of an input layer, hidden layers and an output layer. In a convolutional neural network, the hidden layers include one or more layers that perform convolutions. Typically this includes a layer that performs a dot product of the convolution kernel with the layer's input matrix. This product is usually the Frobenius inner product, and its activation function is commonly ReLU. As the convolution kernel slides along the input matrix for the layer, the convolution operation generates a feature map, which in turn contributes to the input of the next layer. This is followed by other layers such as pooling layers, fully connected layers, and normalization layers. It is worth noting how closely a convolutional neural network resembles a matched filter.
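The sliding kernel dot product described above can be sketched directly in NumPy. As in most deep-learning libraries, the "convolution" computed here is actually cross-correlation (no kernel flip); the `conv2d` helper and the kernel values are illustrative only:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # dot product of the kernel with one tile of the input
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])   # responds to horizontal gradients
print(conv2d(image, edge_kernel))
# The same 4 kernel weights are reused at every spatial position;
# that weight sharing is what keeps the parameter count small.
```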
=== Convolutional layers === In a CNN, the input is a tensor with shape: (number of inputs) × (input height) × (input width) × (input channels) After passing through a convolutional layer, the image becomes abstracted to a feature map, also called an activation map, with shape: (number of inputs) × (feature map height) × (feature map width) × (feature map channels). Convolutional layers convolve the input and pass its result to the next layer. This is similar to the response of a neuron in the visual cortex to a specific stimulus. Each convolutional neuron processes data only for its receptive field. Although fully connected feedforward neural networks can be used to learn features and classify data, this architecture is generally impractical for larger inputs (e.g., high-resolution images), which would require massive numbers of neurons because each pixel is a relevant input feature. A fully connected layer for an image of size 100 × 100 has 10,000 weights for each neuron in the second layer. Convolution reduces the number of free parameters, allowing the network to be deeper. For example, using a 5 × 5 tiling region, each with the same shared weights, requires only 25 learnable parameters. Using shared weights means there are many fewer parameters, which helps avoid the vanishing gradients and exploding gradients problems seen during backpropagation in earlier neural networks. To speed processing, standard convolutional layers can be replaced by depthwise separable convolutional layers, which are based on a depthwise convolution followed by a pointwise convolution. The depthwise convolution is a spatial convolution applied independently over each channel of the input tensor, while the pointwise convolution is a standard convolution restricted to the use of 1 × 1 {\displaystyle 1\times 1} kernels. === Pooling layers === Convolutional networks may include local and/or global pooling layers along with traditional convolutional layers.
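As a sketch of the most common local pooling operation, 2 × 2 max pooling with stride 2 can be written in NumPy as:

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a single-channel feature map."""
    h, w = x.shape
    # Crop odd trailing rows/columns, then group into 2x2 tiles and
    # take the maximum within each tile.
    return (x[:h - h % 2, :w - w % 2]
            .reshape(h // 2, 2, w // 2, 2)
            .max(axis=(1, 3)))

x = np.array([[1.0, 2.0, 5.0, 6.0],
              [3.0, 4.0, 7.0, 8.0],
              [9.0, 1.0, 0.0, 2.0],
              [1.0, 1.0, 3.0, 1.0]])
print(max_pool_2x2(x))
# [[4. 8.]
#  [9. 3.]]
```

Replacing `.max(axis=(1, 3))` with `.mean(axis=(1, 3))` gives average pooling.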
Pooling layers reduce the dimensions of data by combining the outputs of neuron clusters at one layer into a single neuron in the next layer. Local pooling combines small clusters; tiling sizes such as 2 × 2 are commonly used. Global pooling acts on all the neurons of the feature map. There are two common types of pooling in popular use: max and average. Max pooling uses the maximum value of each local cluster of neurons in the feature map, while average pooling takes the average value. === Fully connected layers === Fully connected layers connect every neuron in one layer to every neuron in another layer. It is the same as a traditional multilayer perceptron neural network (MLP). The flattened matrix goes through a fully connected layer to classify the images. === Receptive field === In neural networks, each neuron receives input from some number of locations in the previous layer. In a convolutional layer, each neuron receives input from only a restricted area of the previous layer called the neuron's receptive field. Typically the area is a square (e.g. 5 by 5 neurons). In contrast, in a fully connected layer, the receptive field is the entire previous layer. Thus, in each convolutional layer, each neuron takes input from a larger area in the input than previous layers. This is due to applying the convolution over and over, which takes the value of a pixel into account, as well as its surrounding pixels. When using dilated layers, the number of pixels in the receptive field remains constant, but the field is more sparsely populated as its dimensions grow when combining the effect of several layers. To manipulate the receptive field size as desired, there are some alternatives to the standard convolutional layer. For example, atrous or dilated convolution expands the receptive field size without increasing the number of parameters by interleaving visible and blind regions.
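The receptive-field growth described above can be made concrete with a simplified rule: for a stack of stride-1 layers, each kernel of size k with dilation d adds (k - 1) * d input positions to the receptive field (this sketch ignores stride and padding):

```python
def receptive_field(layers):
    """Receptive-field size of stacked stride-1 conv layers.

    layers: list of (kernel_size, dilation) pairs.
    """
    r = 1
    for k, d in layers:
        r += (k - 1) * d
    return r

# Three plain 3x3 convolutions see a 7x7 input region:
print(receptive_field([(3, 1), (3, 1), (3, 1)]))   # 7
# Dilations 1, 2, 4 grow the field much faster with the same 9 weights
# per kernel:
print(receptive_field([(3, 1), (3, 2), (3, 4)]))   # 15
```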
Moreover, a single dilated convolutional layer can comprise filters with multiple dilation ratios, thus having a variable receptive field size. === Weights === Each neuron in a neural network computes an output value by applying a specific function to the input values received from the receptive field in the previous layer. The function that is applied to the input values is determined by a vector of weights and a bias (typically real numbers). Learning consists of iteratively adjusting these biases and weights. The vectors of weights and biases are called filters and represent particular features of the input (e.g., a particular shape). A distinguishing feature of CNNs is that many neurons can share the same filter. This reduces the memory footprint because a single bias and a single vector of weights are used across all receptive fields that share that filter, as opposed to each receptive field having its own bias and vector weighting. === Deconvolutional === A deconvolutional neural network is essentially the reverse of a CNN. It consists of deconvolutional layers and unpooling layers. A deconvolutional layer is the transpose of a convolutional layer. Specifically, a convolutional layer can be written as a multiplication with a matrix, and a deconvolutional layer is multiplication with the transpose of that matrix. An unpooling layer expands the spatial dimensions of the layer. The max-unpooling layer is the simplest, as it simply copies each entry multiple times. For example, a 2-by-2 max-unpooling layer is [ x ] ↦ [ x x x x ] {\displaystyle [x]\mapsto {\begin{bmatrix}x&x\\x&x\end{bmatrix}}} . Deconvolution layers are used in image generators. By default, it creates periodic checkerboard artifacts, which can be fixed by an upscale-then-convolve approach. == History == CNN are often compared to the way the brain achieves vision processing in living organisms.
=== Receptive fields in the visual cortex === Work by Hubel and Wiesel in the 1950s and 1960s showed that cat visual cortices contain neurons that individually respond to small regions of the visual field. Provided the eyes are not moving, the region of visual space within which visual stimuli affect the firing of a single neuron is known as its receptive field. Neighboring cells have similar and overlapping receptive fields. Receptive field size and location varies systematically across the cortex to form a complete map of visual space. The cortex in each hemisphere represents the contralateral visual field. Their 1968 paper identified two basic visual cell types in the brain: simple cells, whose output is maximized by straight edges having particular orientations within their receptive field complex cells, which have larger receptive fields, whose output is insensitive to the exact position of the edges in the field. Hubel and Wiesel also proposed a cascading model of these two types of cells for use in pattern recognition tasks. === Fukushima's analog threshold elements in a vision model === In 1969, Kunihiko Fukushima introduced a multilayer visual feature detection network, inspired by the above-mentioned work of Hubel and Wiesel, in which "All the elements in one layer have the same set of interconnecting coefficients; the arrangement of the elements and their interconnections are all homogeneous over a given layer." This is the essential core of a convolutional network, but the weights were not trained. In the same paper, Fukushima also introduced the ReLU (rectified linear unit) activation function. === Neocognitron, origin of the trainable CNN architecture === The "neocognitron" was introduced by Fukushima in 1980. The neocognitron introduced the two basic types of layers: "S-layer": a shared-weights receptive-field layer, later known as a convolutional layer, which contains units whose receptive fields cover a patch of the previous layer. 
A shared-weights receptive-field group (a "plane" in neocognitron terminology) is often called a filter, and a layer typically has several such filters. "C-layer": a downsampling layer that contains units whose receptive fields cover patches of previous convolutional layers. Such a unit typically computes a weighted average of the activations of the units in its patch, and applies inhibition (divisive normalization) pooled from a somewhat larger patch and across different filters in a layer, and applies a saturating activation function. The patch weights are nonnegative and are not trainable in the original neocognitron. The downsampling and competitive inhibition help to classify features and objects in visual scenes even when the objects are shifted. Several supervised and unsupervised learning algorithms have been proposed over the decades to train the weights of a neocognitron. Today, however, the CNN architecture is usually trained through backpropagation. Fukushima's ReLU activation function was not used in his neocognitron since all the weights were nonnegative; lateral inhibition was used instead. The rectifier has become a very popular activation function for CNNs and deep neural networks in general. === Convolution in time === The term "convolution" first appears in neural networks in a paper by Toshiteru Homma, Les Atlas, and Robert Marks II at the first Conference on Neural Information Processing Systems in 1987. Their paper replaced multiplication with convolution in time, inherently providing shift invariance, motivated by and connecting more directly to the signal-processing concept of a filter, and demonstrated it on a speech recognition task. They also pointed out that as a data-trainable system, convolution is essentially equivalent to correlation since reversal of the weights does not affect the final learned function ("For convenience, we denote * as correlation instead of convolution.
Note that convolving a(t) with b(t) is equivalent to correlating a(-t) with b(t)."). Modern CNN implementations typically do correlation and call it convolution, for convenience, as they did here. === Time delay neural networks === The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel et al. for phoneme recognition and was an early convolutional network exhibiting shift-invariance. A TDNN is a 1-D convolutional neural net where the convolution is performed along the time axis of the data. It is the first CNN utilizing weight sharing in combination with training by gradient descent, using backpropagation. Thus, while also using a pyramidal structure as in the neocognitron, it performed a global optimization of the weights instead of a local one. TDNNs are convolutional networks that share weights along the temporal dimension. They allow speech signals to be processed time-invariantly. In 1990 Hampshire and Waibel introduced a variant that performs a two-dimensional convolution. Since these TDNNs operated on spectrograms, the resulting phoneme recognition system was invariant to both time and frequency shifts, as with images processed by a neocognitron. TDNNs improved the performance of far-distance speech recognition. === Image recognition with CNNs trained by gradient descent === Denker et al. (1989) designed a 2-D CNN system to recognize hand-written ZIP Code numbers. However, the lack of an efficient training method to determine the kernel coefficients of the involved convolutions meant that all the coefficients had to be laboriously hand-designed. Following the advances in the training of 1-D CNNs by Waibel et al. (1987), Yann LeCun et al. (1989) used back-propagation to learn the convolution kernel coefficients directly from images of hand-written numbers. Learning was thus fully automatic, performed better than manual coefficient design, and was suited to a broader range of image recognition problems and image types. Wei Zhang et al.
(1988) used back-propagation to train the convolution kernels of a CNN for alphabet recognition. The model was called shift-invariant pattern recognition neural network before the name CNN was coined later in the early 1990s. Wei Zhang et al. also applied the same CNN without the last fully connected layer for medical image object segmentation (1991) and breast cancer detection in mammograms (1994). This approach became a foundation of modern computer vision. ==== Max pooling ==== In 1990 Yamaguchi et al. introduced the concept of max pooling, a fixed filtering operation that calculates and propagates the maximum value of a given region. They did so by combining TDNNs with max pooling to realize a speaker-independent isolated word recognition system. In their system they used several TDNNs per word, one for each syllable. The results of each TDNN over the input signal were combined using max pooling and the outputs of the pooling layers were then passed on to networks performing the actual word classification. In a variant of the neocognitron called the cresceptron, instead of using Fukushima's spatial averaging with inhibition and saturation, J. Weng et al. in 1993 used max pooling, where a downsampling unit computes the maximum of the activations of the units in its patch, introducing this method into the vision field. Max pooling is often used in modern CNNs. ==== LeNet-5 ==== LeNet-5, a pioneering 7-level convolutional network by LeCun et al. in 1995, classifies hand-written numbers on checks (British English: cheques) digitized in 32x32 pixel images. The ability to process higher-resolution images requires larger and more numerous convolutional layers, so this technique is constrained by the availability of computing resources. It was superior to other commercial courtesy amount reading systems (as of 1995).
The system was integrated in NCR's check reading systems, and fielded in several American banks since June 1996, reading millions of checks per day. === Shift-invariant neural network === A shift-invariant neural network was proposed by Wei Zhang et al. for image character recognition in 1988. It is a modified Neocognitron that keeps only the convolutional interconnections between the image feature layers and the last fully connected layer. The model was trained with back-propagation. The training algorithm was further improved in 1991 to improve its generalization ability. The model architecture was modified by removing the last fully connected layer and applied for medical image segmentation (1991) and automatic detection of breast cancer in mammograms (1994). A different convolution-based design was proposed in 1988 for application to decomposition of one-dimensional electromyography convolved signals via de-convolution. This design was modified in 1989 to other de-convolution-based designs. === GPU implementations === Although CNNs were invented in the 1980s, their breakthrough in the 2000s required fast implementations on graphics processing units (GPUs). In 2004, it was shown by K. S. Oh and K. Jung that standard neural networks can be greatly accelerated on GPUs. Their implementation was 20 times faster than an equivalent implementation on CPU. In 2005, another paper also emphasised the value of GPGPU for machine learning. The first GPU-implementation of a CNN was described in 2006 by K. Chellapilla et al. Their implementation was 4 times faster than an equivalent implementation on CPU. In the same period, GPUs were also used for unsupervised training of deep belief networks. In 2010, Dan Ciresan et al. at IDSIA trained deep feedforward networks on GPUs. In 2011, they extended this to CNNs, achieving a 60-fold speedup compared to CPU training. In 2011, the network won an image recognition contest, achieving superhuman performance for the first time.
Then they won more competitions and achieved state of the art on several benchmarks. Subsequently, AlexNet, a similar GPU-based CNN by Alex Krizhevsky et al., won the ImageNet Large Scale Visual Recognition Challenge 2012. It was an early catalytic event for the AI boom. Compared to the training of CNNs using GPUs, not much attention was given to CPUs. Viebke et al. (2019) parallelized CNN training using the thread- and SIMD-level parallelism available on the Intel Xeon Phi. == Distinguishing features == In the past, traditional multilayer perceptron (MLP) models were used for image recognition. However, the full connectivity between nodes caused the curse of dimensionality, and was computationally intractable with higher-resolution images. A 1000×1000-pixel image with RGB color channels has 3 million weights per fully-connected neuron, which is too high to feasibly process efficiently at scale. For example, in CIFAR-10, images are only of size 32×32×3 (32 wide, 32 high, 3 color channels), so a single fully connected neuron in the first hidden layer of a regular neural network would have 32*32*3 = 3,072 weights. A 200×200 image, however, would lead to neurons that have 200*200*3 = 120,000 weights. Also, such network architecture does not take into account the spatial structure of data, treating input pixels which are far apart in the same way as pixels that are close together. This ignores locality of reference in data with a grid-topology (such as images), both computationally and semantically. Thus, full connectivity of neurons is wasteful for purposes such as image recognition that are dominated by spatially local input patterns. Convolutional neural networks are variants of multilayer perceptrons, designed to emulate the behavior of a visual cortex. These models mitigate the challenges posed by the MLP architecture by exploiting the strong spatially local correlation present in natural images.
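The weight counts quoted above can be checked with a small sketch (the helper names are ours; the convolutional count assumes a single shared K×K filter over C input channels plus one shared bias):

```python
# Weights per fully connected neuron grow with image size...
def fc_weights_per_neuron(width, height, channels):
    return width * height * channels

# ...while a shared convolutional filter's parameter count does not.
def conv_weights_per_filter(k, in_channels):
    return k * k * in_channels + 1  # +1 for the single shared bias

assert fc_weights_per_neuron(32, 32, 3) == 3072          # CIFAR-10 example
assert fc_weights_per_neuron(200, 200, 3) == 120000      # 200x200 example
assert fc_weights_per_neuron(1000, 1000, 3) == 3_000_000 # 1000x1000 example
assert conv_weights_per_filter(5, 3) == 76  # independent of image size
```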
As opposed to MLPs, CNNs have the following distinguishing features: 3D volumes of neurons. The layers of a CNN have neurons arranged in 3 dimensions: width, height and depth, where each neuron inside a convolutional layer is connected to only a small region of the layer before it, called a receptive field. Distinct types of layers, both locally and completely connected, are stacked to form a CNN architecture. Local connectivity: following the concept of receptive fields, CNNs exploit spatial locality by enforcing a local connectivity pattern between neurons of adjacent layers. The architecture thus ensures that the learned "filters" produce the strongest response to a spatially local input pattern. Stacking many such layers leads to nonlinear filters that become increasingly global (i.e. responsive to a larger region of pixel space) so that the network first creates representations of small parts of the input, then from them assembles representations of larger areas. Shared weights: In CNNs, each filter is replicated across the entire visual field. These replicated units share the same parameterization (weight vector and bias) and form a feature map. This means that all the neurons in a given convolutional layer respond to the same feature within their specific response field. Replicating units in this way allows for the resulting activation map to be equivariant under shifts of the locations of input features in the visual field, i.e. they grant translational equivariance, given that the layer has a stride of one. Pooling: In a CNN's pooling layers, feature maps are divided into rectangular sub-regions, and the features in each rectangle are independently down-sampled to a single value, commonly by taking their average or maximum value. In addition to reducing the sizes of feature maps, the pooling operation grants a degree of local translational invariance to the features contained therein, allowing the CNN to be more robust to variations in their positions.
Together, these properties allow CNNs to achieve better generalization on vision problems. Weight sharing dramatically reduces the number of free parameters learned, thus lowering the memory requirements for running the network and allowing the training of larger, more powerful networks. == Building blocks == A CNN architecture is formed by a stack of distinct layers that transform the input volume into an output volume (e.g. holding the class scores) through a differentiable function. A few distinct types of layers are commonly used. These are further discussed below. === Convolutional layer === The convolutional layer is the core building block of a CNN. The layer's parameters consist of a set of learnable filters (or kernels), which have a small receptive field, but extend through the full depth of the input volume. During the forward pass, each filter is convolved across the width and height of the input volume, computing the dot product between the filter entries and the input, producing a 2-dimensional activation map of that filter. As a result, the network learns filters that activate when it detects some specific type of feature at some spatial position in the input. Stacking the activation maps for all filters along the depth dimension forms the full output volume of the convolution layer. Every entry in the output volume can thus also be interpreted as an output of a neuron that looks at a small region in the input. Each entry in an activation map uses the same set of parameters that define the filter. Self-supervised learning has been adapted for use in convolutional layers by using sparse patches with a high-mask ratio and a global response normalization layer. ==== Local connectivity ==== When dealing with high-dimensional inputs such as images, it is impractical to connect neurons to all neurons in the previous volume because such a network architecture does not take the spatial structure of the data into account.
Convolutional networks exploit spatially local correlation by enforcing a sparse local connectivity pattern between neurons of adjacent layers: each neuron is connected to only a small region of the input volume. The extent of this connectivity is a hyperparameter called the receptive field of the neuron. The connections are local in space (along width and height), but always extend along the entire depth of the input volume. Such an architecture ensures that the learned filters produce the strongest response to a spatially local input pattern. ==== Spatial arrangement ==== Three hyperparameters control the size of the output volume of the convolutional layer: the depth, stride, and padding size: The depth of the output volume controls the number of neurons in a layer that connect to the same region of the input volume. These neurons learn to activate for different features in the input. For example, if the first convolutional layer takes the raw image as input, then different neurons along the depth dimension may activate in the presence of various oriented edges, or blobs of color. Stride controls how depth columns around the width and height are allocated. If the stride is 1, then we move the filters one pixel at a time. This leads to heavily overlapping receptive fields between the columns, and to large output volumes. For any integer S > 0 , {\textstyle S>0,} a stride S means that the filter is translated S units at a time per output. In practice, S ≥ 3 {\textstyle S\geq 3} is rare. A greater stride means smaller overlap of receptive fields and smaller spatial dimensions of the output volume. Sometimes, it is convenient to pad the input with zeros (or other values, such as the average of the region) on the border of the input volume. The size of this padding is a third hyperparameter. Padding provides control of the output volume's spatial size. 
In particular, sometimes it is desirable to exactly preserve the spatial size of the input volume; this is commonly referred to as "same" padding. The spatial size of the output volume is a function of the input volume size W {\displaystyle W} , the kernel field size K {\displaystyle K} of the convolutional layer neurons, the stride S {\displaystyle S} , and the amount of zero padding P {\displaystyle P} on the border. The number of neurons that "fit" in a given volume is then: ( W − K + 2 P ) / S + 1. {\displaystyle {\frac {W-K+2P}{S}}+1.} If this number is not an integer, then the strides are incorrect and the neurons cannot be tiled to fit across the input volume in a symmetric way. In general, setting zero padding to be P = ( K − 1 ) / 2 {\textstyle P=(K-1)/2} when the stride is S = 1 {\displaystyle S=1} ensures that the input volume and output volume will have the same size spatially. However, it is not always completely necessary to use all of the neurons of the previous layer. For example, a neural network designer may decide to use just a portion of padding. ==== Parameter sharing ==== A parameter sharing scheme is used in convolutional layers to control the number of free parameters. It relies on the assumption that if a patch feature is useful to compute at some spatial position, then it should also be useful to compute at other positions. Denoting a single 2-dimensional slice of depth as a depth slice, the neurons in each depth slice are constrained to use the same weights and bias. Since all neurons in a single depth slice share the same parameters, the forward pass in each depth slice of the convolutional layer can be computed as a convolution of the neuron's weights with the input volume. Therefore, it is common to refer to the sets of weights as a filter (or a kernel), which is convolved with the input.
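The sliding of one shared filter over the input, together with the output-size rule (W − K + 2P)/S + 1 above, can be sketched as follows (a minimal illustration; the function name is ours, and real libraries use far faster implementations):

```python
import numpy as np

def conv2d_single_filter(x, w, b=0.0, stride=1, pad=0):
    """Slide one shared KxK filter over a padded WxW input; the output
    side length is (W - K + 2*pad)/stride + 1."""
    x = np.pad(x, pad)
    w_in, k = x.shape[0], w.shape[0]
    n_out = (w_in - k) // stride + 1
    out = np.empty((n_out, n_out))
    for i in range(n_out):
        for j in range(n_out):
            patch = x[i*stride:i*stride + k, j*stride:j*stride + k]
            out[i, j] = np.sum(patch * w) + b  # same weights at every position
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
w = np.ones((3, 3)) / 9.0  # a 3x3 averaging filter

amap = conv2d_single_filter(x, w, stride=1, pad=1)
assert amap.shape == (5, 5)  # "same" padding: P = (K - 1)/2 with S = 1
```

With stride 2 and no padding the same input yields a (5 − 3)/2 + 1 = 2×2 activation map, matching the formula.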
The result of this convolution is an activation map, and the set of activation maps for each different filter are stacked together along the depth dimension to produce the output volume. Parameter sharing contributes to the translation invariance of the CNN architecture. Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure, for which we expect completely different features to be learned on different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image. In that case it is common to relax the parameter sharing scheme, and instead simply call the layer a "locally connected layer". === Pooling layer === Another important concept of CNNs is pooling, which is used as a form of non-linear down-sampling. Pooling provides downsampling because it reduces the spatial dimensions (height and width) of the input feature maps while retaining the most important information. There are several non-linear functions to implement pooling, where max pooling and average pooling are the most common. Pooling aggregates information from small regions of the input, creating partitions of the input feature map, typically using a fixed-size window (like 2x2) and applying a stride (often 2) to move the window across the input. Note that without using a stride greater than 1, pooling would not perform downsampling, as it would simply move the pooling window across the input one step at a time, without reducing the size of the feature map. In other words, the stride is what actually causes the downsampling by determining how much the pooling window moves over the input. Intuitively, the exact location of a feature is less important than its rough location relative to other features.
This is the idea behind the use of pooling in convolutional neural networks. The pooling layer serves to progressively reduce the spatial size of the representation, to reduce the number of parameters, memory footprint and amount of computation in the network, and hence to also control overfitting. This is known as down-sampling. It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by an activation function, such as a ReLU layer) in a CNN architecture. While pooling layers contribute to local translation invariance, they do not provide global translation invariance in a CNN, unless a form of global pooling is used. The pooling layer commonly operates independently on every depth, or slice, of the input and resizes it spatially. A very common form of max pooling is a layer with filters of size 2×2, applied with a stride of 2, which subsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations: {\displaystyle f_{X,Y}(S)=\max _{a,b=0}^{1}S_{2X+a,2Y+b}.} In this case, every max operation is over 4 numbers. The depth dimension remains unchanged (this is true for other forms of pooling as well). In addition to max pooling, pooling units can use other functions, such as average pooling or ℓ2-norm pooling. Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which generally performs better in practice. Due to the effects of fast spatial reduction of the size of the representation, there is a recent trend towards using smaller filters or discarding pooling layers altogether. ==== Channel max pooling ==== A channel max pooling (CMP) operation layer conducts the MP operation along the channel side among the corresponding positions of the consecutive feature maps for the purpose of redundant information elimination.
The CMP makes the significant features gather together within fewer channels, which is important for fine-grained image classification that needs more discriminating features. Meanwhile, another advantage of the CMP operation is to make the channel number of feature maps smaller before it connects to the first fully connected (FC) layer. Similar to the MP operation, we denote the input feature maps and output feature maps of a CMP layer as F ∈ R(C×M×N) and C ∈ R(c×M×N), respectively, where C and c are the channel numbers of the input and output feature maps, and M and N are the width and the height of the feature maps, respectively. Note that the CMP operation only changes the channel number of the feature maps. The width and the height of the feature maps are not changed, which is different from the MP operation. See the literature for reviews of pooling methods. === ReLU layer === ReLU is the abbreviation of rectified linear unit. It was proposed by Alston Householder in 1941, and used in CNNs by Kunihiko Fukushima in 1969. ReLU applies the non-saturating activation function f ( x ) = max ( 0 , x ) {\textstyle f(x)=\max(0,x)} . It effectively removes negative values from an activation map by setting them to zero. It introduces nonlinearity to the decision function and in the overall network without affecting the receptive fields of the convolution layers. In 2011, Xavier Glorot, Antoine Bordes and Yoshua Bengio found that ReLU enables better training of deeper networks, compared to widely used activation functions prior to 2011. Other functions can also be used to increase nonlinearity, for example the saturating hyperbolic tangent f ( x ) = tanh ⁡ ( x ) {\displaystyle f(x)=\tanh(x)} , f ( x ) = | tanh ⁡ ( x ) | {\displaystyle f(x)=|\tanh(x)|} , and the sigmoid function σ ( x ) = ( 1 + e − x ) − 1 {\textstyle \sigma (x)=(1+e^{-x})^{-1}} .
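The ReLU activation f(x) = max(0, x) and the 2×2, stride-2 max pooling from the preceding sections can be sketched together on a single 4×4 feature map (a minimal NumPy illustration):

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x): negatives are clipped to zero
    return np.maximum(0, x)

def max_pool_2x2(s):
    # 2x2 windows with stride 2: each output is the max of 4 numbers,
    # discarding 75% of the activations; depth would be unchanged.
    h, w = s.shape
    return s.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[ 1., -2.,  3.,  0.],
                 [-1.,  5., -3.,  2.],
                 [ 0., -1.,  4., -2.],
                 [ 2.,  1., -4.,  6.]])

activated = relu(fmap)            # 4x4, negatives set to zero
pooled = max_pool_2x2(activated)  # 4x4 -> 2x2
```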
ReLU is often preferred to other functions because it trains the neural network several times faster without a significant penalty to generalization accuracy. === Fully connected layer === After several convolutional and max pooling layers, the final classification is done via fully connected layers. Neurons in a fully connected layer have connections to all activations in the previous layer, as seen in regular (non-convolutional) artificial neural networks. Their activations can thus be computed as an affine transformation, with matrix multiplication followed by a bias offset (vector addition of a learned or fixed bias term). === Loss layer === The "loss layer", or "loss function", specifies how training penalizes the deviation between the predicted output of the network and the true data labels (during supervised learning). Various loss functions can be used, depending on the specific task. The Softmax loss function is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in [ 0 , 1 ] {\displaystyle [0,1]} . Euclidean loss is used for regressing to real-valued labels ( − ∞ , ∞ ) {\displaystyle (-\infty ,\infty )} . == Hyperparameters == Hyperparameters are various settings that are used to control the learning process. CNNs use more hyperparameters than a standard multilayer perceptron (MLP). === Padding === Padding is the addition of (typically) 0-valued pixels on the borders of an image. This is done so that the border pixels are not undervalued (lost) from the output because they would ordinarily participate in only a single receptive field instance. The padding applied is typically one less than the corresponding kernel dimension. For example, a convolutional layer using 3x3 kernels would receive a 2-pixel pad, that is, 1 pixel on each side of the image. === Stride === The stride is the number of pixels that the analysis window moves on each iteration.
A stride of 2 means that each kernel is offset by 2 pixels from its predecessor. === Number of filters === Since feature map size decreases with depth, layers near the input layer tend to have fewer filters while higher layers can have more. To equalize computation at each layer, the product of the number of feature maps and the number of pixel positions is kept roughly constant across layers. Preserving more information about the input would require keeping the total number of activations (number of feature maps times number of pixel positions) non-decreasing from one layer to the next. The number of feature maps directly controls the capacity and depends on the number of available examples and task complexity. === Filter (or Kernel) size === Common filter sizes found in the literature vary greatly, and are usually chosen based on the data set. Typical filter sizes range from 1x1 to 7x7. As two famous examples, AlexNet used 3x3, 5x5, and 11x11. Inceptionv3 used 1x1, 3x3, and 5x5. The challenge is to find the right level of granularity so as to create abstractions at the proper scale, given a particular data set, and without overfitting. === Pooling type and size === Max pooling is typically used, often with a 2x2 dimension. This implies that the input is drastically downsampled, reducing processing cost. Greater pooling reduces the dimension of the signal, and may result in unacceptable information loss. Often, non-overlapping pooling windows perform best. === Dilation === Dilation involves ignoring pixels within a kernel. This reduces processing memory potentially without significant signal loss. A dilation of 2 on a 3x3 kernel expands the kernel to 5x5, while still processing 9 (evenly spaced) pixels. Specifically, the processed pixels after the dilation are the cells (1,1), (1,3), (1,5), (3,1), (3,3), (3,5), (5,1), (5,3), (5,5), where (i,j) denotes the cell of the i-th row and j-th column in the expanded 5x5 kernel. Accordingly, a dilation of 4 expands the kernel to 9x9.
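The dilation example above can be sketched, assuming the convention that a dilation of d places the kernel taps d cells apart (so a 3×3 kernel with dilation 2 occupies a 5×5 footprint with 9 active cells):

```python
import numpy as np

def dilate_kernel(kernel, dilation):
    """Expand a square kernel to a d*(k-1)+1 footprint, leaving zeros
    (ignored pixels) between the original taps."""
    k = kernel.shape[0]
    size = dilation * (k - 1) + 1
    out = np.zeros((size, size))
    out[::dilation, ::dilation] = kernel  # taps at rows/cols 0, d, 2d, ...
    return out

kernel = np.arange(1, 10, dtype=float).reshape(3, 3)
expanded = dilate_kernel(kernel, 2)

assert expanded.shape == (5, 5)
# Nonzero cells sit at rows/cols 0, 2, 4 of the 5x5 footprint, i.e.
# (1,1), (1,3), ..., (5,5) in the article's 1-indexed notation.
assert np.count_nonzero(expanded) == 9
```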
== Translation equivariance and aliasing == It is commonly assumed that CNNs are invariant to shifts of the input. Convolution or pooling layers within a CNN that do not have a stride greater than one are indeed equivariant to translations of the input. However, layers with a stride greater than one ignore the Nyquist–Shannon sampling theorem and might lead to aliasing of the input signal. While, in principle, CNNs are capable of implementing anti-aliasing filters, it has been observed that this does not happen in practice, therefore yielding models that are not equivariant to translations. Furthermore, if a CNN makes use of fully connected layers, translation equivariance does not imply translation invariance, as the fully connected layers are not invariant to shifts of the input. One solution for complete translation invariance is avoiding any down-sampling throughout the network and applying global average pooling at the last layer. Additionally, several other partial solutions have been proposed, such as anti-aliasing before downsampling operations, spatial transformer networks, data augmentation, subsampling combined with pooling, and capsule neural networks. == Evaluation == The accuracy of the final model is typically estimated on a sub-part of the dataset set apart at the start, often called a test set. Alternatively, methods such as k-fold cross-validation are applied. Other strategies include using conformal prediction. == Regularization methods == Regularization is a process of introducing additional information to solve an ill-posed problem or to prevent overfitting. CNNs use various types of regularization. === Empirical === ==== Dropout ==== Because networks have so many parameters, they are prone to overfitting. One method to reduce overfitting is dropout, introduced in 2014.
At each training stage, individual nodes are either "dropped out" of the net (ignored) with probability 1 − p {\displaystyle 1-p} or kept with probability p {\displaystyle p} , so that a reduced network is left; incoming and outgoing edges to a dropped-out node are also removed. Only the reduced network is trained on the data in that stage. The removed nodes are then reinserted into the network with their original weights. In the training stages, p {\displaystyle p} is usually 0.5; for input nodes, it is typically much higher because information is directly lost when input nodes are ignored. At testing time after training has finished, we would ideally like to find a sample average of all possible 2 n {\displaystyle 2^{n}} dropped-out networks; unfortunately this is unfeasible for large values of n {\displaystyle n} . However, we can find an approximation by using the full network with each node's output weighted by a factor of p {\displaystyle p} , so the expected value of the output of any node is the same as in the training stages. This is the biggest contribution of the dropout method: although it effectively generates 2 n {\displaystyle 2^{n}} neural nets, and as such allows for model combination, at test time only a single network needs to be tested. By avoiding training all nodes on all training data, dropout decreases overfitting. The method also significantly improves training speed. This makes the model combination practical, even for deep neural networks. The technique seems to reduce node interactions, leading them to learn more robust features that better generalize to new data. ==== DropConnect ==== DropConnect is the generalization of dropout in which each connection, rather than each output unit, can be dropped with probability 1 − p {\displaystyle 1-p} . Each unit thus receives input from a random subset of units in the previous layer. 
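The dropout scheme just described, the weight-scaling approximation at test time, and (for contrast) DropConnect's per-weight masking can be sketched as follows (a minimal illustration with all-ones inputs and weights so the expected values are easy to verify):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.5                  # keep probability
x = np.ones(8)           # activations of one layer
W = np.ones((4, 8))      # weights of the next layer

# Dropout: zero out whole units during training...
unit_mask = rng.binomial(1, p, size=x.shape)
train_out = W @ (x * unit_mask)

# ...and at test time keep all units but scale by p, so the expected
# value of each node's output matches the training stages.
test_out = W @ (x * p)

# DropConnect: drop individual connections (weights) instead of units.
weight_mask = rng.binomial(1, p, size=W.shape)
dropconnect_out = (W * weight_mask) @ x
```

With all-ones inputs and weights, every entry of `test_out` is 8 × 0.5 = 4.0, which is also the expected value of each entry of `train_out` over random masks.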
DropConnect is similar to dropout as it introduces dynamic sparsity within the model, but differs in that the sparsity is on the weights, rather than the output vectors of a layer. In other words, the fully connected layer with DropConnect becomes a sparsely connected layer in which the connections are chosen at random during the training stage. ==== Stochastic pooling ==== A major drawback to dropout is that it does not have the same benefits for convolutional layers, where the neurons are not fully connected. Even before dropout, a technique called stochastic pooling was introduced in 2013, in which the conventional deterministic pooling operations are replaced with a stochastic procedure, where the activation within each pooling region is picked randomly according to a multinomial distribution, given by the activities within the pooling region. This approach is free of hyperparameters and can be combined with other regularization approaches, such as dropout and data augmentation. An alternate view of stochastic pooling is that it is equivalent to standard max pooling but with many copies of an input image, each having small local deformations. This is similar to explicit elastic deformations of the input images, which delivers excellent performance on the MNIST data set. Using stochastic pooling in a multilayer model gives an exponential number of deformations since the selections in higher layers are independent of those below. ==== Artificial data ==== Because the degree of model overfitting is determined by both its power and the amount of training it receives, providing a convolutional network with more training examples can reduce overfitting. Because there is often not enough available data to train, especially considering that some part should be spared for later testing, two approaches are to either generate new data from scratch (if possible) or perturb existing data to create new ones. The latter approach has been used since the mid-1990s.
For example, input images can be cropped, rotated, or rescaled to create new examples with the same labels as the original training set. === Explicit === ==== Early stopping ==== One of the simplest methods to prevent overfitting of a network is to simply stop the training before overfitting has had a chance to occur. It comes with the disadvantage that the learning process is halted. ==== Number of parameters ==== Another simple way to prevent overfitting is to limit the number of parameters, typically by limiting the number of hidden units in each layer or limiting network depth. For convolutional networks, the filter size also affects the number of parameters. Limiting the number of parameters restricts the predictive power of the network directly, reducing the complexity of the function that it can compute from the data, and thus limits the amount of overfitting. This is equivalent to a "zero norm". ==== Weight decay ==== A simple form of added regularizer is weight decay, which simply adds an additional error, proportional to the sum of the absolute values of the weights (L1 norm) or the squared magnitude (L2 norm) of the weight vector, to the error at each node. The level of acceptable model complexity can be reduced by increasing the proportionality constant (the 'alpha' hyperparameter), thus increasing the penalty for large weight vectors. L2 regularization is the most common form of regularization. It can be implemented by penalizing the squared magnitude of all parameters directly in the objective. L2 regularization has the intuitive interpretation of heavily penalizing peaky weight vectors and preferring diffuse weight vectors. Due to multiplicative interactions between weights and inputs, this has the useful property of encouraging the network to use all of its inputs a little rather than some of its inputs a lot. L1 regularization is also common. It makes the weight vectors sparse during optimization.
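The effect of the L2 and L1 penalties on a single gradient step can be sketched as follows; the weight vector, the data-loss gradient, and the hyperparameter values are hypothetical:

```python
import numpy as np

w = np.array([0.5, -2.0, 0.0, 3.0])           # weight vector
grad_loss = np.array([0.1, -0.2, 0.3, 0.0])   # gradient of the data loss (hypothetical)
lr, alpha = 0.1, 0.01                         # learning rate, penalty strength

# L2 weight decay: the penalty alpha * ||w||^2 contributes 2 * alpha * w
# to the gradient, shrinking every weight in proportion to its size.
w_l2 = w - lr * (grad_loss + 2 * alpha * w)

# L1 regularization: the penalty alpha * ||w||_1 contributes alpha * sign(w),
# pushing small weights toward exactly zero (sparsity).
w_l1 = w - lr * (grad_loss + alpha * np.sign(w))
```

Note how the L2 term pulls the largest weight (3.0) back hardest, while the L1 term applies the same constant pull to every nonzero weight regardless of its size.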
In other words, neurons with L1 regularization end up using only a sparse subset of their most important inputs and become nearly invariant to the noisy inputs. L1 can be combined with L2 regularization; this is called elastic net regularization. ==== Max norm constraints ==== Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector for every neuron and use projected gradient descent to enforce the constraint. In practice, this corresponds to performing the parameter update as normal, and then enforcing the constraint by clamping the weight vector w → {\displaystyle {\vec {w}}} of every neuron to satisfy ‖ w → ‖ 2 < c {\displaystyle \|{\vec {w}}\|_{2}<c} . Typical values of c {\displaystyle c} are on the order of 3–4. Some papers report improvements when using this form of regularization. == Hierarchical coordinate frames == Pooling loses the precise spatial relationships between high-level parts (such as the nose and mouth in a face image). These relationships are needed for identity recognition. Overlapping the pools so that each feature occurs in multiple pools helps retain the information. Translation alone cannot extrapolate the understanding of geometric relationships to a radically new viewpoint, such as a different orientation or scale. On the other hand, people are very good at extrapolating; after seeing a new shape once they can recognize it from a different viewpoint. An earlier common way to deal with this problem is to train the network on transformed data in different orientations, scales, lighting, etc., so that the network can cope with these variations. This is computationally intensive for large data sets. The alternative is to use a hierarchy of coordinate frames and use a group of neurons to represent a conjunction of the shape of the feature and its pose relative to the retina.
The pose relative to the retina is the relationship between the coordinate frame of the retina and the coordinate frame of the intrinsic features. Thus, one way to represent something is to embed the coordinate frame within it. This allows large features to be recognized by using the consistency of the poses of their parts (e.g. nose and mouth poses make a consistent prediction of the pose of the whole face). This approach ensures that the higher-level entity (e.g. face) is present when the lower-level entities (e.g. nose and mouth) agree on their predictions of the pose. The vectors of neuronal activity that represent pose ("pose vectors") allow spatial transformations to be modeled as linear operations, which makes it easier for the network to learn the hierarchy of visual entities and generalize across viewpoints. This is similar to the way the human visual system imposes coordinate frames in order to represent shapes. == Applications == === Image recognition === CNNs are often used in image recognition systems. In 2012, an error rate of 0.23% on the MNIST database was reported. Another paper on using CNN for image classification reported that the learning process was "surprisingly fast"; in the same paper, the best published results as of 2011 were achieved on the MNIST database and the NORB database. Subsequently, a similar CNN called AlexNet won the ImageNet Large Scale Visual Recognition Challenge 2012. When applied to facial recognition, CNNs achieved a large decrease in error rate. Another paper reported a 97.6% recognition rate on "5,600 still images of more than 10 subjects". CNNs were used to assess video quality in an objective way after manual training; the resulting system had a very low root mean square error. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object classification and detection, with millions of images and hundreds of object classes.
In the ILSVRC 2014, a large-scale visual recognition challenge, almost every highly ranked team used a CNN as its basic framework. The winner, GoogLeNet (the foundation of DeepDream), increased the mean average precision of object detection to 0.439329 and reduced classification error to 0.06656, the best result to date. Its network used more than 30 layers. The performance of convolutional neural networks on the ImageNet tests was close to that of humans. The best algorithms still struggle with objects that are small or thin, such as a small ant on the stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters, an increasingly common phenomenon with modern digital cameras. By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. For example, they are not good at classifying objects into fine-grained categories such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this. In 2015, a many-layered CNN demonstrated the ability to spot faces from a wide range of angles, including upside down, even when partially occluded, with competitive performance. The network was trained on a database of 200,000 images that included faces at various angles and orientations and a further 20 million images without faces. They used batches of 128 images over 50,000 iterations. === Video analysis === Compared to image data domains, there is relatively little work on applying CNNs to video classification. Video is more complex than images, since it has another (temporal) dimension. However, some extensions of CNNs into the video domain have been explored. One approach is to treat space and time as equivalent dimensions of the input and perform convolutions in both time and space. Another way is to fuse the features of two convolutional neural networks, one for the spatial and one for the temporal stream.
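The first of these approaches, performing convolutions in both time and space, amounts to a 3-D convolution over the video volume. A naive single-channel sketch with toy sizes (no deep-learning framework, and all shapes illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
video = rng.normal(size=(8, 16, 16))   # (time, height, width), one channel
kernel = rng.normal(size=(3, 3, 3))    # convolved over time *and* space

T, H, W = video.shape
t, h, w = kernel.shape
out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
for i in range(out.shape[0]):           # slide over time
    for j in range(out.shape[1]):       # slide over height
        for k in range(out.shape[2]):   # slide over width
            out[i, j, k] = np.sum(video[i:i+t, j:j+h, k:k+w] * kernel)
```

Each output value mixes information from three consecutive frames, which is what lets the filter respond to motion as well as to spatial patterns.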
Long short-term memory (LSTM) recurrent units are typically incorporated after the CNN to account for inter-frame or inter-clip dependencies. Unsupervised learning schemes for training spatio-temporal features have been introduced, based on Convolutional Gated Restricted Boltzmann Machines and Independent Subspace Analysis. Their application can be seen in text-to-video models. === Natural language processing === CNNs have also been explored for natural language processing. CNN models are effective for various NLP problems and achieved excellent results in semantic parsing, search query retrieval, sentence modeling, classification, prediction and other traditional NLP tasks. Compared to traditional language processing methods such as recurrent neural networks, CNNs can represent different contextual realities of language that do not rely on a series-sequence assumption, while RNNs are better suited when classical time series modeling is required. === Anomaly detection === A CNN with 1-D convolutions was used on time series in the frequency domain (spectral residual) by an unsupervised model to detect anomalies in the time domain. === Drug discovery === CNNs have been used in drug discovery. Predicting the interaction between molecules and biological proteins can identify potential treatments. In 2015, Atomwise introduced AtomNet, the first deep learning neural network for structure-based drug design. The system trains directly on 3-dimensional representations of chemical interactions. Similar to how image recognition networks learn to compose smaller, spatially proximate features into larger, complex structures, AtomNet discovers chemical features, such as aromaticity, sp3 carbons, and hydrogen bonding. Subsequently, AtomNet was used to predict novel candidate biomolecules for multiple disease targets, most notably treatments for the Ebola virus and multiple sclerosis. === Checkers game === CNNs have been used in the game of checkers.
From 1999 to 2001, Fogel and Chellapilla published papers showing how a convolutional neural network could learn to play checkers using co-evolution. The learning process did not use prior human professional games, but rather focused on a minimal set of information contained in the checkerboard: the location and type of pieces, and the difference in the number of pieces between the two sides. Ultimately, the program (Blondie24) was tested on 165 games against players and ranked in the highest 0.4%. It also earned a win against the program Chinook at its "expert" level of play. === Go === CNNs have been used in computer Go. In December 2014, Clark and Storkey published a paper showing that a CNN trained by supervised learning from a database of human professional games could outperform GNU Go and win some games against the Monte Carlo tree search program Fuego 1.1 in a fraction of the time it took Fuego to play. Later it was announced that a large 12-layer convolutional neural network had correctly predicted the professional move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GNU Go in 97% of games, and matched the performance of the Monte Carlo tree search program Fuego simulating ten thousand playouts (about a million positions) per move. AlphaGo, the first program to beat the best human player at the time, used a pair of CNNs to drive MCTS: a "policy network" for choosing moves to try and a "value network" for evaluating positions. === Time series forecasting === Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better. Dilated convolutions might enable one-dimensional convolutional neural networks to effectively learn time series dependencies.
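A dilated convolution spaces the kernel taps several steps apart, so the receptive field grows without adding parameters; stacking a few such layers covers long time spans. A minimal sketch on a toy series (the kernel and dilation rate are arbitrary illustrative choices):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1-D convolution whose taps are `dilation` steps apart,
    widening the receptive field without adding parameters."""
    k = len(kernel)
    span = (k - 1) * dilation          # receptive field minus one
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span)
    ])

x = np.arange(10.0)                    # toy time series: 0, 1, ..., 9
out = dilated_conv1d(x, np.array([1.0, -1.0]), dilation=4)
# Each output compares points 4 steps apart: x[i] - x[i+4] = -4 here.
```

With dilation 1 this reduces to an ordinary convolution; doubling the dilation at each layer (1, 2, 4, ...) is the usual trick for exponentially growing coverage of the past.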
Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients. Convolutional networks can provide an improved forecasting performance when there are multiple similar time series to learn from. CNNs can also be applied to further tasks in time series analysis (e.g., time series classification or quantile forecasting). === Cultural heritage and 3D-datasets === As archaeological findings such as clay tablets with cuneiform writing are increasingly acquired using 3D scanners, benchmark datasets are becoming available, including HeiCuBeDa, which provides almost 2,000 normalized 2-D and 3-D datasets prepared with the GigaMesh Software Framework. Curvature-based measures are thus used in conjunction with geometric neural networks (GNNs), e.g. for period classification of those clay tablets, which are among the oldest documents of human history. == Fine-tuning == For many applications, little training data is available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain. Once the network parameters have converged, an additional training step is performed using the in-domain data to fine-tune the network weights; this is known as transfer learning. Furthermore, this technique allows convolutional network architectures to successfully be applied to problems with tiny training sets. == Human interpretable explanations == End-to-end training and prediction are common practice in computer vision. However, human interpretable explanations are required for critical systems such as self-driving cars. With recent advances in visual salience, spatial attention, and temporal attention, the most critical spatial regions/temporal instants can be visualized to justify the CNN predictions.
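The freeze-then-fine-tune structure described under Fine-tuning can be sketched with a toy two-layer model in NumPy. Here the "pretrained" weights are random placeholders standing in for parameters learned on the larger related dataset, and only the task-specific head is updated on the small in-domain set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a feature extractor pretrained on a large related dataset.
W_features = rng.normal(size=(8, 16))        # frozen during fine-tuning
W_head = rng.normal(size=(1, 8))             # task-specific head, trainable

X_small = rng.normal(size=(20, 16))          # tiny in-domain training set
y_small = rng.normal(size=(20, 1))           # in-domain regression targets

def head_loss():
    feats = np.maximum(X_small @ W_features.T, 0.0)   # frozen ReLU features
    return float(np.mean((feats @ W_head.T - y_small) ** 2))

loss_before = head_loss()
lr = 0.005
for _ in range(200):
    feats = np.maximum(X_small @ W_features.T, 0.0)
    err = feats @ W_head.T - y_small
    W_head -= lr * (err.T @ feats) / len(X_small)     # update the head only
loss_after = head_loss()
```

In practice the frozen part would be a real pretrained convolutional stack, and one often unfreezes it later with a much smaller learning rate once the head has converged.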
== Related architectures == === Deep Q-networks === A deep Q-network (DQN) is a type of deep learning model that combines a deep neural network with Q-learning, a form of reinforcement learning. Unlike earlier reinforcement learning agents, DQNs that utilize CNNs can learn directly from high-dimensional sensory inputs via reinforcement learning. Preliminary results were presented in 2014, with an accompanying paper in February 2015. The research described an application to Atari 2600 gaming. Other deep reinforcement learning models preceded it. === Deep belief networks === Convolutional deep belief networks (CDBN) have structure very similar to convolutional neural networks and are trained similarly to deep belief networks. Therefore, they exploit the 2D structure of images, like CNNs do, and make use of pre-training like deep belief networks. They provide a generic structure that can be used in many image and signal processing tasks. Benchmark results on standard image datasets like CIFAR have been obtained using CDBNs. === Neural abstraction pyramid === The feed-forward architecture of convolutional neural networks was extended in the neural abstraction pyramid by lateral and feedback connections. The resulting recurrent convolutional network allows for the flexible incorporation of contextual information to iteratively resolve local ambiguities. In contrast to previous models, image-like outputs at the highest resolution were generated, e.g., for semantic segmentation, image reconstruction, and object localization tasks. == Notable libraries == Caffe: A library for convolutional neural networks. Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU and GPU. Developed in C++, and has Python and MATLAB wrappers. Deeplearning4j: Deep learning in Java and Scala on multi-GPU-enabled Spark. A general-purpose deep learning library for the JVM production stack running on a C++ scientific computing engine. Allows the creation of custom layers. 
Integrates with Hadoop and Kafka. Dlib: A toolkit for making real world machine learning and data analysis applications in C++. Microsoft Cognitive Toolkit: A deep learning toolkit written by Microsoft with several unique features enhancing scalability over multiple nodes. It supports full-fledged interfaces for training in C++ and Python and with additional support for model inference in C# and Java. TensorFlow: Apache 2.0-licensed Theano-like library with support for CPU, GPU, Google's proprietary tensor processing unit (TPU), and mobile devices. Theano: The reference deep-learning library for Python with an API largely compatible with the popular NumPy library. Allows user to write symbolic mathematical expressions, then automatically generates their derivatives, saving the user from having to code gradients or backpropagation. These symbolic expressions are automatically compiled to CUDA code for a fast, on-the-GPU implementation. Torch: A scientific computing framework with wide support for machine learning algorithms, written in C and Lua. == See also == Attention (machine learning) Convolution Deep learning Natural-language processing Neocognitron Scale-invariant feature transform Time delay neural network Vision processing unit == Notes == == References == == External links == CS231n: Convolutional Neural Networks for Visual Recognition — Andrej Karpathy's Stanford computer science course on CNNs in computer vision vdumoulin/conv_arithmetic: A technical report on convolution arithmetic in the context of deep learning. Animations of convolutions.
Wikipedia/Deconvolutional_neural_network
In experimental particle physics, a calorimeter is a type of detector that measures the energy of particles. Particles enter the calorimeter and initiate a particle shower in which their energy is deposited in the calorimeter, collected, and measured. The energy may be measured in its entirety, requiring total containment of the particle shower, or it may be sampled. Typically, calorimeters are segmented transversely to provide information about the direction of the particle or particles, as well as the energy deposited, and longitudinal segmentation can provide information about the identity of the particle based on the shape of the shower as it develops. Calorimetry design is an active area of research in particle physics. == Types of calorimeters == === Electromagnetic versus hadronic === An electromagnetic calorimeter (ECAL) is one specifically designed to measure the energy of particles that interact primarily via the electromagnetic interaction such as electrons, positrons and photons. A hadronic calorimeter (HCAL) is one designed to measure particles that interact via the strong nuclear force. (See types of particle showers for the differences between the two.) Calorimeters are characterized by the radiation length (for ECALs) and nuclear interaction length (for HCALs) of their active material. ECALs tend to be 15–30 radiation lengths deep while HCALs are 5–8 nuclear interaction lengths deep. === Homogeneous versus sampling === An ECAL or an HCAL can be either a sampling calorimeter or a homogeneous calorimeter. In a sampling calorimeter, the material that produces the particle shower is distinct from the material that measures the deposited energy. Typically the two materials alternate. One advantage of this is that each material can be well-suited to its task; for example, a very dense material can be used to produce a shower that evolves quickly in a limited space, even if the material is unsuitable for measuring the energy deposited by the shower. 
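Why a sampling calorimeter must estimate rather than directly measure the total energy can be illustrated with a toy simulation; the shower model, the layer count, and the energies below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

true_energy = 50.0                            # GeV (illustrative)
# Toy shower: the energy is deposited in small random amounts across
# 200 alternating absorber (passive) and detector (active) layers.
deposits = rng.dirichlet(np.ones(200)) * true_energy
layer_is_active = np.arange(200) % 2 == 1     # every other layer reads out

measured = deposits[layer_is_active].sum()    # only active layers are seen
sampling_fraction = layer_is_active.mean()    # 0.5 by construction here
estimated = measured / sampling_fraction      # calibrated estimate of the total
```

Real calibrations determine the sampling fraction from test-beam data rather than by construction, and the residual spread of the estimate around the true energy is one contribution to the calorimeter's resolution.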
A disadvantage is that some of the energy is deposited in the wrong material and is not measured; thus the total shower energy must be estimated instead of being measured directly. A homogeneous calorimeter is one in which the entire volume is sensitive and contributes a signal. == Calorimeters in high-energy physics experiments == Most particle physics experiments use some form of calorimetry. Often it is the most practical way to detect and measure neutral particles from an interaction. In addition, calorimeters are necessary for calculating "missing energy" which can be attributed to particles that rarely interact with matter and escape the detector, such as neutrinos. In most experiments the calorimeter works in conjunction with other components like a central tracker and a muon detector. All the detector components work together to achieve the objective of reconstructing a physics event. == See also == Calorimeter (for other uses of the term) Total absorption spectroscopy, a technique whose main measuring device is a calorimeter == References == == External links == Calorimeter section of The Particle Detector BriefBook Explanation of Calorimeters on Quantumdiaries.org
Wikipedia/Calorimeter_(particle_physics)
In mathematics, a Borel set is any subset of a topological space that can be formed from its open sets (or, equivalently, from closed sets) through the operations of countable union, countable intersection, and relative complement. Borel sets are named after Émile Borel. For a topological space X, the collection of all Borel sets on X forms a σ-algebra, known as the Borel algebra or Borel σ-algebra. The Borel algebra on X is the smallest σ-algebra containing all open sets (or, equivalently, all closed sets). Borel sets are important in measure theory, since any measure defined on the open sets of a space, or on the closed sets of a space, must also be defined on all Borel sets of that space. Any measure defined on the Borel sets is called a Borel measure. Borel sets and the associated Borel hierarchy also play a fundamental role in descriptive set theory. In some contexts, Borel sets are defined to be generated by the compact sets of the topological space, rather than the open sets. The two definitions are equivalent for many well-behaved spaces, including all Hausdorff σ-compact spaces, but can be different in more pathological spaces. == Generating the Borel algebra == In the case that X is a metric space, the Borel algebra in the first sense may be described generatively as follows. For a collection T of subsets of X (that is, for any subset of the power set P(X) of X), let T σ {\displaystyle T_{\sigma }} be all countable unions of elements of T T δ {\displaystyle T_{\delta }} be all countable intersections of elements of T T δ σ = ( T δ ) σ . {\displaystyle T_{\delta \sigma }=(T_{\delta })_{\sigma }.} Now define by transfinite induction a sequence Gm, where m is an ordinal number, in the following manner: For the base case of the definition, let G 0 {\displaystyle G^{0}} be the collection of open subsets of X. If i is not a limit ordinal, then i has an immediately preceding ordinal i − 1. Let G i = [ G i − 1 ] δ σ . 
{\displaystyle G^{i}=[G^{i-1}]_{\delta \sigma }.} If i is a limit ordinal, set G i = ⋃ j < i G j . {\displaystyle G^{i}=\bigcup _{j<i}G^{j}.} The claim is that the Borel algebra is Gω1, where ω1 is the first uncountable ordinal number. That is, the Borel algebra can be generated from the class of open sets by iterating the operation G ↦ G δ σ . {\displaystyle G\mapsto G_{\delta \sigma }.} to the first uncountable ordinal. To prove this claim, any open set in a metric space is the union of an increasing sequence of closed sets. In particular, complementation of sets maps Gm into itself for any limit ordinal m; moreover if m is an uncountable limit ordinal, Gm is closed under countable unions. For each Borel set B, there is some countable ordinal αB such that B can be obtained by iterating the operation over αB. However, as B varies over all Borel sets, αB will vary over all the countable ordinals, and thus the first ordinal at which all the Borel sets are obtained is ω1, the first uncountable ordinal. The resulting sequence of sets is termed the Borel hierarchy. === Example === An important example, especially in the theory of probability, is the Borel algebra on the set of real numbers. It is the algebra on which the Borel measure is defined. Given a real random variable defined on a probability space, its probability distribution is by definition also a measure on the Borel algebra. The Borel algebra on the reals is the smallest σ-algebra on R that contains all the intervals. In the construction by transfinite induction, it can be shown that, in each step, the number of sets is, at most, the cardinality of the continuum. So, the total number of Borel sets is less than or equal to ℵ 1 ⋅ 2 ℵ 0 = 2 ℵ 0 . 
{\displaystyle \aleph _{1}\cdot 2^{\aleph _{0}}\,=2^{\aleph _{0}}.} In fact, the cardinality of the collection of Borel sets is equal to that of the continuum (compare to the number of Lebesgue measurable sets that exist, which is strictly larger and equal to 2 2 ℵ 0 {\displaystyle 2^{2^{\aleph _{0}}}} ). == Standard Borel spaces and Kuratowski theorems == Let X be a topological space. The Borel space associated to X is the pair (X,B), where B is the σ-algebra of Borel sets of X. George Mackey defined a Borel space somewhat differently, writing that it is "a set together with a distinguished σ-field of subsets called its Borel sets." However, modern usage is to call the distinguished sub-algebra the measurable sets and such spaces measurable spaces. The reason for this distinction is that the Borel sets are the σ-algebra generated by open sets (of a topological space), whereas Mackey's definition refers to a set equipped with an arbitrary σ-algebra. There exist measurable spaces that are not Borel spaces, for any choice of topology on the underlying space. Measurable spaces form a category in which the morphisms are measurable functions between measurable spaces. A function f : X → Y {\displaystyle f:X\rightarrow Y} is measurable if it pulls back measurable sets, i.e., for all measurable sets B in Y, the set f − 1 ( B ) {\displaystyle f^{-1}(B)} is measurable in X. Theorem. Let X be a Polish space, that is, a topological space such that there is a metric d on X that defines the topology of X and that makes X a complete separable metric space. Then X as a Borel space is isomorphic to one of R, Z, a finite space. (This result is reminiscent of Maharam's theorem.) Considered as Borel spaces, the real line R, the union of R with a countable set, and Rn are isomorphic. A standard Borel space is the Borel space associated to a Polish space. 
A standard Borel space is characterized up to isomorphism by its cardinality, and any uncountable standard Borel space has the cardinality of the continuum. For subsets of Polish spaces, Borel sets can be characterized as those sets that are the ranges of continuous injective maps defined on Polish spaces. Note however, that the range of a continuous noninjective map may fail to be Borel. See analytic set. Every probability measure on a standard Borel space turns it into a standard probability space. == Non-Borel sets == An example of a subset of the reals that is non-Borel, due to Lusin, is described below. In contrast, an example of a non-measurable set cannot be exhibited, although the existence of such a set is implied, for example, by the axiom of choice. Every irrational number has a unique representation by an infinite simple continued fraction x = a 0 + 1 a 1 + 1 a 2 + 1 a 3 + 1 ⋱ {\displaystyle x=a_{0}+{\cfrac {1}{a_{1}+{\cfrac {1}{a_{2}+{\cfrac {1}{a_{3}+{\cfrac {1}{\ddots \,}}}}}}}}} where a 0 {\displaystyle a_{0}} is some integer and all the other numbers a k {\displaystyle a_{k}} are positive integers. Let A {\displaystyle A} be the set of all irrational numbers that correspond to sequences ( a 0 , a 1 , … ) {\displaystyle (a_{0},a_{1},\dots )} with the following property: there exists an infinite subsequence ( a k 0 , a k 1 , … ) {\displaystyle (a_{k_{0}},a_{k_{1}},\dots )} such that each element is a divisor of the next element. This set A {\displaystyle A} is not Borel. However, it is analytic (all Borel sets are also analytic), and complete in the class of analytic sets. For more details see descriptive set theory and the book by A. S. Kechris (see References), especially Exercise (27.2) on page 209, Definition (22.9) on page 169, Exercise (3.4)(ii) on page 14, and on page 196. 
It is important to note that while the Zermelo–Fraenkel axioms (ZF) are sufficient to formalize the construction of A {\displaystyle A} , it cannot be proven in ZF alone that A {\displaystyle A} is non-Borel. In fact, it is consistent with ZF that R {\displaystyle \mathbb {R} } is a countable union of countable sets, so that any subset of R {\displaystyle \mathbb {R} } is a Borel set. Another non-Borel set is an inverse image f − 1 [ 0 ] {\displaystyle f^{-1}[0]} of an infinite parity function f : { 0 , 1 } ω → { 0 , 1 } {\displaystyle f\colon \{0,1\}^{\omega }\to \{0,1\}} . However, this is a proof of existence (via the axiom of choice), not an explicit example. == Alternative non-equivalent definitions == According to Paul Halmos, a subset of a locally compact Hausdorff topological space is called a Borel set if it belongs to the smallest σ-ring containing all compact sets. Norberg and Vervaat redefine the Borel algebra of a topological space X {\displaystyle X} as the σ {\displaystyle \sigma } -algebra generated by its open subsets and its compact saturated subsets. This definition is well-suited for applications in the case where X {\displaystyle X} is not Hausdorff. It coincides with the usual definition if X {\displaystyle X} is second countable or if every compact saturated subset is closed (which is the case in particular if X {\displaystyle X} is Hausdorff). == See also == Borel hierarchy Borel isomorphism Baire set Cylindrical σ-algebra Descriptive set theory – Subfield of mathematical logic Polish space – Concept in topology == Notes == == References == William Arveson, An Invitation to C*-algebras, Springer-Verlag, 1981. (See Chapter 3 for an excellent exposition of Polish topology) Richard Dudley, Real Analysis and Probability. Wadsworth, Brooks and Cole, 1989 Halmos, Paul R. (1950). Measure theory. D. van Nostrand Co. See especially Sect. 51 "Borel sets and Baire sets". Halsey Royden, Real Analysis, Prentice Hall, 1988 Alexander S.
Kechris, Classical Descriptive Set Theory, Springer-Verlag, 1995 (Graduate texts in Math., vol. 156) == External links == "Borel set", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Formal definition of Borel Sets in the Mizar system, and the list of theorems Archived 2020-06-01 at the Wayback Machine that have been formally proved about it. Weisstein, Eric W. "Borel Set". MathWorld.
Wikipedia/Borel_Algebra
Wi-Fi calling, also called VoWiFi, refers to mobile phone voice calls and data that are made over IP networks using Wi-Fi, instead of the cell towers provided by cellular networks. Using this feature, compatible handsets are able to route regular cellular calls through a wireless LAN (Wi-Fi) network with broadband Internet, while seamlessly changing connections between the two where necessary. This feature makes use of the Generic Access Network (GAN) protocol, also known as Unlicensed Mobile Access (UMA). Voice over wireless LAN (VoWLAN), also voice over Wi‑Fi (VoWiFi), is the use of a wireless broadband network according to the IEEE 802.11 standards for the purpose of vocal conversation. In essence, it is voice over IP (VoIP) over a Wi-Fi network. Essentially, GAN/UMA allows cell phone packets to be forwarded to a network access point over the internet, rather than over-the-air using GSM/GPRS, UMTS or similar. A separate device known as a "GAN Controller" (GANC) receives this data from the Internet and feeds it into the phone network as if it were coming from an antenna on a tower. Calls can be placed from or received to the handset as if it were connected over-the-air directly to the GANC's point of presence, making the call invisible to the network as a whole. This can be useful in locations with poor cell coverage where some other form of internet access is available, especially at the home or office. The system offers seamless handoff, so the user can move from cell to Wi-Fi and back again with the same invisibility that the cell network offers when moving from tower to tower. Since the GAN system works over the internet, a UMA-capable handset can connect to its service provider from any location with internet access. This is particularly useful for travelers, who can connect to their provider's GANC and make calls into their home service area from anywhere in the world. 
This is subject to the quality of the internet connection, however, and may not work well over limited-bandwidth or long-latency connections. To improve quality of service (QoS) in the home or office, some providers also supply a specially programmed wireless access point that prioritizes UMA packets. Another benefit of Wi-Fi calling is that mobile calls can be made through the internet using the same native calling client; it does not require closed third-party Voice over IP (VoIP) services like WhatsApp or Skype, relying instead on the mobile cellular operator.

== Technology ==

The GAN protocol extends mobile voice, data and multimedia (IP Multimedia Subsystem/Session Initiation Protocol (IMS/SIP)) applications over IP networks. The latest-generation system is marketed as Wi-Fi Calling or VoWiFi by a number of handset manufacturers, including Apple and Samsung, a move that is being mirrored by carriers like T-Mobile US and Vodafone. The service is dependent on IMS, IPsec, IWLAN and ePDG.

=== Modes of operation ===

The original Release 6 GAN specification supported a 2G (A/Gb) connection from the GANC into the mobile core network (MSC/GSN). Today all commercial GAN dual-mode handset deployments are based on a 2G connection, and all GAN-enabled devices are dual-mode 2G/Wi-Fi. The specification, though, defined support for multimode handset operation, so 3G/2G/Wi-Fi handsets are supported in the standard. The first 3G/UMA devices were announced in the second half of 2008. A typical UMA/GAN handset has four modes of operation:

GERAN-only: uses only cellular networks
GERAN-preferred: uses cellular networks if available, otherwise the 802.11 radio
GAN-preferred: uses an 802.11 connection if an access point is in range, otherwise the cellular network
GAN-only: uses only the 802.11 connection

In all cases, the handset scans for GSM cells when it first turns on, to determine its location area.
This allows the carrier to route the call to the nearest GANC, set the correct rate plan, and comply with existing roaming agreements. At the end of 2007, the GAN specification was enhanced to support 3G (Iu) interfaces from the GANC to the mobile core network (MSC/GSN). This native 3G interface can be used for dual-mode handsets as well as for 3G femtocell service delivery. The GAN Release 8 documentation describes these new capabilities.

=== UMA/GAN beyond dual-mode ===

While UMA is nearly always associated with dual-mode GSM/Wi-Fi services, it is actually a 'generic' access network technology: it provides a general method for extending the services and applications in an operator's mobile core (voice, data, IMS) over IP and the public Internet. GAN defines a secure, managed connection from the mobile core (GANC) to different devices/access points over IP.

Femtocells: The GAN standard is currently used to provide a secure, managed, standardized interface from a femtocell to the mobile core network. Kineto, NEC and Motorola issued a joint proposal to the 3GPP work group studying femtocells (also known as 'Home Node Bs' or HNBs) proposing GAN as the basis for that standard.

Analog terminal adaptors (ATAs): T-Mobile US once offered a fixed-line VoIP service called @Home. Similar to Vonage, consumers could port their fixed phone number to T-Mobile, which then associated that number with an analog telephone adapter. The consumer plugged the ATA into a home broadband network and began receiving calls to the fixed number over the IP access network. The service was discontinued in 2010; however, earlier subscribers were "grandfathered" in.

Mobile VoIP client: Consumers have started to use telephony interfaces on their PCs. Applications offer a low-cost, convenient way to access telephony services while traveling. Now mobile operators can offer a similar service with a UMA-enabled mobile VoIP client.
The client provides a mirror interface to a subscriber's existing mobile service. For the mobile operator, services can now be extended to a computer, giving consumers another way to use their mobile service.

=== Design considerations ===

A Wi-Fi network that supports voice telephony must be carefully designed to maximize performance and to support the applicable call density. A voice network includes call gateways in addition to the Wi-Fi access points. The gateways provide call handling among wireless IP phones and connections to traditional telephone systems. A Wi-Fi network supporting voice applications must provide much stronger signal coverage than is needed for most data-only applications. In addition, the Wi-Fi network must provide seamless roaming between access points.

== History ==

UMA was developed by a group of operator and vendor companies. The initial specifications were published on 2 September 2004. The companies then contributed the specifications to the 3rd Generation Partnership Project (3GPP) as part of the 3GPP work item "Generic Access to A/Gb interfaces". On 8 April 2005, 3GPP approved specifications for Generic Access to A/Gb interfaces for 3GPP Release 6 and renamed the system to GAN. The term GAN is little known outside the 3GPP community, however, and the term UMA is more common in marketing.

== Advantages ==

For carriers: Instead of erecting expensive base stations to cover dead zones, GAN allows carriers to add coverage using low-cost 802.11 access points, giving subscribers at home very good coverage. In addition, GAN relieves congestion on the GSM or UMTS spectrum (networks can, through GAN, essentially piggyback on other infrastructure) by removing common types of calls and routing them to the operator via the relatively low-cost Internet. GAN makes sense for network operators that also offer Internet services.
Operators can leverage sales of one to promote the other, and can bill both to each customer. Some operators, such as T-Mobile, also run networks of 802.11 hotspots; they can leverage these hotspots to create more capacity and provide better coverage in populous areas. The carrier does not pay for much of the service: the party who provides the Internet and Wi-Fi connection pays for the connection to the Internet, effectively covering the expensive part of routing calls from the subscriber. However, carriers typically do not pass on these savings in the form of lower bills to customers who use Wi-Fi for calls.

For subscribers: Subscribers do not rely on their operator's ability to roll out towers and coverage, allowing them to fix some types of coverage dead zones (such as in the home or workplace) themselves. GAN often provides lower rates when roaming internationally. GAN is currently the only commercial technology available that combines GSM and 802.11 into a service that uses a single number, a single handset, a single set of services and a single phone directory for all calls. GAN can migrate between IP and cellular coverage and is thus seamless; in contrast, calls via third-party VoIP plus a data phone are dropped when leaving high-volume data coverage.

== Disadvantages ==

Subscribers must upgrade to Wi-Fi/UMA-enabled handsets to take advantage of the service.
Calls may be more prone to disconnect when the handset transitions from Wi-Fi to the standard wireless service and vice versa (because the handset moved out of or back into Wi-Fi range). How much of a problem this is may vary based on which handset is used.
UMA may use a different frequency that is more prone to some types of interference.
Some setup may be required to provide connection settings (such as authentication details) before the advantages can be experienced. This may take time for subscribers and require additional support to be provided.
The costs of support may fall on more than the wireless phone company: network administrators may be asked to help a user enter appropriate settings into a phone that the network administrator may know little about.
Phones that support multiple signals (both UMA/Wi-Fi and the type of signal used by the provider's towers) may be more expensive to manufacture, due to the additional circuitry/components required.
The service uses the resources of the network providing the Wi-Fi signal (and of any upstream network that carries the traffic); bandwidth is consumed.
Some types of network traffic (such as DNS and IPsec-encrypted packets) need to be permitted by the network, so a decision to support this may impose some requirements on the network's security (firewall) rules.
Using GAN/UMA on a mobile requires the Wi-Fi module to be enabled. This in turn drains the battery faster and reduces both the talk time and the standby time compared to disabling GAN/UMA (and in turn Wi-Fi).
UMA does not work with cellular-based E911 that uses GPS/Assisted GPS. Usually this is addressed by having the subscriber register a fixed primary address with the carrier via mobile settings, a carrier-provided app, or a website.
No QoS guarantees: the Internet (and by extension most home networks) operates on a best-effort delivery model, so network congestion can interfere with call quality. This is usually a problem on the subscriber's home network, where gaming, high-definition video, or P2P file sharing competes for available bandwidth. Some network equipment can deal with this by enabling QoS for VoIP protocols; however, this is complicated by the fact that most UMA traffic runs over IPsec over UDP, which makes the underlying protocols (IMS/SIP) opaque from a network perspective. Handsets can mitigate this by internally prioritizing the IPsec traffic into a different WMM class (such as AC_VO).
This also requires the rest of the subscriber's network (if it is not wholly integrated, as in most home Wi-Fi routers/access points) to recognize such traffic and prioritize it over other bulk or latency-sensitive traffic.

== Service deployments ==

The first service launch was by BT, with BT Fusion, in the autumn of 2005. The service is based on pre-3GPP GAN standard technology. Initially, BT Fusion used UMA over Bluetooth with phones from Motorola. From January 2007, it used UMA over 802.11 with phones from Nokia, Motorola and Samsung and was branded as a "Wi-Fi mobile service". BT has since discontinued the service. On August 28, 2006, TeliaSonera was the first to launch an 802.11-based UMA service, called "Home Free". The service started in Denmark but is no longer offered. On September 25, 2006, Orange announced its "Unik service", also known as Signal Boost in the UK; however, this service is no longer available to new customers in the UK. The announcement, the largest to date, covered more than 60 million of Orange's mobile subscribers in the UK, France, Poland, Spain and the Netherlands. Cincinnati Bell announced the first UMA deployment in the United States. The service, originally called CB Home Run, allows users to transfer seamlessly from the Cincinnati Bell cellular network to a home wireless network or to Cincinnati Bell's WiFi HotSpots. It has since been rebranded as Fusion WiFi. This was followed shortly by T-Mobile US on June 27, 2007. T-Mobile's service, originally named "Hotspot Calling" and rebranded to "Wi-Fi Calling" in 2009, allows users to seamlessly transfer from the T-Mobile cellular network to an 802.11x wireless network or T-Mobile HotSpot in the United States. In Canada, both Fido and Rogers Wireless launched UMA plans under the names UNO and Rogers Home Calling Zone (later rebranded Talkspot, and subsequently rebranded again as Wi-Fi Calling), respectively, on May 6, 2008. In Australia, GAN has been implemented by Vodafone, Optus and Telstra.
Since 10 April 2015, Wi-Fi Calling has been available for customers of EE in the UK, initially on the Nokia Lumia 640, Samsung Galaxy S6 and Samsung Galaxy S6 Edge handsets. In March 2016, Vodafone Netherlands launched Wi-Fi Calling support along with VoLTE. Since the autumn of 2016, Wi-Fi Calling / Voice over Wi-Fi has been available for customers of Telenor Denmark, including the ability to hand over to and from the 4G (VoLTE) network; this is available for several Samsung and Apple handsets. AT&T and Verizon announced plans to launch Wi-Fi calling in 2015. The industry organisation UMA Today tracks operator activities and handset development. In September 2015, South African cellular network Cell C launched Wi-Fi Calling on its South African network. In November 2024, Belgian cellular network Voo launched Wi-Fi Calling on its Belgian network.

== Similar technologies ==

GAN/UMA is not the first system to allow the use of unlicensed spectrum to connect handsets to a GSM network. The GIP/IWP standard for DECT provides similar functionality, but requires a more direct connection to the GSM network from the base station. While dual-mode DECT/GSM phones have appeared, these have generally been functionally cordless phones with a GSM handset built in (or vice versa, depending on your point of view), rather than phones implementing DECT/GIP, due to the lack of suitable infrastructure for hooking DECT base stations supporting GIP to GSM networks on an ad hoc basis. GAN/UMA's ability to use the Internet to provide the "last mile" connection to the GSM network solves the major issue that DECT/GIP has faced. Had GIP emerged as a practical standard, the low power usage of DECT technology when idle would have been an advantage compared to GAN. There is nothing preventing an operator from deploying micro- and pico-cells that use towers that connect with the home network over the Internet.
Several companies have developed femtocell systems that do precisely that, broadcasting a "real" GSM or UMTS signal and bypassing the need for special handsets that require 802.11 technology. In theory, such systems are more universal and again require lower power than 802.11, but their legality will vary depending on the jurisdiction, and they require the cooperation of the operator. Further, users may be charged at higher cell phone rates, even though they are paying for the DSL or other network that ultimately carries their traffic; in contrast, GAN/UMA providers charge reduced rates when making calls off the provider's cellular network.

== Devices ==

Apple – iPhone 5C, iPhone 5S, and newer devices with iOS 8 or later. Some carriers may not support all models or all iOS versions.
BlackBerry – Curve 8320, 8520, 8820, Curve 8900, Pearl 8120 and 8220, Bold 9700, Bold 9780, Torch 9800, BlackBerry 9105, 9300, BlackBerry Bold 9900 with OS 7.1
Google – Nexus 5X, Nexus 6P, all Google Pixel phones
HTC – Touch 3G, T-Mobile Shadow 2009, T-Mobile myTouch 4G (sometimes called the myTouch HD), T-Mobile G2 (as of build 1.22.531.8 OTA update), Desire S, Wildfire S, Sensation 4G, Amaze 4G, HTC One, HTC One S
Huawei – U8651T
LG – KE 520, KF 757 (3G), GT505, Optimus One, LG Optimus Me
Motorola – DEFY, Z6w
Nokia – 6300i, 6301, 6301b, 6086, 6136, 7510, E73 Mode, E5, C7 Astound, Lumia 521, Lumia 925, Nokia 1 and other low-cost handsets once Wi-Fi calling is enabled, if necessary via a free third-party app
Sagem – my419X
Samsung – SGH-T339, SGH-T409, SGH-T709, SGH-T739 (Katalyst), T336, P250, P260, P270 (3G), T-Mobile's Galaxy S SGH-T959, Galaxy S II to S5 (VoLTE only), Galaxy S6 and up (VoLTE + Wi-Fi Calling in selected markets)
Sony Ericsson – G705u (3G)

=== Routers ===

Linksys – WRT54G series (WRT54G-TM, WRTU54G-TM, and WRTU54GV2-TM)
Westell – UltraVoice UMA Terminal Adapter with Router

=== Operating Systems ===

Android – Starting with Android Oreo, Google has embedded a "Carrier Services" application to provide IMS functionality to the base OS. Other vendors may implement their own IMS application.

== See also ==

IP Multimedia Subsystem
IEEE 802.21
IEEE 802.11r
IEEE 802.11u
MoIP (disambiguation)
Voice over LTE
Local area network
Mobile VoIP

== References ==

== External links ==

3GPP GAN Specification 43.318
3GPP GAN Specification 44.318
Wikipedia/Generic_Access_Network
In combinatorial mathematics and extremal set theory, the Sauer–Shelah lemma states that every family of sets with small VC dimension consists of a small number of sets. It is named after Norbert Sauer and Saharon Shelah, who published it independently of each other in 1972. The same result was also published slightly earlier, and again independently, by Vladimir Vapnik and Alexey Chervonenkis, after whom the VC dimension is named. In his paper containing the lemma, Shelah gives credit also to Micha Perles, and for this reason the lemma has also been called the Perles–Sauer–Shelah lemma and the Sauer–Shelah–Perles lemma. Buzaglo et al. call this lemma "one of the most fundamental results on VC-dimension", and it has applications in many areas. Sauer's motivation was in the combinatorics of set systems, while Shelah's was in model theory and that of Vapnik and Chervonenkis was in statistics. It has also been applied in discrete geometry and graph theory.

== Definitions and statement ==

If F = {S_1, S_2, …} is a family of sets and T is a set, then T is said to be shattered by F if every subset of T (including the empty set and T itself) can be obtained as the intersection T ∩ S_i of T with some set S_i in the family. The VC dimension of F is the largest cardinality of a set shattered by F.
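These definitions can be checked directly by brute force on small families. The sketch below is an illustration added here, not part of the original article; the `intervals` example family (all integer intervals inside {1, 2, 3, 4}) is a standard toy case with VC dimension 2:

```python
from itertools import chain, combinations

def powerset(t):
    """All subsets of t, including the empty set and t itself."""
    t = list(t)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(t, r) for r in range(len(t) + 1))]

def is_shattered(t, family):
    """T is shattered by F if every subset of T equals T ∩ S for some S in F."""
    traces = {frozenset(t) & frozenset(s) for s in family}
    return all(sub in traces for sub in powerset(t))

def vc_dimension(family):
    """Largest cardinality of a set shattered by the family."""
    universe = set().union(*family)
    best = 0
    for t in powerset(universe):
        if len(t) > best and is_shattered(t, family):
            best = len(t)
    return best

# Intervals on {1,2,3,4}: any 2-point set is shattered, but no 3-point set is,
# since hitting the outer two points while missing the middle one is impossible.
intervals = [set(range(a, b)) for a in range(1, 6) for b in range(a, 6)]
print(vc_dimension(intervals))  # 2
```

This brute-force search is exponential in the size of the universe, so it is only practical for tiny examples, but it mirrors the definitions exactly.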
In terms of these definitions, the Sauer–Shelah lemma states that if the VC dimension of F is k, and the union of F has n elements, then F can consist of at most ∑_{i=0}^{k} C(n, i) = O(n^k) sets, as expressed using big O notation, where C(n, i) denotes a binomial coefficient. Equivalently, if the number of sets in the family, |F|, obeys the inequality |F| > ∑_{i=0}^{k−1} C(n, i), then F shatters a set of size k. The bound of the lemma is tight: let the family F be composed of all subsets of {1, 2, …, n} with size less than k. Then the number of sets in F is exactly ∑_{i=0}^{k−1} C(n, i), but F does not shatter any set of size k.

== The number of shattered sets ==

A strengthening of the Sauer–Shelah lemma, due to Pajor (1985), states that every finite set family F shatters at least |F| sets. This immediately implies the Sauer–Shelah lemma, because only ∑_{i=0}^{k−1} C(n, i) of the subsets of an n-item universe have cardinality less than k. Thus, when |F| > ∑_{i=0}^{k−1} C(n, i), there are not enough small sets to be shattered, so one of the shattered sets must have cardinality at least k. For a restricted type of shattered set, called an order-shattered set, the number of shattered sets always equals the cardinality of the set family.
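The bound and the tight example above can be verified numerically for small n and k; the following is a brute-force sketch added for illustration, not part of the original article:

```python
from itertools import combinations
from math import comb

def sauer_bound(n, k):
    """Sum of C(n, i) for i = 0..k: the lemma's upper bound on |F|
    for a family of VC dimension k over an n-element universe."""
    return sum(comb(n, i) for i in range(k + 1))

# Tight example: all subsets of {0,...,n-1} with size less than k.
# It has exactly sum_{i=0}^{k-1} C(n, i) sets, yet shatters no k-set:
# a k-set T is never a trace T ∩ S, since every S has fewer than k elements.
n, k = 6, 3
family = [frozenset(c) for r in range(k) for c in combinations(range(n), r)]
print(len(family), sauer_bound(n, k - 1))  # 22 22
```

For n = 6 and k = 3 the family has C(6,0) + C(6,1) + C(6,2) = 1 + 6 + 15 = 22 sets, meeting the bound with equality.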
== Proof ==

Pajor's variant of the Sauer–Shelah lemma may be proved by mathematical induction; the proof has variously been credited to Noga Alon or to Ron Aharoni and Ron Holzman.

Base: Every family of only one set shatters the empty set.

Step: Assume the lemma is true for all families of size less than |F|, and let F be a family of two or more sets. Let x be an element that belongs to some but not all of the sets in F. Split F into two subfamilies: the sets that contain x and the sets that do not contain x. By the induction assumption, these two subfamilies shatter two collections of sets whose sizes add to at least |F|. None of these shattered sets contain x, since a set that contains x cannot be shattered by a family in which all sets contain x or all sets do not contain x. Some of the shattered sets may be shattered by both subfamilies. When a set S is shattered by only one of the two subfamilies, it contributes one unit both to the number of shattered sets of that subfamily and to the number of shattered sets of F. When a set S is shattered by both subfamilies, both S and S ∪ {x} are shattered by F, so S contributes two units to the number of shattered sets of the subfamilies and of F. Therefore, the number of shattered sets of F is at least equal to the number shattered by the two subfamilies of F, which is at least |F|.
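Pajor's inequality, which the induction above establishes, can likewise be checked exhaustively on small examples. The following is a brute-force sketch added for illustration (a set T is shattered exactly when its traces T ∩ S take all 2^|T| possible values):

```python
from itertools import chain, combinations

def shattered_sets(family):
    """All sets shattered by the family (Pajor: at least |F| of them)."""
    family = [frozenset(s) for s in family]
    universe = sorted(set().union(*family))
    subsets = [frozenset(c) for c in chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1))]
    result = []
    for t in subsets:
        traces = {t & s for s in family}
        if len(traces) == 2 ** len(t):   # every subset of t appears as a trace
            result.append(t)
    return result

# A family of 5 distinct sets must shatter at least 5 sets.
fam = [{1}, {2}, {1, 2}, {2, 3}, {1, 3}]
sh = shattered_sets(fam)
assert len(sh) >= len(fam)
```

For this particular family the shattered sets are ∅, {1}, {2}, {3}, {1,3} and {2,3}: six sets, one more than the guaranteed minimum of five.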
A different proof of the Sauer–Shelah lemma in its original form, by Péter Frankl and János Pach, is based on linear algebra and the inclusion–exclusion principle. This proof extends to other settings such as families of vector spaces and, more generally, geometric lattices.

== Applications ==

The original application of the lemma, by Vapnik and Chervonenkis, was in showing that every probability distribution can be approximated (with respect to a family of events of a given VC dimension) by a finite set of sample points whose cardinality depends only on the VC dimension of the family of events. In this context, there are two important notions of approximation, both parameterized by a number ε. A set S of samples, together with a probability distribution on S, is said to be an ε-approximation of the original distribution if the probability of each event with respect to S differs from its original probability by at most ε. A set S of (unweighted) samples is said to be an ε-net if every event with probability at least ε includes at least one point of S. An ε-approximation must also be an ε-net, but not necessarily vice versa.

Vapnik and Chervonenkis used the lemma to show that set systems of VC dimension d always have ε-approximations of cardinality O((d/ε²) log(d/ε)). Later authors, including Haussler & Welzl (1987) and Komlós, Pach & Woeginger (1992), similarly showed that there always exist ε-nets of cardinality O((d/ε) log(1/ε)), and more precisely of cardinality at most (d/ε) ln(1/ε) + (2d/ε) ln ln(1/ε) + 6d/ε. The main idea of the proof of the existence of small ε-nets is to choose a random sample x of cardinality O((d/ε) log(1/ε)) and a second independent random sample y of cardinality O((d/ε) log²(1/ε)), and to bound the probability that x is missed by some large event E by the probability that x is missed and simultaneously the intersection of y with E is larger than its median value. For any particular E, the probability that x is missed while y is larger than its median is very small, and the Sauer–Shelah lemma (applied to x ∪ y) shows that only a small number of distinct events E need to be considered, so by the union bound, with nonzero probability, x is an ε-net.
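The explicit ε-net bound quoted above is easy to evaluate numerically; the small helper below is an illustration added here, not code from any of the cited papers:

```python
from math import log

def eps_net_bound(d, eps):
    """Upper bound (d/ε)ln(1/ε) + (2d/ε)ln ln(1/ε) + 6d/ε on ε-net size
    for VC dimension d; meaningful only when ln(1/ε) > 1, i.e. ε < 1/e."""
    return ((d / eps) * log(1 / eps)
            + (2 * d / eps) * log(log(1 / eps))
            + 6 * d / eps)

# For VC dimension 2 and ε = 0.01, a few thousand samples suffice,
# regardless of how large the underlying universe is.
print(round(eps_net_bound(2, 0.01)))  # 2732
```

The striking feature is that the bound depends only on d and ε, not on the number of points in the ground set.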
In turn, ε-nets and ε-approximations, and the likelihood that a random sample of large enough cardinality has these properties, have important applications in machine learning, in the area of probably approximately correct learning. In computational geometry, they have been applied to range searching, derandomization, and approximation algorithms. Kozma & Moran (2013) use generalizations of the Sauer–Shelah lemma to prove results in graph theory, such as that the number of strong orientations of a given graph is sandwiched between its numbers of connected and 2-edge-connected subgraphs.

== See also ==

Growth function

== References ==
Wikipedia/Sauer–Shelah_lemma
Algorithmic accountability refers to the allocation of responsibility for the consequences of real-world actions influenced by algorithms used in decision-making processes. Ideally, algorithms should be designed to eliminate bias from their decision-making outcomes. This means they ought to evaluate only relevant characteristics of the input data, avoiding distinctions based on attributes that are generally inappropriate in social contexts, such as an individual's ethnicity in legal judgments. However, adherence to this principle is not always guaranteed, and there are instances where individuals may be adversely affected by algorithmic decisions. Responsibility for any harm resulting from a machine's decision may lie with the algorithm itself or with the individuals who designed it, particularly if the decision resulted from bias or flawed data analysis inherent in the algorithm's design.

== Algorithm usage ==

Algorithms are widely utilized across various sectors of society that incorporate computational techniques in their control systems. These applications span numerous industries, including but not limited to medical, transportation, and payment services. In these contexts, algorithms perform functions such as:

Approving or denying credit card applications;
Counting votes in elections;
Approving or denying immigrant visas;
Determining which taxpayers will be audited on their income taxes;
Managing systems that control self-driving cars on a highway;
Scoring individuals as potential criminals for use in legal proceedings.

However, the implementation of these algorithms can be complex and opaque. Generally, algorithms function as "black boxes," meaning that the specific processes an input undergoes during execution are often not transparent, with users typically only seeing the resulting output. This lack of transparency raises concerns about potential biases within the algorithms, as the parameters influencing decision-making may not be well understood.
The outputs generated can lead to perceptions of bias, especially if individuals in similar circumstances receive different results. According to Nicholas Diakopoulos:

But these algorithms can make mistakes. They have biases. Yet they sit in opaque black boxes, their inner workings, their inner "thoughts" hidden behind layers of complexity. We need to get inside that black box, to understand how they may be exerting power on us, and to understand where they might be making unjust mistakes.

== Wisconsin Supreme Court case ==

Algorithms are prevalent across various fields and significantly influence decisions that affect the population at large. Their underlying structures and parameters often remain unknown to those impacted by their outcomes. A notable case illustrating this issue is a ruling by the Wisconsin Supreme Court concerning "risk assessment" algorithms used in criminal justice. The court determined that scores generated by such algorithms, which analyze multiple parameters from individuals, should not be used as a determining factor in sentencing an accused individual. Furthermore, the court mandated that all reports submitted to judges must include information regarding the accuracy of the algorithm used to compute these scores. This ruling is regarded as a noteworthy development in how society should manage software that makes consequential decisions, highlighting the importance of reliability, particularly in complex settings like the legal system. The use of algorithms in these contexts necessitates a high degree of impartiality in processing input data. However, experts note that there is still considerable work to be done to ensure the accuracy of algorithmic results. Questions about the transparency of data processing continue to arise, which raises issues regarding the appropriateness of the algorithms and the intentions of their designers.
== Controversies ==

A notable instance of potential algorithmic bias is highlighted in an article by The Washington Post regarding the ride-hailing service Uber. An analysis of collected data revealed that estimated waiting times for users varied based on the neighborhoods in which they resided. Key factors influencing these discrepancies included the predominant ethnicity and average income of the area. Specifically, neighborhoods with a majority white population and higher economic status tended to have shorter waiting times, while those with more diverse ethnic compositions and lower average incomes experienced longer waits. It is important to clarify that this observation reflects a correlation identified in the data, rather than a definitive cause-and-effect relationship; no value judgments are made regarding the behavior of the Uber app in these cases.

In a separate analysis, published in the "Direito Digit@l" column on the Migalhas website, authors Coriolano Almeida Camargo and Marcelo Crespo examine the use of algorithms in decision-making contexts traditionally handled by humans. They discuss the challenges in assessing whether machine-generated decisions are fair and the potential flaws that can arise in this validation process.

The issue transcends and will transcend the concern with which data is collected from consumers to the question of how this data is used by algorithms. Despite the existence of some consumer protection regulations, there is no effective mechanism available to consumers that tells them, for example, whether they have been automatically discriminated against by being denied loans or jobs.

The rapid advancement of technology has introduced numerous innovations to society, including the development of autonomous vehicles. These vehicles rely on algorithms embedded within their systems to manage navigation and respond to various driving conditions.
Autonomous systems are designed to collect data and evaluate their surroundings in real time, allowing them to make decisions that simulate the actions of a human driver. In their analysis, Camargo and Crespo address potential issues associated with the algorithms used in autonomous vehicles. They particularly emphasize the challenges related to decision-making during critical moments, highlighting the complexities and ethical considerations involved in programming such systems to ensure safety and fairness.

The technological landscape is rapidly changing with the advent of very powerful computers and algorithms that are moving toward the impressive development of artificial intelligence. We have no doubt that artificial intelligence will revolutionize the provision of services and also industry. The problem is that ethical issues urgently need to be thought through and discussed. Or are we simply going to allow machines to judge us in court cases? Or that they decide who should live or die in accident situations that could be intervened by some technological equipment, such as autonomous cars?

On the TechCrunch website, Hemant Taneja wrote:

Concern about "black box" algorithms that govern our lives has been spreading. New York University's Information Law Institute hosted a conference on algorithmic accountability, noting: "Scholars, stakeholders, and policymakers question the adequacy of existing mechanisms governing algorithmic decision-making and grapple with new challenges presented by the rise of algorithmic power in terms of transparency, fairness, and equal treatment." Yale Law School's Information Society Project is studying this, too. "Algorithmic modeling may be biased or limited, and the uses of algorithms are still opaque in many critical sectors," the group concluded.

== Possible solutions ==

Discussions among experts have sought viable solutions to understand the operations of algorithms, often referred to as "black boxes."
It is generally proposed that companies responsible for developing and implementing these algorithms should ensure their reliability by disclosing the internal processes of their systems. Hemant Taneja, writing for TechCrunch, emphasizes that major technology companies, such as Google, Amazon, and Uber, must actively incorporate algorithmic accountability into their operations. He suggests that these companies should transparently monitor their own systems to avoid stringent regulatory measures. One potential approach is the introduction of regulations in the tech sector to enforce oversight of algorithmic processes. However, such regulations could significantly impact software developers and the industry as a whole. It may be more beneficial for companies to voluntarily disclose the details of their algorithms and decision-making parameters, which could enhance the trustworthiness of their solutions. Another avenue discussed is the possibility of self-regulation by the companies that create these algorithms, allowing them to take proactive steps in ensuring accountability and transparency in their operations. On the TechCrunch website, Hemant Taneja wrote: There’s another benefit — perhaps a huge one — to software-defined regulation. It will also show us a path to a more efficient government. The world’s legal logic and regulations can be coded into software and smart sensors can offer real-time monitoring of everything from air and water quality, traffic flows and queues at the DMV. Regulators define the rules, technologists create the software to implement them and then AI and ML help refine iterations of policies going forward. This should lead to much more efficient, effective governments at the local, national and global levels.
== See also == Algorithmic transparency Artificial intelligence and elections – Use and impact of AI on political elections Big data ethics Regulation of algorithms == References == == Bibliography == Kroll, Joshua A.; Huey, Joanna; Barocas, Solon; Felten, Edward W.; Reidenberg, Joel R.; Robinson, David G.; Yu, Harlan (2016). Accountable Algorithms. University of Pennsylvania Law Review, Vol. 165. Fordham Law Legal Studies Research Paper No. 2765268.
Wikipedia/Algorithmic_accountability
Codeforces (Russian: Коудфорсес) is a website that hosts competitive programming contests. It is maintained by a group of competitive programmers from ITMO University led by Mikhail Mirzayanov. Since 2013, Codeforces claims to surpass TopCoder in terms of active contestants. As of 2019, it has over 600,000 registered users. On its 15th anniversary, Codeforces had a total of 1,692,402 users with at least one submission. Codeforces, along with other similar websites, is used by some sport programmers, like Gennady Korotkevich, Petr Mitrichev, Benjamin Qi and Makoto Soejima, and by other programmers interested in furthering their careers. == Overview == Codeforces is a platform where people generally practice competitive programming, and it offers the following features: short (2-hour) contests, called "Codeforces Rounds", held about once a week; educational contests (2-2.5 hours, with a 12-hour hacking period, which was 24 hours before Round 45), held 2-3 times per month; challenging/hacking other contestants' solutions; solving problems from previous contests for training purposes; the "Polygon" feature for creating and testing problems; and social networking through internal public blogs. == Rating system == Contestants are rated by a system similar to the Elo rating system. There are usually no prizes for winners, though several times a year special contests are held, in which top-performing contestants receive T-shirts. Some bigger contests are hosted on the Codeforces platform, among them "The Lyft Level 5 Challenge 2018", provided by Lyft, and "Microsoft Q# Coding Contest — Summer 2018", provided by Microsoft. Contestants are divided into ranks based on their ratings. Since May 2018, users with ratings between 1900 and 2099 can be rated in both Div. 1 and Div. 2 contests. At the same time, Div. 3 was created for users rated below 1600. There is also a Div. 4, which is for users rated below 1400.
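The rating and division rules above can be illustrated with a short sketch. This is not Codeforces' actual formula (the site reportedly uses a modified Elo-like system that ranks each contestant against the whole field); it shows only the classic two-player Elo update and a simplified reading of the division cutoffs described in the text:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that a player rated r_a beats one rated r_b (classic Elo)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0) -> float:
    """Return A's new rating after a result: 1.0 win, 0.5 draw, 0.0 loss."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

def eligible_divisions(rating: int) -> list[int]:
    """Division eligibility per the cutoffs described above (a simplification)."""
    divs = []
    if rating >= 1900:
        divs.append(1)   # Div. 1 for higher-rated users
    if rating <= 2099:
        divs.append(2)   # since May 2018, the 1900-2099 band overlaps Div. 1 and 2
    if rating < 1600:
        divs.append(3)   # Div. 3 was created for users rated below 1600
    if rating < 1400:
        divs.append(4)   # Div. 4 is for users rated below 1400
    return divs
```

Two evenly matched players each have an expected score of 0.5, so with k = 32 a win moves the winner up by 16 points; a contest-wide system generalizes this by comparing each contestant's expected rank with their actual rank.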
== History == Codeforces was created by a group of competitive programmers from Saratov State University led by Mike Mirzayanov. It was originally created for those interested in solving tasks and taking part in competitions. The first Codeforces Round was held on February 19, 2010 with 175 participants. As of the end of August 2022, over 800 rounds were held, with over 9000 registered competitors per round on average. Before 2012, Codeforces Rounds were titled "Codeforces Beta Rounds" to indicate that the system was still under development. == Academic use == Codeforces is recommended by many universities. According to Daniel Sleator, professor of Computer Science at Carnegie Mellon University, competitive programming is valuable in computer science education, because competitors learn to adapt classic algorithms to new problems, thereby improving their understanding of algorithmic concepts. He has used Codeforces problems in his class, 15-295: Competition Programming and Problem Solving. At the National University of Singapore, Codeforces rating is also used as an entrance qualifying criterion for registering for a 4-unit course, CS3233 Competitive Programming, as students have to achieve a rating of at least 1559 to be able to register for the course. == See also == CodeChef CodeFights Facebook Hacker Cup Google Code Jam HackerRank International Collegiate Programming Contest Online judge SPOJ Topcoder UVa Online Judge LeetCode Competitive programming == References == == External sources == Official website
Wikipedia/Codeforces
Algorithms of Oppression: How Search Engines Reinforce Racism is a 2018 book by Safiya Umoja Noble in the fields of information science, machine learning, and human-computer interaction. == Background == Noble earned an undergraduate degree in sociology from California State University, Fresno in the 1990s, then worked in advertising and marketing for fifteen years before going to the University of Illinois Urbana-Champaign for a Master of Library and Information Science degree in the early 2000s. The book's first inspiration came in 2011, when Noble Googled the phrase "black girls" and saw results for pornography on the first page. Noble's doctoral thesis, completed in 2012, was titled Searching for Black Girls: Old Traditions in New Media. At this time, Noble thought of the title "Algorithms of Oppression" for the eventual book. Noble became an assistant professor at University of California, Los Angeles in 2014. In 2017, she published an article on racist and sexist bias in search engines in The Chronicle of Higher Education. The book was published by New York University Press on February 20, 2018. By this time, changes to Google's algorithm had changed the most common results for a search of "black girls," though the underlying biases remain influential. == Overview == Algorithms of Oppression addresses the relationship between search engines and discriminatory biases. She takes a Black intersectional feminist approach. Intersectional feminism takes into account the experiences of women of different races and sexualities when discussing the oppression of women. Noble argues that search algorithms are racist and perpetuate societal problems because they reflect the negative biases that exist in society and the people who create them. 
Noble rejects the idea that search engines are inherently neutral, explaining how algorithms in search engines privilege whiteness by depicting positive cues when key words like “white” are searched as opposed to “Asian,” “Hispanic,” or “Black.” Her main example surrounds the search results of "Black girls" versus "white girls" and the biases that are depicted in the results. == Synopsis == Chapter 1 explores how Google search's auto suggestion feature is demoralizing, discussing example searches for terms like "black girls" (which returned pornography) and "Jew" (which returned anti-Semitic pages). Noble coins the term algorithmic oppression to describe data failures specific to people of color, women, and other marginalized groups. She discusses how Google could use human curation to eliminate slurs or inappropriate images from the first page of results, and criticizes Google's policy that unless pages are unlawful, Google will allow its algorithm to act without human curation. She identifies AdWords as a hypocritical use of curation to promote commercial interests, since it allows advertisers to pay for controversial or less-relevant topics to appear above the algorithm's selections. Chapter 2 examines Google's claims that they are not responsible for the content of search results, instead blaming the content creators and searchers. Noble highlights aspects of the algorithm which normalize whiteness and men. She argues that Google hides behind their algorithm, while reinforcing social inequalities and stereotypes for Black, Latina, and Asian women. Chapter 3 discusses how Google's search engine combines multiple sources to create threatening narratives about minorities. She explains a case study where she searched “black on white crimes” on Google. Noble highlights that the sources and information that were found after the search pointed to conservative sources that skewed information. 
These sources displayed racist and anti-black information from white supremacist sources. Ultimately, she believes this readily-available, false information fueled the actions of white supremacist Dylann Roof, who committed a massacre. Chapter 4 examines examples of women being shamed due to their activity in the porn industry, regardless of whether it was consensual. She critiques the internet's ability to influence one's future and compares U.S. privacy laws to those of the European Union, which provides citizens with “the right to forget or be forgotten.” She argues that these breaches of privacy disproportionately affect women and people of color. Chapter 5 moves away from Google and onto other information sources deemed credible and neutral. Noble says that prominent libraries, including the Library of Congress, reinforce hegemonies such as whiteness, heteronormativity, and patriarchy. As an example, she discusses a two-year effort to change the Library of Congress's catalog terminology from "illegal aliens" to "noncitizen" or "unauthorised immigrants". Noble argues all digital search engines reinforce discriminatory biases, highlighting how interconnected technology and society are. Chapter 6 discusses possible solutions for the problem of algorithmic bias. She insists that governments and corporations bear the most responsibility to reform their systemic issues, and rejects the neoliberal argument that algorithmic biases will disappear if more women and racial minorities enter the industry as software engineers. She critiques a mindset she calls “big-data optimism,” or the notion that large institutions solve inequalities. She argues that policies enacted by local and federal governments could reduce Google's “information monopoly” and regulate the ways in which search engines filter their results.
To illustrate this point, she uses the example of a Black hairdresser whose business faces setbacks because the review site Yelp has used biased advertising practices and searching strategies against her. She closes the chapter by calling upon the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) to “regulate decency,” or to limit the amount of racist, homophobic, or prejudiced rhetoric on the Internet. She urges the public to shy away from “colorblind” ideologies toward race, arguing that these erase the struggles faced by racial minorities. The conclusion synthesizes the previous chapters, and challenges the idea that the internet is a fully democratic or post-racial environment. == Critical reception == Critical reception for Algorithms of Oppression has been largely positive. In the Los Angeles Review of Books, Emily Drabinski writes, "What emerges from these pages is the sense that Google’s algorithms of oppression comprise just one of the hidden infrastructures that govern our daily lives, and that the others are likely just as hard-coded with white supremacy and misogyny as the one that Noble explores." In PopMatters, Hans Rollman writes that Algorithms of Oppression "demonstrate[s] that search engines, and in particular Google, are not simply imperfect machines, but systems designed by humans in ways that replicate the power structures of the western countries where they are built, complete with all the sexism and racism that are built into those structures." In Booklist, reviewer Lesley Williams states, "Noble’s study should prompt some soul-searching about our reliance on commercial search engines and about digital social equity." In early February 2018, Algorithms of Oppression received press attention when the official Twitter account for the Institute of Electrical and Electronics Engineers expressed criticism of the book, saying that the results of a Google search suggested in its blurb did not match Noble's predictions.
IEEE's outreach historian, Alexander Magoun, later revealed that he had not read the book, and issued an apology. == See also == Algorithmic bias Techlash == References == == External links == Algorithms of Oppression: How Search Engines Reinforce Racism
Wikipedia/Algorithms_of_Oppression
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers. There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (humanities, arts, and social sciences), rebranded in 2020 as SHAPE (social sciences, humanities and the arts for people and the economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM. == Terminology == === History === In the early 1990s the acronym STEM was used by a variety of educators. Beverly Schwartz developed a STEM mentoring program in the Capital District of New York State, and was using the acronym as early as November, 1991. Charles E. Vela was the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE) and started a summer program for talented under-represented students in the Washington, D.C. area called the STEM Institute. 
Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering education. The NSF had previously referred to the fields as SMET; it was in this manner that the NSF was first introduced to the acronym STEM. One of the first NSF projects to use the acronym was STEMTEC, the Science, Technology, Engineering, and Math Teacher Education Collaborative at the University of Massachusetts Amherst, which was founded in 1998. In 2001, at the urging of Dr. Peter Faletra, the Director of Workforce Development for Teachers and Scientists at the Office of Science, the acronym was adopted by Rita Colwell and other science administrators in the National Science Foundation (NSF). The Office of Science was also an early adopter of the STEM acronym. === Other variations === eSTEM (environmental STEM) GEMS (girls in engineering, math, and science); used for programs to encourage women to enter these fields. MINT (mathematics, informatics, natural sciences, and technology) SHTEAM (science, humanities, technology, engineering, arts, and mathematics) SMET (science, mathematics, engineering, and technology); previous name STEAM (science, technology, engineering, arts, and mathematics) STEAM (science, technology, engineering, agriculture, and mathematics); adds agriculture STEAM (science, technology, engineering, and applied mathematics); has more focus on applied mathematics STEEM (science, technology, engineering, economics, and mathematics); adds economics as a field STEMIE (science, technology, engineering, mathematics, invention, and entrepreneurship); adds inventing and entrepreneurship as a means to apply STEM to real-world problem-solving and markets.
STEMM (science, technology, engineering, mathematics, and medicine) STM (scientific, technical, and mathematics or science, technology, and medicine) STREAM (science, technology, robotics, engineering, arts, and mathematics); adds robotics and arts as fields STREAM (science, technology, reading, engineering, arts, and mathematics); adds reading and arts STREAM (science, technology, recreation, engineering, arts, and mathematics); adds recreation and arts == Geographic distribution == By the mid-2000s, China surpassed the United States in the number of PhDs awarded and is expected to produce 77,000 PhDs in 2025, compared to 40,000 in the US. == By country == === Australia === The Australian Curriculum, Assessment, and Reporting Authority 2015 report entitled, National STEM School Education Strategy, stated that "A renewed national focus on STEM in school education is critical to ensuring that all young Australians are equipped with the necessary STEM skills and knowledge that they need to succeed." Its goals were to: "Ensure all students finish school with strong foundational knowledge in STEM and related skills" "Ensure that students are inspired to take on more challenging STEM subjects" Events and programs meant to help develop STEM in Australian schools include the Victorian Model Solar Vehicle Challenge, the Maths Challenge (Australian Mathematics Trust), Go Girl Go Global and the Australian Informatics Olympiad. === Canada === Canada ranks 12th out of 16 peer countries in the percentage of its graduates who studied in STEM programs, with 21.2%, a number higher than the United States, but lower than France, Germany, and Austria. The peer country with the greatest proportion of STEM graduates, Finland, has over 30% of its university graduates coming from science, mathematics, computer science, and engineering programs. SHAD is an annual Canadian summer enrichment program for high-achieving high school students in July.
The program focuses on academic learning, particularly in STEAM fields. Scouts Canada has taken similar measures to its American counterpart to promote STEM fields to youth. Its STEM program began in 2015. In 2011 Canadian entrepreneur and philanthropist Seymour Schulich established the Schulich Leader Scholarships, $100 million in $60,000 scholarships for students beginning their university education in a STEM program at 20 institutions across Canada. Each year 40 Canadian students would be selected to receive the award, two at each institution, with the goal of attracting gifted youth into the STEM fields. The program also supplies STEM scholarships to five participating universities in Israel. === China === To promote STEM in China, the Chinese government issued a guideline in 2016 on national innovation-driven development strategy, "instructing that by 2020, China should become an innovative country; by 2030, it should be at the forefront of innovative countries; and by 2050, it should become a technology innovation power." "[I]n May 2018, the launching ceremony and press conference for the 2029 Action Plan for China's STEM Education was held in Beijing, China. This plan aims to allow as many students to benefit from STEM education as possible and equip all students with scientific thinking and the ability to innovate." "In response to encouraging policies by the government, schools in both public and private sectors around the country have begun to carry out STEM education programs." "However, to effectively implement STEM curricula, full-time teachers specializing in STEM education and relevant content to be taught are needed." Currently, "China lacks qualified STEM teachers and a training system is yet to be established." Several Chinese cities have made programming a mandatory subject for elementary and middle school students. One such city is Chongqing.
However, most students from small and medium-sized cities are not exposed to the concept of STEM until they enter college. === Europe === Several European projects have promoted STEM education and careers in Europe. For instance, Scientix is a European cooperation of STEM teachers, education scientists, and policymakers. The SciChallenge project used a social media contest and student-generated content to increase the motivation of pre-university students for STEM education and careers. The Erasmus programme project AutoSTEM used automata to introduce STEM subjects to very young children. ==== Finland ==== The LUMA Center is the leading advocate for STEM-oriented education. Its aim is to promote the instruction and research of natural sciences, mathematics, computer science, and technology across all educational levels in the country. In the native tongue luma stands for "luonnontieteellis-matemaattinen" (lit. adj. "scientific-mathematical"). The abbreviation is more or less a direct translation of STEM, with engineering fields included by association. However, unlike STEM, the term is also a portmanteau from lu and ma. To address the decline in interest in learning the areas of science, the Finnish National Board of Education launched the LUMA scientific education development program. The project's main goal was to raise the level of Finnish education and to enhance students' competencies, improve educational practices, and foster interest in science. The initiative led to the establishment of 13 LUMA centers at universities across Finland supervised by LUMA Center. ==== France ==== The name of STEM in France is industrial engineering sciences (sciences industrielles or sciences de l'ingénieur). The STEM organization in France is the association UPSTI. === Hong Kong === STEM education was not promoted among the local schools in Hong Kong until recent years.
In November 2015, the Education Bureau of Hong Kong released a document titled Promotion of STEM Education, which proposes strategies and recommendations for promoting STEM education. === India === India is second only to China in STEM graduates, with one graduate per 52 people. The total number of fresh STEM graduates was 2.6 million in 2016. STEM graduates have been contributing to the Indian economy with well-paid salaries locally and abroad for the past two decades. The turnaround of the Indian economy with comfortable foreign exchange reserves is mainly attributed to the skills of its STEM graduates. In India, women make up an impressive 43% of STEM graduates, the highest percentage worldwide. However, they hold only 14% of STEM-related jobs. Additionally, among the 280,000 scientists and engineers working in research and development institutes in the country, women represent a mere 14%. In India, OMOTEC provides an innovative STEM-based curriculum, and its students develop products that address new-age problems. Two students also won the Microsoft Imagine Cup for developing a non-invasive method to screen for skin cancer using artificial intelligence. === Nigeria === In Nigeria, the Association of Professional Women Engineers of Nigeria (APWEN) has involved girls between the ages of 12 and 19 in science-based courses in order for them to pursue science-based courses in higher institutions of learning. The National Science Foundation (NSF) in Nigeria has made conscious efforts to encourage girls to innovate, invent, and build through the "invent it, build it" program sponsored by NNPC. === Pakistan === STEM subjects are taught in Pakistan as part of electives taken in the 9th and 10th grades, culminating in Matriculation exams. These electives are pure sciences (Physics, Chemistry, Biology), mathematics (Physics, Chemistry, Maths), and computer science (Physics, Chemistry, Computer Science).
STEM subjects are also offered as electives taken in the 11th and 12th grades, more commonly referred to as first and second year, culminating in Intermediate exams. These electives are FSc pre-medical (Physics, Chemistry, Biology), FSc pre-engineering (Physics, Chemistry, Maths), and ICS (Physics/Statistics, Computer Science, Maths). These electives are intended to aid students in pursuing STEM-related careers in the future by preparing them for the study of these courses at university. A STEM education project has been approved by the government to establish STEM labs in public schools. The Ministry of Information Technology and Telecommunication has collaborated with Google to launch Pakistan's first grassroots-level Coding Skills Development Program, based on Google's CS First Program, a global initiative aimed at developing coding skills in children. The program aims to develop applied coding skills using gamification techniques for children between the ages of 9 and 14. The KPITB's Early Age Programming initiative, established in the province of Khyber Pakhtunkhwa, has been successfully introduced in 225 Elementary and Secondary Schools. Many private organizations are working in Pakistan to introduce STEM education in schools. === Philippines === In the Philippines, STEM is a two-year program and strand that is used for Senior High School (Grades 11 and 12), assigned by the Department of Education or DepEd. The STEM strand is under the Academic Track, which also includes other strands like ABM, HUMSS, and GAS. The purpose of the STEM strand is to educate students in the field of science, technology, engineering, and mathematics, in an interdisciplinary and applied approach, and to give students advanced knowledge and application in the field. After completing the program, the students will earn a Diploma in Science, Technology, Engineering, and Mathematics.
In some colleges and universities, students applying for STEM degrees (like medicine, engineering, computer studies, etc.) are required to be graduates of the STEM strand; if not, they will need to enter a bridging program. === Qatar === In Qatar, AL-Bairaq is an outreach program to high-school students with a curriculum that focuses on STEM, run by the Center for Advanced Materials (CAM) at Qatar University. Each year around 946 students, from about 40 high schools, participate in AL-Bairaq competitions. AL-Bairaq makes use of project-based learning, encourages students to solve authentic problems, and requires them to work with each other as a team to build real solutions. Research has so far shown positive results for the program. === Singapore === STEM is part of the Applied Learning Programme (ALP) that the Singapore Ministry of Education (MOE) has been promoting since 2013, and currently, all secondary schools have such a program. It is expected that by 2023, all primary schools in Singapore will have an ALP. There are no tests or exams for ALPs. The emphasis is for students to learn through experimentation – they try, fail, try, learn from it, and try again. The MOE actively supports schools with ALPs to further enhance and strengthen their capabilities and programs that nurture innovation and creativity. The Singapore Science Centre established a STEM unit in January 2014, dedicated to igniting students' passion for STEM. To further enrich students' learning experiences, their Industrial Partnership Programme (IPP) creates opportunities for students to get early exposure to real-world STEM industries and careers. Curriculum specialists and STEM educators from the Science Centre will work hand-in-hand with teachers to co-develop STEM lessons, provide training to teachers, and co-teach such lessons to provide students with early exposure and develop their interest in STEM.
=== Thailand === In 2017, Thai Education Minister Teerakiat Jareonsettasin said after the 49th Southeast Asia Ministers of Education Organisation (SEAMEO) Council Conference in Jakarta that the meeting approved the establishment of two new SEAMEO regional centers in Thailand. One would be the STEM Education Centre, while the other would be a Sufficient Economy Learning Centre. Teerakiat said that the Thai government had already allocated Bt250 million over five years for the new STEM center. The center will be the regional institution responsible for STEM education promotion. It will not only set up policies to improve STEM education, but it will also be the center for information and experience sharing among the member countries and education experts. According to him, "This is the first SEAMEO regional center for STEM education, as the existing science education center in Malaysia only focuses on the academic perspective. Our STEM education center will also prioritize the implementation and adaptation of science and technology." The Institute for the Promotion of Teaching Science and Technology has initiated a STEM Education Network. Its goals are to promote integrated learning activities, improve student creativity and the application of knowledge, and establish a network of organizations and personnel for the promotion of STEM education in the country. === Turkey === Turkish STEM Education Task Force (or FeTeMM—Fen Bilimleri, Teknoloji, Mühendislik ve Matematik) is a coalition of academicians and teachers who strive to increase the quality of education in STEM fields rather than focusing on increasing the number of STEM graduates. === United States === In the United States, the acronym began to be used in education and immigration debates in initiatives to begin to address the perceived lack of qualified candidates for high-tech jobs. It also addresses concern that the subjects are often taught in isolation, instead of as an integrated curriculum.
Maintaining a citizenry that is well-versed in the STEM fields is a key portion of the public education agenda of the United States. The acronym has been widely used in the immigration debate regarding access to United States work visas for immigrants who are skilled in these fields. It has also become commonplace in education discussions as a reference to the shortage of skilled workers and inadequate education in these areas. The term tends not to refer to the non-professional and less visible sectors of the fields, such as electronics assembly line work. ==== National Science Foundation ==== Many organizations in the United States follow the guidelines of the National Science Foundation on what constitutes a STEM field. The NSF uses a broad definition of STEM subjects that includes subjects in the fields of chemistry, computer and information technology science, engineering, geoscience, life sciences, mathematical sciences, physics and astronomy, social sciences (anthropology, economics, psychology, and sociology), and STEM education and learning research. The NSF is the only American federal agency whose mission includes support for all fields of fundamental science and engineering, except for medical sciences. Its disciplinary program areas include scholarships, grants, and fellowships in fields such as biological sciences, computer and information science and engineering, education and human resources, engineering, environmental research and education, geoscience, international science and engineering, mathematical and physical sciences, social, behavioral and economic sciences, cyberinfrastructure, and polar programs. ==== Immigration policy ==== Although many organizations in the United States follow the guidelines of the National Science Foundation on what constitutes a STEM field, the United States Department of Homeland Security (DHS) has its own functional definition used for immigration policy. 
In 2012, DHS or ICE announced an expanded list of STEM-designated degree programs that qualify eligible graduates on student visas for an optional practical training (OPT) extension. Under the OPT program, international students who graduate from colleges and universities in the United States can stay in the country and receive up to twelve months of training through work experience. Students who graduate from a designated STEM degree program can stay for an additional seventeen months on an OPT STEM extension. As of 2023, the U.S. faces a shortage of high-skilled workers in STEM, and foreign talents must navigate difficult hurdles to immigrate. Meanwhile, some other countries, such as Australia, Canada, and the United Kingdom, have introduced programs to attract talent at the expense of the United States. In the case of China, the United States risks losing its edge over a strategic rival. ==== Education ==== By cultivating an interest in the natural and social sciences in preschool or immediately following school entry, the chances of STEM success in high school can be greatly improved. STEM supports broadening the study of engineering within each of the other subjects and beginning engineering at younger grades, even elementary school. It also brings STEM education to all students rather than only the gifted programs. In his 2012 budget, President Barack Obama renamed and broadened the "Mathematics and Science Partnership (MSP)" to award block grants to states for improving teacher education in those subjects. In the 2015 run of the international assessment test the Program for International Student Assessment (PISA), American students came out 35th in mathematics, 24th in reading, and 25th in science, out of 109 countries. The United States also ranked 29th in the percentage of 24-year-olds with science or mathematics degrees. STEM education often uses new technologies such as 3D printers to encourage interest in STEM fields. 
STEM education can also leverage the combination of new technologies, such as photovoltaics and environmental sensors, with old technologies such as composting systems and irrigation within land lab environments. In 2006, the United States National Academies expressed concern about the declining state of STEM education in the United States. Its Committee on Science, Engineering, and Public Policy developed a list of 10 actions. Their top three recommendations were to:
* Increase America's talent pool by improving K–12 science and mathematics education
* Strengthen the skills of teachers through additional training in science, mathematics, and technology
* Enlarge the pipeline of students prepared to enter college and graduate with STEM degrees
The National Aeronautics and Space Administration has also implemented programs and curricula to advance STEM education to replenish the pool of scientists, engineers, and mathematicians who will lead space exploration in the 21st century. Individual states, such as California, have run pilot after-school STEM programs to learn what the most promising practices are and how to implement them to increase the chance of student success. Another state to invest in STEM education is Florida, where Florida Polytechnic University, Florida's first public university for engineering and technology dedicated to science, technology, engineering, and mathematics (STEM), was established. In-school STEM programs have been established in many districts throughout the U.S., including in New Jersey, Arizona, Virginia, North Carolina, Texas, and Ohio. Continuing STEM education has expanded to the post-secondary level through master's programs such as the University of Maryland's STEM Program, as well as at the University of Cincinnati.
==== Racial gap in STEM fields ==== In the United States, the National Science Foundation found that the average science score on the 2011 National Assessment of Educational Progress was lower for black and Hispanic students than for white, Asian, and Pacific Islanders. In 2011, eleven percent of the U.S. workforce was black, while only six percent of STEM workers were black. Though STEM in the U.S. has typically been dominated by white males, there have been considerable efforts to create initiatives to make STEM a more racially and gender-diverse field. Some evidence suggests that all students, including black and Hispanic students, have a better chance of earning a STEM degree if they attend a college or university at which their entering academic credentials are at least as high as the average student's. ==== Gender gaps in STEM ==== Although women make up 47% of the workforce in the U.S., they hold only 24% of STEM jobs. Research suggests that exposing girls to female inventors at a young age has the potential to reduce the gender gap in technical STEM fields by half. Campaigns from organizations like the National Inventors Hall of Fame aimed to achieve a 50/50 gender balance in their youth STEM programs by 2020. The gender gap in Zimbabwe's STEM fields is also significant, with only 28.79% of women holding STEM degrees compared to 71.21% of men. ==== Intersectionality in STEM ==== STEM fields have been recognized as areas where underrepresentation and exclusion of marginalized groups are prevalent. STEM poses unique challenges related to intersectionality due to rigid norms and stereotypes, both in higher education and professional settings. These norms often prioritize objectivity and meritocracy while overlooking structural inequities, creating environments where individuals with intersecting marginalized identities face compounded barriers. 
For instance, individuals from traditionally underrepresented groups may experience a phenomenon known as a "chilly climate", which refers to incidents of sexism, isolation, and pressure to prove themselves to peers and high-level academics. Minority populations in STEM often experience loneliness due to a lack of belonging and social isolation. ==== American Competitiveness Initiative ==== In the State of the Union Address on January 31, 2006, President George W. Bush announced the American Competitiveness Initiative. Bush proposed the initiative to address shortfalls in federal government support of educational development and progress at all academic levels in the STEM fields. In detail, the initiative called for significant increases in federal funding for advanced R&D programs (including a doubling of federal funding support for advanced research in the physical sciences through DOE) and an increase in U.S. higher education graduates within STEM disciplines. The NASA Means Business competition, sponsored by the Texas Space Grant Consortium, furthers that goal. College students compete to develop promotional plans to encourage students in middle and high school to study STEM subjects and to inspire professors in STEM fields to involve their students in outreach activities that support STEM education. The National Science Foundation has numerous programs in STEM education, including some for K–12 students such as the ITEST Program that supports The Global Challenge Award ITEST Program. STEM programs have been implemented in some Arizona schools. These programs develop higher cognitive skills in students and enable them to inquire and use techniques used by professionals in the STEM fields. Project Lead The Way (PLTW) is a provider of STEM education curricular programs to middle and high schools in the United States.
Programs include a high school engineering curriculum called Pathway To Engineering, a high school biomedical sciences program, and a middle school engineering and technology program called Gateway To Technology. PLTW programs have been endorsed by President Barack Obama and United States Secretary of Education Arne Duncan as well as various state, national, and business leaders. ==== STEM Education Coalition ==== The Science, Technology, Engineering, and Mathematics (STEM) Education Coalition works to support STEM programs for teachers and students at the U.S. Department of Education, the National Science Foundation, and other agencies that offer STEM-related programs. Activity of the STEM Coalition seems to have slowed since September 2008. ==== Scouting ==== In 2012, the Boy Scouts of America began handing out awards, titled NOVA and SUPERNOVA, for completing specific requirements appropriate to the scouts' program level in each of the four main STEM areas. The Girl Scouts of the USA has similarly incorporated STEM into their program through the introduction of merit badges such as "Naturalist" and "Digital Art". SAE is an international organization and provider specializing in supporting education, award, and scholarship programs for STEM subjects, from pre-K to college degrees. It also promotes scientific and technological innovation. ==== Department of Defense programs ==== eCybermission is a free, web-based science, mathematics, and technology competition for students in grades six through nine, sponsored by the U.S. Army. Each of its webinars focuses on a different step of the scientific method and is presented by an experienced eCybermission CyberGuide. CyberGuides are military and civilian volunteers with a strong background in STEM and STEM education, who can provide insight into science, technology, engineering, and mathematics to students and team advisers.
STARBASE is an educational program, sponsored by the Office of the Assistant Secretary of Defense for Reserve Affairs. Students interact with military personnel to explore careers and make connections with the "real world". The program provides students with 20–25 hours of experience at the National Guard, Navy, Marines, Air Force Reserve, and Air Force bases across the nation. SeaPerch is an underwater robotics program that trains teachers to teach their students how to build an underwater remotely operated vehicle (ROV) in an in-school or out-of-school setting. Students build the ROV from a kit composed of low-cost, easily accessible parts, following a curriculum that teaches basic engineering and science concepts with a marine engineering theme. ==== NASA ==== NASAStem is a program of the U.S. space agency NASA to increase diversity within its ranks, including age, disability, and gender as well as race/ethnicity. ==== Legislation ==== The America COMPETES Act (P.L. 110–69) became law on August 9, 2007. It is intended to increase the nation's investment in science and engineering research and in STEM education from kindergarten to graduate school and postdoctoral education. The act authorizes funding increases for the National Science Foundation, National Institute of Standards and Technology laboratories, and the Department of Energy (DOE) Office of Science over FY2008–FY2010. Robert Gabrys, Director of Education at NASA's Goddard Space Flight Center, articulated success as increased student achievement, early expression of student interest in STEM subjects, and student preparedness to enter the workforce. ==== Jobs ==== In November 2012 the White House announcement before the congressional vote on the STEM Jobs Act put President Obama in opposition to many of the Silicon Valley firms and executives who bankrolled his re-election campaign. 
The Department of Labor identified 14 sectors that are "projected to add substantial numbers of new jobs to the economy or affect the growth of other industries or are being transformed by technology and innovation requiring new sets of skills for workers." The identified sectors were as follows: advanced manufacturing, automotive, construction, financial services, geospatial technology, homeland security, information technology, transportation, aerospace, biotechnology, energy, healthcare, hospitality, and retail. The Department of Commerce notes that STEM careers are some of the best-paying and have the greatest potential for job growth in the early 21st century. The report also notes that STEM workers play a key role in the sustained growth and stability of the U.S. economy, and that training in STEM fields generally results in higher wages, whether or not those workers are employed in a STEM field. In 2015, there were around 9.0 million STEM jobs in the United States, representing 6.1% of American employment. STEM jobs were increasing by around 9% per year. The Brookings Institution found that the demand for competent technology graduates will surpass the number of capable applicants by at least one million individuals. According to Pew Research Center, a typical STEM worker earns two-thirds more than those employed in other fields. ==== Recent progress ==== According to the 2014 US census, "74 percent of those who have a bachelor's degree in science, technology, engineering and math — commonly referred to as STEM — are not employed in STEM occupations." In September 2017, several large American technology firms collectively pledged to donate $300 million for computer science education in the U.S. PEW findings revealed in 2018 that Americans identified several issues that hound STEM education, including unconcerned parents, disinterested students, obsolete curriculum materials, and too much focus on state parameters.
Fifty-seven percent of survey respondents pointed to students' lack of concentration in learning as one of STEM education's main problems. The recent National Assessment of Educational Progress (NAEP) report card released technology and engineering literacy scores, which determine whether students can apply technology and engineering proficiency to real-life scenarios. The report showed a gap of 28 points between low-income students and their high-income counterparts. The same report also indicated a 38-point difference between white and black students. The Smithsonian Science Education Center (SSEC) announced the release of a five-year strategic plan by the Committee on STEM Education of the National Science and Technology Council on December 4, 2018. The plan is entitled "Charting a Course for Success: America's Strategy for STEM Education." The objective is to propose a federal strategy anchored on a vision for the future so that all Americans are given permanent access to premium-quality education in Science, Technology, Engineering, and Mathematics. In the end, the United States can emerge as a world leader in STEM mastery, employment, and innovation. The goals of this plan are building foundations for STEM literacy; enhancing diversity, equality, and inclusion in STEM; and preparing the STEM workforce for the future. The 2019 fiscal budget proposal of the White House supported the funding plan in President Donald Trump's Memorandum on STEM Education, which allocated around $200 million in grant funding for STEM education every year. This budget also supports STEM through a grant program worth $20 million for career and technical education programs. ==== Events and programs to help develop STEM in US schools ====
* FIRST Tech Challenge
* VEX Robotics Competitions
* FIRST Robotics Competition
=== Vietnam === In Vietnam, beginning in 2012, many private education organizations have launched STEM education initiatives.
In 2015, the Ministry of Science and Technology and Liên minh STEM organized the first National STEM Day, followed by many similar events across the country. In 2015, the Ministry of Education and Training included STEM as an area that needed to be encouraged in the national school year program. In May 2017, the Prime Minister signed Directive No. 16 stating: "Dramatically change the policies, contents, education and vocational training methods to create a human resource capable of receiving new production technology trends, with a focus on promoting training in science, technology, engineering and mathematics (STEM), foreign languages, information technology in general education" and asking the Ministry of Education and Training to "Promote the deployment of science, technology, engineering and mathematics (STEM) education in general education program; Pilot organize in some high schools from 2017 to 2018." == Women == Women constitute 47% of the U.S. workforce and perform 24% of STEM-related jobs. In the UK, women perform 13% of STEM-related jobs (2014). In the U.S., women with STEM degrees are more likely to work in education or healthcare rather than STEM fields compared with their male counterparts. The gender ratio depends on the field of study. For example, in the European Union in 2012, women made up 47.3% of the total, 51% of the social sciences, business, and law, 42% of the science, mathematics, and computing, 28% of engineering, manufacturing, and construction, and 59% of PhD graduates in Health and Welfare. A 2019 study showed that part of the success of women in STEM depends on the way women in STEM are viewed.
In a study of grants evaluated primarily on the project versus primarily on the project lead, there was almost no difference between projects from men and women when the evaluation focused on the project, but when the evaluation focused mainly on the project leader, projects headed by women were awarded grants four percent less often. Improving the experiences of women in STEM is a major component of increasing the number of women in STEM. One part of this includes the need for role models and mentors who are women in STEM. Along with this, having good resources for information and networking opportunities can improve women's ability to flourish in STEM fields. Adding to the complexity, global studies indicate that biology may play a significant role in the gender gaps in STEM fields because the propensity for women to pursue college degrees in STEM fields declines consistently as countries become more wealthy and egalitarian. As women are more free to choose their careers, they are more prone to choose careers that relate to people rather than objects. == LGBTQ+ == People identifying within the group LGBTQ+ have faced discrimination in STEM fields throughout history. Few were openly queer in STEM; however, two well-known examples are Alan Turing, the father of computer science, and Sara Josephine Baker, an American physician and public-health leader. Despite recent changes in attitudes towards LGBTQ+ people, discrimination still permeates STEM fields. A recent study has shown that sexual minority students were less likely to have completed a bachelor's degree in a STEM field, having opted to switch their major. Those that remained in a STEM field were, however, more likely to participate in undergraduate research programs. According to the study, sexual minorities showed higher overall retention rates within STEM-related fields compared to heterosexual women.
Another study concluded that queer people are more likely to experience exclusion, harassment, and other negative impacts while in a STEM career while also having fewer opportunities and resources available to them. Multiple programs and institutions are working towards increasing the inclusion and acceptance of LGBTQ+ people in STEM. In the US, the National Organization of Gay and Lesbian Scientists and Technical Professionals (NOGLSTP) has organized people to address homophobia since the 1980s and now promotes activism and support for queer scientists. Other programs, including 500 Queer Scientists and Pride in STEM, function as visibility campaigns for LGBTQ+ people in STEM worldwide. == Criticism == The focus on increasing participation in STEM fields has attracted criticism. In the 2014 article "The Myth of the Science and Engineering Shortage" in The Atlantic, demographer Michael S. Teitelbaum criticized the efforts of the U.S. government to increase the number of STEM graduates, saying that, among studies on the subject, "No one has been able to find any evidence indicating current widespread labor market shortages or hiring difficulties in science and engineering occupations that require bachelor's degrees or higher", and that "Most studies report that real wages in many—but not all—science and engineering occupations have been flat or slow-growing, and unemployment as high or higher than in many comparably-skilled occupations." Teitelbaum also wrote that the then-current national fixation on increasing STEM participation paralleled previous U.S. government efforts since World War II to increase the number of scientists and engineers, all of which he stated ultimately ended up in "mass layoffs, hiring freezes, and funding cuts"; including one driven by the Space Race of the late 1950s and 1960s, which he wrote led to "a bust of serious magnitude in the 1970s." IEEE Spectrum contributing editor Robert N. 
Charette echoed these sentiments in the 2013 article "The STEM Crisis Is a Myth", also noting that there was a "mismatch between earning a STEM degree and having a STEM job" in the United States, with only around 1⁄4 of STEM graduates working in STEM fields, while less than half of workers in STEM fields have a STEM degree. Economics writer Ben Casselman, in a 2014 study of post-graduation earnings in the United States for FiveThirtyEight, wrote that, based on the data, science should not be grouped with the other three STEM categories, because, while the other three generally result in high-paying jobs, "many sciences, particularly the life sciences, pay below the overall median for recent college graduates." A 2017 article from the University of Leicester concluded that "maintaining accounts of a 'crisis' in the supply of STEM workers has usually been in the interests of industry, the education sector and government, as well as the lobby groups that represent them. Concerns about a shortage have meant the allocation of significant additional resources to the sector whose representatives have, in turn, become powerful voices in advocating for further funds and further investment." A 2022 report from Rutgers University stated: "In the United States, the STEM crisis theme is a perennial policy favorite, appearing every few years as an urgent concern in the nation's competition with whatever other nation is ascendant, or as the cause of whatever problem is ailing the domestic economy. And the solution is always the same: increase the supply of STEM workers through expanding STEM education. Time and again, serious and empirically grounded studies find little evidence of any systemic failures or an inability of market responses to address whatever supply is required to meet workforce needs."
A study of the UK job market, published in 2022, found problems similar to those reported earlier for the United States: "It is not clear that having a degree in the sciences, rather than in other subjects, provides any sort of advantage in terms of short- or long-term employability... While only a minority of STEM graduates ever work in highly-skilled STEM jobs, we identified three particular characteristics of the STEM labour market that may present challenges for employers: STEM employment appears to be predicated on early entry to the sector; a large proportion of STEM graduates are likely to never work in the sector; and there may be more movement out of HS STEM positions by older workers than in other sectors..." == See also == == References == == Further reading == David Beede; et al. (September 2011). "Education Supports Racial and Ethnic Equality in STEM" (PDF). U.S. Department of Commerce. Retrieved 2012-12-21. David Beede; et al. (August 2011). "Women in STEM: An Opportunity and An Imperative" (PDF). U.S. Department of Commerce. Retrieved 2012-12-21. Kaye Husbands Fealing, Aubrey Incorvaia, and Richard Utz, "Humanizing Science and Engineering for the Twenty-First Century." Issues in Science and Technology, Fall issue, 2022: 54–57. David Langdon; et al. (July 2011). "STEM: Good Jobs Now and For the Future" (PDF). U.S. Department of Commerce. Retrieved 2012-12-21. Arden Bement (May 24, 2005). "Statement To House & Senate Appropriators In Support Of STEM Education And NSF Education" (PDF). STEM Coalition. Archived from the original (PDF) on November 20, 2012. Retrieved 2012-12-21. Carla C. Johnson, et al., eds. (2020). Handbook of Research on STEM Education (Routledge, 2020). Mary Kirk (2009). Gender and Information Technology: Moving Beyond Access to Co-Create Global Partnership. IGI Global Snippet. ISBN 978-1-59904-786-7. Shirley M. Malcom; Daryl E. Chubin; Jolene K. Jesse (2004).
Standing Our Ground: A Guidebook for STEM Educators in the Post-Michigan Era. American Association for the Advancement of Science. ISBN 0871686996. UNESCO publication on girls' education in STEM – Cracking the code: girls' and women's education in science, technology, engineering and mathematics (STEM). http://unesdoc.unesco.org/images/0025/002534/253479E.pdf Wing Lau – Chief Engineer at the Department of Physics, Oxford University (Oct 12, 2017). "STEM Re-vitalisation, not trivialisation". OpenSchool. Retrieved 2017-10-12. == External links == Media related to STEM at Wikimedia Commons
Wikipedia/Science,_technology,_engineering,_and_mathematics
A disease of despair is one of three classes of behavior-related medical conditions that increase in groups of people who experience despair due to a sense that their long-term social and economic prospects are bleak. The three disease types are drug overdose (including alcohol overdose), suicide, and alcoholic liver disease. In 2017, diseases of despair, and the resulting deaths of despair, were high in the Appalachian region of the United States, especially in Pennsylvania, West Virginia, and Delaware. The prevalence increased markedly during the first decades of the 21st century, especially among middle-aged and older working-class White Americans starting in 2010, followed by an increase in mortality for Hispanic Americans in 2011 and African Americans in 2014. It gained media attention because of its connection to the opioid epidemic. In 2018, some 158,000 U.S. citizens died from these causes, compared to 65,000 in 1995. Deaths of despair increased sharply during the COVID-19 pandemic and the associated recession, with a 10% to 60% increase above pre-pandemic levels. Life expectancy in the United States declined further to 76.4 years in 2021, with the main drivers being the COVID-19 pandemic, along with deaths from drug overdoses, suicides, and liver disease. == Definitions == The concept of despair in any form can affect an individual person, and can arise in and spread through social communities. There are four basic types of despair. Cognitive despair denotes thoughts connected to defeat, guilt, hopelessness, and pessimism. It may make a person perceive other people's actions as hostile and discount the value of long-term outcomes. Emotional despair refers to feelings of sadness, irritability, loneliness, and apathy, and may partly impede the process of creating and nourishing interpersonal relationships.
The term behavioural despair describes risky, reckless, and self-destructive acts reflecting little to no consideration of the future, such as self-harm, reckless driving, drug use, risky sexual behaviours, and others. Biological despair relates to dysfunction or dysregulation of the body's stress-reactive system and/or to hormonal instability. Being under the influence of despair for an extended amount of time may lead to the development of one or more of the diseases of despair, such as suicidal thoughts or drug and alcohol abuse. If an individual has a disease of despair, there is an increased risk of death of despair, usually classified as a suicide, drug or alcohol overdose, or liver failure. == Risk factors == Unstable mental health, depression, suicidal thoughts, and addiction to drugs and alcohol affect people of every age, every ethnicity, and every demographic group in every country in the world. === White Americans === In 2017, these problems were on the rise, especially among US White non-Hispanic men and women in midlife. Since the beginning of the millennium, this group has been the only one in the world to experience a continual increase in mortality and morbidity, while US Black non-Hispanics and US Hispanics, as well as all subgroups of populations in other rich countries (such as EU countries, Japan, Australia, and others), show the exact opposite trend. Men and women with no more than a high school education and those living in rural areas are more affected by this phenomenon than their peers who are college-educated and live in urban areas. In 2024, UCLA researchers Joseph Friedman and Helena Hansen found that by 2022, the White rate of overdoses was falling. === African Americans === In 2024, UCLA researchers Joseph Friedman and Helena Hansen stated that African-American deaths of despair are extensive and have been ignored by policy makers, the medical establishment, and the media.
Friedman and Hansen cited African Americans' relatively low life expectancy and the high rate of African-American drug overdose deaths. After 2015, there was a sharp increase in deaths of despair among African Americans. In 2024, California social worker and mental health leader Nakeya Fields cited racial injustice, financial strain, social isolation, and gun violence as contributing push-factors to African American despair and subsequent substance abuse. === Native Americans === In 2024, UCLA researchers Joseph Friedman and Helena Hansen found that rates of "midlife mortality from deaths of despair" among Native American and Alaska Native individuals were significantly higher than those among White individuals from 1999 to 2022. In 2022, the Native American midlife death rate was 241.70 per 100,000 people, 2.36 times the rate among White individuals. In 2022, Native Americans had a drug overdose rate of 104.95 per 100,000 people, compared to the 2022 Black rate of 84.80 overdoses per 100,000 and the White rate of 59.26 per 100,000. In 2022, the Native American midlife rate of alcoholic liver disease was 108.83 per 100,000 people, more than six times the White rate of 17.92 per 100,000. === Hispanic Americans === In 2024, UCLA researchers Joseph Friedman and Helena Hansen stated that deaths of despair among Latinos in 2022 were catching up to the higher White and Black rates of deaths of despair. == History == Mortality and morbidity rates in the United States had been decreasing for decades. Between 1970 and 2013, mortality rates for middle-aged Americans fell by 44%, and morbidity was on a decline even among the elderly. After 1998, mortality rates in other rich countries declined by 2% a year. Midlife mortality fell by more than 200 per 100,000 for Black non-Hispanics and by more than 60 per 100,000 for Hispanics during the 1998–2013 period. The AIDS epidemic in the US was brought under control.
In 2018, 37,968 people received an HIV diagnosis in the USA and its 6 dependent areas, an overall 7% decrease compared with 2014. In 2017, cardiovascular disease and cancer, the two biggest killers in middle age, were on a decline, even though escalating obesity remained uncontrolled. Despite all of these favorable numbers, the White non-Hispanic population exhibited an increase in premature deaths, especially in those caused by suicide, drug overdose, and alcoholic liver disease. There were two main factors driving this trend. The 2017 data showed that the US White non-Hispanic population significantly differed from populations in other countries. For example, in 2015, drug, alcohol, and suicide mortality was more than two times higher among US White non-Hispanics than among people from the United Kingdom, Sweden, or Australia. In comparison to US Black non-Hispanics, the White non-Hispanic mortality and morbidity rates were lower, but in 2017 the gap between these groups was narrowing quickly. For people aged 30–34, the difference between these two ethnicities had almost completely diminished. In 2015, White non-Hispanics aged 50–54 with no more than a high school diploma had almost 1,000 premature deaths per 100,000. The average for all White non-Hispanics, regardless of their education, was around 500 deaths per 100,000. Therefore, education likely correlates negatively with the probability of developing a disease of despair; that is, higher educational attainment is associated with a lower probability of developing one. Secondly, the excess premature deaths are, as stated above, caused primarily by suicide, poisonings or drug overdoses, and other causes connected especially to alcoholism, such as chronic liver diseases. The proportion of these causes of death (in comparison to deaths caused by assaults, cancer, cardiovascular diseases, HIV, and motor vehicle crashes) in the population of White non-Hispanic people aged 25–44 increased by 210%.
It is also worth noting that the highest rates are found among people living in rural areas. For example, during 1999–2015, the rate of deaths of despair increased twice as fast as the rate of other causes of death among White non-Hispanics aged 30–44 living in rural areas. In total, death rates in rural subpopulations for all ethnicities increased among those aged 25–64 by 6%. These findings suggest that living in rural areas is also connected to the diseases and deaths of despair. In 2022, US suicides reached record levels, with 49,369 suicide deaths. Since 2011, roughly 540,000 people have died by suicide in the United States. In 2010, life expectancy for working-class Americans without a college degree peaked and has been declining since, with adult life expectancy after the age of 25 being another 49.8 years, down from 51.6 in 1992. Anne Case and Angus Deaton attribute this trend in part to rising deaths of despair. == Causes == The factors that exacerbate diseases of despair are not fully known, but they are generally recognized as including worsening economic inequality and a feeling of hopelessness about personal financial success. This can take many forms and appear in different situations. For example, people feel inadequate and disadvantaged when products are marketed to them as being important, but these products repeatedly prove to be unaffordable for them. This increase in rates of mental distress and diseases of despair has been attributed to flaws in contemporary capitalism and to policies associated with the ideology of neoliberalism, which seeks to release markets from all restrictions and to reduce or eliminate government assistance programs. The overall loss of employment in affected geographic regions, stagnant wages, and deteriorating working conditions, along with the decline of labor unions and the welfare state, are widely hypothesized factors.
As such, some scholars have characterized deaths of despair driven by austerity policies and privatization as "social murder". The changes in the labor market also affect social connections that might otherwise provide protection, as people at risk for this problem are less likely to get married, more likely to get divorced, and more likely to experience social isolation. However, some experts claim the correlation between income and mortality/morbidity rates is only coincidental and may not be associated with deaths for all groups. In 2017, Anne Case and Angus Deaton argued that "after 1999, blacks with a college education experienced even more severe percentage declines in income than did whites in the same education group. Yet black mortality rates have fallen steadily, at rates between 2 and 3 percent per year for all age groups." Other examples from Europe show that decreased incomes and/or increased unemployment do not, in general, correlate with increased mortality rates. Case and Deaton argue that the ultimate cause is people's sense that life is meaningless, unsatisfying, or unfulfilling, rather than strictly the basic economic security that makes these higher-order feelings more likely. In their 2020 book, Case and Deaton assert that in the United States, much more so than in peer countries such as those of Western Europe, globalization and technological advancement dramatically shifted political power towards capital and away from labor by empowering corporations and weakening labor unions. As such, other rich countries, while facing challenges associated with globalization and technological change, did not experience a "long-term stagnation of wages, nor an epidemic of deaths of despair." Data from 2018 show that diseases of despair pose a complex threat to modern society and that they are not correlated only with the economic strength of an individual.
Social connections, level of education, place of residence, medical condition, mental health, working opportunities, subjective perception of one's own future: all of these play a role in determining whether an individual will develop diseases of despair. Additionally, the younger generations are increasingly influenced by social media and other modern technologies, which may have unexpected and unfavorable effects on their lives as well. For example, a 2016 study stated that the use of social media "was significantly associated with increased depression." == COVID-19 pandemic == Preliminary studies indicate an aggravation of depression, anxiety, drug overdoses, and suicidal ideation following the beginning of the COVID-19 pandemic. Though health aspects like stress can be concurrent with the crisis, other biopsychosocial risk factors such as job loss, housing precarity, and food insecurity can manifest over time. This range of social determinants, commonly experienced during an economic downturn, can induce and aggravate a sense of despair. Loneliness, which is associated with despair, was also aggravated by the social isolation practices put in place during the COVID-19 pandemic, which may contribute to a rise in diseases of despair. A 2022 preliminary review of 70 published studies conducted in 17 countries concerning the potential impacts of COVID-19 on deaths of despair indicated that women, ethnic minorities and younger age groups may have suffered disproportionately more than other groups. === Drug overdoses === In 2021, preliminary indications in Canada and the United States demonstrated that the trajectory of drug overdose-related deaths was exacerbated by the COVID-19 pandemic. In Canada, drug overdose-related deaths had stabilized prior to the onset of COVID-19 but increased after it. In the United States, drug overdose-related deaths were increasing prior to COVID-19 and accelerated after its onset.
More specifically, the opioid overdose crisis worsened between 2017 and 2020 in Wisconsin. Particularly in Milwaukee County, Wisconsin, it was found that the pandemic had markedly escalated the number of monthly opioid overdose deaths. The worst of these impacts occurred primarily in poor, urban neighborhoods, especially affecting Black and Hispanic communities. Even so, wealthy and prosperous White suburban communities also faced an increase in the number of overdose deaths. === Impact on Wisconsin opioid crisis === The rise in the use of opioids for recreational or self-medicating purposes has given rise to an opioid crisis, which started emerging in the 1990s but has significantly worsened to the point of an opioid epidemic within the last two decades, especially since 2010. This is primarily due to the introduction of synthetic opioids, which led to the number of annual overdose deaths in the US doubling between 2010 (38,329 deaths) and 2019 (70,360 deaths). In order to observe the results in relation to the opioid epidemic following the COVID-19 crisis, the number of monthly opioid overdose deaths (OODs) was examined from January 1, 2017, until December 31, 2020. As a result of the rising COVID-19 pandemic, a stay-at-home order was issued on March 23, 2020. This date is used to distinguish between the two scenarios within the study, with the pre-pandemic scenario ranging from January 1, 2017, until March 23, 2020, and the post-pandemic scenario from March 23, 2020, until December 31, 2020. According to the results of the study, the peak monthly overdose death total was 37 within the pre-pandemic timeline. After the COVID-19 pandemic hit, and following the stay-at-home order issued on March 23, 2020, the peak monthly overdose death total rose to 57 during the post-pandemic scenario.
In addition, the minimum number of monthly OODs was 23. Overall, the number of average monthly OODs increased by around 12, implying that even on average, the rise in the number of OODs was significantly impacted by the conditions of COVID-19. The Milwaukee County Medical Examiner's Office provided detailed death certificates of OODs, which included information on the drugs involved in each overdose. This information was used to isolate and extract data specifically related to opioid overdose deaths. The data included the addresses where the overdoses occurred, which were matched to the administrative boundaries of census tracts using ArcGIS Desktop 10.7 and the TIGER/Line database from the Census Bureau. The researchers also collected demographic data at the census tract level from the U.S. Census Bureau's 2010 Census data platform. Only 3% of census tracts consistently had low rates of opioid overdose deaths. These tracts were located in suburban areas and had a predominantly White population (83%) with a higher median household income ($87,079) and educational attainment (34% had a bachelor's degree or higher). These areas also had better access to health resources, with 99% of residents having access to health insurance and care. Additionally, they had other positive indicators of social well-being, such as a high rate of internet subscriptions (85%) and a low incarceration rate (3%). However, when looking at pre-pandemic opioid overdose death patterns in Milwaukee County (2017–2019), it becomes evident that the opioid crisis has had a disproportionate impact on historically marginalized Black and Hispanic communities in central Milwaukee neighborhoods. This corresponds with the researchers' earlier findings that policies, resources, and interventions aimed at addressing the opioid crisis in Milwaukee County have primarily benefited White communities rather than communities of color.
This lack of impact on Black and Hispanic communities has further exacerbated existing health inequalities. To analyze changes in opioid overdose deaths (OODs) during the pandemic, OOD increment percentages were calculated for all census tracts, comparing post-pandemic OOD rates to the 5-year average. The results revealed two areas where OOD rates were significantly impacted: (1) predominantly poor, Black neighborhoods in the inner city and (2) predominantly affluent, White suburban census tracts. There were also areas where OOD rates remained stable, including tracts with high OOD rates prior to the pandemic. However, it is important to note that some suburban tracts saw large percentage increases in OODs due to their relatively low pre-pandemic OOD frequency, which does not necessarily reflect a large increase in the total number of OODs. The urban tracts that experienced a 10,280% increment in OODs suffer from racial and economic segregation, concentrated poverty, and a lack of educational and employment opportunities. The population in these tracts is predominantly Black (72.46%), with a low median household income ($31,192), a high unemployment rate (34%), and low educational attainment (only 5% hold a bachelor's degree or higher). They also have a high incarceration rate (7%) and a low internet subscription rate (69%). The pandemic has further exacerbated existing socioeconomic disparities in these areas. In contrast, the suburban census tracts that experienced an 11,600% increment in OODs are affluent and well-educated. The population is predominantly White (83.63%), with a higher median household income ($75,959), a lower unemployment rate (11%), and higher educational attainment (45% hold a bachelor's degree or higher). These tracts also have high rates of internet subscription (90%) and low incarceration rates (2%).
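The increment percentages above follow from simple arithmetic: a tract's post-pandemic OOD rate is compared against its five-year average, so a tract with a near-zero baseline can show an enormous percentage increase from a small absolute change. A minimal sketch (the function name and rate values are illustrative, not taken from the study):

```python
def ood_increment_pct(post_rate, baseline_rate):
    """Percent change of the post-pandemic OOD rate relative to the
    tract's five-year average (baseline) rate."""
    return (post_rate - baseline_rate) / baseline_rate * 100.0

# A tract with a tiny baseline shows a huge percent increment even
# when the absolute rise is modest:
print(ood_increment_pct(5.19, 0.05))   # roughly a 10,280% increment

# A tract with a substantial baseline needs a large absolute rise to
# show the same percentage:
print(ood_increment_pct(2.0, 1.0))     # a 100% increment
```

This is why the text cautions that large suburban percentage increments need not correspond to large absolute increases in deaths.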
Despite their high economic and social well-being, suburban tracts have still been affected by pandemic-related stress, which likely contributed to the increase in OODs. == Contrasted with diseases of poverty == Diseases of despair differ from diseases of poverty because poverty itself is not the central factor. Groups of impoverished people with a sense that their lives or their children's lives will improve are not affected as much by diseases of despair. Instead, this problem affects people who have little reason to believe that the future will be better. As a result, it is distributed unevenly, for example affecting working-class people in the United States more than working-class people in Europe, even when the European economy was weaker. It also affects White people more than racially disadvantaged groups, possibly because working-class White people are more likely to believe that they are not doing better than their parents did, while non-White people in similar economic situations are more likely to believe that they are better off than their parents. == Effects == Starting in 1998, a rise in deaths of despair has resulted in an unexpected increase in the number of middle-aged White Americans dying (the age-specific mortality rate). By 2014, the increasing number of deaths of despair had resulted in a drop in overall life expectancy. Anne Case and Angus Deaton propose that the increase in mid-life mortality is the result of cumulative disadvantages that have occurred over decades, and that solving it will require patience and perseverance for many years, rather than a quick fix that produces immediate results. The number of deaths of despair in the United States was estimated at 150,000 per year in 2017. Even though the main cause of diseases of despair may not be purely economic, the financial consequences of this phenomenon are substantial.
According to a 2016 report, alcohol misuse, misuse of illegal drugs and non-prescribed medications, treatment of associated disorders, and lost productivity cost the U.S. more than $400 billion every year. About 40 percent of those costs were paid by government, implying a substantial cost of alcohol and drug misuse to taxpayers. Another study claims even higher costs of around $1.5 trillion in economic loss, loss of productivity, and societal harm. == Terminology == The phrase diseases of despair has been criticized for medicalizing problems that are primarily social and economic, and for underplaying the role of specific drugs, such as OxyContin, in increasing deaths. While the disease model of addiction has a strong body of empirical support, there is weak evidence for biological markers of suicidal thoughts and behaviors and no evidence that suicide fits a disease model. The use of the phrase diseases of despair to describe suicide in medical literature is more reflective of the medical model than of suicidal thoughts and behaviors. == See also == == References == == Further reading == == External links == Why Americans Are Dying From Despair. The New Yorker, March 16, 2020. 'Deaths of Despair' and the Failure of Capitalism. Current Affairs, April 28, 2021.
Wikipedia/Diseases_of_despair
Geographical segregation exists whenever the proportions of two or more population groups are not homogeneous throughout a defined space. Populations can be considered any plant or animal species, human genders, followers of a certain religion, people of different nationalities, ethnic groups, etc. In social geography, segregation of ethnic groups, social classes and genders is often measured by calculating indices such as the index of dissimilarity. Different dimensions of segregation (or its contrary) are recognized: exposure, evenness, clustering, concentration, centralization, etc. More recent studies also highlight new local indices of segregation. Geographical segregation is most often measured with individuals' place of residence, but increasing geographical data availability now makes it possible to compute segregation indices using individuals' activity space, in whole or in part. == Human geographical segregation == Segregation, as a broad concept, has appeared in all parts of the world where people exist—in different contexts and times it takes on different forms, shaped by the physical and human environments. The spatial concentration of population groups is not a new phenomenon. Since societies began to form, there have been segregated inhabitants. Whether imposed purposefully by force or developed gradually over time, segregation has been based on socio-economic, religious, educational, linguistic or ethnic grounds. Some groups choose to be segregated to strengthen social identity. == Types == === Legal segregation === Segregation can be caused by legal frameworks, such as in the extreme example of apartheid in South Africa, and even Jewish ghettoization in Germany in the 20th century. Segregation can also happen slowly, stimulated by increased land and housing prices in certain neighborhoods, resulting in segregation of rich and poor in many cities. Segregation can also be assigned arbitrarily.
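The index of dissimilarity mentioned above measures evenness: D = ½ Σᵢ |aᵢ/A − bᵢ/B|, where aᵢ and bᵢ are the counts of two groups in spatial unit i and A and B are their totals across all units. D ranges from 0 (identical spatial distributions) to 1 (complete segregation), and can be read as the share of either group that would have to relocate to even out the distribution. A minimal sketch (the function name is illustrative):

```python
def dissimilarity_index(group_a, group_b):
    """Index of dissimilarity: D = 1/2 * sum(|a_i/A - b_i/B|) over
    spatial units i, where A and B are the group totals."""
    total_a, total_b = sum(group_a), sum(group_b)
    return 0.5 * sum(abs(a / total_a - b / total_b)
                     for a, b in zip(group_a, group_b))

# Perfect evenness: both groups split identically across two units.
print(dissimilarity_index([50, 50], [200, 200]))  # 0.0

# Complete segregation: each group confined to its own unit.
print(dissimilarity_index([100, 0], [0, 400]))    # 1.0
```

Because D compares proportions rather than raw counts, a small group and a large group can be compared directly.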
Arbitrary boundary assignment can occur on a global scale, as seen in the Partition of India, instances in Ireland, and many other situations. Geographical boundaries were often put in place without much consideration for native peoples or for the natural geographic terrain and cultural limits that had long been in place. In the United States, segregation was enforced through the law, notably the racial segregation between White and Black populations in the American South from the late 1800s into the first half of the 20th century. These laws separated people of color from white people in public places, including movie theaters, restaurants, schools, shopping centers, etc., and were commonly referred to as Jim Crow laws. Although these laws were abolished in the mid-1960s, their impacts are still present in American communities today, reflected in the significant gaps in homeownership, income, and education levels between communities of color and majority-White communities. In apartheid South Africa, segregation was very much a legal concept. The government discriminated against Black South Africans and South Africans of color and forced them to comply with apartheid. Some of the legislation passed dealt with physical segregation in schools, land tenure, geographic segregation and state repression. These measures were clearly legislative, but in the case of most white South Africans, a social construct as well. Segregation can also be encouraged using geographical boundaries, while not explicitly enforced. Public housing projects, especially in the United States, have been criticized for this. Putting cheap housing in poor Black neighborhoods encouraged local African Americans to stay in the area, keeping other, richer areas white by not building public housing there. Today, many communities within the United States remain segregated, due to ongoing racial inequalities and self-segregation.
=== Social segregation and gentrification === Segregation can also be caused by social factors that become evident as they happen but are not necessarily government sanctioned, such as informal ghettos or simply rich neighborhoods. In terms of land capital, over time in a given area, humans settle down and buy or take land. Some privileged people acquire better land (that is, more arable, closer to potential capital, with more pleasing views). Demand for these nicer habitats drives up prices, and areas deemed "better" based solely on geography become inherently exclusionary in their population makeup. West Point Grey, an area of Vancouver, Canada, is rich in part because of the views it offers of Downtown Vancouver and the Gulf Islands, and its location near the water and the University of British Columbia. Wealthy people had the resources to pay for these advantages and subsequently drove up prices. Examples of this can be seen all over the world. Geographical segregation is not always defined by the sightlines of places. It also occurs around certain structures, or simply in areas that are specifically developed with an income bracket in mind. These social factors are commonly attributed to the impacts of gentrification. Gentrification is the process by which the makeup of a community is changed, including its racial identity, economic status, and level of education. Generally, gentrification occurs in communities that are low-income and have a majority-minority population. It begins when affluent families, usually white, move into these lower-income neighborhoods and invest their money in the community. These investments typically include reconstructing public transit, downtown businesses, and neighborhood housing. This raises the overall investment value of the area, which increases the cost of living.
This, in turn, displaces the original low-income residents, who can no longer afford to live there. It can also create physical health issues for the original residents, as they are segregated into areas typically near factories or construction zones, exposing them to toxins. The Centers for Disease Control and Prevention has identified gentrification as a public health issue. Another segregation term, the ghetto, has been used in many different contexts over time, generally meaning any physical part of a city predominantly occupied by a particular group of people. It implies that the group may be looked down upon and segregated purposefully. This does not mean that all ghettos are communities and buildings built specifically for a segregation purpose, although many are. In the case of the United States, segregation of the African-American community was to a degree due to white flight out of the cities, rather than the forcing of African Americans to live in the downtown cores. === Gated communities === Gated communities can be seen as a combination of legal frameworks and social conventions regarding segregation. A gated community today is a controlled neighborhood inhabited by people with common interests, such as safety or class separation, but not necessarily of the same ethnicity or religion—it is distinct from an international community (in most cases). Gated communities are controversial, as they can be seen as encouraging distinction and separation from, and therefore a sense of superiority over, those who do not live within the gated community. === Self-segregation === Self-segregation is almost as common as involuntary segregation. Often, immigrants coming to a new and foreign country band together for mutual benefit and to keep a sense of community in the new country. These can be called ethnic enclaves and can be formed by any community or people group. Some well-known groups are Chinatowns, Little Italys and barrios.
These localized phenomena also come in the form of ethnoburbs, which are essentially the same concept as an ethnic enclave, but specifically located in suburbs, rather than the traditional downtowns, where Chinatowns and Little Italys are usually based. == References ==
Wikipedia/Geographical_segregation
Diagnoses of autism have become more frequent since the 1980s, which has led to various controversies about both the cause of autism and the nature of the diagnoses themselves. Whether autism has mainly a genetic or developmental cause, and the degree of coincidence between autism and intellectual disability, are matters of current scientific controversy as well as inquiry. There is also more sociopolitical debate as to whether autism should be considered a disability on its own. == Epidemiology == The currently accepted prevalence of autism spectrum disorder (ASD) is around 1%, although previous research has shown far lower rates of incidence. ASD averages a 4.3:1 male-to-female ratio. The number of children diagnosed with the autism spectrum has increased dramatically since the 1980s, at least partly due to changes in diagnostic practice; it is unclear whether prevalence has actually increased, and as-yet-unidentified environmental risk factors cannot be ruled out. The risk of autism is associated with several prenatal factors, including advanced parental age and diabetes in the mother during pregnancy. ASD is associated with several genetic disorders and with epilepsy. == Genetics == The role of genetic influence on ASD has been heavily researched in recent years. ASD is considered to have polygenic traits, since there is not a single risk factor but multiple ones. Multiple twin and family studies have been conducted in order to observe genetic influence in diagnosing ASD. The chance of both twins having ASD was significantly higher in identical twins than in fraternal twins, indicating that ASD is heritable. A recurring finding is that de novo (new mutation) copy number variants (CNVs) are a primary cause of ASD – they alter synaptic functions; germline mutations can produce de novo CNVs.
These mutations can be passed on to offspring despite being absent in the parents; this explains the phenomenon in which a child has symptoms of ASD but the parents have no symptoms or history of ASD. De novo variants differ from person to person, i.e. one variant can cause ASD in one person, whereas another person would need multiple variants to cause the same disorder. Loss-of-function variants occur in 16–18% of ASD diagnoses, nearly double the rate in the general population. These loss-of-function variants reduce the function of the protein neurexin, which connects neurons at the synapse and is important for neurological development; deletion mutations of neurexin are also very common in people with autism, as well as in other neurological disorders such as schizophrenia, bipolar disorder, and ADHD. There is also debate over the relative influence of nature versus nurture. According to family studies, genetic and environmental factors have an equal influence on the risk of ASD. == Bacteria == The gut microbiome has been linked to ASD. Excess Clostridia spp. were found in children with ASD and gastrointestinal difficulties; Clostridia spp. produce propionic acid, levels of which are altered, or in excess, in people with ASD. Specifically, C. tetani and C. histolyticum are two species of this bacteria that affect people with ASD. C. tetani produces tetanus neurotoxin in the intestinal tract; C. histolyticum is a toxin producer that is abundant in people diagnosed with ASD. Both of these could contribute to neurological symptoms. == Vaccines == A later-retracted article from The Lancet making false claims provoked concern about vaccines among parents. Its author was found to be on the payroll of litigants against vaccine manufacturers. The idea of a link between vaccines and autism was extensively investigated and shown to be false. The scientific consensus is that there is no relationship, causal or otherwise, between vaccines and incidence of autism, and vaccine ingredients do not cause autism.
Nevertheless, the anti-vaccination movement continues to promote myths, conspiracy theories and misinformation linking the two. A developing tactic appears to be the "promotion of irrelevant research [as] an active aggregation of several questionable or peripherally related research studies in an attempt to justify the science underlying a questionable claim." == Intelligence == The percentage of autistic individuals who also meet criteria for intellectual disability has been reported as anywhere from 25% to 70%, a wide variation illustrating the difficulty of assessing autistic intelligence. For pervasive developmental disorder not otherwise specified (PDD-NOS), the association with intellectual disability is much weaker. The diagnosis of Asperger syndrome excludes clinically significant delays in mental or cognitive skills. A 2007 study suggested that Raven's Progressive Matrices (RPM), a test of abstract reasoning, may be a better indicator of intelligence for autistic children than the more commonly used Wechsler Intelligence Scale for Children (WISC). Researchers suspected that the WISC relied too heavily on language to be an accurate measure of intelligence for autistic individuals. Their study revealed that the neurotypical children scored similarly on both tests, but the autistic children fared far better on the RPM than on the WISC. The RPM measures abstract, general and fluid reasoning, an ability autistic individuals have been presumed to lack. A 2008 study found a similar effect, but to a much lesser degree and only for individuals with IQs less than 85 on the Wechsler scales. == Facilitated communication == Facilitated communication (FC) is a scientifically discredited technique that attempts to facilitate communication by people with severe educational and communication disabilities. The facilitator holds or gently touches the disabled person's arm or hand during this process and attempts to help them move to type on a special keyboard. 
It was used by many hopeful parents of individuals with autism when it was first introduced during the early 1990s by Douglas Biklen, a professor at Syracuse University. There is widespread agreement within the scientific community and multiple disability advocacy organizations that FC is not a valid technique for authentically augmenting the communication skills of those with autism spectrum disorder. Instead, research indicates that the facilitator is the source of the messages obtained through FC (involving ideomotor effect guidance of the arm of the patient by the facilitator). Thus, studies have consistently found that patients are unable to provide the correct response to even simple questions when the facilitator does not know the answers to the questions (e.g., showing the patient but not the facilitator an object). In addition, numerous cases have been reported by investigators in which disabled persons were assumed by facilitators to be typing a coherent message while the patient's eyes were closed or while they were looking away from or showing no particular interest in the letter board. Despite the evidence opposing FC, many continue to use and promote this technique. Note that facilitated communication is separate and different from a range of scientifically supported augmentative and alternative communication (AAC) devices and processes that facilitate communication for people with communication difficulties. == Advocacy initiatives == There are two major conceptualizations of autism within autism advocacy. Those who favour the pathology paradigm, which aligns with the medical model of disability, see autism as a disorder to be treated or cured. Those who favor the pathology paradigm argue that atypical behaviors of autistic individuals are detrimental and should therefore be reduced or eliminated through behavior modification therapies. 
Their advocacy efforts focus primarily on medical research to identify genetic and environmental risk factors in autism. Those who favour the neurodiversity paradigm, which aligns with the social model of disability, see autism as a naturally occurring variation in the brain. Neurodiversity advocates argue that efforts to eliminate autism should not be compared, for example, to curing cancer, but instead to the antiquated notion of curing left-handedness. Their advocacy efforts focus primarily on acceptance, accommodation, and support for autistic people as "neuro-minorities" in society. These two paradigms are not fully exclusive, and many people hold a combination of these viewpoints. === Pathology paradigm === The pathology paradigm is the traditional view of autism through a biomedical lens, in which it is seen as a disorder characterized by various impairments, mainly in communication and social interaction. Those taking this perspective believe that autism is generally a harmful dysfunction. Ways of functioning which diverge from a typical brain are perceived as harmful and disordered and must therefore be treated or cured. The atypical behaviors of autistic individuals are considered a detriment to social and professional success and should therefore be reduced or eliminated through therapy. Advocates with this view include a small but significant minority of autistic adults and a large majority of parents of autistic children; compared with those adopting the neurodiversity paradigm, this group contains a higher percentage of parents. These advocates believe that medical research is necessary to address the rapid rise in autism diagnoses (sometimes referred to as the "autism epidemic"), reduce suffering, and provide the best outcomes for autistic individuals.
In addition to etiological research, other areas of focus may include biology, diagnosis, and treatment, including medication, behavioural and psychological interventions, and the treatment of co-existing medical conditions. Advocacy groups that focus primarily on medical research include Autism Speaks and its predecessor organizations (the Autism Coalition for Research and Education, the National Alliance for Autism Research, and Cure Autism Now), the Autism Science Foundation, and the former Autism Research Institute. === Neurodiversity paradigm === The neurodiversity paradigm is a view of autism as a different way of being rather than as a disease or disorder that must be cured. Autistic people are considered to have neurocognitive differences which give them distinct strengths and weaknesses, and are capable of succeeding when appropriately accommodated and supported. The belief is that efforts to eliminate autism should not be compared, for example, to curing cancer but instead to the antiquated notion of curing left-handedness. There is no leader of the neurodiversity movement, and little academic research has been conducted on it as a social phenomenon. As such, proponents of the neurodiversity paradigm have heterogeneous beliefs, but are consistent in the view that autism cannot be separated from an autistic person. Advocacy efforts may include accommodations in schools and work environments, lobbying for the inclusion of autistic people when making decisions that affect them, and opposition to therapies that aim to make children "indistinguishable from their peers". Neurodiversity advocates are opposed to medical research for a cure, believing that it will lead to eugenics, and instead support research that helps autistic people thrive as they are.
For example, NeuroTribes author Steve Silberman noted a lack of research with regard to seizure-controlling drugs and autistic brains; that sensory differences in autistic people were unheard of until Temple Grandin spoke about her experiences; and that only a small percentage of research funding goes towards the needs of autistic adults. Advocacy groups that focus primarily on acceptance and accommodation include Autism Network International, Autism National Committee, Autistic Self Advocacy Network, and Autistic Women & Nonbinary Network. == Further reading == Decoteau, C. L., & Daniel, M. (2020). Scientific Hegemony and the Field of Autism. American Sociological Review. == See also == Neurodiversity Employment of autistic people == References ==
Wikipedia/Medical_model_of_autism
Disparate impact in the law of the United States refers to practices in employment, housing, and other areas that adversely affect one group of people with a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws consider race, color, religion, national origin, and sex to be protected characteristics, and some laws include disability status and other traits as well. A violation of Title VII of the 1964 Civil Rights Act may be proven by showing that an employment practice or policy has a disproportionately adverse effect on members of the protected class as compared with non-members of the protected class. Therefore, the disparate impact theory under Title VII prohibits employers "from using a facially neutral employment practice that has an unjustified adverse impact on members of a protected class. A facially neutral employment practice is one that does not appear to be discriminatory on its face; rather it is one that is discriminatory in its application or effect." Where a disparate impact is shown, the plaintiff can prevail without the necessity of showing intentional discrimination unless the defendant employer demonstrates that the practice or policy in question has a demonstrable relationship to the requirements of the job in question. This is the "business necessity" defense. Some civil rights laws, such as Title VI of the Civil Rights Act of 1964, do not contain disparate impact provisions creating a private right of action, although the federal government may still pursue disparate impact claims under these laws. Although the Age Discrimination in Employment Act of 1967 and the Fair Housing Act of 1968 do not contain explicit disparate impact provisions, the U.S. Supreme Court has held that both create a cause of action for disparate impact.
During the second presidency of Donald Trump, US agencies were ordered to stop using disparate impact in civil rights cases. The idea of disparate impact has been applied within the EU with respect to systemic discrimination and substantive equality. == Substantive equality == Disparate impact is a violation of substantive equality, the equality of outcomes for groups. In contrast, disparate treatment is a violation of formal equal opportunity. == Adverse impact == While disparate impact is a legal theory of liability under Title VII, adverse impact is one element of that doctrine, which measures the effect an employment practice has on a class protected by Title VII. In the Uniform Guidelines on Employee Selection Procedures, an adverse impact is defined as a "substantially different rate of selection in hiring, promotion, or other employment decision which works to the disadvantage of members of a race, sex, or ethnic group". A "substantially different" rate is typically defined in government enforcement or Title VII litigation settings using the 80% Rule, statistical significance tests, and/or practical significance tests. Adverse impact is often used interchangeably with "disparate impact", which was a legal term coined in one of the most significant U.S. Supreme Court rulings on disparate or adverse impact: Griggs v. Duke Power Co., 1971. Adverse impact does not mean that an individual in a majority group is given preference over a minority group. However, having adverse impact does mean that there is the "potential" for discrimination in the hiring process and it could warrant investigation. == The 80% rule == The 80% test was originally framed by a panel of 32 professionals (called the Technical Advisory Committee on Testing, or TACT) assembled by the State of California Fair Employment Practice Commission (FEPC) in 1971, which published the State of California Guidelines on Employee Selection Procedures in October 1972.
This was the first official government document that listed the 80% test in the context of adverse impact, and was later codified in the 1978 Uniform Guidelines on Employee Selection Procedures, a document used by the U.S. Equal Employment Opportunity Commission (EEOC), Department of Labor, and Department of Justice in Title VII enforcement. Originally, the Uniform Guidelines on Employee Selection Procedures provided a simple "80 percent" rule for determining that a company's selection system was having an "adverse impact" on a minority group. The rule was based on the rates at which job applicants were hired. For example, if XYZ Company hired 50 percent of the men applying for work in a predominantly male occupation while hiring only 20 percent of the female applicants, one could look at the ratio of those two hiring rates to judge whether there might be a discrimination problem. The ratio of 20:50 means that the rate of hiring for female applicants is only 40 percent of the rate of hiring for male applicants. That is, 20 divided by 50 equals 0.40, which is equivalent to 40 percent. Clearly, 40 percent is well below the 80 percent that was arbitrarily set as an acceptable difference in hiring rates. Therefore, in this example, XYZ Company could have been called upon to prove that there was a legitimate reason for hiring men at a rate so much higher than the rate of hiring women. Since the 1980s, courts in the U.S. have questioned the arbitrary nature of the 80 percent rule, making the rule less important than it was when the Uniform Guidelines were first published. A 2007 memorandum from the U.S. Equal Employment Opportunity Commission suggests that a more defensible standard would be based on comparing a company's hiring rate of a particular group with the rate that would occur if the company simply selected people at random. === False positives === The 80% threshold can result in a large number of false positives.
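The XYZ Company arithmetic above can be sketched directly in code. This is a minimal illustration of the four-fifths computation only, not a legal test; the applicant totals of 100 per group are invented so that the 20% and 50% hiring rates from the example come out exactly:

```python
def adverse_impact_ratio(selected_focal, total_focal, selected_reference, total_reference):
    """Ratio of the focal group's selection rate to the reference group's rate.

    Under the four-fifths (80%) rule, a ratio below 0.8 flags potential
    adverse impact against the focal group.
    """
    focal_rate = selected_focal / total_focal              # e.g. 20 / 100 = 0.20
    reference_rate = selected_reference / total_reference  # e.g. 50 / 100 = 0.50
    return focal_rate / reference_rate

# XYZ Company example: 20% of female applicants hired vs. 50% of male applicants
ratio = adverse_impact_ratio(20, 100, 50, 100)
print(ratio)         # 0.4 -> well below the 0.8 threshold
print(ratio < 0.8)   # True -> potential adverse impact
```

Note that the comparison uses selection rates, not raw counts, so the invented totals only need to reproduce the rates in the example.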
We can convert between measures of effect size using the relationships {\displaystyle d={{\sqrt {3}} \over {\pi }}\log {\text{OR}},\quad d=2{\sqrt {\rho ^{2} \over {1-\rho ^{2}}}},\quad \mathbb {P} (X>Y)=\Phi (d/{\sqrt {2}})} where d is Cohen's d, OR is the odds ratio, ρ is the Pearson correlation, and Φ(⋅) is the standard normal cumulative distribution function. The coefficient of determination R² is the square of the correlation. The term P(X>Y) is the probability that a member of group X obtains a score greater than a member of group Y. Since odds ratios are often used to determine whether there is a disparate impact, these relationships let us translate an odds-ratio threshold into the other measures of effect size. Using these different measures, we can quantify the size of a gap under several common interpretations of effect size: the amount of explained variation (coefficient of determination), the difference in terms of standard deviations (Cohen's d), and the probability of a greater score. If we take the 80% rule to apply via the odds ratio, the threshold odds ratio for assuming discrimination is 1.25; the other measures of effect size are therefore {\displaystyle \rho =0.061,\quad R^{2}=0.004,\quad d=0.123,\quad \mathbb {P} (X>Y)=0.535} This implies that discrimination is presumed to exist if 0.4% of the variation in outcomes is explained and there is a 0.123 standard deviation difference between the two groups. Both of these quantities are small enough to raise significant concerns about finding false positive instances of discrimination at an unacceptable rate.
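The conversions above are easy to check numerically. The sketch below (a minimal illustration; the function name is ours) inverts d = 2√(ρ²/(1−ρ²)) to get ρ = d/√(d²+4), and reproduces the threshold values quoted for an odds ratio of 1.25:

```python
import math

def effect_sizes_from_odds_ratio(odds_ratio):
    """Convert an odds ratio into Cohen's d, Pearson's rho, R^2, and P(X > Y)."""
    # d = (sqrt(3) / pi) * ln(OR)
    d = (math.sqrt(3) / math.pi) * math.log(odds_ratio)
    # Inverting d = 2 * sqrt(rho^2 / (1 - rho^2)) gives rho = d / sqrt(d^2 + 4)
    rho = d / math.sqrt(d * d + 4)
    # Common-language effect size P(X > Y) = Phi(d / sqrt(2)), with the standard
    # normal CDF written via erf: Phi(z) = (1 + erf(z / sqrt(2))) / 2
    z = d / math.sqrt(2)
    p_greater = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return {"d": d, "rho": rho, "r_squared": rho ** 2, "p_greater": p_greater}

# The 80% rule corresponds to a threshold odds ratio of 1 / 0.8 = 1.25
sizes = effect_sizes_from_odds_ratio(1.25)
print({k: round(v, 3) for k, v in sizes.items()})
# -> {'d': 0.123, 'rho': 0.061, 'r_squared': 0.004, 'p_greater': 0.535}
```

The rounded output matches the figures given in the text for the 1.25 threshold.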
A greater threshold for presuming that disparities are due to discrimination, such as an odds ratio of 2–3, is less likely to produce false positives. === More advanced statistical tests === The concept of practical significance for adverse impact was first introduced by Section 4D of the Uniform Guidelines, which states "Smaller differences in selection rate may nevertheless constitute adverse impact, where they are significant in both statistical and practical terms ..." Several federal court cases have applied practical significance tests to adverse impact analyses to assess the "practicality" or "stability" of the results. This is typically done by evaluating the change to the statistical significance tests after hypothetically changing focal group members' selection status from "failing" to "passing" (see, for example, Contreras v. City of Los Angeles (656 F.2d 1267, 9th Cir. 1981); U.S. v. Commonwealth of Virginia (569 F.2d 1300, 4th Cir. 1978); and Waisome v. Port Authority (948 F.2d 1370, 1376, 2d Cir. 1991)). == Unintentional discrimination == This form of discrimination occurs where an employer does not intend to discriminate; on the contrary, it occurs when identical standards or procedures are applied to everyone, despite the fact that they lead to a substantial difference in employment outcomes for the members of a particular group and are unrelated to successful job performance. An important thing to note is that disparate impact is not, in and of itself, illegal. This is because disparate impact only becomes illegal if the employer cannot justify the employment practice causing the adverse impact as "job related for the position in question and consistent with business necessity" (the "business necessity" defense). == The Fair Housing Act == The disparate impact theory also has application in the housing context under Title VIII of the Civil Rights Act of 1968, also known as the Fair Housing Act.
The ten federal appellate courts that have addressed the issue have all determined that one may establish a Fair Housing Act violation through the disparate impact theory of liability. The U.S. Department of Housing and Urban Development's Office of Fair Housing and Equal Opportunity, the federal agency that administers the Fair Housing Act, issued a proposed regulation on November 16, 2011, setting forth how HUD applies disparate impact in Fair Housing Act cases. On February 8, 2013, HUD issued its Final Rule. Until 2015, the U.S. Supreme Court had not yet determined whether the Fair Housing Act allowed for claims of disparate impact. This question had reached the Supreme Court twice since 2012, first in Magner v. Gallagher and then in Township of Mount Holly v. Mount Holly Gardens Citizens. Both cases settled before the Supreme Court could issue a decision; the Obama administration had encouraged settlement, as civil rights groups feared that a Supreme Court ruling on the issue would be hostile to disparate impact theories, and thus weaken housing discrimination enforcement. On June 25, 2015, by a 5–4 decision in Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc., the Supreme Court held that disparate-impact claims are cognizable under the Fair Housing Act. In an opinion by Justice Kennedy, the Court wrote: "Recognition of disparate-impact claims is also consistent with the central purpose of the FHA, which, like Title VII and the ADEA, was enacted to eradicate discriminatory practices within a sector of the Nation's economy.
Suits targeting unlawful zoning laws and other housing restrictions that unfairly exclude minorities from certain neighborhoods without sufficient justification are at the heartland of disparate-impact liability...Recognition of disparate impact liability under the FHA plays an important role in uncovering discriminatory intent: it permits plaintiffs to counteract unconscious prejudices and disguised animus that escape easy classification as disparate treatment." Under the Court's ruling in Inclusive Communities, in order to prove a case of disparate impact housing discrimination, the following must occur: First, a plaintiff must make out a prima facie case, drawing an explicit, causal connection between a policy or practice and the disparate impact or statistical disparity. As Justice Kennedy wrote, "A disparate-impact claim relying on a statistical disparity must fail if the plaintiff cannot point to a defendant's policy or policies causing that disparity." Justice Kennedy also noted that "policies are not contrary to the disparate-impact requirement unless they are artificial, arbitrary, and unnecessary barriers". Second, a defendant must have the opportunity to prove "that the challenged practice is necessary to achieve one or more substantial, legitimate, non-discriminatory interests". If a defendant cannot do so, then a plaintiff's claim of disparate impact must prevail. Finally, if the defendant has "satisfied its burden at step two", the plaintiff may "prevail upon proving that the substantial, legitimate, nondiscriminatory interests supporting the challenged practice could be served by another [i.e. alternative] practice that has a less discriminatory effect". If a plaintiff cannot do so, then their disparate impact claim must fail. == Criticism == The disparate impact theory is in contrast with disparate treatment provisions under civil rights laws as well as the U.S. Constitution's guarantee of equal protection. 
For example, if a hypothetical fire department used a 100-pound test, that policy might disproportionately exclude female job applicants from employment. Under the 80% rule mentioned above, unsuccessful female job applicants would have a prima facie case of disparate impact "discrimination" against the department if they passed the 100-pound test at a rate less than 80% of the rate at which men passed the test. In order to avoid a lawsuit by the female job applicants, the department might refuse to hire anyone from its applicant pool—in other words, the department may refuse to hire anyone because too many of the successful job applicants were male. Thus, the employer would have intentionally discriminated against the successful male job applicants because of their gender, and that likely amounts to illegal disparate treatment and a violation of the Constitution's right to equal protection. In the 2009 case Ricci v. DeStefano, the U.S. Supreme Court did rule that a fire department committed illegal disparate treatment by refusing to promote white firefighters, in an effort to avoid disparate impact liability in a potential lawsuit by black and Hispanic firefighters who disproportionately failed the required tests for promotion. Although the Court in that case did not reach the constitutional issue, Justice Scalia's concurring opinion suggested the fire department also violated the constitutional right to equal protection. Even before Ricci, lower federal courts had ruled that actions taken to avoid potential disparate impact liability violate the constitutional right to equal protection. One such case is Biondo v. City of Chicago, Illinois, from the Seventh Circuit. Thomas Sowell has argued that assuming that disparities in outcomes are caused by discrimination is a logical fallacy. === Liability for criminal acts committed by employees === In 2013, the Equal Employment Opportunity Commission (EEOC) filed a suit, EEOC v.
Freeman, against the use of typical criminal-background and credit checks during the hiring process. While admitting that there are many legitimate and race-neutral reasons for employers to screen out convicted criminals and debtors, the EEOC presented the theory that this practice is discriminatory because minorities in the U.S. are more likely to be convicted criminals with bad credit histories than White Americans. Ergo, employers should have to include criminals and debtors in their hiring. In this instance, U.S. District Judge Roger Titus ruled firmly against the disparate impact theory, stating that the EEOC's action had been "a theory in search of facts to support it": "By bringing actions of this nature, the EEOC has placed many employers in the "Hobson's choice" of ignoring criminal history and credit background, thus exposing themselves to potential liability for criminal and fraudulent acts committed by employees, on the one hand, or incurring the wrath of the EEOC for having utilized information deemed fundamental by most employers. Something more... must be utilized to justify a disparate impact claim based upon criminal history and credit checks. To require less, would be to condemn the use of common sense, and this is simply not what the laws of this country require." === Confounding === Disparities may be affected by underlying variables (confounding), which would imply that the disparity is due to underlying differences that are not predicated on group membership.
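The confounder-adjusted comparison described in this section is commonly implemented as a multiple regression with a group-membership indicator, as in the model formalized below. Here is a minimal synthetic sketch using numpy; the data, coefficients, and variable names are all invented for illustration, constructed so that the raw pay gap is driven entirely by an experience confounder (the true group effect γ is zero):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: group 1 happens to have more experience on average,
# and pay depends only on experience -- the true gamma is 0 by construction.
group = rng.integers(0, 2, n)                    # G in {0, 1}
experience = rng.uniform(0, 20, n) + 5 * group   # confounding variable x1
pay = 30000 + 2000 * experience + rng.normal(0, 5000, n)

# The raw group gap looks large because of the confounder...
raw_gap = pay[group == 1].mean() - pay[group == 0].mean()  # roughly 2000 * 5

# ...but an ordinary-least-squares fit of
#   y = beta0 + beta1 * experience + gamma * G + eps
# recovers a gamma close to 0 once experience is controlled for.
X = np.column_stack([np.ones(n), experience, group])
beta0, beta1, gamma = np.linalg.lstsq(X, pay, rcond=None)[0]
print(f"raw gap ~ {raw_gap:.0f}, adjusted gamma ~ {gamma:.0f}")
```

With real data one would also report a standard error or confidence interval for γ; a point estimate alone cannot distinguish a small true effect from noise.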
When investigating whether a pay disparity between two groups is due to discrimination, we may construct a multiple regression model for pay y as {\displaystyle y=\underbrace {\beta _{0}} _{\text{Intercept}}+\underbrace {\sum _{i=1}^{p}\beta _{i}x_{i}} _{\text{Confounding Variables}}+\underbrace {\gamma G} _{\text{Effect of Group Membership}}+\underbrace {\epsilon } _{\text{Error Term}}} where the x_i are the confounding variables, G ∈ {0, 1} is a dichotomous variable indicating group membership, and ε ~ N(0, σ²) is a normally distributed error term. After correcting for the potentially confounding variables in a regression model, we should be able to tell whether there is still an effect of group membership on the quantity of interest. If we have not omitted any important confounding variables and have not engaged in p-hacking, then a statistically significant |γ| > 0 suggests a very good possibility of positive or negative discrimination. For example, the following disparities can be explained by confounders. Under-representation of women among firefighters can be explained by firefighting requiring physical strength and by sex differences in human physiology. Among Uber drivers, a 7% pay gap between men and women was explained by three factors: where and when rides originate (i.e., time and location), amount of driver experience, and driving speed. And while racial differences in police use of less-than-deadly force in the United States still exist after accounting for confounding variables, there does not appear to be any relationship between race and the use of deadly force once confounders are taken into account. == Relevant case law == Griggs v. Duke Power Company, 401 U.S.
424 (1971) — established theory of disparate impact; held Title VII of the Civil Rights Act of 1964 authorizes disparate impact lawsuits Lau v. Nichols, 414 U.S. 563 (1974) Albemarle Paper Co. v. Moody, 422 U.S. 405 (1975) Washington v. Davis, 426 U.S. 229 (1976) Village of Arlington Heights v. Metropolitan Housing Development Corp., 429 U.S. 252 (1977) Teamsters v. United States, 431 U.S. 324 (1977) Hazelwood School District v. United States, 433 U.S. 299 (1977) Dothard v. Rawlinson, 433 U.S. 321 (1977) Connecticut v. Teal, 457 U.S. 440 (1982) General Bldg. Contractors Assn., Inc. v. Pennsylvania, 458 U.S. 375 (1982) Guardians Assn. v. Civil Svc. Comm'n, 463 U.S. 582 (1983) Watson v. Fort Worth Bank & Trust, 487 U.S. 977 (1988) Town of Huntington v. NAACP, 488 U.S. 15 (1988) Wards Cove Packing Co. v. Atonio, 490 U.S. 642 (1989) Alexander v. Sandoval, 532 U.S. 275 (2001) — held Title VI of the Civil Rights Act of 1964 does not authorize disparate impact lawsuits by private citizens Raytheon Co. v. Hernandez, 540 U.S. 44 (2003) Smith v. City of Jackson, 544 U.S. 228 (2005) — held the Age Discrimination in Employment Act of 1967 authorizes disparate impact lawsuits Meacham v. Knolls Atomic Power Laboratory, 554 U.S. 84 (2008) Ricci v. DeStefano, 557 U.S. 557 (2009) Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc., No. 13-1371, 576 U.S. ___ (2015) — held the Fair Housing Act authorizes disparate impact lawsuits Brnovich v. Democratic National Committee, No. 19-1257, 594 U.S. ___ (2021) — curtailed the use of disparate impact to evaluate "vote denial" claims brought under Section 2 of the Voting Rights Act of 1965 Marietta Memorial Hospital Employee Health Benefit Plan v. Davita Inc., No. 20-1641, 596 U.S. 
___ (2022) == See also == Abella commission Affirmative action Disparate treatment Diversity, Equity, and Inclusion Indirect discrimination (Sufficient disparate impact is equivalent) Intelligence and public policy Jurimetrics Office of Fair Housing and Equal Opportunity Simpson's paradox#UC Berkeley gender bias == References == == External links == The Office of Fair Housing and Equal Opportunity Free online software that assesses whether or not disparate impact has occurred Explanation of disparate impact under the Fair Housing Act and example briefs
Wikipedia/Disparate_impact
Discrimination against people with substance use disorders is a form of discrimination against people with this disease. In the United States, people with substance use disorders are often blamed for their disease, which is often seen as a moral failing, due to a lack of public understanding that substance use disorders are diseases of the brain with 40-60% heritability. People with substance use disorders are likely to be stigmatized, whether in society or healthcare. In the process of stigmatization, people with substance use disorders are stereotyped as having a particular set of undesirable traits, in turn causing other individuals to act in a fearful or prejudicial manner toward them. == Background == Drug use discrimination is the unequal treatment people experience because of the drugs they use. People who use or have used illicit drugs may face discrimination in employment, welfare, housing, child custody, and travel, in addition to imprisonment, asset forfeiture, and in some cases forced labor, torture, and execution. Though often prejudicially stereotyped as deviants and misfits, most drug users are well-adjusted and productive members of society. Drug prohibitions may have been partly motivated by racism and other prejudice against minorities, and racial disparities have been found to exist in the enforcement and prosecution of drug laws. Discrimination due to illicit drug use was the most commonly reported type of discrimination among Blacks and Latinos in a 2003 study of minority drug users in New York City, occurring at double to triple the rate of discrimination due to race. People who use legal drugs such as tobacco and prescription medications may also face discrimination. === Individual factors === Clinicians use DSM-5-TR criteria to establish whether a person has a substance use disorder, which may be classified as mild, moderate, or severe.
It may also be ruled out, as some people may use substances or may be prescribed controlled substances that have the potential for addiction, but never go on to develop a substance use disorder. Addictive substances include stimulants (caffeine, cocaine, amphetamine, methamphetamine, ephedra, etc.), sedatives/anxiolytics (benzodiazepines, barbiturates, Quaaludes, etc.), opioids (oxycodone, fentanyl, etc.), alcohol, nicotine/tobacco, cannabis, dissociatives (ketamine, nitrous oxide, etc.), certain hallucinogens (especially MDMA), hormones (testosterone), GHB, kratom, GABAergic agents (such as gabapentin), and more. The term addiction usually correlates with a severe substance use disorder. Addiction is characterized by behavior that is originally voluntary and reward-seeking but that, over time, becomes compulsive, with a desire to avoid dysphoria or withdrawal rather than to experience the original positive effects. A person may become physiologically dependent, experience withdrawal, and experience significant cravings. Addiction does not degrade a person's character, but people may engage in illicit behaviors, such as buying controlled substances or engaging in prostitution, to fund their addiction. Since these behaviors are illegal and they may face legal issues as a result, people who use drugs may not be forthcoming about these practices and may also delay seeking medical treatment for sequelae related to substance use out of fear of stigma or legal consequences. === Institutional basis === Stigma by health care professionals has many contributing factors. The first is a well-documented, decades-long lack of education on substance use disorders in many healthcare professions, such as medicine, nursing, and pharmacy. Due to this gap in educational curricula and competency, healthcare professionals may be unaware of how much of what they assume to be true about treatment and people with substance use disorders is neither evidence-based nor factual.
Very few providers are certified in addiction treatment; addiction is often thought of as a subspecialty, with providers holding an initial certification in another specialty area, such as psychiatry. Healthcare professionals may hold biases similar to those of the general US population, who often see substance use disorders as a moral failure rather than a chronic brain disease that has significant contributory racial and psychosocial factors, with 40-60% heritability. Healthcare providers may perpetuate stigma when they use language that is stigmatizing or non-factual, or when they refuse to provide care that is evidence-based or person-centered as a result of their lack of competency or biases, which may be subconscious. They may also believe the stereotypes about people with substance use disorders, or about drugs, that the general population holds. These include the belief that people with SUDs cannot get better, even though they have relapse rates similar to those of people with diabetes or hypertension, and most people with SUDs recover without treatment. What is more, medications for opioid use disorder are highly effective at preventing relapse; methadone and buprenorphine specifically also reduce all-cause mortality by more than 70% in patients with OUD. However, medical providers may hold the false belief that in being prescribed these, patients are "substituting one drug for another". Medications for Opioid Use Disorder (MOUD), Medications for Alcohol Use Disorder (MAUD), and medications for Tobacco Use Disorder are widely used in the United States healthcare system. However, regulatory and legal barriers to methadone and buprenorphine prescription have limited their utility. Under current law, people on methadone have to go to SAMHSA-approved OTPs (special facilities) to receive this medication, on a nearly daily basis.
Until recently, providers who wanted to prescribe buprenorphine had to be X-waivered, receiving additional education in order to prescribe, and were limited in the number of patients they could prescribe for; this requirement has now been removed. SAMHSA and ASAM have clinical guidelines for medications for addiction treatment, withdrawal management, and non-pharmacological therapeutic interventions as well. There are several terms related to addiction that are not stages of addiction per se. Tolerance means the need to take more of a substance than before to achieve the desired effect; it does not matter what the desired effect is (pain control, focus, euphoria, etc.). Tolerance to opioids can develop rapidly. Tolerance may occur in addiction but also occurs in those who are prescribed certain medications and who may not meet DSM-5-TR criteria for a substance use disorder. Tolerance quickly develops to the effects of hallucinogens after a few administrations, which is part of the reason why addiction to this class of substances is rare. When people describe dependence in addiction treatment, they often mean physiological rather than psychological dependence. If someone is physiologically dependent, they are likely to experience withdrawal once the substance is removed, as their body has become accustomed to the substance's presence, which has altered the body's homeostasis. Withdrawal symptoms are often unpleasant and are typically the opposite of those experienced during intoxication. Rarely, withdrawal can be life-threatening; this may occur in patients with benzodiazepine, alcohol, or barbiturate dependence, for example. Withdrawal is not a stage of addiction but is often a symptom, although a few drugs do not have withdrawal as part of their DSM-5-TR criteria (hallucinogens and cannabis). People with substance use disorders may have co-occurring mental health disorders, substance-induced mental disorders, both, or no mental health disorders.
Substance use disorders are not themselves thought of as mental health disorders, but they can induce acute symptoms such as mood alterations or psychosis, depending on the drug and whether a person is intoxicated or experiencing withdrawal. In some cases, symptoms can persist into active recovery (methamphetamine-induced psychosis specifically can last up to 2 years following discontinuation of methamphetamine, and hallucinogen persisting perception disorder can last several months following discontinuation). Generally, however, substance-induced mental health symptoms are time-limited, clearing up within one month or less based on DSM-5-TR criteria. If symptoms persist for longer following discontinuation, providers may consider a mental health disorder as primary, or as stemming from a different etiology, rather than as substance-induced. Thus, the person's diagnosis may change based on the timeframe and symptoms following discontinuation. People with substance use disorders still have agency, but it may be very difficult to control cravings and urges in early recovery. Therefore, people who use drugs and are in early recovery may avoid people, places, or things that serve as triggers for these. If a trigger is unavoidable, they may urge surf, call a friend or sponsor, or use other coping mechanisms to distract themselves or ride the cravings and urges out. While much substance use is initially voluntary, this is not always the case. People may be exposed to substances in utero, or involuntarily at first in childhood (especially if parents use or manufacture drugs) or in adulthood (e.g., GHB, Rohypnol, etc.). Some people may use substances that they do not know are contaminated (laced) with other substances, such as fentanyl, nitazenes, or xylazine. This is a significant problem, with the DEA reporting that fentanyl-related overdoses are the #1 cause of death among people aged 18-45.
Although 1 in 10 people in the United States will meet criteria for a substance use disorder in their lifetime, few will receive treatment, due in part to the complexity of health care systems. Most health care systems do not have insurance coverage for addiction recovery, and many health care providers have little to no training in treating addiction. Some doctors do not feel comfortable treating addictions due to their lack of knowledge and training on the topic. The American Society of Addiction Medicine reports that there are only 3,000 board-certified addiction specialist physicians in the United States, while there are nearly 2 million people experiencing opioid addiction. The limited presence of and access to comprehensive care for addiction poses a barrier to recovery for many, particularly those from lower socioeconomic backgrounds. ==== Role of language ==== Stigma founded in societal preconceptions about substance dependence often perpetuates discrimination against those with substance use disorder (SUD). How language regarding SUD is framed plays an important role in mediating the stigma experienced by those with the condition, which can consequently shape critical outcomes for this population such as treatment contact, social isolation, and attitudes towards healthcare providers. Shifting towards person-first language has been emphasized in healthcare provider circles to mitigate such stigma. For instance, as opposed to saying "former addict" or "reformed addict", the National Institute on Drug Abuse (NIDA) recommends language such as "person in recovery" or "person who previously used drugs" to separate the problem from the individual. The NIDA additionally applies a similar framework to terminology such as "clean" or "dirty" to denote whether or not someone is actively using, citing that such vocabulary holds punitive connotations.
Moreover, SUD policy reform advocates report that language around SUD can misconstrue associated medical treatment practices, which in turn poses barriers to expanded harm reduction efforts being adopted. An example, provided in a 2017 executive memorandum from The National Prevention Council, was a recommendation to phase out the term "opioid substitution replacement therapy" in favor of "opioid agonist therapy", since many believe the former falsely suggests that an individual is substituting one addiction for another (i.e. from heroin to methadone). Another term that is losing favor is "abuse", due to its negative connotations and its impact on patient care. Instead, the terms "misuse" for prescribed medications or "use" for illicit substances are used. Similarly, "user" is avoided in favor of person-first language, such as "person with a substance use disorder" or "person who uses intravenously". ==== Drugs and HIV infection ==== Among people who use drugs intravenously, the incidence of HIV and hepatitis C infection is higher than among those who administer drugs through other routes. However, punitive and discriminatory measures against people who use drugs have not been able to eliminate either the spread of drug addiction or HIV. Researchers say that around 90% of people who inject drugs have missed prior opportunities for HIV testing that were provided. This is why annual screening for hepatitis C and HIV is recommended for patients who use drugs. Also, in states where it is legal, people may use syringe service programs or use at safe consumption sites. Patients may also employ other harm reduction measures, such as aseptic technique, to reduce their risk of exposure to infectious disease. The website NextDistro has harm reduction resources for people who use drugs to minimize the risk associated with intravenous use. == Regional patterns == === Africa === In Africa, approximately 28 million people use substances.
This number is impacted by the rising availability of drugs that can be administered intravenously, such as heroin, cocaine, and methamphetamine. Socio-demographic factors are often primary determinants of the health status of people who use drugs. These factors contribute to an individual's drug use behaviors, such as the sharing of needles and the solicitation of sex in exchange for police protection or more drugs. Nutritional status, family support, stigma/discrimination, adherence to medication, and recovery from addiction are also impacted by these socio-demographic factors. Research shows that the majority of people who use drugs transition from the use of non-soluble substances to substances that can be used intravenously, or end up using both simultaneously. ==== Kenya ==== In Kenya there is a link between injection-related discrimination, mental health, physical health, and the quality of life for those who inject drugs. The rates of discrimination are linked to higher levels of psychological distress and risky behaviors. Women in Kenya account for 10% of people who use drugs. These women tend to experience the typical discrimination faced by people who use drugs in addition to gender-related discrimination. Levels of discrimination are often higher for those who are also HIV positive. ==== Tanzania ==== The Tanzanian government initiated support for substance-dependence treatment rehabilitation in the latter 20th century, with the Ministry of Health administering the Treatment II center network to oversee this care. Treatment centers and harm reduction efforts in Tanzania have come into conflict with recent discourse from politicians, such as President John Magufuli, who established the nation's war on drugs in early 2017. Calling for the arrest of anyone involved in narcotics, Magufuli's stance is distinct from the growing harm reduction pathways established in sub-Saharan Africa in the early decades of the 2000s.
This wave of criminalization policy aims to redress the issue of those who use drugs being primarily targeted by law enforcement, rather than other individuals involved in the trafficking schema. Tanzania's policing of injection drug use has encouraged both consumers and traffickers to further entrench themselves in the nation's black market, with injection drug users consequently being more likely to be involved in sex work and other illicit trafficking, rather than engage in traditional employment opportunities which risk greater exposure. Populations that exist at this intersection, for instance Tanzanian women sex workers who engage in injection drug use, are alienated from utilizing risk reduction interventions due to fear of arrest. Low-income, urban young men, who are the populace most likely to be recruited into illicit substance trafficking due to a lack of other economic opportunity, have been highly scrutinized under recent waves of drug criminalization. Substance use ranging from marijuana to heroin is prohibited, and a record denoting arrest for such use highly influences subsequent employment outcomes after time served for these individuals, which can ultimately be deleterious to expanding economic mobility within the communities they hail from. A study published in the Review of African Political Economy notes that commerce and political corruption in Tanzania have promulgated crack cocaine consumption and flash-blood practices, or blood sharing between substance users after recent injections, specifically among poor youth in urban centers. === Asia === ==== India ==== Narcotic substance consumption is prohibited in India by the Narcotic Drugs and Psychotropic Substances Bill enacted in 1985, which also levies punitive measures on adjacent activities such as the production or vending of such substances.
Possession of a controlled substance can result in punishment ranging from a $136.21 USD fine and half a year of imprisonment to $121,261 USD and twenty years of imprisonment, depending on whether the amount identified is considered small or commercial. Certain crimes outlined by the Narcotic Drugs and Psychotropic Substances Bill are also eligible for the death penalty, and while cases involving marijuana have been charged with capital punishment in the past, they tend to be successfully appealed in higher courts. This legislation is heavily influenced by a coordinated United Nations effort throughout the latter twentieth century to stymie international drug trafficking. According to the International Drug Policy Consortium, India's Narcotics Control Bureau, which executes the various facets of the Narcotic Drugs and Psychotropic Substances Bill, has encountered criticism for the legislation's stringent measures, which have limited access to pain-relief medication, specifically the prescription of opiates for post-operative patients. Bill revisions in response have expanded access to such substances, like methadone, distributed through recognized care providers, and members of parliament have subsequently pushed for expanded bill protections for marijuana use, which has not gained traction. Language cited as demeaning within the 2012 National Policy on Drugs and Psychotropic Substances regarding harm reduction pipelines such as clean needle programs, referring to them as "shooting galleries," has posed barriers to preventing comorbidities such as HIV, which are prevalent among people who inject drugs in India. This poses an issue in states such as Punjab, where over 20% of people who inject drugs are also infected with HIV. ==== Philippines ==== In the Philippines, the government's war on drugs has led to allegations of killings and other human rights violations by the Philippine National Police against drug suspects.
This has led the United Nations Human Rights Council to adopt a resolution urging the Philippine government to set up an investigation into mass killings during the war on drugs. ==== Vietnam ==== Drug control strategy in modern Vietnam was first formally introduced in 1990 around the cause of eradicating "social evils," in reference to substance use. Such policies were inspired by the UN, and specifically its International Drug Conventions, which took place from the late 1960s to 1997. Ordinances and violation measures were proposed by the Vietnamese National Assembly in this legislation to mandate compulsory treatment for substance users, rather than subject them to prison. High intake at mandatory treatment centers has often resulted in more patients than the centers can handle, thus limiting access to rehabilitation for these individuals. Harm reduction measures such as clean needles and condom access have been introduced throughout the 2000s at a national level to address the prevalence of HIV and HCV among drug users. Inconsistencies between the Ordinance on HIV/AIDS, which outlines such harm reduction practices, and the Drug Law of 2000, which prohibits the distribution of materials like needles, have made provincial adoption of harm reduction institutions, like syringe exchanges, challenging. While Vietnamese policy leaders generally veer towards addressing substance use as a medical issue rather than criminal activity, having decriminalized many substances since 2009, the Ordinance of Administrative Violation continues to classify illicit substance consumption as a crime. Consequently, at a local level, substance users remain eligible to be charged by law enforcement and subjected to forced-labor treatment centers that are comparable to detention. Thus, many substance users do not access harm reduction institutions out of fear of being identified by law enforcement and placed in these conditions.
=== Europe === ==== Sweden ==== Narcotic substance use is criminalized in Sweden, with drug offenses holding punishments ranging from fines to six months of imprisonment. To apprehend people who use drugs, law enforcement is permitted to conduct urine testing based on suspicion, rather than wholly requiring a public disturbance. Such protocols are justified by lawmakers as a way to expand early intervention, allowing people who use substances to be referred to rehabilitation channels, but legal advocates have challenged such practices for infringing upon personal freedoms. Diversion to court-ordered treatment programs rather than criminalization has been expanded in response during the early 21st century. However, there are disparities in representation in such programs. For example, people found in violation who belong to the top third of the Swedish wealth bracket are twice as likely to be admitted into a treatment program rather than imprisoned, compared to people who committed a similar offense but belong to the bottom two-thirds of the wealth bracket. Moreover, while those who use drugs can apply to their local welfare administrator for rehabilitative services, this process is selective, despite being less costly than long-term imprisonment for an associated drug-related crime. Sweden has faced criticism for having harsher drug policies and less accessible rehabilitative programs for people who use drugs than peer Nordic nations, which are moving towards drug liberalization. Many cite this as a reason why Sweden has rising substance-related mortality in the 21st century; for instance, it had 157 overdose deaths in 2006, compared to the Netherlands, which had a little over a hundred despite having a population close to double the size. Zero-tolerance policies are also in place for those who drive under the influence of an illicit substance. === North America === ==== Canada ==== In Vancouver, Canada, there have been efforts to reduce opioid-related deaths.
An article published by the Canadian Medical Association Journal discusses new efforts to create safe injection sites for people struggling with opioid addiction. Vancouver politicians created these sites for people to safely use drugs that they are addicted to without the risk of infection or prosecution by the police. These safe injection sites provide sterile needles to limit the reuse of needles, which leads to the spread of HIV and other diseases. Safe injection sites, sometimes called supervised consumption sites (SCS), are a form of harm reduction, a highly studied approach to public health issues. Drug addicts in Vancouver have been discriminated against on numerous occasions. Mothers who are said to be drug addicts have had their children taken away, as they are thought to be unfit mothers. These women have a hard time getting jobs because employers might not want to hire someone whom they believe to be a drug addict. Women have started a union for drug users in Vancouver to aid them with housing and education to help them get back on their feet. ==== United States ==== The War on Drugs, formalized in the 1970s under the Nixon administration, has disparately affected communities of color in the United States. Substantial punitive measures exist for illicit possession, whether in the context of use, trafficking, or selling, with the length of incarceration scaling up with repeat offenses. Charges can go up to life without parole for third-time offenses related to opioids such as fentanyl. Three-quarters of those imprisoned for fentanyl today are people of color, which directly corresponds to Black and Latin populations being disproportionately policed for drug-related crimes. This additionally infringes upon voting eligibility among people who use drugs, as more extreme drug charges hold felony status, which revokes voting rights in a majority of states.
Drug criminalization moreover operates within the deportation pipeline in the US, with drug charges making all individuals without citizenship eligible for deportation. This includes marijuana-related charges, which accounted for over ten thousand deportations from 2012 to 2013, often severing families and communities. While statewide measures to legalize marijuana gained traction throughout the 2010s, individuals of color have been less likely to receive post-carceral clemency for these charges due to barriers to legal advocacy. Human rights advocates have criticized the use of demeaning language about the condition in criminal litigation to leverage character assault against defendants or victims who have, or are presumed to have, the condition. A prominent example of this is the trial of Derek Chauvin, the former Minneapolis police officer convicted of murdering George Floyd, whose legal defense asserted substance use as a potential cause of death, rather than the asphyxiation caused by Chauvin. In the US, employers and educational institutions may legally discriminate against people who are currently using based on the results of their drug screens. Otherwise, the ADA protects people with a prior history of substance use or who are receiving treatment. Employers may elect to administer drug screens at random, upon suspicion, on a routine basis, or before employment as a prerequisite, as part of a zero-tolerance policy. However, according to the Rehabilitation Act of 1973, employers are supposed to ensure that people with alcohol use disorder and substance use disorders receive needed treatment and accommodations. The lack of job opportunities and treatment for people with substance use disorders may result in relapses or jail time.
Nathan Kim and colleagues conducted a study of the HIV status of people who inject drugs and found that the HIV rate among those individuals in San Francisco increased by 16.1 percentage points, from 64.4% in 2009 to 80.5% in 2015. == See also == Cognitive liberty Drug liberalization Legalisation of marijuana == References ==
Wikipedia/Discrimination_against_drug_addicts
The breadwinner model is a paradigm of family centered on a breadwinner, "the member of a family who earns the money to support the others." Traditionally, the earner works outside the home to provide the family with income and benefits such as health insurance, while the non-earner stays at home and takes care of children and the elderly. The breadwinner model largely arose in western cultures after industrialization occurred. Before industrialization, all members of the household—including men, women, and children—contributed to the productivity of the household. Gender roles underwent a re-definition as a result of industrialization, with a split between public and private roles for men and women, which did not exist before industrialization. Norwegian government policy has increasingly targeted men as fathers, as a tool of changing gender relations. Recent years have seen a shift in gender norms for the breadwinner role in the U.S. A 2013 Pew Research study found that women were the sole or primary breadwinners in 40% of heterosexual relationships with children. == Rise == In Britain, the breadwinner model developed among the emerging middle class towards the end of the Industrial Revolution in the mid-nineteenth century. Prior to this, in low-income families, a subsistence wage was paid on the basis of the individual worker's output, with all members of the family expected to contribute to the household upkeep. There was another side to the transformation of wage relations in mid-19th-century Britain involving two closely related changes: first, a shift in the prevailing wage form, from a joint to an individual payment; and second, a shift in the predominant subsistence norm of a living wage, from a family group's income to the ideal of an adult male-breadwinner wage. This is the notion that the wage earned by a husband ought to be sufficient to support his family without his wife and young children having to work for pay. 
The increase in wages among skilled labourers and lower-middle-class workers allowed a far larger number of families to support the entire family unit on one wage, and the breadwinner model became an attainable goal for a far wider proportion of society. Within this model, "The division of labour in parenting tasks can also be classified as 'caring about' (breadwinning) and 'caring for' (nurturing) children". == Advantages == In the United Kingdom, the emergence of the breadwinner norm coincided with and helped to facilitate the removal of children from the workforce. In 1821, approximately 49% of the nation's workforce was under the age of 20. Throughout the century, multiple items of legislation were written into law limiting the age at which a child could enter work and ensuring mandatory standards of education. Historically, families that rely on the earning power of one parent have had a lower divorce rate than families where both parents are in gainful employment. However, a lower divorce rate is not universally accepted as a positive facet of society. A primary reason women in domestic abuse situations choose not to divorce or report their spouses is economic dependence on their partner. Marriages in a breadwinner economy may last longer or be less likely to end, but this may be an effect of the economically disadvantaged partner lacking the freedom to end a bad marriage. == Disadvantages == One associated disadvantage is that 'male breadwinner regimes make women dependent within marriage cohabitation especially when they have young children'. In societies where the breadwinner model is present, it is common for the non-earner (predominantly women) to have broken career paths, providing unpaid labour to the family or working part-time. This contributes to the fact that, on average, women obtain lower levels of lifetime earnings than men.
This income disparity can often lead to an increase in financial insecurity or poverty – predominantly affecting women – if the relationship collapses. Another risk that has been identified is a higher exposure to domestic violence, which has been associated with the non-earner's lack of independent resources. Since the US economy has evolved past the breadwinner economy, studies have examined the well-being of working mothers. Data spanning over 10 years showed that, on average, working mothers are happier than stay-at-home mothers, and report better health and lower rates of depression. === Effect on gender identity === As breadwinning has been part of male identity in societies that have a breadwinner economy, people may continue to expect men to take on a breadwinner role, and some may be against women taking on the breadwinning role. However, people in younger generations report less strict gendered expectations for men to be breadwinners. When surveyed, people in all generations report that it is more important that their spouse is a good partner or parent than that their partner is a breadwinner. == Decline of the male breadwinner == In 2013 the UK female employment rate reached 67.2 per cent, the highest since the Office for National Statistics' records began. As women's presence in the professional world has grown, along with support for gender equality, male–female relations in the home have changed, especially the breadwinner paradigm. The breadwinner model was most prevalent during the 20-year period directly after World War II. During this time, the economy relied heavily on men to financially support the family and provide the main source of income, typically relying on women to stay at home, look after the children, and undertake domestic work.
"Women's support for gender specialisation in marriage began to decline rapidly from the late 1970s through to the mid 1980s, this was followed by an interval of stability until the mid 1990s". "As increasing proportions of women entered the paid labour market during the latter decades of the 20th century, the family model of a male breadwinner and female homemaker came under significant challenge both as a practice and an ideology". There is now agreement in most literature that the breadwinner model, in which men take primary responsibility for earning and women for the unpaid work of care, has been substantially eroded. The Nordic countries in particular have begun to adopt the dual-breadwinner model, with high employment rates among men and women, and a very small difference between men's and women's hours of work. With the exception of Denmark, research by the World Economic Forum has shown that all Nordic countries have closed over 80 percent of the gender gap. == Gender cliff == In many countries, the proportion of heterosexual marriages in which the woman contributes just over half of the household income drops sharply at the 50% threshold. This gender cliff in relative income contribution can be explained by both women and men preferring to marry people with higher income than themselves (hypergamy), together with men's on-average higher income. == Breadwinner mothers == The female breadwinner model, otherwise known as breadwinner mothers or breadwinner moms, occurs when the female provides the main source of income for the family. Recent data from the US Census stated that "40% of all households with children under the age of 18 include mothers who are either the sole or primary source of income for the family". 37% of these "breadwinner moms" are married mothers who have a higher income than their husbands, and 63% are single mothers.
== Concerns with the decline of the breadwinner model == The decline of the breadwinner model has been accompanied by an erosion of the economic support of family members and the "distribution of time and regulation of marriage and parenthood". With two parents in the workforce, there is a risk that a job could undermine family life, consequently leading to relationship breakdown or adversely affecting family formation. While some evidence suggests that "women's gains on the economic front may be contributing to a decline in the formation and stability of marriages", one reason for this may be that women with greater earning power and economic security have more freedom to leave abusive marriages. Another possibility could be that men are more resistant to this change in social norms. == Global variations == The ideal of the breadwinner model varies across the globe. In Norway, a country with a strong gender equality ideology, the breadwinner model is less prevalent. Second-generation Pakistani immigrants living in Norway experience the effects of this equality and reinforce women's rights to paid work, as opposed to the strictly male-centric ideologies that the generations before them practiced. In the United Kingdom, women's rates of employment decline after becoming a mother, and the male breadwinner model is still constant. In the United States during industrialization, nothing was more central to the American industrial order than the breadwinner ideal. It served to promote commerce while keeping it within proper bounds. The American Federation of Labor adopted politics of male breadwinning. However, the North and South did not agree on this new cultural ideal, and it contributed to sectional political strife. == During the COVID-19 pandemic == The COVID-19 pandemic caused a workplace transition from office to home. The majority of the world's workforce (93% in 2022) was located in countries with lockdowns.
Also, in-person services like daycare and school shut down at the same time. When women, especially women from minority groups, are employed outside the home, it can be challenging to manage their time effectively. These women are already at a disadvantage, and the weakening COVID economy, which had a disproportionate impact on the hiring of racial and ethnic minorities and women, may cause them to lose hours at work and influence the breadwinning model. == See also == Sexual division of labour Sociology of the family == References == == Notes == Crompton, Rosemary (1999). Restructuring gender relations and employment: the decline of the male breadwinner. Oxford New York: Oxford University Press. ISBN 9780198296089. Book review: Fagan, Colette (March 2001). "Restructuring gender relations and employment: the decline of the male breadwinner (review)". Work, Employment & Society. 15 (1). Cambridge Journals: 195–212. doi:10.1017/S0950017001230104. JSTOR 23747792. Creighton, Colin (September 1999). "The rise and decline of the 'male breadwinner family' in Britain". Cambridge Journal of Economics. 23 (5). Oxford Journals: 519–541. doi:10.1093/cje/23.5.519. JSTOR 23599633. Cunningham, Mick (September 2008). "Changing attitudes toward the male breadwinner, female homemaker family model: Influences of women's employment and education over the lifecourse". Social Forces. 87 (1). Oxford Journals: 299–323. doi:10.1353/sof.0.0097. JSTOR 20430858. S2CID 144490888. Nagla, Madhu (March 2008). "Male migration and emerging female headed families: Issues and challenges". Asian Women. 24 (1). Research Institute of Asian Women (RIAW): 1–23. doi:10.14431/aw.2008.03.24.1.1. Dugan, Emily (19 February 2014). "Number of women in work in Britain hits record high - but figures show the gender pay gap is growing too". The Independent. Independent Print Limited. Retrieved 30 October 2014.
World Economic Forum (2013). Insight Report: The Global Gender Gap Report 2013 (PDF) (Report). World Economic Forum, Switzerland. p. 103. Retrieved 19 October 2014. Lewis, Jane (Summer 2001). "The decline of the male breadwinner model: The implications for work and care". Social Politics. 8 (2). Oxford Journals: 152–170. doi:10.1093/sp/8.2.152. Osawa, Mari (Winter 2006). "The vicious cycle of the 'male breadwinner' model of livelihood security". Women's Asia 21: Voices from Japan. 16 (1). Asia-Japan Women's Resource Center: 1–5. Pdf. Pascall, Gillian (2010), "Male breadwinner model", in Pascall, Gillian; et al. (eds.), International encyclopedia of social policy, London New York: Routledge, ISBN 9780415576949 Text. Sayer, Liana C.; Bianchi, Suzanne M.; Robinson, John P. (July 2004). "Are parents investing less in children? Trends in mothers' and fathers' time with children". American Journal of Sociology. 110 (1). The University of Chicago Press: 1–43. doi:10.1086/386270. JSTOR 10.1086/386270. S2CID 141718530. Thaler, Richard H. (1 June 2013). "Breadwinner wives and nervous husbands". The New York Times. Retrieved 18 October 2014. Pew Research Center (19 November 2010). The decline of marriage and rise of new families (Report). Pew Research Center. Retrieved 18 October 2014. Wang, Wendy; Parker, Kim; Taylor, Paul (29 May 2013). Breadwinner moms, mothers are the sole or primary provider in four-in-ten households with children: Public conflicted about the growing trend (PDF). Pew Research Center (Report). Washington, DC. Archived from the original (PDF) on 6 November 2014. Retrieved 1 November 2014.
Wikipedia/Breadwinner_model
In law, selective enforcement occurs when government officials (such as police officers, prosecutors, or regulators) exercise discretion, which is the power to choose whether or how to punish a person who has violated the law. The biased use of enforcement discretion, such as that based on racial prejudice or corruption, is usually considered a legal abuse and a threat to the rule of law. This concept is closely related to prosecutorial discretion. There is a divide between countries where prosecutions are inherently discretionary (known as the opportunity principle) and those where prosecutions are mandatory (known as the legality principle). In addition, in some countries prosecutors operate independently with more discretion, versus hierarchical systems that require more conformity. In some cases, selective enforcement may be desirable. For example, a verbal warning to a teenager may effectively alter their behavior without resorting to legal punishment and with the added benefit of reducing governmental legal costs. In other cases, selective enforcement may be inevitable. For example, it may be impractical for police officers to issue traffic tickets to every driver they observe exceeding the speed limit, so they may have no choice but to limit action to the most flagrant examples of reckless driving. The dangers of selective enforcement lie in its potential to undermine the fundamental principles of justice and equality. When laws are enforced inconsistently, it can lead to arbitrary outcomes, favoritism, and unequal treatment under the law. Individuals from marginalized communities may face harsher penalties, while others escape accountability due to their social status or connections. == United States == In the United States, the principle of discretion grants public prosecutors and police significant latitude in deciding whether to charge someone with a crime and which charges to file.
Therefore, the mere fact that a law is selectively enforced against one person and not against another, absent a bias or pattern of enforcement against a constitutionally protected class, is not illegal. === Immigration law === Selective enforcement has become a topic of great discussion in the illegal immigration debate. The 2011 "Morton Memo" laid out enforcement priorities for the U.S. Immigration and Customs Enforcement, and was intended to channel limited resources into prioritized pursuit of cases involving criminals and felons. It was interpreted as the waiver of active prosecution of non-criminal illegal aliens and the exclusive focus on criminal illegal aliens. Enforcement priorities were further defined by the Deferred Action for Childhood Arrivals program, which started in 2012. This uses the executive branch's discretionary authority to grant certain people who were illegally brought to the United States as minors the temporary authorization to live and work in the United States. == See also == Selective prosecution Sentencing disparity Prosecutorial discretion Equality before the law Ticket fixing == Further reading == Michal Tamir, "Public Law as a Whole and Normative Duality: Reclaiming Administrative Insights in Enforcement Review", 12 Texas Journal on Civil Liberties and Civil Rights 43-99 (2006) == References ==
Wikipedia/Selective_enforcement
In sociology, societal transformation refers to “a deep and sustained, nonlinear systemic change” in a society. Transformational changes can occur within a particular system, such as a city, a transport system, or an energy system. Societal transformations can also refer to changes of an entire culture or civilization. Such transformations are often not only social but also cultural, technological, political, economic, and environmental. Transformations can be seen as occurring over several centuries, such as the Neolithic Revolution, or at a rapid pace, such as the rapid expansion of megacities in China. Whereas social transformation is typically used within sociology to characterize the process of change either in an individual’s ascribed social status or in social structures, such as institutional relationships, habits, norms, and values, societal transformation refers to a wider set of societal structural changes. The concept of societal transformation has for some time been used in academic disciplines such as political economy, development economics, history, and anthropology. Since 2010, the concept has been increasingly used in policy-making, research, and media to point out that small adjustments of present habits, technologies, and policies do not suffice to meet the environmental, climate, and sustainable development goals. The United Nations 2030 Agenda, which outlines the Sustainable Development Goals, bears the heading “transforming our world”. The special report on global warming of 1.5 °C by the Intergovernmental Panel on Climate Change (IPCC) states that curbing global warming to 1.5 °C compared to preindustrial levels “would require transformative systemic change, integrated with sustainable development”. Similarly, the 2019 global assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) concludes that transformative changes in society are crucial for nature protection.
The European Green Deal, proposed by the European Commission, sees deeply transformative policies to restructure the EU's economy as fundamental to its vision of a healthier, greener and more prosperous Europe. == Further reading == == See also == Social change Social transformation Sustainability transformation Transformation of the Western Roman Empire Global catastrophic risk == References ==
Wikipedia/Societal_transformation
A latent variable model is a statistical model that relates a set of observable variables (also called manifest variables or indicators) to a set of latent variables. Latent variable models are applied across a wide range of fields such as biology, computer science, and social science. Common use cases for latent variable models include applications in psychometrics (e.g., summarizing responses to a set of survey questions with a factor analysis model positing a smaller number of psychological attributes, such as the trait extraversion, that are presumed to cause the survey question responses), and natural language processing (e.g., a topic model summarizing a corpus of texts with a number of "topics"). It is assumed that the responses on the indicators or manifest variables are the result of an individual's position on the latent variable(s), and that the manifest variables have nothing in common after controlling for the latent variable (local independence). Different types of latent variable models can be grouped according to whether the manifest and latent variables are categorical or continuous: factor analysis (continuous manifest, continuous latent), latent trait analysis or item response theory (categorical manifest, continuous latent), latent profile analysis (continuous manifest, categorical latent), and latent class analysis (categorical manifest, categorical latent). The Rasch model represents the simplest form of item response theory, and mixture models are central to latent profile analysis. In factor analysis and latent trait analysis the latent variables are treated as continuous normally distributed variables, and in latent profile analysis and latent class analysis as from a multinomial distribution. The manifest variables in factor analysis and latent profile analysis are continuous and in most cases, their conditional distribution given the latent variables is assumed to be normal. In latent trait analysis and latent class analysis, the manifest variables are discrete. These variables could be dichotomous, ordinal or nominal variables. Their conditional distributions are assumed to be binomial or multinomial.
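The local-independence idea above can be made concrete with a small simulation: a single latent variable generates several manifest indicators, which therefore correlate with one another only through their shared latent cause. The loadings, noise scale, and sample size below are illustrative assumptions, not parameters of any standard model.

```python
import random

random.seed(0)

# Hypothetical one-factor model: a person's position z on one latent
# trait generates three manifest indicators with independent noise
# (local independence). Loadings and noise scale are invented.
LOADINGS = [0.9, 0.8, 0.7]

def generate_person():
    z = random.gauss(0, 1)  # latent variable
    indicators = [l * z + random.gauss(0, 0.5) for l in LOADINGS]
    return z, indicators

people = [generate_person() for _ in range(2000)]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n * sx * sy)

x1 = [m[0] for _, m in people]
x2 = [m[1] for _, m in people]
# the indicators correlate because they share the latent cause
print(round(corr(x1, x2), 2))
```

Conditioning on z would remove this correlation, which is exactly what local independence asserts.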
== See also == Confirmatory factor analysis Hidden Markov model Partial least squares path modeling Structural equation modeling == Notes == == References == == Further reading == Skrondal, Anders; Rabe-Hesketh, Sophia (2004). Generalized Latent Variable Modeling. Chapman & Hall. ISBN 1-58488-000-7.
Wikipedia/Latent_variable_model
Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalise to unseen data, and thus perform tasks without explicit instructions. Within machine learning, advances in the subfield of deep learning have allowed neural networks, a class of statistical algorithms, to surpass many previous machine learning approaches in performance. ML finds application in many fields, including natural language processing, computer vision, speech recognition, email filtering, agriculture, and medicine. The application of ML to business problems is known as predictive analytics. Statistics and mathematical optimisation (mathematical programming) methods comprise the foundations of machine learning. Data mining is a related field of study, focusing on exploratory data analysis (EDA) via unsupervised learning. From a theoretical viewpoint, probably approximately correct learning provides a framework for describing machine learning. == History == The term machine learning was coined in 1959 by Arthur Samuel, an IBM employee and pioneer in the field of computer gaming and artificial intelligence. The synonym self-teaching computers was also used in this time period. Although the earliest machine learning model was introduced in the 1950s when Arthur Samuel invented a program that calculated the winning chance in checkers for each side, the history of machine learning traces back to decades of human desire and effort to study human cognitive processes. In 1949, Canadian psychologist Donald Hebb published the book The Organization of Behavior, in which he introduced a theoretical neural structure formed by certain interactions among nerve cells. Hebb's model of neurons interacting with one another set a groundwork for how AIs and machine learning algorithms work under nodes, or artificial neurons used by computers to communicate data.
Other researchers who have studied human cognitive systems contributed to the modern machine learning technologies as well, including logician Walter Pitts and Warren McCulloch, who proposed the early mathematical models of neural networks to come up with algorithms that mirror human thought processes. By the early 1960s, an experimental "learning machine" with punched tape memory, called Cybertron, had been developed by Raytheon Company to analyse sonar signals, electrocardiograms, and speech patterns using rudimentary reinforcement learning. It was repetitively "trained" by a human operator/teacher to recognise patterns and equipped with a "goof" button to cause it to reevaluate incorrect decisions. A representative book on research into machine learning during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification. Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973. In 1981 a report was given on using teaching strategies so that an artificial neural network learns to recognise 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal. Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?". Modern-day machine learning has two objectives. 
One is to classify data based on models which have been developed; the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. A machine learning algorithm for stock trading may inform the trader of future potential predictions. == Relationships to other fields == === Artificial intelligence === As a scientific endeavour, machine learning grew out of the quest for artificial intelligence (AI). In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostly perceptrons and other models that were later found to be reinventions of the generalised linear models of statistics. Probabilistic reasoning was also employed, especially in automated medical diagnosis.: 488  However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation.: 488  By 1980, expert systems had come to dominate AI, and statistics was out of favour. Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming (ILP), but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval.: 708–710, 755  Neural networks research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines including John Hopfield, David Rumelhart, and Geoffrey Hinton.
Their main success came in the mid-1980s with the reinvention of backpropagation.: 25  Machine learning (ML), reorganised and recognised as its own field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics, fuzzy logic, and probability theory. === Data compression === === Data mining === Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data. Machine learning also has intimate ties to optimisation: Many learning problems are formulated as minimisation of some loss function on a training set of examples. 
Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the preassigned labels of a set of examples). === Generalization === Characterizing the generalisation of various learning algorithms is an active topic of current research, especially for deep learning algorithms. === Statistics === Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalisable predictive patterns. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder to call the overall field. Conventional statistical analyses require the a priori selection of a model most suitable for the study data set. In addition, only significant or theoretically relevant variables based on previous experience are included for analysis. In contrast, machine learning is not built on a pre-structured model; rather, the data shape the model by detecting underlying patterns. The more variables (input) used to train the model, the more accurate the ultimate model will be. Leo Breiman distinguished two statistical modelling paradigms: data model and algorithmic model, wherein "algorithmic model" means more or less the machine learning algorithms like Random Forest. Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning. 
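The loss-minimisation framing above can be sketched in a few lines: gradient descent on a mean-squared-error loss over a toy training set. The data, learning rate, and iteration count are invented for illustration.

```python
# Minimal sketch of learning as loss minimisation: fit a line y = w*x + b
# by gradient descent on the mean squared error over a training set.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # lies on y = 2x + 1

w, b = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    # gradients of the mean squared error with respect to w and b
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

loss = sum((w * x + b - y) ** 2 for x, y in data) / len(data)
print(round(w, 3), round(b, 3))  # approaches w=2, b=1
```

The loss measures the discrepancy between predictions and the preassigned labels; the update rule drives it toward zero on this (noise-free) training set.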
=== Statistical physics === Analytical and computational techniques derived from deep-rooted physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyse the weight space of deep neural networks. Statistical physics is thus finding applications in the area of medical diagnostics. == Theory == A core objective of a learner is to generalise from its experience. Generalisation in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases. The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory via the probably approximately correct learning model. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalisation error. For the best performance in the context of generalisation, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has under fitted the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalisation will be poorer. In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. 
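One small, hedged illustration of the hypothesis-complexity point above: because a line y = wx + b contains every constant predictor as a special case, enlarging the hypothesis class can only lower the training error. The synthetic data is an assumption made for the demonstration, and lower training error by itself says nothing about generalisation.

```python
import random

random.seed(1)

# Noisy samples from an (assumed) underlying linear function.
xs = [i / 10 for i in range(20)]
ys = [0.5 * x + random.gauss(0, 0.2) for x in xs]

def mse(pred):
    return sum((pred(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# degree-0 hypothesis: the best constant is the mean of the labels
mean_y = sum(ys) / len(ys)
err_const = mse(lambda x: mean_y)

# degree-1 hypothesis: closed-form least-squares line
mx = sum(xs) / len(xs)
w = sum((x - mx) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
b = mean_y - w * mx
err_line = mse(lambda x: w * x + b)

# the richer hypothesis class fits the training data at least as well
print(err_line <= err_const)
```

Pushing complexity further (e.g., a degree-19 polynomial through all 20 points) would drive the training error to zero while generalisation worsened, which is the overfitting regime described above.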
In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results: Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time. == Approaches == Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system: Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs. Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning). Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback that's analogous to rewards, which it tries to maximise. Although each algorithm has advantages and limitations, no single algorithm works for all problems. === Supervised learning === Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs. The data, known as training data, consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by an array or vector, sometimes called a feature vector, and the training data is represented by a matrix. 
Through iterative optimisation of an objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs. An optimal function allows the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task. Types of supervised-learning algorithms include active learning, classification and regression. Classification algorithms are used when the outputs are restricted to a limited set of values, while regression algorithms are used when the outputs can take any numerical value within a range. For example, in a classification algorithm that filters emails, the input is an incoming email, and the output is the folder in which to file the email. In contrast, regression is used for tasks such as predicting a person's height based on factors like age and genetics or forecasting future temperatures based on historical data. Similarity learning is an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification. === Unsupervised learning === Unsupervised learning algorithms find structures in data that has not been labelled, classified or categorised. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. Central applications of unsupervised machine learning include clustering, dimensionality reduction, and density estimation. 
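As a minimal sketch of supervised classification: each training example is a feature vector with a label, and a simple nearest-centroid rule predicts the label of new inputs. The toy data and the choice of nearest-centroid as the rule are illustrative assumptions, not a method prescribed by the text.

```python
# Training data: (feature vector, label) pairs, invented for illustration.
train = [
    ([1.0, 1.2], "a"), ([0.8, 1.0], "a"), ([1.1, 0.9], "a"),
    ([4.0, 4.2], "b"), ([3.9, 4.1], "b"), ([4.2, 3.8], "b"),
]

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

# one centroid per class, computed from the labelled examples
centroids = {label: centroid([x for x, y in train if y == label])
             for label in {y for _, y in train}}

def predict(x):
    def sqdist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda lbl: sqdist(centroids[lbl]))

print(predict([1.0, 1.0]))  # "a"
print(predict([4.0, 4.0]))  # "b"
```

Here the output is restricted to a finite label set ("a" or "b"), which is what distinguishes classification from regression.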
Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters. Other methods are based on estimated density and graph connectivity. A special type of unsupervised learning, called self-supervised learning, involves training a model by generating the supervisory signal from the data itself. === Semi-supervised learning === Semi-supervised learning falls between unsupervised learning (without any labelled training data) and supervised learning (with completely labelled training data). Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabelled data, when used in conjunction with a small amount of labelled data, can produce a considerable improvement in learning accuracy. In weakly supervised learning, the training labels are noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets. === Reinforcement learning === Reinforcement learning is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximise some notion of cumulative reward. Due to its generality, the field is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimisation, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In reinforcement learning, the environment is typically represented as a Markov decision process (MDP).
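With the environment modelled as an MDP, optimal state values can be computed by dynamic programming. The two-state, two-action world below is entirely invented; it just shows the Bellman optimality update iterated to a fixed point.

```python
# Toy deterministic MDP (all numbers invented):
# transitions[s][a] = (next_state, reward)
transitions = {
    "s0": {"stay": ("s0", 0.0), "go": ("s1", 1.0)},
    "s1": {"stay": ("s1", 2.0), "go": ("s0", 0.0)},
}
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality update
# V(s) <- max_a [ r(s,a) + gamma * V(s') ]
V = {s: 0.0 for s in transitions}
for _ in range(200):
    V = {s: max(r + gamma * V[s2] for (s2, r) in transitions[s].values())
         for s in transitions}

# staying in s1 earns 2 per step, worth 2 / (1 - 0.9) = 20
print(round(V["s1"], 2))
```

Value iteration needs the exact transition and reward model; the text's point is that reinforcement learning algorithms proper are used precisely when such a model is unavailable.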
Many reinforcement learning algorithms use dynamic programming techniques. Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP and are used when exact models are infeasible. Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent. === Dimensionality reduction === Dimensionality reduction is a process of reducing the number of random variables under consideration by obtaining a set of principal variables. In other words, it is a process of reducing the dimension of the feature set, also called the "number of features". Most of the dimensionality reduction techniques can be considered as either feature elimination or extraction. One of the popular methods of dimensionality reduction is principal component analysis (PCA). PCA involves changing higher-dimensional data (e.g., 3D) to a smaller space (e.g., 2D). The manifold hypothesis proposes that high-dimensional data sets lie along low-dimensional manifolds, and many dimensionality reduction techniques make this assumption, leading to the area of manifold learning and manifold regularisation. === Other types === Other approaches have been developed which do not fit neatly into this three-fold categorisation, and sometimes more than one is used by the same machine learning system, for example topic modelling and meta-learning. ==== Self-learning ==== Self-learning, as a machine learning paradigm, was introduced in 1982 along with a neural network capable of self-learning, named crossbar adaptive array (CAA). It gives a solution to the problem of learning without any external reward, by introducing emotion as an internal reward. Emotion is used as state evaluation of a self-learning agent. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence situations. The system is driven by the interaction between cognition and emotion.
The self-learning algorithm updates a memory matrix W = ||w(a,s)|| such that each iteration executes the following machine learning routine: (1) in situation s perform action a; (2) receive a consequence situation s'; (3) compute the emotion of being in the consequence situation, v(s'); (4) update the crossbar memory: w'(a,s) = w(a,s) + v(s'). It is a system with only one input, situation s, and only one output, action (or behaviour) a. There is neither a separate reinforcement input nor an advice input from the environment. The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioural environment where it behaves, and the other is the genetic environment, wherefrom it initially and only once receives initial emotions about situations to be encountered in the behavioural environment. After receiving the genome (species) vector from the genetic environment, the CAA learns a goal-seeking behaviour, in an environment that contains both desirable and undesirable situations. ==== Feature learning ==== Several learning algorithms aim at discovering better representations of the inputs provided during training. Classic examples include principal component analysis and cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manual feature engineering, and allows a machine to both learn the features and use them to perform a specific task. Feature learning can be either supervised or unsupervised.
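The crossbar update w'(a,s) = w(a,s) + v(s') described above can be sketched loosely as follows. The two-state world, the action names, and the emotion values are all made up for illustration and are not taken from the CAA literature.

```python
# Loose sketch of the crossbar update, with a made-up one-situation,
# two-action world. v(s) is the genome-supplied emotion of a state;
# w[(a, s)] accumulates the emotion of the consequence of doing a in s.
v = {"desirable": 1.0, "undesirable": -1.0}   # from the genetic environment
consequence = {                                # the behavioural environment
    ("left", "start"): "undesirable",
    ("right", "start"): "desirable",
}

w = {key: 0.0 for key in consequence}          # crossbar memory W = ||w(a,s)||
for _ in range(5):
    for a in ("left", "right"):
        s2 = consequence[(a, "start")]         # receive consequence situation s'
        w[(a, "start")] += v[s2]               # w'(a,s) = w(a,s) + v(s')

best = max(("left", "right"), key=lambda a: w[(a, "start")])
print(best)  # the action whose consequence carries positive emotion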
In supervised feature learning, features are learned using labelled input data. Examples include artificial neural networks, multilayer perceptrons, and supervised dictionary learning. In unsupervised feature learning, features are learned with unlabelled input data. Examples include dictionary learning, independent component analysis, autoencoders, matrix factorisation and various forms of clustering. Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse, meaning that the mathematical model has many zeros. Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into higher-dimensional vectors. Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data. Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensory data have not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.
A popular heuristic method for sparse dictionary learning is the k-SVD algorithm. Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen training example belongs. For a dictionary where each class has already been built, a new training example is associated with the class that is best sparsely represented by the corresponding dictionary. Sparse dictionary learning has also been applied in image de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot. ==== Anomaly detection ==== In data mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data. Typically, the anomalous items represent an issue such as bank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to as outliers, novelties, noise, deviations and exceptions. In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts of inactivity. This pattern does not adhere to the common statistical definition of an outlier as a rare object. Many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns. Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection techniques detect anomalies in an unlabelled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit the least to the remainder of the data set. 
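A minimal unsupervised outlier-detection sketch under the "majority of the data is normal" assumption stated above: flag points whose distance from the mean exceeds two standard deviations. The threshold and the data are illustrative choices, not a recommendation.

```python
# Toy readings with one injected anomaly (all values invented).
data = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 30.0]

n = len(data)
mean = sum(data) / n
std = (sum((x - mean) ** 2 for x in data) / n) ** 0.5

# z-score rule: a point is suspicious if it is far from the bulk
outliers = [x for x in data if abs(x - mean) / std > 2]
print(outliers)  # [30.0]
```

Note that the anomaly itself inflates the mean and standard deviation it is judged against, one reason robust or cluster-based methods are preferred on real data, as the text observes.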
Supervised anomaly detection techniques require a data set that has been labelled as "normal" and "abnormal" and involves training a classifier (the key difference from many other statistical classification problems is the inherently unbalanced nature of outlier detection). Semi-supervised anomaly detection techniques construct a model representing normal behaviour from a given normal training data set and then test the likelihood of a test instance to be generated by the model. ==== Robot learning ==== Robot learning is inspired by a multitude of machine learning methods, starting from supervised learning, reinforcement learning, and finally meta-learning (e.g. MAML). ==== Association rules ==== Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilisation of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems. Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets. 
For example, the rule {onions, potatoes} ⇒ {burger} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements. In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions. Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically a genetic algorithm, with a learning component, performing either supervised learning, reinforcement learning, or unsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions. Inductive logic programming (ILP) is an approach to rule learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such as functional programs. Inductive logic programming is particularly useful in bioinformatics and natural language processing.
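The "interestingness" of a rule like {onions, potatoes} ⇒ {burger} above is typically measured by support and confidence. A minimal sketch over an invented list of transactions (the data below is illustrative only):

```python
# Toy transactions, invented for illustration.
transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger", "beer"},
    {"onions", "potatoes"},
    {"milk", "bread"},
    {"burger", "beer"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimated P(consequent | antecedent): support of the union divided
    by support of the antecedent."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

ante = {"onions", "potatoes"}
cons = {"burger"}
print(support(ante | cons, transactions))    # 2 of 5 transactions -> 0.4
print(confidence(ante, cons, transactions))  # (2/5) / (3/5) -> about 0.667
```

A rule is considered "strong" when both measures exceed user-chosen thresholds; real miners such as Apriori search the itemset lattice rather than scoring one rule at a time.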
Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting. Shapiro built their first implementation (Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples. The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set. == Models == A machine learning model is a type of mathematical model that, once "trained" on a given dataset, can be used to make predictions or classifications on new data. During training, a learning algorithm iteratively adjusts the model's internal parameters to minimise errors in its predictions. By extension, the term "model" can refer to several levels of specificity, from a general class of models and their associated learning algorithms to a fully trained model with all its internal parameters tuned. Various types of models have been used and researched for machine learning systems, picking the best model for a task is called model selection. === Artificial neural networks === Artificial neural networks (ANNs), or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules. An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. 
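A common concrete model of how an artificial neuron "processes" incoming signals is a weighted sum of its inputs passed through a non-linear activation. A minimal sketch with illustrative, untrained weights:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: the weighted sum of its inputs plus a bias,
    passed through a non-linear activation (here, the logistic sigmoid)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# A tiny two-layer network: two hidden neurons feeding one output neuron.
# The weight values are arbitrary illustrative choices, not trained.
def tiny_network(x):
    h1 = neuron(x, [0.5, -0.6], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.0, 1.0], -0.5)

print(tiny_network([1.0, 2.0]))  # a value strictly between 0 and 1
```

Training would adjust the weights and biases (e.g., by gradient descent on a loss function); only the forward computation is shown here.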
In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition. === Decision trees === Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modelling approaches used in statistics, data mining, and machine learning. 
Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels, and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision-making. === Random forest regression === Random forest regression (RFR) falls under the umbrella of decision tree-based models. RFR is an ensemble learning method that builds multiple decision trees and averages their predictions to improve accuracy and to avoid overfitting. To build the decision trees, RFR uses bootstrapped sampling: each decision tree is trained on a random sample, drawn with replacement, from the training set. This random sampling reduces the bias of the model's predictions and improves accuracy. RFR generates independent decision trees and can handle tasks with a single output as well as multiple regression targets, which makes it suitable for a variety of applications. === Support-vector machines === Support-vector machines (SVMs), also known as support-vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category. An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting.
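The bootstrap-and-average procedure behind random forest regression can be sketched with depth-1 regression trees ("stumps") on toy one-dimensional data. All values below are invented for illustration:

```python
import random

def fit_stump(data):
    """Fit a depth-1 regression tree: choose the split threshold on x that
    minimises squared error, predicting the mean of y on each side."""
    best = None
    xs = sorted(x for x, _ in data)
    for i in range(len(xs) - 1):
        t = (xs[i] + xs[i + 1]) / 2
        left = [y for x, y in data if x <= t]
        right = [y for x, y in data if x > t]
        if not right:            # degenerate split caused by duplicate x values
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - ml) ** 2 for x, y in data if x <= t)
               + sum((y - mr) ** 2 for x, y in data if x > t))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    if best is None:             # all x values identical: predict the mean
        m = sum(y for _, y in data) / len(data)
        return lambda x: m
    _, t, ml, mr = best
    return lambda x, t=t, ml=ml, mr=mr: ml if x <= t else mr

def forest_predict(data, x, n_trees=25, seed=0):
    """Bagging: fit each stump on a bootstrap sample (drawn with replacement
    from the training set) and average the predictions."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_trees):
        sample = [rng.choice(data) for _ in data]
        preds.append(fit_stump(sample)(x))
    return sum(preds) / n_trees

data = [(0, 0.0), (1, 0.1), (2, 0.2), (3, 1.0), (4, 1.1), (5, 0.9)]
print(forest_predict(data, 4.0))  # near the mean of the right-hand cluster
```

Real random forests also subsample features at each split and grow much deeper trees; only the bootstrap averaging is shown here.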
In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces. === Regression analysis === Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularisation methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to higher-dimensional space. Multivariate linear regression extends the concept of linear regression to handle multiple dependent variables simultaneously. This approach estimates the relationships between a set of input variables and several output variables by fitting a multidimensional linear model. It is particularly useful in scenarios where outputs are interdependent or share underlying patterns, such as predicting multiple economic indicators or reconstructing images, which are inherently multi-dimensional. === Bayesian networks === A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. 
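Returning to regression analysis: the ordinary-least-squares criterion mentioned above has a closed-form solution in the single-feature case. A minimal sketch:

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single-feature linear model y = a*x + b,
    using the closed-form solution (covariance over variance)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # exactly y = 2x + 1, so the fit recovers it
a, b = fit_line(xs, ys)
print(a, b)                 # 2.0 1.0
```

Ridge regression would add a penalty term to the minimised error, shrinking `a` toward zero to mitigate overfitting on noisy data.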
Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalisations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams. === Gaussian processes === A Gaussian process is a stochastic process in which every finite collection of the random variables in the process has a multivariate normal distribution, and it relies on a pre-defined covariance function, or kernel, that models how pairs of points relate to each other depending on their locations. Given a set of observed points, or input–output examples, the distribution of the (unobserved) output of a new point as a function of its input data can be directly computed by looking at the observed points and the covariances between those points and the new, unobserved point. Gaussian processes are popular surrogate models in Bayesian optimisation used to do hyperparameter optimisation. === Genetic algorithms === A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms. === Belief functions === The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory, is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories.
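The central operation for pooling evidence in this framework is Dempster's rule of combination. A minimal sketch over an invented two-hypothesis frame (the mass values are illustrative only):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same
    frame of discernment. Masses are dicts mapping frozensets of hypotheses
    to non-negative weights that each sum to 1."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb   # mass assigned to contradictory pairs
    # Normalise by the non-conflicting mass (1 - K).
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sources of evidence about a hypothetical diagnosis ("flu" vs "cold").
m1 = {frozenset({"flu"}): 0.6, frozenset({"flu", "cold"}): 0.4}
m2 = {frozenset({"cold"}): 0.5, frozenset({"flu", "cold"}): 0.5}
m = dempster_combine(m1, m2)
print(m)  # masses on {flu}, {cold} and {flu, cold}, summing to 1
```

Mass left on the full set {flu, cold} is exactly the "ignorance" the text refers to: belief the sources decline to commit to either hypothesis.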
These theoretical frameworks can be thought of as a kind of learner and have some analogous properties of how evidence is combined (e.g., Dempster's rule of combination), just as a pmf-based Bayesian approach would combine probabilities. However, there are many caveats to these belief functions when compared to Bayesian approaches for incorporating ignorance and uncertainty quantification. Belief function approaches implemented within the machine learning domain typically leverage a fusion of various ensemble methods to better handle the learner's decision boundary, low samples, and ambiguous class issues that standard machine learning approaches tend to have difficulty resolving. However, the computational complexity of these algorithms is dependent on the number of propositions (classes), and can lead to a much higher computation time when compared to other machine learning approaches. === Rule-based models === Rule-based machine learning (RBML) is a branch of machine learning that automatically discovers and learns 'rules' from data. It provides interpretable models, making it useful for decision-making in fields like healthcare, fraud detection, and cybersecurity. Key RBML techniques include learning classifier systems, association rule learning, artificial immune systems, and other similar models. These methods extract patterns from data and evolve rules over time. === Training models === Typically, machine learning models require a large quantity of reliable data to perform accurate predictions. When training a machine learning model, machine learning engineers need to target and collect a large and representative sample of data. Data from the training set can be as varied as a corpus of text, a collection of images, sensor data, and data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model.
Trained models derived from biased or non-evaluated data can result in skewed or undesired predictions. Biased models may result in detrimental outcomes, thereby furthering the negative impacts on society or objectives. Algorithmic bias is a potential result of data not being fully prepared for training. Machine learning ethics is becoming a field of study and notably, becoming integrated within machine learning engineering teams. ==== Federated learning ==== Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralises the training process, allowing for users' privacy to be maintained by not needing to send their data to a centralised server. This also increases efficiency by decentralising the training process to many devices. For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google. == Applications == There are many applications for machine learning, including: In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million. Shortly after the prize was awarded, Netflix realised that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly. In 2010 The Wall Street Journal wrote about the firm Rebellion Research and their use of machine learning to predict the financial crisis.
In 2012, co-founder of Sun Microsystems, Vinod Khosla, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software. In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognised influences among artists. In 2019 Springer Nature published the first research book created using machine learning. In 2020, machine learning technology was used to help make diagnoses and aid researchers in developing a cure for COVID-19. Machine learning was recently applied to predict the pro-environmental behaviour of travellers. Recently, machine learning technology was also applied to optimise a smartphone's performance and thermal behaviour based on the user's interaction with the phone. When applied correctly, machine learning algorithms (MLAs) can utilise a wide range of company characteristics to predict stock returns without overfitting. By employing effective feature engineering and combining forecasts, MLAs can generate results that far surpass those obtained from basic linear techniques like OLS. Recent advancements in machine learning have extended into the field of quantum chemistry, where novel algorithms now enable the prediction of solvent effects on chemical reactions, thereby offering new tools for chemists to tailor experimental conditions for optimal outcomes. Machine learning is becoming a useful tool to investigate and predict evacuation decision making in large-scale and small-scale disasters. Different solutions have been tested to predict if and when householders decide to evacuate during wildfires and hurricanes. Other applications have focused on pre-evacuation decisions in building fires.
Machine learning is also emerging as a promising tool in geotechnical engineering, where it is used to support tasks such as ground classification, hazard prediction, and site characterization. Recent research emphasizes a move toward data-centric methods in this field, where machine learning is not a replacement for engineering judgment, but a way to enhance it using site-specific data and patterns. == Limitations == Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results. Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems. The "black box theory" poses yet another significant challenge. Black box refers to a situation where the algorithm or the process of producing an output is entirely opaque, meaning that even the coders of the algorithm cannot audit the pattern that the machine extracted out of the data. The House of Lords Select Committee claimed that an "intelligence system" that could have a "substantial impact on an individual's life" would not be considered acceptable unless it provided "a full and satisfactory explanation for the decisions" it makes. In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision. Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested. Microsoft's Bing Chat chatbot has been reported to produce hostile and offensive responses against its users. Machine learning has been used as a strategy for updating the evidence related to systematic reviews and addressing the increased reviewer burden related to the growth of biomedical literature.
While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the research findings themselves. === Explainability === Explainable AI (XAI), or Interpretable AI, or Explainable Machine Learning (XML), is artificial intelligence (AI) in which humans can understand the decisions or predictions made by the AI. It contrasts with the "black box" concept in machine learning where even its designers cannot explain why an AI arrived at a specific decision. By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively. XAI may be an implementation of the social right to explanation. === Overfitting === Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data but penalising the theory in accordance with how complex the theory is. === Other limitations and vulnerabilities === Learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers often do not primarily make judgements from the spatial relationship between components of the picture, and they learn relationships between pixels that humans are oblivious to, but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in "adversarial" images that the system misclassifies. Adversarial vulnerabilities can also arise in nonlinear systems, or from non-pattern perturbations. For some systems, it is possible to change the output by only changing a single adversarially chosen pixel.
Machine learning models are often vulnerable to manipulation or evasion via adversarial machine learning. Researchers have demonstrated how backdoors can be placed undetectably into machine learning classifiers (e.g., models that classify posts into categories such as "spam" and "not spam") that are often developed or trained by third parties. Parties can change the classification of any input, including in cases for which a type of data/software transparency is provided, possibly including white-box access. == Model assessments == Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data into a training set and a test set (conventionally a 2/3 training and 1/3 test designation) and evaluates the performance of the training model on the test set. In comparison, the K-fold-cross-validation method randomly partitions the data into K subsets; K experiments are then performed, each using one subset for evaluation and the remaining K−1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy. In addition to overall accuracy, investigators frequently report sensitivity and specificity, meaning the true positive rate (TPR) and true negative rate (TNR) respectively. Similarly, investigators sometimes report the false positive rate (FPR) as well as the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. Receiver operating characteristic (ROC) curves, along with the accompanying area under the ROC curve (AUC), offer additional tools for classification model assessment. A higher AUC is associated with a better-performing model. == Ethics == === Bias === Different machine learning approaches can suffer from different data biases.
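The holdout and K-fold procedures described above reduce to index bookkeeping over the dataset. A minimal sketch:

```python
import random

def holdout_split(n, test_fraction=1/3, seed=0):
    """Shuffle the indices 0..n-1 and split them into a training part and a
    test part (conventionally 2/3 training, 1/3 test)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = n - round(n * test_fraction)
    return idx[:cut], idx[cut:]

def kfold_splits(n, k):
    """Partition the indices into K folds; each experiment evaluates on one
    fold and trains on the remaining K-1 folds."""
    idx = list(range(n))
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

train, test = holdout_split(9)
print(len(train), len(test))  # 6 3
```

In practice one would shuffle (and often stratify by class label) before forming the folds; libraries such as scikit-learn provide these splitters ready-made.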
A machine learning system trained specifically on current customers may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on human-made data, machine learning is likely to pick up the constitutional and unconscious biases already present in society. Systems that are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitising cultural prejudices. For example, in 1988, the UK's Commission for Racial Equality found that St. George's Medical School had been using a computer program trained from data of previous admissions staff and that this program had denied nearly 60 candidates who were found to either be women or have non-European sounding names. Using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants. Another example includes predictive policing company Geolitica's predictive algorithm that resulted in "disproportionately high levels of over-policing in low-income and minority communities" after being trained with historical crime data. While responsible collection of data and documentation of algorithmic rules used by a system is considered a critical part of machine learning, some researchers blame lack of participation and representation of minority population in the field of AI for machine learning's vulnerability to biases. In fact, according to research carried out by the Computing Research Association (CRA) in 2021, "female faculty merely make up 16.1%" of all faculty members who focus on AI among several universities around the world. Furthermore, among the group of "new U.S. resident AI PhD graduates," 45% identified as white, 22.4% as Asian, 3.2% as Hispanic, and 2.4% as African American, which further demonstrates a lack of diversity in the field of AI. 
Language models learned from data have been shown to contain human-like biases. Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases. In 2016, Microsoft tested Tay, a chatbot that learned from Twitter, and it quickly picked up racist and sexist language. In an experiment carried out by ProPublica, an investigative journalism organisation, a machine learning algorithm's insight into the recidivism rates among prisoners falsely flagged "black defendants high risk twice as often as white defendants". In 2015, Google Photos once tagged a couple of black people as gorillas, which caused controversy. The gorilla label was subsequently removed, and in 2023, it still cannot recognise gorillas. Similar issues with recognising non-white people have been found in many other systems. Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains. Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who said that "[t]here's nothing artificial about AI. It's inspired by people, it's created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility." === Financial incentives === There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States where there is a long-standing ethical dilemma of improving health care, but also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. 
There is potential for machine learning in health care to provide professionals an additional tool to diagnose, medicate, and plan recovery paths for patients, but this requires these biases to be mitigated. == Hardware == Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of nonlinear hidden units. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI. OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months. === Tensor Processing Units (TPUs) === Tensor Processing Units (TPUs) are specialised hardware accelerators developed by Google specifically for machine learning workloads. Unlike general-purpose GPUs and FPGAs, TPUs are optimised for tensor computations, making them particularly efficient for deep learning tasks such as training and inference. They are widely used in Google Cloud AI services and large-scale machine learning models like Google's DeepMind AlphaFold and large language models. TPUs leverage matrix multiplication units and high-bandwidth memory to accelerate computations while maintaining energy efficiency. Since their introduction in 2016, TPUs have become a key component of AI infrastructure, especially in cloud-based environments. === Neuromorphic computing === Neuromorphic computing refers to a class of computing systems designed to emulate the structure and functionality of biological neural networks. These systems may be implemented through software-based simulations on conventional hardware or through specialised hardware architectures. 
==== Physical neural networks ==== A physical neural network is a specific type of neuromorphic hardware that relies on electrically adjustable materials, such as memristors, to emulate the function of neural synapses. The term "physical neural network" highlights the use of physical hardware for computation, as opposed to software-based implementations. It broadly refers to artificial neural networks that use materials with adjustable resistance to replicate neural synapses. === Embedded machine learning === Embedded machine learning is a sub-field of machine learning where models are deployed on embedded systems with limited computing resources, such as wearable computers, edge devices and microcontrollers. Running models directly on these devices eliminates the need to transfer and store data on cloud servers for further processing, thereby reducing the risk of data breaches, privacy leaks and theft of intellectual property, personal data and business secrets. Embedded machine learning can be achieved through various techniques, such as hardware acceleration, approximate computing, and model optimisation. Common optimisation techniques include pruning, quantisation, knowledge distillation, low-rank factorisation, network architecture search, and parameter sharing.
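Of the optimisation techniques listed above, quantisation is among the simplest to illustrate. A sketch of symmetric 8-bit weight quantisation (the weight values below are invented):

```python
def quantise_int8(weights):
    """Symmetric 8-bit quantisation: map float weights to integers in
    [-127, 127] using a single scale factor, a common model-compression
    step for embedded deployment."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantise(q, scale):
    """Recover approximate float weights from the quantised integers."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.99]   # illustrative values
q, scale = quantise_int8(weights)
approx = dequantise(q, scale)
# Each dequantised weight differs from the original by at most half a
# quantisation step (scale / 2), while storage drops from 32 to 8 bits.
```

Production schemes (e.g., per-channel scales, zero points for asymmetric ranges) refine this idea, but the accuracy/size trade-off is the same.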
== Software == Software suites containing a variety of machine learning algorithms include the following: === Free and open-source software === === Proprietary software with free and open-source editions === KNIME RapidMiner === Proprietary software === == Journals == Journal of Machine Learning Research Machine Learning Nature Machine Intelligence Neural Computation IEEE Transactions on Pattern Analysis and Machine Intelligence == Conferences == AAAI Conference on Artificial Intelligence Association for Computational Linguistics (ACL) European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) International Conference on Computational Intelligence Methods for Bioinformatics and Biostatistics (CIBB) International Conference on Machine Learning (ICML) International Conference on Learning Representations (ICLR) International Conference on Intelligent Robots and Systems (IROS) Conference on Knowledge Discovery and Data Mining (KDD) Conference on Neural Information Processing Systems (NeurIPS) == See also == Automated machine learning – Process of automating the application of machine learning Big data – Extremely large or complex datasets Deep learning — branch of ML concerned with artificial neural networks Differentiable programming – Programming paradigm List of datasets for machine-learning research M-theory (learning framework) Machine unlearning Solomonoff's theory of inductive inference – A mathematical theory == References == == Sources == Domingos, Pedro (22 September 2015). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books. ISBN 978-0465065707. Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann. ISBN 978-1-55860-467-4. Archived from the original on 26 July 2020. Retrieved 18 November 2019. Poole, David; Mackworth, Alan; Goebel, Randy (1998). Computational Intelligence: A Logical Approach. 
New York: Oxford University Press. ISBN 978-0-19-510270-3. Archived from the original on 26 July 2020. Retrieved 22 August 2020. Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2. == Further reading == == External links == International Machine Learning Society mloss is an academic database of open-source machine learning software.
Wikipedia/Machine_learning_model
A semantic similarity network (SSN) is a special form of semantic network, designed to represent concepts and their semantic similarity. Its main contribution is reducing the complexity of calculating semantic distances. Bendeck (2004, 2008) introduced the concept of semantic similarity networks (SSN) as the specialization of a semantic network to measure semantic similarity from ontological representations. Implementations include genetic information handling. The concept is formally defined (Bendeck 2008) as a directed graph, with concepts represented as nodes and semantic similarity relations as edges. The relationships are grouped into relation types. The concepts and relations contain attribute values to evaluate the semantic similarity between concepts. The semantic similarity relationships of the SSN represent several of the general relationship types of the standard semantic network, reducing the complexity of the (normally, very large) network for calculations of semantics. SSNs define relation types as templates (and a taxonomy of relations) for semantic similarity attributes that are common to relations of the same type. The SSN representation allows propagation algorithms to calculate semantic similarities faster, including stop conditions within a specified threshold. This reduces the computation time and power required for calculation. More recent publications on semantic matching and semantic similarity networks can be found in (Bendeck 2019). A specific semantic similarity network application in healthcare was presented at the FHIR (healthcare information exchange format) European Conference in 2019. The latest evolutions in artificial intelligence (like ChatGPT, based on large language models) rely strongly on evolutionary computation; the next level will be to include semantic unification (as in semantic networks and this semantic similarity network) to extend the current models with more powerful understanding tools. == References ==
Wikipedia/Semantic_similarity_network
The bag-of-words (BoW) model is a model of text which uses an unordered collection (a "bag") of words. It is used in natural language processing and information retrieval (IR). It disregards word order (and thus most of syntax or grammar) but captures multiplicity. The bag-of-words model is commonly used in methods of document classification where, for example, the (frequency of) occurrence of each word is used as a feature for training a classifier. It has also been used for computer vision. An early reference to "bag of words" in a linguistic context can be found in Zellig Harris's 1954 article on Distributional Structure. == Definition == The following models a text document using bag-of-words. Here are two simple text documents: Based on these two text documents, a list is constructed as follows for each document: Representing each bag-of-words as a JSON object, and assigning each to the respective JavaScript variable: Each key is the word, and each value is the number of occurrences of that word in the given text document. The order of elements is free, so, for example {"too":1,"Mary":1,"movies":2,"John":1,"watch":1,"likes":2,"to":1} is also equivalent to BoW1. It is also what we expect from a strict JSON object representation. Note: if another document is like a union of these two, its JavaScript representation will be: So, as we see in the bag algebra, the "union" of two documents in the bags-of-words representation is, formally, the disjoint union, summing the multiplicities of each element. === Word order === The BoW representation of a text removes all word ordering. For example, the BoW representations of "man bites dog" and "dog bites man" are the same, so any algorithm that operates with a BoW representation of text must treat them in the same way. Despite this lack of syntax or grammar, BoW representation is fast and may be sufficient for simple tasks that do not require word order.
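The construction described above can be sketched in Python. The document text used here is a hypothetical example, chosen only so that its word counts are consistent with the BoW1 object shown above:

```python
from collections import Counter
import re

def bag_of_words(text):
    # Tokenize by extracting alphabetic runs; case is kept so the keys
    # match those of the BoW1 object ("Mary", "John", ...).
    tokens = re.findall(r"[A-Za-z]+", text)
    return dict(Counter(tokens))

# Hypothetical document consistent with the BoW1 counts above.
doc1 = "John likes to watch movies. Mary likes movies too."
bow1 = bag_of_words(doc1)
# bow1 == {"John": 1, "likes": 2, "to": 1, "watch": 1, "movies": 2, "Mary": 1, "too": 1}
```

Summing two such dictionaries key-by-key gives the disjoint-union ("bag algebra") behaviour described above.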
For instance, for document classification, if the words "stocks", "trade", and "investors" appear multiple times, then the text is likely a financial report, even though word counts alone cannot distinguish between "Yesterday, investors were rallying, but today, they are retreating." and "Yesterday, investors were retreating, but today, they are rallying.", and so the BoW representation would be insufficient to determine the detailed meaning of the document.
== Implementations ==
Implementations of the bag-of-words model might involve using frequencies of words in a document to represent its contents. The frequencies can be "normalized" by the inverse of document frequency, or tf–idf. Additionally, for the specific purpose of classification, supervised alternatives have been developed to account for the class label of a document. Lastly, binary (presence/absence or 1/0) weighting is used in place of frequencies for some problems (e.g., this option is implemented in the WEKA machine learning software system).
== Python implementation ==
== Hashing trick ==
A common alternative to using dictionaries is the hashing trick, where words are mapped directly to indices with a hashing function. Thus, no memory is required to store a dictionary. Hash collisions are typically dealt with via freed-up memory to increase the number of hash buckets. In practice, hashing simplifies the implementation of bag-of-words models and improves scalability.
== See also ==
Additive smoothing
Feature extraction
Machine learning
MinHash
Vector space model
w-shingling
== Notes ==
== References ==
McTear, Michael; et al. (2016). The Conversational Interface. Springer International Publishing.
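The hashing trick can be sketched as follows. This is a minimal illustration, not a reference implementation: the bucket count and the choice of CRC32 as the hash function are arbitrary assumptions.

```python
import zlib

def hashed_bow(tokens, n_buckets=16):
    # Map each token straight to a bucket index with a hash function,
    # so no dictionary of the vocabulary needs to be stored.
    # Colliding words share a bucket; more buckets mean fewer collisions.
    vec = [0] * n_buckets
    for tok in tokens:
        vec[zlib.crc32(tok.encode("utf-8")) % n_buckets] += 1
    return vec

vec = hashed_bow("John likes to watch movies".split())
# The total count is preserved even when individual words collide.
```

A deterministic hash such as `zlib.crc32` is used here instead of Python's built-in `hash`, which is randomized per process and would make feature indices unstable across runs.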
Wikipedia/Bag-of-words_model
The factored language model (FLM) is an extension of a conventional language model introduced by Jeff Bilmes and Katrin Kirchhoff in 2003. In an FLM, each word is viewed as a vector of k factors: {\displaystyle w_{i}=\{f_{i}^{1},...,f_{i}^{k}\}.} An FLM provides the probabilistic model {\displaystyle P(f|f_{1},...,f_{N})} where the prediction of a factor {\displaystyle f} is based on {\displaystyle N} parents {\displaystyle \{f_{1},...,f_{N}\}} . For example, if {\displaystyle w} represents a word token and {\displaystyle t} represents a part-of-speech tag for English, the expression {\displaystyle P(w_{i}|w_{i-2},w_{i-1},t_{i-1})} gives a model for predicting the current word token based on a traditional n-gram model as well as the part-of-speech tag of the previous word. A major advantage of factored language models is that they allow users to specify linguistic knowledge such as the relationship between word tokens and part of speech in English, or morphological information (stems, roots, etc.) in Arabic. As with n-gram models, smoothing techniques are necessary in parameter estimation. In particular, generalized back-off is used in training an FLM. == References == J Bilmes and K Kirchhoff (2003). "Factored Language Models and Generalized Parallel Backoff" (PDF). Human Language Technology Conference. Archived from the original (PDF) on 17 July 2012.
Wikipedia/Factored_language_model
Katz back-off is a generative n-gram language model that estimates the conditional probability of a word given its history in the n-gram. It accomplishes this estimation by backing off through progressively shorter history models under certain conditions. By doing so, the model with the most reliable information about a given history is used to provide the best results. The model was introduced in 1987 by Slava M. Katz. Prior to that, n-gram language models were constructed by training individual models for different n-gram orders using maximum likelihood estimation and then interpolating them together. == Method == The equation for Katz's back-off model is: {\displaystyle {\begin{aligned}&P_{bo}(w_{i}\mid w_{i-n+1}\cdots w_{i-1})\\[4pt]={}&{\begin{cases}d_{w_{i-n+1}\cdots w_{i}}{\dfrac {C(w_{i-n+1}\cdots w_{i-1}w_{i})}{C(w_{i-n+1}\cdots w_{i-1})}}&{\text{if }}C(w_{i-n+1}\cdots w_{i})>k\\[10pt]\alpha _{w_{i-n+1}\cdots w_{i-1}}P_{bo}(w_{i}\mid w_{i-n+2}\cdots w_{i-1})&{\text{otherwise}}\end{cases}}\end{aligned}}}
where
C(x) = the number of times x appears in training
wi = the ith word in the given context
Essentially, this means that if the n-gram has been seen more than k times in training, the conditional probability of a word given its history is proportional to the maximum likelihood estimate of that n-gram. Otherwise, the conditional probability is equal to the back-off conditional probability of the (n − 1)-gram. The more difficult part is determining the values for k, d and α. {\displaystyle k} is the least important of the parameters. It is usually chosen to be 0. However, empirical testing may find better values for k. {\displaystyle d} is typically the amount of discounting found by Good–Turing estimation.
In other words, if Good–Turing estimates {\displaystyle C} as {\displaystyle C^{*}} , then {\displaystyle d={\frac {C^{*}}{C}}} To compute {\displaystyle \alpha } , it is useful to first define a quantity β, which is the left-over probability mass for the (n − 1)-gram: {\displaystyle \beta _{w_{i-n+1}\cdots w_{i-1}}=1-\sum _{\{w_{i}:C(w_{i-n+1}\cdots w_{i})>k\}}d_{w_{i-n+1}\cdots w_{i}}{\frac {C(w_{i-n+1}\cdots w_{i-1}w_{i})}{C(w_{i-n+1}\cdots w_{i-1})}}} Then the back-off weight, α, is computed as follows: {\displaystyle \alpha _{w_{i-n+1}\cdots w_{i-1}}={\frac {\beta _{w_{i-n+1}\cdots w_{i-1}}}{\sum _{\{w_{i}:C(w_{i-n+1}\cdots w_{i})\leq k\}}P_{bo}(w_{i}\mid w_{i-n+2}\cdots w_{i-1})}}} The above formula only applies if there is data for the (n − 1)-gram. If not, the algorithm skips the (n − 1)-gram entirely and uses the Katz estimate for the (n − 2)-gram, and so on, until an n-gram with data is found. == Discussion == This model generally works well in practice, but fails in some circumstances. For example, suppose that the bigram "a b" and the unigram "c" are very common, but the trigram "a b c" is never seen. Since "a b" and "c" are very common, it may be significant (that is, not due to chance) that "a b c" is never seen. Perhaps it's not allowed by the rules of the grammar. Instead of assigning a more appropriate value of 0, the method will back off to the bigram and estimate P(c | b), which may be too high. == References ==
Wikipedia/Katz's_back-off_model
Lemmatization (or less commonly lemmatisation) in linguistics is the process of grouping together the inflected forms of a word so they can be analysed as a single item, identified by the word's lemma, or dictionary form. In computational linguistics, lemmatization is the algorithmic process of determining the lemma of a word based on its intended meaning. Unlike stemming, lemmatization depends on correctly identifying the intended part of speech and meaning of a word in a sentence, as well as within the larger context surrounding that sentence, such as neighbouring sentences or even an entire document. As a result, developing efficient lemmatization algorithms is an open area of research. == Description == In many languages, words appear in several inflected forms. For example, in English, the verb 'to walk' may appear as 'walk', 'walked', 'walks' or 'walking'. The base form, 'walk', that one might look up in a dictionary, is called the lemma for the word. The association of the base form with a part of speech is often called a lexeme of the word. Lemmatization is closely related to stemming. The difference is that a stemmer operates on a single word without knowledge of the context, and therefore cannot discriminate between words which have different meanings depending on part of speech. However, stemmers are typically easier to implement and run faster. The reduced "accuracy" may not matter for some applications. In fact, when used within information retrieval systems, stemming improves query recall accuracy, or true positive rate, when compared to lemmatization. Nonetheless, stemming reduces precision, or the proportion of positively-labeled instances that are actually positive, for such systems. For instance: The word "better" has "good" as its lemma. This link is missed by stemming, as it requires a dictionary look-up. The word "walk" is the base form for the word "walking", and hence this is matched in both stemming and lemmatization. 
The word "meeting" can be either the base form of a noun or a form of a verb ("to meet") depending on the context; e.g., "in our last meeting" or "We are meeting again tomorrow". Unlike stemming, lemmatization attempts to select the correct lemma depending on the context. Document indexing software like Lucene can store the stemmed base form of a word without knowledge of its meaning, considering only word-formation grammar rules. The stemmed word itself might not be a valid word: 'lazy', for example, is stemmed by many stemmers to 'lazi'. This is because the purpose of stemming is not to produce the appropriate lemma – that is a more challenging task that requires knowledge of context. The main purpose of stemming is to map different forms of a word to a single form. As a rule-based algorithm, dependent only upon the spelling of a word, it sacrifices accuracy to ensure that, for example, when 'laziness' is stemmed to 'lazi', it has the same stem as 'lazy'. == Algorithms == A trivial way to do lemmatization is by simple dictionary lookup. This works well for straightforward inflected forms, but a rule-based system will be needed for other cases, such as in languages with long compound words. Such rules can be either hand-crafted or learned automatically from an annotated corpus. == Use in biomedicine == Morphological analysis of published biomedical literature can yield useful results. Morphological processing of biomedical text can be more effective by a specialized lemmatization program for biomedicine, and may improve the accuracy of practical information extraction tasks. == See also == Canonicalization – Process for converting data into a "standard", "normal", or canonical form == References == == External links ==
Wikipedia/Lemmatisation
A word n-gram language model is a purely statistical model of language. It has been superseded by recurrent neural network–based models, which have in turn been superseded by large language models. It is based on the assumption that the probability of the next word in a sequence depends only on a fixed-size window of previous words. If only one previous word is considered, it is called a bigram model; if two words, a trigram model; if n − 1 words, an n-gram model. Special tokens are introduced to denote the start and end of a sentence, {\displaystyle \langle s\rangle } and {\displaystyle \langle /s\rangle } . To prevent a zero probability being assigned to unseen words, each seen word's probability is made slightly lower than its relative frequency in the corpus, freeing probability mass for unseen words. To calculate it, various methods were used, from simple "add-one" smoothing (assign a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated models, such as Good–Turing discounting or back-off models. == Unigram model == A special case, where n = 1, is called a unigram model. The probability of each word in a sequence is independent of the probabilities of the other words in the sequence. Each word's probability in the sequence is equal to the word's probability in an entire document. {\displaystyle P_{\text{uni}}(t_{1}t_{2}t_{3})=P(t_{1})P(t_{2})P(t_{3}).} The model consists of units, each treated as a one-state finite automaton. Words with their probabilities in a document can be illustrated as follows. The total mass of word probabilities distributed across the document's vocabulary is 1. {\displaystyle \sum _{\text{word in doc}}P({\text{word}})=1} The probability generated for a specific query is calculated as {\displaystyle P({\text{query}})=\prod _{\text{word in query}}P({\text{word}})} Unigram models of different documents have different probabilities of words in them.
The probability distributions from different documents are used to generate hit probabilities for each query. Documents can be ranked for a query according to the probabilities. Example of unigram models of two documents: == Bigram model == In a bigram word (n = 2) language model, the probability of the sentence I saw the red house is approximated as {\displaystyle P({\text{I, saw, the, red, house}})\approx P({\text{I}}\mid \langle s\rangle )P({\text{saw}}\mid {\text{I}})P({\text{the}}\mid {\text{saw}})P({\text{red}}\mid {\text{the}})P({\text{house}}\mid {\text{red}})P(\langle /s\rangle \mid {\text{house}})} == Trigram model == In a trigram (n = 3) language model, the approximation is {\displaystyle P({\text{I, saw, the, red, house}})\approx P({\text{I}}\mid \langle s\rangle ,\langle s\rangle )P({\text{saw}}\mid \langle s\rangle ,{\text{I}})P({\text{the}}\mid {\text{I, saw}})P({\text{red}}\mid {\text{saw, the}})P({\text{house}}\mid {\text{the, red}})P(\langle /s\rangle \mid {\text{red, house}})} Note that the context of the first n – 1 n-grams is filled with start-of-sentence markers, typically denoted <s>. Additionally, without an end-of-sentence marker, the probability of an ungrammatical sequence *I saw the would always be higher than that of the longer sentence I saw the red house.
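The bigram factorization above can be estimated directly from counts. A minimal maximum-likelihood sketch (the two-sentence toy corpus is invented purely for illustration):

```python
from collections import Counter

def bigram_model(sentences):
    # Count unigrams and bigrams over sentences padded with <s> and </s>,
    # then estimate P(w | prev) = count(prev, w) / count(prev).
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(toks[:-1])            # every token that can precede another
        bigrams.update(zip(toks, toks[1:]))
    return lambda w, prev: bigrams[(prev, w)] / unigrams[prev]

p = bigram_model(["I saw the red house", "I saw the dog"])
# p("saw", "I")  = count(I saw) / count(I)   = 2/2 = 1.0
# p("red", "the") = count(the red) / count(the) = 1/2 = 0.5
```

Unseen bigrams get probability 0 here, which is exactly the zero-frequency problem that the smoothing methods discussed below address.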
== Approximation method == The approximation method calculates the probability {\displaystyle P(w_{1},\ldots ,w_{m})} of observing the sentence {\displaystyle w_{1},\ldots ,w_{m}} {\displaystyle P(w_{1},\ldots ,w_{m})=\prod _{i=1}^{m}P(w_{i}\mid w_{1},\ldots ,w_{i-1})\approx \prod _{i=2}^{m}P(w_{i}\mid w_{i-(n-1)},\ldots ,w_{i-1})} It is assumed that the probability of observing the ith word wi (in the context window consisting of the preceding i − 1 words) can be approximated by the probability of observing it in the shortened context window consisting of the preceding n − 1 words (nth-order Markov property). To clarify, for n = 3 and i = 2 we have {\displaystyle P(w_{i}\mid w_{i-(n-1)},\ldots ,w_{i-1})=P(w_{2}\mid w_{1})} . The conditional probability can be calculated from n-gram model frequency counts: {\displaystyle P(w_{i}\mid w_{i-(n-1)},\ldots ,w_{i-1})={\frac {\mathrm {count} (w_{i-(n-1)},\ldots ,w_{i-1},w_{i})}{\mathrm {count} (w_{i-(n-1)},\ldots ,w_{i-1})}}} === Out-of-vocabulary words === An issue when using n-gram language models is out-of-vocabulary (OOV) words. They are encountered in computational linguistics and natural language processing when the input includes words which were not present in a system's dictionary or database during its preparation. By default, when a language model is estimated, the entire observed vocabulary is used. In some cases, it may be necessary to estimate the language model with a specific fixed vocabulary. In such a scenario, the n-grams in the corpus that contain an out-of-vocabulary word are ignored.
The n-gram probabilities are smoothed over all the words in the vocabulary even if they were not observed. Nonetheless, it is essential in some cases to explicitly model the probability of out-of-vocabulary words by introducing a special token (e.g. <unk>) into the vocabulary. Out-of-vocabulary words in the corpus are effectively replaced with this special <unk> token before n-gram counts are accumulated. With this option, it is possible to estimate the transition probabilities of n-grams involving out-of-vocabulary words. == n-grams for approximate matching == n-grams were also used for approximate matching. If we convert strings (with only letters in the English alphabet) into character 3-grams, we get a {\displaystyle 26^{3}} -dimensional space (the first dimension measures the number of occurrences of "aaa", the second "aab", and so forth for all possible combinations of three letters). Using this representation, we lose information about the string. However, we know empirically that if two strings of real text have a similar vector representation (as measured by cosine distance) then they are likely to be similar. Other metrics have also been applied to vectors of n-grams with varying, sometimes better, results. For example, z-scores have been used to compare documents by examining how many standard deviations each n-gram differs from its mean occurrence in a large collection, or text corpus, of documents (which form the "background" vector). In the event of small counts, the g-score (also known as g-test) gave better results. It is also possible to take a more principled approach to the statistics of n-grams, modeling similarity as the likelihood that two strings came from the same source directly in terms of a problem in Bayesian inference. n-gram-based searching was also used for plagiarism detection.
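This kind of approximate matching can be sketched with character 3-grams and cosine similarity; the example strings are arbitrary, and the Counter-based sparse vectors stand in for the full 26³-dimensional space:

```python
from collections import Counter
import math

def char_ngrams(s, n=3):
    # Sparse vector of character n-gram counts, one dimension per n-gram.
    s = s.lower()
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(cnt * b[g] for g, cnt in a.items())
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b))

sim = cosine(char_ngrams("machine"), char_ngrams("machines"))
# Similar strings share most of their 3-grams, so sim is close to 1;
# unrelated strings share few or none.
```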
== Bias–variance tradeoff == To choose a value for n in an n-gram model, it is necessary to find the right trade-off between the stability of the estimate and its appropriateness. This means that a trigram model (i.e. triplets of words) is a common choice with large training corpora (millions of words), whereas a bigram model is often used with smaller ones. === Smoothing techniques === There is a problem of balancing the weight between infrequent grams (for example, if a proper name appeared in the training data) and frequent grams. Also, items not seen in the training data will be given a probability of 0.0 without smoothing. For unseen but plausible data from a sample, one can introduce pseudocounts. Pseudocounts are generally motivated on Bayesian grounds. In practice it was necessary to smooth the probability distributions by also assigning non-zero probabilities to unseen words or n-grams. The reason is that models derived directly from the n-gram frequency counts have severe problems when confronted with any n-grams that have not explicitly been seen before – the zero-frequency problem. Various smoothing methods were used, from simple "add-one" (Laplace) smoothing (assign a count of 1 to unseen n-grams; see Rule of succession) to more sophisticated models, such as Good–Turing discounting or back-off models. Some of these methods are equivalent to assigning a prior distribution to the probabilities of the n-grams and using Bayesian inference to compute the resulting posterior n-gram probabilities. However, the more sophisticated smoothing models were typically not derived in this fashion, but instead through independent considerations.
Linear interpolation (e.g., taking the weighted mean of the unigram, bigram, and trigram)
Good–Turing discounting
Witten–Bell discounting
Lidstone's smoothing
Katz's back-off model (trigram)
Kneser–Ney smoothing
=== Skip-gram language model === The skip-gram language model is an attempt at overcoming the data sparsity problem that the preceding model (i.e. the word n-gram language model) faced. Words represented in an embedding vector were not necessarily consecutive anymore, but could leave gaps that are skipped over (thus the name "skip-gram"). Formally, a k-skip-n-gram is a length-n subsequence where the components occur at distance at most k from each other. For example, in the input text "the rain in Spain falls mainly on the plain", the set of 1-skip-2-grams includes all the bigrams (2-grams), and in addition the subsequences the in, rain Spain, in falls, Spain mainly, falls on, mainly the, and on plain. In the skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if v is the function that maps a word w to its n-d vector representation, then {\displaystyle v(\mathrm {king} )-v(\mathrm {male} )+v(\mathrm {female} )\approx v(\mathrm {queen} )} where ≈ is made precise by stipulating that its right-hand side must be the nearest neighbor of the value of the left-hand side. == Syntactic n-grams == Syntactic n-grams are n-grams defined by paths in syntactic dependency or constituent trees rather than the linear structure of the text. For example, the sentence "economic news has little effect on financial markets" can be transformed to syntactic n-grams following the tree structure of its dependency relations: news-economic, effect-little, effect-on-markets-financial.
Syntactic n-grams are intended to reflect syntactic structure more faithfully than linear n-grams, and have many of the same applications, especially as features in a vector space model. Syntactic n-grams for certain tasks give better results than the use of standard n-grams, for example, for authorship attribution. Another type of syntactic n-grams is part-of-speech n-grams, defined as fixed-length contiguous overlapping subsequences that are extracted from part-of-speech sequences of text. Part-of-speech n-grams have several applications, most commonly in information retrieval.
== Other applications ==
n-grams find use in several areas of computer science, computational linguistics, and applied mathematics. They have been used to:
design kernels that allow machine learning algorithms such as support vector machines to learn from string data
find likely candidates for the correct spelling of a misspelled word
improve compression in compression algorithms where a small area of data requires n-grams of greater length
assess the probability of a given word sequence appearing in text of a language of interest in pattern recognition systems, speech recognition, OCR (optical character recognition), Intelligent Character Recognition (ICR), machine translation and similar applications
improve retrieval in information retrieval systems when it is hoped to find similar "documents" (a term for which the conventional meaning is sometimes stretched, depending on the data set) given a single query document and a database of reference documents
improve retrieval performance in genetic sequence analysis as in the BLAST family of programs
identify the language a text is in or the species a small sequence of DNA was taken from
predict letters or words at random in order to create text, as in the dissociated press algorithm
perform cryptanalysis
== See also ==
Collocation
Feature engineering
Hidden Markov model
Longest common substring
MinHash
n-tuple
String kernel
== References ==
Wikipedia/Word_n-gram_language_model
The Conference and Workshop on Neural Information Processing Systems (abbreviated as NeurIPS and formerly NIPS) is a machine learning and computational neuroscience conference held every December. Along with ICLR and ICML, it is one of the three primary conferences of high impact in machine learning and artificial intelligence research. The conference is currently a double-track meeting (single-track until 2015) that includes invited talks as well as oral and poster presentations of refereed papers, followed by parallel-track workshops that up to 2013 were held at ski resorts. == History == The NeurIPS meeting was first proposed in 1986 at the annual invitation-only Snowbird Meeting on Neural Networks for Computing organized by The California Institute of Technology and Bell Laboratories. NeurIPS was designed as a complementary open interdisciplinary meeting for researchers exploring biological and artificial neural networks. Reflecting this multidisciplinary approach, NeurIPS began in 1987 with information theorist Ed Posner as the conference president and learning theorist Yaser Abu-Mostafa as program chairman. Research presented in the early NeurIPS meetings included a wide range of topics, from efforts to solve purely engineering problems to the use of computer models as a tool for understanding biological nervous systems. Since then, the biological and artificial systems research streams have diverged, and recent NeurIPS proceedings have been dominated by papers on machine learning, artificial intelligence and statistics. From 1987 until 2000, NeurIPS was held in Denver, United States. Since then, the conference has been held in Vancouver, Canada (2001–2010), Granada, Spain (2011), and Lake Tahoe, United States (2012–2013). It was subsequently held in Montreal, Canada in 2014 and 2015, Barcelona, Spain in 2016, Long Beach, United States in 2017, Montreal, Canada in 2018, and Vancouver, Canada in 2019.
Reflecting its origins at Snowbird, Utah, the meeting was accompanied by workshops organized at a nearby ski resort up until 2013, when it outgrew ski resorts. The first NeurIPS Conference was sponsored by the IEEE. The following NeurIPS Conferences have been organized by the NeurIPS Foundation, established by Ed Posner. Terrence Sejnowski has been the president of the NeurIPS Foundation since Posner's death in 1993. The board of trustees consists of previous general chairs of the NeurIPS Conference. The first proceedings were published in book form by the American Institute of Physics in 1987 under the title Neural Information Processing Systems; the proceedings of the following conferences have been published by Morgan Kaufmann (1988–1993), MIT Press (1994–2004) and Curran Associates (2005–present) under the name Advances in Neural Information Processing Systems. The conference was originally abbreviated as "NIPS". By 2018 a few commentators were criticizing the abbreviation as encouraging sexism due to its association with the word nipples, and as being a slur against Japanese. The board changed the abbreviation to "NeurIPS" in November 2018. == Topics == Along with machine learning and neuroscience, other fields represented at NeurIPS include cognitive science, psychology, computer vision, statistical linguistics, and information theory. Over the years, NeurIPS became a premier conference on machine learning, and although the 'Neural' in the NeurIPS acronym had become something of a historical relic, the resurgence of deep learning in neural networks since 2012, fueled by faster computers and big data, has led to achievements in speech recognition, object recognition in images, image captioning, language translation and world-championship performance in the game of Go, based on neural architectures inspired by the hierarchy of areas in the visual cortex (ConvNet) and reinforcement learning inspired by the basal ganglia (temporal difference learning).
Notable affinity groups promoting diversity have emerged from the NeurIPS conference, including Black in AI (in 2017), Queer in AI (in 2016), and others. === Named lectures === In addition to invited talks and symposia, NeurIPS also organizes two named lectureships to recognize distinguished researchers. The NeurIPS Board introduced the Posner Lectureship in honor of NeurIPS founder Ed Posner; two Posner Lectures were given each year up to 2015. Past lecturers have included:
2010 – Josh Tenenbaum and Michael I. Jordan
2011 – Rich Sutton and Bernhard Schölkopf
2012 – Thomas Dietterich and Terry Sejnowski
2013 – Daphne Koller and Peter Dayan
2014 – Michael Kearns and John Hopfield
2015 – Zoubin Ghahramani and Vladimir Vapnik
2016 – Yann LeCun
2017 – John Platt
2018 – Joëlle Pineau
2019 – Yoshua Bengio
2020 – Christopher Bishop
2021 – Peter Bartlett
In 2015, the NeurIPS Board introduced the Breiman Lectureship to highlight work in statistics relevant to conference topics. The lectureship was named for statistician Leo Breiman, who served on the NeurIPS Board from 1994 to 2005. Past lecturers have included:
2015 – Robert Tibshirani
2016 – Susan Holmes
2017 – Yee Whye Teh
2018 – David Spiegelhalter
2019 – Bin Yu
2020 – Marloes Maathuis
2021 – Gabor Lugosi
2022 – Emmanuel Candes
2023 – Susan Murphy
2024 – Arnaud Doucet
== NIPS experiment == In NIPS 2014, the program chairs duplicated 10% of all submissions and sent them through separate reviewers to evaluate randomness in the reviewing process. Several researchers interpreted the result. Regarding whether the decision in NIPS is completely random or not, John Langford writes: "Clearly not—a purely random decision would have arbitrariness of ~78%. It is, however, quite notable that 60% is much closer to 78% than 0%." He concludes that the result of the reviewing process is mostly arbitrary.
== Locations ==

1987–2000: Denver, Colorado, United States
2001–2010: Vancouver, British Columbia, Canada
2011: Granada, Spain
2012 & 2013: Stateline, Nevada, United States
2014 & 2015: Montréal, Quebec, Canada
2016: Barcelona, Spain
2017: Long Beach, California, United States
2018: Montréal, Quebec, Canada
2019: Vancouver, British Columbia, Canada
2020: Vancouver, British Columbia, Canada (virtual conference)
2021: Virtual conference
2022 & 2023: New Orleans, Louisiana, United States
2024: Vancouver, British Columbia, Canada
2025: San Diego, California, United States

== See also ==

AAAI Conference on Artificial Intelligence (AAAI)
Computational and Systems Neuroscience (COSYNE)
International Conference on Computational Intelligence Methods for Bioinformatics and Biostatistics (CIBB)
International Conference on Learning Representations (ICLR)
International Conference on Machine Learning (ICML)

== Notes ==

== External links ==

2019 Conference
NeurIPS proceedings
NIPS 2011 video lectures
NIPS 2012 video lectures
Video Journal of Machine Learning Abstracts – Volume 3
The transformer is a deep learning architecture based on the multi-head attention mechanism, in which text is converted to numerical representations called tokens, and each token is converted into a vector via lookup from a word embedding table. At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for key tokens to be amplified and less important tokens to be diminished. Transformers have the advantage of having no recurrent units and therefore require less training time than earlier recurrent neural architectures (RNNs) such as long short-term memory (LSTM). Later variations have been widely adopted for training large language models (LLMs) on large (language) datasets. The modern version of the transformer was proposed in the 2017 paper "Attention Is All You Need" by researchers at Google. Transformers were first developed as an improvement over previous architectures for machine translation, but have found many applications since. They are used in large-scale natural language processing, computer vision (vision transformers), reinforcement learning, audio, multimodal learning, robotics, and even playing chess. The transformer architecture has also led to the development of pre-trained systems, such as generative pre-trained transformers (GPTs) and BERT (bidirectional encoder representations from transformers). == History == === Predecessors === For many years, sequence modelling and generation was done by using plain recurrent neural networks (RNNs). A well-cited early example was the Elman network (1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice the vanishing-gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens.
A key breakthrough was LSTM (1995), an RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of an attention mechanism which used neurons that multiply the outputs of other neurons, so-called multiplicative units. Neural networks using multiplicative units were later called sigma-pi networks or higher-order networks. LSTM became the standard architecture for long sequence modelling until the 2017 publication of Transformers. However, LSTM still used sequential processing, like most other RNNs. Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence. Modern Transformers overcome this problem, but unlike RNNs, they require computation time that is quadratic in the size of the context window. The linearly scaling fast weight controller (1992) learns to compute a weight matrix for further processing depending on the input. One of its two networks has "fast weights" or "dynamic links" (1981). A slow neural network learns by gradient descent to generate keys and values for computing the weight changes of the fast neural network which computes answers to queries. This was later shown to be equivalent to the unnormalized linear Transformer. === Attention with seq2seq === The idea of encoder-decoder sequence transduction had been developed in the early 2010s; commonly cited as the originators that produced seq2seq are two concurrently published papers from 2014. A 380M-parameter model for machine translation uses two long short-term memories (LSTMs). Its architecture consists of two parts. The encoder is an LSTM that takes in a sequence of tokens and turns it into a vector. The decoder is another LSTM that converts the vector into a sequence of tokens. Similarly, another 130M-parameter model used gated recurrent units (GRU) instead of LSTM.
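The fast-weight view mentioned above can be made concrete. The following is an illustrative sketch (my own toy formulation, not code from any cited paper) of unnormalized linear attention: a running sum of outer products of keys and values plays the role of the "fast weight" matrix that the slow network programs and that queries are then multiplied against.

```python
def linear_attention(Q, K, V):
    """Unnormalized linear attention: out_i = q_i (sum_j k_j^T v_j).
    The accumulated outer-product matrix acts as the 'fast weights'
    generated from the keys and values."""
    d_k, d_v = len(K[0]), len(V[0])
    fast_w = [[0.0] * d_v for _ in range(d_k)]
    for k, v in zip(K, V):  # accumulate outer products k^T v
        for a in range(d_k):
            for b in range(d_v):
                fast_w[a][b] += k[a] * v[b]
    # Each query reads the fast-weight matrix with one multiplication.
    return [[sum(q[a] * fast_w[a][b] for a in range(d_k)) for b in range(d_v)]
            for q in Q]

K = [[1.0, 0.0], [0.0, 1.0]]
V = [[2.0, 0.0], [0.0, 3.0]]
print(linear_attention([[1.0, 1.0]], K, V))  # [[2.0, 3.0]]
```

Because the fast-weight matrix is a plain sum over positions, it can be updated token by token in constant time per step, which is the sense in which this scales linearly rather than quadratically.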
Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq. These early seq2seq models had no attention mechanism, and the state vector was accessible only after the last word of the source text had been processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved. This is because the input is processed sequentially by one recurrent network into a fixed-size output vector, which is then processed by another recurrent network into an output. If the input is long, then the output vector would not be able to contain all relevant information, degrading the output. As evidence, reversing the input sentence improved seq2seq translation. The RNNsearch model introduced an attention mechanism to seq2seq for machine translation to solve the bottleneck problem (of the fixed-size output vector), allowing the model to process long-distance dependencies more easily. The name reflects that the model "emulates searching through a source sentence during decoding a translation". Later work compared global (that of RNNsearch) and local (sliding window) attention model architectures for machine translation, finding that mixed attention had higher quality than global attention, while local attention reduced translation time. In 2016, Google Translate was revamped to Google Neural Machine Translation, which replaced the previous model based on statistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM. It took nine months to develop, and it outperformed the statistical approach, which took ten years to develop.
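The attention fix for the bottleneck described above can be sketched in a few lines. This is an illustrative toy, not RNNsearch's actual implementation: a simple dot product stands in for the learned alignment network, but the structure is the same — the decoder forms a context vector as a softmax-weighted sum over all encoder states instead of relying on a single fixed-size vector.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_context(decoder_state, encoder_states, score):
    """Weight each encoder state by its relevance to the current decoder
    state, then return the weighted sum (the 'context vector')."""
    weights = softmax([score(decoder_state, h) for h in encoder_states])
    dim = len(encoder_states[0])
    context = [sum(wi * h[i] for wi, h in zip(weights, encoder_states))
               for i in range(dim)]
    return weights, context

# Dot-product scoring stands in for RNNsearch's learned alignment model.
dot = lambda s, h: sum(a * b for a, b in zip(s, h))

enc = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three encoder states
w, ctx = attention_context([1.0, 0.0], enc, dot)
print(w, ctx)  # states 0 and 2 (which overlap the query) get more weight
```

Since every encoder state remains accessible at every decoding step, long inputs no longer have to be squeezed through one vector.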
In 2016, decomposable attention applied a self-attention mechanism to feedforward networks, which are easy to parallelize, and achieved state-of-the-art results in textual entailment with an order of magnitude fewer parameters than LSTMs. One of its authors, Jakob Uszkoreit, suspected that attention without recurrence would be sufficient for language translation, thus the title "attention is all you need". That hypothesis was against conventional wisdom at the time, and even his father Hans Uszkoreit, a well-known computational linguist, was skeptical. In the same year, self-attention (called intra-attention or intra-sentence attention) was proposed for LSTMs. In 2017, the original (100M-sized) encoder-decoder transformer model was proposed in the "Attention is all you need" paper. At the time, the focus of the research was on improving seq2seq for machine translation, by removing its recurrence to process all tokens in parallel, but preserving its dot-product attention mechanism to keep its text processing performance. This led to the introduction of a multi-head attention model that was easier to parallelize due to the use of independent heads and the lack of recurrence. Its parallelizability was an important factor in its widespread use in large neural networks. === AI boom era === Already in spring 2017, even before the "Attention is all you need" preprint was published, one of the co-authors applied the "decoder-only" variation of the architecture to generate fictitious Wikipedia articles. Transformer architecture is now used alongside many generative models that contribute to the ongoing AI boom. In language modelling, ELMo (2018) was a bi-directional LSTM that produces contextualized word embeddings, improving upon the line of research from bag of words and word2vec. It was followed by BERT (2018), an encoder-only Transformer model. In October 2019, Google started using BERT to process search queries.
In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model with a Transformer-encoder–RNN-decoder model. Starting in 2018, the OpenAI GPT series of decoder-only Transformers became state of the art in natural language generation. In 2022, a chatbot based on GPT-3, ChatGPT, became unexpectedly popular, triggering a boom around large language models. Since 2020, Transformers have been applied in modalities beyond text, including the vision transformer, speech recognition, robotics, and multimodal learning. The vision transformer, in turn, stimulated new developments in convolutional neural networks. Image and video generators like DALL-E (2021), Stable Diffusion 3 (2024), and Sora (2024) use Transformers to analyse input data (like text prompts) by breaking it down into "tokens" and then calculating the relevance between each token using self-attention, which helps the model understand the context and relationships within the data. == Training == === Methods for stabilizing training === The plain transformer architecture had difficulty converging. In the original paper the authors recommended using learning rate warmup. That is, the learning rate should linearly scale up from 0 to its maximal value for the first part of the training (usually recommended to be 2% of the total number of training steps), before decaying. A 2020 paper found that using layer normalization before (instead of after) multiheaded attention and feedforward layers stabilizes training, not requiring learning rate warmup. === Pretrain-finetune === Transformers typically are first pretrained by self-supervised learning on a large generic dataset, followed by supervised fine-tuning on a small task-specific dataset. The pretrain dataset is typically an unlabeled large corpus, such as The Pile.
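The warmup-then-decay schedule recommended in the original paper can be written down directly. A minimal sketch, using the paper's formula with its default of 4000 warmup steps (the 2%-of-training heuristic above would set `warmup_steps` accordingly):

```python
def transformer_lr(step, d_model=512, warmup_steps=4000):
    """Learning-rate schedule from the original Transformer paper:
    lr = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5).
    The rate ramps up linearly for the first warmup_steps, then
    decays proportionally to the inverse square root of the step."""
    step = max(step, 1)  # guard against step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The schedule peaks exactly at the end of warmup.
print(transformer_lr(4000))
```

Halfway through warmup the rate is exactly half the peak value, and after warmup it falls off smoothly, which is what lets training survive the unstable early phase.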
Tasks for pretraining and fine-tuning commonly include:

language modeling
next-sentence prediction
question answering
reading comprehension
sentiment analysis
paraphrasing

The T5 transformer report documents a large number of natural language pretraining tasks. Some examples are:

restoring or repairing incomplete or corrupted text. For example, the input, "Thank you ~~ me to your party ~~ week", might generate the output, "Thank you for inviting me to your party last week".
translation between natural languages (machine translation)
judging the pragmatic acceptability of natural language. For example, the following sentence might be judged "not acceptable", because even though it is syntactically well-formed, it is improbable in ordinary human usage: The course is jumping well.

Note that while each of these tasks is trivial or obvious for human native speakers of the language (or languages), they have typically proved challenging for previous generations of machine learning architectures. === Tasks === In general, there are 3 classes of language modelling tasks: "masked", "autoregressive", and "prefixLM". These classes are independent of a specific modeling architecture such as Transformer, but they are often discussed in the context of Transformer. In a masked task, one or more of the tokens is masked out, and the model would produce a probability distribution predicting what the masked-out tokens are based on the context. The loss function for the task is typically the sum of log-perplexities for the masked-out tokens: Loss = − ∑ t ∈ masked tokens ln ⁡ ( probability of t conditional on its context ) {\displaystyle {\text{Loss}}=-\sum _{t\in {\text{masked tokens}}}\ln({\text{probability of }}t{\text{ conditional on its context}})} and the model is trained to minimize this loss function. The BERT series of models are trained for masked token prediction and next-sentence prediction.
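The masked-token loss above can be sketched directly. This is a toy with hand-written probability tables standing in for model outputs, just to show the sum of negative log-probabilities over the masked positions:

```python
import math

def masked_lm_loss(predicted_dists, targets, masked_positions):
    """Sum of negative log-probabilities of the true tokens at the
    masked-out positions, as in the loss formula above."""
    return -sum(math.log(predicted_dists[i][targets[i]])
                for i in masked_positions)

# Two positions are masked; the "model" assigns the true tokens
# probability 0.5 and 0.25 respectively.
dists = [{"cat": 0.5, "dog": 0.5}, {"sat": 0.25, "ran": 0.75}]
targets = ["cat", "sat"]
loss = masked_lm_loss(dists, targets, [0, 1])
print(loss)  # -(ln 0.5 + ln 0.25) = ln 8
```

Training pushes the probabilities of the true tokens toward 1, which drives each term (and hence the loss) toward 0.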
In an autoregressive task, the entire sequence is masked at first, and the model produces a probability distribution for the first token. Then the first token is revealed and the model predicts the second token, and so on. The loss function for the task is still typically the same. The GPT series of models are trained by autoregressive tasks. In a prefixLM task, the sequence is divided into two parts. The first part is presented as context, and the model predicts the first token of the second part. Then that token is revealed, and the model predicts the next token, and so on. The loss function for the task is still typically the same. The T5 series of models are trained by prefixLM tasks. Note that "masked" as in "masked language modelling" is not "masked" as in "masked attention", and "prefixLM" (prefix language modeling) is not "prefixLM" (prefix language model). == Architecture == All transformers have the same primary components:

Tokenizers, which convert text into tokens.
Embedding layer, which converts tokens and positions of the tokens into vector representations.
Transformer layers, which carry out repeated transformations on the vector representations, extracting more and more linguistic information. These consist of alternating attention and feedforward layers. There are two major types of transformer layers: encoder layers and decoder layers, with further variants.
Un-embedding layer, which converts the final vector representations back to a probability distribution over the tokens.

The following description follows exactly the Transformer as described in the original paper. There are variants, described in the following section. By convention, we write all vectors as row vectors. This, for example, means that pushing a vector through a linear layer means multiplying it by a weight matrix on the right, as x W {\displaystyle xW} .
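Returning to the task classes above: the operational difference between autoregressive and prefixLM training is just the token-visibility pattern. A minimal sketch of that rule (my own formulation, not taken from the T5 report): autoregressive visibility is strictly causal, while prefixLM additionally makes the entire context prefix visible to every position.

```python
def causal_visible(i, j):
    """Autoregressive: position i may see position j only if j <= i."""
    return j <= i

def prefix_lm_visible(i, j, prefix_len):
    """PrefixLM: the prefix is fully visible to every position; beyond
    the prefix, visibility is causal as in the autoregressive case."""
    return j < prefix_len or j <= i

# With a 2-token prefix in a 4-token sequence, position 0 may "look ahead"
# within the prefix, but no position may see the unrevealed suffix.
print(prefix_lm_visible(0, 1, 2), prefix_lm_visible(1, 3, 2))
```

The loss is computed the same way in both cases; only which tokens each position is allowed to condition on changes.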
=== Tokenization === As the Transformer architecture natively processes numerical data, not text, there must be a translation between text and tokens. A token is an integer that represents a character, or a short segment of characters. On the input side, the input text is parsed into a token sequence. Similarly, on the output side, the output tokens are converted back to text. The module doing the conversion between texts and token sequences is a tokenizer. The set of all tokens is the vocabulary of the tokenizer, and its size is the vocabulary size n vocabulary {\displaystyle n_{\text{vocabulary}}} . When faced with tokens outside the vocabulary, typically a special token is used, written as "[UNK]" for "unknown". Some commonly used tokenizers are byte pair encoding, WordPiece, and SentencePiece. === Embedding === Each token is converted into an embedding vector via a lookup table. Equivalently, this multiplies a one-hot representation of the token by an embedding matrix M {\displaystyle M} . For example, if the input token is 3 {\displaystyle 3} , then the one-hot representation is [ 0 , 0 , 0 , 1 , 0 , 0 , … ] {\displaystyle [0,0,0,1,0,0,\dots ]} , and its embedding vector is E m b e d ( 3 ) = [ 0 , 0 , 0 , 1 , 0 , 0 , … ] M {\displaystyle \mathrm {Embed} (3)=[0,0,0,1,0,0,\dots ]M} The token embedding vectors are added to their respective positional encoding vectors (see below), producing the sequence of input vectors. The number of dimensions in an embedding vector is called hidden size or embedding size and written as d emb {\displaystyle d_{\text{emb}}} . This size is written as d model {\displaystyle d_{\text{model}}} in the original Transformer paper. === Un-embedding === An un-embedding layer is almost the reverse of an embedding layer. Whereas an embedding layer converts a token into a vector, an un-embedding layer converts a vector into a probability distribution over tokens.
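The lookup-versus-one-hot equivalence described in the embedding section above is easy to verify on a toy embedding matrix (the values here are arbitrary):

```python
def embed_lookup(token_id, M):
    """Table lookup: Embed(i) is simply row i of the embedding matrix."""
    return M[token_id]

def embed_onehot(token_id, M):
    """Equivalent formulation: a one-hot row vector times the matrix M."""
    onehot = [1.0 if i == token_id else 0.0 for i in range(len(M))]
    return [sum(onehot[i] * M[i][j] for i in range(len(M)))
            for j in range(len(M[0]))]

M = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]  # vocab 4, d_emb 2
print(embed_lookup(3, M), embed_onehot(3, M))  # both give [0.7, 0.8]
```

Real implementations use the direct lookup, since multiplying by a one-hot vector does the same work far less efficiently.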
The un-embedding layer is a linear-softmax layer: U n E m b e d ( x ) = s o f t m a x ( x W + b ) {\displaystyle \mathrm {UnEmbed} (x)=\mathrm {softmax} (xW+b)} The matrix has shape ( d emb , n vocabulary ) {\displaystyle (d_{\text{emb}},n_{\text{vocabulary}})} . The embedding matrix M {\displaystyle M} and the un-embedding matrix W {\displaystyle W} are sometimes required to be transposes of each other, a practice called weight tying. === Positional encoding === A positional encoding is a fixed-size vector representation of the relative positions of tokens within a sequence: it provides the transformer model with information about where the words are in the input sequence. This shall induce a bias towards the order of the input sequence, so that, for example, the input sequence "man bites dog" is processed differently from "dog bites man". The positional encoding is defined as a function of type f : R → R d ; d ∈ Z , d > 0 {\displaystyle f:\mathbb {R} \to \mathbb {R} ^{d};d\in \mathbb {Z} ,d>0} , where d {\displaystyle d} is a positive even integer. The full positional encoding defined in the original paper is: ( f ( t ) 2 k , f ( t ) 2 k + 1 ) = ( sin ⁡ ( θ ) , cos ⁡ ( θ ) ) ∀ k ∈ { 0 , 1 , … , d / 2 − 1 } {\displaystyle (f(t)_{2k},f(t)_{2k+1})=(\sin(\theta ),\cos(\theta ))\quad \forall k\in \{0,1,\ldots ,d/2-1\}} where θ = t r k , r = N 2 / d {\displaystyle \theta ={\frac {t}{r^{k}}},r=N^{2/d}} . Here, N {\displaystyle N} is a free parameter that should be significantly larger than the biggest k {\displaystyle k} that would be input into the positional encoding function. The original paper uses N = 10000 {\displaystyle N=10000} . The function is in a simpler form when written as a complex function of type f : R → C d / 2 {\displaystyle f:\mathbb {R} \to \mathbb {C} ^{d/2}} f ( t ) = ( e i t / r k ) k = 0 , 1 , … , d 2 − 1 {\displaystyle f(t)=\left(e^{it/r^{k}}\right)_{k=0,1,\ldots ,{\frac {d}{2}}-1}} where r = N 2 / d {\displaystyle r=N^{2/d}} . 
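The sinusoidal encoding above follows directly from the formula, and the complex form makes its shift behaviour easy to check numerically. A small sketch of both (a quick verification, not part of any model):

```python
import cmath
import math

def positional_encoding(t, d, N=10000):
    """Sinusoidal encoding from the original paper: for pair index k,
    (sin(theta), cos(theta)) with theta = t / r**k and r = N**(2/d)."""
    r = N ** (2 / d)
    enc = []
    for k in range(d // 2):
        theta = t / r ** k
        enc += [math.sin(theta), math.cos(theta)]
    return enc

def f_complex(t, d, N=10000):
    """Equivalent complex form: f(t)_k = exp(i t / r**k)."""
    r = N ** (2 / d)
    return [cmath.exp(1j * t / r ** k) for k in range(d // 2)]

print(positional_encoding(0, 8))  # position 0 -> [0.0, 1.0, 0.0, 1.0, ...]
```

In the complex form, multiplying exponentials adds their arguments, so f(t + Δt) equals the elementwise product of f(Δt) and f(t); this is the "shift is a linear transformation" property discussed next.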
The main reason for using this positional encoding function is that using it, shifts are linear transformations: f ( t + Δ t ) = d i a g ( f ( Δ t ) ) f ( t ) {\displaystyle f(t+\Delta t)=\mathrm {diag} (f(\Delta t))f(t)} where Δ t ∈ R {\displaystyle \Delta t\in \mathbb {R} } is the distance one wishes to shift. This allows the transformer to take any encoded position, and find the encoding of the position n-steps-ahead or n-steps-behind, by a matrix multiplication. By taking a linear sum, any convolution can also be implemented as linear transformations: ∑ j c j f ( t + Δ t j ) = ( ∑ j c j d i a g ( f ( Δ t j ) ) ) f ( t ) {\displaystyle \sum _{j}c_{j}f(t+\Delta t_{j})=\left(\sum _{j}c_{j}\,\mathrm {diag} (f(\Delta t_{j}))\right)f(t)} for any constants c j {\displaystyle c_{j}} . This allows the transformer to take any encoded position and find a linear sum of the encoded locations of its neighbors. This sum of encoded positions, when fed into the attention mechanism, would create attention weights on its neighbors, much like what happens in a convolutional neural network language model. In the author's words, "we hypothesized it would allow the model to easily learn to attend by relative position." In typical implementations, all operations are done over the real numbers, not the complex numbers, but since complex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference. === Encoder-decoder (overview) === Like earlier seq2seq models, the original transformer model used an encoder-decoder architecture. The encoder consists of encoding layers that process all the input tokens together one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output and the decoder's output tokens so far. 
The purpose of each encoder layer is to create contextualized representations of the tokens, where each representation corresponds to a token that "mixes" information from other input tokens via a self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of the encoder (contextualized input token representations), and (2) self-attention for "mixing" information among the input tokens to the decoder (i.e. the tokens generated so far during inference time). Both the encoder and decoder layers have a feed-forward neural network for additional processing of their outputs and contain residual connections and layer normalization steps. These feed-forward layers contain most of the parameters in a Transformer model. === Feedforward network === The feedforward network (FFN) modules in a Transformer are 2-layered multilayer perceptrons: F F N ( x ) = ϕ ( x W ( 1 ) + b ( 1 ) ) W ( 2 ) + b ( 2 ) {\displaystyle \mathrm {FFN} (x)=\phi (xW^{(1)}+b^{(1)})W^{(2)}+b^{(2)}} where W ( 1 ) {\displaystyle W^{(1)}} and W ( 2 ) {\displaystyle W^{(2)}} are weight matrices and b ( 1 ) {\displaystyle b^{(1)}} and b ( 2 ) {\displaystyle b^{(2)}} are bias vectors, and ϕ {\displaystyle \phi } is its activation function. The original Transformer used ReLU activation. The number of neurons in the middle layer is called intermediate size (GPT), filter size (BERT), or feedforward size (BERT). It is typically larger than the embedding size. For example, in both GPT-2 series and BERT series, the intermediate size of a model is 4 times its embedding size: d ffn = 4 d emb {\displaystyle d_{\text{ffn}}=4d_{\text{emb}}} . === Scaled dot-product attention === ==== Attention head ==== The attention mechanisms used in the Transformer architecture are scaled dot-product attention units.
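The FFN formula above reads off almost literally as code. A minimal sketch with ReLU as in the original paper (the weights here are hand-picked toy values; real models learn them and use d_ffn = 4 d_emb):

```python
def ffn(x, W1, b1, W2, b2):
    """Position-wise feedforward network:
    FFN(x) = relu(x W1 + b1) W2 + b2, with x a row vector."""
    hidden = [max(0.0, sum(xi * W1[i][j] for i, xi in enumerate(x)) + b1[j])
              for j in range(len(b1))]
    return [sum(h * W2[i][j] for i, h in enumerate(hidden)) + b2[j]
            for j in range(len(b2))]

# Toy sizes: d_emb = 2, d_ffn = 4.
W1 = [[1.0, -1.0, 0.5, 0.0], [0.0, 1.0, -0.5, 1.0]]
b1 = [0.0, 0.0, 0.0, 0.0]
W2 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
b2 = [0.0, 0.0]
print(ffn([1.0, 2.0], W1, b1, W2, b2))  # [1.0, 1.0]
```

Note the "position-wise" convention: the same weights are applied independently to each token's vector, which is why the FFN can be mapped over the rows of the layer's input matrix.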
For each unit, the transformer model learns three weight matrices: the query weights W Q {\displaystyle W^{Q}} , the key weights W K {\displaystyle W^{K}} , and the value weights W V {\displaystyle W^{V}} . The module takes three sequences, a query sequence, a key sequence, and a value sequence. The query sequence is a sequence of length ℓ seq, query {\displaystyle \ell _{\text{seq, query}}} , and each entry is a vector of dimension d emb, query {\displaystyle d_{\text{emb, query}}} . Similarly for the key and value sequences. For each vector x i , query {\displaystyle x_{i,{\text{query}}}} in the query sequence, it is multiplied by a matrix W Q {\displaystyle W^{Q}} to produce a query vector q i = x i , query W Q {\displaystyle q_{i}=x_{i,{\text{query}}}W^{Q}} . The matrix of all query vectors is the query matrix: Q = X query W Q {\displaystyle Q=X_{\text{query}}W^{Q}} Similarly, we construct the key matrix K = X key W K {\displaystyle K=X_{\text{key}}W^{K}} and the value matrix V = X value W V {\displaystyle V=X_{\text{value}}W^{V}} . It is usually the case that all W Q , W K , W V {\displaystyle W^{Q},W^{K},W^{V}} are square matrices, meaning d emb, query = d query {\displaystyle d_{\text{emb, query}}=d_{\text{query}}} , etc. Attention weights are calculated using the query and key vectors: the attention weight a i j {\displaystyle a_{ij}} from token i {\displaystyle i} to token j {\displaystyle j} is the dot product between q i {\displaystyle q_{i}} and k j {\displaystyle k_{j}} . The attention weights are divided by the square root of the dimension of the key vectors, d k {\displaystyle {\sqrt {d_{k}}}} , which stabilizes gradients during training, and passed through a softmax which normalizes the weights. The fact that W Q {\displaystyle W^{Q}} and W K {\displaystyle W^{K}} are different matrices allows attention to be non-symmetric: if token i {\displaystyle i} attends to token j {\displaystyle j} (i.e. 
q i ⋅ k j {\displaystyle q_{i}\cdot k_{j}} is large), this does not necessarily mean that token j {\displaystyle j} will attend to token i {\displaystyle i} (i.e. q j ⋅ k i {\displaystyle q_{j}\cdot k_{i}} could be small). The output of the attention unit for token i {\displaystyle i} is the weighted sum of the value vectors of all tokens, weighted by a i j {\displaystyle a_{ij}} , the attention from token i {\displaystyle i} to each token. The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training because the whole computation reduces to highly optimized matrix operations. The matrices Q {\displaystyle Q} , K {\displaystyle K} and V {\displaystyle V} are defined as the matrices where the i {\displaystyle i} th rows are vectors q i {\displaystyle q_{i}} , k i {\displaystyle k_{i}} , and v i {\displaystyle v_{i}} respectively. Then we can represent the attention as Attention ( Q , K , V ) = softmax ( Q K T d k ) V {\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}} where the softmax is applied over each of the rows of the matrix. The number of dimensions in a query vector is query size d query {\displaystyle d_{\text{query}}} and similarly for the key size d key {\displaystyle d_{\text{key}}} and value size d value {\displaystyle d_{\text{value}}} . The output dimension of an attention head is its head dimension d head {\displaystyle d_{\text{head}}} . The attention mechanism requires the following three equalities to hold: ℓ seq, key = ℓ seq, value , d query = d key , d value = d head {\displaystyle \ell _{\text{seq, key}}=\ell _{\text{seq, value}},\;d_{\text{query}}=d_{\text{key}},\;d_{\text{value}}=d_{\text{head}}} but is otherwise unconstrained.
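The Attention(Q, K, V) formula above can be sketched as a small self-contained function in plain Python (no batching or learned projections; Q, K, V are given directly):

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax_rows(M):
    out = []
    for row in M:
        m = max(row)  # subtract the max for numerical stability
        es = [math.exp(x - m) for x in row]
        s = sum(es)
        out.append([e / s for e in es])
    return out

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, with softmax applied row-wise."""
    d_k = len(K[0])
    KT = [list(col) for col in zip(*K)]
    scores = [[s / math.sqrt(d_k) for s in row] for row in matmul(Q, KT)]
    return matmul(softmax_rows(scores), V)

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))  # each query attends mostly to its matching key
```

Because each row of attention weights sums to 1, each output row is a convex combination of the value rows; here the first query's output leans toward the first value row and the second toward the second.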
If the attention head is used in a self-attention fashion, then X query = X key = X value {\displaystyle X_{\text{query}}=X_{\text{key}}=X_{\text{value}}} . If the attention head is used in a cross-attention fashion, then usually X query ≠ X key = X value {\displaystyle X_{\text{query}}\neq X_{\text{key}}=X_{\text{value}}} . It is theoretically possible for all three to be different, but that is rarely the case in practice. ==== Multiheaded attention ==== One set of ( W Q , W K , W V ) {\displaystyle \left(W^{Q},W^{K},W^{V}\right)} matrices is called an attention head, and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, multiple attention heads allow the model to do this for different definitions of "relevance". Specifically, the query and key projection matrices, W Q {\displaystyle W^{Q}} and W K {\displaystyle W^{K}} , which are involved in the attention score computation, define the "relevance". Meanwhile, the value projection matrix W V {\displaystyle W^{V}} , in combination with the corresponding part of the output projection matrix W O {\displaystyle W^{O}} , determines how the attended tokens influence what information is passed to subsequent layers and ultimately the output logits. In addition, the scope of attention, or the range of token relationships captured by each attention head, can expand as tokens pass through successive layers. This allows the model to capture more complex and long-range dependencies in deeper layers. Many transformer attention heads encode relevance relations that are meaningful to humans. For example, some attention heads can attend mostly to the next word, while others mainly attend from verbs to their direct objects. The computations for each attention head can be performed in parallel, which allows for fast processing. The outputs for the attention layer are concatenated to pass into the feed-forward neural network layers.
Concretely, let the multiple attention heads be indexed by i {\displaystyle i} , then we have MultiheadedAttention ( Q , K , V ) = Concat i ∈ [ n heads ] ( Attention ( X W i Q , X W i K , X W i V ) ) W O {\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}({\text{Attention}}(XW_{i}^{Q},XW_{i}^{K},XW_{i}^{V}))W^{O}} where the matrix X {\displaystyle X} is the concatenation of word embeddings, and the matrices W i Q , W i K , W i V {\displaystyle W_{i}^{Q},W_{i}^{K},W_{i}^{V}} are "projection matrices" owned by individual attention head i {\displaystyle i} , and W O {\displaystyle W^{O}} is a final projection matrix owned by the whole multi-head attention layer. It is theoretically possible for each attention head to have a different head dimension d head {\displaystyle d_{\text{head}}} , but that is rarely the case in practice. As an example, in the smallest GPT-2 model, there are only self-attention mechanisms. It has the following dimensions: d emb = 768 , n head = 12 , d head = 64 {\displaystyle d_{\text{emb}}=768,n_{\text{head}}=12,d_{\text{head}}=64} Since 12 × 64 = 768 {\displaystyle 12\times 64=768} , its output projection matrix W O ∈ R ( 12 × 64 ) × 768 {\displaystyle W^{O}\in \mathbb {R} ^{(12\times 64)\times 768}} is a square matrix. ==== Masked attention ==== The Transformer architecture is constructed to calculate output tokens iteratively. If t = 0 {\displaystyle t=0} denotes the step that computes the first output token i = 0 {\displaystyle i=0} , then for every later step t > 0 {\displaystyle t>0} , the output token i = 0 {\displaystyle i=0} must remain unchanged. This ensures properties of the model similar to autoregressive models.
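The multi-head formula above is concatenation plus one output projection. A toy sketch with two single-dimensional heads (hand-picked projection matrices, with W^O set to the identity so the concatenation is visible in the output):

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def attention(Q, K, V):
    """Row-wise softmax(q K^T / sqrt(d_k)) V for one head."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qa * ka for qa, ka in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        m = max(scores)
        es = [math.exp(s - m) for s in scores]
        z = sum(es)
        w = [e / z for e in es]
        out.append([sum(wi * v[b] for wi, v in zip(w, V))
                    for b in range(len(V[0]))])
    return out

def multihead(X, heads, WO):
    """Each head projects X with its own (WQ, WK, WV); head outputs are
    concatenated row-wise and projected once with WO."""
    per_head = [attention(matmul(X, WQ), matmul(X, WK), matmul(X, WV))
                for WQ, WK, WV in heads]
    concat = [sum((H[i] for H in per_head), []) for i in range(len(X))]
    return matmul(concat, WO)

# Head 1 looks only at the first embedding dimension, head 2 at the second.
h1 = ([[1.0], [0.0]],) * 3
h2 = ([[0.0], [1.0]],) * 3
WO = [[1.0, 0.0], [0.0, 1.0]]
print(multihead([[1.0, 0.0], [0.0, 1.0]], [h1, h2], WO))
```

Each head runs independently, which is what makes the computation embarrassingly parallel across heads.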
Therefore, at every time step t {\displaystyle t} , the calculation for all outputs i {\displaystyle i} should not have access to tokens at position j {\displaystyle j} for j >= i {\displaystyle j>=i} (as it naturally is the case for time step t = i {\displaystyle t=i} , when tokens j > t {\displaystyle j>t} are not yet calculated). This behavior may be accomplished before the softmax stage by adding a mask matrix M {\displaystyle M} that is − ∞ {\displaystyle -\infty } at entries where the attention link must be cut, and 0 {\displaystyle 0} at other places: MaskedAttention ( Q , K , V ) = softmax ( M + Q K T d k ) V {\displaystyle {\begin{aligned}{\text{MaskedAttention}}(Q,K,V)={\text{softmax}}\left(M+{\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}} The following matrix is commonly used in decoder self-attention modules, called "causal masking": M causal = [ 0 − ∞ − ∞ … − ∞ 0 0 − ∞ … − ∞ 0 0 0 … − ∞ ⋮ ⋮ ⋮ ⋱ ⋮ 0 0 0 … 0 ] {\displaystyle M_{\text{causal}}={\begin{bmatrix}0&-\infty &-\infty &\dots &-\infty \\0&0&-\infty &\dots &-\infty \\0&0&0&\dots &-\infty \\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\dots &0\end{bmatrix}}} In words, it means that each token can pay attention to itself, and every token before it, but not any after it. A non-masked attention module can be thought of as a masked attention module where the mask has all entries zero. As an example of an uncommon use of mask matrix, the XLNet considers all masks of the form P M causal P − 1 {\displaystyle PM_{\text{causal}}P^{-1}} , where P {\displaystyle P} is a random permutation matrix. === Encoder === An encoder consists of an embedding layer, followed by multiple encoder layers. Each encoder layer consists of two major components: a self-attention mechanism and a feed-forward layer. 
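The causal mask above and its effect on the softmax can be sketched directly (illustrative, with all raw attention scores set to zero so only the mask shapes the weights):

```python
import math

NEG_INF = float("-inf")

def causal_mask(n):
    """0 on and below the diagonal, -inf above: each token may attend to
    itself and every earlier token, but none after it."""
    return [[0.0 if j <= i else NEG_INF for j in range(n)] for i in range(n)]

def softmax_row(row):
    m = max(row)
    es = [math.exp(x - m) for x in row]  # exp(-inf) == 0.0
    s = sum(es)
    return [e / s for e in es]

# With uniform scores, masking spreads attention over the visible prefix.
n = 3
scores = [[0.0] * n for _ in range(n)]
masked = [[s + m for s, m in zip(sr, mr)]
          for sr, mr in zip(scores, causal_mask(n))]
weights = [softmax_row(r) for r in masked]
print(weights)
# row 0 -> [1, 0, 0]; row 1 -> [0.5, 0.5, 0]; row 2 -> [1/3, 1/3, 1/3]
```

Adding the mask before the softmax, rather than zeroing weights after it, keeps each row a proper probability distribution over the visible tokens.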
It takes a sequence of input vectors, applies the self-attention mechanism to produce an intermediate sequence of vectors, then applies the feed-forward layer to each vector individually. Schematically, we have: given input vectors h 0 , h 1 , … combine them into a matrix H = [ h 0 h 1 ⋮ ] EncoderLayer ( H ) = [ FFN ( MultiheadedAttention ( H , H , H ) 0 ) FFN ( MultiheadedAttention ( H , H , H ) 1 ) ⋮ ] {\displaystyle {\begin{aligned}{\text{given input vectors }}&h_{0},h_{1},\dots \\{\text{combine them into a matrix }}H&={\begin{bmatrix}h_{0}\\h_{1}\\\vdots \end{bmatrix}}\\{\text{EncoderLayer}}(H)&={\begin{bmatrix}{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{0})\\{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{1})\\\vdots \end{bmatrix}}\\\end{aligned}}} where FFN {\displaystyle {\text{FFN}}} stands for "feed-forward network". We can more succinctly write it as EncoderLayer ( H ) = FFN ( MultiheadedAttention ( H , H , H ) ) {\displaystyle {\text{EncoderLayer}}(H)={\text{FFN}}({\text{MultiheadedAttention}}(H,H,H))} with the implicit convention that the FFN {\displaystyle {\text{FFN}}} is applied to each row of the matrix individually. The encoder layers are stacked. The first encoder layer takes the sequence of input vectors from the embedding layer, producing a sequence of vectors. This sequence of vectors is processed by the second encoder layer, and so on. The output from the final encoder layer is then used by the decoder. As the encoder processes the entire input all at once, every token can attend to every other token (all-to-all attention), so there is no need for causal masking. === Decoder === A decoder consists of an embedding layer, followed by multiple decoder layers, followed by an un-embedding layer. Each decoder layer consists of three major components: a causally masked self-attention mechanism, a cross-attention mechanism, and a feed-forward neural network.
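The succinct encoder-layer form above, EncoderLayer(H) = FFN(MultiheadedAttention(H, H, H)), is just function composition with the FFN mapped over rows. A shape-only sketch using stand-in functions (uniform mean-pooling in place of learned attention, a doubling function in place of the learned FFN; residuals and LayerNorm are omitted):

```python
def encoder_layer(H, attn, ffn):
    """Self-attention mixes the rows of H; the FFN is then applied to
    each resulting row individually."""
    mixed = attn(H, H, H)  # queries, keys, and values all come from H
    return [ffn(row) for row in mixed]

# Stand-ins: "attention" averages all value rows; "FFN" doubles a row.
mean_attn = lambda Q, K, V: [[sum(col) / len(V) for col in zip(*V)]
                             for _ in Q]
double = lambda row: [2.0 * x for x in row]
print(encoder_layer([[1.0, 3.0], [3.0, 1.0]], mean_attn, double))
```

The point of the sketch is the data flow: the only cross-token interaction happens inside the attention call, and everything after it is per-row.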
The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called the encoder-decoder attention. Like the first encoder layer, the first decoder layer takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow. This allows for autoregressive text generation. For decoding, all-to-all attention is inappropriate, because a token cannot attend to tokens not yet generated. Thus, the self-attention module in the decoder is causally masked. In contrast, the cross-attention mechanism attends to the output vectors of the encoder, which are computed before the decoder starts decoding. Consequently, there is no need for masking in the cross-attention mechanism. Schematically, we have: {\displaystyle {\begin{aligned}H'&={\text{MaskedMultiheadedAttention}}(H,H,H)\\{\text{DecoderLayer}}(H)&={\text{FFN}}({\text{MultiheadedAttention}}(H',H^{E},H^{E}))\end{aligned}}} where H E {\displaystyle H^{E}} is the matrix with rows being the output vectors from the encoder. The last decoder layer is followed by a final un-embedding layer to produce the output probabilities over the vocabulary. Then, one of the tokens is sampled according to this distribution, and the decoder can be run again to produce the next token, and so on, autoregressively generating the output text. === Adapted architectures === Many large language models, since they do not need to predict a whole new sequence from an input sequence, use only the encoder or decoder of the original transformer architecture.
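The autoregressive loop just described (un-embed, pick a token, run the decoder again) can be sketched with greedy selection. Here `step_fn` is a stand-in for the entire decoder stack plus un-embedding, not a real API:

```python
import numpy as np

def decode_autoregressively(step_fn, bos_id, eos_id, max_len=10):
    """Greedy autoregressive decoding: repeatedly run the decoder on the
    tokens generated so far and append the most probable next token.
    step_fn(tokens) returns a probability distribution over the vocabulary
    for the next token."""
    tokens = [bos_id]
    while len(tokens) < max_len:
        probs = step_fn(tokens)
        next_id = int(np.argmax(probs))  # greedy; sampling is also common
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens
```

With a toy `step_fn` that deterministically predicts the next token id, the loop generates tokens until the end-of-sequence id appears.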
Early GPT models are decoder-only models trained to predict the next token in a sequence. BERT, another language model, only makes use of an encoder, and is trained to predict a randomly masked token in a sequence. == Full transformer architecture == === Sublayers === Each encoder layer contains 2 sublayers: the self-attention and the feedforward network. Each decoder layer contains 3 sublayers: the causally masked self-attention, the cross-attention, and the feedforward network. The final points of detail are the residual connections and layer normalization (LayerNorm, or LN), which, while conceptually unnecessary, are needed in practice for numerical stability and convergence. The residual connection, which is introduced to avoid vanishing gradient issues and stabilize the training process, can be expressed as y = F(x) + x: the output y is the sum of the transformation F(x) of the input x and the input itself. Adding the input x preserves the input information and avoids issues when the gradient of F(x) is close to zero. Similarly to how the feedforward network modules are applied individually to each vector, the LayerNorm is also applied individually to each vector. There are two common conventions in use: the post-LN and the pre-LN convention. In the post-LN convention, the output of each sublayer is {\displaystyle \mathrm {LayerNorm} (x+\mathrm {Sublayer} (x))} where {\displaystyle \mathrm {Sublayer} (x)} is the function implemented by the sublayer itself. In the pre-LN convention, the output of each sublayer is {\displaystyle x+\mathrm {Sublayer} (\mathrm {LayerNorm} (x))} The original 2017 Transformer used the post-LN convention. It was difficult to train and required careful hyperparameter tuning and a learning-rate "warm-up", where the learning rate starts small and is gradually increased.
The pre-LN convention, proposed several times in 2018, was found to be easier to train, requiring no warm-up and leading to faster convergence. === Pseudocode === The following is pseudocode for a standard pre-LN encoder-decoder Transformer:

input: Encoder input t_e
       Decoder input t_d
output: Array of probability distributions, with shape (decoder vocabulary size x length(decoder output sequence))

/* encoder */
z_e ← encoder.tokenizer(t_e)
for each t in 1:length(z_e) do
    z_e[t] ← encoder.embedding(z_e[t]) + encoder.positional_embedding(t)
for each l in 1:length(encoder.layers) do
    layer ← encoder.layers[l]
    /* first sublayer */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.multiheaded_attention(z_e, z_e, z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← z_e[t] + z_e_copy[t]
    /* second sublayer */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.feedforward(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← z_e[t] + z_e_copy[t]
for each t in 1:length(z_e) do
    z_e[t] ← encoder.final_layer_norm(z_e[t])

/* decoder */
z_d ← decoder.tokenizer(t_d)
for each t in 1:length(z_d) do
    z_d[t] ← decoder.embedding(z_d[t]) + decoder.positional_embedding(t)
for each l in 1:length(decoder.layers) do
    layer ← decoder.layers[l]
    /* first sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.masked_multiheaded_attention(z_d, z_d, z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]
    /* second sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.multiheaded_attention(z_d, z_e, z_e)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]
    /* third sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.feedforward(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]
z_d ← decoder.final_layer_norm(z_d)
output_distributions ← []
for each t in 1:length(z_d) do
    output_distributions.append(decoder.unembed(z_d[t]))
return output_distributions

=== Terminology === The Transformer architecture, being modular, allows variations. Several common variations are described here. An "encoder-only" Transformer applies the encoder to map an input text into a sequence of vectors that represent the input text. This is usually used for text embedding and representation learning for downstream applications. BERT is encoder-only. Encoder-only models are less often used currently, as they were found to be not significantly better than training an encoder-decoder Transformer and then taking just the encoder. A "decoder-only" Transformer is not literally decoder-only, since without an encoder, the cross-attention mechanism has nothing to attend to. Thus, the decoder layers in a decoder-only Transformer are composed of just two sublayers: the causally masked self-attention and the feedforward network. This is usually used for text generation and instruction following. The models in the GPT series and Chinchilla series are decoder-only. An "encoder-decoder" Transformer is generally the same as the original Transformer, with 2 sublayers per encoder layer and 3 sublayers per decoder layer, etc. It might have minor architectural improvements, such as alternative activation functions or a different placement of normalization. This is also usually used for text generation and instruction following. The models in the T5 series are encoder-decoder. A "prefixLM" (prefix language model) is a decoder-only architecture, but with prefix masking, which is different from causal masking.
Specifically, it has a mask of the form {\displaystyle M_{\text{prefixLM}}={\begin{bmatrix}\mathbf {0} &-\infty \\\mathbf {0} &M_{\text{causal}}\end{bmatrix}}} where the first columns correspond to the "prefix", and the subsequent columns correspond to the autoregressively generated text based on the prefix. PrefixLM models resemble encoder-decoder models, but have less "sparsity". Such models are rarely used, though they are cited as theoretical possibilities and appear in benchmarked comparisons. There are also mixed seq2seq models. For example, in 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model with a Transformer-encoder–RNN-decoder model, on the argument that an RNN-decoder runs much faster than a Transformer-decoder when run autoregressively. == Subsequent work == === Alternative activation functions === The original transformer uses the ReLU activation function; other activation functions have since been adopted. The Llama series and PaLM used SwiGLU; both GPT-1 and BERT used GELU. Alternative activation functions are often used in combination with Gated Linear Units in the feedforward module. === Alternative normalizations === The normalization used in the Transformer can be different from LayerNorm. One example is RMSNorm, which is used in the Llama series. Other examples include CapsuleNorm, ScaleNorm, and FixNorm. === Alternative positional encodings === Transformers may use positional encoding methods other than sinusoidal. The original Transformer paper reported using a learned positional encoding, but found it not superior to the sinusoidal one. Later work found that causal masking itself provides enough signal for a Transformer decoder to learn to implicitly perform absolute positional encoding without a positional encoding module.
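For reference, the sinusoidal encoding of the original Transformer can be sketched as follows (a minimal NumPy illustration; the function name is not from any particular library):

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Original-Transformer-style encoding:
    PE[t, 2i]   = sin(t / 10000^(2i/d_model))
    PE[t, 2i+1] = cos(t / 10000^(2i/d_model))"""
    positions = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]             # (1, d_model/2)
    angles = positions / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions
    pe[:, 1::2] = np.cos(angles)  # odd dimensions
    return pe
```

Each position gets a fixed vector, added to the token embedding before the first layer; position 0 yields alternating 0s and 1s since sin(0) = 0 and cos(0) = 1.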
==== RoPE ==== RoPE (rotary positional embedding), is best explained by considering a list of 2-dimensional vectors [ ( x 1 ( 1 ) , x 1 ( 2 ) ) , ( x 2 ( 1 ) , x 2 ( 2 ) ) , ( x 3 ( 1 ) , x 3 ( 2 ) ) , . . . ] {\displaystyle [(x_{1}^{(1)},x_{1}^{(2)}),(x_{2}^{(1)},x_{2}^{(2)}),(x_{3}^{(1)},x_{3}^{(2)}),...]} . Now pick some angle θ {\displaystyle \theta } . Then RoPE encoding is RoPE ( x m ( 1 ) , x m ( 2 ) , m ) = ( cos ⁡ m θ − sin ⁡ m θ sin ⁡ m θ cos ⁡ m θ ) ( x m ( 1 ) x m ( 2 ) ) = ( x m ( 1 ) cos ⁡ m θ − x m ( 2 ) sin ⁡ m θ x m ( 2 ) cos ⁡ m θ + x m ( 1 ) sin ⁡ m θ ) {\displaystyle {\text{RoPE}}{\big (}x_{m}^{(1)},x_{m}^{(2)},m{\big )}={\begin{pmatrix}\cos m\theta &-\sin m\theta \\\sin m\theta &\cos m\theta \end{pmatrix}}{\begin{pmatrix}x_{m}^{(1)}\\x_{m}^{(2)}\\\end{pmatrix}}={\begin{pmatrix}x_{m}^{(1)}\cos m\theta -x_{m}^{(2)}\sin m\theta \\x_{m}^{(2)}\cos m\theta +x_{m}^{(1)}\sin m\theta \\\end{pmatrix}}} Equivalently, if we write the 2-dimensional vectors as complex numbers z m := x m ( 1 ) + i x m ( 2 ) {\displaystyle z_{m}:=x_{m}^{(1)}+ix_{m}^{(2)}} , then RoPE encoding is just multiplication by an angle: RoPE ( z m , m ) = e i m θ z m {\displaystyle {\text{RoPE}}{\big (}z_{m},m{\big )}=e^{im\theta }z_{m}} For a list of 2 n {\displaystyle 2n} -dimensional vectors, a RoPE encoder is defined by a sequence of angles θ ( 1 ) , . . . , θ ( n ) {\displaystyle \theta ^{(1)},...,\theta ^{(n)}} . Then the RoPE encoding is applied to each pair of coordinates. The benefit of RoPE is that the dot-product between two vectors depends on their relative location only: RoPE ( x , m ) T RoPE ( y , n ) = RoPE ( x , m + k ) T RoPE ( y , n + k ) {\displaystyle {\text{RoPE}}{\big (}x,m{\big )}^{T}{\text{RoPE}}{\big (}y,n{\big )}={\text{RoPE}}{\big (}x,m+k{\big )}^{T}{\text{RoPE}}{\big (}y,n+k{\big )}} for any integer k {\displaystyle k} . ==== ALiBi ==== ALiBi (Attention with Linear Biases) is not a replacement for the positional encoder on the original transformer. 
Instead, it is an additional positional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism is {\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+sB\right)V\end{aligned}}} Here, s {\displaystyle s} is a real number ("scalar"), and B {\displaystyle B} is the linear bias matrix defined by {\displaystyle B={\begin{pmatrix}0&1&2&3&\cdots \\-1&0&1&2&\cdots \\-2&-1&0&1&\cdots \\-3&-2&-1&0&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \\\end{pmatrix}}} in other words, B i , j = j − i {\displaystyle B_{i,j}=j-i} . The idea is that the linear bias matrix is a softened mask. Just as 0 {\displaystyle 0} represents full attention paid and − ∞ {\displaystyle -\infty } represents no attention paid, the linear bias matrix increases attention paid in one direction and decreases attention paid in the other direction. ALiBi allows pretraining on short context windows, then fine-tuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the "bottom" of the entire network (which is where the sinusoidal encoder of the original transformer, as well as RoPE and many others, are located). ==== Relative Position Encodings ==== Relative position encoding is similar to ALiBi, but more generic: {\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+B\right)V\end{aligned}}} where B {\displaystyle B} is a Toeplitz matrix, that is, B i , j = B i ′ , j ′ {\displaystyle B_{i,j}=B_{i',j'}} whenever i − j = i ′ − j ′ {\displaystyle i-j=i'-j'} .
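The linear bias matrix B with B[i, j] = j − i, and its use in the attention logits, can be sketched directly (an illustrative sketch, not a library implementation; note that B is a Toeplitz matrix, so it is also a special case of the relative position encodings above):

```python
import numpy as np

def alibi_bias(n):
    """Linear bias matrix with B[i, j] = j - i.
    Constant along diagonals, i.e. a Toeplitz matrix."""
    idx = np.arange(n)
    return (idx[None, :] - idx[:, None]).astype(float)

def alibi_scores(Q, K, s):
    """Attention logits with the ALiBi bias added: Q K^T / sqrt(d_k) + s*B."""
    n, d_k = Q.shape
    return Q @ K.T / np.sqrt(d_k) + s * alibi_bias(n)
```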
This is contrasted with the original sinusoidal positional encoding, which is an "absolute positional encoding". === Efficient implementation === The transformer model has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch. Transformers is a library produced by Hugging Face that supplies transformer-based architectures and pretrained models. ==== KV caching ==== When an autoregressive transformer is used for inference, such as generating text, the query vector is different at each step, but the already-computed key and value vectors are always the same. The KV caching method saves the computed key and value vectors at each attention block, so that they are not recomputed at each new token. PagedAttention applies memory paging to KV caching. If a transformer is used with a baked-in prompt, such as ["You are a customer support agent..."], then the key and value vectors can be computed for the prompt, and saved on disk. The saving in compute is significant when the model is used for many short interactions, such as in online chatbots. ==== FlashAttention ==== FlashAttention is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It is a communication-avoiding algorithm that performs matrix multiplications in blocks, such that each block fits within the cache of a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow). See the page on softmax for details. An improved version, FlashAttention-2, was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s on A100 GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention. 
Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and multi-query attention (MQA) and grouped-query attention (GQA). Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware like H100 GPUs and new data types like FP8. ==== Multi-Query Attention ==== Multi-Query Attention changes the multiheaded attention mechanism. Whereas normally, MultiheadedAttention ( Q , K , V ) = Concat i ∈ [ n heads ] ( Attention ( X W i Q , X W i K , X W i V ) ) W O {\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW_{i}^{K},XW_{i}^{V})\right)W^{O}} with Multi-Query Attention, there is just one W K , W V {\displaystyle W^{K},W^{V}} , thus: MultiQueryAttention ( Q , K , V ) = Concat i ∈ [ n heads ] ( Attention ( X W i Q , X W K , X W V ) ) W O {\displaystyle {\text{MultiQueryAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW^{K},XW^{V})\right)W^{O}} This has a neutral effect on model quality and training speed, but increases inference speed. More generally, grouped-query attention (GQA) partitions attention heads into groups, each of which shares the key-value pair. MQA is GQA with one group, while standard multiheaded attention is GQA with the maximal number of groups. Multihead Latent Attention (MLA) is a low-rank approximation to standard MHA. Specifically, each hidden vector, before entering the attention mechanism, is first projected to two low-dimensional spaces ("latent space"), one for query and one for key-value (KV vector). This design minimizes the KV cache, as only the low-dimensional KV vector needs to be cached. 
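The multi-query variant can be sketched by sharing a single key projection and a single value projection across all heads, while keeping one query projection per head (an illustrative sketch under the notation above, not a library implementation):

```python
import numpy as np

def multi_query_attention(X, W_Q_heads, W_K, W_V, W_O):
    """Multi-query attention: one query projection per head, but a single
    shared W_K and W_V (standard multi-head attention would have one
    W_K, W_V pair per head)."""
    d_k = W_K.shape[1]
    K, V = X @ W_K, X @ W_V            # computed once, shared by all heads
    heads = []
    for W_Q in W_Q_heads:              # one W_Q per head
        scores = (X @ W_Q) @ K.T / np.sqrt(d_k)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)   # row-wise softmax
        heads.append(w @ V)
    return np.concatenate(heads, axis=-1) @ W_O
```

Only one K and one V matrix need to be cached during inference, which is the source of the inference speedup; grouped-query attention interpolates by sharing K, V within groups of heads.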
==== Speculative decoding ==== Speculative decoding is a method to accelerate token decoding. Similarly to speculative execution in CPUs, future tokens are computed quickly, then verified. If the quickly computed tokens are incorrect, they are discarded and computed slowly. The key factor in speculative decoding is that a Transformer decoder can verify faster than it can decode, in the following sense. Suppose we have two transformer models like GPT-3 and GPT-3-small, both with a context window size of 512. To generate an entire context window autoregressively with greedy decoding with GPT-3, it must be run for 512 times, each time generating a token x 1 , x 2 , . . . , x 512 {\displaystyle x_{1},x_{2},...,x_{512}} , taking time 512 T GPT-3 {\displaystyle 512T_{\text{GPT-3}}} . However, if we had some educated guess for the values of these tokens, we could verify all of them in parallel, in one run of the model, by checking that each x t {\displaystyle x_{t}} is indeed the token with the largest log-likelihood in the t {\displaystyle t} -th output. In speculative decoding, a smaller model or some other simple heuristic is used to generate a few speculative tokens that are subsequently verified by the larger model. For example, suppose we use GPT-3-small to generate four speculative tokens: x ~ 1 , x ~ 2 , x ~ 3 , x ~ 4 {\displaystyle {\tilde {x}}_{1},{\tilde {x}}_{2},{\tilde {x}}_{3},{\tilde {x}}_{4}} . This only takes 4 T GPT-3-small {\displaystyle 4T_{\text{GPT-3-small}}} . These tokens are then run through the larger GPT-3 in one go. Suppose that x ~ 1 {\displaystyle {\tilde {x}}_{1}} and x ~ 2 {\displaystyle {\tilde {x}}_{2}} are verified by GPT-3 as what it would have picked, then those are kept, but x ~ 3 {\displaystyle {\tilde {x}}_{3}} is not, so x ~ 3 , x ~ 4 {\displaystyle {\tilde {x}}_{3},{\tilde {x}}_{4}} are discarded, and GPT-3 is run on those. 
This would take 4 T GPT-3-small + 3 T GPT-3 {\displaystyle 4T_{\text{GPT-3-small}}+3T_{\text{GPT-3}}} , which might be shorter than 4 T GPT-3 {\displaystyle 4T_{\text{GPT-3}}} . For non-greedy decoding, similar ideas apply, except the speculative tokens are accepted or rejected stochastically, in a way that guarantees the final output distribution is the same as if speculative decoding was not used. In Multi-Token Prediction, a single forward pass creates a final embedding vector, which then is un-embedded into a token probability. However, that vector can then be further processed by another Transformer block to predict the next token, and so on for arbitrarily many steps into the future. This trades off accuracy for speed, since each new token costs just one more Transformer block, rather than the entire stack. === Sub-quadratic transformers === Training transformer-based architectures can be expensive, especially for long inputs. Many methods have been developed to attempt to address the issue. In the image domain, Swin Transformer is an efficient architecture that performs attention inside shifting windows. In the audio domain, SepTr decouples the attention in time and frequency domains. Long Range Arena (2020) is a standard benchmark for comparing the behavior of transformer architectures over long inputs. ==== Alternative attention graphs ==== The standard attention graph is either all-to-all or causal, both of which scales as O ( N 2 ) {\displaystyle O(N^{2})} where N {\displaystyle N} is the number of tokens in a sequence. Reformer (2020) reduces the computational load from O ( N 2 ) {\displaystyle O(N^{2})} to O ( N ln ⁡ N ) {\displaystyle O(N\ln N)} by using locality-sensitive hashing and reversible layers. Sparse attention uses attention graphs that grows slower than O ( N 2 ) {\displaystyle O(N^{2})} . For example, BigBird (2020) uses random small-world networks which grows as O ( N ) {\displaystyle O(N)} . 
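One simple sparse attention pattern is a sliding window, in which each token attends only to nearby tokens, so the number of attended pairs grows as O(N) rather than O(N²). This is an illustrative example of a sub-quadratic attention graph, not the specific random small-world graph used by BigBird:

```python
import numpy as np

def local_attention_mask(n, window):
    """Sliding-window attention pattern: token i may attend to tokens j
    with |i - j| <= window, giving O(n * window) attended pairs instead
    of O(n^2).  Returns 0 where attention is allowed, -inf where cut."""
    M = np.full((n, n), -np.inf)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        M[i, lo:hi] = 0.0
    return M
```

The resulting matrix can be added to the attention logits before the softmax, exactly like the causal mask described earlier.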
Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value. ==== Random Feature Attention ==== Random Feature Attention (2021) uses Fourier random features: φ ( x ) = 1 D [ cos ⁡ ⟨ w 1 , x ⟩ , sin ⁡ ⟨ w 1 , x ⟩ , ⋯ cos ⁡ ⟨ w D , x ⟩ , sin ⁡ ⟨ w D , x ⟩ ] T {\displaystyle \varphi (x)={\frac {1}{\sqrt {D}}}[\cos \langle w_{1},x\rangle ,\sin \langle w_{1},x\rangle ,\cdots \cos \langle w_{D},x\rangle ,\sin \langle w_{D},x\rangle ]^{T}} where w 1 , . . . , w D {\displaystyle w_{1},...,w_{D}} are independent samples from the normal distribution N ( 0 , σ 2 I ) {\displaystyle N(0,\sigma ^{2}I)} . This choice of parameters satisfy E [ ⟨ φ ( x ) , φ ( y ) ⟩ ] = e − ‖ x − y ‖ 2 2 σ 2 {\displaystyle \mathbb {E} [\langle \varphi (x),\varphi (y)\rangle ]=e^{-{\frac {\|x-y\|^{2}}{2\sigma ^{2}}}}} , or e ⟨ x , y ⟩ / σ 2 = E [ ⟨ e ‖ x ‖ 2 / 2 σ 2 φ ( x ) , e ‖ y ‖ 2 / 2 σ 2 φ ( y ) ⟩ ] ≈ ⟨ e ‖ x ‖ 2 / 2 σ 2 φ ( x ) , e ‖ y ‖ 2 / 2 σ 2 φ ( y ) ⟩ {\displaystyle e^{\langle x,y\rangle /\sigma ^{2}}=\mathbb {E} [\langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle ]\approx \langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle } Consequently, the one-headed attention, with one query, can be written as Attention ( q , K , V ) = softmax ( q K T d k ) V ≈ φ ( q ) T ∑ i e ‖ k i ‖ 2 / 2 σ 2 φ ( k i ) v i T φ ( q ) T ∑ i e ‖ k i ‖ 2 / 2 σ 2 φ ( k i ) {\displaystyle {\text{Attention}}(q,K,V)={\text{softmax}}\left({\frac {qK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx {\frac {\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})v_{i}^{T}}{\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})}}} where σ = d K 1 / 4 {\displaystyle \sigma =d_{K}^{1/4}} . 
Similarly for multiple queries, and for multiheaded attention. This approximation can be computed in linear time, as we can compute the matrix φ ( k i ) v i T {\displaystyle \varphi (k_{i})v_{i}^{T}} first, then multiply it with the query. In essence, we have managed to obtain a more precise version of Attention ( Q , K , V ) = softmax ( Q K T d k ) V ≈ Q ( K T V / d k ) {\displaystyle {\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx Q(K^{T}V/{\sqrt {d_{k}}})} Performer (2022) uses the same Random Feature Attention, but w 1 , . . . , w D {\displaystyle w_{1},...,w_{D}} are first independently sampled from the normal distribution N ( 0 , σ 2 I ) {\displaystyle N(0,\sigma ^{2}I)} , then they are Gram-Schmidt processed. === Multimodality === Transformers can also be used/adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality. Multimodal models can either be trained from scratch, or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning. The LLaVA was a vision-language model composed of a language model (Vicuna-13B) and a vision model (ViT-L/14), connected by a linear layer. Only the linear layer is finetuned. Vision transformers adapt the transformer to computer vision by breaking down input images as a series of patches, turning them into vectors, and treating them like tokens in a standard transformer. Conformer and later Whisper follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors and treated like tokens in a standard transformer. Perceivers are a variant of Transformers designed for multimodality. 
For image generation, notable architectures are DALL-E 1 (2021), Parti (2022), Phenaki (2023), and Muse (2023). Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image. Parti is an encoder-decoder Transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image. Muse is an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens. During generation, all input tokens are masked, and the highest-confidence predictions are included for the next iteration, until all tokens are predicted. Phenaki is a text-to-video model. It is a bidirectional masked transformer conditioned on pre-computed text tokens. The generated tokens are then decoded to a video. == Applications == The transformer has had great success in natural language processing (NLP). Many large language models such as GPT-2, GPT-3, GPT-4, Gemini, AlbertAGPT, Claude, BERT, Grok, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of NLP-related subtasks and their related real-world applications, including:

machine translation
time series prediction
document summarization
document generation
named entity recognition (NER)
writing computer code based on requirements expressed in natural language
speech-to-text

Beyond traditional NLP, the transformer architecture has had success in other applications, such as:

biological sequence analysis
video understanding
protein folding (such as AlphaFold)
evaluating chess board positions; using static evaluation alone (that is, with no Minimax search), a transformer achieved an Elo of 2895, putting it at grandmaster level.
== See also ==

seq2seq – Family of machine learning approaches
Perceiver – Variant of Transformer designed for multimodal data
Vision transformer – Machine learning model for vision processing
Large language model – Type of machine learning model
BERT (language model) – Series of language models developed by Google AI
Generative pre-trained transformer – Type of large language model
T5 (language model) – Series of large language models developed by Google AI

== Notes ==

== References ==

== Further reading ==
Wikipedia/Transformer_(machine_learning)
GraphPad Software Inc. was a privately held software development corporation until its acquisition by Insight Partners in 2017. The company was renamed Insightful Science, which itself merged with Dotmatics in 2021. The original software was written by Harvey Motulsky in 1989, and the company was co-founded by Motulsky and Earl Beutler. The company operates in California. Its products include GraphPad Prism, a 2D scientific graphing, biostatistics, and curve-fitting application, and the free, web-based statistical calculation software GraphPad QuickCalcs. == GraphPad Prism == GraphPad Prism is a commercial scientific 2D graphing and statistics software for Windows and Mac OS desktop computers. Software features include nonlinear regression, with functionalities including the removal of outliers, comparisons of models, comparisons of curves, and interpolation of standard curves. The software allows the automatic updating of results and graphs, and has functionality for displaying error bars. Features for usability include built-in formulae, batch processing and standardisation features, along with automated analysis and data validation. == References ==
Wikipedia/GraphPad_InStat
Wikipedia/GraphPad_Prism
Recurrent neural networks (RNNs) are a class of artificial neural networks designed for processing sequential data, such as text, speech, and time series, where the order of elements is important. Unlike feedforward neural networks, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences. The fundamental building block of RNNs is the recurrent unit, which maintains a hidden state—a form of memory that is updated at each time step based on the current input and the previous hidden state. This feedback mechanism allows the network to learn from past inputs and incorporate that knowledge into its current processing. RNNs have been successfully applied to tasks such as unsegmented, connected handwriting recognition, speech recognition, natural language processing, and neural machine translation. However, traditional RNNs suffer from the vanishing gradient problem, which limits their ability to learn long-range dependencies. This issue was addressed by the development of the long short-term memory (LSTM) architecture in 1997, making it the standard RNN variant for handling long-term dependencies. Later, gated recurrent units (GRUs) were introduced as a more computationally efficient alternative. In recent years, transformers, which rely on self-attention mechanisms instead of recurrence, have become the dominant architecture for many sequence-processing tasks, particularly in natural language processing, due to their superior handling of long-range dependencies and greater parallelizability. Nevertheless, RNNs remain relevant for applications where computational efficiency, real-time processing, or the inherent sequential nature of data is crucial. == History == === Before modern === One origin of RNN was neuroscience. 
The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex formed by parallel fibers, Purkinje cells, and granule cells. In 1933, Lorente de Nó discovered "recurrent, reciprocal connections" by Golgi's method, and proposed that excitatory loops explain certain aspects of the vestibulo-ocular reflex. During the 1940s, multiple people proposed the existence of feedback in the brain, in contrast to the previous understanding of the neural system as a purely feedforward structure. Hebb considered the "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943), which proposed the McCulloch-Pitts neuron model, considered networks that contain cycles. The current activity of such networks can be affected by activity indefinitely far in the past. They were both interested in closed loops as possible explanations for, e.g., epilepsy and causalgia. Recurrent inhibition was proposed in 1946 as a negative feedback mechanism in motor control. Neural feedback loops were a common topic of discussion at the Macy conferences. Frank Rosenblatt in 1960 published "close-loop cross-coupled perceptrons", which are 3-layered perceptron networks whose middle layer contains recurrent connections that change by a Hebbian learning rule.: 73–75  Later, in Principles of Neurodynamics (1961), he described "closed-loop cross-coupled" and "back-coupled" perceptron networks, and made theoretical and experimental studies of Hebbian learning in these networks,: Chapter 19, 21  and noted that a fully cross-coupled perceptron network is equivalent to an infinitely deep feedforward network.: Section 19.11  Similar networks were published by Kaoru Nakano in 1971, Shun'ichi Amari in 1972, and William A. Little in 1974, who was acknowledged by Hopfield in his 1982 paper.
Another origin of RNNs was statistical mechanics. The Ising model was developed by Wilhelm Lenz and Ernst Ising in the 1920s as a simple statistical mechanical model of magnets at equilibrium. Glauber in 1963 studied the Ising model evolving in time, as a process towards equilibrium (Glauber dynamics), adding the component of time. The Sherrington–Kirkpatrick model of spin glass, published in 1975, is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In his 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions. In a 1984 paper he extended this to continuous activation functions. It became a standard model for the study of neural networks through statistical mechanics. === Modern === Modern RNNs are mainly based on two architectures: LSTM and BRNN. At the resurgence of neural networks in the 1980s, recurrent networks were studied again. They were sometimes called "iterated nets". Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNNs to study cognitive psychology. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time. Long short-term memory (LSTM) networks were invented by Hochreiter and Schmidhuber in 1995 and set accuracy records in multiple application domains. LSTM became the default choice for RNN architecture. Bidirectional recurrent neural networks (BRNN) use two RNNs that process the same input in opposite directions. The two architectures are often combined, giving the bidirectional LSTM architecture. Around 2006, bidirectional LSTM started to revolutionize speech recognition, outperforming traditional models in certain speech applications.
They also improved large-vocabulary speech recognition and text-to-speech synthesis and were used in Google voice search and dictation on Android devices. They broke records for improved machine translation, language modeling and multilingual language processing. Also, LSTM combined with convolutional neural networks (CNNs) improved automatic image captioning. The idea of encoder-decoder sequence transduction had been developed in the early 2010s. The papers most commonly cited as the originators of seq2seq are two papers from 2014. A seq2seq architecture employs two RNNs, typically LSTMs, an "encoder" and a "decoder", for sequence transduction, such as machine translation. Seq2seq models became state of the art in machine translation, and were instrumental in the development of attention mechanisms and transformers. == Configurations == An RNN-based model can be factored into two parts: configuration and architecture. Multiple RNNs can be combined in a data flow, and the data flow itself is the configuration. Each RNN itself may have any architecture, including LSTM, GRU, etc. === Standard === RNNs come in many variants. Abstractly speaking, an RNN is a function f θ {\displaystyle f_{\theta }} of type ( x t , h t ) ↦ ( y t , h t + 1 ) {\displaystyle (x_{t},h_{t})\mapsto (y_{t},h_{t+1})} , where x t {\displaystyle x_{t}} : input vector; h t {\displaystyle h_{t}} : hidden vector; y t {\displaystyle y_{t}} : output vector; θ {\displaystyle \theta } : neural network parameters. In words, it is a neural network that maps an input x t {\displaystyle x_{t}} into an output y t {\displaystyle y_{t}} , with the hidden vector h t {\displaystyle h_{t}} playing the role of "memory", a partial record of all previous input-output pairs. At each step, it transforms an input into an output, and modifies its "memory" to help it better perform future processing.
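As a concrete illustration of the abstract map (x_t, h_t) ↦ (y_t, h_{t+1}), here is a minimal NumPy sketch of a vanilla RNN cell applied over a sequence; the tanh nonlinearity, the linear readout, and all dimensions are illustrative assumptions rather than a prescribed design:

```python
import numpy as np

def rnn_cell(x_t, h_t, params):
    """One step of a vanilla RNN: (x_t, h_t) -> (y_t, h_next)."""
    W_xh, W_hh, W_hy, b_h, b_y = params
    h_next = np.tanh(W_xh @ x_t + W_hh @ h_t + b_h)  # update the "memory"
    y_t = W_hy @ h_next + b_y                        # produce the output
    return y_t, h_next

def run_rnn(xs, h0, params):
    """Apply the cell over an input sequence, threading the hidden state."""
    h, ys = h0, []
    for x in xs:
        y, h = rnn_cell(x, h, params)
        ys.append(y)
    return ys, h

# Toy dimensions: 3 input dims, 4 hidden units, 2 output dims.
rng = np.random.default_rng(0)
params = (rng.normal(size=(4, 3)), rng.normal(size=(4, 4)),
          rng.normal(size=(2, 4)), np.zeros(4), np.zeros(2))
xs = [rng.normal(size=3) for _ in range(5)]
ys, h_final = run_rnn(xs, np.zeros(4), params)
```

Note that the same parameters are reused at every step; only the hidden state changes, which is exactly the "partial record of all previous inputs" described above.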
The illustration to the right may be misleading to many because practical neural network topologies are frequently organized in "layers" and the drawing gives that appearance. However, what appears to be layers are, in fact, different steps in time, "unfolded" to produce the appearance of layers. === Stacked RNN === A stacked RNN, or deep RNN, is composed of multiple RNNs stacked one above the other. Abstractly, it is structured as follows: Layer 1 has hidden vector h 1 , t {\displaystyle h_{1,t}} , parameters θ 1 {\displaystyle \theta _{1}} , and maps f θ 1 : ( x 0 , t , h 1 , t ) ↦ ( x 1 , t , h 1 , t + 1 ) {\displaystyle f_{\theta _{1}}:(x_{0,t},h_{1,t})\mapsto (x_{1,t},h_{1,t+1})} . Layer 2 has hidden vector h 2 , t {\displaystyle h_{2,t}} , parameters θ 2 {\displaystyle \theta _{2}} , and maps f θ 2 : ( x 1 , t , h 2 , t ) ↦ ( x 2 , t , h 2 , t + 1 ) {\displaystyle f_{\theta _{2}}:(x_{1,t},h_{2,t})\mapsto (x_{2,t},h_{2,t+1})} . ... Layer n {\displaystyle n} has hidden vector h n , t {\displaystyle h_{n,t}} , parameters θ n {\displaystyle \theta _{n}} , and maps f θ n : ( x n − 1 , t , h n , t ) ↦ ( x n , t , h n , t + 1 ) {\displaystyle f_{\theta _{n}}:(x_{n-1,t},h_{n,t})\mapsto (x_{n,t},h_{n,t+1})} . Each layer operates as a stand-alone RNN, and each layer's output sequence is used as the input sequence to the layer above. There is no conceptual limit to the depth of a stacked RNN. === Bidirectional === A bidirectional RNN (biRNN) is composed of two RNNs, one processing the input sequence in one direction, and another in the opposite direction.
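The stacked-RNN wiring described above (each layer's output feeding the layer above) can be sketched in a few lines of NumPy; the tanh cells and hidden sizes are illustrative assumptions, and for brevity each layer's hidden state doubles as its output:

```python
import numpy as np

def cell(x, h, Wx, Wh):
    """One RNN layer step; the new hidden state doubles as the layer's output."""
    return np.tanh(Wx @ x + Wh @ h)

def stacked_step(x, hs, layers):
    """One time step through a stack: layer i's output feeds layer i+1."""
    new_hs = []
    for (Wx, Wh), h in zip(layers, hs):
        h = cell(x, h, Wx, Wh)
        new_hs.append(h)
        x = h  # this layer's output becomes the input to the layer above
    return x, new_hs

rng = np.random.default_rng(1)
dims = [3, 4, 4, 2]  # input dim, then hidden sizes of 3 stacked layers
layers = [(rng.normal(size=(dims[i + 1], dims[i])),
           rng.normal(size=(dims[i + 1], dims[i + 1]))) for i in range(3)]
hs = [np.zeros(d) for d in dims[1:]]
out, hs = stacked_step(np.ones(3), hs, layers)
```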
Abstractly, it is structured as follows: The forward RNN processes in one direction: f θ ( x 0 , h 0 ) = ( y 0 , h 1 ) , f θ ( x 1 , h 1 ) = ( y 1 , h 2 ) , … {\displaystyle f_{\theta }(x_{0},h_{0})=(y_{0},h_{1}),f_{\theta }(x_{1},h_{1})=(y_{1},h_{2}),\dots } The backward RNN processes in the opposite direction: f θ ′ ′ ( x N , h N ′ ) = ( y N ′ , h N − 1 ′ ) , f θ ′ ′ ( x N − 1 , h N − 1 ′ ) = ( y N − 1 ′ , h N − 2 ′ ) , … {\displaystyle f'_{\theta '}(x_{N},h_{N}')=(y'_{N},h_{N-1}'),f'_{\theta '}(x_{N-1},h_{N-1}')=(y'_{N-1},h_{N-2}'),\dots } The two output sequences are then concatenated to give the total output: ( ( y 0 , y 0 ′ ) , ( y 1 , y 1 ′ ) , … , ( y N , y N ′ ) ) {\displaystyle ((y_{0},y_{0}'),(y_{1},y_{1}'),\dots ,(y_{N},y_{N}'))} . A bidirectional RNN allows the model to process a token both in the context of what came before it and what came after it. By stacking multiple bidirectional RNNs together, the model can build increasingly contextual representations of each token. The ELMo model (2018) is a stacked bidirectional LSTM which takes character-level inputs and produces word-level embeddings. === Encoder-decoder === Two RNNs can be run front-to-back in an encoder-decoder configuration. The encoder RNN processes an input sequence into a sequence of hidden vectors, and the decoder RNN processes the sequence of hidden vectors to an output sequence, with an optional attention mechanism. This was used to construct state of the art neural machine translators during the 2014–2017 period. This was an instrumental step towards the development of transformers. === PixelRNN === An RNN may process data with more than one dimension. PixelRNN processes two-dimensional data, with many possible directions.
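The forward/backward/concatenate scheme of the bidirectional RNN described above can be sketched as follows (NumPy; the tanh cell and dimensions are illustrative assumptions, and each direction's hidden state is used directly as its output):

```python
import numpy as np

def run(xs, W, U):
    """Simple unidirectional pass; the hidden state is also the output."""
    h, ys = np.zeros(U.shape[0]), []
    for x in xs:
        h = np.tanh(W @ x + U @ h)
        ys.append(h)
    return ys

def bidirectional(xs, fwd, bwd):
    """Run two RNNs in opposite directions and concatenate their outputs."""
    ys_f = run(xs, *fwd)
    ys_b = run(xs[::-1], *bwd)[::-1]  # re-reverse so positions align
    return [np.concatenate([f, b]) for f, b in zip(ys_f, ys_b)]

rng = np.random.default_rng(2)
fwd = (rng.normal(size=(4, 3)), rng.normal(size=(4, 4)))
bwd = (rng.normal(size=(4, 3)), rng.normal(size=(4, 4)))
outs = bidirectional([rng.normal(size=3) for _ in range(6)], fwd, bwd)
```

Each position's output now carries information from both the left context (forward pass) and the right context (backward pass), which is why the combined vector is twice the hidden size.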
For example, the row-by-row direction processes an n × n {\displaystyle n\times n} grid of vectors x i , j {\displaystyle x_{i,j}} in the following order: x 1 , 1 , x 1 , 2 , … , x 1 , n , x 2 , 1 , x 2 , 2 , … , x 2 , n , … , x n , n {\displaystyle x_{1,1},x_{1,2},\dots ,x_{1,n},x_{2,1},x_{2,2},\dots ,x_{2,n},\dots ,x_{n,n}} The diagonal BiLSTM uses two LSTMs to process the same grid. One processes it from the top-left corner to the bottom-right, such that it processes x i , j {\displaystyle x_{i,j}} depending on its hidden state and cell state on the top and the left side: h i − 1 , j , c i − 1 , j {\displaystyle h_{i-1,j},c_{i-1,j}} and h i , j − 1 , c i , j − 1 {\displaystyle h_{i,j-1},c_{i,j-1}} . The other processes it from the top-right corner to the bottom-left. == Architectures == === Fully recurrent === Fully recurrent neural networks (FRNN) connect the outputs of all neurons to the inputs of all neurons. In other words, it is a fully connected network. This is the most general neural network topology, because all other topologies can be represented by setting some connection weights to zero to simulate the lack of connections between those neurons. === Hopfield === The Hopfield network is an RNN in which all connections are symmetric. It requires stationary inputs and is thus not a general RNN, as it does not process sequences of patterns. However, it is guaranteed to converge. If the connections are trained using Hebbian learning, then the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration. === Elman networks and Jordan networks === An Elman network is a three-layer network (arranged horizontally as x, y, and z in the illustration) with the addition of a set of context units (u in the illustration). The middle (hidden) layer is connected to these context units with a fixed weight of one. At each time step, the input is fed forward and a learning rule is applied.
The fixed back-connections save a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform tasks such as sequence-prediction that are beyond the power of a standard multilayer perceptron. Jordan networks are similar to Elman networks. The context units are fed from the output layer instead of the hidden layer. The context units in a Jordan network are also called the state layer. They have a recurrent connection to themselves. Elman and Jordan networks are also known as "Simple recurrent networks" (SRN). Elman network h t = σ h ( W h x t + U h h t − 1 + b h ) y t = σ y ( W y h t + b y ) {\displaystyle {\begin{aligned}h_{t}&=\sigma _{h}(W_{h}x_{t}+U_{h}h_{t-1}+b_{h})\\y_{t}&=\sigma _{y}(W_{y}h_{t}+b_{y})\end{aligned}}} Jordan network h t = σ h ( W h x t + U h s t + b h ) y t = σ y ( W y h t + b y ) s t = σ s ( W s , s s t − 1 + W s , y y t − 1 + b s ) {\displaystyle {\begin{aligned}h_{t}&=\sigma _{h}(W_{h}x_{t}+U_{h}s_{t}+b_{h})\\y_{t}&=\sigma _{y}(W_{y}h_{t}+b_{y})\\s_{t}&=\sigma _{s}(W_{s,s}s_{t-1}+W_{s,y}y_{t-1}+b_{s})\end{aligned}}} Variables and functions x t {\displaystyle x_{t}} : input vector h t {\displaystyle h_{t}} : hidden layer vector s t {\displaystyle s_{t}} : "state" vector, y t {\displaystyle y_{t}} : output vector W {\displaystyle W} , U {\displaystyle U} and b {\displaystyle b} : parameter matrices and vector σ {\displaystyle \sigma } : Activation functions === Long short-term memory === Long short-term memory (LSTM) is the most widely used RNN architecture. It was designed to solve the vanishing gradient problem. LSTM is normally augmented by recurrent gates called "forget gates". LSTM prevents backpropagated errors from vanishing or exploding. Instead, errors can flow backward through unlimited numbers of virtual layers unfolded in space. 
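The Elman equations above translate almost line-for-line into code. In this sketch the activation choices σ_h = tanh and σ_y = identity, as well as the dimensions, are illustrative assumptions:

```python
import numpy as np

def elman_step(x, h_prev, p):
    """Elman network update, following the equations above:
    h_t = sigma_h(W_h x_t + U_h h_{t-1} + b_h), y_t = sigma_y(W_y h_t + b_y)."""
    h = np.tanh(p["Wh"] @ x + p["Uh"] @ h_prev + p["bh"])
    y = p["Wy"] @ h + p["by"]
    return y, h

rng = np.random.default_rng(3)
p = {"Wh": rng.normal(size=(4, 3)), "Uh": rng.normal(size=(4, 4)),
     "bh": np.zeros(4), "Wy": rng.normal(size=(2, 4)), "by": np.zeros(2)}
h = np.zeros(4)  # the context units start empty
for x in [rng.normal(size=3) for _ in range(5)]:
    y, h = elman_step(x, h, p)
```

The variable `h` plays the role of the context units: it is simply the previous hidden state, copied back with fixed unit weight.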
That is, LSTM can learn tasks that require memories of events that happened thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be evolved. LSTM works even given long delays between significant events and can handle signals that mix low and high-frequency components. Many applications use stacks of LSTMs, a configuration called "deep LSTM". LSTM can learn to recognize context-sensitive languages, unlike previous models based on hidden Markov models (HMM) and similar concepts. === Gated recurrent unit === The gated recurrent unit (GRU), introduced in 2014, was designed as a simplification of LSTM. GRUs are used in the full form and several further simplified variants. They have fewer parameters than LSTM, as they lack an output gate. Their performance on polyphonic music modeling and speech signal modeling was found to be similar to that of long short-term memory. There does not appear to be a particular performance difference between LSTM and GRU. ==== Bidirectional associative memory ==== Introduced by Bart Kosko, a bidirectional associative memory (BAM) network is a variant of a Hopfield network that stores associative data as a vector. The bidirectionality comes from passing information through a matrix and its transpose. Typically, bipolar encoding is preferred to binary encoding of the associative pairs. Recently, stochastic BAM models using Markov stepping were optimized for increased network stability and relevance to real-world applications. A BAM network has two layers, either of which can be driven as an input to recall an association and produce an output on the other layer. === Echo state === Echo state networks (ESN) have a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that can change (be trained). ESNs are good at reproducing certain time series. A variant for spiking neurons is known as a liquid state machine.
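The gated recurrent unit described above can be sketched in NumPy as follows. This follows the common 2014-style update/reset gating; biases are omitted for brevity, the dimensions are illustrative assumptions, and `(1 - z) * h + z * h_cand` is one of two equivalent sign conventions for the update gate:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, p):
    """One GRU step: two gates, one candidate state, no output gate."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)              # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)              # reset gate
    h_cand = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))   # candidate state
    return (1 - z) * h + z * h_cand                     # interpolate old/new

rng = np.random.default_rng(4)
p = {k: rng.normal(size=(4, 3) if k.startswith("W") else (4, 4))
     for k in ["Wz", "Wr", "Wh", "Uz", "Ur", "Uh"]}
h = np.zeros(4)
for x in [rng.normal(size=3) for _ in range(5)]:
    h = gru_step(x, h, p)
```

The parameter savings relative to LSTM are visible directly: two gates and one candidate instead of three gates, a candidate, and a separate cell state.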
=== Recursive === A recursive neural network is created by applying the same set of weights recursively over a differentiable graph-like structure by traversing the structure in topological order. Such networks are typically also trained by the reverse mode of automatic differentiation. They can process distributed representations of structure, such as logical terms. A special case of recursive neural networks is the RNN whose structure corresponds to a linear chain. Recursive neural networks have been applied to natural language processing. The Recursive Neural Tensor Network uses a tensor-based composition function for all nodes in the tree. === Neural Turing machines === Neural Turing machines (NTMs) are a method of extending recurrent neural networks by coupling them to external memory resources with which they interact. The combined system is analogous to a Turing machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Differentiable neural computers (DNCs) are an extension of Neural Turing machines, allowing for the usage of fuzzy amounts of each memory address and a record of chronology. Neural network pushdown automata (NNPDA) are similar to NTMs, but tapes are replaced by analog stacks that are differentiable and trained. In this way, they are similar in complexity to recognizers of context free grammars (CFGs). Recurrent neural networks are Turing complete and can run arbitrary programs to process arbitrary sequences of inputs. == Training == === Teacher forcing === An RNN can be trained into a conditionally generative model of sequences, aka autoregression. Concretely, let us consider the problem of machine translation, that is, given a sequence ( x 1 , x 2 , … , x n ) {\displaystyle (x_{1},x_{2},\dots ,x_{n})} of English words, the model is to produce a sequence ( y 1 , … , y m ) {\displaystyle (y_{1},\dots ,y_{m})} of French words. It is to be solved by a seq2seq model. 
Now, during training, the encoder half of the model would first ingest ( x 1 , x 2 , … , x n ) {\displaystyle (x_{1},x_{2},\dots ,x_{n})} , then the decoder half would start generating a sequence ( y ^ 1 , y ^ 2 , … , y ^ l ) {\displaystyle ({\hat {y}}_{1},{\hat {y}}_{2},\dots ,{\hat {y}}_{l})} . The problem is that if the model makes a mistake early on, say at y ^ 2 {\displaystyle {\hat {y}}_{2}} , then subsequent tokens are likely to also be mistakes. This makes it inefficient for the model to obtain a learning signal, since the model would mostly learn to shift y ^ 2 {\displaystyle {\hat {y}}_{2}} towards y 2 {\displaystyle y_{2}} , but not the others. Teacher forcing makes it so that the decoder uses the correct output sequence for generating the next entry in the sequence. So for example, it would see ( y 1 , … , y k ) {\displaystyle (y_{1},\dots ,y_{k})} in order to generate y ^ k + 1 {\displaystyle {\hat {y}}_{k+1}} . === Gradient descent === Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. In neural networks, it can be used to minimize the error term by changing each weight in proportion to the derivative of the error with respect to that weight, provided the non-linear activation functions are differentiable. The standard method for training RNN by gradient descent is the "backpropagation through time" (BPTT) algorithm, which is a special case of the general algorithm of backpropagation. A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL, which is an instance of automatic differentiation in the forward accumulation mode with stacked tangent vectors. Unlike BPTT, this algorithm is local in time but not local in space. 
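The effect of teacher forcing described above can be made concrete in a toy training-loss computation: at every step the decoder is conditioned on the ground-truth previous token rather than its own prediction. Everything here (the tiny decoder cell, vocabulary size, and start-of-sequence token 0) is an illustrative assumption:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def decoder_step(y_prev, h, W_in, W_h, W_out):
    """Toy decoder cell: previous token index -> next-token distribution."""
    h = np.tanh(W_in[:, y_prev] + W_h @ h)
    return softmax(W_out @ h), h

def teacher_forced_loss(target, h0, params):
    """Cross-entropy with teacher forcing: at step k the decoder is fed the
    correct token target[k-1], not its own previous prediction."""
    W_in, W_h, W_out = params
    h, loss, prev = h0, 0.0, 0  # assume token 0 is a start-of-sequence marker
    for tok in target:
        probs, h = decoder_step(prev, h, W_in, W_h, W_out)
        loss -= np.log(probs[tok])
        prev = tok  # ground truth, not argmax(probs): this is the "forcing"
    return loss / len(target)

rng = np.random.default_rng(5)
V, H = 6, 8  # toy vocabulary size and hidden size
params = (rng.normal(size=(H, V)), rng.normal(size=(H, H)),
          rng.normal(size=(V, H)))
loss = teacher_forced_loss([2, 3, 1, 4], np.zeros(H), params)
```

Replacing `prev = tok` with `prev = int(np.argmax(probs))` would turn this into free-running generation, where an early mistake contaminates every later step.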
In this context, local in space means that a unit's weight vector can be updated using only information stored in the connected units and the unit itself such that update complexity of a single unit is linear in the dimensionality of the weight vector. Local in time means that the updates take place continually (on-line) and depend only on the most recent time step rather than on multiple time steps within a given time horizon as in BPTT. Biological neural networks appear to be local with respect to both time and space. For recursively computing the partial derivatives, RTRL has a time-complexity of O(number of hidden units × number of weights) per time step for computing the Jacobian matrices, while BPTT only takes O(number of weights) per time step, at the cost of storing all forward activations within the given time horizon. An online hybrid between BPTT and RTRL with intermediate complexity exists, along with variants for continuous time. A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events. LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome these problems. This problem is also solved in the independently recurrent neural network (IndRNN) by reducing the context of a neuron to its own past state; the cross-neuron information can then be explored in the following layers. Memories of different ranges, including long-term memory, can be learned without the gradient vanishing and exploding problem. The on-line algorithm called causal recursive backpropagation (CRBP) implements and combines the BPTT and RTRL paradigms for locally recurrent networks. It works with the most general locally recurrent networks. The CRBP algorithm can minimize the global error term. This fact improves the stability of the algorithm, providing a unifying view of gradient calculation techniques for recurrent networks with local feedback.
One approach to gradient information computation in RNNs with arbitrary architectures is based on diagrammatic derivation with signal-flow graphs. It uses the BPTT batch algorithm, based on Lee's theorem for network sensitivity calculations. It was proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci, Uncini and Piazza. === Connectionist temporal classification === The connectionist temporal classification (CTC) is a specialized loss function for training RNNs for sequence modeling problems where the timing is variable. === Global optimization methods === Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence. Typically, the sum-squared difference between the predictions and the target values specified in the training sequence is used to represent the error of the current weight vector. Arbitrary global optimization techniques may then be used to minimize this target function. The most common global optimization method for training RNNs is the genetic algorithm, especially for unstructured networks. Initially, the neural network weights are encoded into the genetic algorithm in a predefined manner, where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows: Each weight encoded in the chromosome is assigned to the respective weight link of the network. The training set is presented to the network, which propagates the input signals forward. The mean-squared error is returned to the fitness function. This function drives the genetic selection process.
Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied. A common stopping scheme is: When the neural network has learned a certain percentage of the training data or When the minimum value of the mean-squared-error is satisfied or When the maximum number of training generations has been reached. The fitness function evaluates the stopping criterion as it receives the mean-squared error reciprocal from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function, reducing the mean-squared error. Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights, such as simulated annealing or particle swarm optimization. == Other architectures == === Independently RNN (IndRNN) === The independently recurrent neural network (IndRNN) addresses the gradient vanishing and exploding problems in the traditional fully connected RNN. Each neuron in one layer only receives its own past state as context information (instead of full connectivity to all other neurons in this layer) and thus neurons are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient vanishing and exploding in order to keep long or short-term memory. The cross-neuron information is explored in the next layers. IndRNN can be robustly trained with non-saturated nonlinear functions such as ReLU. Deep networks can be trained using skip connections. === Neural history compressor === The neural history compressor is an unsupervised stack of RNNs. At the input level, it learns to predict its next input from the previous inputs. Only unpredictable inputs of some RNN in the hierarchy become inputs to the next higher level RNN, which therefore recomputes its internal state only rarely. Each higher level RNN thus studies a compressed representation of the information in the RNN below. 
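The chromosome-per-weight-vector scheme described above can be sketched as a small genetic algorithm with selection and mutation. The tiny linear "network", the mutation scale, and the reciprocal-MSE fitness are illustrative assumptions, not a recommended setup:

```python
import numpy as np

def mse(chromosome, X, Y):
    """Error of a tiny linear 'network' whose weights come from a chromosome."""
    W = chromosome.reshape(Y.shape[1], X.shape[1])
    return float(np.mean((X @ W.T - Y) ** 2))

def evolve(X, Y, n_weights, pop=30, gens=40, rng=np.random.default_rng(6)):
    """Minimal GA: one chromosome = one full weight vector of the network."""
    population = rng.normal(size=(pop, n_weights))
    for _ in range(gens):
        fitness = np.array([1.0 / (1e-9 + mse(c, X, Y)) for c in population])
        parents = population[np.argsort(fitness)[-pop // 2:]]       # selection
        children = parents + 0.1 * rng.normal(size=parents.shape)   # mutation
        population = np.vstack([parents, children])
    best = max(population, key=lambda c: 1.0 / (1e-9 + mse(c, X, Y)))
    return best, mse(best, X, Y)

rng = np.random.default_rng(7)
X = rng.normal(size=(50, 3))
W_true = rng.normal(size=(2, 3))
Y = X @ W_true.T  # synthetic training set with a known solution
best, err = evolve(X, Y, n_weights=6)
```

Because the top half of each generation is carried over unchanged, the best error is monotonically non-increasing, matching the stopping schemes described above (a target error or a generation cap).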
This is done such that the input sequence can be precisely reconstructed from the representation at the highest level. The system effectively minimizes the description length or the negative logarithm of the probability of the data. Given a lot of learnable predictability in the incoming data sequence, the highest level RNN can use supervised learning to easily classify even deep sequences with long intervals between important events. It is possible to distill the RNN hierarchy into two RNNs: the "conscious" chunker (higher level) and the "subconscious" automatizer (lower level). Once the chunker has learned to predict and compress inputs that are unpredictable by the automatizer, the automatizer can be forced in the next learning phase to predict or imitate through additional units the hidden units of the more slowly changing chunker. This makes it easy for the automatizer to learn appropriate, rarely changing memories across long intervals. In turn, this helps the automatizer to make many of its once unpredictable inputs predictable, such that the chunker can focus on the remaining unpredictable events. A generative model partially overcame the vanishing gradient problem of automatic differentiation or backpropagation in neural networks in 1992. In 1993, such a system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time. === Second order RNNs === Second-order RNNs use higher order weights w i j k {\displaystyle w{}_{ijk}} instead of the standard w i j {\displaystyle w{}_{ij}} weights, and states can be a product. This allows a direct mapping to a finite-state machine in training, stability, and representation. Long short-term memory is an example of this but has no such formal mappings or proof of stability. === Hierarchical recurrent neural network === Hierarchical recurrent neural networks (HRNN) connect their neurons in various ways to decompose hierarchical behavior into useful subprograms.
Such hierarchical structures of cognition are present in theories of memory presented by philosopher Henri Bergson, whose philosophical views have inspired hierarchical models. Hierarchical recurrent neural networks are useful in forecasting, helping to predict disaggregated inflation components of the consumer price index (CPI). The HRNN model leverages information from higher levels in the CPI hierarchy to enhance lower-level predictions. Evaluation of a substantial dataset from the US CPI-U index demonstrates the superior performance of the HRNN model compared to various established inflation prediction methods. === Recurrent multilayer perceptron network === Generally, a recurrent multilayer perceptron network (RMLP network) consists of cascaded subnetworks, each containing multiple layers of nodes. Each subnetwork is feed-forward except for the last layer, which can have feedback connections. Each of these subnets is connected only by feed-forward connections. === Multiple timescales model === A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can simulate the functional hierarchy of the brain through self-organization, depending on the spatial connection between neurons and on distinct types of neuron activities, each with distinct time properties. With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors. The biological plausibility of such a type of hierarchy was discussed in the memory-prediction theory of brain function by Hawkins in his book On Intelligence. Such a hierarchy also agrees with theories of memory posited by philosopher Henri Bergson, which have been incorporated into an MTRNN model. === Memristive networks === Greg Snider of HP Labs describes a system of cortical computing with memristive nanodevices.
The memristors (memory resistors) are implemented by thin film materials in which the resistance is electrically tuned via the transport of ions or oxygen vacancies within the film. DARPA's SyNAPSE project has funded IBM Research and HP Labs, in collaboration with the Boston University Department of Cognitive and Neural Systems (CNS), to develop neuromorphic architectures that may be based on memristive systems. Memristive networks are a particular type of physical neural network with properties very similar to those of (Little-)Hopfield networks: they have continuous dynamics, a limited memory capacity, and natural relaxation via the minimization of a function which is asymptotic to the Ising model. In this sense, the dynamics of a memristive circuit have the advantage, compared to a resistor-capacitor network, of more interesting non-linear behavior. From this point of view, engineering analog memristive networks is a peculiar type of neuromorphic engineering in which the device behavior depends on the circuit wiring or topology. The evolution of these networks can be studied analytically using variations of the Caravelli–Traversa–Di Ventra equation. === Continuous-time === A continuous-time recurrent neural network (CTRNN) uses a system of ordinary differential equations to model the effects on a neuron of the incoming inputs. CTRNNs are typically analyzed with dynamical systems theory. Many RNN models in neuroscience are continuous-time.
For a neuron i {\displaystyle i} in the network with activation y i {\displaystyle y_{i}} , the rate of change of activation is given by: τ i y ˙ i = − y i + ∑ j = 1 n w j i σ ( y j − Θ j ) + I i ( t ) {\displaystyle \tau _{i}{\dot {y}}_{i}=-y_{i}+\sum _{j=1}^{n}w_{ji}\sigma (y_{j}-\Theta _{j})+I_{i}(t)} Where: τ i {\displaystyle \tau _{i}} : Time constant of postsynaptic node y i {\displaystyle y_{i}} : Activation of postsynaptic node y ˙ i {\displaystyle {\dot {y}}_{i}} : Rate of change of activation of postsynaptic node w j i {\displaystyle w{}_{ji}} : Weight of connection from pre to postsynaptic node σ ( x ) {\displaystyle \sigma (x)} : Sigmoid of x e.g. σ ( x ) = 1 / ( 1 + e − x ) {\displaystyle \sigma (x)=1/(1+e^{-x})} . y j {\displaystyle y_{j}} : Activation of presynaptic node Θ j {\displaystyle \Theta _{j}} : Bias of presynaptic node I i ( t ) {\displaystyle I_{i}(t)} : Input (if any) to node CTRNNs have been applied to evolutionary robotics where they have been used to address vision, co-operation, and minimal cognitive behaviour. Note that, by the Shannon sampling theorem, discrete-time recurrent neural networks can be viewed as continuous-time recurrent neural networks where the differential equations have been transformed into equivalent difference equations. This transformation can be thought of as occurring after the post-synaptic node activation functions y i ( t ) {\displaystyle y_{i}(t)} have been low-pass filtered but prior to sampling. Recurrent neural networks are in fact recursive neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.
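The CTRNN equation above can be simulated with a simple forward-Euler discretization. The two-neuron weight matrix, time constants, input, and step size below are illustrative assumptions chosen so the network relaxes to a fixed point:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def ctrnn_step(y, dt, tau, w, theta, I):
    """Forward-Euler step of tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j - theta_j) + I_i."""
    dydt = (-y + w.T @ sigmoid(y - theta) + I) / tau
    return y + dt * dydt

# Two-neuron network relaxing toward equilibrium under a constant input.
tau = np.array([1.0, 0.5])
w = np.array([[0.0, 1.0], [-1.0, 0.0]])  # w[j, i] = weight from neuron j to i
theta = np.zeros(2)
I = np.array([0.2, 0.0])
y = np.zeros(2)
for _ in range(2000):  # integrate to t = 20 with dt = 0.01
    y = ctrnn_step(y, 0.01, tau, w, theta, I)
```

At the fixed point the defining relations hold, e.g. the second neuron settles at y_2 = σ(y_1), which makes the simulation easy to sanity-check.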
From a time-series perspective, RNNs can appear as nonlinear versions of finite impulse response and infinite impulse response filters and also as a nonlinear autoregressive exogenous model (NARX). RNNs have an infinite impulse response, whereas convolutional neural networks have a finite impulse response. Both classes of networks exhibit temporal dynamic behavior. A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that cannot be unrolled. The effect of memory-based learning for the recognition of sequences can also be implemented by a more biologically based model which uses the silencing mechanism exhibited in neurons with a relatively high frequency spiking activity. Additional stored states and the storage under direct control by the network can be added to both infinite-impulse and finite-impulse networks. Another network or graph can also replace the storage if that incorporates time delays or has feedback loops. Such controlled states are referred to as gated states or gated memory and are part of long short-term memory networks (LSTMs) and gated recurrent units. Such a network is also called a feedback neural network (FNN). == Libraries == Modern libraries provide runtime-optimized implementations of the above functionality or allow speeding up the slow loop through just-in-time compilation. Apache Singa. Caffe: Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU and GPU. Developed in C++, and has Python and MATLAB wrappers. Chainer: Fully in Python, production support for CPU, GPU, distributed training. Deeplearning4j: Deep learning in Java and Scala on multi-GPU-enabled Spark. Flux: includes interfaces for RNNs, including GRUs and LSTMs, written in Julia. Keras: High-level API, providing a wrapper to many other deep learning libraries.
Microsoft Cognitive Toolkit. MXNet: an open-source deep learning framework used to train and deploy deep neural networks. PyTorch: Tensors and Dynamic neural networks in Python with GPU acceleration. TensorFlow: Apache 2.0-licensed Theano-like library with support for CPU, GPU, Google's proprietary TPU, and mobile platforms. Theano: A deep-learning library for Python with an API largely compatible with the NumPy library. Torch: A scientific computing framework with support for machine learning algorithms, written in C and Lua. == Applications == Applications of recurrent neural networks include: Machine translation Robot control Time series prediction Speech recognition Speech synthesis Brain–computer interfaces Time series anomaly detection Text-to-Video model Rhythm learning Music composition Grammar learning Handwriting recognition Human action recognition Protein homology detection Predicting subcellular localization of proteins Several prediction tasks in the area of business process management Prediction in medical care pathways Predictions of fusion plasma disruptions in reactors (Fusion Recurrent Neural Network (FRNN) code) == References == == Further reading == Mandic, Danilo P.; Chambers, Jonathon A. (2001). Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. Wiley. ISBN 978-0-471-49517-8. Grossberg, Stephen (2013-02-22). "Recurrent Neural Networks". Scholarpedia. 8 (2): 1888. Bibcode:2013SchpJ...8.1888G. doi:10.4249/scholarpedia.1888. ISSN 1941-6016. Recurrent Neural Networks. List of RNN papers by Jürgen Schmidhuber's group at Dalle Molle Institute for Artificial Intelligence Research.
Wikipedia/Recurrent_neural_networks
Spacecraft attitude control is the process of controlling the orientation of a spacecraft (vehicle or satellite) with respect to an inertial frame of reference or another entity such as the celestial sphere, certain fields, or nearby objects. Controlling vehicle attitude requires actuators to apply the torques needed to orient the vehicle to a desired attitude, and algorithms to command the actuators based on the current attitude and the specification of a desired attitude. Before attitude control can be performed, and while it is being performed, spacecraft attitude determination must be carried out, which requires sensors for absolute or relative measurement. The broader integrated field that studies the combination of sensors, actuators and algorithms is called guidance, navigation and control, which also involves non-attitude concepts, such as position determination and navigation.

== Motivation ==

A spacecraft's attitude must typically be stabilized and controlled for a variety of reasons. It is often needed so that the spacecraft's high-gain antenna may be accurately pointed to Earth for communications; so that onboard experiments may accomplish precise pointing for accurate collection and subsequent interpretation of data; so that the heating and cooling effects of sunlight and shadow may be used intelligently for thermal control; and for guidance: short propulsive maneuvers must be executed in the right direction. Many spacecraft have components that require articulation or pointing. Voyager and Galileo, for example, were designed with scan platforms for pointing optical instruments at their targets largely independently of spacecraft orientation. Many spacecraft, such as Mars orbiters, have solar panels that must track the Sun so they can provide electrical power to the spacecraft. Cassini's main engine nozzles were steerable. Knowing where to point a solar panel, or scan platform, or a nozzle — that is, how to articulate it — requires knowledge of the spacecraft's attitude.
Because a single subsystem keeps track of the spacecraft's attitude, the Sun's location, and Earth's location, it can compute the proper direction to point the appendages. It therefore logically falls to the same subsystem, the Attitude and Articulation Control Subsystem (AACS), to manage both attitude and articulation. The name AACS may be carried over to a spacecraft even if it has no appendages to articulate.

== Background ==

Attitude is part of the description of how an object is placed in the space it occupies. Attitude and position fully describe how an object is placed in space. (For some applications, such as robotics and computer vision, it is customary to combine position and attitude into a single description known as pose.) Attitude can be described using a variety of methods; however, the most common are rotation matrices, quaternions, and Euler angles. While Euler angles are often the most straightforward representation to visualize, they can cause problems for highly maneuverable systems because of a phenomenon known as gimbal lock. A rotation matrix, on the other hand, provides a full description of the attitude at the expense of requiring nine values instead of three; its use can lead to increased computational expense, and rotation matrices can be more difficult to work with. Quaternions offer a decent compromise in that they do not suffer from gimbal lock and require only four values to fully describe the attitude.

== Control ==

=== Types of stabilization ===

Attitude control of spacecraft is maintained using one of two principal approaches: spin stabilization and three-axis stabilization. Spin stabilization is accomplished by setting the spacecraft spinning, using the gyroscopic action of the rotating spacecraft mass as the stabilizing mechanism. Propulsion system thrusters are fired only occasionally to make desired changes in spin rate, or in the spin-stabilized attitude.
If desired, the spinning may be stopped through the use of thrusters or by yo-yo de-spin. The Pioneer 10 and Pioneer 11 probes in the outer Solar System are examples of spin-stabilized spacecraft. Three-axis stabilization is an alternative method of spacecraft attitude control in which the spacecraft is held fixed in the desired orientation without any rotation. One method is to use small thrusters to continually nudge the spacecraft back and forth within a deadband of allowed attitude error. Thrusters may also be referred to as mass-expulsion control (MEC) systems, or reaction control systems (RCS). The space probes Voyager 1 and Voyager 2 employ this method, and have used up about three quarters of their 100 kg of propellant as of July 2015. Another method for achieving three-axis stabilization is to use electrically powered reaction wheels, also called momentum wheels, which are mounted on three orthogonal axes aboard the spacecraft. They provide a means to trade angular momentum back and forth between spacecraft and wheels. To rotate the vehicle about a given axis, the reaction wheel on that axis is accelerated in the opposite direction. To rotate the vehicle back, the wheel is slowed. Excess momentum that builds up in the system due to external torques from, for example, solar photon pressure or gravity gradients must occasionally be removed from the system by applying controlled torque to the spacecraft, allowing the wheels to return to a desired speed under computer control. This is done during maneuvers called momentum desaturation or momentum unload maneuvers. Most spacecraft use a system of thrusters to apply the torque for desaturation maneuvers. A different approach was used by the Hubble Space Telescope, which had sensitive optics that could be contaminated by thruster exhaust, and instead used magnetic torquers for desaturation maneuvers. There are advantages and disadvantages to both spin stabilization and three-axis stabilization.
Spin-stabilized craft provide a continuous sweeping motion that is desirable for fields and particles instruments, as well as some optical scanning instruments, but they may require complicated systems to de-spin antennas or optical instruments that must be pointed at targets for science observations or communications with Earth. Three-axis controlled craft can point optical instruments and antennas without having to de-spin them, but they may have to carry out special rotating maneuvers to best utilize their fields and particle instruments. If thrusters are used for routine stabilization, optical observations such as imaging must be designed knowing that the spacecraft is always slowly rocking back and forth, and not always exactly predictably. Reaction wheels provide a much steadier spacecraft from which to make observations, but they add mass to the spacecraft, they have a limited mechanical lifetime, and they require frequent momentum desaturation maneuvers, which can perturb navigation solutions because of accelerations imparted by the use of thrusters. === Actuators === Attitude control can be obtained by several mechanisms, including: ==== Thrusters ==== Vernier thrusters are the most common actuators, as they may be used for station keeping as well. Thrusters must be organized as a system to provide stabilization about all three axes, and at least two thrusters are generally used in each axis to provide torque as a couple in order to prevent imparting a translation to the vehicle. Their limitations are fuel usage, engine wear, and cycles of the control valves. The fuel efficiency of an attitude control system is determined by its specific impulse (proportional to exhaust velocity) and the smallest torque impulse it can provide (which determines how often the thrusters must fire to provide precise control). Thrusters must be fired in one direction to start rotation, and again in the opposing direction if a new orientation is to be held. 
Thruster systems have been used on most crewed space vehicles, including Vostok, Mercury, Gemini, Apollo, Soyuz, and the Space Shuttle. To minimize the fuel limitation on mission duration, auxiliary attitude control systems may be used to reduce vehicle rotation to lower levels, such as small ion thrusters that accelerate ionized gases electrically to extreme velocities, using power from solar cells.

==== Reaction/momentum wheels ====

Momentum wheels are electric-motor-driven rotors made to spin in the direction opposite to that required to re-orient the vehicle. Because momentum wheels make up a small fraction of the spacecraft's mass and are computer controlled, they give precise control. Momentum wheels are generally suspended on magnetic bearings to avoid bearing friction and breakdown problems; spacecraft reaction wheels, however, often use mechanical ball bearings. To maintain orientation in three-dimensional space, a minimum of three reaction wheels must be used, with additional units providing single-failure protection. See Euler angles.

==== Control moment gyros ====

These are rotors spun at constant speed, mounted on gimbals to provide attitude control. Although a CMG provides control about the two axes orthogonal to the gyro spin axis, triaxial control still requires two units. A CMG is somewhat more expensive in terms of cost and mass, because gimbals and their drive motors must be provided. The maximum torque (but not the maximum angular momentum change) exerted by a CMG is greater than for a momentum wheel, making it better suited to large spacecraft. A major drawback is the additional complexity, which increases the number of failure points. For this reason, the International Space Station uses a set of four CMGs to provide dual-failure tolerance.

==== Solar sails ====

Small solar sails (devices that produce thrust as a reaction force induced by reflecting incident light) may be used to make small attitude control and velocity adjustments.
This application can save large amounts of fuel on a long-duration mission by producing control moments without fuel expenditure. For example, Mariner 10 adjusted its attitude using its solar cells and antennas as small solar sails.

==== Gravity-gradient stabilization ====

In orbit, a spacecraft with one axis much longer than the other two will spontaneously orient so that its long axis points at the planet's center of mass. This system has the virtue of needing no active control system or expenditure of fuel. The effect is caused by a tidal force: the upper end of the vehicle feels less gravitational pull than the lower end, which provides a restoring torque whenever the long axis is not co-linear with the direction of gravity. Unless some means of damping is provided, the spacecraft will oscillate about the local vertical. Sometimes tethers are used to connect two parts of a satellite, to increase the stabilizing torque. A problem with such tethers is that meteoroids as small as a grain of sand can part them.

==== Magnetic torquers ====

Coils or (on very small satellites) permanent magnets exert a moment against the local magnetic field. This method works only where there is a magnetic field against which to react. One classic field "coil" is actually in the form of a conductive tether in a planetary magnetic field. Such a conductive tether can also generate electrical power, at the expense of orbital decay. Conversely, by inducing a counter-current, using solar cell power, the orbit may be raised. Because of the large variability of Earth's magnetic field from an ideal radial field, control laws based on torques coupling to this field will be highly non-linear. Moreover, only two-axis control is available at any given time, meaning that a vehicle reorientation may be necessary to null all rates.

==== Passive attitude control ====

Three main types of passive attitude control exist for satellites.
The first one uses the gravity gradient, and it leads to four stable states with the long axis (the axis with the smallest moment of inertia) pointing towards Earth. As this system has four stable states, if the satellite has a preferred orientation, e.g. a camera pointed at the planet, some way to flip the satellite and its tether end-for-end is needed. The second passive system orients the satellite along Earth's magnetic field thanks to a magnet. These purely passive attitude control systems have limited pointing accuracy, because the spacecraft will oscillate around energy minima. This drawback is overcome by adding a damper, which can be hysteretic materials or a viscous damper. The viscous damper is a small can or tank of fluid mounted in the spacecraft, possibly with internal baffles to increase internal friction. Friction within the damper gradually converts oscillation energy into heat dissipated within the viscous damper. A third form of passive attitude control is aerodynamic stabilization. This is achieved using a drag gradient, as demonstrated on the Get Away Special Passive Attitude Control Satellite (GASPACS) technology demonstration. In low Earth orbit, the force due to drag is many orders of magnitude greater than the force imparted by gravity gradients. When a satellite is utilizing aerodynamic passive attitude control, air molecules from the Earth's upper atmosphere strike the satellite in such a way that the center of pressure remains behind the center of mass, similar to how the feathers on an arrow stabilize the arrow. GASPACS utilized a 1 m inflatable 'AeroBoom', which extended behind the satellite, creating a stabilizing torque along the satellite's velocity vector.

=== Control algorithms ===

Control algorithms are computer programs that receive data from vehicle sensors and derive the appropriate commands to the actuators to rotate the vehicle to the desired attitude. The algorithms range from very simple, e.g.
proportional control, to complex nonlinear estimators or many in-between types, depending on mission requirements. Typically, the attitude control algorithms are part of the software running on the computer hardware, which receives commands from the ground and formats vehicle data telemetry for transmission to a ground station. The attitude control algorithms are written and implemented based on the requirements for a particular attitude maneuver. Aside from passive attitude control implementations such as gravity-gradient stabilization, most spacecraft make use of active control, which exhibits a typical attitude control loop. The design of the control algorithm depends on the actuator to be used for the specific attitude maneuver, although using a simple proportional–integral–derivative controller (PID controller) satisfies most control needs. The appropriate commands to the actuators are obtained based on error signals, described as the difference between the measured and desired attitude. The error signals are commonly expressed as Euler angles (Φ, θ, Ψ); alternatively, they can be described in terms of a direction cosine matrix or error quaternions. The PID controller, which is the most common, reacts to an error signal (deviation) based on attitude as follows T c ( t ) = K p e ( t ) + K i ∫ 0 t e ( τ ) d τ + K d e ˙ ( t ) , {\displaystyle T_{c}(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,d\tau +K_{\text{d}}{\dot {e}}(t),} where T c {\displaystyle T_{c}} is the control torque, e {\displaystyle e} is the attitude deviation signal, and K p , K i , K d {\displaystyle K_{\text{p}},K_{\text{i}},K_{\text{d}}} are the PID controller parameters. A simple implementation of this is the application of proportional control for nadir pointing, making use of either momentum or reaction wheels as actuators.
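As an illustration, the PID law above can be sketched in discrete time for a single axis. The gains, time step, and toy unit-inertia dynamics below are illustrative assumptions, not flight software:

```python
# Minimal single-axis sketch of T_c(t) = Kp*e(t) + Ki*integral(e) + Kd*de/dt.
# All gains and the toy dynamics are illustrative assumptions.
class PIDAttitudeController:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # running integral of the error
        self.prev_error = 0.0    # previous error sample for the derivative term

    def torque(self, error):
        """Return the commanded control torque for one attitude-error sample."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage: regulate a small attitude error to zero on a toy unit-inertia
# double integrator (angle'' = torque), integrated with a simple Euler step.
ctrl = PIDAttitudeController(kp=2.0, ki=0.1, kd=1.5, dt=0.1)
angle, rate = 0.2, 0.0          # initial attitude error (rad) and rate (rad/s)
for _ in range(500):
    t = ctrl.torque(-angle)     # error = desired (0) minus measured attitude
    rate += t * 0.1
    angle += rate * 0.1
# After 50 simulated seconds the attitude error has been driven near zero.
```

In practice the error signal would come from the attitude determination subsystem, and the commanded torque would be mapped onto the actuators (e.g. reaction wheels) rather than applied directly.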
Based on the change in momentum of the wheels, the control law can be defined in the three axes x, y, z as T c x = − K q1 q 1 + K w1 w x , {\displaystyle T_{cx}=-K_{\text{q1}}q_{1}+K_{\text{w1}}w_{x},} T c y = − K q2 q 2 + K w2 w y , {\displaystyle T_{cy}=-K_{\text{q2}}q_{2}+K_{\text{w2}}w_{y},} T c z = − K q3 q 3 + K w3 w z . {\displaystyle T_{cz}=-K_{\text{q3}}q_{3}+K_{\text{w3}}w_{z}.} This control algorithm also affects momentum dumping. Another important and common control algorithm involves the concept of detumbling, which is attenuating the angular momentum of the spacecraft. The need to detumble the spacecraft arises from the uncontrollable state after release from the launch vehicle. Most spacecraft in low Earth orbit (LEO) make use of the magnetic detumbling concept, which utilizes the effect of the Earth's magnetic field. The control algorithm is called the B-Dot controller and relies on magnetic coils or torque rods as control actuators. The control law is based on the measurement of the rate of change of body-fixed magnetometer signals: m = − K B ˙ {\displaystyle m=-K{\dot {B}}} where m {\displaystyle m} is the commanded magnetic dipole moment of the magnetic torquer, K {\displaystyle K} is the proportional gain, and B ˙ {\displaystyle {\dot {B}}} is the rate of change of the Earth's magnetic field.

== Determination ==

Spacecraft attitude determination is the process of determining the orientation of a spacecraft (vehicle or satellite). It is a prerequisite for spacecraft attitude control. A variety of sensors are utilized for relative and absolute attitude determination.

=== Sensors ===

==== Relative attitude sensors ====

Many sensors generate outputs that reflect the rate of change in attitude. These require a known initial attitude, or external information, in order to determine attitude. Many sensors of this class have some noise, leading to inaccuracies if not corrected by absolute attitude sensors.
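The B-Dot detumbling law m = −K Ḃ described in the control-algorithms section above admits a very small sketch. The magnetometer samples, time step, and gain below are illustrative assumptions:

```python
import numpy as np

# Sketch of the B-Dot law m = -K * dB/dt, with dB/dt estimated by a
# finite difference of successive body-frame magnetometer samples.
def bdot_dipole(b_now, b_prev, dt, gain):
    """Commanded magnetic dipole moment from two consecutive field samples."""
    b_dot = (np.asarray(b_now) - np.asarray(b_prev)) / dt  # finite-difference dB/dt
    return -gain * b_dot

# Usage: a tumbling spacecraft sees the field rotate in the body frame,
# so consecutive samples differ and a damping dipole is commanded.
b_prev = np.array([2.0e-5, 0.0,    3.0e-5])   # tesla; illustrative LEO field
b_now  = np.array([1.9e-5, 0.4e-5, 3.0e-5])
m = bdot_dipole(b_now, b_prev, dt=0.1, gain=5.0e4)
# The dipole opposes the measured field rate; the resulting torque m x B
# removes body angular rate over many control cycles.
```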
===== Gyroscopes =====

Gyroscopes are devices that sense rotation in three-dimensional space without reliance on the observation of external objects. Classically, a gyroscope consists of a spinning mass, but there are also "ring laser gyros" utilizing coherent light reflected around a closed path. Another type of "gyro" is a hemispherical resonator gyro, in which a crystal cup shaped like a wine glass can be driven into oscillation, just as a wine glass "sings" as a finger is rubbed around its rim. The orientation of the oscillation is fixed in inertial space, so measuring the orientation of the oscillation relative to the spacecraft can be used to sense the motion of the spacecraft with respect to inertial space.

===== Motion reference units =====

Motion reference units are a kind of inertial measurement unit with single- or multi-axis motion sensors. They utilize MEMS gyroscopes. Some multi-axis MRUs are capable of measuring roll, pitch, yaw and heave. They have applications outside the aeronautical field, such as:

Antenna motion compensation and stabilization
Dynamic positioning
Heave compensation of offshore cranes
High speed craft motion control and damping systems
Hydroacoustic positioning
Motion compensation of single and multibeam echosounders
Ocean wave measurements
Offshore structure motion monitoring
Orientation and attitude measurements on autonomous underwater vehicles and remotely operated underwater vehicles
Ship motion monitoring

==== Absolute attitude sensors ====

This class of sensors senses the position or orientation of fields, objects or other phenomena outside the spacecraft.

===== Horizon sensor =====

A horizon sensor is an optical instrument that detects light from the 'limb' of Earth's atmosphere, i.e., at the horizon. Thermal infrared sensing is often used, which senses the comparative warmth of the atmosphere compared to the much colder cosmic background. This sensor provides orientation with respect to Earth about two orthogonal axes.
It tends to be less precise than sensors based on stellar observation. Sometimes referred to as an Earth sensor.

===== Orbital gyrocompass =====

Similar to the way that a terrestrial gyrocompass uses a pendulum to sense local gravity and force its gyro into alignment with Earth's spin vector, and therefore point north, an orbital gyrocompass uses a horizon sensor to sense the direction to Earth's center, and a gyro to sense rotation about an axis normal to the orbit plane. Thus, the horizon sensor provides pitch and roll measurements, and the gyro provides yaw. See Tait-Bryan angles.

===== Sun sensor =====

A Sun sensor is a device that senses the direction to the Sun. This can be as simple as some solar cells and shades, or as complex as a steerable telescope, depending on mission requirements.

===== Earth sensor =====

An Earth sensor is a device that senses the direction to Earth. It is usually an infrared camera; nowadays the main method to detect attitude is the star tracker, but Earth sensors are still integrated in satellites for their low cost and reliability.

===== Star tracker =====

A star tracker is an optical device that measures the positions of stars using photocells or a camera. It uses the magnitude of brightness and spectral type to identify stars and then calculate their relative positions.

===== Magnetometer =====

A magnetometer is a device that senses magnetic field strength and, when used in a three-axis triad, magnetic field direction. As a spacecraft navigational aid, the sensed field strength and direction are compared to a map of Earth's magnetic field stored in the memory of an on-board or ground-based guidance computer. If the spacecraft's position is known, then its attitude can be inferred.

=== Attitude estimation ===

Attitude cannot be measured directly by any single measurement, and so must be calculated (or estimated) from a set of measurements (often using different sensors).
This can be done either statically (calculating the attitude using only the measurements currently available) or through the use of a statistical filter (most commonly, the Kalman filter) that statistically combines previous attitude estimates with current sensor measurements to obtain an optimal estimate of the current attitude.

==== Static attitude estimation methods ====

Static attitude estimation methods are solutions to Wahba's problem. Many solutions have been proposed, notably Davenport's q-method, QUEST, TRIAD, and singular value decomposition (Crassidis and Junkins, Chapman and Hall/CRC, 2004).

==== Sequential estimation methods ====

Kalman filtering can be used to sequentially estimate the attitude, as well as the angular rate. Because attitude dynamics (the combination of rigid-body dynamics and attitude kinematics) are non-linear, a linear Kalman filter is not sufficient. Because the attitude dynamics are only mildly non-linear, however, the extended Kalman filter is usually adequate (although Crassidis and Markley demonstrated that the unscented Kalman filter could also be used, and can provide benefits in cases where the initial estimate is poor). Multiple methods have been proposed; however, the Multiplicative Extended Kalman Filter (MEKF) is by far the most common approach. This approach utilizes the multiplicative formulation of the error quaternion, which allows the unity constraint on the quaternion to be better handled. It is also common to use a technique known as dynamic model replacement, in which the angular rate is not estimated directly; rather, the measured angular rate from the gyro is used directly to propagate the rotational dynamics forward in time. This is valid for most applications, as gyros are typically far more precise than one's knowledge of the disturbance torques acting on the system (which is required for precise estimation of the angular rate).
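Of the static methods named above, TRIAD is simple enough to sketch directly: given two directions observed in the body frame and the same two directions known in a reference frame, it constructs an orthonormal triad in each frame and composes the rotation between them. The observation vectors below (e.g. a Sun direction and a magnetic field direction) are illustrative:

```python
import numpy as np

# Sketch of the TRIAD algorithm: recover the body-to-reference rotation A
# (with r = A @ b) from two vector pairs. Inputs are illustrative.
def triad(b1, b2, r1, r2):
    """Rotation matrix A such that r ~ A @ b, from two unit-vector pairs."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)          # first triad axis: primary vector
        t2 = np.cross(v1, v2)                 # second axis: normal to both
        t2 /= np.linalg.norm(t2)
        t3 = np.cross(t1, t2)                 # third axis completes the triad
        return np.column_stack([t1, t2, t3])
    return frame(r1, r2) @ frame(b1, b2).T

# Usage: rotate two known reference directions into the body frame with a
# known rotation, then check that TRIAD recovers that rotation exactly.
theta = 0.3
A_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
r1 = np.array([1.0, 0.0, 0.0])          # e.g. Sun direction, reference frame
r2 = np.array([0.0, 0.7, 0.7])          # e.g. magnetic field direction
b1, b2 = A_true.T @ r1, A_true.T @ r2   # the same directions in the body frame
A_est = triad(b1, b2, r1, r2)
```

With noisy measurements the primary (more accurate) observation should be passed first, since TRIAD satisfies it exactly and distributes the residual error onto the second.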
=== Position/location determination ===

For some sensors and applications (such as spacecraft using magnetometers) the precise location must also be known. While pose estimation can be employed, for spacecraft it is usually sufficient to estimate the position (via orbit determination) separately from the attitude estimation. For terrestrial vehicles and spacecraft operating near the Earth, the advent of satellite navigation systems allows precise position knowledge to be obtained easily. This problem becomes more complicated for deep space vehicles, or terrestrial vehicles operating in Global Navigation Satellite System (GNSS) denied environments (see Navigation).

== See also ==

Astrionics#Attitude determination and control
Longitudinal static stability
Directional stability
Reaction control system
Spacecraft_flight_dynamics#Attitude_control
Triad method
Wahba's problem
Wikipedia/Spacecraft_attitude_control
In statistical hypothesis testing, a type I error, or a false positive, is the erroneous rejection of a true null hypothesis. A type II error, or a false negative, is the erroneous failure to reject a false null hypothesis. Type I errors can be thought of as errors of commission, in which the status quo is erroneously rejected in favour of new, misleading information. Type II errors can be thought of as errors of omission, in which a misleading status quo is allowed to remain due to failures in identifying it as such. For example, if the assumption that people are innocent until proven guilty were taken as a null hypothesis, then finding an innocent person guilty would constitute a type I error, while failing to find a guilty person guilty would constitute a type II error. If the null hypothesis were inverted, such that people were by default presumed to be guilty until proven innocent, then finding a guilty person innocent would constitute a type I error, while failing to find an innocent person innocent would constitute a type II error. The manner in which a null hypothesis frames contextually default expectations influences the specific ways in which type I errors and type II errors manifest, and this varies by context and application. Knowledge of type I errors and type II errors is applied widely in the fields of medical science, biometrics and computer science. Minimising these errors is an object of study within statistical theory, though complete elimination of either is impossible when relevant outcomes are not determined by known, observable, causal processes.

== Definition ==

=== Statistical background ===

In statistical test theory, the notion of a statistical error is an integral part of hypothesis testing. The test chooses between two competing propositions: the null hypothesis, denoted by H 0 {\textstyle H_{0}} , and the alternative hypothesis, denoted by H 1 {\textstyle H_{1}} .
This is conceptually similar to the judgement in a court trial. The null hypothesis corresponds to the position of the defendant: just as he is presumed to be innocent until proven guilty, so is the null hypothesis presumed to be true until the data provide convincing evidence against it. The alternative hypothesis corresponds to the position against the defendant. Specifically, the null hypothesis also involves the absence of a difference or the absence of an association. Thus, the null hypothesis can never be that there is a difference or an association. If the result of the test corresponds with reality, then a correct decision has been made. However, if the result of the test does not correspond with reality, then an error has occurred. There are two situations in which the decision is wrong. The null hypothesis may be true, whereas we reject H 0 {\textstyle H_{0}} . On the other hand, the alternative hypothesis H 1 {\textstyle H_{1}} may be true, whereas we do not reject H 0 {\textstyle H_{0}} . Two types of error are distinguished: type I error and type II error. === Type I error === The first kind of error is the mistaken rejection of a null hypothesis as the result of a test procedure. This kind of error is called a type I error (false positive) and is sometimes called an error of the first kind. In terms of the courtroom example, a type I error corresponds to convicting an innocent defendant. === Type II error === The second kind of error is the mistaken failure to reject the null hypothesis as the result of a test procedure. This sort of error is called a type II error (false negative) and is also referred to as an error of the second kind. In terms of the courtroom example, a type II error corresponds to acquitting a criminal. === Crossover error rate === The crossover error rate (CER) is the point at which type I errors and type II errors are equal. A system with a lower CER value provides more accuracy than a system with a higher CER value. 
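The two error types and the crossover behaviour above can be illustrated with a small Monte-Carlo sketch of a one-sided test of a mean; the sample size, effect size, and critical value below are illustrative assumptions:

```python
import random

# Monte-Carlo sketch of the two error types for a one-sided test of a mean.
# H0: mu = 0. We reject H0 when the sample mean of n draws exceeds a critical
# value chosen for alpha = 0.05. All numbers are illustrative.
random.seed(0)

def sample_mean(mu, n=25, sigma=1.0):
    """Mean of n independent draws from a normal distribution N(mu, sigma)."""
    return sum(random.gauss(mu, sigma) for _ in range(n)) / n

# Critical value: z_{0.95} * sigma / sqrt(n), with z_{0.95} = 1.645.
crit = 1.645 * 1.0 / 25 ** 0.5

trials = 20000
# Type I rate: H0 is true (mu = 0), yet the test rejects.
type1 = sum(sample_mean(0.0) > crit for _ in range(trials)) / trials
# Type II rate: H0 is false (mu = 0.5), yet the test fails to reject.
type2 = sum(sample_mean(0.5) <= crit for _ in range(trials)) / trials
# type1 should land near the chosen alpha = 0.05; type2 is the miss rate
# (beta) at the assumed effect size, and 1 - type2 is the test's power.
```

Raising the critical value (a stricter alpha) lowers the type I rate but raises the type II rate, which is the trade-off the courtroom analogy describes.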
=== False positive and false negative ===

In terms of false positives and false negatives, a positive result corresponds to rejecting the null hypothesis, while a negative result corresponds to failing to reject the null hypothesis; "false" means the conclusion drawn is incorrect. Thus, a type I error is equivalent to a false positive, and a type II error is equivalent to a false negative.

=== Table of error types ===

Tabulated relations between truth/falseness of the null hypothesis and outcomes of the test:

== Error rate ==

A perfect test would have zero false positives and zero false negatives. However, statistical methods are probabilistic, and it cannot be known for certain whether statistical conclusions are correct. Whenever there is uncertainty, there is the possibility of making an error. Considering this, all statistical hypothesis tests have a probability of making type I and type II errors. The type I error rate is the probability of rejecting the null hypothesis given that it is true. The test is designed to keep the type I error rate below a prespecified bound called the significance level, usually denoted by the Greek letter α (alpha) and also called the alpha level. Usually, the significance level is set to 0.05 (5%), implying that it is acceptable to have a 5% probability of incorrectly rejecting the true null hypothesis. The rate of the type II error is denoted by the Greek letter β (beta) and is related to the power of a test, which equals 1−β. These two types of error rates are traded off against each other: for any given sample set, the effort to reduce one type of error generally results in increasing the other type of error.

=== The quality of hypothesis test ===

The same idea can be expressed in terms of the rate of correct results and therefore used to minimize error rates and improve the quality of the hypothesis test. To reduce the probability of committing a type I error, making the alpha value more stringent is both simple and efficient.
For example, setting the alpha value at 0.01 instead of 0.05. To decrease the probability of committing a type II error, which is closely associated with the power of the analysis, either increasing the test's sample size or relaxing the alpha level (e.g. setting the alpha level to 0.1 instead of 0.05) can increase the power of the analysis. A test statistic is robust if the type I error rate is controlled. Varying the threshold (cut-off) value can also be used to make the test either more specific or more sensitive, which in turn elevates the test quality. For example, imagine a medical test in which an experimenter measures the concentration of a certain protein in a blood sample. The experimenter can adjust the threshold (the black vertical line in the figure), and people will be diagnosed as having the disease if a value above this threshold is detected. According to the image, changing the threshold changes the numbers of false positives and false negatives, corresponding to movement along the curve.

== Example ==

Since in a real experiment it is impossible to avoid all type I and type II errors, it is important to consider the amount of risk one is willing to take to falsely reject H0 or falsely accept H0. The solution to this question is to report the p-value or significance level α of the statistic. For example, if the p-value of a test statistic result is 0.0596, then there is a probability of 5.96% that we falsely reject H0 given that it is true. Or, if we say the statistic is performed at level α, such as 0.05, then we allow ourselves to falsely reject H0 5% of the time. A significance level α of 0.05 is relatively common, but there is no general rule that fits all scenarios.

=== Vehicle speed measuring ===

The speed limit of a freeway in the United States is 120 kilometers per hour (75 mph). A device is set to measure the speed of passing vehicles.
Suppose that the device will conduct three measurements of the speed of a passing vehicle, recording as a random sample X1, X2, X3. The traffic police will or will not fine the drivers depending on the average speed X ¯ {\displaystyle {\bar {X}}} . That is to say, the test statistic is T = X 1 + X 2 + X 3 3 = X ¯ {\displaystyle T={\frac {X_{1}+X_{2}+X_{3}}{3}}={\bar {X}}} In addition, we suppose that the measurements X1, X2, X3 are modeled as a normal distribution N(μ,2). Then, T should follow N(μ,2/ 3 {\displaystyle {\sqrt {3}}} ) and the parameter μ represents the true speed of the passing vehicle. In this experiment, the null hypothesis H0 and the alternative hypothesis H1 should be H0: μ=120 against H1: μ>120. If we perform the test at significance level α=0.05, then a critical value c should be calculated to solve P ( Z ⩾ c − 120 2 3 ) = 0.05 {\displaystyle P\left(Z\geqslant {\frac {c-120}{\frac {2}{\sqrt {3}}}}\right)=0.05} according to the change-of-units rule for the normal distribution. Referring to the Z-table, we can get c − 120 2 3 = 1.645 ⇒ c = 121.9 {\displaystyle {\frac {c-120}{\frac {2}{\sqrt {3}}}}=1.645\Rightarrow c=121.9} Here, the critical region is the set of recorded average speeds greater than 121.9. That is to say, if the recorded average speed of a vehicle is greater than the critical value 121.9, the driver will be fined. However, 5% of the drivers are still falsely fined, since the recorded average speed is greater than 121.9 while the true speed does not exceed 120; this is a type I error. The type II error corresponds to the case in which the true speed of a vehicle is over 120 kilometers per hour but the driver is not fined.
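The critical value in this worked example can be reproduced with only the Python standard library; the numbers below mirror the calculation above (σ = 2, n = 3, α = 0.05):

```python
from math import sqrt
from statistics import NormalDist

alpha = 0.05
mu0 = 120.0           # speed limit under H0 (km/h)
sigma = 2.0           # per-measurement standard deviation
n = 3                 # three recorded measurements
se = sigma / sqrt(n)  # standard error of the mean, 2/sqrt(3)

# Critical value: the smallest average speed for which H0 is rejected,
# i.e. c solving P(Z >= (c - 120)/se) = alpha.
z = NormalDist().inv_cdf(1 - alpha)   # the 0.95 quantile, about 1.645
c = mu0 + z * se
print(f"critical value c = {c:.1f} km/h")
```

This recovers the Z-table value 1.645 and the critical value c ≈ 121.9 from the text.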
For example, if the true speed of a vehicle is μ=125, the probability that the driver is not fined can be calculated as P ( T < 121.9 | μ = 125 ) = P ( T − 125 2 3 < 121.9 − 125 2 3 ) = Φ ( − 2.68 ) = 0.0036 {\displaystyle P(T<121.9|\mu =125)=P\left({\frac {T-125}{\frac {2}{\sqrt {3}}}}<{\frac {121.9-125}{\frac {2}{\sqrt {3}}}}\right)=\Phi (-2.68)=0.0036} which means, if the true speed of a vehicle is 125, the driver has a probability of 0.36% of avoiding the fine when the test is performed at level α=0.05, since the recorded average speed is lower than 121.9. If the true speed is closer to 121.9 than to 125, then the probability of avoiding the fine will also be higher. The tradeoffs between type I error and type II error should also be considered. That is, in this case, if the traffic police do not want to falsely fine innocent drivers, the level α can be set to a smaller value, like 0.01. However, if that is the case, more drivers whose true speed is over 120 kilometers per hour, like 125, would avoid the fine. == Etymology == In 1928, Jerzy Neyman (1894–1981) and Egon Pearson (1895–1980), both eminent statisticians, discussed the problems associated with "deciding whether or not a particular sample may be judged as likely to have been randomly drawn from a certain population": and, as Florence Nightingale David remarked, "it is necessary to remember the adjective 'random' [in the term 'random sample'] should apply to the method of drawing the sample and not to the sample itself". They identified "two sources of error", namely: In 1930, they elaborated on these two sources of error, remarking that in testing hypotheses two considerations must be kept in view: we must be able to reduce the chance of rejecting a true hypothesis to as low a value as desired, and the test must be so devised that it will reject the hypothesis tested when it is likely to be false.
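The type II error probability for this example can likewise be checked directly; the critical value 121.9 and the true speed 125 are taken from the worked calculation above:

```python
from math import sqrt
from statistics import NormalDist

se = 2.0 / sqrt(3)   # standard error of the mean of the three readings
c = 121.9            # critical value from the worked example
mu_true = 125.0      # actual speed of the vehicle

# beta = P(T < c | mu = 125): the speeding driver escapes the fine.
beta = NormalDist(mu=mu_true, sigma=se).cdf(c)
print(f"P(not fined | mu=125) = {beta:.4f}")
```

The result agrees with Φ(−2.68) ≈ 0.0036 from the text, i.e. a 0.36% chance of a type II error at this particular true speed.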
In 1933, they observed that these "problems are rarely presented in such a form that we can discriminate with certainty between the true and false hypothesis". They also noted that, in deciding whether to fail to reject, or reject a particular hypothesis amongst a "set of alternative hypotheses", H1, H2..., it was easy to make an error, [and] these errors will be of two kinds: In all of the papers co-written by Neyman and Pearson the expression H0 always signifies "the hypothesis to be tested". In the same paper, they call these two sources of error errors of type I and errors of type II, respectively. == Related terms == === Null hypothesis === It is standard practice for statisticians to conduct tests in order to determine whether or not a "speculative hypothesis" concerning the observed phenomena of the world (or its inhabitants) can be supported. The results of such testing determine whether a particular set of results agrees reasonably (or does not agree) with the speculated hypothesis. By statistical convention, it is always assumed that the speculated hypothesis is wrong, and that the so-called "null hypothesis" (that the observed phenomena simply occur by chance, and that, as a consequence, the speculated agent has no effect) holds; the test will determine whether this hypothesis is right or wrong. This is why the hypothesis under test is often called the null hypothesis (most likely, coined by Fisher (1935, p. 19)), because it is this hypothesis that is to be either nullified or not nullified by the test. When the null hypothesis is nullified, it is possible to conclude that the data support the "alternative hypothesis" (which is the original speculated one).
The consistent application by statisticians of Neyman and Pearson's convention of representing "the hypothesis to be tested" (or "the hypothesis to be nullified") with the expression H0 has led to circumstances where many understand the term "the null hypothesis" as meaning "the nil hypothesis" – a statement that the results in question have arisen through chance. This is not necessarily the case – the key restriction, as per Fisher (1966), is that "the null hypothesis must be exact, that is free from vagueness and ambiguity, because it must supply the basis of the 'problem of distribution', of which the test of significance is the solution." As a consequence of this, in experimental science the null hypothesis is generally a statement that a particular treatment has no effect; in observational science, it is that there is no difference between the value of a particular measured variable, and that of an experimental prediction. === Statistical significance === If the probability of obtaining a result as extreme as the one obtained, supposing that the null hypothesis were true, is lower than a pre-specified cut-off probability (for example, 5%), then the result is said to be statistically significant and the null hypothesis is rejected. British statistician Sir Ronald Aylmer Fisher (1890–1962) stressed that the null hypothesis is never proved or established, but is possibly disproved, in the course of experimentation. Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis. == Application domains == === Medicine === In the practice of medicine, the differences between the applications of screening and testing are considerable. ==== Medical screening ==== Screening involves relatively cheap tests that are given to large populations, none of whom manifest any clinical indication of disease (e.g., Pap smears). 
Testing involves far more expensive, often invasive, procedures that are given only to those who manifest some clinical indication of disease, and are most often applied to confirm a suspected diagnosis. For example, most states in the US require newborns to be screened for phenylketonuria and hypothyroidism, among other congenital disorders. Hypothesis: "The newborns have phenylketonuria and hypothyroidism". Null hypothesis (H0): "The newborns do not have phenylketonuria and hypothyroidism". Type I error (false positive): The true fact is that the newborns do not have phenylketonuria and hypothyroidism, but we conclude that they have the disorders according to the data. Type II error (false negative): The true fact is that the newborns have phenylketonuria and hypothyroidism, but we conclude that they do not have the disorders according to the data. Although they display a high rate of false positives, the screening tests are considered valuable because they greatly increase the likelihood of detecting these disorders at a far earlier stage. The simple blood tests used to screen possible blood donors for HIV and hepatitis have a significant rate of false positives; however, physicians use much more expensive and far more precise tests to determine whether a person is actually infected with either of these viruses. Perhaps the most widely discussed false positives in medical screening come from the breast cancer screening procedure mammography. The US rate of false positive mammograms is up to 15%, the highest in the world. One consequence of the high false positive rate in the US is that, in any 10-year period, half of the American women screened receive a false positive mammogram. False positive mammograms are costly, with over $100 million spent annually in the U.S. on follow-up testing and treatment. They also cause women unneeded anxiety. As a result of the high false positive rate in the US, as many as 90–95% of women who get a positive mammogram do not have the condition.
The lowest rate in the world is in the Netherlands, 1%. The lowest rates are generally in Northern Europe where mammography films are read twice and a high threshold for additional testing is set (the high threshold decreases the power of the test). The ideal population screening test would be cheap, easy to administer, and produce zero false negatives, if possible. Such tests usually produce more false positives, which can subsequently be sorted out by more sophisticated (and expensive) testing. ==== Medical testing ==== False negatives and false positives are significant issues in medical testing. Hypothesis: "The patients have the specific disease". Null hypothesis (H0): "The patients do not have the specific disease". Type I error (false positive): The true fact is that the patients do not have a specific disease but the physician judges the patient is ill according to the test reports. Type II error (false negative): The true fact is that the disease is actually present but the test reports provide a falsely reassuring message to patients and physicians that the disease is absent. False positives can also produce serious and counter-intuitive problems when the condition being searched for is rare, as in screening. If a test has a false positive rate of one in ten thousand, but only one in a million samples (or people) is a true positive, most of the positives detected by that test will be false. The probability that an observed positive result is a false positive may be calculated using Bayes' theorem. False negatives produce serious and counter-intuitive problems, especially when the condition being searched for is common. If a test with a false negative rate of only 10% is used to test a population with a true occurrence rate of 70%, many of the negatives detected by the test will be false. This sometimes leads to inappropriate or inadequate treatment of both the patient and their disease. 
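The rare-condition case above can be made concrete with Bayes' theorem. In this sketch the false positive rate (1 in 10,000) and prevalence (1 in 1,000,000) come from the text, while perfect sensitivity is an added simplifying assumption:

```python
# Numbers from the text; sensitivity = 1.0 is an illustrative assumption.
prevalence = 1e-6     # one in a million is a true positive
fpr = 1e-4            # false positive rate of one in ten thousand
sensitivity = 1.0     # assume the test never misses a true case

# Bayes' theorem: P(disease | positive test)
p_positive = sensitivity * prevalence + fpr * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive
print(f"P(true positive | positive test) = {ppv:.4f}")
```

The positive predictive value comes out just under 1%, so roughly 99% of the positives detected by such a test are false, exactly the counter-intuitive effect the text describes.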
A common example is relying on cardiac stress tests to detect coronary atherosclerosis, even though cardiac stress tests are known to only detect limitations of coronary artery blood flow due to advanced stenosis. === Biometrics === Biometric matching, such as for fingerprint recognition, facial recognition or iris recognition, is susceptible to type I and type II errors. Hypothesis: "The input does not identify someone in the searched list of people". Null hypothesis: "The input does identify someone in the searched list of people". Type I error (false reject rate): The true fact is that the person is someone in the searched list but the system concludes that the person is not according to the data. Type II error (false match rate): The true fact is that the person is not someone in the searched list but the system concludes that the person is someone whom we are looking for according to the data. The probability of type I errors is called the "false reject rate" (FRR) or false non-match rate (FNMR), while the probability of type II errors is called the "false accept rate" (FAR) or false match rate (FMR). If the system is designed to rarely match suspects then the probability of type II errors can be called the "false alarm rate". On the other hand, if the system is used for validation (and acceptance is the norm) then the FAR is a measure of system security, while the FRR measures user inconvenience level. === Security screening === False positives are routinely found every day in airport security screening, which are ultimately visual inspection systems. The installed security alarms are intended to prevent weapons being brought onto aircraft; yet they are often set to such high sensitivity that they alarm many times a day for minor items, such as keys, belt buckles, loose change, mobile phones, and tacks in shoes. Hypothesis: "The item is a weapon". Null hypothesis: "The item is not a weapon". 
Type I error (false positive): The true fact is that the item is not a weapon but the system still sounds an alarm. Type II error (false negative): The true fact is that the item is a weapon but the system keeps silent at this time. The ratio of false positives (identifying an innocent traveler as a terrorist) to true positives (detecting a would-be terrorist) is, therefore, very high; and because almost every alarm is a false positive, the positive predictive value of these screening tests is very low. The relative cost of false results determines the likelihood that test creators allow these events to occur. As the cost of a false negative in this scenario is extremely high (not detecting a bomb being brought onto a plane could result in hundreds of deaths) whilst the cost of a false positive is relatively low (a reasonably simple further inspection), the most appropriate test is one with a low statistical specificity but high statistical sensitivity (one that allows a high rate of false positives in return for minimal false negatives). === Computers === The notions of false positives and false negatives have a wide currency in the realm of computers and computer applications, including computer security, spam filtering, malware, optical character recognition, and many others. For example, in the case of spam filtering: Hypothesis: "The message is spam". Null hypothesis: "The message is not spam". Type I error (false positive): Spam filtering or spam blocking techniques wrongly classify a legitimate email message as spam and, as a result, interfere with its delivery. Type II error (false negative): Spam email is not detected as spam, but is classified as non-spam. While most anti-spam tactics can block or filter a high percentage of unwanted emails, doing so without creating significant false-positive results is a much more demanding task. A low number of false negatives is an indicator of the efficiency of spam filtering.
== See also == Binary classification – Dividing things between two categories Detection theory – Means to measure signal processing ability Ethics in mathematics – Emerging field of applied ethics False discovery rate – Statistical method for handling multiple comparisons False positive paradox – Logic error due to ignoring the base rate Family-wise error rate – Probability of making type I errors when performing multiple hypotheses tests Information retrieval performance measures – Obtaining information resources relevant to an information need Lemma (mathematics) – Theorem for proving more complex theorems Jerzy Neyman – Polish American mathematician Neyman–Pearson lemma – Theorem about the power of the likelihood ratio test Null hypothesis – Position that there is no relationship between two phenomena Probability of a hypothesis for Bayesian inference – Method of statistical inference Egon Pearson – British statistician (1895–1980) Precision and recall – Pattern-recognition performance metrics Prosecutor's fallacy – Logic error due to ignoring the base rate Prozone phenomenon – Immunologic phenomenon occurring in high antigen or antibody levels Receiver operating characteristic – Diagnostic plot of binary classifier ability Sensitivity and specificity – Statistical measure of a binary classification Statisticians' and engineers' cross-reference of statistical terms Testing hypotheses suggested by the data – Problem of circular reasoning in statistics Type III error – Term in statistical hypothesis testing == References == == Bibliography == == External links == Bias and Confounding – presentation by Nigel Paneth, Graduate School of Public Health, University of Pittsburgh
Wikipedia/Type_I_error_rate
In statistics, the family-wise error rate (FWER) is the probability of making one or more false discoveries, or type I errors, when performing multiple hypothesis tests. == Familywise and experimentwise error rates == In 1953, John Tukey developed the concept of a familywise error rate as the probability of making a Type I error among a specified group, or "family," of tests. Ryan (1959) proposed the related concept of an experimentwise error rate, which is the probability of making a Type I error in a given experiment. Hence, an experimentwise error rate is a familywise error rate where the family includes all the tests that are conducted within an experiment. As Ryan (1959, Footnote 3) explained, an experiment may contain two or more families of multiple comparisons, each of which relates to a particular statistical inference and each of which has its own separate familywise error rate. Hence, familywise error rates are usually based on theoretically informative collections of multiple comparisons. In contrast, an experimentwise error rate may be based on a collection of simultaneous comparisons that refer to a diverse range of separate inferences. Some have argued that it may not be useful to control the experimentwise error rate in such cases. Indeed, Tukey suggested that familywise control was preferable in such cases (Tukey, 1956, personal communication, in Ryan, 1962, p. 302). == Background == Within the statistical framework, there are several definitions for the term "family": Hochberg & Tamhane (1987) defined "family" as "any collection of inferences for which it is meaningful to take into account some combined measure of error".
According to Cox (1982), a set of inferences should be regarded as a family either to take into account the selection effect due to data dredging, or to ensure simultaneous correctness of a set of inferences so as to guarantee a correct overall decision. To summarize, a family could best be defined by the potential selective inference that is being faced: a family is the smallest set of items of inference in an analysis, interchangeable about their meaning for the goal of research, from which selection of results for action, presentation or highlighting could be made (Yoav Benjamini). === Classification of multiple hypothesis tests === The following table defines the possible outcomes when testing multiple null hypotheses. Suppose we have a number m of null hypotheses, denoted by: H1, H2, ..., Hm. Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant. Summing each type of outcome over all Hi yields the following random variables: m is the total number of hypotheses tested m 0 {\displaystyle m_{0}} is the number of true null hypotheses, an unknown parameter m − m 0 {\displaystyle m-m_{0}} is the number of true alternative hypotheses V is the number of false positives (Type I error) (also called "false discoveries") S is the number of true positives (also called "true discoveries") T is the number of false negatives (Type II error) U is the number of true negatives R = V + S {\displaystyle R=V+S} is the number of rejected null hypotheses (also called "discoveries", either true or false) In m hypothesis tests of which m 0 {\displaystyle m_{0}} are true null hypotheses, R is an observable random variable, and S, T, U, and V are unobservable random variables. == Definition == The FWER is the probability of making at least one type I error in the family, F W E R = Pr ( V ≥ 1 ) , {\displaystyle \mathrm {FWER} =\Pr(V\geq 1),\,} or equivalently, F W E R = 1 − Pr ( V = 0 ) .
{\displaystyle \mathrm {FWER} =1-\Pr(V=0).} Thus, by assuring F W E R ≤ α {\displaystyle \mathrm {FWER} \leq \alpha \,\!\,} , the probability of making one or more type I errors in the family is controlled at level α {\displaystyle \alpha \,\!} . A procedure controls the FWER in the weak sense if the FWER control at level α {\displaystyle \alpha \,\!} is guaranteed only when all null hypotheses are true (i.e. when m 0 = m {\displaystyle m_{0}=m} , meaning the "global null hypothesis" is true). A procedure controls the FWER in the strong sense if the FWER control at level α {\displaystyle \alpha \,\!} is guaranteed for any configuration of true and non-true null hypotheses (whether the global null hypothesis is true or not). == Controlling procedures == Some classical solutions ensure strong level α {\displaystyle \alpha } FWER control, and some newer solutions exist. === The Bonferroni procedure === Denote by p i {\displaystyle p_{i}} the p-value for testing H i {\displaystyle H_{i}} ; reject H i {\displaystyle H_{i}} if p i ≤ α m {\displaystyle p_{i}\leq {\frac {\alpha }{m}}} === The Šidák procedure === Testing each hypothesis at level α S I D = 1 − ( 1 − α ) 1 m {\displaystyle \alpha _{SID}=1-(1-\alpha )^{\frac {1}{m}}} is Šidák's multiple testing procedure. This procedure is more powerful than Bonferroni, but the gain is small. This procedure can fail to control the FWER when the tests are negatively dependent. === Tukey's procedure === Tukey's procedure is only applicable for pairwise comparisons. It assumes independence of the observations being tested, as well as equal variation across observations (homoscedasticity). The procedure calculates for each pair the studentized range statistic: Y A − Y B S E {\displaystyle {\frac {Y_{A}-Y_{B}}{SE}}} where Y A {\displaystyle Y_{A}} is the larger of the two means being compared, Y B {\displaystyle Y_{B}} is the smaller, and S E {\displaystyle SE} is the standard error of the data in question.
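The per-test levels of the Bonferroni and Šidák procedures above can be compared directly; the family size m = 10 below is an arbitrary illustrative choice, and under independent tests the Šidák level gives a FWER of exactly α:

```python
alpha = 0.05
m = 10  # hypothetical family size, chosen for illustration

bonferroni_level = alpha / m                  # Bonferroni per-test level
sidak_level = 1 - (1 - alpha) ** (1 / m)      # Sidak per-test level

# Sidak's level is slightly more generous than Bonferroni's, and under
# independence testing at that level gives FWER exactly alpha:
fwer_under_independence = 1 - (1 - sidak_level) ** m
print(bonferroni_level, sidak_level, fwer_under_independence)
```

For m = 10, Bonferroni tests at 0.005 while Šidák tests at about 0.00512, which is the small power gain the text mentions.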
Tukey's test is essentially a Student's t-test, except that it corrects for the family-wise error rate. === Holm's step-down procedure (1979) === Start by ordering the p-values (from lowest to highest) P ( 1 ) … P ( m ) {\displaystyle P_{(1)}\ldots P_{(m)}} and let the associated hypotheses be H ( 1 ) … H ( m ) {\displaystyle H_{(1)}\ldots H_{(m)}} Let k {\displaystyle k} be the minimal index such that P ( k ) > α m + 1 − k {\displaystyle P_{(k)}>{\frac {\alpha }{m+1-k}}} Reject the null hypotheses H ( 1 ) … H ( k − 1 ) {\displaystyle H_{(1)}\ldots H_{(k-1)}} . If k = 1 {\displaystyle k=1} then none of the hypotheses are rejected. This procedure is uniformly more powerful than the Bonferroni procedure. The reason why this procedure controls the family-wise error rate for all the m hypotheses at level α in the strong sense is that it is a closed testing procedure. As such, each intersection is tested using the simple Bonferroni test. === Hochberg's step-up procedure === Yosef Hochberg's step-up procedure (1988) is performed using the following steps: Start by ordering the p-values (from lowest to highest) P ( 1 ) … P ( m ) {\displaystyle P_{(1)}\ldots P_{(m)}} and let the associated hypotheses be H ( 1 ) … H ( m ) {\displaystyle H_{(1)}\ldots H_{(m)}} For a given α {\displaystyle \alpha } , let R {\displaystyle R} be the largest k {\displaystyle k} such that P ( k ) ≤ α m + 1 − k {\displaystyle P_{(k)}\leq {\frac {\alpha }{m+1-k}}} Reject the null hypotheses H ( 1 ) … H ( R ) {\displaystyle H_{(1)}\ldots H_{(R)}} Hochberg's procedure is more powerful than Holm's. Nevertheless, while Holm's is a closed testing procedure (and thus, like Bonferroni, has no restriction on the joint distribution of the test statistics), Hochberg's is based on the Simes test, so it holds only under non-negative dependence.
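The Holm step-down and Hochberg step-up procedures described above can be sketched in a few lines; the p-values in the examples are hypothetical:

```python
def holm(pvalues, alpha=0.05):
    """Holm's step-down procedure: indices of rejected hypotheses."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    rejected = []
    for rank, i in enumerate(order):         # rank 0 is tested at alpha/m
        if pvalues[i] > alpha / (m - rank):  # alpha/(m+1-k) for 1-based k
            break                            # step down: stop at first failure
        rejected.append(i)
    return rejected

def hochberg(pvalues, alpha=0.05):
    """Hochberg's step-up procedure: indices of rejected hypotheses."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    largest = -1
    for rank, i in enumerate(order):          # largest k with P(k) <= alpha/(m+1-k)
        if pvalues[i] <= alpha / (m - rank):
            largest = rank
    return [order[r] for r in range(largest + 1)]

# Hypothetical p-values where the two procedures agree:
print(holm([0.001, 0.02, 0.04, 0.30]), hochberg([0.001, 0.02, 0.04, 0.30]))
# ...and a case showing Hochberg's extra power: Holm rejects nothing,
# Hochberg rejects both.
print(holm([0.04, 0.045]), hochberg([0.04, 0.045]))
```

The second example illustrates the text's claim that Hochberg's procedure is more powerful than Holm's: with p-values (0.04, 0.045) at α = 0.05, Holm stops immediately while Hochberg rejects both hypotheses.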
The Simes test is derived under the assumption of independent tests; it is conservative for tests that are positively dependent in a certain sense and is anti-conservative for certain cases of negative dependence. However, it has been suggested that a modified version of the Hochberg procedure remains valid under general negative dependence. === Dunnett's correction === Charles Dunnett (1955, 1966) described an alternative alpha error adjustment when k groups are compared to the same control group. Now known as Dunnett's test, this method is less conservative than the Bonferroni adjustment. === Scheffé's method === === Resampling procedures === The procedures of Bonferroni and Holm control the FWER under any dependence structure of the p-values (or equivalently the individual test statistics). Essentially, this is achieved by accommodating a "worst-case" dependence structure (which is close to independence for most practical purposes). But such an approach is conservative if dependence is actually positive. To give an extreme example, under perfect positive dependence, there is effectively only one test and thus the FWER is uninflated. Accounting for the dependence structure of the p-values (or of the individual test statistics) produces more powerful procedures. This can be achieved by applying resampling methods, such as bootstrapping and permutation methods. The procedure of Westfall and Young (1993) requires a certain condition that does not always hold in practice (namely, subset pivotality). The procedures of Romano and Wolf (2005a,b) dispense with this condition and are thus more generally valid. === Harmonic mean p-value procedure === The harmonic mean p-value (HMP) procedure provides a multilevel test that improves on the power of Bonferroni correction by assessing the significance of groups of hypotheses while controlling the strong-sense family-wise error rate.
The significance of any subset R {\textstyle {\mathcal {R}}} of the m {\textstyle m} tests is assessed by calculating the HMP for the subset, p ∘ R = ∑ i ∈ R w i ∑ i ∈ R w i / p i , {\displaystyle {\overset {\circ }{p}}_{\mathcal {R}}={\frac {\sum _{i\in {\mathcal {R}}}w_{i}}{\sum _{i\in {\mathcal {R}}}w_{i}/p_{i}}},} where w 1 , … , w m {\textstyle w_{1},\dots ,w_{m}} are weights that sum to one (i.e. ∑ i = 1 m w i = 1 {\textstyle \sum _{i=1}^{m}w_{i}=1} ). An approximate procedure that controls the strong-sense family-wise error rate at level approximately α {\textstyle \alpha } rejects the null hypothesis that none of the p-values in subset R {\textstyle {\mathcal {R}}} are significant when p ∘ R ≤ α w R {\textstyle {\overset {\circ }{p}}_{\mathcal {R}}\leq \alpha \,w_{\mathcal {R}}} (where w R = ∑ i ∈ R w i {\textstyle w_{\mathcal {R}}=\sum _{i\in {\mathcal {R}}}w_{i}} ). This approximation is reasonable for small α {\textstyle \alpha } (e.g. α < 0.05 {\textstyle \alpha <0.05} ) and becomes arbitrarily good as α {\textstyle \alpha } approaches zero. An asymptotically exact test is also available (see main article). == Alternative approaches == FWER control exerts a more stringent control over false discovery compared to false discovery rate (FDR) procedures. FWER control limits the probability of at least one false discovery, whereas FDR control limits (in a loose sense) the expected proportion of false discoveries. Thus, FDR procedures have greater power at the cost of increased rates of type I errors, i.e., rejecting null hypotheses that are actually true. On the other hand, FWER control is less stringent than per-family error rate control, which limits the expected number of errors per family. Because FWER control is concerned with at least one false discovery, unlike per-family error rate control it does not treat multiple simultaneous false discoveries as any worse than one false discovery. 
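As a small illustration of the HMP combination described earlier in this section, the weighted harmonic mean of a subset of p-values can be computed directly; equal weights are an illustrative default and the p-values themselves are hypothetical:

```python
def harmonic_mean_p(pvalues, weights=None):
    """Weighted harmonic mean of a set of p-values (the HMP combination)."""
    if weights is None:
        # Equal weights summing to one, an illustrative default choice.
        weights = [1.0 / len(pvalues)] * len(pvalues)
    return sum(weights) / sum(w / p for w, p in zip(weights, pvalues))

# A single small p-value dominates the harmonic mean:
print(harmonic_mean_p([0.01, 0.2, 0.5]))
```

Because the harmonic mean is dominated by its smallest terms, the combined value here lands near the smallest p-value (about 0.028), which is what lets the procedure detect significance in a group even when no individual test survives a Bonferroni threshold.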
The Bonferroni correction is often regarded as merely controlling the FWER, but it in fact also controls the per-family error rate. == References == == External links == Understanding Family Wise Error Rate - blog post including its utility relative to False Discovery Rate
Wikipedia/Family-wise_error_rate
In statistics, a false coverage rate (FCR) is the average rate of false coverage, i.e. not covering the true parameters, among the selected intervals. The FCR gives a simultaneous coverage at a (1 − α)×100% level for all of the parameters considered in the problem. The FCR has a strong connection to the false discovery rate (FDR). Both methods address the problem of multiple comparisons, FCR from the confidence interval (CI) point of view and FDR from the P-value point of view. The FCR was needed because of the dangers caused by selective inference. Researchers and scientists tend to report or highlight only the portion of data that is considered significant, without clearly indicating the various hypotheses that were considered. It is therefore necessary to understand how the data is falsely covered. There are many FCR procedures which can be used depending on the length of the CI – Bonferroni-selected–Bonferroni-adjusted, adjusted BH-selected CIs (Benjamini and Yekutieli 2005). The incentive for choosing one procedure over another is to ensure that the CI is as narrow as possible while keeping the FCR controlled. For microarray experiments and other modern applications, there are a huge number of parameters, often tens of thousands or more, and it is therefore very important to choose the most powerful procedure. The FCR was first introduced by Daniel Yekutieli in his PhD thesis in 2001. == Definitions == Not keeping the FCR means FCR > q {\displaystyle {\text{FCR}}>q} when q = V R = α m 0 R {\displaystyle q={\frac {V}{R}}={\frac {\alpha m_{0}}{R}}} , where m 0 {\displaystyle m_{0}} is the number of true null hypotheses, R {\displaystyle R} is the number of rejected hypotheses, V {\displaystyle V} is the number of false positives, and α {\displaystyle \alpha } is the significance level. Intervals with simultaneous coverage probability 1 − q {\displaystyle 1-q} can control the FCR to be bounded by q {\displaystyle q} .
=== Classification of multiple hypothesis tests === The following table defines the possible outcomes when testing multiple null hypotheses. Suppose we have a number m of null hypotheses, denoted by: H1, H2, ..., Hm. Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant. Summing each type of outcome over all Hi yields the following random variables: m is the total number of hypotheses tested m 0 {\displaystyle m_{0}} is the number of true null hypotheses, an unknown parameter m − m 0 {\displaystyle m-m_{0}} is the number of true alternative hypotheses V is the number of false positives (Type I error) (also called "false discoveries") S is the number of true positives (also called "true discoveries") T is the number of false negatives (Type II error) U is the number of true negatives R = V + S {\displaystyle R=V+S} is the number of rejected null hypotheses (also called "discoveries", either true or false) In m hypothesis tests of which m 0 {\displaystyle m_{0}} are true null hypotheses, R is an observable random variable, and S, T, U, and V are unobservable random variables. == The problems addressed by FCR == === Selection === Selection causes reduced average coverage. Selection can be presented as conditioning on an event defined by the data and may affect the coverage probability of a CI for a single parameter. Equivalently, the problem of selection changes the basic sense of P-values. FCR procedures consider that the goal of conditional coverage following any selection rule for any set of (unknown) values for the parameters is impossible to achieve. A weaker property when it comes to selective CIs is possible and will avoid false coverage statements. FCR is a measure of interval coverage following selection.
Therefore, even though a 1 − α CI does not offer selective (conditional) coverage, the probability of constructing a non-covering CI is at most α, since Pr [ θ ∉ C I , CI constructed ] ≤ Pr [ θ ∉ C I ] ≤ α {\displaystyle \Pr[\theta \not \in \mathrm {CI} ,\ {\text{CI constructed}}]\leq \Pr[\theta \not \in \mathrm {CI} ]\leq \alpha } === Selection and multiplicity === When facing both multiplicity (inference about multiple parameters) and selection, not only does the expected proportion of coverage over the selected parameters fall short of 1 − α, but the expected proportion of non-coverage can no longer be kept at α by constructing marginal CIs for each selected parameter. FCR procedures solve this by controlling the expected proportion of parameters not covered by their CIs among the selected parameters, where the proportion is defined as 0 if no parameter is selected. This false coverage-statement rate (FCR) is a property of any procedure that is defined by the way in which parameters are selected and the way in which the multiple intervals are constructed. == Controlling procedures == === Bonferroni procedure (Bonferroni-selected–Bonferroni-adjusted) for simultaneous CI === In the Bonferroni procedure for simultaneous CIs with m parameters, each marginal CI is constructed at the 1 − α/m level. Without selection, these CIs offer simultaneous coverage, in the sense that the probability that all CIs cover their respective parameters is at least 1 − α. Unfortunately, even such a strong property does not ensure the conditional confidence property following selection. === FCR for Bonferroni-selected–Bonferroni-adjusted simultaneous CI === The Bonferroni–Bonferroni procedure cannot offer conditional coverage, but it does control the FCR at a level below α. In fact it does so too well, in the sense that the FCR is much too close to 0 for large values of θ. Interval selection is based on Bonferroni testing, and Bonferroni CIs are then constructed.
The FCR is estimated as the proportion of intervals failing to cover their respective parameters among the constructed CIs (taking the proportion to be 0 when none are selected), where selection is based on unadjusted individual testing and unadjusted CIs are constructed. === FCR-adjusted BH-selected CIs === In the BH procedure for FDR, after sorting the p-values P(1) ≤ ⋯ ≤ P(m) and calculating R = max{j : P(j) ≤ j·q/m}, the R null hypotheses for which P(i) ≤ R·q/m are rejected. If testing is done using the Bonferroni procedure, then the lower bound of the FCR may drop well below the desired level q, implying that the intervals are too long. In contrast, applying the following procedure, which combines the general procedure with the FDR-controlling testing of the BH procedure, also yields a lower bound for the FCR, q/2 ≤ FCR. This procedure is sharp in the sense that for some configurations the FCR approaches q. 1. Sort the p-values used for testing the m hypotheses regarding the parameters, P(1) ≤ ⋯ ≤ P(m). 2. Calculate R = max{i : P(i) ≤ i·q/m}. 3. Select the R parameters for which P(i) ≤ R·q/m, corresponding to the rejected hypotheses. 4. Construct a 1 − R·q/m CI for each parameter selected. == See also == False positive rate Post-hoc analysis == References == Zhao, Zhigen; Hwang, J. T. Gene (2012). "Empirical Bayes false coverage rate controlling confidence intervals". Journal of the Royal Statistical Society, Series B. doi:10.1111/j.1467-9868.2012.01033.x.
Wikipedia/False_coverage_rate
Reinforcement learning (RL) is an interdisciplinary area of machine learning and optimal control concerned with how an intelligent agent should take actions in a dynamic environment in order to maximize a reward signal. Reinforcement learning is one of the three basic machine learning paradigms, alongside supervised learning and unsupervised learning. Reinforcement learning differs from supervised learning in not needing labelled input-output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead, the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge) with the goal of maximizing the cumulative reward (the feedback of which might be incomplete or delayed). The search for this balance is known as the exploration–exploitation dilemma. The environment is typically stated in the form of a Markov decision process (MDP), as many reinforcement learning algorithms use dynamic programming techniques. The main difference between classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the Markov decision process, and they target large MDPs where exact methods become infeasible. == Principles == Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, RL is called approximate dynamic programming, or neuro-dynamic programming. 
The problems of interest in RL have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation (particularly in the absence of a mathematical model of the environment). Basic reinforcement learning is modeled as a Markov decision process: A set of environment and agent states (the state space), S {\displaystyle {\mathcal {S}}} ; A set of actions (the action space), A {\displaystyle {\mathcal {A}}} , of the agent; P a ( s , s ′ ) = Pr ( S t + 1 = s ′ ∣ S t = s , A t = a ) {\displaystyle P_{a}(s,s')=\Pr(S_{t+1}=s'\mid S_{t}=s,A_{t}=a)} , the transition probability (at time t {\displaystyle t} ) from state s {\displaystyle s} to state s ′ {\displaystyle s'} under action a {\displaystyle a} . R a ( s , s ′ ) {\displaystyle R_{a}(s,s')} , the immediate reward after transition from s {\displaystyle s} to s ′ {\displaystyle s'} under action a {\displaystyle a} . The purpose of reinforcement learning is for the agent to learn an optimal (or near-optimal) policy that maximizes the reward function or other user-provided reinforcement signal that accumulates from immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. In some circumstances, animals learn to adopt behaviors that optimize these rewards. This suggests that animals are capable of reinforcement learning. A basic reinforcement learning agent interacts with its environment in discrete time steps. At each time step t, the agent receives the current state S t {\displaystyle S_{t}} and reward R t {\displaystyle R_{t}} . 
It then chooses an action A t {\displaystyle A_{t}} from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state S t + 1 {\displaystyle S_{t+1}} and the reward R t + 1 {\displaystyle R_{t+1}} associated with the transition ( S t , A t , S t + 1 ) {\displaystyle (S_{t},A_{t},S_{t+1})} is determined. The goal of a reinforcement learning agent is to learn a policy: π : S × A → [ 0 , 1 ] {\displaystyle \pi :{\mathcal {S}}\times {\mathcal {A}}\rightarrow [0,1]} , π ( s , a ) = Pr ( A t = a ∣ S t = s ) {\displaystyle \pi (s,a)=\Pr(A_{t}=a\mid S_{t}=s)} that maximizes the expected cumulative reward. Formulating the problem as a Markov decision process assumes the agent directly observes the current environmental state; in this case, the problem is said to have full observability. If the agent only has access to a subset of states, or if the observed states are corrupted by noise, the agent is said to have partial observability, and formally the problem must be formulated as a partially observable Markov decision process. In both cases, the set of actions available to the agent can be restricted. For example, the state of an account balance could be restricted to be positive; if the current value of the state is 3 and the state transition attempts to reduce the value by 4, the transition will not be allowed. When the agent's performance is compared to that of an agent that acts optimally, the difference in performance yields the notion of regret. In order to act near optimally, the agent must reason about long-term consequences of its actions (i.e., maximize future rewards), although the immediate reward associated with this might be negative. Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off. 
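The interaction loop just described (observe S_t, choose A_t, receive S_{t+1} and R_{t+1}) can be sketched in a few lines; the two-state MDP, its transition probabilities, and its reward function below are invented purely for illustration:

```python
import random

random.seed(1)

# Invented two-state MDP: states {0, 1}, actions {"stay", "move"}.
# P[a][s] lists (next_state, probability); the reward is 1 for landing in state 1.
P = {"stay": {0: [(0, 0.9), (1, 0.1)], 1: [(1, 0.9), (0, 0.1)]},
     "move": {0: [(1, 0.8), (0, 0.2)], 1: [(0, 0.8), (1, 0.2)]}}

def step(s, a):
    """Sample S_{t+1} from the transition probabilities P_a(s, .) and return
    it together with the immediate reward R_a(s, s')."""
    nexts, probs = zip(*P[a][s])
    s2 = random.choices(nexts, weights=probs)[0]
    return s2, (1.0 if s2 == 1 else 0.0)

def policy(s):
    """A stochastic policy pi(a | s); here simply uniform over actions."""
    return random.choice(["stay", "move"])

s, total = 0, 0.0
for t in range(100):      # discrete time steps
    a = policy(s)         # the agent chooses A_t
    s, r = step(s, a)     # the environment returns S_{t+1} and R_{t+1}
    total += r            # cumulative (undiscounted) reward
print(f"cumulative reward after 100 steps: {total}")
```

Learning algorithms differ only in how `policy` is updated from the observed transitions; the loop itself is the same.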
It has been applied successfully to various problems, including energy storage, robot control, photovoltaic generators, backgammon, checkers, Go (AlphaGo), and autonomous driving systems. Two elements make reinforcement learning powerful: the use of samples to optimize performance, and the use of function approximation to deal with large environments. Thanks to these two key components, RL can be used in large environments in the following situations: A model of the environment is known, but an analytic solution is not available; Only a simulation model of the environment is given (the subject of simulation-based optimization); The only way to collect information about the environment is to interact with it. The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems. == Exploration == The exploration vs. exploitation trade-off has been most thoroughly studied through the multi-armed bandit problem and for finite state space Markov decision processes in Burnetas and Katehakis (1997). Reinforcement learning requires clever exploration mechanisms; randomly selecting actions, without reference to an estimated probability distribution, shows poor performance. The case of (small) finite Markov decision processes is relatively well understood. However, due to the lack of algorithms that scale well with the number of states (or scale to problems with infinite state spaces), simple exploration methods are the most practical. One such method is ε {\displaystyle \varepsilon } -greedy, where 0 < ε < 1 {\displaystyle 0<\varepsilon <1} is a parameter controlling the amount of exploration vs. exploitation. 
With probability 1 − ε {\displaystyle 1-\varepsilon } , exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect (ties between actions are broken uniformly at random). Alternatively, with probability ε {\displaystyle \varepsilon } , exploration is chosen, and the action is chosen uniformly at random. ε {\displaystyle \varepsilon } is usually a fixed parameter but can be adjusted either according to a schedule (making the agent explore progressively less), or adaptively based on heuristics. == Algorithms for control learning == Even if the issue of exploration is disregarded and even if the state is observable (assumed hereafter), the problem remains to use past experience to find out which actions lead to higher cumulative rewards. === Criterion of optimality === ==== Policy ==== The agent's action selection is modeled as a map called the policy: π : A × S → [ 0 , 1 ] {\displaystyle \pi :{\mathcal {A}}\times {\mathcal {S}}\rightarrow [0,1]} π ( a , s ) = Pr ( A t = a ∣ S t = s ) {\displaystyle \pi (a,s)=\Pr(A_{t}=a\mid S_{t}=s)} The policy map gives the probability of taking action a {\displaystyle a} when in state s {\displaystyle s} .: 61  There are also deterministic policies π {\displaystyle \pi } for which π ( s ) {\displaystyle \pi (s)} denotes the action that should be played at state s {\displaystyle s} . ==== State-value function ==== The state-value function V π ( s ) {\displaystyle V_{\pi }(s)} is defined as the expected discounted return starting from state s {\displaystyle s} , i.e. S 0 = s {\displaystyle S_{0}=s} , and successively following policy π {\displaystyle \pi } .
Hence, roughly speaking, the value function estimates "how good" it is to be in a given state.: 60  V π ( s ) = E ⁡ [ G ∣ S 0 = s ] = E ⁡ [ ∑ t = 0 ∞ γ t R t + 1 ∣ S 0 = s ] , {\displaystyle V_{\pi }(s)=\operatorname {\mathbb {E} } [G\mid S_{0}=s]=\operatorname {\mathbb {E} } \left[\sum _{t=0}^{\infty }\gamma ^{t}R_{t+1}\mid S_{0}=s\right],} where the random variable G {\displaystyle G} denotes the discounted return, defined as the sum of future discounted rewards: G = ∑ t = 0 ∞ γ t R t + 1 = R 1 + γ R 2 + γ 2 R 3 + … , {\displaystyle G=\sum _{t=0}^{\infty }\gamma ^{t}R_{t+1}=R_{1}+\gamma R_{2}+\gamma ^{2}R_{3}+\dots ,} where R t + 1 {\displaystyle R_{t+1}} is the reward for transitioning from state S t {\displaystyle S_{t}} to S t + 1 {\displaystyle S_{t+1}} , and 0 ≤ γ < 1 {\displaystyle 0\leq \gamma <1} is the discount rate. Since γ {\displaystyle \gamma } is less than 1, rewards in the distant future are weighted less than rewards in the immediate future. The algorithm must find a policy with maximum expected discounted return. From the theory of Markov decision processes it is known that, without loss of generality, the search can be restricted to the set of so-called stationary policies. A policy is stationary if the action-distribution returned by it depends only on the last state visited (from the observing agent's history). The search can be further restricted to deterministic stationary policies. A deterministic stationary policy deterministically selects actions based on the current state. Since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality. === Brute force === The brute force approach entails two steps: for each possible policy, sample returns while following it; then choose the policy with the largest expected discounted return. One problem with this approach is that the number of policies can be large, or even infinite.
Another is that the variance of the returns may be large, which requires many samples to accurately estimate the discounted return of each policy. These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are value function estimation and direct policy search. === Value function === Value function approaches attempt to find a policy that maximizes the discounted return by maintaining a set of estimates of expected discounted returns E ⁡ [ G ] {\displaystyle \operatorname {\mathbb {E} } [G]} for some policy (usually either the "current" [on-policy] or the optimal [off-policy] one). These methods rely on the theory of Markov decision processes, where optimality is defined in a sense stronger than the one above: a policy is optimal if it achieves the best expected discounted return from any initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found among stationary policies. To define optimality in a formal manner, define the state-value of a policy π {\displaystyle \pi } by V π ( s ) = E ⁡ [ G ∣ s , π ] , {\displaystyle V^{\pi }(s)=\operatorname {\mathbb {E} } [G\mid s,\pi ],} where G {\displaystyle G} stands for the discounted return associated with following π {\displaystyle \pi } from the initial state s {\displaystyle s} . Defining V ∗ ( s ) {\displaystyle V^{*}(s)} as the maximum possible state-value of V π ( s ) {\displaystyle V^{\pi }(s)} , where π {\displaystyle \pi } is allowed to change, V ∗ ( s ) = max π V π ( s ) . {\displaystyle V^{*}(s)=\max _{\pi }V^{\pi }(s).} A policy that achieves these optimal state-values in each state is called optimal.
Clearly, a policy that is optimal in this sense is also optimal in the sense that it maximizes the expected discounted return, since V ∗ ( s ) = max π E [ G ∣ s , π ] {\displaystyle V^{*}(s)=\max _{\pi }\mathbb {E} [G\mid s,\pi ]} , where s {\displaystyle s} is a state randomly sampled from the distribution μ {\displaystyle \mu } of initial states (so μ ( s ) = Pr ( S 0 = s ) {\displaystyle \mu (s)=\Pr(S_{0}=s)} ). Although state-values suffice to define optimality, it is useful to define action-values. Given a state s {\displaystyle s} , an action a {\displaystyle a} and a policy π {\displaystyle \pi } , the action-value of the pair ( s , a ) {\displaystyle (s,a)} under π {\displaystyle \pi } is defined by Q π ( s , a ) = E ⁡ [ G ∣ s , a , π ] , {\displaystyle Q^{\pi }(s,a)=\operatorname {\mathbb {E} } [G\mid s,a,\pi ],\,} where G {\displaystyle G} now stands for the random discounted return associated with first taking action a {\displaystyle a} in state s {\displaystyle s} and following π {\displaystyle \pi } , thereafter. The theory of Markov decision processes states that if π ∗ {\displaystyle \pi ^{*}} is an optimal policy, we act optimally (take the optimal action) by choosing the action from Q π ∗ ( s , ⋅ ) {\displaystyle Q^{\pi ^{*}}(s,\cdot )} with the highest action-value at each state, s {\displaystyle s} . The action-value function of such an optimal policy ( Q π ∗ {\displaystyle Q^{\pi ^{*}}} ) is called the optimal action-value function and is commonly denoted by Q ∗ {\displaystyle Q^{*}} . In summary, the knowledge of the optimal action-value function alone suffices to know how to act optimally. Assuming full knowledge of the Markov decision process, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration. Both algorithms compute a sequence of functions Q k {\displaystyle Q_{k}} ( k = 0 , 1 , 2 , … {\displaystyle k=0,1,2,\ldots } ) that converge to Q ∗ {\displaystyle Q^{*}} . 
Computing these functions involves computing expectations over the whole state-space, which is impractical for all but the smallest (finite) Markov decision processes. In reinforcement learning methods, expectations are approximated by averaging over samples and using function approximation techniques to cope with the need to represent value functions over large state-action spaces. ==== Monte Carlo methods ==== Monte Carlo methods are used to solve reinforcement learning problems by averaging sample returns. Unlike methods that require full knowledge of the environment's dynamics, Monte Carlo methods rely solely on actual or simulated experience—sequences of states, actions, and rewards obtained from interaction with an environment. This makes them applicable in situations where the complete dynamics are unknown. Learning from actual experience does not require prior knowledge of the environment and can still lead to optimal behavior. When using simulated experience, only a model capable of generating sample transitions is required, rather than a full specification of transition probabilities, which is necessary for dynamic programming methods. Monte Carlo methods apply to episodic tasks, where experience is divided into episodes that eventually terminate. Policy and value function updates occur only after the completion of an episode, making these methods incremental on an episode-by-episode basis, though not on a step-by-step (online) basis. The term "Monte Carlo" generally refers to any method involving random sampling; however, in this context, it specifically refers to methods that compute averages from complete returns, rather than partial returns. These methods function similarly to the bandit algorithms, in which returns are averaged for each state-action pair. The key difference is that actions taken in one state affect the returns of subsequent states within the same episode, making the problem non-stationary. 
To address this non-stationarity, Monte Carlo methods use the framework of general policy iteration (GPI). While dynamic programming computes value functions using full knowledge of the Markov decision process (MDP), Monte Carlo methods learn these functions through sample returns. The value functions and policies interact similarly to dynamic programming to achieve optimality, first addressing the prediction problem and then extending to policy improvement and control, all based on sampled experience. ==== Temporal difference methods ==== The first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle. This too may be problematic as it might prevent convergence. Most current algorithms do this, giving rise to the class of generalized policy iteration algorithms. Many actor-critic methods belong to this category. The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's temporal difference (TD) methods that are based on the recursive Bellman equation. The computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch). Batch methods, such as the least-squares temporal difference method, may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue. Another problem specific to TD comes from their reliance on the recursive Bellman equation. 
Most TD methods have a so-called λ {\displaystyle \lambda } parameter ( 0 ≤ λ ≤ 1 ) {\displaystyle (0\leq \lambda \leq 1)} that can continuously interpolate between Monte Carlo methods that do not rely on the Bellman equations and the basic TD methods that rely entirely on the Bellman equations. This can be effective in palliating this issue. ==== Function approximation methods ==== In order to address the fifth issue, function approximation methods are used. Linear function approximation starts with a mapping ϕ {\displaystyle \phi } that assigns a finite-dimensional vector to each state-action pair. Then, the action values of a state-action pair ( s , a ) {\displaystyle (s,a)} are obtained by linearly combining the components of ϕ ( s , a ) {\displaystyle \phi (s,a)} with some weights θ {\displaystyle \theta } : Q ( s , a ) = ∑ i = 1 d θ i ϕ i ( s , a ) . {\displaystyle Q(s,a)=\sum _{i=1}^{d}\theta _{i}\phi _{i}(s,a).} The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored. Value iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants, including deep Q-learning methods in which a neural network is used to represent Q, with various applications in stochastic search problems. The problem with using action-values is that they may need highly precise estimates of the competing action values, which can be hard to obtain when the returns are noisy, though this problem is mitigated to some extent by temporal difference methods. Using the so-called compatible function approximation method compromises generality and efficiency. === Direct policy search === An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of stochastic optimization.
The two approaches available are gradient-based and gradient-free methods. Gradient-based methods (policy gradient methods) start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector θ {\displaystyle \theta } , let π θ {\displaystyle \pi _{\theta }} denote the policy associated to θ {\displaystyle \theta } . Defining the performance function by ρ ( θ ) = ρ π θ {\displaystyle \rho (\theta )=\rho ^{\pi _{\theta }}} under mild conditions this function will be differentiable as a function of the parameter vector θ {\displaystyle \theta } . If the gradient of ρ {\displaystyle \rho } was known, one could use gradient ascent. Since an analytic expression for the gradient is not available, only a noisy estimate is available. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams's REINFORCE method (which is known as the likelihood ratio method in the simulation-based optimization literature). A large class of methods avoids relying on gradient information. These include simulated annealing, cross-entropy search or methods of evolutionary computation. Many gradient-free methods can achieve (in theory and in the limit) a global optimum. Policy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function based methods that rely on temporal differences might help in this case. In recent years, actor–critic methods have been proposed and performed well on various problems. Policy search methods have been used in the robotics context. Many policy search methods may get stuck in local optima (as they are based on local search). === Model-based algorithms === Finally, all of the above methods can be combined with algorithms that first learn a model of the Markov decision process, the probability of each next state given an action taken from an existing state. 
For instance, the Dyna algorithm learns a model from experience, and uses it to provide more modelled transitions for a value function, in addition to the real transitions. Such methods can sometimes be extended to the use of non-parametric models, such as when the transitions are simply stored and "replayed" to the learning algorithm. Model-based methods can be more computationally intensive than model-free approaches, and their utility can be limited by the extent to which the Markov decision process can be learnt. There are other ways to use models than updating a value function. For instance, in model predictive control the model is used to update the behavior directly. == Theory == Both the asymptotic and finite-sample behaviors of most algorithms are well understood. Algorithms with provably good online performance (addressing the exploration issue) are known. Efficient exploration of Markov decision processes is given in Burnetas and Katehakis (1997). Finite-time performance bounds have also appeared for many algorithms, but these bounds are expected to be rather loose and thus more work is needed to better understand the relative advantages and limitations. For incremental algorithms, asymptotic convergence issues have been settled. Temporal-difference-based algorithms converge under a wider set of conditions than was previously possible (for example, when used with arbitrary, smooth function approximation).
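Several of the ingredients described above (ε-greedy exploration, temporal-difference updates, and tabular action-value estimates) come together in the classic Q-learning algorithm. The sketch below runs it on an invented four-state chain, so the environment and the hyperparameters are illustrative assumptions, not a standard benchmark:

```python
import random

random.seed(0)

# Invented deterministic chain: states 0..3, actions 0 (left) and 1 (right);
# reaching state 3 yields reward 1 and ends the episode.
def step(s, a):
    s2 = min(s + 1, 3) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

alpha, gamma, eps = 0.5, 0.9, 0.1  # step size, discount rate, exploration rate
Q = {(s, a): 0.0 for s in range(4) for a in (0, 1)}

def greedy(s):
    # ties between actions are broken uniformly at random
    return max((0, 1), key=lambda a: (Q[(s, a)], random.random()))

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: explore with probability eps, otherwise exploit
        a = random.choice((0, 1)) if random.random() < eps else greedy(s)
        s2, r, done = step(s, a)
        # temporal-difference update toward the bootstrapped target
        target = r if done else r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

print([greedy(s) for s in range(3)])  # learned greedy actions in states 0..2
```

After training, the greedy policy moves right in every nonterminal state, and the learned values reflect the discounting: each extra step away from the goal multiplies the optimal action-value by γ.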
== Research == Research topics include:
actor-critic architecture
actor-critic-scenery architecture
adaptive methods that work with fewer (or no) parameters under a large number of conditions
bug detection in software projects
continuous learning
combinations with logic-based frameworks
exploration in large Markov decision processes
entity-based reinforcement learning
human feedback
interaction between implicit and explicit learning in skill acquisition
intrinsic motivation, which differentiates information-seeking, curiosity-type behaviours from task-dependent, goal-directed behaviours
large-scale empirical evaluations
large (or continuous) action spaces
modular and hierarchical reinforcement learning
multiagent/distributed reinforcement learning, a topic of growing interest with expanding applications
occupant-centric control
optimization of computing resources
partial information (e.g., using predictive state representation)
reward functions based on maximising novel information
sample-based planning (e.g., based on Monte Carlo tree search)
securities trading
transfer learning
TD learning as a model of dopamine-based learning in the brain, in which dopaminergic projections from the substantia nigra to the basal ganglia serve as the prediction error
value-function and policy search methods
== Comparison of key algorithms == The following table lists the key algorithms for learning a policy, depending on several criteria: The algorithm can be on-policy (it performs policy updates using trajectories sampled via the current policy) or off-policy. The action space may be discrete (e.g. the action space could be "going up", "going left", "going right", "going down", "stay") or continuous (e.g. moving the arm with a given angle). The state space may be discrete (e.g. the agent could be in a cell in a grid) or continuous (e.g. the agent could be located at a given position in the plane).
=== Associative reinforcement learning === Associative reinforcement learning tasks combine facets of stochastic learning automata tasks and supervised learning pattern classification tasks. In associative reinforcement learning tasks, the learning system interacts in a closed loop with its environment. === Deep reinforcement learning === This approach extends reinforcement learning by using a deep neural network, without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning, or end-to-end reinforcement learning. === Adversarial deep reinforcement learning === Adversarial deep reinforcement learning is an active area of research in reinforcement learning focusing on vulnerabilities of learned policies. In this research area, some studies initially showed that reinforcement learning policies are susceptible to imperceptible adversarial manipulations. While some methods have been proposed to overcome these susceptibilities, the most recent studies have shown that these proposed solutions are far from providing an accurate representation of the current vulnerabilities of deep reinforcement learning policies. === Fuzzy reinforcement learning === By introducing fuzzy inference in reinforcement learning, approximating the state-action value function with fuzzy rules in continuous space becomes possible. The IF–THEN form of fuzzy rules makes this approach suitable for expressing the results in a form close to natural language. Extending FRL with fuzzy rule interpolation allows the use of reduced-size sparse fuzzy rule-bases to emphasize cardinal rules (the most important state-action values). === Inverse reinforcement learning === In inverse reinforcement learning (IRL), no reward function is given. Instead, the reward function is inferred from an observed behavior of an expert. The idea is to mimic the observed behavior, which is often optimal or close to optimal.
One popular IRL paradigm is named maximum entropy inverse reinforcement learning (MaxEnt IRL). MaxEnt IRL estimates the parameters of a linear model of the reward function by maximizing the entropy of the probability distribution of observed trajectories subject to constraints related to matching expected feature counts. Recently it has been shown that MaxEnt IRL is a particular case of a more general framework named random utility inverse reinforcement learning (RU-IRL). RU-IRL is based on random utility theory and Markov decision processes. While prior IRL approaches assume that the apparent random behavior of an observed agent is due to it following a random policy, RU-IRL assumes that the observed agent follows a deterministic policy but randomness in observed behavior is due to the fact that an observer only has partial access to the features the observed agent uses in decision making. The utility function is modeled as a random variable to account for the ignorance of the observer regarding the features the observed agent actually considers in its utility function. === Multi-objective reinforcement learning === Multi-objective reinforcement learning (MORL) is a form of reinforcement learning concerned with conflicting alternatives. It is distinct from multi-objective optimization in that it is concerned with agents acting in environments. === Safe reinforcement learning === Safe reinforcement learning (SRL) can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes. An alternative approach is risk-averse reinforcement learning, where instead of the expected return, a risk-measure of the return is optimized, such as the conditional value at risk (CVaR). In addition to mitigating risk, the CVaR objective increases robustness to model uncertainties. 
However, CVaR optimization in risk-averse RL requires special care to prevent gradient bias and blindness to success. === Self-reinforcement learning === Self-reinforcement learning (or self-learning) is a learning paradigm which does not use the concept of immediate reward R a ( s , s ′ ) {\displaystyle R_{a}(s,s')} after transition from s {\displaystyle s} to s ′ {\displaystyle s'} with action a {\displaystyle a} . It does not use external reinforcement; it uses only the agent's internal self-reinforcement. The internal self-reinforcement is provided by a mechanism of feelings and emotions. In the learning process, emotions are backpropagated by a mechanism of secondary reinforcement. The learning equation does not include the immediate reward; it includes only the state evaluation. The self-reinforcement algorithm updates a memory matrix W = | | w ( a , s ) | | {\displaystyle W=||w(a,s)||} such that each iteration executes the following machine learning routine: In situation s {\displaystyle s} perform action a {\displaystyle a} . Receive a consequence situation s ′ {\displaystyle s'} . Compute the state evaluation v ( s ′ ) {\displaystyle v(s')} of how good it is to be in the consequence situation s ′ {\displaystyle s'} . Update the crossbar memory w ′ ( a , s ) = w ( a , s ) + v ( s ′ ) {\displaystyle w'(a,s)=w(a,s)+v(s')} . Initial conditions of the memory are received as input from the genetic environment. It is a system with only one input (situation), and only one output (action, or behavior). Self-reinforcement (self-learning) was introduced in 1982 along with a neural network capable of self-reinforcement learning, named Crossbar Adaptive Array (CAA). The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence states. The system is driven by the interaction between cognition and emotion.
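The crossbar update routine above can be sketched in code. This is a minimal illustration, not the original CAA implementation; the toy states, actions, transition function and emotion values v(s') are all assumptions:

```python
states = ["s0", "s1", "s2"]
actions = ["a0", "a1"]

# Crossbar memory W = ||w(a, s)||; initial conditions would come from the
# "genetic environment" (here simply zeros for illustration).
W = {(a, s): 0.0 for a in actions for s in states}

# Assumed state evaluation v(s'): an innate "emotion" about each situation.
v = {"s0": 0.0, "s1": 1.0, "s2": -1.0}

def transition(s, a):
    """Assumed deterministic toy environment dynamics."""
    return "s1" if a == "a0" else "s2"

def step(s):
    # In situation s perform the action with the highest crossbar value.
    a = max(actions, key=lambda act: W[(act, s)])
    s_next = transition(s, a)   # receive consequence situation s'
    W[(a, s)] += v[s_next]      # w'(a,s) = w(a,s) + v(s')
    return s_next

s = "s0"
for _ in range(5):
    s = step(s)
```

Note there is no external reward term in the update: only the internal evaluation v(s') of the consequence situation is added to the memory.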
=== Reinforcement Learning in Natural Language Processing === In recent years, reinforcement learning has become a significant concept in Natural Language Processing (NLP), where tasks are often sequential decision-making rather than static classification. Reinforcement learning is a framework in which an agent takes actions in an environment to maximize the accumulation of rewards. This framework is well suited to many NLP tasks, including dialogue generation, text summarization, and machine translation, where the quality of the output depends on optimizing long-term or human-centered goals rather than predicting a single correct label. Early applications of RL in NLP emerged in dialogue systems, where conversation was modeled as a series of actions optimized for fluency and coherence. These early attempts, including policy gradient and sequence-level training techniques, laid a foundation for the broader application of reinforcement learning to other areas of NLP. A major breakthrough happened with the introduction of Reinforcement Learning from Human Feedback (RLHF), a method in which human feedback is used to train a reward model that guides the RL agent. Unlike traditional rule-based or supervised systems, RLHF allows models to align their behavior with human judgments on complex and subjective tasks. This technique was initially used in the development of InstructGPT, a language model trained to follow human instructions, and later in ChatGPT, which incorporates RLHF for improving output responses and ensuring safety. More recently, researchers have explored the use of offline RL in NLP to improve dialogue systems without the need for live human interaction. These methods optimize for user engagement, coherence, and diversity based on past conversation logs and pre-trained reward models. == Statistical comparison of reinforcement learning algorithms == Efficient comparison of RL algorithms is essential for research, deployment and monitoring of RL systems.
To compare different algorithms on a given environment, an agent can be trained for each algorithm. Since performance is sensitive to implementation details, all algorithms should be implemented as closely as possible to each other. After the training is finished, the agents can be run on a sample of test episodes, and their scores (returns) can be compared. Since episodes are typically assumed to be i.i.d., standard statistical tools can be used for hypothesis testing, such as the t-test and the permutation test. This requires accumulating all the rewards within an episode into a single number, the episodic return. However, this causes a loss of information, as different time-steps are averaged together, possibly with different levels of noise. Whenever the noise level varies across the episode, the statistical power can be improved significantly by weighting the rewards according to their estimated noise. == Challenges and Limitations == Despite significant advancements, reinforcement learning (RL) continues to face several challenges and limitations that hinder its widespread application in real-world scenarios. === Sample Inefficiency === RL algorithms often require a large number of interactions with the environment to learn effective policies, leading to high computational costs and time-intensive training. For instance, OpenAI's Dota-playing bot utilized thousands of years of simulated gameplay to achieve human-level performance. Techniques like experience replay and curriculum learning have been proposed to mitigate sample inefficiency, but these techniques add more complexity and are not always sufficient for real-world applications. === Stability and Convergence Issues === Training RL models, particularly deep neural network-based models, can be unstable and prone to divergence. A small change in the policy or environment can lead to extreme fluctuations in performance, making it difficult to achieve consistent results.
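The episodic-return comparison described under statistical comparison above can be sketched as follows. The two samples of returns are illustrative, and Welch's t-statistic is computed by hand rather than taken from a statistics library:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples of episodic returns."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)

# Illustrative episodic returns collected from test episodes of two agents
# trained with different algorithms on the same environment.
returns_algo1 = [12.0, 14.0, 11.0, 13.0, 15.0]
returns_algo2 = [9.0, 10.0, 8.0, 11.0, 10.0]

t = welch_t(returns_algo1, returns_algo2)
```

A large |t| suggests the difference in mean return is unlikely to be noise; in practice one would also compute a p-value (or use a permutation test, which makes fewer distributional assumptions).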
This instability is exacerbated in continuous or high-dimensional action spaces, where the learning step becomes more complex and less predictable. === Generalization and Transferability === RL agents trained in specific environments often struggle to generalize their learned policies to new, unseen scenarios. This is a major setback preventing the application of RL to dynamic real-world environments where adaptability is crucial. The challenge is to develop algorithms that can transfer knowledge across tasks and environments without extensive retraining. === Bias and Reward Function Issues === Designing appropriate reward functions is critical in RL because poorly designed reward functions can lead to unintended behaviors. In addition, RL systems trained on biased data may perpetuate existing biases and lead to discriminatory or unfair outcomes. Both of these issues require careful consideration of reward structures and data sources to ensure fairness and desired behaviors. == See also == == References == == Further reading == Annaswamy, Anuradha M. (3 May 2023). "Adaptive Control and Intersections with Reinforcement Learning". Annual Review of Control, Robotics, and Autonomous Systems. 6 (1): 65–93. doi:10.1146/annurev-control-062922-090153. ISSN 2573-5144. S2CID 255702873. Auer, Peter; Jaksch, Thomas; Ortner, Ronald (2010). "Near-optimal regret bounds for reinforcement learning". Journal of Machine Learning Research. 11: 1563–1600. Bertsekas, Dimitri P. (2023) [2019]. Reinforcement Learning and Optimal Control (1st ed.). Athena Scientific. ISBN 978-1-886-52939-7. Busoniu, Lucian; Babuska, Robert; De Schutter, Bart; Ernst, Damien (2010). Reinforcement Learning and Dynamic Programming using Function Approximators. Taylor & Francis CRC Press. ISBN 978-1-4398-2108-4. François-Lavet, Vincent; Henderson, Peter; Islam, Riashat; Bellemare, Marc G.; Pineau, Joelle (2018). "An Introduction to Deep Reinforcement Learning".
Foundations and Trends in Machine Learning. 11 (3–4): 219–354. arXiv:1811.12560. Bibcode:2018arXiv181112560F. doi:10.1561/2200000071. S2CID 54434537. Li, Shengbo Eben (2023). Reinforcement Learning for Sequential Decision and Optimal Control (1st ed.). Springer Verlag, Singapore. doi:10.1007/978-981-19-7784-8. ISBN 978-9-811-97783-1. Powell, Warren (2011). Approximate dynamic programming: solving the curses of dimensionality. Wiley-Interscience. Archived from the original on 2016-07-31. Retrieved 2010-09-08. Sutton, Richard S. (1988). "Learning to predict by the method of temporal differences". Machine Learning. 3: 9–44. doi:10.1007/BF00115009. Sutton, Richard S.; Barto, Andrew G. (2018) [1998]. Reinforcement Learning: An Introduction (2nd ed.). MIT Press. ISBN 978-0-262-03924-6. Szita, Istvan; Szepesvari, Csaba (2010). "Model-based Reinforcement Learning with Nearly Tight Exploration Complexity Bounds" (PDF). ICML 2010. Omnipress. pp. 1031–1038. Archived from the original (PDF) on 2010-07-14. == External links == Dissecting Reinforcement Learning: a series of blog posts on reinforcement learning with Python code. A (Long) Peek into Reinforcement Learning.
Wikipedia/Reinforcement_Learning
Metalearning is a neuroscientific term proposed by Kenji Doya, as a theory for how neurotransmitters facilitate distributed learning mechanisms in the Basal Ganglia. The theory primarily involves the role of neurotransmitters in dynamically adjusting the way computational learning algorithms interact to produce the kinds of robust learning behaviour currently unique to biological life forms. 'Metalearning' has previously been applied to the fields of Social Psychology and Computer Science but in this context exists as an entirely new concept. The theory of Metalearning builds off earlier work by Doya into the learning algorithms of Supervised learning, Reinforcement learning and Unsupervised learning in the Cerebellum, Basal Ganglia and Cerebral Cortex respectively. The theory emerged from efforts to unify the dynamic selection process for these three learning algorithms to a regulatory mechanism reducible to individual neurotransmitters. == Roles of Neuromodulators == === Dopamine === Dopamine is proposed to act as a "global learning" signal, critical to prediction of rewards and action reinforcement. In this way, dopamine is involved in a learning algorithm in which Actor, Environment and Critic are bound in a dynamic interplay that ultimately seeks to maximise the sum of future rewards by producing an optimal action selection policy. In this context, Critic and Actor are characterised as independent network edges that also form a single Complex Agent. This Agent collectively influences the information state of the Environment, which is fed back to the Agent for future computations. Through a separate pathway, Environment is also fed back to Critic in the form of the reward gained through the given action, meaning an equilibrium can be reached between the predicted reward of given policy for a given state, and the evolving prospect of future rewards. 
=== Serotonin === Serotonin is proposed to control the balance between short- and long-term reward prediction, essentially by variably "discounting" expected future reward sums that may require too much expenditure to achieve. In this way, serotonin may facilitate the expectation of reward at a quasi-emotional level, and thus either encourage or discourage persistence in reward-seeking behaviour depending on the demand of the task and the duration of persistence required. As global reward prediction would theoretically result from serotonin-modulated computations reaching a steady state with the computations similarly modulated by dopamine, high serotonergic signalling may override the computations of dopamine and produce a divergent paradigm of reward not mathematically viable through the dopamine-modulated computations alone. === Norepinephrine === Norepinephrine is proposed to facilitate "wide exploration" by stochastic action selection. The choice between focusing on known, effective strategies or selecting new, experimental ones is known in probability theory as the exploration-exploitation problem. An interplay between situational urgency and the effectiveness of known strategies thus influences the dilemma between reliable selection for the largest predicted reward and exploratory selection outside known parameters. Since neuronal firing cascades (such as those required to perfectly swing a golf club) are by definition unstable and prone to variation, norepinephrine thus selects for the most reliable known execution pattern at higher levels, and allows for more random and unreliable selection at low levels, with the purpose of potentially discovering more efficient strategies in the process. === Acetylcholine === Acetylcholine is proposed to facilitate the balance between memory storage and memory renewal, finding an optimal balance between stability and effectiveness of learning algorithms for the specific environmental task.
Acetylcholine thus modulates plasticity in the Hippocampus, Cerebral Cortex and Striatum to facilitate ideal learning conditions in the brain. High levels of Acetylcholine would thus allow for very rapid learning and remodelling of synaptic connections, with the consequence that existing learning may become undone. Likewise, learning of states that takes place over an extended temporal resolution may be overridden before it reaches a functional level, and thus learning may occur too quickly to be performed efficiently. At lower levels of Acetylcholine, plastic changes are proposed to occur much more slowly, potentially being protective against unhelpful learning conditions or allowing for information changes to embody a much broader temporal resolution. == Metalearning == Central to the idea of Metalearning is that global learning can be modelled as a function of efficient selection of these four neuromodulators. While no mechanistic model is put forward for where Metalearning ultimately exists in the hierarchy of agency, the model has thus far demonstrated the dynamics necessary to infer the existence of such an agent in biological learning as a whole. While computational models and information systems are still far away from approaching the complexity of human learning, Metalearning provides a promising path forwards for the future evolution of such systems as they increasingly approach the complexity of the biological world. == Potential Applications == The investigation of Metalearning as a neuroscientific concept has potential benefits to both the understanding and treatment of Psychiatric Disease, as well as bridging the gaps between Neural Networks, Computer Science and Machine Learning. == References == == External links == Neural Computation Unit at the Okinawa Institute of Science and Technology Neural Computation Project at the ATR Brain Information Communication Research Laboratory Group
Wikipedia/Metalearning_(neuroscience)
Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalise to unseen data, and thus perform tasks without explicit instructions. Within machine learning, advances in the subdiscipline of deep learning have allowed neural networks, a class of statistical algorithms, to surpass many previous machine learning approaches in performance. ML finds application in many fields, including natural language processing, computer vision, speech recognition, email filtering, agriculture, and medicine. The application of ML to business problems is known as predictive analytics. Statistics and mathematical optimisation (mathematical programming) methods comprise the foundations of machine learning. Data mining is a related field of study, focusing on exploratory data analysis (EDA) via unsupervised learning. From a theoretical viewpoint, probably approximately correct learning provides a framework for describing machine learning. == History == The term machine learning was coined in 1959 by Arthur Samuel, an IBM employee and pioneer in the field of computer gaming and artificial intelligence. The synonym self-teaching computers was also used in this time period. Although the earliest machine learning model was introduced in the 1950s when Arthur Samuel invented a program that calculated the winning chance in checkers for each side, the history of machine learning is rooted in decades of human desire and effort to study human cognitive processes. In 1949, Canadian psychologist Donald Hebb published the book The Organization of Behavior, in which he introduced a theoretical neural structure formed by certain interactions among nerve cells. Hebb's model of neurons interacting with one another set a groundwork for how AIs and machine learning algorithms work under nodes, or artificial neurons used by computers to communicate data.
Other researchers who have studied human cognitive systems contributed to the modern machine learning technologies as well, including logician Walter Pitts and Warren McCulloch, who proposed the early mathematical models of neural networks to come up with algorithms that mirror human thought processes. By the early 1960s, an experimental "learning machine" with punched tape memory, called Cybertron, had been developed by Raytheon Company to analyse sonar signals, electrocardiograms, and speech patterns using rudimentary reinforcement learning. It was repetitively "trained" by a human operator/teacher to recognise patterns and equipped with a "goof" button to cause it to reevaluate incorrect decisions. A representative book on research into machine learning during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification. Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973. In 1981 a report was given on using teaching strategies so that an artificial neural network learns to recognise 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal. Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?". Modern-day machine learning has two objectives. 
One is to classify data based on models which have been developed; the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. A machine learning algorithm for stock trading may inform the trader of future potential predictions. == Relationships to other fields == === Artificial intelligence === As a scientific endeavour, machine learning grew out of the quest for artificial intelligence (AI). In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostly perceptrons and other models that were later found to be reinventions of the generalised linear models of statistics. Probabilistic reasoning was also employed, especially in automated medical diagnosis.: 488  However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation.: 488  By 1980, expert systems had come to dominate AI, and statistics was out of favour. Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming(ILP), but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval.: 708–710, 755  Neural networks research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines including John Hopfield, David Rumelhart, and Geoffrey Hinton. 
Their main success came in the mid-1980s with the reinvention of backpropagation.: 25  Machine learning (ML), reorganised and recognised as its own field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics, fuzzy logic, and probability theory. === Data compression === === Data mining === Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data. Machine learning also has intimate ties to optimisation: Many learning problems are formulated as minimisation of some loss function on a training set of examples. 
Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the preassigned labels of a set of examples). === Generalization === Characterizing the generalisation of various learning algorithms is an active topic of current research, especially for deep learning algorithms. === Statistics === Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalisable predictive patterns. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder to call the overall field. Conventional statistical analyses require the a priori selection of a model most suitable for the study data set. In addition, only significant or theoretically relevant variables based on previous experience are included for analysis. In contrast, machine learning is not built on a pre-structured model; rather, the data shape the model by detecting underlying patterns. The more variables (input) used to train the model, the more accurate the ultimate model will be. Leo Breiman distinguished two statistical modelling paradigms: data model and algorithmic model, wherein "algorithmic model" means more or less the machine learning algorithms like Random Forest. Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning. 
=== Statistical physics === Analytical and computational techniques derived from deep-rooted physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyse the weight space of deep neural networks. Statistical physics is thus finding applications in the area of medical diagnostics. == Theory == A core objective of a learner is to generalise from its experience. Generalisation in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases. The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory via the probably approximately correct learning model. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalisation error. For the best performance in the context of generalisation, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has under fitted the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalisation will be poorer. In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. 
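The underfitting/overfitting trade-off described above can be made concrete with a toy regression task. The data below are assumed for illustration; three hypotheses of increasing complexity are compared by training and test error:

```python
# Toy regression data: targets near y = x, with assumed label noise.
train = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.2), (3.0, 2.8)]
test = [(0.5, 0.5), (1.5, 1.5), (2.5, 2.5)]

def mse(predict, data):
    """Mean squared error of a hypothesis on a data set."""
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

# Under-complex hypothesis: always predict the training mean (underfits).
mean_y = sum(y for _, y in train) / len(train)
def underfit(x):
    return mean_y

# Matched-complexity hypothesis: an ordinary least-squares line.
xb = sum(x for x, _ in train) / len(train)
slope = (sum((x - xb) * (y - mean_y) for x, y in train)
         / sum((x - xb) ** 2 for x, _ in train))
def linear(x):
    return mean_y + slope * (x - xb)

# Over-complex hypothesis: memorise the training set (1-nearest neighbour).
# It achieves zero training error but generalises worse than the linear fit.
def overfit(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]
```

On this data the training error strictly decreases with model complexity, while the test error is lowest for the hypothesis whose complexity matches the underlying function, which is the point of the bias-variance discussion above.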
In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results: Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time. == Approaches == Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system: Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs. Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning). Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback that's analogous to rewards, which it tries to maximise. Although each algorithm has advantages and limitations, no single algorithm works for all problems. === Supervised learning === Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs. The data, known as training data, consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by an array or vector, sometimes called a feature vector, and the training data is represented by a matrix. 
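As a minimal sketch of the supervised setting just described, the following fits a one-parameter model to feature/output pairs by iterative optimisation of a squared-error objective; the data and learning rate are illustrative assumptions:

```python
# Each training example pairs a feature (here 1-D) with a desired output
# (the supervisory signal). The true relationship is y = 2x.
training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # single model parameter to learn
lr = 0.05  # learning rate (an assumed hyperparameter)

# Gradient descent on the mean squared error of the model y_hat = w * x.
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in training_data) / len(training_data)
    w -= lr * grad

# The learned function can now predict outputs for inputs that were
# not part of the training data.
prediction = w * 4.0
```

After training, w is close to 2, so the model correctly determines the output for the unseen input 4.0, which is the generalisation property an optimal function should have.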
Through iterative optimisation of an objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs. An optimal function allows the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task. Types of supervised-learning algorithms include active learning, classification and regression. Classification algorithms are used when the outputs are restricted to a limited set of values, while regression algorithms are used when the outputs can take any numerical value within a range. For example, in a classification algorithm that filters emails, the input is an incoming email, and the output is the folder in which to file the email. In contrast, regression is used for tasks such as predicting a person's height based on factors like age and genetics or forecasting future temperatures based on historical data. Similarity learning is an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification. === Unsupervised learning === Unsupervised learning algorithms find structures in data that has not been labelled, classified or categorised. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. Central applications of unsupervised machine learning include clustering, dimensionality reduction, and density estimation. 
Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters. Other methods are based on estimated density and graph connectivity. A special type of unsupervised learning, called self-supervised learning, involves training a model by generating the supervisory signal from the data itself. === Semi-supervised learning === Semi-supervised learning falls between unsupervised learning (without any labelled training data) and supervised learning (with completely labelled training data). Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabelled data, when used in conjunction with a small amount of labelled data, can produce a considerable improvement in learning accuracy. In weakly supervised learning, the training labels are noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets. === Reinforcement learning === Reinforcement learning is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximise some notion of cumulative reward. Due to its generality, the field is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimisation, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In reinforcement learning, the environment is typically represented as a Markov decision process (MDP).
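The cluster-analysis assignment described at the start of this section can be sketched with a minimal k-means implementation; the one-dimensional data points and the choice of k = 2 are illustrative assumptions:

```python
def kmeans(points, k, iterations=10):
    """Assign 1-D observations to k clusters by alternating assignment and
    centroid update (a minimal k-means sketch with naive initialisation)."""
    centroids = points[:k]  # initialise centroids from the first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]   # two visually obvious groups
centroids, clusters = kmeans(points, k=2)
```

The similarity metric here is simple absolute distance; other metrics and initialisation schemes give different clusterings, reflecting the different assumptions mentioned above.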
Many reinforcement learning algorithms use dynamic programming techniques. Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP and are used when exact models are infeasible. Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent. === Dimensionality reduction === Dimensionality reduction is a process of reducing the number of random variables under consideration by obtaining a set of principal variables. In other words, it is a process of reducing the dimension of the feature set, also called the "number of features". Most dimensionality reduction techniques can be considered either feature elimination or feature extraction. One of the popular methods of dimensionality reduction is principal component analysis (PCA). PCA involves changing higher-dimensional data (e.g., 3D) to a smaller space (e.g., 2D). The manifold hypothesis proposes that high-dimensional data sets lie along low-dimensional manifolds, and many dimensionality reduction techniques make this assumption, leading to the area of manifold learning and manifold regularisation. === Other types === Other approaches have been developed which do not fit neatly into this three-fold categorisation, and sometimes more than one is used by the same machine learning system; examples include topic modelling and meta-learning. ==== Self-learning ==== Self-learning, as a machine learning paradigm, was introduced in 1982 along with a neural network capable of self-learning, named crossbar adaptive array (CAA). It gives a solution to the problem of learning without any external reward, by introducing emotion as an internal reward. Emotion is used as state evaluation of a self-learning agent. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence situations. The system is driven by the interaction between cognition and emotion.
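The PCA reduction from a higher-dimensional space (e.g., 3D) to a smaller one (e.g., 2D) can be sketched directly from the definition, as an eigen-decomposition of the sample covariance matrix; the synthetic data below is an illustrative assumption:

```python
import numpy as np

# A minimal PCA sketch: reduce 3-D points to 2-D by projecting onto
# the two directions of highest variance (synthetic, illustrative data).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:, 2] = 0.01 * X[:, 2]          # third dimension carries little variance

Xc = X - X.mean(axis=0)           # centre the data
cov = Xc.T @ Xc / (len(Xc) - 1)   # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
components = eigvecs[:, ::-1][:, :2]     # top-2 principal directions
X2d = Xc @ components                    # 100 x 2 reduced representation
```

Because the third coordinate was nearly constant, almost all of the variance survives the projection into two dimensions, which is the point of the technique.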
The self-learning algorithm updates a memory matrix W = ||w(a,s)|| such that in each iteration it executes the following machine learning routine:

1. in situation s perform action a;
2. receive a consequence situation s';
3. compute the emotion of being in the consequence situation, v(s');
4. update the crossbar memory: w'(a,s) = w(a,s) + v(s').

It is a system with only one input, situation s, and only one output, action (or behaviour) a. There is neither a separate reinforcement input nor an advice input from the environment. The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioural environment where it behaves, and the other is the genetic environment, wherefrom it initially and only once receives initial emotions about situations to be encountered in the behavioural environment. After receiving the genome (species) vector from the genetic environment, the CAA learns a goal-seeking behaviour in an environment that contains both desirable and undesirable situations. ==== Feature learning ==== Several learning algorithms aim at discovering better representations of the inputs provided during training. Classic examples include principal component analysis and cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manual feature engineering, and allows a machine to both learn the features and use them to perform a specific task. Feature learning can be either supervised or unsupervised.
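The crossbar update routine described above can be sketched in a few lines of Python. The environment transition, the genome vector of emotions, and the greedy action rule below are all illustrative assumptions, not the original CAA specification:

```python
# A sketch of a CAA-style crossbar update: w'(a,s) = w(a,s) + v(s').
# The transition function and emotion values are invented for illustration.

n_actions, n_situations = 2, 3
W = [[0.0] * n_situations for _ in range(n_actions)]  # crossbar memory w(a,s)

# "Genome" vector: innate emotion v(s) for each situation
v = [0.0, -1.0, +1.0]            # situation 2 is desirable, 1 undesirable

def transition(s, a):            # behavioural environment (assumed dynamics)
    return (s + a + 1) % n_situations

s = 0
for _ in range(20):
    # choose the action whose crossbar value in situation s is highest
    a = max(range(n_actions), key=lambda act: W[act][s])
    s_next = transition(s, a)    # receive consequence situation s'
    W[a][s] += v[s_next]         # update crossbar memory with emotion v(s')
    s = s_next
# in situation 0 the agent learns to prefer the action leading to
# the desirable situation: W[1][0] grows while W[0][0] stays negative
```

Even in this toy version, the only feedback is the internal emotion of the consequence situation, matching the "no separate reinforcement input" property noted above.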
In supervised feature learning, features are learned using labelled input data. Examples include artificial neural networks, multilayer perceptrons, and supervised dictionary learning. In unsupervised feature learning, features are learned with unlabelled input data. Examples include dictionary learning, independent component analysis, autoencoders, matrix factorisation and various forms of clustering. Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse, meaning that the mathematical model has many zeros. Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into higher-dimensional vectors. Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data. Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensory data has not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms. ==== Sparse dictionary learning ==== Sparse dictionary learning is a feature learning method where a training example is represented as a linear combination of basis functions, and the representation is assumed to be sparse. The method is strongly NP-hard and difficult to solve approximately.
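The sparse-representation idea, expressing an example as a combination of only a few dictionary atoms, can be illustrated with greedy matching pursuit. This is a stand-in for the sparse-coding step of methods like k-SVD, not the full dictionary-learning algorithm; the dictionary below is an illustrative assumption:

```python
import numpy as np

# A sketch of sparse coding against a fixed dictionary, using greedy
# matching pursuit. Atoms (columns of D) are assumed unit-norm.

def matching_pursuit(D, x, n_atoms=2):
    """Approximate x as a sparse combination of columns of dictionary D."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        scores = D.T @ residual           # correlation with each atom
        k = np.argmax(np.abs(scores))     # best-matching atom
        coeffs[k] += scores[k]
        residual -= scores[k] * D[:, k]   # remove that atom's contribution
    return coeffs

# Dictionary of 4 unit-norm atoms in R^3 (illustrative values)
D = np.array([[1.0, 0.0, 0.0, 0.6],
              [0.0, 1.0, 0.0, 0.8],
              [0.0, 0.0, 1.0, 0.0]])
x = np.array([2.0, 0.0, 3.0])
c = matching_pursuit(D, x)
# x is reconstructed exactly from just two atoms: D @ c == x
```

The coefficient vector has zeros almost everywhere, which is precisely the sparsity constraint described above.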
A popular heuristic method for sparse dictionary learning is the k-SVD algorithm. Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen example belongs. Given a dictionary already built for each class, a new example is associated with the class whose dictionary best sparsely represents it. Sparse dictionary learning has also been applied in image de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot. ==== Anomaly detection ==== In data mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data. Typically, the anomalous items represent an issue such as bank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to as outliers, novelties, noise, deviations and exceptions. In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts in activity. This pattern does not adhere to the common statistical definition of an outlier as a rare object. Many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns. Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection techniques detect anomalies in an unlabelled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit the least to the remainder of the data set.
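A minimal unsupervised detector in the spirit just described flags the observations that fit the rest of the data least; the readings and the deviation threshold below are illustrative assumptions:

```python
# A minimal unsupervised anomaly-detection sketch: flag observations that
# deviate strongly from the bulk of the data (threshold chosen for illustration).

def detect_outliers(values, threshold=2.0):
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if abs(v - mean) > threshold * std]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 42.0]   # one anomalous spike
outliers = detect_outliers(readings)    # -> [42.0]
```

Note the weakness mentioned above: a whole burst of anomalous readings would drag the mean and standard deviation with it, masking the anomaly, which is why clustering-based approaches are often preferred in such settings.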
Supervised anomaly detection techniques require a data set that has been labelled as "normal" and "abnormal" and involves training a classifier (the key difference from many other statistical classification problems is the inherently unbalanced nature of outlier detection). Semi-supervised anomaly detection techniques construct a model representing normal behaviour from a given normal training data set and then test the likelihood of a test instance to be generated by the model. ==== Robot learning ==== Robot learning is inspired by a multitude of machine learning methods, starting from supervised learning, reinforcement learning, and finally meta-learning (e.g. MAML). ==== Association rules ==== Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilisation of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems. Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets. 
For example, the rule {onions, potatoes} ⇒ {burger} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements. In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions. Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically a genetic algorithm, with a learning component, performing either supervised learning, reinforcement learning, or unsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions. Inductive logic programming (ILP) is an approach to rule learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such as functional programs. Inductive logic programming is particularly useful in bioinformatics and natural language processing.
Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting. Shapiro built their first implementation (Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples. The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set. == Models == A machine learning model is a type of mathematical model that, once "trained" on a given dataset, can be used to make predictions or classifications on new data. During training, a learning algorithm iteratively adjusts the model's internal parameters to minimise errors in its predictions. By extension, the term "model" can refer to several levels of specificity, from a general class of models and their associated learning algorithms to a fully trained model with all its internal parameters tuned. Various types of models have been used and researched for machine learning systems; picking the best model for a task is called model selection. === Artificial neural networks === Artificial neural networks (ANNs), or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules. An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it.
In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition. === Decision trees === Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modelling approaches used in statistics, data mining, and machine learning. 
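The computation described above, each neuron emitting a non-linear function of the weighted sum of its inputs, signals flowing layer by layer, can be sketched in a few lines; the weights below are illustrative, not trained:

```python
import numpy as np

# A sketch of a feed-forward pass: each artificial neuron outputs a
# non-linear function of the weighted sum of its inputs.

def layer(x, W, b):
    return np.tanh(W @ x + b)      # non-linear activation of weighted sums

x = np.array([0.5, -1.0, 2.0])     # input layer (3 features)
W1 = np.array([[0.2, -0.4, 0.1],   # illustrative edge weights
               [0.7, 0.3, -0.5]])
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0, -1.0]])
b2 = np.array([0.0])

hidden = layer(x, W1, b1)          # hidden layer: 2 neurons
output = layer(hidden, W2, b2)     # output layer: 1 neuron
```

Training would adjust the entries of W1, b1, W2, b2 (the edge weights) to reduce prediction error, which is the "weight that adjusts as learning proceeds" in the text above.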
Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels, and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision-making. === Random forest regression === Random forest regression (RFR) falls under the umbrella of decision tree-based models. RFR is an ensemble learning method that builds multiple decision trees and averages their predictions to improve accuracy and avoid overfitting. To build the trees, RFR uses bootstrapped sampling: each decision tree is trained on a random sample, drawn with replacement, from the training set. This randomness reduces the bias of the model's predictions and improves accuracy. Because the decision trees are generated independently, RFR can handle single-output as well as multi-output regression tasks, which makes it applicable to a wide variety of problems. === Support-vector machines === Support-vector machines (SVMs), also known as support-vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other. An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting.
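The bootstrap-and-average idea behind random forest regression can be sketched with the simplest possible trees, depth-1 "stumps"; the data, the midpoint splitting rule, and the ensemble size are illustrative assumptions, not a real random-forest implementation:

```python
import random

# Bootstrap many one-split "stump" regressors and average their predictions,
# a toy version of the random-forest recipe, on illustrative 1-D data.

def fit_stump(data):
    """A depth-1 regression tree: split at the mean x, predict side means."""
    xs = [x for x, _ in data]
    split = sum(xs) / len(xs)
    left = [y for x, y in data if x <= split] or [0.0]
    right = [y for x, y in data if x > split] or [0.0]
    return split, sum(left) / len(left), sum(right) / len(right)

def forest_predict(stumps, x):
    """Average the predictions of all trees (the ensemble step)."""
    preds = [(lm if x <= s else rm) for s, lm, rm in stumps]
    return sum(preds) / len(preds)

random.seed(0)
data = [(x / 10, (x / 10) ** 2) for x in range(11)]    # y = x^2 on [0, 1]
stumps = [fit_stump(random.choices(data, k=len(data)))  # bootstrapped samples
          for _ in range(50)]
estimate = forest_predict(stumps, 0.9)
```

Each stump sees a different bootstrap sample, so their errors partly cancel when averaged; this variance reduction is what lets forests of weak trees resist overfitting.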
In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces. === Regression analysis === Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularisation methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to higher-dimensional space. Multivariate linear regression extends the concept of linear regression to handle multiple dependent variables simultaneously. This approach estimates the relationships between a set of input variables and several output variables by fitting a multidimensional linear model. It is particularly useful in scenarios where outputs are interdependent or share underlying patterns, such as predicting multiple economic indicators or reconstructing images, which are inherently multi-dimensional. === Bayesian networks === A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. 
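Ordinary least squares, named above as the most common fitting criterion for linear regression, has a direct closed-form solution; the synthetic data below is an illustrative assumption:

```python
import numpy as np

# Linear regression by ordinary least squares: find the slope w and
# intercept b minimising squared error on synthetic, illustrative data.

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 2.0 + rng.normal(scale=0.1, size=50)   # noisy line

A = np.column_stack([x, np.ones_like(x)])            # design matrix [x, 1]
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
# w recovers the slope (about 3.0) and b the intercept (about 2.0)
```

Ridge regression, mentioned above, would add a penalty on the size of w to the same squared-error objective; multivariate regression replaces the single column y with a matrix of outputs.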
Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalisations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams. === Gaussian processes === A Gaussian process is a stochastic process in which every finite collection of the random variables in the process has a multivariate normal distribution, and it relies on a pre-defined covariance function, or kernel, that models how pairs of points relate to each other depending on their locations. Given a set of observed points, or input–output examples, the distribution of the (unobserved) output of a new point as a function of its input data can be directly computed by looking at the observed points and the covariances between those points and the new, unobserved point. Gaussian processes are popular surrogate models in Bayesian optimisation used for hyperparameter optimisation. === Genetic algorithms === A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms. === Belief functions === The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory, is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories.
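The Gaussian-process prediction just described, reading off the value at a new point from its covariances with the observed points, can be sketched with a radial basis function (RBF) kernel; the kernel choice, its length scale, and the sine-shaped data are illustrative assumptions:

```python
import numpy as np

# A sketch of Gaussian-process regression: the posterior mean at a new
# point is computed from the kernel covariances with observed points.

def rbf(a, b, length=1.0):
    """RBF covariance function between two sets of 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

X = np.array([0.0, 1.0, 2.0, 3.0])        # observed inputs
y = np.sin(X)                              # observed outputs
x_new = np.array([1.5])

K = rbf(X, X) + 1e-8 * np.eye(len(X))      # covariance among observations
k_star = rbf(x_new, X)                     # covariance with the new point
mean = k_star @ np.linalg.solve(K, y)      # posterior mean at x_new
# the estimate lands close to sin(1.5)
```

The same algebra also yields a posterior variance, which is what Bayesian optimisation exploits when using Gaussian processes as surrogate models.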
These theoretical frameworks can be thought of as a kind of learner and have some analogous properties of how evidence is combined (e.g., Dempster's rule of combination), just as a pmf-based Bayesian approach would combine probabilities. However, there are many caveats to these belief functions when compared to Bayesian approaches, particularly in how they incorporate ignorance and uncertainty quantification. Belief function approaches implemented within the machine learning domain typically leverage a fusion of various ensemble methods to better handle the learner's decision boundary, low samples, and ambiguous class issues that standard machine learning approaches tend to have difficulty resolving. However, the computational complexity of these algorithms is dependent on the number of propositions (classes), and can lead to a much higher computation time when compared to other machine learning approaches. === Rule-based models === Rule-based machine learning (RBML) is a branch of machine learning that automatically discovers and learns 'rules' from data. It provides interpretable models, making it useful for decision-making in fields like healthcare, fraud detection, and cybersecurity. Key RBML techniques include learning classifier systems, association rule learning, artificial immune systems, and other similar models. These methods extract patterns from data and evolve rules over time. === Training models === Typically, machine learning models require a high quantity of reliable data to perform accurate predictions. When training a machine learning model, machine learning engineers need to target and collect a large and representative sample of data. Data from the training set can be as varied as a corpus of text, a collection of images, sensor data, and data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model.
Trained models derived from biased or non-evaluated data can result in skewed or undesired predictions. Biased models may result in detrimental outcomes, thereby furthering the negative impacts on society or objectives. Algorithmic bias is a potential result of data not being fully prepared for training. Machine learning ethics is becoming a field of study and, notably, becoming integrated within machine learning engineering teams. ==== Federated learning ==== Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralises the training process, allowing users' privacy to be maintained by not requiring them to send their data to a centralised server. This also increases efficiency by distributing the training process across many devices. For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google. == Applications == There are many applications for machine learning. In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million. Shortly after the prize was awarded, Netflix realised that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly. In 2010, The Wall Street Journal wrote about the firm Rebellion Research and their use of machine learning to predict the financial crisis.
In 2012, co-founder of Sun Microsystems, Vinod Khosla, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software. In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognised influences among artists. In 2019, Springer Nature published the first research book created using machine learning. In 2020, machine learning technology was used to help make diagnoses and aid researchers in developing a cure for COVID-19. Machine learning was recently applied to predict the pro-environmental behaviour of travellers. Recently, machine learning technology was also applied to optimise a smartphone's performance and thermal behaviour based on the user's interaction with the phone. When applied correctly, machine learning algorithms (MLAs) can utilise a wide range of company characteristics to predict stock returns without overfitting. By employing effective feature engineering and combining forecasts, MLAs can generate results that far surpass those obtained from basic linear techniques like OLS. Recent advancements in machine learning have extended into the field of quantum chemistry, where novel algorithms now enable the prediction of solvent effects on chemical reactions, thereby offering new tools for chemists to tailor experimental conditions for optimal outcomes. Machine learning is becoming a useful tool to investigate and predict evacuation decision making in large-scale and small-scale disasters. Different solutions have been tested to predict if and when householders decide to evacuate during wildfires and hurricanes. Other applications have been focusing on pre-evacuation decisions in building fires.
Machine learning is also emerging as a promising tool in geotechnical engineering, where it is used to support tasks such as ground classification, hazard prediction, and site characterization. Recent research emphasizes a move toward data-centric methods in this field, where machine learning is not a replacement for engineering judgment, but a way to enhance it using site-specific data and patterns. == Limitations == Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results. Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems. The "black box" problem poses yet another significant challenge. Black box refers to a situation where the algorithm or the process of producing an output is entirely opaque, meaning that even the coders of the algorithm cannot audit the pattern that the machine extracted out of the data. The House of Lords Select Committee claimed that an "intelligence system" that could have a "substantial impact on an individual's life" would not be considered acceptable unless it provided "a full and satisfactory explanation for the decisions" it makes. In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision. Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested. Microsoft's Bing Chat chatbot has been reported to produce hostile and offensive responses against its users. Machine learning has been used as a strategy to update the evidence related to systematic reviews and to address the increased reviewer burden caused by the growth of biomedical literature.
While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the research findings themselves. === Explainability === Explainable AI (XAI), or Interpretable AI, or Explainable Machine Learning (XML), is artificial intelligence (AI) in which humans can understand the decisions or predictions made by the AI. It contrasts with the "black box" concept in machine learning where even its designers cannot explain why an AI arrived at a specific decision. By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively. XAI may be an implementation of the social right to explanation. === Overfitting === Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data but penalising the theory in accordance with how complex the theory is. === Other limitations and vulnerabilities === Learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers often do not primarily make judgements from the spatial relationship between components of the picture, and they learn relationships between pixels that humans are oblivious to, but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in "adversarial" images that the system misclassifies. Adversarial vulnerabilities can also result from nonlinear systems, or from non-pattern perturbations. For some systems, it is possible to change the output by only changing a single adversarially chosen pixel.
Machine learning models are often vulnerable to manipulation or evasion via adversarial machine learning. Researchers have demonstrated how backdoors can be placed undetectably into machine learning models that classify posts (e.g., into the categories "spam" and "not spam"); such models are often developed or trained by third parties. Those parties can change the classification of any input, including in cases for which a type of data/software transparency is provided, possibly including white-box access. == Model assessments == Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data into a training and a test set (conventionally a 2/3 training set and a 1/3 test set designation) and evaluates the performance of the trained model on the test set. In comparison, the K-fold cross-validation method randomly partitions the data into K subsets; K experiments are then performed, each using one subset for evaluation and the remaining K−1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy. In addition to overall accuracy, investigators frequently report sensitivity and specificity, meaning the true positive rate (TPR) and true negative rate (TNR) respectively. Similarly, investigators sometimes report the false positive rate (FPR) as well as the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The receiver operating characteristic (ROC) curve, along with the accompanying area under the ROC curve (AUC), offers additional tools for classification model assessment. A higher AUC is associated with a better-performing model. == Ethics == === Bias === Different machine learning approaches can suffer from different data biases.
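The holdout and K-fold procedures can be sketched with a trivial "majority class" classifier; the labelled data and the classifier are illustrative assumptions chosen only to make the two splitting schemes concrete:

```python
import random

# Holdout and K-fold validation of a trivial majority-class classifier
# on illustrative labelled data (70 "a" labels, 30 "b" labels).

def majority(train):
    """The "model": always predict the most frequent training label."""
    return max(set(train), key=train.count)

def accuracy(model_label, test):
    return sum(1 for y in test if y == model_label) / len(test)

random.seed(0)
data = ["a"] * 70 + ["b"] * 30
random.shuffle(data)

# Holdout: 2/3 of the data trains the model, 1/3 evaluates it
cut = 2 * len(data) // 3
holdout_acc = accuracy(majority(data[:cut]), data[cut:])

# K-fold: each of K subsets takes one turn as the evaluation set
K = 5
folds = [data[i::K] for i in range(K)]
kfold_acc = sum(
    accuracy(majority(sum(folds[:i] + folds[i + 1:], [])), folds[i])
    for i in range(K)
) / K
# averaged over folds, accuracy equals the majority-class share, 0.7
```

K-fold uses every observation for evaluation exactly once, which is why its estimate is typically less sensitive to a lucky or unlucky split than a single holdout.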
A machine learning system trained specifically on current customers may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on human-made data, machine learning is likely to pick up the constitutional and unconscious biases already present in society. Systems that are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitising cultural prejudices. For example, in 1988, the UK's Commission for Racial Equality found that St. George's Medical School had been using a computer program trained from data of previous admissions staff and that this program had denied nearly 60 candidates who were found to either be women or have non-European sounding names. Using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants. Another example includes predictive policing company Geolitica's predictive algorithm that resulted in "disproportionately high levels of over-policing in low-income and minority communities" after being trained with historical crime data. While responsible collection of data and documentation of algorithmic rules used by a system is considered a critical part of machine learning, some researchers blame lack of participation and representation of minority population in the field of AI for machine learning's vulnerability to biases. In fact, according to research carried out by the Computing Research Association (CRA) in 2021, "female faculty merely make up 16.1%" of all faculty members who focus on AI among several universities around the world. Furthermore, among the group of "new U.S. resident AI PhD graduates," 45% identified as white, 22.4% as Asian, 3.2% as Hispanic, and 2.4% as African American, which further demonstrates a lack of diversity in the field of AI. 
Language models learned from data have been shown to contain human-like biases. Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases. In 2016, Microsoft tested Tay, a chatbot that learned from Twitter, and it quickly picked up racist and sexist language. In an experiment carried out by ProPublica, an investigative journalism organisation, a machine learning algorithm's insight into the recidivism rates among prisoners falsely flagged "black defendants high risk twice as often as white defendants". In 2015, Google Photos tagged a couple of black people as gorillas, which caused controversy. The gorilla label was subsequently removed, and as of 2023 the service still could not recognise gorillas. Similar issues with recognising non-white people have been found in many other systems. Because of such challenges, machine learning may take longer to be effectively adopted in some domains. Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who said that "[t]here's nothing artificial about AI. It's inspired by people, it's created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility." === Financial incentives === There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States, where there is a long-standing ethical dilemma of improving health care but also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. 
There is potential for machine learning in health care to provide professionals an additional tool to diagnose, medicate, and plan recovery paths for patients, but this requires these biases to be mitigated. == Hardware == Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of nonlinear hidden units. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI. OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months. === Tensor Processing Units (TPUs) === Tensor Processing Units (TPUs) are specialised hardware accelerators developed by Google specifically for machine learning workloads. Unlike general-purpose GPUs and FPGAs, TPUs are optimised for tensor computations, making them particularly efficient for deep learning tasks such as training and inference. They are widely used in Google Cloud AI services and large-scale machine learning models like Google's DeepMind AlphaFold and large language models. TPUs leverage matrix multiplication units and high-bandwidth memory to accelerate computations while maintaining energy efficiency. Since their introduction in 2016, TPUs have become a key component of AI infrastructure, especially in cloud-based environments. === Neuromorphic computing === Neuromorphic computing refers to a class of computing systems designed to emulate the structure and functionality of biological neural networks. These systems may be implemented through software-based simulations on conventional hardware or through specialised hardware architectures. 
==== Physical neural networks ==== A physical neural network is a specific type of neuromorphic hardware that relies on electrically adjustable materials, such as memristors, to emulate the function of neural synapses. The term "physical neural network" highlights the use of physical hardware for computation, as opposed to software-based implementations. It broadly refers to artificial neural networks that use materials with adjustable resistance to replicate neural synapses. === Embedded machine learning === Embedded machine learning is a sub-field of machine learning where models are deployed on embedded systems with limited computing resources, such as wearable computers, edge devices and microcontrollers. Running models directly on these devices eliminates the need to transfer and store data on cloud servers for further processing, thereby reducing the risk of data breaches, privacy leaks and theft of intellectual property, personal data and business secrets. Embedded machine learning can be achieved through various techniques, such as hardware acceleration, approximate computing, and model optimisation. Common optimisation techniques include pruning, quantisation, knowledge distillation, low-rank factorisation, network architecture search, and parameter sharing. 
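One of the optimisation techniques listed above, quantisation, can be illustrated with a minimal sketch. This is not taken from any particular framework; the helper names and the symmetric uniform int8 scheme are illustrative assumptions:

```python
def quantize_int8(weights):
    """Uniformly quantise a list of float weights to int8 levels.

    Symmetric scheme: the scale maps the largest absolute weight to 127.
    Returns the integer codes plus the scale needed to dequantise.
    """
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Map integer codes back to approximate float weights."""
    return [c * scale for c in codes]

weights = [0.50, -1.27, 0.003, 0.81]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)
# each restored weight is within half a quantisation step of the original
assert all(abs(w - r) <= scale / 2 + 1e-9 for w, r in zip(weights, restored))
```

The trade-off visible even in this toy version is the one that motivates quantisation in embedded settings: each weight now fits in one byte, at the cost of a bounded rounding error.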
== Software == Software suites containing a variety of machine learning algorithms include the following: === Free and open-source software === === Proprietary software with free and open-source editions === KNIME RapidMiner === Proprietary software === == Journals == Journal of Machine Learning Research Machine Learning Nature Machine Intelligence Neural Computation IEEE Transactions on Pattern Analysis and Machine Intelligence == Conferences == AAAI Conference on Artificial Intelligence Association for Computational Linguistics (ACL) European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) International Conference on Computational Intelligence Methods for Bioinformatics and Biostatistics (CIBB) International Conference on Machine Learning (ICML) International Conference on Learning Representations (ICLR) International Conference on Intelligent Robots and Systems (IROS) Conference on Knowledge Discovery and Data Mining (KDD) Conference on Neural Information Processing Systems (NeurIPS) == See also == Automated machine learning – Process of automating the application of machine learning Big data – Extremely large or complex datasets Deep learning — branch of ML concerned with artificial neural networks Differentiable programming – Programming paradigm List of datasets for machine-learning research M-theory (learning framework) Machine unlearning Solomonoff's theory of inductive inference – A mathematical theory == References == == Sources == Domingos, Pedro (22 September 2015). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books. ISBN 978-0465065707. Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann. ISBN 978-1-55860-467-4. Archived from the original on 26 July 2020. Retrieved 18 November 2019. Poole, David; Mackworth, Alan; Goebel, Randy (1998). Computational Intelligence: A Logical Approach. 
New York: Oxford University Press. ISBN 978-0-19-510270-3. Archived from the original on 26 July 2020. Retrieved 22 August 2020. Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2. == Further reading == == External links == International Machine Learning Society mloss is an academic database of open-source machine learning software.
Wikipedia/Learning_algorithms
In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method. It was first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. Most often, it is used for classification, as a k-NN classifier, the output of which is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. The k-NN algorithm can also be generalized for regression. In k-NN regression, also known as nearest neighbor smoothing, the output is the property value for the object. This value is the average of the values of k nearest neighbors. If k = 1, then the output is simply assigned to the value of that single nearest neighbor, also known as nearest neighbor interpolation. For both classification and regression, a useful technique can be to assign weights to the contributions of the neighbors, so that nearer neighbors contribute more to the average than distant ones. For example, a common weighting scheme consists of giving each neighbor a weight of 1/d, where d is the distance to the neighbor. The input consists of the k closest training examples in a data set. The neighbors are taken from a set of objects for which the class (for k-NN classification) or the object property value (for k-NN regression) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required. A peculiarity (sometimes even a disadvantage) of the k-NN algorithm is its sensitivity to the local structure of the data. In k-NN classification the function is only approximated locally and all computation is deferred until function evaluation. 
Since this algorithm relies on distance, if the features represent different physical units or come in vastly different scales, then feature-wise normalizing of the training data can greatly improve its accuracy. == Statistical setting == Suppose we have pairs ( X 1 , Y 1 ) , ( X 2 , Y 2 ) , … , ( X n , Y n ) {\displaystyle (X_{1},Y_{1}),(X_{2},Y_{2}),\dots ,(X_{n},Y_{n})} taking values in R d × { 1 , 2 } {\displaystyle \mathbb {R} ^{d}\times \{1,2\}} , where Y is the class label of X, so that X | Y = r ∼ P r {\displaystyle X|Y=r\sim P_{r}} for r = 1 , 2 {\displaystyle r=1,2} (and probability distributions P r {\displaystyle P_{r}} ). Given some norm ‖ ⋅ ‖ {\displaystyle \|\cdot \|} on R d {\displaystyle \mathbb {R} ^{d}} and a point x ∈ R d {\displaystyle x\in \mathbb {R} ^{d}} , let ( X ( 1 ) , Y ( 1 ) ) , … , ( X ( n ) , Y ( n ) ) {\displaystyle (X_{(1)},Y_{(1)}),\dots ,(X_{(n)},Y_{(n)})} be a reordering of the training data such that ‖ X ( 1 ) − x ‖ ≤ ⋯ ≤ ‖ X ( n ) − x ‖ {\displaystyle \|X_{(1)}-x\|\leq \dots \leq \|X_{(n)}-x\|} . == Algorithm == The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples. In the classification phase, k is a user-defined constant, and an unlabeled vector (a query or test point) is classified by assigning the label which is most frequent among the k training samples nearest to that query point. A commonly used distance metric for continuous variables is Euclidean distance. For discrete variables, such as for text classification, another metric can be used, such as the overlap metric (or Hamming distance). In the context of gene expression microarray data, for example, k-NN has been employed with correlation coefficients, such as Pearson and Spearman, as a metric. 
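The training and classification phases just described can be sketched in plain Python, using Euclidean distance and majority voting; the function names and the tiny dataset are illustrative, not from any library:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Majority vote among the k training points nearest to `query`.

    `train` is a list of (feature_vector, label) pairs; "training" is
    just storing them, as the text notes.
    """
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

def knn_classify_weighted(train, query, k=3):
    """Variant where each neighbor votes with weight 1/d (distance-weighted)."""
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    weights = Counter()
    for x, label in neighbors:
        weights[label] += 1.0 / (math.dist(x, query) + 1e-12)
    return weights.most_common(1)[0][0]

train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
assert knn_classify(train, (0.2, 0.1), k=3) == "A"
assert knn_classify_weighted(train, (1.0, 1.05), k=3) == "B"
```

The weighted variant implements the 1/d scheme mentioned earlier; for k-NN regression one would average the neighbors' target values with the same weights instead of voting.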
Often, the classification accuracy of k-NN can be improved significantly if the distance metric is learned with specialized algorithms such as Large Margin Nearest Neighbor or Neighbourhood Components Analysis. A drawback of the basic "majority voting" classification occurs when the class distribution is skewed. That is, examples of a more frequent class tend to dominate the prediction of the new example, because they tend to be common among the k nearest neighbors due to their large number. One way to overcome this problem is to weight the classification, taking into account the distance from the test point to each of its k nearest neighbors. The class (or value, in regression problems) of each of the k nearest points is multiplied by a weight proportional to the inverse of the distance from that point to the test point. Another way to overcome skew is by abstraction in data representation. For example, in a self-organizing map (SOM), each node is a representative (a center) of a cluster of similar points, regardless of their density in the original training data. k-NN can then be applied to the SOM. == Parameter selection == The best choice of k depends upon the data; generally, larger values of k reduce the effect of noise on the classification but make boundaries between classes less distinct. A good k can be selected by various heuristic techniques (see hyperparameter optimization). The special case where the class is predicted to be the class of the closest training sample (i.e. when k = 1) is called the nearest neighbor algorithm. The accuracy of the k-NN algorithm can be severely degraded by the presence of noisy or irrelevant features, or if the feature scales are not consistent with their importance. Much research effort has been put into selecting or scaling features to improve classification. A particularly popular approach is the use of evolutionary algorithms to optimize feature scaling. 
Another popular approach is to scale features by the mutual information of the training data with the training classes. In binary (two-class) classification problems, it is helpful to choose k to be an odd number as this avoids tied votes. One popular way of choosing the empirically optimal k in this setting is via the bootstrap method. == The 1-nearest neighbor classifier == The most intuitive nearest neighbour type classifier is the one nearest neighbour classifier that assigns a point x to the class of its closest neighbour in the feature space, that is C n 1 n n ( x ) = Y ( 1 ) {\displaystyle C_{n}^{1nn}(x)=Y_{(1)}} . As the size of the training data set approaches infinity, the one nearest neighbour classifier guarantees an error rate of no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data). == The weighted nearest neighbour classifier == The k-nearest neighbour classifier can be viewed as assigning the k nearest neighbours a weight 1 / k {\displaystyle 1/k} and all others 0 weight. This can be generalised to weighted nearest neighbour classifiers. That is, where the ith nearest neighbour is assigned a weight w n i {\displaystyle w_{ni}} , with ∑ i = 1 n w n i = 1 {\textstyle \sum _{i=1}^{n}w_{ni}=1} . An analogous result on the strong consistency of weighted nearest neighbour classifiers also holds. Let C n w n n {\displaystyle C_{n}^{wnn}} denote the weighted nearest neighbour classifier with weights { w n i } i = 1 n {\displaystyle \{w_{ni}\}_{i=1}^{n}} . Subject to regularity conditions 
On the class distributions the excess risk has the following asymptotic expansion R R ( C n w n n ) − R R ( C Bayes ) = ( B 1 s n 2 + B 2 t n 2 ) { 1 + o ( 1 ) } , {\displaystyle {\mathcal {R}}_{\mathcal {R}}(C_{n}^{wnn})-{\mathcal {R}}_{\mathcal {R}}(C^{\text{Bayes}})=\left(B_{1}s_{n}^{2}+B_{2}t_{n}^{2}\right)\{1+o(1)\},} for constants B 1 {\displaystyle B_{1}} and B 2 {\displaystyle B_{2}} where s n 2 = ∑ i = 1 n w n i 2 {\displaystyle s_{n}^{2}=\sum _{i=1}^{n}w_{ni}^{2}} and t n = n − 2 / d ∑ i = 1 n w n i { i 1 + 2 / d − ( i − 1 ) 1 + 2 / d } {\displaystyle t_{n}=n^{-2/d}\sum _{i=1}^{n}w_{ni}\left\{i^{1+2/d}-(i-1)^{1+2/d}\right\}} . The optimal weighting scheme { w n i ∗ } i = 1 n {\displaystyle \{w_{ni}^{*}\}_{i=1}^{n}} , that balances the two terms in the display above, is given as follows: set k ∗ = ⌊ B n 4 d + 4 ⌋ {\displaystyle k^{*}=\lfloor Bn^{\frac {4}{d+4}}\rfloor } , w n i ∗ = 1 k ∗ [ 1 + d 2 − d 2 k ∗ 2 / d { i 1 + 2 / d − ( i − 1 ) 1 + 2 / d } ] {\displaystyle w_{ni}^{*}={\frac {1}{k^{*}}}\left[1+{\frac {d}{2}}-{\frac {d}{2{k^{*}}^{2/d}}}\{i^{1+2/d}-(i-1)^{1+2/d}\}\right]} for i = 1 , 2 , … , k ∗ {\displaystyle i=1,2,\dots ,k^{*}} and w n i ∗ = 0 {\displaystyle w_{ni}^{*}=0} for i = k ∗ + 1 , … , n {\displaystyle i=k^{*}+1,\dots ,n} . With optimal weights the dominant term in the asymptotic expansion of the excess risk is O ( n − 4 d + 4 ) {\displaystyle {\mathcal {O}}(n^{-{\frac {4}{d+4}}})} . Similar results are true when using a bagged nearest neighbour classifier. == Properties == k-NN is a special case of a variable-bandwidth, kernel density "balloon" estimator with a uniform kernel. The naive version of the algorithm is easy to implement by computing the distances from the test example to all stored examples, but it is computationally intensive for large training sets. Using an approximate nearest neighbor search algorithm makes k-NN computationally tractable even for large data sets. 
Many nearest neighbor search algorithms have been proposed over the years; these generally seek to reduce the number of distance evaluations actually performed. k-NN has some strong consistency results. As the amount of data approaches infinity, the two-class k-NN algorithm is guaranteed to yield an error rate no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data). Various improvements to the k-NN speed are possible by using proximity graphs. For multi-class k-NN classification, Cover and Hart (1967) prove an upper bound error rate of R ∗ ≤ R k N N ≤ R ∗ ( 2 − M R ∗ M − 1 ) {\displaystyle R^{*}\ \leq \ R_{k\mathrm {NN} }\ \leq \ R^{*}\left(2-{\frac {MR^{*}}{M-1}}\right)} where R ∗ {\displaystyle R^{*}} is the Bayes error rate (which is the minimal error rate possible), R k N N {\displaystyle R_{kNN}} is the asymptotic k-NN error rate, and M is the number of classes in the problem. This bound is tight in the sense that both the lower and upper bounds are achievable by some distribution. For M = 2 {\displaystyle M=2} and as the Bayesian error rate R ∗ {\displaystyle R^{*}} approaches zero, this limit reduces to "not more than twice the Bayesian error rate". == Error rates == There are many results on the error rate of the k nearest neighbour classifiers. The k-nearest neighbour classifier is strongly (that is for any joint distribution on ( X , Y ) {\displaystyle (X,Y)} ) consistent provided k := k n {\displaystyle k:=k_{n}} diverges and k n / n {\displaystyle k_{n}/n} converges to zero as n → ∞ {\displaystyle n\to \infty } . Let C n k n n {\displaystyle C_{n}^{knn}} denote the k nearest neighbour classifier based on a training set of size n. 
Under certain regularity conditions, the excess risk yields the following asymptotic expansion R R ( C n k n n ) − R R ( C Bayes ) = { B 1 1 k + B 2 ( k n ) 4 / d } { 1 + o ( 1 ) } , {\displaystyle {\mathcal {R}}_{\mathcal {R}}(C_{n}^{knn})-{\mathcal {R}}_{\mathcal {R}}(C^{\text{Bayes}})=\left\{B_{1}{\frac {1}{k}}+B_{2}\left({\frac {k}{n}}\right)^{4/d}\right\}\{1+o(1)\},} for some constants B 1 {\displaystyle B_{1}} and B 2 {\displaystyle B_{2}} . The choice k ∗ = ⌊ B n 4 d + 4 ⌋ {\displaystyle k^{*}=\left\lfloor Bn^{\frac {4}{d+4}}\right\rfloor } offers a trade off between the two terms in the above display, for which the k ∗ {\displaystyle k^{*}} -nearest neighbour error converges to the Bayes error at the optimal (minimax) rate O ( n − 4 d + 4 ) {\displaystyle {\mathcal {O}}\left(n^{-{\frac {4}{d+4}}}\right)} . == Metric learning == The K-nearest neighbor classification performance can often be significantly improved through (supervised) metric learning. Popular algorithms are neighbourhood components analysis and large margin nearest neighbor. Supervised metric learning algorithms use the label information to learn a new metric or pseudo-metric. == Feature extraction == When the input data to an algorithm is too large to be processed and it is suspected to be redundant (e.g. the same measurement in both feet and meters) then the input data will be transformed into a reduced representation set of features (also named features vector). Transforming the input data into the set of features is called feature extraction. If the features extracted are carefully chosen it is expected that the features set will extract the relevant information from the input data in order to perform the desired task using this reduced representation instead of the full size input. Feature extraction is performed on raw data prior to applying k-NN algorithm on the transformed data in feature space. 
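Given the scale-sensitivity of distance noted earlier, one common preprocessing step applied to raw features before k-NN is z-score standardisation of each column. A minimal sketch (the data values are invented for illustration):

```python
def zscore_columns(rows):
    """Standardise each feature column to zero mean and unit variance.

    Guards against zero variance so constant features do not divide by zero.
    """
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = []
    for c, m in zip(cols, means):
        var = sum((v - m) ** 2 for v in c) / len(c)
        stds.append(var ** 0.5 or 1.0)
    return [
        tuple((v - m) / s for v, m, s in zip(row, means, stds))
        for row in rows
    ]

# two features on wildly different scales (e.g. height in cm, income)
rows = [(180.0, 70000.0), (160.0, 30000.0), (170.0, 50000.0)]
scaled = zscore_columns(rows)
# after scaling, both features contribute comparably to Euclidean distance
```

Without such scaling, the second feature would dominate every Euclidean distance computation and the first would be effectively ignored.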
An example of a typical computer vision computation pipeline for face recognition using k-NN including feature extraction and dimension reduction pre-processing steps (usually implemented with OpenCV): Haar face detection Mean-shift tracking analysis PCA or Fisher LDA projection into feature space, followed by k-NN classification == Dimension reduction == For high-dimensional data (e.g., with number of dimensions more than 10) dimension reduction is usually performed prior to applying the k-NN algorithm in order to avoid the effects of the curse of dimensionality. The curse of dimensionality in the k-NN context basically means that Euclidean distance is unhelpful in high dimensions because all vectors are almost equidistant to the search query vector (imagine multiple points lying more or less on a circle with the query point at the center; the distance from the query to all data points in the search space is almost the same). Feature extraction and dimension reduction can be combined in one step using principal component analysis (PCA), linear discriminant analysis (LDA), or canonical correlation analysis (CCA) techniques as a pre-processing step, followed by clustering by k-NN on feature vectors in reduced-dimension space. This process is also called low-dimensional embedding. For very-high-dimensional datasets (e.g. when performing a similarity search on live video streams, DNA data or high-dimensional time series) running a fast approximate k-NN search using locality sensitive hashing, "random projections", "sketches" or other high-dimensional similarity search techniques from the VLDB toolbox might be the only feasible option. == Decision boundary == Nearest neighbor rules in effect implicitly compute the decision boundary. It is also possible to compute the decision boundary explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary complexity. 
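Of the high-dimensional techniques mentioned above, "random projections" are simple enough to sketch in a few lines. This is a generic Gaussian projection (not from any specific library), relying on the Johnson-Lindenstrauss property that pairwise distances are approximately preserved with high probability, which is exactly what k-NN search needs:

```python
import random

def random_projection(rows, out_dim, seed=0):
    """Project vectors down to `out_dim` dimensions via a random Gaussian matrix.

    Entries are drawn N(0, 1/out_dim) so that expected squared norms are
    preserved; the same matrix is applied to every vector.
    """
    rng = random.Random(seed)
    in_dim = len(rows[0])
    matrix = [[rng.gauss(0.0, 1.0 / out_dim ** 0.5) for _ in range(in_dim)]
              for _ in range(out_dim)]
    return [
        tuple(sum(m * v for m, v in zip(mrow, row)) for mrow in matrix)
        for row in rows
    ]

# five 50-dimensional unit vectors, reduced to 10 dimensions
high_dim = [tuple(float(i == j) for i in range(50)) for j in range(5)]
low_dim = random_projection(high_dim, out_dim=10)
assert len(low_dim) == 5 and len(low_dim[0]) == 10
```

k-NN would then be run on `low_dim` instead of the original vectors, trading a small distortion of distances for a large drop in cost per distance evaluation.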
== Data reduction == Data reduction is one of the most important problems for work with huge data sets. Usually, only some of the data points are needed for accurate classification. Those data are called the prototypes and can be found as follows: Select the class-outliers, that is, training data that are classified incorrectly by k-NN (for a given k) Separate the rest of the data into two sets: (i) the prototypes that are used for the classification decisions and (ii) the absorbed points that can be correctly classified by k-NN using prototypes. The absorbed points can then be removed from the training set. === Selection of class-outliers === A training example surrounded by examples of other classes is called a class outlier. Causes of class outliers include: random error insufficient training examples of this class (an isolated example appears instead of a cluster) missing important features (the classes are separated in other dimensions which we don't know) too many training examples of other classes (unbalanced classes) that create a "hostile" background for the given small class Class outliers with k-NN produce noise. They can be detected and separated for future analysis. Given two natural numbers, k>r>0, a training example is called a (k,r)NN class-outlier if its k nearest neighbors include more than r examples of other classes. === Condensed Nearest Neighbor for data reduction === Condensed nearest neighbor (CNN, the Hart algorithm) is an algorithm designed to reduce the data set for k-NN classification. It selects the set of prototypes U from the training data, such that 1NN with U can classify the examples almost as accurately as 1NN does with the whole data set. Given a training set X, CNN works iteratively: Scan all elements of X, looking for an element x whose nearest prototype from U has a different label than x. Remove x from X and add it to U Repeat the scan until no more prototypes are added to U. Use U instead of X for classification. 
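The iterative scan just described can be sketched as follows; this is a minimal version of Hart's algorithm, and seeding U with the first training element is an arbitrary illustrative choice:

```python
import math

def condensed_nn(train):
    """Hart's condensed nearest neighbour: select a prototype subset U such
    that 1-NN with U classifies the training data like 1-NN with all of it."""
    def nearest_label(U, x):
        return min(U, key=lambda p: math.dist(p[0], x))[1]

    X = list(train)
    U = [X.pop(0)]                 # seed U with an arbitrary element
    changed = True
    while changed:                 # repeat the scan until no more additions
        changed = False
        for item in list(X):
            x, label = item
            if nearest_label(U, x) != label:   # misclassified -> promote to U
                U.append(item)
                X.remove(item)
                changed = True
    return U

train = [((0.0, 0.0), "A"), ((0.1, 0.0), "A"), ((0.2, 0.1), "A"),
         ((1.0, 1.0), "B"), ((1.1, 1.0), "B"), ((0.9, 0.9), "B")]
prototypes = condensed_nn(train)
assert len(prototypes) < len(train)   # interior points were absorbed
```

The points left in X at termination are the "absorbed" points; classification then uses `prototypes` in place of the full training set.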
The examples that are not prototypes are called "absorbed" points. It is efficient to scan the training examples in order of decreasing border ratio. The border ratio of a training example x is defined as a(x) = ‖x'-y‖ / ‖x-y‖, where ‖x-y‖ is the distance to the closest example y having a different color than x, and ‖x'-y‖ is the distance from y to its closest example x' with the same label as x. The border ratio is in the interval [0,1] because ‖x'-y‖ never exceeds ‖x-y‖. This ordering gives preference to the borders of the classes for inclusion in the set of prototypes U. A point of a different label than x is called external to x. The calculation of the border ratio is illustrated by the figure on the right. The data points are labeled by colors: the initial point is x and its label is red. External points are blue and green. The closest to x external point is y. The closest to y red point is x'. The border ratio a(x) = ‖x'-y‖ / ‖x-y‖ is the attribute of the initial point x. Below is an illustration of CNN in a series of figures. There are three classes (red, green and blue). Fig. 1: initially there are 60 points in each class. Fig. 2 shows the 1NN classification map: each pixel is classified by 1NN using all the data. Fig. 3 shows the 5NN classification map. White areas correspond to the unclassified regions, where 5NN voting is tied (for example, if there are two green, two red and one blue points among 5 nearest neighbors). Fig. 4 shows the reduced data set. The crosses are the class-outliers selected by the (3,2)NN rule (all the three nearest neighbors of these instances belong to other classes); the squares are the prototypes, and the empty circles are the absorbed points. The left bottom corner shows the numbers of the class-outliers, prototypes and absorbed points for all three classes. The number of prototypes varies from 15% to 20% for different classes in this example. Fig. 
5 shows that the 1NN classification map with the prototypes is very similar to that with the initial data set. The figures were produced using the Mirkes applet. == k-NN regression == In k-NN regression, also known as k-NN smoothing, the k-NN algorithm is used for estimating continuous variables. One such algorithm uses a weighted average of the k nearest neighbors, weighted by the inverse of their distance. This algorithm works as follows: Compute the Euclidean or Mahalanobis distance from the query example to the labeled examples. Order the labeled examples by increasing distance. Find a heuristically optimal number k of nearest neighbors, based on RMSE. This is done using cross validation. Calculate an inverse distance weighted average with the k-nearest multivariate neighbors. == k-NN outlier == The distance to the kth nearest neighbor can also be seen as a local density estimate and thus is also a popular outlier score in anomaly detection: the larger the distance to the k-NN, the lower the local density, and the more likely the query point is an outlier. Although quite simple, this outlier model, along with another classic data mining method, local outlier factor, also works well in comparison to more recent and more complex approaches, according to a large-scale experimental analysis. == Validation of results == A confusion matrix or "matching matrix" is often used as a tool to validate the accuracy of k-NN classification. More robust statistical methods such as the likelihood-ratio test can also be applied. == See also == Nearest centroid classifier Closest pair of points problem Nearest neighbor graph Segmentation-based object categorization == References == == Further reading == Dasarathy, Belur V., ed. (1991). Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. IEEE Computer Society Press. ISBN 978-0818689307. Shakhnarovich, Gregory; Darrell, Trevor; Indyk, Piotr, eds. (2005). 
Nearest-Neighbor Methods in Learning and Vision. MIT Press. ISBN 978-0262195478.
Wikipedia/K-nearest_neighbor_algorithm
A Siamese neural network (sometimes called a twin neural network) is an artificial neural network that uses the same weights while working in tandem on two different input vectors to compute comparable output vectors. Often one of the output vectors is precomputed, thus forming a baseline against which the other output vector is compared. This is similar to comparing fingerprints but can be described more technically as a distance function for locality-sensitive hashing. It is possible to build an architecture that is functionally similar to a twin network but implements a slightly different function. This is typically used for comparing similar instances in different type sets. Uses of similarity measures where a twin network might be used include recognizing handwritten checks, automatic detection of faces in camera images, and matching queries with indexed documents. Perhaps the best-known application of twin networks is face recognition, where known images of people are precomputed and compared to an image from a turnstile or similar source. It is not obvious at first, but there are two slightly different problems. One is recognizing a person among a large number of other persons: that is the facial recognition problem. DeepFace is an example of such a system. In its most extreme form this is recognizing a single person at a train station or airport. The other is face verification, that is, verifying whether the photo in a pass matches the person presenting it. The twin network might be the same, but the implementation can be quite different. == Learning == Learning in twin networks can be done with triplet loss or contrastive loss. For learning by triplet loss a baseline vector (anchor image) is compared against a positive vector (truthy image) and a negative vector (falsy image). The negative vector will force learning in the network, while the positive vector will act like a regularizer. 
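The triplet loss just described can be sketched directly, using the squared Euclidean distance that is commonly used in practice; the margin value and the toy embeddings are illustrative, and in a real system the three vectors would come from the shared-weight network:

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss with squared Euclidean distance.

    Pushes the anchor-negative distance to exceed the anchor-positive
    distance by at least `margin`; zero once the triplet is satisfied.
    """
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(0.0, d_pos - d_neg + margin)

anchor   = (0.0, 1.0)
positive = (0.1, 0.9)   # same identity: close to the anchor
negative = (1.0, 0.0)   # different identity: far from the anchor
assert triplet_loss(anchor, positive, negative) == 0.0   # already separated
assert triplet_loss(anchor, negative, positive) > 0.0    # swapped roles
```

The asymmetry in the two assertions shows the roles in the text: the negative forces a gradient until it is pushed `margin` further away than the positive, while a well-placed positive contributes nothing and so acts only as a regularising constraint.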
For learning by contrastive loss there must be a weight decay to regularize the weights, or some similar operation like a normalization. A distance metric for a loss function may have the following properties: Non-negativity: δ ( x , y ) ≥ 0 {\displaystyle \delta (x,y)\geq 0} Identity of indiscernibles: δ ( x , y ) = 0 ⟺ x = y {\displaystyle \delta (x,y)=0\iff x=y} Commutativity: δ ( x , y ) = δ ( y , x ) {\displaystyle \delta (x,y)=\delta (y,x)} Triangle inequality: δ ( x , z ) ≤ δ ( x , y ) + δ ( y , z ) {\displaystyle \delta (x,z)\leq \delta (x,y)+\delta (y,z)} In particular, the triplet loss algorithm is often defined with the squared Euclidean distance at its core (which, unlike the plain Euclidean distance, does not satisfy the triangle inequality). === Predefined metrics, Euclidean distance metric === The common learning goal is to minimize a distance metric for similar objects and maximize it for distinct ones. This gives a loss function like δ ( x ( i ) , x ( j ) ) = { min ‖ f ⁡ ( x ( i ) ) − f ⁡ ( x ( j ) ) ‖ , i = j max ‖ f ⁡ ( x ( i ) ) − f ⁡ ( x ( j ) ) ‖ , i ≠ j {\displaystyle {\begin{aligned}\delta (x^{(i)},x^{(j)})={\begin{cases}\min \ \|\operatorname {f} \left(x^{(i)}\right)-\operatorname {f} \left(x^{(j)}\right)\|\,,i=j\\\max \ \|\operatorname {f} \left(x^{(i)}\right)-\operatorname {f} \left(x^{(j)}\right)\|\,,i\neq j\end{cases}}\end{aligned}}} where i , j {\displaystyle i,j} are indexes into a set of vectors and f ⁡ ( ⋅ ) {\displaystyle \operatorname {f} (\cdot )} is the function implemented by the twin network. The most common distance metric used is the Euclidean distance, in which case the loss function can be rewritten in matrix form as δ ⁡ ( x ( i ) , x ( j ) ) ≈ ( x ( i ) − x ( j ) ) T ( x ( i ) − x ( j ) ) {\displaystyle \operatorname {\delta } (\mathbf {x} ^{(i)},\mathbf {x} ^{(j)})\approx (\mathbf {x} ^{(i)}-\mathbf {x} ^{(j)})^{T}(\mathbf {x} ^{(i)}-\mathbf {x} ^{(j)})} === Learned metrics, nonlinear distance metric === A more general case is where the output vector from the twin network
is passed through additional network layers implementing non-linear distance metrics. if i = j then δ ⁡ [ f ⁡ ( x ( i ) ) , f ⁡ ( x ( j ) ) ] is small otherwise δ ⁡ [ f ⁡ ( x ( i ) ) , f ⁡ ( x ( j ) ) ] is large {\displaystyle {\begin{aligned}{\text{if}}\,i=j\,{\text{then}}&\,\operatorname {\delta } \left[\operatorname {f} \left(x^{(i)}\right),\,\operatorname {f} \left(x^{(j)}\right)\right]\,{\text{is small}}\\{\text{otherwise}}&\,\operatorname {\delta } \left[\operatorname {f} \left(x^{(i)}\right),\,\operatorname {f} \left(x^{(j)}\right)\right]\,{\text{is large}}\end{aligned}}} where i , j {\displaystyle i,j} are indexes into a set of vectors, f ⁡ ( ⋅ ) {\displaystyle \operatorname {f} (\cdot )} is the function implemented by the twin network, and δ ⁡ ( ⋅ ) {\displaystyle \operatorname {\delta } (\cdot )} is the function implemented by the network joining the outputs from the twin network. In matrix form, the above is often approximated as a Mahalanobis distance for a linear space as δ ⁡ ( x ( i ) , x ( j ) ) ≈ ( x ( i ) − x ( j ) ) T M ( x ( i ) − x ( j ) ) {\displaystyle \operatorname {\delta } (\mathbf {x} ^{(i)},\mathbf {x} ^{(j)})\approx (\mathbf {x} ^{(i)}-\mathbf {x} ^{(j)})^{T}\mathbf {M} (\mathbf {x} ^{(i)}-\mathbf {x} ^{(j)})} This can be further subdivided into at least unsupervised and supervised learning approaches.
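The Mahalanobis-style approximation above can be sketched in a few lines; the matrix M here is a hand-picked positive semi-definite example, whereas in a learned metric it would be fit to data:

```python
import numpy as np

def mahalanobis_sq(x_i, x_j, M):
    """Squared Mahalanobis-style distance (x_i - x_j)^T M (x_i - x_j).
    M would normally be learned; M = I recovers the squared Euclidean
    distance."""
    d = x_i - x_j
    return float(d @ M @ d)

x1 = np.array([1.0, 2.0])
x2 = np.array([2.0, 0.0])
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # weights the first coordinate more heavily
print(mahalanobis_sq(x1, x2, M))          # 6.0
print(mahalanobis_sq(x1, x2, np.eye(2)))  # 5.0 (squared Euclidean)
```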
=== Learned metrics, half-twin networks === This form also allows the twin network to be more of a half-twin, implementing slightly different functions: if i = j then δ ⁡ [ f ⁡ ( x ( i ) ) , g ⁡ ( x ( j ) ) ] is small otherwise δ ⁡ [ f ⁡ ( x ( i ) ) , g ⁡ ( x ( j ) ) ] is large {\displaystyle {\begin{aligned}{\text{if}}\,i=j\,{\text{then}}&\,\operatorname {\delta } \left[\operatorname {f} \left(x^{(i)}\right),\,\operatorname {g} \left(x^{(j)}\right)\right]\,{\text{is small}}\\{\text{otherwise}}&\,\operatorname {\delta } \left[\operatorname {f} \left(x^{(i)}\right),\,\operatorname {g} \left(x^{(j)}\right)\right]\,{\text{is large}}\end{aligned}}} where i , j {\displaystyle i,j} are indexes into a set of vectors, f ⁡ ( ⋅ ) , g ⁡ ( ⋅ ) {\displaystyle \operatorname {f} (\cdot ),\operatorname {g} (\cdot )} are the functions implemented by the half-twin networks, and δ ⁡ ( ⋅ ) {\displaystyle \operatorname {\delta } (\cdot )} is the function implemented by the network joining their outputs. == Twin networks for object tracking == Twin networks have been used in object tracking because of their two tandem inputs and built-in similarity measurement. In object tracking, one input of the twin network is a user-preselected exemplar image, while the other input is a larger search image; the twin network's job is to locate the exemplar inside the search image. By measuring the similarity between the exemplar and each part of the search image, a map of similarity scores can be produced by the twin network. Furthermore, using a fully convolutional network, the process of computing each region's similarity score can be replaced with a single cross-correlation layer. First introduced in 2016, twin fully convolutional networks have been used in many high-performance real-time object tracking neural networks, such as CFnet, StructSiam, SiamFC-tri, DSiam, SA-Siam, SiamRPN, DaSiamRPN, Cascaded SiamRPN, SiamMask, SiamRPN++, and Deeper and Wider SiamRPN.
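The similarity-map idea behind trackers like SiamFC can be sketched as a plain cross-correlation. This is a minimal, assumption-laden illustration: a real tracker would first embed both inputs with the same convolutional network, whereas here raw patches are correlated directly.

```python
import numpy as np

def similarity_map(exemplar, search):
    """Slide the exemplar over the search image and record a similarity
    score (inner product) at each offset, producing a score map whose
    peak marks the most likely location of the exemplar."""
    eh, ew = exemplar.shape
    sh, sw = search.shape
    out = np.empty((sh - eh + 1, sw - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(exemplar * search[i:i+eh, j:j+ew])
    return out

# Embed the exemplar at offset (2, 3) of an otherwise empty search image.
rng = np.random.default_rng(0)
exemplar = rng.random((4, 4))
search = np.zeros((10, 10))
search[2:6, 3:7] = exemplar
scores = similarity_map(exemplar, search)
print(np.unravel_index(scores.argmax(), scores.shape))  # (2, 3)
```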
== See also == Artificial neural network Triplet loss == Further reading == Chicco, Davide (2020), "Siamese neural networks: an overview", Artificial Neural Networks, Methods in Molecular Biology, vol. 2190 (3rd ed.), New York City, New York, USA: Springer Protocols, Humana Press, pp. 73–94, doi:10.1007/978-1-0716-0826-5_3, ISBN 978-1-0716-0826-5, PMID 32804361, S2CID 221144012 == References ==
Wikipedia/Siamese_neural_network
A neural network is a group of interconnected units called neurons that send signals to one another. Neurons can be either biological cells or signal pathways. While individual neurons are simple, many of them together in a network can perform complex tasks. There are two main types of neural networks. In neuroscience, a biological neural network is a physical structure found in brains and complex nervous systems – a population of nerve cells connected by synapses. In machine learning, an artificial neural network is a mathematical model used to approximate nonlinear functions. Artificial neural networks are used to solve artificial intelligence problems. == In biology == In the context of biology, a neural network is a population of biological neurons chemically connected to each other by synapses. A given neuron can be connected to hundreds of thousands of synapses. Each neuron sends and receives electrochemical signals called action potentials to its connected neighbors. A neuron can serve an excitatory role, amplifying and propagating signals it receives, or an inhibitory role, suppressing signals instead. Populations of interconnected neurons that are smaller than neural networks are called neural circuits. Very large interconnected networks are called large scale brain networks, and many of these together form brains and nervous systems. Signals generated by neural networks in the brain eventually travel through the nervous system and across neuromuscular junctions to muscle cells, where they cause contraction and thereby motion. == In machine learning == In machine learning, a neural network is an artificial mathematical model used to approximate nonlinear functions. While early artificial neural networks were physical machines, today they are almost always implemented in software. 
Neurons in an artificial neural network are usually arranged into layers, with information passing from the first layer (the input layer) through one or more intermediate layers (the hidden layers) to the final layer (the output layer). The "signal" input to each neuron is a number, specifically a linear combination of the outputs of the connected neurons in the previous layer. The signal each neuron outputs is calculated from this number, according to its activation function. The behavior of the network depends on the strengths (or weights) of the connections between neurons. A network is trained by modifying these weights through empirical risk minimization or backpropagation in order to fit some preexisting dataset. The term deep neural network refers to neural networks that have more than three layers, typically including at least two hidden layers in addition to the input and output layers. Neural networks are used to solve problems in artificial intelligence, and have thereby found applications in many disciplines, including predictive modeling, adaptive control, facial recognition, handwriting recognition, general game playing, and generative AI. == History == The theoretical base for contemporary neural networks was independently proposed by Alexander Bain in 1873 and William James in 1890. Both posited that human thought emerged from interactions among large numbers of neurons inside the brain. In 1949, Donald Hebb described Hebbian learning, the idea that neural networks can change and learn over time by strengthening a synapse every time a signal travels along it. Artificial neural networks were originally used to model biological neural networks starting in the 1930s under the approach of connectionism. 
However, starting with the artificial neuron model proposed by Warren McCulloch and Walter Pitts in 1943, followed by Frank Rosenblatt's hardware implementation of the perceptron, a simple artificial neural network, in 1957, artificial neural networks became increasingly used for machine learning applications instead, and increasingly different from their biological counterparts. == See also == Emergence Biological cybernetics Biologically-inspired computing == References ==
Wikipedia/Neural_Network
A cellular network or mobile network is a telecommunications network where the link to and from end nodes is wireless and the network is distributed over land areas called cells, each served by at least one fixed-location transceiver (such as a base station). These base stations provide the cell with the network coverage which can be used for transmission of voice, data, and other types of content via radio waves. Each cell's coverage area is determined by factors such as the power of the transceiver, the terrain, and the frequency band being used. A cell typically uses a different set of frequencies from neighboring cells, to avoid interference and provide guaranteed service quality within each cell. When joined together, these cells provide radio coverage over a wide geographic area. This enables numerous devices, including mobile phones, tablets, laptops equipped with mobile broadband modems, and wearable devices such as smartwatches, to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the devices are moving through more than one cell during transmission. The design of cellular networks allows for seamless handover, enabling uninterrupted communication when a device moves from one cell to another. Modern cellular networks utilize advanced technologies such as Multiple Input Multiple Output (MIMO), beamforming, and small cells to enhance network capacity and efficiency. 
Cellular networks offer a number of desirable features:
More capacity than a single large transmitter, since the same frequency can be used for multiple links as long as they are in different cells
Mobile devices use less power than with a single transmitter or satellite since the cell towers are closer
Larger coverage area than a single terrestrial transmitter, since additional cell towers can be added indefinitely and are not limited by the horizon
Capability of utilizing higher frequency signals (and thus more available bandwidth / faster data rates) that are not able to propagate at long distances
With data compression and multiplexing, several video (including digital video) and audio channels may travel through a higher frequency signal on a single wideband carrier
Major telecommunications providers have deployed voice and data cellular networks over most of the inhabited land area of Earth. This allows mobile phones and other devices to be connected to the public switched telephone network and public Internet access. In addition to traditional voice and data services, cellular networks now support Internet of Things (IoT) applications, connecting devices such as smart meters, vehicles, and industrial sensors. The evolution of cellular networks from 1G to 5G has progressively introduced faster speeds, lower latency, and support for a larger number of devices, enabling advanced applications in fields such as healthcare, transportation, and smart cities. Private cellular networks can be used for research or for large organizations and fleets, such as dispatch for local public safety agencies or a taxicab company, as well as for local wireless communications in enterprise and industrial settings such as factories, warehouses, mines, power plants, substations, oil and gas facilities and ports. == Concept == In a cellular radio system, a land area to be supplied with radio service is divided into cells in a pattern dependent on terrain and reception characteristics.
These cell patterns roughly take the form of regular shapes, such as hexagons, squares, or circles, although hexagonal cells are conventional. Each of these cells is assigned multiple frequencies (f1 – f6) which have corresponding radio base stations. The group of frequencies can be reused in other cells, provided that the same frequencies are not reused in adjacent cells, which would cause co-channel interference. The increased capacity in a cellular network, compared with a network with a single transmitter, comes from the mobile communication switching system developed by Amos Joel of Bell Labs that permitted multiple callers in a given area to use the same frequency by switching calls to the nearest available cellular tower having that frequency available. This strategy is viable because a given radio frequency can be reused in a different area for an unrelated transmission. In contrast, a single transmitter can only handle one transmission for a given frequency. Inevitably, there is some level of interference from the signal from the other cells which use the same frequency. Consequently, there must be at least one cell gap between cells which reuse the same frequency in a standard frequency-division multiple access (FDMA) system. Consider the case of a taxi company, where each radio has a manually operated channel selector knob to tune to different frequencies. As drivers move around, they change from channel to channel. The drivers are aware of which frequency approximately covers some area. When they do not receive a signal from the transmitter, they try other channels until finding one that works. The taxi drivers only speak one at a time, when invited by the base station operator. This is a form of time-division multiple access (TDMA). == History == The idea to establish a standard cellular phone network was first proposed on December 11, 1947. This proposal was put forward by Douglas H.
Ring, a Bell Labs engineer, in an internal memo suggesting the development of a cellular telephone system by AT&T. The first commercial cellular network, the 1G generation, was launched in Japan by Nippon Telegraph and Telephone (NTT) in 1979, initially in the metropolitan area of Tokyo. However, NTT did not initially commercialize the system; the early launch was motivated by an effort to understand a practical cellular system rather than by an interest to profit from it. In 1981, the Nordic Mobile Telephone system was created as the first network to cover an entire country. The network was released in 1981 in Sweden and Norway, then in early 1982 in Finland and Denmark. Televerket, a state-owned corporation responsible for telecommunications in Sweden, launched the system. In September 1981, Jan Stenbeck, a financier and businessman, launched Comvik, a new Swedish telecommunications company. Comvik was the first European telecommunications firm to challenge the state's telephone monopoly on the industry. According to some sources, Comvik was the first to launch a commercial automatic cellular system before Televerket launched its own in October 1981. However, at the time of the new network’s release, the Swedish Post and Telecom Authority threatened to shut down the system after claiming that the company had used an unlicensed automatic gear that could interfere with its own networks. In December 1981, Sweden awarded Comvik with a license to operate its own automatic cellular network in the spirit of market competition. The Bell System had developed cellular technology since 1947, and had cellular networks in operation in Chicago, Illinois, and Dallas, Texas, prior to 1979; however, regulatory battles delayed AT&T's deployment of cellular service to 1983, when its Regional Holding Company Illinois Bell first provided cellular service. First-generation cellular network technology continued to expand its reach to the rest of the world. 
In 1990, Millicom Inc., a telecommunications service provider, strategically partnered with Comvik’s international cellular operations to become Millicom International Cellular SA. The company went on to establish a 1G systems foothold in Ghana under the brand name Mobitel. In 2006, the company’s Ghana operations were renamed Tigo. The wireless revolution began in the early 1990s, leading to the transition from analog to digital networks. The MOSFET, invented at Bell Labs between 1955 and 1960, was adapted for cellular networks by the early 1990s, with the wide adoption of power MOSFET, LDMOS (RF amplifier), and RF CMOS (RF circuit) devices leading to the development and proliferation of digital wireless mobile networks. The first commercial digital cellular network, the 2G generation, was launched in 1991. This sparked competition in the sector as the new operators challenged the incumbent 1G analog network operators. == Cell signal encoding == To distinguish signals from several different transmitters, a number of channel access methods have been developed, including frequency-division multiple access (FDMA, used by analog and D-AMPS systems), time-division multiple access (TDMA, used by GSM) and code-division multiple access (CDMA, first used for PCS, and the basis of 3G). With FDMA, the transmitting and receiving frequencies used by different users in each cell are different from each other. Each cellular call was assigned a pair of frequencies (one for base to mobile, the other for mobile to base) to provide full-duplex operation. The original AMPS systems had 666 channel pairs, 333 each for the CLEC "A" system and ILEC "B" system. The number of channels was expanded to 416 pairs per carrier, but ultimately the number of RF channels limits the number of calls that a cell site could handle.
FDMA is a familiar technology to telephone companies, which used frequency-division multiplexing to add channels to their point-to-point wireline plants before time-division multiplexing rendered FDM obsolete. With TDMA, the transmitting and receiving time slots used by different users in each cell are different from each other. TDMA typically uses digital signaling to store and forward bursts of voice data that are fit into time slices for transmission, and expanded at the receiving end to produce a somewhat normal-sounding voice at the receiver. TDMA must introduce latency (time delay) into the audio signal. As long as the latency time is short enough that the delayed audio is not heard as an echo, it is not problematic. TDMA is a familiar technology for telephone companies, which used time-division multiplexing to add channels to their point-to-point wireline plants before packet switching rendered TDM obsolete. The principle of CDMA is based on spread spectrum technology developed for military use during World War II and improved during the Cold War into direct-sequence spread spectrum (DSSS), which was used for early CDMA cellular systems and Wi-Fi. DSSS allows multiple simultaneous phone conversations to take place on a single wideband RF channel, without needing to channelize them in time or frequency. Although more sophisticated than older multiple access schemes (and unfamiliar to legacy telephone companies because it was not developed by Bell Labs), CDMA has scaled well to become the basis for 3G cellular radio systems. Other available methods of multiplexing such as MIMO, a more sophisticated version of antenna diversity, combined with active beamforming, provide much greater spatial multiplexing ability compared to original AMPS cells, which typically only addressed one to three unique spaces.
Massive MIMO deployment allows much greater channel reuse, thus increasing the number of subscribers per cell site, greater data throughput per user, or some combination thereof. Quadrature Amplitude Modulation (QAM) modems offer an increasing number of bits per symbol, allowing more users per megahertz of bandwidth (and decibels of SNR), greater data throughput per user, or some combination thereof. == Frequency reuse == The key characteristic of a cellular network is the ability to reuse frequencies to increase both coverage and capacity. As described above, adjacent cells must use different frequencies, however, there is no problem with two cells sufficiently far apart operating on the same frequency, provided the masts and cellular network users' equipment do not transmit with too much power. The elements that determine frequency reuse are the reuse distance and the reuse factor. The reuse distance, D is calculated as D = R 3 N {\displaystyle D=R{\sqrt {3N}}} , where R is the cell radius and N is the number of cells per cluster. Cells may vary in radius from 1 to 30 kilometres (0.62 to 18.64 mi). The boundaries of the cells can also overlap between adjacent cells and large cells can be divided into smaller cells. The frequency reuse factor is the rate at which the same frequency can be used in the network. It is 1/K (or K according to some books) where K is the number of cells which cannot use the same frequencies for transmission. Common values for the frequency reuse factor are 1/3, 1/4, 1/7, 1/9 and 1/12 (or 3, 4, 7, 9 and 12, depending on notation). In case of N sector antennas on the same base station site, each with different direction, the base station site can serve N different sectors. N is typically 3. A reuse pattern of N/K denotes a further division in frequency among N sector antennas per site. Some current and historical reuse patterns are 3/7 (North American AMPS), 6/4 (Motorola NAMPS), and 3/4 (GSM). 
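The reuse-distance formula and the per-cell bandwidth split described above can be worked through numerically; the bandwidth, radius, and reuse values below are made-up illustrative figures, not from the text:

```python
import math

def reuse_distance(R_km, cluster_size):
    """Frequency reuse distance D = R * sqrt(3N), where R is the cell
    radius and N is the number of cells per cluster."""
    return R_km * math.sqrt(3 * cluster_size)

B = 25.0      # total available bandwidth in MHz (hypothetical)
K = 7         # reuse factor denominator: cells that must use distinct frequencies
sectors = 3   # sector antennas per base station site (typical value)

print(round(reuse_distance(2.0, K), 2))  # 9.17 km between co-channel cells
print(B / K)                             # bandwidth per cell (B/K)
print(B / (sectors * K))                 # bandwidth per sector (B/NK)
```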
If the total available bandwidth is B, each cell can only use a number of frequency channels corresponding to a bandwidth of B/K, and each sector can use a bandwidth of B/NK. Code-division multiple access-based systems use a wider frequency band to achieve the same rate of transmission as FDMA, but this is compensated for by the ability to use a frequency reuse factor of 1, for example using a reuse pattern of 1/1. In other words, adjacent base station sites use the same frequencies, and the different base stations and users are separated by codes rather than frequencies. While N is shown as 1 in this example, that does not mean the CDMA cell has only one sector, but rather that the entire cell bandwidth is also available to each sector individually. More recently, orthogonal frequency-division multiple access (OFDMA) based systems such as LTE have been deployed with a frequency reuse of 1. Since such systems do not spread the signal across the frequency band, inter-cell radio resource management is important to coordinate resource allocation between different cell sites and to limit the inter-cell interference. There are various means of inter-cell interference coordination (ICIC) already defined in the standard. Coordinated scheduling, multi-site MIMO or multi-site beamforming are other examples of inter-cell radio resource management that might be standardized in the future. == Directional antennas == Cell towers frequently use a directional signal to improve reception in higher-traffic areas. In the United States, the Federal Communications Commission (FCC) limits omnidirectional cell tower signals to 100 watts of power. If the tower has directional antennas, the FCC allows the cell operator to emit up to 500 watts of effective radiated power (ERP).
Although the original cell towers were located at the centers of the cells and created an even, omnidirectional signal, a cellular map can be redrawn with the cellular telephone towers located at the corners of the hexagons where three cells converge. Each tower has three sets of directional antennas aimed in three different directions, 120 degrees apart (totaling 360 degrees), each receiving and transmitting into a different cell at different frequencies. This provides a minimum of three channels, and three towers for each cell, and greatly increases the chances of receiving a usable signal from at least one direction. The numbers in the illustration are channel numbers, which repeat every 3 cells. Large cells can be subdivided into smaller cells for high volume areas. Cell phone companies also use this directional signal to improve reception along highways and inside buildings like stadiums and arenas. == Broadcast messages and paging == Practically every cellular system has some kind of broadcast mechanism. This can be used directly for distributing information to multiple mobiles. Commonly, for example in mobile telephony systems, the most important use of broadcast information is to set up channels for one-to-one communication between the mobile transceiver and the base station. This is called paging. The three paging procedures generally adopted are sequential, parallel and selective paging. The details of the process of paging vary somewhat from network to network, but normally we know a limited number of cells where the phone is located (this group of cells is called a Location Area in the GSM or UMTS system, or a Routing Area if a data packet session is involved; in LTE, cells are grouped into Tracking Areas). Paging takes place by sending the broadcast message to all of those cells. Paging messages can be used for information transfer.
This happens in pagers, in CDMA systems for sending SMS messages, and in the UMTS system, where it allows for low downlink latency in packet-based connections. In LTE/4G, the paging procedure is initiated by the MME when data packets need to be delivered to the UE. Paging types supported by the MME are:
Basic
SGs_CS and SGs_PS
QCI_1 through QCI_9
== Movement from cell to cell and handing over == In a primitive taxi system, when the taxi moved away from a first tower and closer to a second tower, the taxi driver manually switched from one frequency to another as needed. If communication was interrupted due to a loss of a signal, the taxi driver asked the base station operator to repeat the message on a different frequency. In a cellular system, as the distributed mobile transceivers move from cell to cell during an ongoing continuous communication, switching from one cell frequency to a different cell frequency is done electronically without interruption and without a base station operator or manual switching. This is called handover or handoff. Typically, a new channel is automatically selected for the mobile unit on the new base station which will serve it. The mobile unit then automatically switches from the current channel to the new channel and communication continues. The exact details of the mobile system's move from one base station to the other vary considerably from system to system (see the example below for how a mobile phone network manages handover). == Mobile phone network == The most common example of a cellular network is a mobile phone (cell phone) network. A mobile phone is a portable telephone which receives or makes calls through a cell site (base station) or transmitting tower. Radio waves are used to transfer signals to and from the cell phone. Modern mobile phone networks use cells because radio frequencies are a limited, shared resource.
Cell-sites and handsets change frequency under computer control and use low power transmitters so that the usually limited number of radio frequencies can be simultaneously used by many callers with less interference. A cellular network is used by the mobile phone operator to achieve both coverage and capacity for their subscribers. Large geographic areas are split into smaller cells to avoid line-of-sight signal loss and to support a large number of active phones in that area. All of the cell sites are connected to telephone exchanges (or switches), which in turn connect to the public telephone network. In cities, each cell site may have a range of up to approximately 1⁄2 mile (0.80 km), while in rural areas, the range could be as much as 5 miles (8.0 km). It is possible that in clear open areas, a user may receive signals from a cell site 25 miles (40 km) away. In rural areas with low-band coverage and tall towers, basic voice and messaging service may reach 50 miles (80 km), with limitations on bandwidth and number of simultaneous calls. Since almost all mobile phones use cellular technology, including GSM, CDMA, and AMPS (analog), the term "cell phone" is in some regions, notably the US, used interchangeably with "mobile phone". However, satellite phones are mobile phones that do not communicate directly with a ground-based cellular tower but may do so indirectly by way of a satellite. There are a number of different digital cellular technologies, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN). The transition from existing analog to the digital standard followed a very different path in Europe and the US. 
As a consequence, multiple digital standards surfaced in the US, while Europe and many countries converged towards the GSM standard. === Structure of the mobile phone cellular network === A simple view of the cellular mobile-radio network consists of the following:
A network of radio base stations forming the base station subsystem
The core circuit-switched network for handling voice calls and text
A packet-switched network for handling mobile data
The public switched telephone network to connect subscribers to the wider telephony network
This network is the foundation of the GSM system network. There are many functions that are performed by this network in order to make sure customers get the desired service, including mobility management, registration, call set-up, and handover. Any phone connects to the network via an RBS (Radio Base Station) at a corner of the corresponding cell, which in turn connects to the Mobile Switching Center (MSC). The MSC provides a connection to the public switched telephone network (PSTN). The link from a phone to the RBS is called an uplink, while the other way is termed the downlink. Radio channels effectively use the transmission medium through the use of the following multiplexing and access schemes: frequency-division multiple access (FDMA), time-division multiple access (TDMA), code-division multiple access (CDMA), and space-division multiple access (SDMA). === Small cells === Small cells, which have a smaller coverage area than base stations, are categorised as follows:
Microcell: less than 2 kilometres
Picocell: less than 200 metres
Femtocell: around 10 metres
Attocell: 1–4 metres
=== Cellular handover in mobile phone networks === As the phone user moves from one cell area to another cell while a call is in progress, the mobile station will search for a new channel to attach to in order not to drop the call.
Once a new channel is found, the network will command the mobile unit to switch to the new channel and at the same time switch the call onto the new channel. With CDMA, multiple CDMA handsets share a specific radio channel. The signals are separated by using a pseudonoise code (PN code) that is specific to each phone. As the user moves from one cell to another, the handset sets up radio links with multiple cell sites (or sectors of the same site) simultaneously. This is known as "soft handoff" because, unlike with traditional cellular technology, there is no one defined point where the phone switches to the new cell. In IS-95 inter-frequency handovers and in older analog systems such as NMT, it is typically impossible to test the target channel directly while communicating. In this case, other techniques have to be used, such as pilot beacons in IS-95. This means that there is almost always a brief break in the communication while searching for the new channel, followed by the risk of an unexpected return to the old channel. If there is no ongoing communication, or the communication can be interrupted, it is possible for the mobile unit to spontaneously move from one cell to another and then notify the base station with the strongest signal. === Cellular frequency choice in mobile phone networks === The effect of frequency on cell coverage means that different frequencies serve better for different uses. Low frequencies, such as 450 MHz NMT, serve very well for countryside coverage. GSM 900 (900 MHz) is suitable for light urban coverage. GSM 1800 (1.8 GHz) starts to be limited by structural walls. UMTS, at 2.1 GHz, is quite similar in coverage to GSM 1800. Higher frequencies are a disadvantage when it comes to coverage, but a decided advantage when it comes to capacity. Picocells, covering e.g. one floor of a building, become possible, and the same frequency can be used for cells which are practically neighbors. 
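The coverage gap between the bands discussed above can be illustrated with the standard free-space path-loss formula. This is a textbook model, not taken from the text; real deployments add terrain and building losses on top of it:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for distance in km and frequency in MHz
    (standard form of the Friis free-space equation)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# At the same 5 km distance, moving from NMT 450 to UMTS 2100 costs
# roughly 13 dB of extra path loss, which is why low bands suit
# countryside coverage and high bands suit dense, small cells.
extra_loss = fspl_db(5, 2100) - fspl_db(5, 450)
```

In practice the gap is wider still, because building penetration loss also grows with frequency, which is the "structural walls" effect noted for GSM 1800.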
Cell service area may also vary due to interference from transmitting systems, both within and around that cell. This is especially true in CDMA-based systems. The receiver requires a certain signal-to-noise ratio, and the transmitter should not send with too high a transmission power, so as not to cause interference with other transmitters. As the receiver moves away from the transmitter, the power received decreases, so the power control algorithm of the transmitter increases the power it transmits to restore the level of received power. If the interference (noise) rises above the power received from the transmitter, and the transmitter's power cannot be increased any further, the signal becomes corrupted and eventually unusable. In CDMA-based systems, the effect of interference from other mobile transmitters in the same cell on coverage area is very marked and has a special name, cell breathing. One can see examples of cell coverage by studying some of the coverage maps provided by real operators on their web sites or by looking at independently crowdsourced maps such as Opensignal or CellMapper. In certain cases they may mark the site of the transmitter; in others, it can be calculated by working out the point of strongest coverage. A cellular repeater is used to extend cell coverage into larger areas. They range from wideband repeaters for consumer use in homes and offices to smart or digital repeaters for industrial needs. 
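The power-control behaviour described above (step the transmit power up as the received level falls, until the transmitter hits its cap) can be sketched as a toy closed loop. The step size, path-loss exponent, and power cap below are illustrative assumptions, not values from any particular standard:

```python
import math

PATH_LOSS_EXP = 3.5   # assumed urban propagation exponent
STEP_DB = 1.0         # assumed fixed up/down control step

def received_dbm(tx_dbm: float, distance_m: float) -> float:
    """Log-distance path-loss model, referenced to 0 dB loss at 1 m."""
    return tx_dbm - 10 * PATH_LOSS_EXP * math.log10(distance_m)

def power_control_step(tx_dbm: float, distance_m: float,
                       target_dbm: float, max_dbm: float = 24.0) -> float:
    """One closed-loop update: step up if the received level is below
    target, otherwise step down; never exceed the transmitter's cap."""
    if received_dbm(tx_dbm, distance_m) < target_dbm:
        return min(tx_dbm + STEP_DB, max_dbm)
    return max(tx_dbm - STEP_DB, 0.0)

# At 1 km the modelled path loss is 105 dB, so a -80 dBm target would
# need 25 dBm of transmit power; the loop saturates at the 24 dBm cap
# and the link degrades, as described above.
tx = 0.0
for _ in range(50):
    tx = power_control_step(tx, 1000.0, -80.0)
```

Once the loop saturates at the cap, any further rise in interference or distance pushes the received signal below the target with no remedy, which is the "corrupted and eventually unusable" regime the text describes.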
=== Cell size === The following table shows the dependency of the coverage area of one cell on the frequency of a CDMA2000 network: == See also == Lists and technical information: Mobile technologies 2G networks (the first digital networks, 1G and 0G were analog): GSM Circuit Switched Data (CSD) GPRS EDGE(IMT-SC) Evolved EDGE Digital AMPS Cellular Digital Packet Data (CDPD) cdmaOne (IS-95) Circuit Switched Data (CSD) Personal Handy-phone System (PHS) Personal Digital Cellular 3G networks: UMTS W-CDMA (air interface) TD-CDMA (air interface) TD-SCDMA (air interface) HSPA HSDPA HSPA+ CDMA2000 OFDMA (air interface) EVDO SVDO 4G networks: IMT Advanced LTE (TD-LTE) LTE Advanced LTE Advanced Pro WiMAX WiMAX-Advanced (WirelessMAN-Advanced) Ultra Mobile Broadband (never commercialized) MBWA (IEEE 802.20, Mobile Broadband Wireless Access, HC-SDMA, iBurst, has been shut down) 5G networks: 5G NR 5G-Advanced Starting with EVDO the following techniques can also be used to improve performance: MIMO, SDMA and Beamforming Cellular frequencies CDMA frequency bands GSM frequency bands UMTS frequency bands LTE frequency bands 5G NR frequency bands Deployed networks by technology List of UMTS networks List of CDMA2000 networks List of LTE networks List of deployed WiMAX networks List of 5G NR networks Deployed networks by country (including technology and frequencies) List of mobile network operators of Europe List of mobile network operators of the Americas List of mobile network operators of the Asia Pacific region List of mobile network operators of the Middle East and Africa List of mobile network operators (summary) Mobile country code - code, frequency, and technology for each operator in each country Comparison of mobile phone standards List of mobile phone brands by country (manufacturers) Equipment: Cellular repeater Cellular router Professional mobile radio (PMR) OpenBTS Remote radio head Baseband unit Radio access network Mobile cell sites Other: Antenna diversity Cellular 
traffic MIMO (multiple-input and multiple-output) Mobile edge computing Mobile phone radiation and health Network simulation Personal Communications Service Radio resource management (RRM) Routing in cellular networks Signal strength Title 47 of the Code of Federal Regulations == References == == Further reading == P. Key, D. Smith. Teletraffic Engineering in a competitive world. Elsevier Science B.V., Amsterdam Netherlands, 1999. ISBN 978-0444502681. Chapter 1 (Plenary) and 3 (mobile). William C. Y. Lee, Mobile Cellular Telecommunications Systems (1989), McGraw-Hill. ISBN 978-0-071-00790-0. == External links == Raciti, Robert C. (July 1995). "CELLULAR TECHNOLOGY". Nova Southeastern University. Archived from the original on 15 July 2013. Retrieved 2 April 2012. A History of Cellular Networks What are cellular networks? 1G to 6G Features & Evolution Technical Details with Call Flow about LTE Paging Procedure.
Wikipedia/Cellular_networks
Power control, broadly speaking, is the intelligent selection of transmitter power output in a communication system to achieve good performance within the system. The notion of "good performance" can depend on context and may include optimizing metrics such as link data rate, network capacity, outage probability, geographic coverage and range, and life of the network and network devices. Power control algorithms are used in many contexts, including cellular networks, sensor networks, wireless LANs, and DSL modems. == Transmit power control == Transmit power control is a technical mechanism used within some networking devices in order to prevent too much unwanted interference between different wireless networks (e.g. the owner's network and the neighbour's network). It is also an essential component of cognitive radio networks deployed in a distributed fashion, also known as distributed power control. The network devices supporting this feature include IEEE 802.11h Wireless LAN devices in the 5 GHz band compliant with IEEE 802.11a. The idea of the mechanism is to automatically reduce the used transmission output power when other networks are within range. Reduced power means reduced interference problems and increased battery life. The power level of a single device can be reduced by 6 dB, which should result in an accumulated power level reduction (the sum of the radiated power of all devices currently transmitting) of at least 3 dB (half of the power). == UMTS == Because of the interference in the WCDMA system, power control plays a very important role in quality control for the different services in the UMTS system. Power control is executed 1,500 times per second, whereas in the GSM system it is executed roughly twice per second. == See also == Cellular network Cellular traffic IEEE 802.11h Radio resource management Spectral efficiency Wireless LAN == References ==
Wikipedia/Power_control
Traffic management is a key branch within logistics. It concerns the planning, control and purchasing of transport services needed to physically move vehicles (for example aircraft, road vehicles, rolling stock and watercraft) and freight. Traffic management is implemented by people working with different job titles in different branches: Within freight and cargo logistics: traffic manager, assessment of hazardous and awkward materials, carrier choice and fees, demurrage, documentation, expediting, freight consolidation, insurance, reconsignment and tracking Within air traffic management: air traffic controller Within rail traffic management: rail traffic controller, train dispatcher or signalman Within road traffic management: traffic controller == See also == Air traffic control, a service provided by ground-based controllers who direct aircraft Road traffic control, directing vehicular and pedestrian traffic around a construction zone, accident or other road disruption Traffic control in shipping lanes Urban (peak-hour) traffic management == References == == External links == Media related to Traffic management at Wikimedia Commons
Wikipedia/Traffic_control
Salesforce, Inc. is an American cloud-based software company headquartered in San Francisco, California. It provides applications focused on sales, customer service, marketing automation, e-commerce, analytics, artificial intelligence, and application development. Founded by former Oracle executive Marc Benioff in March 1999, Salesforce grew quickly, making its initial public offering in 2004. As of September 2022, Salesforce is the 61st largest company in the world by market cap with a value of nearly US$153 billion. It became the world's largest enterprise applications firm in 2022. Salesforce ranked 491st on the 2023 edition of the Fortune 500, making $31.352 billion in revenue. Since 2020, Salesforce has also been a component of the Dow Jones Industrial Average. == History == Salesforce was founded on March 8, 1999 by former Oracle executive Marc Benioff, together with Parker Harris, Dave Moellenhoff, and Frank Dominguez as a software-as-a-service (SaaS) company. The first prototype of Salesforce was launched in November 1999. Two of Salesforce's earliest investors were Larry Ellison, the co-founder and first CEO of Oracle, and Halsey Minor, the founder of CNET. Salesforce was severely affected by the dot-com bubble bursting at the beginning of the new millennium, resulting in the company laying off 20% of its workforce. Despite its losses, Salesforce continued strong during the early 2000s. Salesforce also gained notability during this period for its "the end of software" tagline and marketing campaign, and even hired actors to hold up signs with its slogan outside a Siebel Systems conference. Salesforce's revenue continued to increase from 2000 to 2003, with 2003's revenue skyrocketing from $5.4 million in the fiscal year 2001 to over $100 million by December 2003. In 2003, Salesforce held its first annual Dreamforce conference in San Francisco. 
In June 2004, the company had its initial public offering on the New York Stock Exchange under the stock symbol CRM and raised US$110 million. In 2006, Salesforce launched Idea Exchange, a platform that allows customers to connect with company product managers. In 2009, Salesforce passed $1 billion in annual revenue. Also, in 2009, the company launched Service Cloud, an application that helps companies manage service conversations about their products and services. In 2014, the company released Trailhead, a free online learning platform. In October 2014, Salesforce announced the development of its Customer Success Platform. In September 2016, Salesforce announced the launch of Einstein, an artificial intelligence platform that supports several of Salesforce's cloud services. It reportedly acquired a 20-year license to be the exclusive business-oriented software company allowed to use Albert Einstein's likeness for $20 million. Salesforce launched the Sustainability Cloud (Net Zero Cloud as of 2022), which is used by companies to track progress towards achieving their net zero emissions goals. In 2020, Salesforce joined the Dow Jones Industrial Average, replacing energy giant and Standard Oil-descendant ExxonMobil. Salesforce's ascension to the Dow Jones was concurrent with that of Amgen and Honeywell. Because the Dow Jones factors its components by market price, Salesforce was the largest technology component of the index at its accession. Across 2020 and 2021, Salesforce saw some notable leadership changes; in February 2020, co-chief executive officer Keith Block stepped down from his position in the company. Marc Benioff remained as chairman and chief executive officer. In February 2021, Amy Weaver, previously the chief legal officer, became CFO. Former CFO Mark Hawkins announced that he would be retiring in October. In November 2021, Bret Taylor was named vice chair and co-CEO of the company. 
In December 2020, it was announced that Salesforce would acquire Slack for $27.7 billion, its largest acquisition to date. The acquisition closed in July 2021. Journalists covering the acquisition emphasized the price Salesforce paid for Slack, which was a 54% premium compared to Slack's market value. In April 2022, "Salesforce.com, Inc." changed its legal name to "Salesforce, Inc." Acceleration Economy reported that Salesforce had surpassed SAP to become the world's largest enterprise software vendor in August 2022. The next month, Salesforce announced a partnership with Meta Platforms. The deal called for Meta's consumer application WhatsApp to integrate Salesforce's Customer 360 platform to allow consumers to communicate with companies directly. In November 2022, Salesforce announced it would terminate some employees from its sales team. That same month, Salesforce announced its co-CEO and vice chair, Bret Taylor, would be stepping down from his roles at the end of January 2023, with Benioff continuing to run the company and serve as board chair. Within the week, former Tableau CEO Mark Nelson and former Slack CEO Stewart Butterfield also announced their departures. When asked about the departures, Benioff stated, "people come and people go"; Salesforce's stock dropped to a 52-week low after Nelson's resignation. In January 2023, the company announced a layoff of about 10%, or approximately 8,000 positions. According to Benioff, the company hired too aggressively during the COVID-19 pandemic and the increase in working from home led to the layoff. The company also reduced office space as part of the restructuring plan. The same month brought an announcement from activist investor Elliott Management that it would acquire a "big stake" in the company. In January 2024, Salesforce announced it was laying off 700 employees (about 1%) of its global staff. 
== Services == Salesforce offers several customer relationship management (CRM) services, including: Sales Cloud, Service Cloud, Marketing Cloud, and Commerce Cloud and Platform. Additional technologies include Slack. Other services include app creation, data integration and visualization, and training. Salesforce launched a suite of features called Salesforce Foundations in September 2024, bundling connected functionality across department-specific Sales Cloud and Service Cloud products. === Artificial intelligence === Launched at Dreamforce in 2016, Salesforce Einstein was the company’s first artificial intelligence product, developed from a set of technologies underlying the Salesforce platform. In March 2023, Salesforce announced ChatGPT integration in Slack was available to any organization, and the launch of Einstein GPT, a generative AI service. In March 2024, Salesforce launched Einstein Copilot: Health Actions, a conversation assistant based on its earlier artificial intelligence platform Einstein. It helps with making appointments, referrals, and gathering patient information. In July, Salesforce released an AI agent, the Einstein Service Agent, with the ability to perform customer service actions, like enabling product returns or refunds. In September 2024, the company deployed Agentforce (succeeding Salesforce Einstein), an agentic AI platform where users can create autonomous agents for customer service assistance, developing marketing campaigns, and coaching salespersons. === Salesforce Platform === Salesforce Platform (formerly known as Force.com) is a platform as a service (PaaS) that allows developers to add applications to the main Salesforce.com application. These applications are hosted on Salesforce.com infrastructure. Force.com applications are built using Apex, a proprietary Java-like programming language, originally used to generate HTML via the "Visualforce" framework. Beginning in 2015, the "Lightning Components" framework has been supported. 
The Apex compiler was designed by James Spagnola. As of 2014, the Force.com platform had 1.5 million registered developers, according to Salesforce. === AppExchange === Launched in 2005, the Salesforce AppExchange is an online app store that allows users to sell third-party applications and consulting services. As of 2021, the exchange has over 5,000 apps listed. === Trailhead === Launched in 2014, Trailhead is a free online learning platform with courses focused on Salesforce technologies. === Discontinued === Desk.com was a SaaS help desk and customer support product acquired by Salesforce for $50 million in 2011, and consolidated with other services into Service Cloud Essentials in March 2018. Do.com was a cloud-based task management system for small groups and businesses, introduced in 2011, and discontinued in 2014. == Operations == Salesforce is headquartered in San Francisco in the Salesforce Tower. Salesforce has 110 offices, including ones in Hong Kong, Israel, London, Paris, Sydney and Tokyo. Standard & Poor's added Salesforce to the S&P 500 Index in September 2008. In August 2020, S&P Dow Jones Indices announced that Salesforce would replace ExxonMobil in the Dow Jones Industrial Average. === Culture === According to Marc Benioff, Salesforce corporate culture is based on the concept of Ohana. In 2021, Cynthia Perry, a design research senior manager, resigned, alleging discrimination in the workplace and posting her resignation letter on LinkedIn. On September 10, 2021, Benioff tweeted that the company is prepared to help any employee who wishes to move out of the state of Texas, following abortion legislation in Texas, announced on September 1, 2021. === Finances === For the fiscal year 2022, Salesforce reported revenue of US$26.49 billion, an increase of 25% year-over-year and 24% in constant currency. Salesforce ranked 126th on the 2022 Fortune 500 list of the largest United States companies by revenue. 
=== IT infrastructure === In 2008, Salesforce migrated from Sun Fire E25K servers with SPARC processors running Solaris, to Dell servers with AMD processors, running Linux. In 2012, Salesforce announced plans to build a data center in the UK to handle European citizens' personal data. The center opened in 2014. In 2013, Salesforce and Oracle announced a nine-year partnership focusing on applications, platforms, and infrastructure. In 2016, Salesforce announced that it will use Amazon Web Services hosting for countries with restrictive data residency requirements and where no Salesforce data centers are operating. == Acquisitions == === 2006–2015 === In 2006, Salesforce acquired Sendia, a mobile web service firm, for $15 million and Kieden, an online advertising company. In 2007, Koral, a content management service, was acquired. In 2008, Salesforce acquired Instranet for $31.5 million. In 2010, Salesforce acquired multiple companies, including Jigsaw, a cloud-based data service provider, for $142 million, Heroku, a Ruby application platform-as-a-service, for $212 million, and Activa Live Chat, a live chat software provider. In 2011, Salesforce acquired Dimdim, a web conferencing platform, for $31 million, Radian6, a social media tracking company, for $340 million, and Rypple, a performance management software company. Rypple became known as Work.com in 2012. In 2012, Salesforce acquired Buddy Media, a social media marketer, for $689 million, and GoInstant, a browser collaboration startup, for $70 million. In 2013, Salesforce acquired ExactTarget, an email marketer, for $2.5 billion. In 2014, Salesforce acquired RelateIQ, a data company, for $390 million. In 2015, Salesforce acquired multiple companies for undisclosed sums, including Toopher, a mobile authentication company, Tempo, an AI calendar app, and MinHash, an AI platform. The company also acquired SteelBrick, a software company, for $360 million. 
=== 2016–present === In 2016, Salesforce acquired Demandware, a cloud-based provider of e-commerce services, for $2.8 billion and Quip, a word processing app, for $750 million. In 2017, the company acquired Sequence, a user experience design agency. In 2018, Salesforce acquired several companies, including MuleSoft, a cloud service company, for $6.5 billion, as well as Rebel, an email services provider, and Datorama, an AI marketing platform, for undisclosed amounts. In 2019, Salesforce completed its acquisition of analytics software company Tableau for $15.7 billion, and in 2021 it acquired Slack Technologies for $27.7 billion. Salesforce also made smaller acquisitions throughout 2019, 2020, and 2021, which included ClickSoftware for $1.35 billion, consulting firm Acumen Solutions for $570 million, CRM firm Vlocity for $1.33 billion, privacy compliance startup Phennecs for $16.5 million, and robotic process automation firm Servicetrace for an undisclosed amount. Salesforce's most recent acquisition was Slack-bot maker Troops.ai, announced in May 2022, and expected to close in 2023. In September 2023, Salesforce acquired Airkit.ai, a creator of AI-powered customer service applications and experiences. In December 2023, Salesforce announced it would acquire Spiff, an automated commission management platform, for an undisclosed amount. In September 2024, Salesforce acquired data management firm Own for $1.9 billion, guaranteeing Own's roughly 1,000 employees their positions until January 31, 2025. Salesforce has also acquired PredictSpring and Tenyx in 2024. In May 2025, Salesforce announced plans to acquire data management platform Informatica for about $8 billion. It had initially been in talks to purchase the company the prior year, but the two parties were unable to agree on terms. == Controversies == === Phishing attack === In November 2007, a phishing attack compromised contact information on a number of Salesforce customers. 
Some customers then received phishing emails that appeared to be invoices from Salesforce. Salesforce stated that "a phisher tricked someone into disclosing a password, but this intrusion did not stem from a security flaw in [the salesforce.com] application or database." === ‘Meatpistol’ presenters fired at Def Con === In 2017, at DEF CON, two security engineers were fired after giving a presentation on an internal project called MEATPISTOL. The presenters were sent a message 30 minutes prior to the presentation telling them not to go on stage, but the message wasn't seen until after they finished. The MEATPISTOL tool was anticipated to be released as open-source at the time of the presentation, but Salesforce did not release the code to developers or the public during the conference. The terminated employees called on the company to open-source the software after being dismissed. === RAICES donation refusal === The not-for-profit organization Refugee and Immigrant Center for Education and Legal Services (RAICES) rejected a US$250,000 donation from Salesforce because the company has contracts with U.S. Customs and Border Protection. === 2018 taxes === In December 2019, the Institute on Taxation and Economic Policy found that Salesforce was one of 91 companies who "paid an effective federal tax rate of 0% or less" in 2018, as a result of the Tax Cuts and Jobs Act of 2017. Their findings were published in a report based on the 379 Fortune 500 companies that declared a profit in 2018. === Sex-trafficking lawsuit === In March 2019, Salesforce faced a lawsuit by 50 anonymous women claiming to be victims and survivors of sex trafficking, abuse, and rape, alleging the company profited from and helped build technology that facilitated sex trafficking on the now-defunct Backpage.com. In March 2021, a judge granted partial dismissal of the case, dismissing charges of negligence and conspiracy, but allowed the case to proceed regarding charges of sex trafficking. 
In March 2024, the case was dismissed without prejudice. In September 2024, the US Court of Appeals for the Ninth Circuit denied a request to reverse the dismissal. === Disability discrimination lawsuit in Japan === In July 2021, Salesforce Japan faced a discrimination lawsuit from a former employee, according to Japanese legal media. The firm declined to comment on the suit to the media. The ex-employee, who has Autism Spectrum Disorder and ADHD, claimed she was discriminated against because of her disability and terminated from the firm's Japan web marketing team. The suit alleged that the anonymous woman, as an employee at Salesforce Japan from 2018 to 2020, faced hate speech, microaggressions, and rejection of reasonable accommodation from her manager. She alleged that her attempts to resolve the problem were met with pressure from HR and her job coach. The lawsuit is ongoing in the Tokyo District Court. In Japan, the legal disability quota for private companies is 2.3%. Salesforce Japan did not meet the quota, instead paying the statutory levy, every year from 2009 to 2021 except 2017. In 2020, the firm did not report its number of disabled employees to Japanese labor officials. Depending on the outcome of the lawsuit, the firm may face negative consequences for its disability hiring, such as an improvement plan under the disability employment act or public disclosure by labor officials. === Employee layoffs/Matthew McConaughey's salary === In January 2023, Salesforce reported that 8,000 employees had been laid off as a result of over-hiring during the Covid lockdown and a global economic downturn. In March 2023, the Wall Street Journal reported that actor Matthew McConaughey was paid 10 million dollars yearly for his role as a "creative advisor and TV pitchman". American musician will.i.am was also reported to be on the company's payroll due to his "strong understanding of technology". 
=== Wrongful termination lawsuit === In September 2024, former Salesforce Senior Director Dina Zelikson filed a lawsuit against the company in San Francisco Superior Court, alleging wrongful termination, discrimination, and retaliation during a medical leave. == Salesforce Ventures == In 2009, Salesforce began investing in startups. These investments became Salesforce Ventures, headed by John Somorjai. In September 2014, SFV set up the Salesforce1 Fund, aimed at start-ups creating applications primarily for mobile phones. In December 2018, Salesforce Ventures announced the launch of the Japan Trailblazer Fund, focused on Japanese startups. In August 2018, Salesforce Ventures reported investments totaling over $1 billion in 275 companies, including CloudCraze (e-commerce), Figure Eight (artificial intelligence), Forter (online fraud prevention), and FinancialForce (automation software). In 2019, SFV's five largest investments—Domo (data-visualization software), SurveyMonkey (online survey software), Twilio (cloud-communication), Dropbox (cloud storage), and DocuSign (secure e-signature company)—accounted for nearly half of its portfolio. In 2021, Salesforce announced that its investments had resulted in a $2.17 billion annual gain. In June 2023, Salesforce increased the size of its Generative AI Fund for startups from $250 million to $500 million, and in September 2024 to $1 billion. == Office locations == Salesforce Tower (San Francisco, US) Salesforce Tower (Indianapolis, US) Salesforce Tower (London, UK) Salesforce Tower (Sydney, AU) Salesforce Singapore == Notes == == References == == External links == Business data for Salesforce, Inc.:
Wikipedia/Salesforce
In mathematics, the inverse trigonometric functions (occasionally also called antitrigonometric, cyclometric, or arcus functions) are the inverse functions of the trigonometric functions, under suitably restricted domains. Specifically, they are the inverses of the sine, cosine, tangent, cotangent, secant, and cosecant functions, and are used to obtain an angle from any of the angle's trigonometric ratios. Inverse trigonometric functions are widely used in engineering, navigation, physics, and geometry. == Notation == Several notations for the inverse trigonometric functions exist. The most common convention is to name inverse trigonometric functions using an arc- prefix: arcsin(x), arccos(x), arctan(x), etc. (This convention is used throughout this article.) This notation arises from the following geometric relationships: when measuring in radians, an angle of θ radians will correspond to an arc whose length is rθ, where r is the radius of the circle. Thus in the unit circle, the cosine of x function is both the arc and the angle, because the arc of a circle of radius 1 is the same as the angle. Or, "the arc whose cosine is x" is the same as "the angle whose cosine is x", because the length of the arc of the circle in radii is the same as the measurement of the angle in radians. In computer programming languages, the inverse trigonometric functions are often called by the abbreviated forms asin, acos, atan. The notations sin−1(x), cos−1(x), tan−1(x), etc., as introduced by John Herschel in 1813, are often used as well in English-language sources, much more than the also established sin[−1](x), cos[−1](x), tan[−1](x) – conventions consistent with the notation of an inverse function, that is useful (for example) to define the multivalued version of each inverse trigonometric function: tan − 1 ⁡ ( x ) = { arctan ⁡ ( x ) + π k ∣ k ∈ Z } . 
{\displaystyle \tan ^{-1}(x)=\{\arctan(x)+\pi k\mid k\in \mathbb {Z} \}~.} However, this might appear to conflict logically with the common semantics for expressions such as sin2(x) (although only sin2 x, without parentheses, is the really common use), which refer to numeric power rather than function composition, and therefore may result in confusion between notation for the reciprocal (multiplicative inverse) and inverse function. The confusion is somewhat mitigated by the fact that each of the reciprocal trigonometric functions has its own name — for example, (cos(x))−1 = sec(x). Nevertheless, certain authors advise against using it, since it is ambiguous. Another precarious convention used by a small number of authors is to use an uppercase first letter, along with a “−1” superscript: Sin−1(x), Cos−1(x), Tan−1(x), etc. Although it is intended to avoid confusion with the reciprocal, which should be represented by sin−1(x), cos−1(x), etc., or, better, by sin−1 x, cos−1 x, etc., it in turn creates yet another major source of ambiguity, especially since many popular high-level programming languages (e.g. Mathematica and MAGMA) use those very same capitalised representations for the standard trig functions, whereas others (Python, SymPy, NumPy, Matlab, MAPLE, etc.) use lower-case. Hence, since 2009, the ISO 80000-2 standard has specified solely the "arc" prefix for the inverse functions. == Basic concepts == === Principal values === Since none of the six trigonometric functions are one-to-one, they must be restricted in order to have inverse functions. Therefore, the result ranges of the inverse functions are proper (i.e. strict) subsets of the domains of the original functions. For example, using function in the sense of multivalued functions, just as the square root function y = x {\displaystyle y={\sqrt {x}}} could be defined from y 2 = x , {\displaystyle y^{2}=x,} the function y = arcsin ⁡ ( x ) {\displaystyle y=\arcsin(x)} is defined so that sin ⁡ ( y ) = x . 
{\displaystyle \sin(y)=x.} For a given real number x , {\displaystyle x,} with − 1 ≤ x ≤ 1 , {\displaystyle -1\leq x\leq 1,} there are multiple (in fact, countably infinitely many) numbers y {\displaystyle y} such that sin ⁡ ( y ) = x {\displaystyle \sin(y)=x} ; for example, sin ⁡ ( 0 ) = 0 , {\displaystyle \sin(0)=0,} but also sin ⁡ ( π ) = 0 , {\displaystyle \sin(\pi )=0,} sin ⁡ ( 2 π ) = 0 , {\displaystyle \sin(2\pi )=0,} etc. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x {\displaystyle x} in the domain, the expression arcsin ⁡ ( x ) {\displaystyle \arcsin(x)} will evaluate only to a single value, called its principal value. These properties apply to all the inverse trigonometric functions. The principal inverses are listed in the following table. Note: Some authors define the range of arcsecant to be ( 0 ≤ y < π 2 {\textstyle 0\leq y<{\frac {\pi }{2}}} or π ≤ y < 3 π 2 {\textstyle \pi \leq y<{\frac {3\pi }{2}}} ), because the tangent function is nonnegative on this domain. This makes some computations more consistent. For example, using this range, tan ⁡ ( arcsec ⁡ ( x ) ) = x 2 − 1 , {\displaystyle \tan(\operatorname {arcsec}(x))={\sqrt {x^{2}-1}},} whereas with the range ( 0 ≤ y < π 2 {\textstyle 0\leq y<{\frac {\pi }{2}}} or π 2 < y ≤ π {\textstyle {\frac {\pi }{2}}<y\leq \pi } ), we would have to write tan ⁡ ( arcsec ⁡ ( x ) ) = ± x 2 − 1 , {\displaystyle \tan(\operatorname {arcsec}(x))=\pm {\sqrt {x^{2}-1}},} since tangent is nonnegative on 0 ≤ y < π 2 , {\textstyle 0\leq y<{\frac {\pi }{2}},} but nonpositive on π 2 < y ≤ π . {\textstyle {\frac {\pi }{2}}<y\leq \pi .} For a similar reason, the same authors define the range of arccosecant to be ( − π < y ≤ − π 2 {\textstyle (-\pi <y\leq -{\frac {\pi }{2}}} or 0 < y ≤ π 2 ) . {\textstyle 0<y\leq {\frac {\pi }{2}}).} ==== Domains ==== If x is allowed to be a complex number, then the range of y applies only to its real part. 
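The principal-value convention above is easy to check numerically. The following sketch assumes Python's standard `math` module, whose asin implements the principal branch; it also verifies the general solution form (−1)^k arcsin(x) + πk that appears later in the article (the sample value x = 0.5 is arbitrary):

```python
import math

# sin is not one-to-one: many distinct angles share the same sine.
for y in (0.0, math.pi, 2 * math.pi):
    assert abs(math.sin(y)) < 1e-12

# math.asin returns the principal value, i.e. the unique solution
# of sin(y) = x lying in the range [-pi/2, pi/2].
x = 0.5
y = math.asin(x)
assert math.isclose(math.sin(y), x)
assert -math.pi / 2 <= y <= math.pi / 2

# Every solution of sin(y) = x has the form (-1)**k * arcsin(x) + pi*k
# for some integer k.
for k in range(-3, 4):
    assert math.isclose(math.sin((-1) ** k * math.asin(x) + math.pi * k), x)

# Arguments outside the domain [-1, 1] of arcsine raise a domain error.
try:
    math.asin(2.0)
except ValueError:
    pass
```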
The table below displays names and domains of the inverse trigonometric functions along with the range of their usual principal values in radians. The symbol R = ( − ∞ , ∞ ) {\displaystyle \mathbb {R} =(-\infty ,\infty )} denotes the set of all real numbers and Z = { … , − 2 , − 1 , 0 , 1 , 2 , … } {\displaystyle \mathbb {Z} =\{\ldots ,\,-2,\,-1,\,0,\,1,\,2,\,\ldots \}} denotes the set of all integers. The set of all integer multiples of π {\displaystyle \pi } is denoted by π Z := { π n : n ∈ Z } = { … , − 2 π , − π , 0 , π , 2 π , … } . {\displaystyle \pi \mathbb {Z} ~:=~\{\pi n\;:\;n\in \mathbb {Z} \}~=~\{\ldots ,\,-2\pi ,\,-\pi ,\,0,\,\pi ,\,2\pi ,\,\ldots \}.} The symbol ∖ {\displaystyle \,\setminus \,} denotes set subtraction so that, for instance, R ∖ ( − 1 , 1 ) = ( − ∞ , − 1 ] ∪ [ 1 , ∞ ) {\displaystyle \mathbb {R} \setminus (-1,1)=(-\infty ,-1]\cup [1,\infty )} is the set of points in R {\displaystyle \mathbb {R} } (that is, real numbers) that are not in the interval ( − 1 , 1 ) . {\displaystyle (-1,1).} The Minkowski sum notation π Z + ( 0 , π ) {\textstyle \pi \mathbb {Z} +(0,\pi )} and π Z + ( − π 2 , π 2 ) {\displaystyle \pi \mathbb {Z} +{\bigl (}{-{\tfrac {\pi }{2}}},{\tfrac {\pi }{2}}{\bigr )}} that is used above to concisely write the domains of cot , csc , tan , and sec {\displaystyle \cot ,\csc ,\tan ,{\text{ and }}\sec } is now explained. Domain of cotangent cot {\displaystyle \cot } and cosecant csc {\displaystyle \csc } : The domains of cot {\displaystyle \,\cot \,} and csc {\displaystyle \,\csc \,} are the same. They are the set of all angles θ {\displaystyle \theta } at which sin ⁡ θ ≠ 0 , {\displaystyle \sin \theta \neq 0,} i.e. 
all real numbers that are not of the form π n {\displaystyle \pi n} for some integer n , {\displaystyle n,} π Z + ( 0 , π ) = ⋯ ∪ ( − 2 π , − π ) ∪ ( − π , 0 ) ∪ ( 0 , π ) ∪ ( π , 2 π ) ∪ ⋯ = R ∖ π Z {\displaystyle {\begin{aligned}\pi \mathbb {Z} +(0,\pi )&=\cdots \cup (-2\pi ,-\pi )\cup (-\pi ,0)\cup (0,\pi )\cup (\pi ,2\pi )\cup \cdots \\&=\mathbb {R} \setminus \pi \mathbb {Z} \end{aligned}}} Domain of tangent tan {\displaystyle \tan } and secant sec {\displaystyle \sec } : The domains of tan {\displaystyle \,\tan \,} and sec {\displaystyle \,\sec \,} are the same. They are the set of all angles θ {\displaystyle \theta } at which cos ⁡ θ ≠ 0 , {\displaystyle \cos \theta \neq 0,} π Z + ( − π 2 , π 2 ) = ⋯ ∪ ( − 3 π 2 , − π 2 ) ∪ ( − π 2 , π 2 ) ∪ ( π 2 , 3 π 2 ) ∪ ⋯ = R ∖ ( π 2 + π Z ) {\displaystyle {\begin{aligned}\pi \mathbb {Z} +\left(-{\tfrac {\pi }{2}},{\tfrac {\pi }{2}}\right)&=\cdots \cup {\bigl (}{-{\tfrac {3\pi }{2}}},{-{\tfrac {\pi }{2}}}{\bigr )}\cup {\bigl (}{-{\tfrac {\pi }{2}}},{\tfrac {\pi }{2}}{\bigr )}\cup {\bigl (}{\tfrac {\pi }{2}},{\tfrac {3\pi }{2}}{\bigr )}\cup \cdots \\&=\mathbb {R} \setminus \left({\tfrac {\pi }{2}}+\pi \mathbb {Z} \right)\\\end{aligned}}} === Solutions to elementary trigonometric equations === Each of the trigonometric functions is periodic in the real part of its argument, running through all its values twice in each interval of 2 π : {\displaystyle 2\pi :} Sine and cosecant begin their period at 2 π k − π 2 {\textstyle 2\pi k-{\frac {\pi }{2}}} (where k {\displaystyle k} is an integer), finish it at 2 π k + π 2 , {\textstyle 2\pi k+{\frac {\pi }{2}},} and then reverse themselves over 2 π k + π 2 {\textstyle 2\pi k+{\frac {\pi }{2}}} to 2 π k + 3 π 2 . {\textstyle 2\pi k+{\frac {3\pi }{2}}.} Cosine and secant begin their period at 2 π k , {\displaystyle 2\pi k,} finish it at 2 π k + π . {\displaystyle 2\pi k+\pi .} and then reverse themselves over 2 π k + π {\displaystyle 2\pi k+\pi } to 2 π k + 2 π . 
{\displaystyle 2\pi k+2\pi .} Tangent begins its period at 2 π k − π 2 , {\textstyle 2\pi k-{\frac {\pi }{2}},} finishes it at 2 π k + π 2 , {\textstyle 2\pi k+{\frac {\pi }{2}},} and then repeats it (forward) over 2 π k + π 2 {\textstyle 2\pi k+{\frac {\pi }{2}}} to 2 π k + 3 π 2 . {\textstyle 2\pi k+{\frac {3\pi }{2}}.} Cotangent begins its period at 2 π k , {\displaystyle 2\pi k,} finishes it at 2 π k + π , {\displaystyle 2\pi k+\pi ,} and then repeats it (forward) over 2 π k + π {\displaystyle 2\pi k+\pi } to 2 π k + 2 π . {\displaystyle 2\pi k+2\pi .} This periodicity is reflected in the general inverses, where k {\displaystyle k} is some integer. The following table shows how inverse trigonometric functions may be used to solve equalities involving the six standard trigonometric functions. It is assumed that the given values θ , {\displaystyle \theta ,} r , {\displaystyle r,} s , {\displaystyle s,} x , {\displaystyle x,} and y {\displaystyle y} all lie within appropriate ranges so that the relevant expressions below are well-defined. Note that "for some k ∈ Z {\displaystyle k\in \mathbb {Z} } " is just another way of saying "for some integer k . {\displaystyle k.} " The symbol ⟺ {\displaystyle \,\iff \,} denotes logical equivalence and indicates that if the left hand side is true then so is the right hand side and, conversely, if the right hand side is true then so is the left hand side. where the first four solutions can be written in expanded form as: For example, if cos ⁡ θ = − 1 {\displaystyle \cos \theta =-1} then θ = π + 2 π k = − π + 2 π ( 1 + k ) {\displaystyle \theta =\pi +2\pi k=-\pi +2\pi (1+k)} for some k ∈ Z . 
{\displaystyle k\in \mathbb {Z} .} While if sin ⁡ θ = ± 1 {\displaystyle \sin \theta =\pm 1} then θ = π 2 + π k = − π 2 + π ( k + 1 ) {\textstyle \theta ={\frac {\pi }{2}}+\pi k=-{\frac {\pi }{2}}+\pi (k+1)} for some k ∈ Z , {\displaystyle k\in \mathbb {Z} ,} where k {\displaystyle k} will be even if sin ⁡ θ = 1 {\displaystyle \sin \theta =1} and it will be odd if sin ⁡ θ = − 1. {\displaystyle \sin \theta =-1.} The equations sec ⁡ θ = − 1 {\displaystyle \sec \theta =-1} and csc ⁡ θ = ± 1 {\displaystyle \csc \theta =\pm 1} have the same solutions as cos ⁡ θ = − 1 {\displaystyle \cos \theta =-1} and sin ⁡ θ = ± 1 , {\displaystyle \sin \theta =\pm 1,} respectively. In all equations above except for those just solved (i.e. except for sin {\displaystyle \sin } / csc ⁡ θ = ± 1 {\displaystyle \csc \theta =\pm 1} and cos {\displaystyle \cos } / sec ⁡ θ = − 1 {\displaystyle \sec \theta =-1} ), the integer k {\displaystyle k} in the solution's formula is uniquely determined by θ {\displaystyle \theta } (for fixed r , s , x , {\displaystyle r,s,x,} and y {\displaystyle y} ). With the help of integer parity Parity ⁡ ( h ) = { 0 if h is even 1 if h is odd {\displaystyle \operatorname {Parity} (h)={\begin{cases}0&{\text{if }}h{\text{ is even }}\\1&{\text{if }}h{\text{ is odd }}\\\end{cases}}} it is possible to write a solution to cos ⁡ θ = x {\displaystyle \cos \theta =x} that doesn't involve the "plus or minus" ± {\displaystyle \,\pm \,} symbol: cos ⁡ θ = x {\displaystyle \cos \theta =x\quad } if and only if θ = ( − 1 ) h arccos ⁡ ( x ) + π h + π Parity ⁡ ( h ) {\displaystyle \quad \theta =(-1)^{h}\arccos(x)+\pi h+\pi \operatorname {Parity} (h)\quad } for some h ∈ Z . 
{\displaystyle h\in \mathbb {Z} .} And similarly for the secant function, sec ⁡ θ = r {\displaystyle \sec \theta =r\quad } if and only if θ = ( − 1 ) h arcsec ⁡ ( r ) + π h + π Parity ⁡ ( h ) {\displaystyle \quad \theta =(-1)^{h}\operatorname {arcsec}(r)+\pi h+\pi \operatorname {Parity} (h)\quad } for some h ∈ Z , {\displaystyle h\in \mathbb {Z} ,} where π h + π Parity ⁡ ( h ) {\displaystyle \pi h+\pi \operatorname {Parity} (h)} equals π h {\displaystyle \pi h} when the integer h {\displaystyle h} is even, and equals π h + π {\displaystyle \pi h+\pi } when it's odd. ==== Detailed example and explanation of the "plus or minus" symbol ± ==== The solutions to cos ⁡ θ = x {\displaystyle \cos \theta =x} and sec ⁡ θ = x {\displaystyle \sec \theta =x} involve the "plus or minus" symbol ± , {\displaystyle \,\pm ,\,} whose meaning is now clarified. Only the solution to cos ⁡ θ = x {\displaystyle \cos \theta =x} will be discussed since the discussion for sec ⁡ θ = x {\displaystyle \sec \theta =x} is the same. We are given x {\displaystyle x} with − 1 ≤ x ≤ 1 {\displaystyle -1\leq x\leq 1} and we know that there is an angle θ {\displaystyle \theta } in some interval that satisfies cos ⁡ θ = x . {\displaystyle \cos \theta =x.} We want to find this θ . {\displaystyle \theta .} The table above indicates that the solution is θ = ± arccos ⁡ x + 2 π k for some k ∈ Z {\displaystyle \,\theta =\pm \arccos x+2\pi k\,\quad {\text{ for some }}k\in \mathbb {Z} } which is a shorthand way of saying that (at least) one of the following statements is true: θ = arccos ⁡ x + 2 π k {\displaystyle \,\theta =\arccos x+2\pi k\,} for some integer k , {\displaystyle k,} or θ = − arccos ⁡ x + 2 π k {\displaystyle \,\theta =-\arccos x+2\pi k\,} for some integer k . 
{\displaystyle k.} As mentioned above, if arccos ⁡ x = π {\displaystyle \,\arccos x=\pi \,} (which by definition only happens when x = cos ⁡ π = − 1 {\displaystyle x=\cos \pi =-1} ) then both statements (1) and (2) hold, although with different values for the integer k {\displaystyle k} : if K {\displaystyle K} is the integer from statement (1), meaning that θ = π + 2 π K {\displaystyle \theta =\pi +2\pi K} holds, then the integer k {\displaystyle k} for statement (2) is K + 1 {\displaystyle K+1} (because θ = − π + 2 π ( 1 + K ) {\displaystyle \theta =-\pi +2\pi (1+K)} ). However, if x ≠ − 1 {\displaystyle x\neq -1} then the integer k {\displaystyle k} is unique and completely determined by θ . {\displaystyle \theta .} If arccos ⁡ x = 0 {\displaystyle \,\arccos x=0\,} (which by definition only happens when x = cos ⁡ 0 = 1 {\displaystyle x=\cos 0=1} ) then ± arccos ⁡ x = 0 {\displaystyle \,\pm \arccos x=0\,} (because + arccos ⁡ x = + 0 = 0 {\displaystyle \,+\arccos x=+0=0\,} and − arccos ⁡ x = − 0 = 0 {\displaystyle \,-\arccos x=-0=0\,} so in both cases ± arccos ⁡ x {\displaystyle \,\pm \arccos x\,} is equal to 0 {\displaystyle 0} ) and so the statements (1) and (2) happen to be identical in this particular case (and so both hold). Having considered the cases arccos ⁡ x = 0 {\displaystyle \,\arccos x=0\,} and arccos ⁡ x = π , {\displaystyle \,\arccos x=\pi ,\,} we now focus on the case where arccos ⁡ x ≠ 0 {\displaystyle \,\arccos x\neq 0\,} and arccos ⁡ x ≠ π , {\displaystyle \,\arccos x\neq \pi ,\,} So assume this from now on. The solution to cos ⁡ θ = x {\displaystyle \cos \theta =x} is still θ = ± arccos ⁡ x + 2 π k for some k ∈ Z {\displaystyle \,\theta =\pm \arccos x+2\pi k\,\quad {\text{ for some }}k\in \mathbb {Z} } which as before is shorthand for saying that one of statements (1) and (2) is true. 
However this time, because arccos ⁡ x ≠ 0 {\displaystyle \,\arccos x\neq 0\,} and 0 < arccos ⁡ x < π , {\displaystyle \,0<\arccos x<\pi ,\,} statements (1) and (2) are different and furthermore, exactly one of the two equalities holds (not both). Additional information about θ {\displaystyle \theta } is needed to determine which one holds. For example, suppose that x = 0 {\displaystyle x=0} and that all that is known about θ {\displaystyle \theta } is that − π ≤ θ ≤ π {\displaystyle \,-\pi \leq \theta \leq \pi \,} (and nothing more is known). Then arccos ⁡ x = arccos ⁡ 0 = π 2 {\displaystyle \arccos x=\arccos 0={\frac {\pi }{2}}} and moreover, in this particular case k = 0 {\displaystyle k=0} (for both the + {\displaystyle \,+\,} case and the − {\displaystyle \,-\,} case) and so consequently, θ = ± arccos ⁡ x + 2 π k = ± ( π 2 ) + 2 π ( 0 ) = ± π 2 . {\displaystyle \theta ~=~\pm \arccos x+2\pi k~=~\pm \left({\frac {\pi }{2}}\right)+2\pi (0)~=~\pm {\frac {\pi }{2}}.} This means that θ {\displaystyle \theta } could be either π / 2 {\displaystyle \,\pi /2\,} or − π / 2. {\displaystyle \,-\pi /2.} Without additional information it is not possible to determine which of these values θ {\displaystyle \theta } has. An example of some additional information that could determine the value of θ {\displaystyle \theta } would be knowing that the angle is above the x {\displaystyle x} -axis (in which case θ = π / 2 {\displaystyle \theta =\pi /2} ) or alternatively, knowing that it is below the x {\displaystyle x} -axis (in which case θ = − π / 2 {\displaystyle \theta =-\pi /2} ). ==== Equal identical trigonometric functions ==== The table below shows how two angles θ {\displaystyle \theta } and φ {\displaystyle \varphi } must be related if their values under a given trigonometric function are equal or negatives of each other. 
The vertical double arrow ⇕ {\displaystyle \Updownarrow } in the last row indicates that θ {\displaystyle \theta } and φ {\displaystyle \varphi } satisfy | sin ⁡ θ | = | sin ⁡ φ | {\displaystyle \left|\sin \theta \right|=\left|\sin \varphi \right|} if and only if they satisfy | cos ⁡ θ | = | cos ⁡ φ | . {\displaystyle \left|\cos \theta \right|=\left|\cos \varphi \right|.} Set of all solutions to elementary trigonometric equations Thus given a single solution θ {\displaystyle \theta } to an elementary trigonometric equation ( sin ⁡ θ = y {\displaystyle \sin \theta =y} is such an equation, for instance, and because sin ⁡ ( arcsin ⁡ y ) = y {\displaystyle \sin(\arcsin y)=y} always holds, θ := arcsin ⁡ y {\displaystyle \theta :=\arcsin y} is always a solution), the set of all solutions to it are: === Transforming equations === The equations above can be transformed by using the reflection and shift identities: These formulas imply, in particular, that the following hold: sin ⁡ θ = − sin ⁡ ( − θ ) = − sin ⁡ ( π + θ ) = − sin ⁡ ( π − θ ) = − cos ⁡ ( π 2 + θ ) = − cos ⁡ ( π 2 − θ ) = − cos ⁡ ( − π 2 − θ ) = − cos ⁡ ( − π 2 + θ ) = − cos ⁡ ( 3 π 2 − θ ) = − cos ⁡ ( − 3 π 2 + θ ) cos ⁡ θ = − cos ⁡ ( − θ ) = − cos ⁡ ( π + θ ) = − cos ⁡ ( π − θ ) = − sin ⁡ ( π 2 + θ ) = − sin ⁡ ( π 2 − θ ) = − sin ⁡ ( − π 2 − θ ) = − sin ⁡ ( − π 2 + θ ) = − sin ⁡ ( 3 π 2 − θ ) = − sin ⁡ ( − 3 π 2 + θ ) tan ⁡ θ = − tan ⁡ ( − θ ) = − tan ⁡ ( π + θ ) = − tan ⁡ ( π − θ ) = − cot ⁡ ( π 2 + θ ) = − cot ⁡ ( π 2 − θ ) = − cot ⁡ ( − π 2 − θ ) = − cot ⁡ ( − π 2 + θ ) = − cot ⁡ ( 3 π 2 − θ ) = − cot ⁡ ( − 3 π 2 + θ ) {\displaystyle {\begin{aligned}\sin \theta &=-\sin(-\theta )&&=-\sin(\pi +\theta )&&={\phantom {-}}\sin(\pi -\theta )\\&=-\cos \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cos \left({\frac {\pi }{2}}-\theta \right)&&=-\cos \left(-{\frac {\pi }{2}}-\theta \right)\\&={\phantom {-}}\cos \left(-{\frac {\pi }{2}}+\theta \right)&&=-\cos \left({\frac {3\pi }{2}}-\theta \right)&&=-\cos 
\left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\cos \theta &={\phantom {-}}\cos(-\theta )&&=-\cos(\pi +\theta )&&=-\cos(\pi -\theta )\\&={\phantom {-}}\sin \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\sin \left({\frac {\pi }{2}}-\theta \right)&&=-\sin \left(-{\frac {\pi }{2}}-\theta \right)\\&=-\sin \left(-{\frac {\pi }{2}}+\theta \right)&&=-\sin \left({\frac {3\pi }{2}}-\theta \right)&&={\phantom {-}}\sin \left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\tan \theta &=-\tan(-\theta )&&={\phantom {-}}\tan(\pi +\theta )&&=-\tan(\pi -\theta )\\&=-\cot \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cot \left({\frac {\pi }{2}}-\theta \right)&&={\phantom {-}}\cot \left(-{\frac {\pi }{2}}-\theta \right)\\&=-\cot \left(-{\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cot \left({\frac {3\pi }{2}}-\theta \right)&&=-\cot \left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\end{aligned}}} where swapping sin ↔ csc , {\displaystyle \sin \leftrightarrow \csc ,} swapping cos ↔ sec , {\displaystyle \cos \leftrightarrow \sec ,} and swapping tan ↔ cot {\displaystyle \tan \leftrightarrow \cot } gives the analogous equations for csc , sec , and cot , {\displaystyle \csc ,\sec ,{\text{ and }}\cot ,} respectively. 
So for example, by using the equality sin ⁡ ( π 2 − θ ) = cos ⁡ θ , {\textstyle \sin \left({\frac {\pi }{2}}-\theta \right)=\cos \theta ,} the equation cos ⁡ θ = x {\displaystyle \cos \theta =x} can be transformed into sin ⁡ ( π 2 − θ ) = x , {\textstyle \sin \left({\frac {\pi }{2}}-\theta \right)=x,} which allows for the solution to the equation sin ⁡ φ = x {\displaystyle \;\sin \varphi =x\;} (where φ := π 2 − θ {\textstyle \varphi :={\frac {\pi }{2}}-\theta } ) to be used; that solution being: φ = ( − 1 ) k arcsin ⁡ ( x ) + π k for some k ∈ Z , {\displaystyle \varphi =(-1)^{k}\arcsin(x)+\pi k\;{\text{ for some }}k\in \mathbb {Z} ,} which becomes: π 2 − θ = ( − 1 ) k arcsin ⁡ ( x ) + π k for some k ∈ Z {\displaystyle {\frac {\pi }{2}}-\theta ~=~(-1)^{k}\arcsin(x)+\pi k\quad {\text{ for some }}k\in \mathbb {Z} } where using the fact that ( − 1 ) k = ( − 1 ) − k {\displaystyle (-1)^{k}=(-1)^{-k}} and substituting h := − k {\displaystyle h:=-k} proves that another solution to cos ⁡ θ = x {\displaystyle \;\cos \theta =x\;} is: θ = ( − 1 ) h + 1 arcsin ⁡ ( x ) + π h + π 2 for some h ∈ Z . {\displaystyle \theta ~=~(-1)^{h+1}\arcsin(x)+\pi h+{\frac {\pi }{2}}\quad {\text{ for some }}h\in \mathbb {Z} .} The substitution arcsin ⁡ x = π 2 − arccos ⁡ x {\displaystyle \;\arcsin x={\frac {\pi }{2}}-\arccos x\;} may be used to express the right hand side of the above formula in terms of arccos ⁡ x {\displaystyle \;\arccos x\;} instead of arcsin ⁡ x . {\displaystyle \;\arcsin x.\;} === Relationships between trigonometric functions and inverse trigonometric functions === Trigonometric functions of inverse trigonometric functions are tabulated below. A quick way to derive them is by considering the geometry of a right-angled triangle, with one side of length 1 and another side of length x , {\displaystyle x,} then applying the Pythagorean theorem and definitions of the trigonometric ratios. 
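Identities of this right-triangle kind can be spot-checked numerically. A minimal sketch using Python's standard `math` module (the sample points are arbitrary, chosen inside each function's domain):

```python
import math

# A right triangle with legs 1 and x has hypotenuse sqrt(1 + x^2),
# so trig functions of arctan(x) read off directly as ratios.
for x in (0.1, 0.5, 2.0, 7.3):
    # cos(arctan(x)) = 1 / sqrt(1 + x^2)   (adjacent / hypotenuse)
    assert math.isclose(math.cos(math.atan(x)), 1 / math.sqrt(1 + x * x))
    # sin(arctan(x)) = x / sqrt(1 + x^2)   (opposite / hypotenuse)
    assert math.isclose(math.sin(math.atan(x)), x / math.sqrt(1 + x * x))

# Similarly, a triangle with hypotenuse 1 and opposite side x gives:
for x in (0.1, 0.5, 0.9):
    # tan(arcsin(x)) = x / sqrt(1 - x^2)   (opposite / adjacent)
    assert math.isclose(math.tan(math.asin(x)), x / math.sqrt(1 - x * x))
```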
It is worth noting that for arcsecant and arccosecant, the diagram assumes that x {\displaystyle x} is positive, and thus the result has to be corrected through the use of absolute values and the signum (sgn) operation. === Relationships among the inverse trigonometric functions === Complementary angles: arccos ⁡ ( x ) = π 2 − arcsin ⁡ ( x ) arccot ⁡ ( x ) = π 2 − arctan ⁡ ( x ) arccsc ⁡ ( x ) = π 2 − arcsec ⁡ ( x ) {\displaystyle {\begin{aligned}\arccos(x)&={\frac {\pi }{2}}-\arcsin(x)\\[0.5em]\operatorname {arccot}(x)&={\frac {\pi }{2}}-\arctan(x)\\[0.5em]\operatorname {arccsc}(x)&={\frac {\pi }{2}}-\operatorname {arcsec}(x)\end{aligned}}} Negative arguments: arcsin ⁡ ( − x ) = − arcsin ⁡ ( x ) arccsc ⁡ ( − x ) = − arccsc ⁡ ( x ) arccos ⁡ ( − x ) = π − arccos ⁡ ( x ) arcsec ⁡ ( − x ) = π − arcsec ⁡ ( x ) arctan ⁡ ( − x ) = − arctan ⁡ ( x ) arccot ⁡ ( − x ) = π − arccot ⁡ ( x ) {\displaystyle {\begin{aligned}\arcsin(-x)&=-\arcsin(x)\\\operatorname {arccsc}(-x)&=-\operatorname {arccsc}(x)\\\arccos(-x)&=\pi -\arccos(x)\\\operatorname {arcsec}(-x)&=\pi -\operatorname {arcsec}(x)\\\arctan(-x)&=-\arctan(x)\\\operatorname {arccot}(-x)&=\pi -\operatorname {arccot}(x)\end{aligned}}} Reciprocal arguments: arcsin ⁡ ( 1 x ) = arccsc ⁡ ( x ) arccsc ⁡ ( 1 x ) = arcsin ⁡ ( x ) arccos ⁡ ( 1 x ) = arcsec ⁡ ( x ) arcsec ⁡ ( 1 x ) = arccos ⁡ ( x ) arctan ⁡ ( 1 x ) = arccot ⁡ ( x ) = π 2 − arctan ⁡ ( x ) , if x > 0 arctan ⁡ ( 1 x ) = arccot ⁡ ( x ) − π = − π 2 − arctan ⁡ ( x ) , if x < 0 arccot ⁡ ( 1 x ) = arctan ⁡ ( x ) = π 2 − arccot ⁡ ( x ) , if x > 0 arccot ⁡ ( 1 x ) = arctan ⁡ ( x ) + π = 3 π 2 − arccot ⁡ ( x ) , if x < 0 {\displaystyle {\begin{aligned}\arcsin \left({\frac {1}{x}}\right)&=\operatorname {arccsc}(x)&\\[0.3em]\operatorname {arccsc} \left({\frac {1}{x}}\right)&=\arcsin(x)&\\[0.3em]\arccos \left({\frac {1}{x}}\right)&=\operatorname {arcsec}(x)&\\[0.3em]\operatorname {arcsec} \left({\frac {1}{x}}\right)&=\arccos(x)&\\[0.3em]\arctan \left({\frac 
{1}{x}}\right)&=\operatorname {arccot}(x)&={\frac {\pi }{2}}-\arctan(x)\,,{\text{ if }}x>0\\[0.3em]\arctan \left({\frac {1}{x}}\right)&=\operatorname {arccot}(x)-\pi &=-{\frac {\pi }{2}}-\arctan(x)\,,{\text{ if }}x<0\\[0.3em]\operatorname {arccot} \left({\frac {1}{x}}\right)&=\arctan(x)&={\frac {\pi }{2}}-\operatorname {arccot}(x)\,,{\text{ if }}x>0\\[0.3em]\operatorname {arccot} \left({\frac {1}{x}}\right)&=\arctan(x)+\pi &={\frac {3\pi }{2}}-\operatorname {arccot}(x)\,,{\text{ if }}x<0\end{aligned}}} The identities above can be used with (and derived from) the fact that sin {\displaystyle \sin } and csc {\displaystyle \csc } are reciprocals (i.e. csc = 1 sin {\displaystyle \csc ={\tfrac {1}{\sin }}} ), as are cos {\displaystyle \cos } and sec , {\displaystyle \sec ,} and tan {\displaystyle \tan } and cot . {\displaystyle \cot .} Useful identities if one only has a fragment of a sine table: arcsin ⁡ ( x ) = 1 2 arccos ⁡ ( 1 − 2 x 2 ) , if 0 ≤ x ≤ 1 arcsin ⁡ ( x ) = arctan ⁡ ( x 1 − x 2 ) arccos ⁡ ( x ) = 1 2 arccos ⁡ ( 2 x 2 − 1 ) , if 0 ≤ x ≤ 1 arccos ⁡ ( x ) = arctan ⁡ ( 1 − x 2 x ) arccos ⁡ ( x ) = arcsin ⁡ ( 1 − x 2 ) , if 0 ≤ x ≤ 1 , from which you get arccos ( 1 − x 2 1 + x 2 ) = arcsin ⁡ ( 2 x 1 + x 2 ) , if 0 ≤ x ≤ 1 arcsin ( 1 − x 2 ) = π 2 − sgn ⁡ ( x ) arcsin ⁡ ( x ) arctan ⁡ ( x ) = arcsin ⁡ ( x 1 + x 2 ) arccot ⁡ ( x ) = arccos ⁡ ( x 1 + x 2 ) {\displaystyle {\begin{aligned}\arcsin(x)&={\frac {1}{2}}\arccos \left(1-2x^{2}\right)\,,{\text{ if }}0\leq x\leq 1\\\arcsin(x)&=\arctan \left({\frac {x}{\sqrt {1-x^{2}}}}\right)\\\arccos(x)&={\frac {1}{2}}\arccos \left(2x^{2}-1\right)\,,{\text{ if }}0\leq x\leq 1\\\arccos(x)&=\arctan \left({\frac {\sqrt {1-x^{2}}}{x}}\right)\\\arccos(x)&=\arcsin \left({\sqrt {1-x^{2}}}\right)\,,{\text{ if }}0\leq x\leq 1{\text{ , from which you get }}\\\arccos &\left({\frac {1-x^{2}}{1+x^{2}}}\right)=\arcsin \left({\frac {2x}{1+x^{2}}}\right)\,,{\text{ if }}0\leq x\leq 1\\\arcsin &\left({\sqrt {1-x^{2}}}\right)={\frac {\pi 
}{2}}-\operatorname {sgn}(x)\arcsin(x)\\\arctan(x)&=\arcsin \left({\frac {x}{\sqrt {1+x^{2}}}}\right)\\\operatorname {arccot}(x)&=\arccos \left({\frac {x}{\sqrt {1+x^{2}}}}\right)\end{aligned}}} Whenever the square root of a complex number is used here, we choose the root with the positive real part (or positive imaginary part if the square was negative real). A useful form that follows directly from the table above is arctan ⁡ ( x ) = arccos ⁡ ( 1 1 + x 2 ) , if x ≥ 0 {\displaystyle \arctan(x)=\arccos \left({\sqrt {\frac {1}{1+x^{2}}}}\right)\,,{\text{ if }}x\geq 0} . It is obtained by recognizing that cos ⁡ ( arctan ⁡ ( x ) ) = 1 1 + x 2 = cos ⁡ ( arccos ⁡ ( 1 1 + x 2 ) ) {\displaystyle \cos \left(\arctan \left(x\right)\right)={\sqrt {\frac {1}{1+x^{2}}}}=\cos \left(\arccos \left({\sqrt {\frac {1}{1+x^{2}}}}\right)\right)} . From the half-angle formula, tan ⁡ ( θ 2 ) = sin ⁡ ( θ ) 1 + cos ⁡ ( θ ) {\displaystyle \tan \left({\tfrac {\theta }{2}}\right)={\tfrac {\sin(\theta )}{1+\cos(\theta )}}} , we get: arcsin ⁡ ( x ) = 2 arctan ⁡ ( x 1 + 1 − x 2 ) arccos ⁡ ( x ) = 2 arctan ⁡ ( 1 − x 2 1 + x ) , if − 1 < x ≤ 1 arctan ⁡ ( x ) = 2 arctan ⁡ ( x 1 + 1 + x 2 ) {\displaystyle {\begin{aligned}\arcsin(x)&=2\arctan \left({\frac {x}{1+{\sqrt {1-x^{2}}}}}\right)\\[0.5em]\arccos(x)&=2\arctan \left({\frac {\sqrt {1-x^{2}}}{1+x}}\right)\,,{\text{ if }}-1<x\leq 1\\[0.5em]\arctan(x)&=2\arctan \left({\frac {x}{1+{\sqrt {1+x^{2}}}}}\right)\end{aligned}}} === Arctangent addition formula === arctan ⁡ ( u ) ± arctan ⁡ ( v ) = arctan ⁡ ( u ± v 1 ∓ u v ) ( mod π ) , u v ≠ 1 . {\displaystyle \arctan(u)\pm \arctan(v)=\arctan \left({\frac {u\pm v}{1\mp uv}}\right){\pmod {\pi }}\,,\quad uv\neq 1\,.} This is derived from the tangent addition formula tan ⁡ ( α ± β ) = tan ⁡ ( α ) ± tan ⁡ ( β ) 1 ∓ tan ⁡ ( α ) tan ⁡ ( β ) , {\displaystyle \tan(\alpha \pm \beta )={\frac {\tan(\alpha )\pm \tan(\beta )}{1\mp \tan(\alpha )\tan(\beta )}}\,,} by letting α = arctan ⁡ ( u ) , β = arctan ⁡ ( v ) . 
{\displaystyle \alpha =\arctan(u)\,,\quad \beta =\arctan(v)\,.} == In calculus == === Derivatives of inverse trigonometric functions === The derivatives for complex values of z are as follows: d d z arcsin ⁡ ( z ) = 1 1 − z 2 ; z ≠ − 1 , + 1 d d z arccos ⁡ ( z ) = − 1 1 − z 2 ; z ≠ − 1 , + 1 d d z arctan ⁡ ( z ) = 1 1 + z 2 ; z ≠ − i , + i d d z arccot ⁡ ( z ) = − 1 1 + z 2 ; z ≠ − i , + i d d z arcsec ⁡ ( z ) = 1 z 2 1 − 1 z 2 ; z ≠ − 1 , 0 , + 1 d d z arccsc ⁡ ( z ) = − 1 z 2 1 − 1 z 2 ; z ≠ − 1 , 0 , + 1 {\displaystyle {\begin{aligned}{\frac {d}{dz}}\arcsin(z)&{}={\frac {1}{\sqrt {1-z^{2}}}}\;;&z&{}\neq -1,+1\\{\frac {d}{dz}}\arccos(z)&{}=-{\frac {1}{\sqrt {1-z^{2}}}}\;;&z&{}\neq -1,+1\\{\frac {d}{dz}}\arctan(z)&{}={\frac {1}{1+z^{2}}}\;;&z&{}\neq -i,+i\\{\frac {d}{dz}}\operatorname {arccot}(z)&{}=-{\frac {1}{1+z^{2}}}\;;&z&{}\neq -i,+i\\{\frac {d}{dz}}\operatorname {arcsec}(z)&{}={\frac {1}{z^{2}{\sqrt {1-{\frac {1}{z^{2}}}}}}}\;;&z&{}\neq -1,0,+1\\{\frac {d}{dz}}\operatorname {arccsc}(z)&{}=-{\frac {1}{z^{2}{\sqrt {1-{\frac {1}{z^{2}}}}}}}\;;&z&{}\neq -1,0,+1\end{aligned}}} Only for real values of x: d d x arcsec ⁡ ( x ) = 1 | x | x 2 − 1 ; | x | > 1 d d x arccsc ⁡ ( x ) = − 1 | x | x 2 − 1 ; | x | > 1 {\displaystyle {\begin{aligned}{\frac {d}{dx}}\operatorname {arcsec}(x)&{}={\frac {1}{|x|{\sqrt {x^{2}-1}}}}\;;&|x|>1\\{\frac {d}{dx}}\operatorname {arccsc}(x)&{}=-{\frac {1}{|x|{\sqrt {x^{2}-1}}}}\;;&|x|>1\end{aligned}}} These formulas can be derived in terms of the derivatives of trigonometric functions. For example, if x = sin ⁡ θ {\displaystyle x=\sin \theta } , then d x / d θ = cos ⁡ θ = 1 − x 2 , {\textstyle dx/d\theta =\cos \theta ={\sqrt {1-x^{2}}},} so d d x arcsin ⁡ ( x ) = d θ d x = 1 d x / d θ = 1 1 − x 2 . 
{\displaystyle {\frac {d}{dx}}\arcsin(x)={\frac {d\theta }{dx}}={\frac {1}{dx/d\theta }}={\frac {1}{\sqrt {1-x^{2}}}}.} === Expression as definite integrals === Integrating the derivative and fixing the value at one point gives an expression for the inverse trigonometric function as a definite integral: arcsin ⁡ ( x ) = ∫ 0 x 1 1 − z 2 d z , | x | ≤ 1 arccos ⁡ ( x ) = ∫ x 1 1 1 − z 2 d z , | x | ≤ 1 arctan ⁡ ( x ) = ∫ 0 x 1 z 2 + 1 d z , arccot ⁡ ( x ) = ∫ x ∞ 1 z 2 + 1 d z , arcsec ⁡ ( x ) = ∫ 1 x 1 z z 2 − 1 d z = π + ∫ − x − 1 1 z z 2 − 1 d z , x ≥ 1 arccsc ⁡ ( x ) = ∫ x ∞ 1 z z 2 − 1 d z = ∫ − ∞ − x 1 z z 2 − 1 d z , x ≥ 1 {\displaystyle {\begin{aligned}\arcsin(x)&{}=\int _{0}^{x}{\frac {1}{\sqrt {1-z^{2}}}}\,dz\;,&|x|&{}\leq 1\\\arccos(x)&{}=\int _{x}^{1}{\frac {1}{\sqrt {1-z^{2}}}}\,dz\;,&|x|&{}\leq 1\\\arctan(x)&{}=\int _{0}^{x}{\frac {1}{z^{2}+1}}\,dz\;,\\\operatorname {arccot}(x)&{}=\int _{x}^{\infty }{\frac {1}{z^{2}+1}}\,dz\;,\\\operatorname {arcsec}(x)&{}=\int _{1}^{x}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz=\pi +\int _{-x}^{-1}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz\;,&x&{}\geq 1\\\operatorname {arccsc}(x)&{}=\int _{x}^{\infty }{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz=\int _{-\infty }^{-x}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz\;,&x&{}\geq 1\\\end{aligned}}} When x equals 1, the integrals with limited domains are improper integrals, but still well-defined. === Infinite series === Similar to the sine and cosine functions, the inverse trigonometric functions can also be calculated using power series, as follows. For arcsine, the series can be derived by expanding its derivative, 1 1 − z 2 {\textstyle {\tfrac {1}{\sqrt {1-z^{2}}}}} , as a binomial series, and integrating term by term (using the integral definition as above). The series for arctangent can similarly be derived by expanding its derivative 1 1 + z 2 {\textstyle {\frac {1}{1+z^{2}}}} in a geometric series, and applying the integral definition above (see Leibniz series). 
arcsin ⁡ ( z ) = z + ( 1 2 ) z 3 3 + ( 1 ⋅ 3 2 ⋅ 4 ) z 5 5 + ( 1 ⋅ 3 ⋅ 5 2 ⋅ 4 ⋅ 6 ) z 7 7 + ⋯ = ∑ n = 0 ∞ ( 2 n − 1 ) ! ! ( 2 n ) ! ! z 2 n + 1 2 n + 1 = ∑ n = 0 ∞ ( 2 n ) ! ( 2 n n ! ) 2 z 2 n + 1 2 n + 1 ; | z | ≤ 1 {\displaystyle {\begin{aligned}\arcsin(z)&=z+\left({\frac {1}{2}}\right){\frac {z^{3}}{3}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right){\frac {z^{5}}{5}}+\left({\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6}}\right){\frac {z^{7}}{7}}+\cdots \\[5pt]&=\sum _{n=0}^{\infty }{\frac {(2n-1)!!}{(2n)!!}}{\frac {z^{2n+1}}{2n+1}}\\[5pt]&=\sum _{n=0}^{\infty }{\frac {(2n)!}{(2^{n}n!)^{2}}}{\frac {z^{2n+1}}{2n+1}}\,;\qquad |z|\leq 1\end{aligned}}} arctan ⁡ ( z ) = z − z 3 3 + z 5 5 − z 7 7 + ⋯ = ∑ n = 0 ∞ ( − 1 ) n z 2 n + 1 2 n + 1 ; | z | ≤ 1 z ≠ i , − i {\displaystyle \arctan(z)=z-{\frac {z^{3}}{3}}+{\frac {z^{5}}{5}}-{\frac {z^{7}}{7}}+\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{2n+1}}{2n+1}}\,;\qquad |z|\leq 1\qquad z\neq i,-i} Series for the other inverse trigonometric functions can be given in terms of these according to the relationships given above. For example, arccos ⁡ ( x ) = π / 2 − arcsin ⁡ ( x ) {\displaystyle \arccos(x)=\pi /2-\arcsin(x)} , arccsc ⁡ ( x ) = arcsin ⁡ ( 1 / x ) {\displaystyle \operatorname {arccsc}(x)=\arcsin(1/x)} , and so on. Another series is given by: 2 ( arcsin ⁡ ( x 2 ) ) 2 = ∑ n = 1 ∞ x 2 n n 2 ( 2 n n ) . {\displaystyle 2\left(\arcsin \left({\frac {x}{2}}\right)\right)^{2}=\sum _{n=1}^{\infty }{\frac {x^{2n}}{n^{2}{\binom {2n}{n}}}}.} Leonhard Euler found a series for the arctangent that converges more quickly than its Taylor series: arctan ⁡ ( z ) = z 1 + z 2 ∑ n = 0 ∞ ∏ k = 1 n 2 k z 2 ( 2 k + 1 ) ( 1 + z 2 ) . {\displaystyle \arctan(z)={\frac {z}{1+z^{2}}}\sum _{n=0}^{\infty }\prod _{k=1}^{n}{\frac {2kz^{2}}{(2k+1)(1+z^{2})}}.} (The term in the sum for n = 0 is the empty product, so is 1.) Alternatively, this can be expressed as arctan ⁡ ( z ) = ∑ n = 0 ∞ 2 2 n ( n ! ) 2 ( 2 n + 1 ) ! z 2 n + 1 ( 1 + z 2 ) n + 1 . 
{\displaystyle \arctan(z)=\sum _{n=0}^{\infty }{\frac {2^{2n}(n!)^{2}}{(2n+1)!}}{\frac {z^{2n+1}}{(1+z^{2})^{n+1}}}.} Another series for the arctangent function is given by arctan ⁡ ( z ) = i ∑ n = 1 ∞ 1 2 n − 1 ( 1 ( 1 + 2 i / z ) 2 n − 1 − 1 ( 1 − 2 i / z ) 2 n − 1 ) , {\displaystyle \arctan(z)=i\sum _{n=1}^{\infty }{\frac {1}{2n-1}}\left({\frac {1}{(1+2i/z)^{2n-1}}}-{\frac {1}{(1-2i/z)^{2n-1}}}\right),} where i = − 1 {\displaystyle i={\sqrt {-1}}} is the imaginary unit. ==== Continued fractions for arctangent ==== Two alternatives to the power series for arctangent are these generalized continued fractions: arctan ⁡ ( z ) = z 1 + ( 1 z ) 2 3 − 1 z 2 + ( 3 z ) 2 5 − 3 z 2 + ( 5 z ) 2 7 − 5 z 2 + ( 7 z ) 2 9 − 7 z 2 + ⋱ = z 1 + ( 1 z ) 2 3 + ( 2 z ) 2 5 + ( 3 z ) 2 7 + ( 4 z ) 2 9 + ⋱ {\displaystyle \arctan(z)={\frac {z}{1+{\cfrac {(1z)^{2}}{3-1z^{2}+{\cfrac {(3z)^{2}}{5-3z^{2}+{\cfrac {(5z)^{2}}{7-5z^{2}+{\cfrac {(7z)^{2}}{9-7z^{2}+\ddots }}}}}}}}}}={\frac {z}{1+{\cfrac {(1z)^{2}}{3+{\cfrac {(2z)^{2}}{5+{\cfrac {(3z)^{2}}{7+{\cfrac {(4z)^{2}}{9+\ddots }}}}}}}}}}} The second of these is valid in the cut complex plane. There are two cuts, from −i to the point at infinity, going down the imaginary axis, and from i to the point at infinity, going up the same axis. It works best for real numbers running from −1 to 1. The partial denominators are the odd natural numbers, and the partial numerators (after the first) are just (nz)2, with each perfect square appearing once. The first was developed by Leonhard Euler; the second by Carl Friedrich Gauss utilizing the Gaussian hypergeometric series. 
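The faster convergence of Euler's series can be seen by comparing partial sums of both series against a library arctangent. A sketch in Python (the helper names `arctan_taylor` and `arctan_euler` are ad hoc, and the number of terms is arbitrary):

```python
import math

def arctan_taylor(z, terms):
    # Partial sum of the Taylor (Leibniz) series: sum of (-1)^n z^(2n+1) / (2n+1).
    return sum((-1) ** n * z ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

def arctan_euler(z, terms):
    # Euler's series: (z / (1 + z^2)) * sum over n of
    # prod_{k=1..n} [2k z^2 / ((2k+1)(1 + z^2))]; the n = 0 term is 1.
    w = z * z / (1 + z * z)
    total, prod = 0.0, 1.0
    for n in range(terms):
        total += prod
        prod *= 2 * (n + 1) * w / (2 * (n + 1) + 1)  # next partial product
    return z / (1 + z * z) * total

z = 0.9
exact = math.atan(z)
# With the same number of terms, Euler's series is far closer to atan(z).
assert abs(arctan_euler(z, 10) - exact) < abs(arctan_taylor(z, 10) - exact)
```

Near |z| = 1 the Taylor terms shrink only like 1/(2n+1), while Euler's terms shrink geometrically by the factor z²/(1+z²) < 1/2, which is why the comparison above favors Euler's series.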
=== Indefinite integrals of inverse trigonometric functions === For real and complex values of z: ∫ arcsin ⁡ ( z ) d z = z arcsin ⁡ ( z ) + 1 − z 2 + C ∫ arccos ⁡ ( z ) d z = z arccos ⁡ ( z ) − 1 − z 2 + C ∫ arctan ⁡ ( z ) d z = z arctan ⁡ ( z ) − 1 2 ln ⁡ ( 1 + z 2 ) + C ∫ arccot ⁡ ( z ) d z = z arccot ⁡ ( z ) + 1 2 ln ⁡ ( 1 + z 2 ) + C ∫ arcsec ⁡ ( z ) d z = z arcsec ⁡ ( z ) − ln ⁡ [ z ( 1 + z 2 − 1 z 2 ) ] + C ∫ arccsc ⁡ ( z ) d z = z arccsc ⁡ ( z ) + ln ⁡ [ z ( 1 + z 2 − 1 z 2 ) ] + C {\displaystyle {\begin{aligned}\int \arcsin(z)\,dz&{}=z\,\arcsin(z)+{\sqrt {1-z^{2}}}+C\\\int \arccos(z)\,dz&{}=z\,\arccos(z)-{\sqrt {1-z^{2}}}+C\\\int \arctan(z)\,dz&{}=z\,\arctan(z)-{\frac {1}{2}}\ln \left(1+z^{2}\right)+C\\\int \operatorname {arccot}(z)\,dz&{}=z\,\operatorname {arccot}(z)+{\frac {1}{2}}\ln \left(1+z^{2}\right)+C\\\int \operatorname {arcsec}(z)\,dz&{}=z\,\operatorname {arcsec}(z)-\ln \left[z\left(1+{\sqrt {\frac {z^{2}-1}{z^{2}}}}\right)\right]+C\\\int \operatorname {arccsc}(z)\,dz&{}=z\,\operatorname {arccsc}(z)+\ln \left[z\left(1+{\sqrt {\frac {z^{2}-1}{z^{2}}}}\right)\right]+C\end{aligned}}} For real x ≥ 1: ∫ arcsec ⁡ ( x ) d x = x arcsec ⁡ ( x ) − ln ⁡ ( x + x 2 − 1 ) + C ∫ arccsc ⁡ ( x ) d x = x arccsc ⁡ ( x ) + ln ⁡ ( x + x 2 − 1 ) + C {\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,dx&{}=x\,\operatorname {arcsec}(x)-\ln \left(x+{\sqrt {x^{2}-1}}\right)+C\\\int \operatorname {arccsc}(x)\,dx&{}=x\,\operatorname {arccsc}(x)+\ln \left(x+{\sqrt {x^{2}-1}}\right)+C\end{aligned}}} For all real x not between -1 and 1: ∫ arcsec ⁡ ( x ) d x = x arcsec ⁡ ( x ) − sgn ⁡ ( x ) ln ⁡ | x + x 2 − 1 | + C ∫ arccsc ⁡ ( x ) d x = x arccsc ⁡ ( x ) + sgn ⁡ ( x ) ln ⁡ | x + x 2 − 1 | + C {\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,dx&{}=x\,\operatorname {arcsec}(x)-\operatorname {sgn}(x)\ln \left|x+{\sqrt {x^{2}-1}}\right|+C\\\int \operatorname {arccsc}(x)\,dx&{}=x\,\operatorname {arccsc}(x)+\operatorname {sgn}(x)\ln \left|x+{\sqrt 
{x^{2}-1}}\right|+C\end{aligned}}} The absolute value is necessary to compensate for both negative and positive values of the arcsecant and arccosecant functions. The signum function is also necessary due to the absolute values in the derivatives of the two functions, which create two different solutions for positive and negative values of x. These can be further simplified using the logarithmic definitions of the inverse hyperbolic functions: ∫ arcsec ⁡ ( x ) d x = x arcsec ⁡ ( x ) − arcosh ⁡ ( | x | ) + C ∫ arccsc ⁡ ( x ) d x = x arccsc ⁡ ( x ) + arcosh ⁡ ( | x | ) + C {\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,dx&{}=x\,\operatorname {arcsec}(x)-\operatorname {arcosh} (|x|)+C\\\int \operatorname {arccsc}(x)\,dx&{}=x\,\operatorname {arccsc}(x)+\operatorname {arcosh} (|x|)+C\\\end{aligned}}} The absolute value in the argument of the arcosh function creates a negative half of its graph, making it identical to the signum logarithmic function shown above. All of these antiderivatives can be derived using integration by parts and the simple derivative forms shown above. ==== Example ==== Using ∫ u d v = u v − ∫ v d u {\displaystyle \int u\,dv=uv-\int v\,du} (i.e. integration by parts), set u = arcsin ⁡ ( x ) d v = d x d u = d x 1 − x 2 v = x {\displaystyle {\begin{aligned}u&=\arcsin(x)&dv&=dx\\du&={\frac {dx}{\sqrt {1-x^{2}}}}&v&=x\end{aligned}}} Then ∫ arcsin ⁡ ( x ) d x = x arcsin ⁡ ( x ) − ∫ x 1 − x 2 d x , {\displaystyle \int \arcsin(x)\,dx=x\arcsin(x)-\int {\frac {x}{\sqrt {1-x^{2}}}}\,dx,} which by the simple substitution w = 1 − x 2 , d w = − 2 x d x {\displaystyle w=1-x^{2},\ dw=-2x\,dx} yields the final result: ∫ arcsin ⁡ ( x ) d x = x arcsin ⁡ ( x ) + 1 − x 2 + C {\displaystyle \int \arcsin(x)\,dx=x\arcsin(x)+{\sqrt {1-x^{2}}}+C} == Extension to the complex plane == Since the inverse trigonometric functions are analytic functions, they can be extended from the real line to the complex plane. 
This results in functions with multiple sheets and branch points. One possible way of defining the extension is: arctan ⁡ ( z ) = ∫ 0 z d x 1 + x 2 z ≠ − i , + i {\displaystyle \arctan(z)=\int _{0}^{z}{\frac {dx}{1+x^{2}}}\quad z\neq -i,+i} where the part of the imaginary axis which does not lie strictly between the branch points (−i and +i) is the branch cut between the principal sheet and other sheets. The path of the integral must not cross a branch cut. For z not on a branch cut, a straight line path from 0 to z is such a path. For z on a branch cut, the path must approach from Re[x] > 0 for the upper branch cut and from Re[x] < 0 for the lower branch cut. The arcsine function may then be defined as: arcsin ⁡ ( z ) = arctan ⁡ ( z 1 − z 2 ) z ≠ − 1 , + 1 {\displaystyle \arcsin(z)=\arctan \left({\frac {z}{\sqrt {1-z^{2}}}}\right)\quad z\neq -1,+1} where (the square-root function has its cut along the negative real axis and) the part of the real axis which does not lie strictly between −1 and +1 is the branch cut between the principal sheet of arcsin and other sheets; arccos ⁡ ( z ) = π 2 − arcsin ⁡ ( z ) z ≠ − 1 , + 1 {\displaystyle \arccos(z)={\frac {\pi }{2}}-\arcsin(z)\quad z\neq -1,+1} which has the same cut as arcsin; arccot ⁡ ( z ) = π 2 − arctan ⁡ ( z ) z ≠ − i , i {\displaystyle \operatorname {arccot}(z)={\frac {\pi }{2}}-\arctan(z)\quad z\neq -i,i} which has the same cut as arctan; arcsec ⁡ ( z ) = arccos ⁡ ( 1 z ) z ≠ − 1 , 0 , + 1 {\displaystyle \operatorname {arcsec}(z)=\arccos \left({\frac {1}{z}}\right)\quad z\neq -1,0,+1} where the part of the real axis between −1 and +1 inclusive is the cut between the principal sheet of arcsec and other sheets; arccsc ⁡ ( z ) = arcsin ⁡ ( 1 z ) z ≠ − 1 , 0 , + 1 {\displaystyle \operatorname {arccsc}(z)=\arcsin \left({\frac {1}{z}}\right)\quad z\neq -1,0,+1} which has the same cut as arcsec. === Logarithmic forms === These functions may also be expressed using complex logarithms. 
This extends their domains to the complex plane in a natural fashion. The following identities for principal values of the functions hold everywhere that they are defined, even on their branch cuts. arcsin ⁡ ( z ) = − i ln ⁡ ( 1 − z 2 + i z ) = i ln ⁡ ( 1 − z 2 − i z ) = arccsc ⁡ ( 1 z ) arccos ⁡ ( z ) = − i ln ⁡ ( i 1 − z 2 + z ) = π 2 − arcsin ⁡ ( z ) = arcsec ⁡ ( 1 z ) arctan ⁡ ( z ) = − i 2 ln ⁡ ( i − z i + z ) = − i 2 ln ⁡ ( 1 + i z 1 − i z ) = arccot ⁡ ( 1 z ) arccot ⁡ ( z ) = − i 2 ln ⁡ ( z + i z − i ) = − i 2 ln ⁡ ( i z − 1 i z + 1 ) = arctan ⁡ ( 1 z ) arcsec ⁡ ( z ) = − i ln ⁡ ( i 1 − 1 z 2 + 1 z ) = π 2 − arccsc ⁡ ( z ) = arccos ⁡ ( 1 z ) arccsc ⁡ ( z ) = − i ln ⁡ ( 1 − 1 z 2 + i z ) = i ln ⁡ ( 1 − 1 z 2 − i z ) = arcsin ⁡ ( 1 z ) {\displaystyle {\begin{aligned}\arcsin(z)&{}=-i\ln \left({\sqrt {1-z^{2}}}+iz\right)=i\ln \left({\sqrt {1-z^{2}}}-iz\right)&{}=\operatorname {arccsc} \left({\frac {1}{z}}\right)\\[10pt]\arccos(z)&{}=-i\ln \left(i{\sqrt {1-z^{2}}}+z\right)={\frac {\pi }{2}}-\arcsin(z)&{}=\operatorname {arcsec} \left({\frac {1}{z}}\right)\\[10pt]\arctan(z)&{}=-{\frac {i}{2}}\ln \left({\frac {i-z}{i+z}}\right)=-{\frac {i}{2}}\ln \left({\frac {1+iz}{1-iz}}\right)&{}=\operatorname {arccot} \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arccot}(z)&{}=-{\frac {i}{2}}\ln \left({\frac {z+i}{z-i}}\right)=-{\frac {i}{2}}\ln \left({\frac {iz-1}{iz+1}}\right)&{}=\arctan \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arcsec}(z)&{}=-i\ln \left(i{\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {1}{z}}\right)={\frac {\pi }{2}}-\operatorname {arccsc}(z)&{}=\arccos \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arccsc}(z)&{}=-i\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {i}{z}}\right)=i\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}-{\frac {i}{z}}\right)&{}=\arcsin \left({\frac {1}{z}}\right)\end{aligned}}} ==== Generalization ==== Because all of the inverse trigonometric functions output an angle of a right triangle, they can be generalized by using Euler's formula to 
form a right triangle in the complex plane. Algebraically, this gives us: c e i θ = c cos ⁡ ( θ ) + i c sin ⁡ ( θ ) {\displaystyle ce^{i\theta }=c\cos(\theta )+ic\sin(\theta )} or c e i θ = a + i b {\displaystyle ce^{i\theta }=a+ib} where a {\displaystyle a} is the adjacent side, b {\displaystyle b} is the opposite side, and c {\displaystyle c} is the hypotenuse. From here, we can solve for θ {\displaystyle \theta } . e ln ⁡ ( c ) + i θ = a + i b ln ⁡ c + i θ = ln ⁡ ( a + i b ) θ = Im ⁡ ( ln ⁡ ( a + i b ) ) {\displaystyle {\begin{aligned}e^{\ln(c)+i\theta }&=a+ib\\\ln c+i\theta &=\ln(a+ib)\\\theta &=\operatorname {Im} \left(\ln(a+ib)\right)\end{aligned}}} or θ = − i ln ⁡ ( a + i b c ) {\displaystyle \theta =-i\ln \left({\frac {a+ib}{c}}\right)} Simply taking the imaginary part works for any real-valued a {\displaystyle a} and b {\displaystyle b} , but if a {\displaystyle a} or b {\displaystyle b} is complex-valued, we have to use the final equation so that the real part of the result isn't excluded. Since the length of the hypotenuse doesn't change the angle, ignoring the real part of ln ⁡ ( a + b i ) {\displaystyle \ln(a+bi)} also removes c {\displaystyle c} from the equation. In the final equation, we see that the angle of the triangle in the complex plane can be found by inputting the lengths of each side. By setting one of the three sides equal to 1 and one of the remaining sides equal to our input z {\displaystyle z} , we obtain a formula for one of the inverse trig functions, for a total of six equations. 
Because the inverse trig functions require only one input, we must put the final side of the triangle in terms of the other two using the Pythagorean Theorem relation a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} The table below shows the values of a, b, and c for each of the inverse trig functions and the equivalent expressions for θ {\displaystyle \theta } that result from plugging the values into the equations θ = − i ln ⁡ ( a + i b c ) {\displaystyle \theta =-i\ln \left({\tfrac {a+ib}{c}}\right)} above and simplifying. a b c − i ln ⁡ ( a + i b c ) θ θ a , b ∈ R arcsin ⁡ ( z ) 1 − z 2 z 1 − i ln ⁡ ( 1 − z 2 + i z 1 ) = − i ln ⁡ ( 1 − z 2 + i z ) Im ⁡ ( ln ⁡ ( 1 − z 2 + i z ) ) arccos ⁡ ( z ) z 1 − z 2 1 − i ln ⁡ ( z + i 1 − z 2 1 ) = − i ln ⁡ ( z + z 2 − 1 ) Im ⁡ ( ln ⁡ ( z + z 2 − 1 ) ) arctan ⁡ ( z ) 1 z 1 + z 2 − i ln ⁡ ( 1 + i z 1 + z 2 ) = − i 2 ln ⁡ ( i − z i + z ) Im ⁡ ( ln ⁡ ( 1 + i z ) ) arccot ⁡ ( z ) z 1 z 2 + 1 − i ln ⁡ ( z + i z 2 + 1 ) = − i 2 ln ⁡ ( z + i z − i ) Im ⁡ ( ln ⁡ ( z + i ) ) arcsec ⁡ ( z ) 1 z 2 − 1 z − i ln ⁡ ( 1 + i z 2 − 1 z ) = − i ln ⁡ ( 1 z + 1 z 2 − 1 ) Im ⁡ ( ln ⁡ ( 1 z + 1 z 2 − 1 ) ) arccsc ⁡ ( z ) z 2 − 1 1 z − i ln ⁡ ( z 2 − 1 + i z ) = − i ln ⁡ ( 1 − 1 z 2 + i z ) Im ⁡ ( ln ⁡ ( 1 − 1 z 2 + i z ) ) {\displaystyle {\begin{aligned}&a&&b&&c&&-i\ln \left({\frac {a+ib}{c}}\right)&&\theta &&\theta _{a,b\in \mathbb {R} }\\\arcsin(z)\ \ &{\sqrt {1-z^{2}}}&&z&&1&&-i\ln \left({\frac {{\sqrt {1-z^{2}}}+iz}{1}}\right)&&=-i\ln \left({\sqrt {1-z^{2}}}+iz\right)&&\operatorname {Im} \left(\ln \left({\sqrt {1-z^{2}}}+iz\right)\right)\\\arccos(z)\ \ &z&&{\sqrt {1-z^{2}}}&&1&&-i\ln \left({\frac {z+i{\sqrt {1-z^{2}}}}{1}}\right)&&=-i\ln \left(z+{\sqrt {z^{2}-1}}\right)&&\operatorname {Im} \left(\ln \left(z+{\sqrt {z^{2}-1}}\right)\right)\\\arctan(z)\ \ &1&&z&&{\sqrt {1+z^{2}}}&&-i\ln \left({\frac {1+iz}{\sqrt {1+z^{2}}}}\right)&&=-{\frac {i}{2}}\ln \left({\frac {i-z}{i+z}}\right)&&\operatorname {Im} \left(\ln 
\left(1+iz\right)\right)\\\operatorname {arccot}(z)\ \ &z&&1&&{\sqrt {z^{2}+1}}&&-i\ln \left({\frac {z+i}{\sqrt {z^{2}+1}}}\right)&&=-{\frac {i}{2}}\ln \left({\frac {z+i}{z-i}}\right)&&\operatorname {Im} \left(\ln \left(z+i\right)\right)\\\operatorname {arcsec}(z)\ \ &1&&{\sqrt {z^{2}-1}}&&z&&-i\ln \left({\frac {1+i{\sqrt {z^{2}-1}}}{z}}\right)&&=-i\ln \left({\frac {1}{z}}+{\sqrt {{\frac {1}{z^{2}}}-1}}\right)&&\operatorname {Im} \left(\ln \left({\frac {1}{z}}+{\sqrt {{\frac {1}{z^{2}}}-1}}\right)\right)\\\operatorname {arccsc}(z)\ \ &{\sqrt {z^{2}-1}}&&1&&z&&-i\ln \left({\frac {{\sqrt {z^{2}-1}}+i}{z}}\right)&&=-i\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {i}{z}}\right)&&\operatorname {Im} \left(\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {i}{z}}\right)\right)\\\end{aligned}}} The particular form of the simplified expression can cause the output to differ from the usual principal branch of each of the inverse trig functions. The formulations given will output the usual principal branch when using the Im ⁡ ( ln ⁡ z ) ∈ ( − π , π ] {\displaystyle \operatorname {Im} \left(\ln z\right)\in (-\pi ,\pi ]} and Re ⁡ ( z ) ≥ 0 {\displaystyle \operatorname {Re} \left({\sqrt {z}}\right)\geq 0} principal branch for every function except arccotangent in the θ {\displaystyle \theta } column. Arccotangent in the θ {\displaystyle \theta } column will output on its usual principal branch by using the Im ⁡ ( ln ⁡ z ) ∈ [ 0 , 2 π ) {\displaystyle \operatorname {Im} \left(\ln z\right)\in [0,2\pi )} and Im ⁡ ( z ) ≥ 0 {\displaystyle \operatorname {Im} \left({\sqrt {z}}\right)\geq 0} convention. In this sense, all of the inverse trig functions can be thought of as specific cases of the complex-valued log function. Since these definitions work for any complex-valued z {\displaystyle z} , they allow for hyperbolic angles as outputs and can be used to further define the inverse hyperbolic functions.
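Away from the branch cuts, the logarithmic forms can be checked against the principal-branch implementations in Python's cmath module, which uses the Im(ln z) ∈ (−π, π] and Re(√z) ≥ 0 conventions. A small sketch with arbitrary sample points:

```python
import cmath

def arcsin_log(z):
    # arcsin(z) = -i ln( sqrt(1 - z^2) + i z )
    return -1j * cmath.log(cmath.sqrt(1 - z * z) + 1j * z)

def arctan_log(z):
    # arctan(z) = -(i/2) ln( (1 + i z) / (1 - i z) )
    return -0.5j * cmath.log((1 + 1j * z) / (1 - 1j * z))

# Sample points chosen off the branch cuts of both functions.
for z in [0.5, -0.3 + 0.4j, 0.9j, 1.5 + 0.1j]:
    assert abs(arcsin_log(z) - cmath.asin(z)) < 1e-12
    assert abs(arctan_log(z) - cmath.atan(z)) < 1e-12
```

On the cuts themselves, agreement depends on how the library treats signed zeros, so the comparison is restricted to points off the cuts.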
It's possible to algebraically prove these relations by starting with the exponential forms of the trigonometric functions and solving for the inverse function. ==== Example proof ==== sin ⁡ ( ϕ ) = z ϕ = arcsin ⁡ ( z ) {\displaystyle {\begin{aligned}\sin(\phi )&=z\\\phi &=\arcsin(z)\end{aligned}}} Using the exponential definition of sine, and letting ξ = e i ϕ , {\displaystyle \xi =e^{i\phi },} z = e i ϕ − e − i ϕ 2 i 2 i z = ξ − 1 ξ 0 = ξ 2 − 2 i z ξ − 1 ξ = i z ± 1 − z 2 ϕ = − i ln ⁡ ( i z ± 1 − z 2 ) {\displaystyle {\begin{aligned}z&={\frac {e^{i\phi }-e^{-i\phi }}{2i}}\\[10mu]2iz&=\xi -{\frac {1}{\xi }}\\[5mu]0&=\xi ^{2}-2iz\xi -1\\[5mu]\xi &=iz\pm {\sqrt {1-z^{2}}}\\[5mu]\phi &=-i\ln \left(iz\pm {\sqrt {1-z^{2}}}\right)\end{aligned}}} (the positive branch is chosen) ϕ = arcsin ⁡ ( z ) = − i ln ⁡ ( i z + 1 − z 2 ) {\displaystyle \phi =\arcsin(z)=-i\ln \left(iz+{\sqrt {1-z^{2}}}\right)} == Applications == === Finding the angle of a right triangle === Inverse trigonometric functions are useful when trying to determine the remaining two angles of a right triangle when the lengths of the sides of the triangle are known. Recalling the right-triangle definitions of sine and cosine, it follows that θ = arcsin ⁡ ( opposite hypotenuse ) = arccos ⁡ ( adjacent hypotenuse ) . {\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right).} Often, the hypotenuse is unknown and would need to be calculated before using arcsine or arccosine using the Pythagorean Theorem: a 2 + b 2 = h 2 {\displaystyle a^{2}+b^{2}=h^{2}} where h {\displaystyle h} is the length of the hypotenuse. Arctangent comes in handy in this situation, as the length of the hypotenuse is not needed. θ = arctan ⁡ ( opposite adjacent ) . {\displaystyle \theta =\arctan \left({\frac {\text{opposite}}{\text{adjacent}}}\right)\,.} For example, suppose a roof drops 8 feet as it runs out 20 feet. 
The roof makes an angle θ with the horizontal, where θ may be computed as follows: θ = arctan ⁡ ( opposite adjacent ) = arctan ⁡ ( rise run ) = arctan ⁡ ( 8 20 ) ≈ 21.8 ∘ . {\displaystyle \theta =\arctan \left({\frac {\text{opposite}}{\text{adjacent}}}\right)=\arctan \left({\frac {\text{rise}}{\text{run}}}\right)=\arctan \left({\frac {8}{20}}\right)\approx 21.8^{\circ }\,.} === In computer science and engineering === ==== Two-argument variant of arctangent ==== The two-argument atan2 function computes the arctangent of y/x given y and x, but with a range of (−π, π]. In other words, atan2(y, x) is the angle between the positive x-axis of a plane and the point (x, y) on it, with positive sign for counter-clockwise angles (upper half-plane, y > 0), and negative sign for clockwise angles (lower half-plane, y < 0). It was first introduced in many computer programming languages, but it is now also common in other fields of science and engineering. In terms of the standard arctan function, that is with range of (−π/2, π/2), it can be expressed as follows: atan2 ⁡ ( y , x ) = { arctan ⁡ ( y x ) x > 0 arctan ⁡ ( y x ) + π y ≥ 0 , x < 0 arctan ⁡ ( y x ) − π y < 0 , x < 0 π 2 y > 0 , x = 0 − π 2 y < 0 , x = 0 undefined y = 0 , x = 0 {\displaystyle \operatorname {atan2} (y,x)={\begin{cases}\arctan \left({\frac {y}{x}}\right)&\quad x>0\\\arctan \left({\frac {y}{x}}\right)+\pi &\quad y\geq 0,\;x<0\\\arctan \left({\frac {y}{x}}\right)-\pi &\quad y<0,\;x<0\\{\frac {\pi }{2}}&\quad y>0,\;x=0\\-{\frac {\pi }{2}}&\quad y<0,\;x=0\\{\text{undefined}}&\quad y=0,\;x=0\end{cases}}} It also equals the principal value of the argument of the complex number x + iy. This limited version of the function above may also be defined using the tangent half-angle formulae as follows: atan2 ⁡ ( y , x ) = 2 arctan ⁡ ( y x 2 + y 2 + x ) {\displaystyle \operatorname {atan2} (y,x)=2\arctan \left({\frac {y}{{\sqrt {x^{2}+y^{2}}}+x}}\right)} provided that either x > 0 or y ≠ 0. 
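The piecewise definition maps directly to code; standard libraries (e.g. Python's math.atan2, following C) use the same (y, x) argument order and range. A sketch:

```python
import math

def atan2_piecewise(y, x):
    # Direct transcription of the case analysis; range is (-pi, pi].
    if x > 0:
        return math.atan(y / x)
    if x < 0:
        # y >= 0 lands in the second quadrant, y < 0 in the third.
        return math.atan(y / x) + math.pi if y >= 0 else math.atan(y / x) - math.pi
    if y > 0:
        return math.pi / 2
    if y < 0:
        return -math.pi / 2
    # Mathematically undefined; note that math.atan2(0.0, 0.0) returns 0.0.
    raise ValueError("atan2(0, 0) is undefined")

# Spot-check one point per quadrant and on the axes against the library.
for y, x in [(1, 1), (1, -1), (-1, -1), (2, 0), (-2, 0), (0, -3)]:
    assert math.isclose(atan2_piecewise(y, x), math.atan2(y, x))
```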
However, this half-angle form fails when x ≤ 0 and y = 0, so it is unsuitable for computational use. The above argument order (y, x) seems to be the most common, and in particular is used in ISO standards such as the C programming language, but a few authors may use the opposite convention (x, y) so some caution is warranted. (See variations at atan2 § Realizations of the function in common computer languages.) ==== Arctangent function with location parameter ==== In many applications the solution y {\displaystyle y} of the equation x = tan ⁡ ( y ) {\displaystyle x=\tan(y)} should come as close as possible to a given value − ∞ < η < ∞ {\displaystyle -\infty <\eta <\infty } . The adequate solution is produced by the parameter-modified arctangent function y = arctan η ⁡ ( x ) := arctan ⁡ ( x ) + π rni ⁡ ( η − arctan ⁡ ( x ) π ) . {\displaystyle y=\arctan _{\eta }(x):=\arctan(x)+\pi \,\operatorname {rni} \left({\frac {\eta -\arctan(x)}{\pi }}\right)\,.} The function rni {\displaystyle \operatorname {rni} } rounds to the nearest integer. ==== Numerical accuracy ==== For angles near 0 and π, arccosine is ill-conditioned, and similarly with arcsine for angles near −π/2 and π/2. Computer applications thus need to consider the stability of inputs to these functions and the sensitivity of their calculations, or use alternate methods. == See also == == Notes == == References == Abramowitz, Milton; Stegun, Irene A., eds. (1972). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover Publications. ISBN 978-0-486-61272-0. == External links == Weisstein, Eric W. "Inverse Tangent". MathWorld.
Wikipedia/Arctangent_function
In statistics, a power transform is a family of functions applied to create a monotonic transformation of data using power functions. It is a data transformation technique used to stabilize variance, make the data more normal distribution-like, improve the validity of measures of association (such as the Pearson correlation between variables), and for other data stabilization procedures. Power transforms are used in multiple fields, including multi-resolution and wavelet analysis, statistical data analysis, medical research, modeling of physical processes, geochemical data analysis, epidemiology and many other clinical, environmental and social research areas. == Definition == The power transformation is defined as a continuous function of power parameter λ, typically given in piece-wise form that makes it continuous at the point of singularity (λ = 0). For data vectors (y1,..., yn) in which each yi > 0, the power transform is y i ( λ ) = { y i λ − 1 λ ( GM ⁡ ( y ) ) λ − 1 , if λ ≠ 0 GM ⁡ ( y ) ln ⁡ y i , if λ = 0 {\displaystyle y_{i}^{(\lambda )}={\begin{cases}{\dfrac {y_{i}^{\lambda }-1}{\lambda (\operatorname {GM} (y))^{\lambda -1}}},&{\text{if }}\lambda \neq 0\\[12pt]\operatorname {GM} (y)\ln {y_{i}},&{\text{if }}\lambda =0\end{cases}}} where GM ⁡ ( y ) = ( ∏ i = 1 n y i ) 1 n = y 1 y 2 ⋯ y n n {\displaystyle \operatorname {GM} (y)=\left(\prod _{i=1}^{n}y_{i}\right)^{\frac {1}{n}}={\sqrt[{n}]{y_{1}y_{2}\cdots y_{n}}}\,} is the geometric mean of the observations y1, ..., yn. The case for λ = 0 {\displaystyle \lambda =0} is the limit as λ {\displaystyle \lambda } approaches 0. To see this, note that y i λ = exp ⁡ ( λ ln ⁡ ( y i ) ) = 1 + λ ln ⁡ ( y i ) + O ( ( λ ln ⁡ ( y i ) ) 2 ) {\displaystyle y_{i}^{\lambda }=\exp({\lambda \ln(y_{i})})=1+\lambda \ln(y_{i})+O((\lambda \ln(y_{i}))^{2})} - using Taylor series. 
Then y i λ − 1 λ = ln ⁡ ( y i ) + O ( λ ) {\displaystyle {\dfrac {y_{i}^{\lambda }-1}{\lambda }}=\ln(y_{i})+O(\lambda )} , and everything but ln ⁡ ( y i ) {\displaystyle \ln(y_{i})} becomes negligible for λ {\displaystyle \lambda } sufficiently small. The inclusion of the (λ − 1)th power of the geometric mean in the denominator simplifies the scientific interpretation of any equation involving y i ( λ ) {\displaystyle y_{i}^{(\lambda )}} , because the units of measurement do not change as λ changes. Box and Cox (1964) introduced the geometric mean into this transformation by first including the Jacobian of the rescaled power transformation y λ − 1 λ {\displaystyle {\frac {y^{\lambda }-1}{\lambda }}} with the likelihood. This Jacobian is as follows: J ( λ ; y 1 , … , y n ) = ∏ i = 1 n | d y i ( λ ) / d y | = ∏ i = 1 n y i λ − 1 = GM ⁡ ( y ) n ( λ − 1 ) {\displaystyle J(\lambda ;y_{1},\ldots ,y_{n})=\prod _{i=1}^{n}|dy_{i}^{(\lambda )}/dy|=\prod _{i=1}^{n}y_{i}^{\lambda -1}=\operatorname {GM} (y)^{n(\lambda -1)}} This allows the normal log likelihood at its maximum to be written as follows: log ⁡ ( L ( μ ^ , σ ^ ) ) = ( − n / 2 ) ( log ⁡ ( 2 π σ ^ 2 ) + 1 ) + n ( λ − 1 ) log ⁡ ( GM ⁡ ( y ) ) = ( − n / 2 ) ( log ⁡ ( 2 π σ ^ 2 / GM ⁡ ( y ) 2 ( λ − 1 ) ) + 1 ) .
{\displaystyle {\begin{aligned}\log({\mathcal {L}}({\hat {\mu }},{\hat {\sigma }}))&=(-n/2)(\log(2\pi {\hat {\sigma }}^{2})+1)+n(\lambda -1)\log(\operatorname {GM} (y))\\[5pt]&=(-n/2)(\log(2\pi {\hat {\sigma }}^{2}/\operatorname {GM} (y)^{2(\lambda -1)})+1).\end{aligned}}} From here, absorbing GM ⁡ ( y ) 2 ( λ − 1 ) {\displaystyle \operatorname {GM} (y)^{2(\lambda -1)}} into the expression for σ ^ 2 {\displaystyle {\hat {\sigma }}^{2}} produces an expression that establishes that minimizing the sum of squares of residuals from y i ( λ ) {\displaystyle y_{i}^{(\lambda )}} is equivalent to maximizing the sum of the normal log likelihood of deviations from ( y λ − 1 ) / λ {\displaystyle (y^{\lambda }-1)/\lambda } and the log of the Jacobian of the transformation. The value at Y = 1 for any λ is 0, and the derivative with respect to Y there is 1 for any λ. Sometimes Y is a version of some other variable scaled to give Y = 1 at some sort of average value. The transformation is a power transformation, but done in such a way as to make it continuous with the parameter λ at λ = 0. It has proved popular in regression analysis, including econometrics. Box and Cox also proposed a more general form of the transformation that incorporates a shift parameter. τ ( y i ; λ , α ) = { ( y i + α ) λ − 1 λ ( GM ⁡ ( y + α ) ) λ − 1 if λ ≠ 0 , GM ⁡ ( y + α ) ln ⁡ ( y i + α ) if λ = 0 , {\displaystyle \tau (y_{i};\lambda ,\alpha )={\begin{cases}{\dfrac {(y_{i}+\alpha )^{\lambda }-1}{\lambda (\operatorname {GM} (y+\alpha ))^{\lambda -1}}}&{\text{if }}\lambda \neq 0,\\\\\operatorname {GM} (y+\alpha )\ln(y_{i}+\alpha )&{\text{if }}\lambda =0,\end{cases}}} which holds if yi + α > 0 for all i. If τ(Y, λ, α) follows a truncated normal distribution, then Y is said to follow a Box–Cox distribution. 
Bickel and Doksum eliminated the need to use a truncated distribution by extending the range of the transformation to all y, as follows: τ ( y i ; λ , α ) = { sgn ⁡ ( y i + α ) | y i + α | λ − 1 λ ( GM ⁡ ( y + α ) ) λ − 1 if λ ≠ 0 , GM ⁡ ( y + α ) sgn ⁡ ( y + α ) ln ⁡ ( y i + α ) if λ = 0 , {\displaystyle \tau (y_{i};\lambda ,\alpha )={\begin{cases}{\dfrac {\operatorname {sgn} (y_{i}+\alpha )|y_{i}+\alpha |^{\lambda }-1}{\lambda (\operatorname {GM} (y+\alpha ))^{\lambda -1}}}&{\text{if }}\lambda \neq 0,\\\\\operatorname {GM} (y+\alpha )\operatorname {sgn} (y+\alpha )\ln(y_{i}+\alpha )&{\text{if }}\lambda =0,\end{cases}}} where sgn(.) is the sign function. This change in definition has little practical import as long as α {\displaystyle \alpha } is less than min ⁡ ( y i ) {\displaystyle \operatorname {min} (y_{i})} , which it usually is. Bickel and Doksum also proved that the parameter estimates are consistent and asymptotically normal under appropriate regularity conditions, though the standard Cramér–Rao lower bound can substantially underestimate the variance when parameter values are small relative to the noise variance. However, this problem of underestimating the variance may not be a substantive problem in many applications. == Box–Cox transformation == The one-parameter Box–Cox transformations are defined as y i ( λ ) = { y i λ − 1 λ if λ ≠ 0 , ln ⁡ y i if λ = 0 , {\displaystyle y_{i}^{(\lambda )}={\begin{cases}{\dfrac {y_{i}^{\lambda }-1}{\lambda }}&{\text{if }}\lambda \neq 0,\\\ln y_{i}&{\text{if }}\lambda =0,\end{cases}}} and the two-parameter Box–Cox transformations as y i ( λ ) = { ( y i + λ 2 ) λ 1 − 1 λ 1 if λ 1 ≠ 0 , ln ⁡ ( y i + λ 2 ) if λ 1 = 0 , {\displaystyle y_{i}^{({\boldsymbol {\lambda }})}={\begin{cases}{\dfrac {(y_{i}+\lambda _{2})^{\lambda _{1}}-1}{\lambda _{1}}}&{\text{if }}\lambda _{1}\neq 0,\\\ln(y_{i}+\lambda _{2})&{\text{if }}\lambda _{1}=0,\end{cases}}} as described in the original article. 
Moreover, the first transformations hold for y i > 0 {\displaystyle y_{i}>0} , and the second for y i > − λ 2 {\displaystyle y_{i}>-\lambda _{2}} . The parameter λ {\displaystyle \lambda } is estimated using the profile likelihood function and goodness-of-fit tests. === Confidence interval === A confidence interval for the Box–Cox transformation parameter can be constructed asymptotically using Wilks's theorem on the profile likelihood function, by finding all the values of λ {\displaystyle \lambda } that fulfill the following restriction: ln ⁡ ( L ( λ ) ) ≥ ln ⁡ ( L ( λ ^ ) ) − 1 2 χ 2 1 , 1 − α . {\displaystyle \ln {\big (}L(\lambda ){\big )}\geq \ln {\big (}L({\hat {\lambda }}){\big )}-{\frac {1}{2}}{\chi ^{2}}_{1,1-\alpha }.} === Example === The BUPA liver data set contains data on liver enzymes ALT and γGT. Suppose we are interested in using log(γGT) to predict ALT. A plot of the data appears in panel (a) of the figure. There appears to be non-constant variance, and a Box–Cox transformation might help. The log-likelihood of the power parameter appears in panel (b). The horizontal reference line is at a distance of χ12/2 from the maximum and can be used to read off an approximate 95% confidence interval for λ. It appears as though a value close to zero would be good, so we take logs. Possibly, the transformation could be improved by adding a shift parameter to the log transformation. Panel (c) of the figure shows the log-likelihood. In this case, the maximum of the likelihood is close to zero, suggesting that a shift parameter is not needed. The final panel shows the transformed data with a superimposed regression line. Note that although Box–Cox transformations can make big improvements in model fit, there are some issues that the transformation cannot help with. In the current example, the data are rather heavy-tailed, so the assumption of normality is not realistic, and a robust regression approach leads to a more precise model.
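For illustration, the one-parameter transform and a grid search over its profile log-likelihood can be sketched with NumPy (the data are simulated and the function names are illustrative; in practice a routine such as scipy.stats.boxcox performs this estimation):

```python
import numpy as np

def box_cox(y, lam):
    """One-parameter Box-Cox transform; requires y > 0."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def profile_loglik(lam, y):
    # Normal profile log-likelihood of lambda (additive constants dropped),
    # including the Jacobian term (lam - 1) * sum(log y).
    yt = box_cox(y, lam)
    return -0.5 * y.size * np.log(yt.var()) + (lam - 1.0) * np.log(y).sum()

rng = np.random.default_rng(0)
y = np.exp(rng.normal(size=500))   # log-normal sample, so the true lambda is 0
grid = np.linspace(-2.0, 2.0, 401)
lam_hat = grid[np.argmax([profile_loglik(lam, y) for lam in grid])]
```

On this simulated sample the maximizer lands near λ = 0, matching the log transformation; a Wilks-type interval can then be read off by thresholding the same profile curve at its maximum minus half the χ² quantile, as in the restriction above.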
=== Econometric application === Economists often characterize production relationships by some variant of the Box–Cox transformation. Consider a common representation of production Q as dependent on services provided by a capital stock K and by labor hours N: τ ( Q ) = α τ ( K ) + ( 1 − α ) τ ( N ) . {\displaystyle \tau (Q)=\alpha \tau (K)+(1-\alpha )\tau (N).\,} Solving for Q by inverting the Box–Cox transformation we find Q = ( α K λ + ( 1 − α ) N λ ) 1 / λ , {\displaystyle Q={\big (}\alpha K^{\lambda }+(1-\alpha )N^{\lambda }{\big )}^{1/\lambda },\,} which is known as the constant elasticity of substitution (CES) production function. The CES production function is a homogeneous function of degree one. When λ = 1, this produces the linear production function: Q = α K + ( 1 − α ) N . {\displaystyle Q=\alpha K+(1-\alpha )N.\,} When λ → 0 this produces the famous Cobb–Douglas production function: Q = K α N 1 − α . {\displaystyle Q=K^{\alpha }N^{1-\alpha }.\,} === Activities and demonstrations === The SOCR resource pages contain a number of hands-on interactive activities demonstrating the Box–Cox (power) transformation using Java applets and charts. These directly illustrate the effects of this transform on Q–Q plots, X–Y scatterplots, time-series plots and histograms. == Yeo–Johnson transformation == The Yeo–Johnson transformation also allows for zero and negative values of y {\displaystyle y} . λ {\displaystyle \lambda } can be any real number, where λ = 1 {\displaystyle \lambda =1} produces the identity transformation.
The transformation law reads: y i ( λ ) = { ( ( y i + 1 ) λ − 1 ) / λ if λ ≠ 0 , y ≥ 0 ln ⁡ ( y i + 1 ) if λ = 0 , y ≥ 0 − ( ( − y i + 1 ) ( 2 − λ ) − 1 ) / ( 2 − λ ) if λ ≠ 2 , y < 0 − ln ⁡ ( − y i + 1 ) if λ = 2 , y < 0 {\displaystyle y_{i}^{(\lambda )}={\begin{cases}((y_{i}+1)^{\lambda }-1)/\lambda &{\text{if }}\lambda \neq 0,y\geq 0\\[4pt]\ln(y_{i}+1)&{\text{if }}\lambda =0,y\geq 0\\[4pt]-((-y_{i}+1)^{(2-\lambda )}-1)/(2-\lambda )&{\text{if }}\lambda \neq 2,y<0\\[4pt]-\ln(-y_{i}+1)&{\text{if }}\lambda =2,y<0\end{cases}}} == Box-Tidwell transformation == The Box-Tidwell transformation is a statistical technique used to assess and correct non-linearity between predictor variables and the logit in a generalized linear model, particularly in logistic regression. This transformation is useful when the relationship between the independent variables and the outcome is non-linear and cannot be adequately captured by the standard model. === Overview === The Box-Tidwell transformation was developed by George E. P. Box and Paul W. Tidwell in 1962 as an extension of Box-Cox transformations, which are applied to the dependent variable. Unlike the Box-Cox transformation, however, the Box-Tidwell transformation is applied to the independent variables in regression models. It is often used when the assumption of linearity between the predictors and the outcome is violated. === Method === The general idea behind the Box-Tidwell transformation is to apply a power transformation to each independent variable Xi in the regression model: X i ′ = X i λ {\displaystyle X_{i}'=X_{i}^{\lambda }} where λ {\displaystyle \lambda } is a parameter estimated from the data. If the estimated λ {\displaystyle \lambda } is significantly different from 1, this indicates a non-linear relationship between Xi and the logit, and the transformation improves the model fit.
The Box-Tidwell test is typically performed by augmenting the regression model with terms like X i log ⁡ ( X i ) {\displaystyle X_{i}\log(X_{i})} and testing the significance of the coefficients. If significant, this suggests that a transformation should be applied to achieve a linear relationship between the predictor and the logit. == Applications == === Stabilizing Continuous Predictors === The transformation is beneficial in logistic regression or proportional hazards models where non-linearity in continuous predictors can distort the relationship with the dependent variable. It is a flexible tool that allows the researcher to fit a more appropriate model to the data without guessing the relationship's functional form in advance. === Verifying Linearity in Logistic Regression === In logistic regression, a key assumption is that continuous independent variables exhibit a linear relationship with the logit of the dependent variable. Violations of this assumption can lead to biased estimates and reduced model performance. The Box-Tidwell transformation is a method used to assess and correct such violations by determining whether a continuous predictor requires transformation to achieve linearity with the logit. ==== Method for Verifying Linearity ==== The Box-Tidwell transformation introduces an interaction term between each continuous variable Xi and its natural logarithm log ⁡ ( X i ) {\displaystyle \log(X_{i})} : X i log ⁡ ( X i ) {\displaystyle X_{i}\log(X_{i})} This term is included in the logistic regression model to test whether the relationship between Xi and the logit is non-linear. A statistically significant coefficient for this interaction term indicates a violation of the linearity assumption, suggesting the need for a transformation of the predictor. In that case, the Box-Tidwell transformation provides an appropriate power transformation to linearize the relationship, thereby improving model accuracy and validity.
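The augmented-model check can be sketched end-to-end with NumPy (the simulated data and the plain Newton-Raphson fitter are illustrative, not a reference implementation):

```python
import numpy as np

def logistic_fit(X, y, iters=30):
    """Maximum-likelihood logistic regression via Newton-Raphson.
    Returns the coefficients and their estimated covariance matrix."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        t = np.clip(X @ beta, -30, 30)          # guard against overflow in exp
        p = 1.0 / (1.0 + np.exp(-t))
        H = X.T @ (X * (p * (1 - p))[:, None])  # observed information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta, np.linalg.inv(H)

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 5.0, 2000)
# Outcome whose true logit is linear in x**2, so linearity in x is violated.
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-(x ** 2 - 6.0)))).astype(float)

# Augment the model with the Box-Tidwell interaction term x * log(x).
X = np.column_stack([np.ones_like(x), x, x * np.log(x)])
beta, cov = logistic_fit(X, y)
z_wald = beta[2] / np.sqrt(cov[2, 2])  # Wald statistic for the x*log(x) term
```

A |z| well above 2 flags the non-linearity, after which the predictor would be transformed and the base model refit.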
Conversely, non-significant results support the assumption of linearity. ==== Limitations ==== One limitation of the Box-Tidwell transformation is that it requires strictly positive values of the independent variables, since the term X i log ⁡ ( X i ) {\displaystyle X_{i}\log(X_{i})} is otherwise undefined. If the data contain zero or negative values, the transformation cannot be applied directly without first modifying the variables (e.g., adding a constant). == Notes == == References == Box, George E. P.; Cox, D. R. (1964). "An analysis of transformations". Journal of the Royal Statistical Society, Series B. 26 (2): 211–252. JSTOR 2984418. MR 0192611. Carroll, R. J.; Ruppert, D. (1981). "On prediction and the power transformation family" (PDF). Biometrika. 68 (3): 609–615. doi:10.1093/biomet/68.3.609. DeGroot, M. H. (1987). "A Conversation with George Box" (PDF). Statistical Science. 2 (3): 239–258. doi:10.1214/ss/1177013223. Handelsman, D. J. (2002). "Optimal Power Transformations for Analysis of Sperm Concentration and Other Semen Variables". Journal of Andrology. 23 (5). Gluzman, S.; Yukalov, V. I. (2006). "Self-similar power transforms in extrapolation problems". Journal of Mathematical Chemistry. 39 (1): 47–56. arXiv:cond-mat/0606104. Bibcode:2006cond.mat..6104G. doi:10.1007/s10910-005-9003-7. S2CID 118965098. Howarth, R. J.; Earle, S. A. M. (1979). "Application of a generalized power transformation to geochemical data". Journal of the International Association for Mathematical Geology. 11 (1): 45–62. doi:10.1007/BF01043245. S2CID 121582755. Box, G. E. P.; Tidwell, P. W. (1962). "Transformation of the Independent Variables". Technometrics. 4: 531–550. doi:10.1080/00401706.1962.10490038. == External links == Nishii, R. (2001) [1994], "Box–Cox transformation", Encyclopedia of Mathematics, EMS Press Sanford Weisberg, Yeo-Johnson Power Transformations
Wikipedia/Box–Cox_transformation
RMI, or Rocky Mountain Institute, is a global, independent, non-partisan non-profit organization co-founded in the United States by Amory Lovins. As of 2025, RMI's stated mission is to transform "global energy systems through market-driven solutions to secure a prosperous, resilient, clean energy future for all." Established in 1982, RMI has grown into a broad-based institution that is active in more than 60 countries with over 700 staff and annual revenues of $170 million in 2023–24. RMI works with businesses, policymakers, financial institutions, local communities, and other partners to drive investment in clean energy solutions. == History == By 1978, experimental physicist Amory Lovins had published many books and consulted globally. Lovins is the leading proponent of the soft energy path. In 1979, Lovins married L. Hunter Sheldon, a lawyer, forester, and social scientist. Hunter received her undergraduate degree in sociology and political studies from Pitzer College, and her J.D. from Loyola Marymount's School of Law. In 1982, Amory and Hunter founded Rocky Mountain Institute, based in Colorado. Together with a group of colleagues, the Lovinses fostered efficient resource use and policy development that they believed would promote global security. RMI ultimately grew into an organization with a staff of around fifty. By the mid-1980s, the Lovinses were featured on major network TV programs, such as 60 Minutes. The Lovinses described the "hard energy path" as involving inefficient liquid-fuel automotive transport, as well as giant centralized electricity-generating facilities, often burning fossil fuels such as coal or petroleum, or harnessing a fission reaction, all greatly complicated by electricity wastage and loss.
The "soft energy path", which they wholly preferred, involves efficient use of energy, diversity of energy production methods (matched in scale and quality to end uses), and special reliance on "soft technologies" (alternative technology) such as solar, wind, biofuels, and geothermal. According to the institute, large-scale electricity production facilities had an important place, but it was a place that they were already filling in the middle 1970s; in general, more would not be needed. In a 1989 speech, Amory Lovins introduced the related concept of Negawatt power, in which creating a market for trading increased efficiency could supply additional electrical energy to consumers without building more power plants or otherwise adding generation capacity. In the 1990s, RMI convened a team of designers and engineers to develop a super-efficient prototype automobile, which they dubbed the Hypercar. In December 2014, RMI merged with Carbon War Room, an organization with similar goals but a different approach. In June 2017, RMI merged with WattTime, an organization providing real-time power plant data to consumer devices for automatic dispatchable power consumption. In 2021, RMI launched Canary Media, a nonprofit newsroom covering the clean energy transition. The same year, the Rocky Mountain Institute rebranded itself as RMI. == Programs == RMI operates programs in many countries: Carbon-Free Electricity; Carbon-Free Buildings; Carbon-Free Mobility; Climate-Aligned Industries; Breakthrough Technologies; Climate Intelligence; Urban Transformation; Strategic Analysis & Engagement; Global South; China Program; India Program; and US Program. == Electric vehicles == In January 2008, led by John E. Waters, Bright Automotive launched from RMI with the goal of building on the work of a consortium of organizations, including Alcoa, Google.org, Johnson Controls and the Turner Foundation.
With its Bright IDEA project, Bright Automotive sought to develop a new 100 miles per US gallon (2.4 L/100 km; 120 mpg‑imp) plug-in hybrid electric (PHEV) fleet vehicle. It launched Bright eSolutions to consult on engineering, design, powertrain, battery technology and plug-in hybrid conversion technology services. Bright Automotive secured a conversion contract with the U.S. Army Tank-automotive and Armaments Command (TACOM) to convert military non-combat vehicles into a parallel PHEV for evaluation, including V2G testing. The venture failed. Advanced Energy, in partnership with RMI, announced a Request for Information (RFI) for Electric Vehicle Supply Equipment (EVSE) specific to charging stations for plug-in electric vehicles (EVs). == Books == Books published by RMI include: Winning the Oil Endgame: Innovation for Profit, Jobs and Security (2005) ISBN 1-84407-194-4 (Available Online in PDF) Small is Profitable: The Hidden Economic Benefits of Making Electrical Resources the Right Size (2003) ISBN 1-881071-07-3 Natural Capitalism: Creating the Next Industrial Revolution (2000) ISBN 1-85383-763-6 Reinventing Fire: Bold Business Solutions for the New Energy Era (2011) ISBN 978-1-60358-371-8. == Recognition == Co-founder Amory Lovins received many awards. == See also == Association négaWatt E. Kyle Datta Negawatt power Plug-in Hybrid Electric Vehicle Soft energy path Soft energy technology Transition town == References == == External links == RMI YouTube channel
Wikipedia/RMI_(energy_organization)
In biochemistry and pharmacology, the Hill equation refers to two closely related equations that reflect the binding of ligands to macromolecules, as a function of the ligand concentration. A ligand is "a substance that forms a complex with a biomolecule to serve a biological purpose", and a macromolecule is a very large molecule, such as a protein, with a complex structure of components. Protein-ligand binding typically changes the structure of the target protein, thereby changing its function in a cell. The distinction between the two Hill equations is whether they measure occupancy or response. The Hill–Langmuir equation reflects the occupancy of macromolecules: the fraction that is saturated or bound by the ligand. This equation is formally equivalent to the Langmuir isotherm. Conversely, the Hill equation proper reflects the cellular or tissue response to the ligand: the physiological output of the system, such as muscle contraction. The Hill equation was originally formulated by Archibald Hill in 1910 to describe the sigmoidal O2 binding curve of hemoglobin. The binding of a ligand to a macromolecule is often enhanced if there are already other ligands present on the same macromolecule (this is known as cooperative binding). The Hill equation is useful for determining the degree of cooperativity of the ligand(s) binding to the enzyme or receptor. The Hill coefficient provides a way to quantify the degree of interaction between ligand binding sites. The Hill equation (for response) is important in the construction of dose-response curves.
== Proportion of ligand-bound receptors == The Hill equation is commonly expressed in the following ways: θ = [ L ] n K d + [ L ] n = [ L ] n ( K A ) n + [ L ] n = 1 1 + ( K A [ L ] ) n {\displaystyle {\begin{aligned}\theta &={[{\ce {L}}]^{n} \over K_{d}+[{\ce {L}}]^{n}}\\&={[{\ce {L}}]^{n} \over (K_{A})^{n}+[{\ce {L}}]^{n}}\\&={1 \over 1+\left({K_{A} \over [{\ce {L}}]}\right)^{n}}\end{aligned}}} , where θ {\displaystyle \theta } is the fraction of the receptor protein concentration that is bound by the ligand, [ L ] {\displaystyle {\ce {[L]}}} is the total ligand concentration, K d {\displaystyle K_{d}} is the apparent dissociation constant derived from the law of mass action, K A {\displaystyle K_{A}} is the ligand concentration producing half occupation, n {\displaystyle n} is the Hill coefficient. The special case where n = 1 {\displaystyle n=1} is a Monod equation. === Constants === In pharmacology, θ {\displaystyle \theta } is often written as p AR {\displaystyle p_{{\ce {AR}}}} , where A {\displaystyle {\ce {A}}} is the ligand, equivalent to L, and R {\displaystyle {\ce {R}}} is the receptor. θ {\displaystyle \theta } can be expressed in terms of the total amount of receptor and ligand-bound receptor concentrations: θ = [ LR ] [ R total ] {\displaystyle \theta ={\frac {\ce {[LR]}}{\ce {[R_{\rm {total}}]}}}} . K d {\displaystyle K_{d}} is equal to the ratio of the dissociation rate of the ligand-receptor complex to its association rate ( K d = k d k a {\textstyle K_{\rm {d}}={k_{\rm {d}} \over k_{\rm {a}}}} ). Kd is the equilibrium constant for dissociation. K A {\textstyle K_{A}} is defined so that ( K A ) n = K d = k d k a {\textstyle (K_{A})^{n}=K_{\rm {d}}={k_{\rm {d}} \over k_{\rm {a}}}} , this is also known as the microscopic dissociation constant and is the ligand concentration occupying half of the binding sites. In recent literature, this constant is sometimes referred to as K D {\textstyle K_{D}} . 
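In code, the occupancy form above is a one-liner. The following is a minimal sketch (the function name is illustrative, not standard):

```python
def hill_occupancy(L, K_A, n):
    """Fraction of macromolecules bound at free ligand concentration L,
    using theta = 1 / (1 + (K_A / L)^n)."""
    return 1.0 / (1.0 + (K_A / L) ** n)

# At L = K_A, half of the binding sites are occupied for any n,
# and with n = 1 the expression reduces to the Monod form L / (K_A + L).
```

Larger n makes the curve switch more sharply from low to high occupancy around L = K_A, which is the "ultrasensitivity" discussed later in the article.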
=== Gaddum equation === The Gaddum equation is a further generalisation of the Hill equation, incorporating the presence of a reversible competitive antagonist. The Gaddum equation is derived similarly to the Hill equation but with two equilibria: both the ligand with the receptor and the antagonist with the receptor. Hence, the Gaddum equation has two constants: the equilibrium constant of the ligand and that of the antagonist. === Hill plot === The Hill plot is the rearrangement of the Hill equation into a straight line. Taking the reciprocal of both sides of the Hill equation, rearranging, and inverting again yields: θ 1 − θ = [ L ] n K d = [ L ] n ( K A ) n {\displaystyle {\theta \over 1-\theta }={[{\ce {L}}]^{n} \over K_{d}}={[{\ce {L}}]^{n} \over (K_{A})^{n}}} . Taking the logarithm of both sides of the equation leads to an alternative formulation of the Hill–Langmuir equation: log ⁡ ( θ 1 − θ ) = n log ⁡ [ L ] − log ⁡ K d = n log ⁡ [ L ] − n log ⁡ K A {\displaystyle {\begin{aligned}\log \left({\theta \over 1-\theta }\right)&=n\log {[{\ce {L}}]}-\log {K_{d}}\\&=n\log {[{\ce {L}}]}-n\log {K_{A}}\end{aligned}}} . This last form of the Hill equation is advantageous because a plot of log ⁡ ( θ 1 − θ ) {\textstyle \log \left({\theta \over 1-\theta }\right)} versus log ⁡ [ L ] {\displaystyle \log {[{\ce {L}}]}} yields a linear plot, which is called a Hill plot. Because the slope of a Hill plot is equal to the Hill coefficient for the biochemical interaction, the slope is denoted by n H {\displaystyle n_{H}} . A slope greater than one thus indicates positively cooperative binding between the receptor and the ligand, while a slope less than one indicates negatively cooperative binding. Transformations of equations into linear forms such as this were very useful before the widespread use of computers, as they allowed researchers to determine parameters by fitting lines to data.
However, these transformations affect error propagation, and this may result in undue weight to error in data points near 0 or 1. This impacts the parameters of linear regression lines fitted to the data. Furthermore, the use of computers enables more robust analysis involving nonlinear regression. == Tissue response == A distinction should be made between quantification of drugs binding to receptors and drugs producing responses. There may not necessarily be a linear relationship between the two values. In contrast to this article's previous definition of the Hill equation, the IUPHAR defines the Hill equation in terms of the tissue response ( E ) {\displaystyle (E)} , as E E m a x = [ A ] n EC 50 n + [ A ] n = 1 1 + ( EC 50 [ A ] ) n {\displaystyle {\begin{aligned}{\frac {E}{E_{\mathrm {max} }}}&={\frac {[A]^{n}}{{\text{EC}}_{50}^{n}+[A]^{n}}}\\&={\frac {1}{1+\left({\frac {{\text{EC}}_{50}}{[A]}}\right)^{n}}}\end{aligned}}} where [ A ] {\displaystyle {\ce {[A]}}} is the drug concentration, n {\displaystyle n} is the Hill coefficient, and EC 50 {\displaystyle {\text{EC}}_{50}} is the drug concentration that produces a 50% maximal response. Dissociation constants (in the previous section) relate to ligand binding, while EC 50 {\displaystyle {\text{EC}}_{50}} reflects tissue response. This form of the equation can reflect tissue/cell/population responses to drugs and can be used to generate dose response curves. The relationship between K d {\displaystyle K_{d}} and EC50 may be quite complex as a biological response will be the sum of myriad factors; a drug will have a different biological effect if more receptors are present, regardless of its affinity. The Del-Castillo Katz model is used to relate the Hill equation to receptor activation by including a second equilibrium of the ligand-bound receptor to an activated form of the ligand-bound receptor. 
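To connect the response form with the Hill plot described earlier: on noise-free synthetic data, the slope and intercept of log(E/(1−E)) versus log[A] recover n and EC50 exactly. A minimal sketch (the function name and the synthetic data are illustrative; real data would call for nonlinear regression, as noted above):

```python
import math

def hill_plot_fit(concs, responses):
    """Estimate (n, EC50) by least squares on the Hill plot:
    log(E/(1-E)) = n*log(A) - n*log(EC50), with E the fractional response."""
    xs = [math.log(a) for a in concs]
    ys = [math.log(e / (1.0 - e)) for e in responses]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return slope, math.exp(-intercept / slope)   # (n, EC50)

# Synthetic noise-free dose-response data generated with n = 2, EC50 = 10:
concs = [1.0, 3.0, 10.0, 30.0, 100.0]
resp = [a ** 2 / (10.0 ** 2 + a ** 2) for a in concs]
n_hat, ec50_hat = hill_plot_fit(concs, resp)
```

Because the synthetic data satisfy the Hill equation exactly, the fit returns the generating parameters; with noisy data the error-propagation caveats above apply.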
Statistical analysis of response as a function of stimulus may be performed by regression methods such as the probit model or logit model, or other methods such as the Spearman–Kärber method. Empirical models based on nonlinear regression are usually preferred over the use of some transformation of the data that linearizes the dose-response relationship. == Hill coefficient == The Hill coefficient is a measure of ultrasensitivity (i.e., how steep the response curve is). The Hill coefficient, n {\displaystyle n} or n H {\displaystyle n_{H}} , may describe cooperativity (or possibly other biochemical properties, depending on the context in which the Hill equation is being used). When appropriate, the value of the Hill coefficient describes the cooperativity of ligand binding in the following way: n > 1 {\displaystyle n>1} . Positively cooperative binding: Once one ligand molecule is bound to the enzyme, its affinity for other ligand molecules increases. For example, the Hill coefficient of oxygen binding to haemoglobin (an example of positive cooperativity) falls within the range of 1.7–3.2. n < 1 {\displaystyle n<1} . Negatively cooperative binding: Once one ligand molecule is bound to the enzyme, its affinity for other ligand molecules decreases. n = 1 {\displaystyle n=1} . Noncooperative (completely independent) binding: The affinity of the enzyme for a ligand molecule is not dependent on whether or not other ligand molecules are already bound. When n = 1, the binding follows Michaelis–Menten kinetics, in which K D = K A = K M {\textstyle K_{D}=K_{A}=K_{M}} , the Michaelis–Menten constant. The Hill coefficient can be calculated approximately in terms of the cooperativity index of Taketa and Pogell as follows: n = log 10 ⁡ ( 81 ) log 10 ⁡ ( EC 90 / EC 10 ) {\displaystyle n={\frac {\log _{10}(81)}{\log _{10}({\ce {EC90}}/{\ce {EC10}})}}} .
where EC 90 {\displaystyle {\ce {EC90}}} and EC 10 {\displaystyle {\ce {EC10}}} are the input values needed to produce 90% and 10% of the maximal response, respectively. == Reversible form == The most common form of the Hill equation is its irreversible form. However, when building computational models a reversible form is often required in order to model product inhibition. For this reason, Hofmeyr and Cornish-Bowden devised the reversible Hill equation. == Relationship to the elasticity coefficients == The Hill coefficient is also intimately connected to the elasticity coefficient, where the Hill coefficient can be shown to equal: n = ε s v 1 1 − θ {\displaystyle n=\varepsilon _{s}^{v}{\frac {1}{1-\theta }}} where θ {\displaystyle \theta } is the fractional saturation, E S / E t {\displaystyle ES/E_{t}} , and ε s v {\displaystyle \varepsilon _{s}^{v}} the elasticity coefficient. This is derived by taking the slope of the Hill equation: n = d log ⁡ θ 1 − θ d log ⁡ s {\displaystyle n={\frac {d\log {\frac {\theta }{1-\theta }}}{d\log s}}} and expanding the slope using the quotient rule. The result shows that the elasticity can never exceed n {\displaystyle n} since the equation above can be rearranged to: ε s v = n ( 1 − θ ) {\displaystyle \varepsilon _{s}^{v}=n(1-\theta )} == Applications == The Hill equation is used extensively in pharmacology to quantify the functional parameters of a drug and is also used in other areas of biochemistry. The Hill equation can be used to describe dose-response relationships, for example ion channel open-probability (P-open) vs. ligand concentration. === Regulation of gene transcription === The Hill equation can be applied in modelling the rate at which a gene product is produced when its parent gene is being regulated by transcription factors (e.g., activators and/or repressors).
Doing so is appropriate when a gene is regulated by multiple binding sites for transcription factors, in which case the transcription factors may bind the DNA in a cooperative fashion. If the production of protein from gene X is up-regulated (activated) by a transcription factor Y, then the rate of production of protein X can be modeled as a differential equation in terms of the concentration of activated Y protein: d d t [ X p r o d u c e d ] = k ⋅ [ Y a c t i v e ] n ( K A ) n + [ Y a c t i v e ] n {\displaystyle {\mathrm {d} \over \mathrm {d} t}[{\rm {X_{produced}}}]=k\ \cdot {{[{\rm {Y_{active}}}]^{\mathit {n}}} \over {(K_{A})^{n}\ +\ {[{\rm {Y_{active}}}]^{\mathit {n}}}}}} , where k is the maximal transcription rate of gene X. Likewise, if the production of protein from gene Y is down-regulated (repressed) by a transcription factor Z, then the rate of production of protein Y can be modeled as a differential equation in terms of the concentration of activated Z protein: d d t [ Y p r o d u c e d ] = k ⋅ ( K A ) n ( K A ) n + [ Z a c t i v e ] n {\displaystyle {\mathrm {d} \over \mathrm {d} t}[{\rm {Y_{produced}}}]=k\ \cdot {{(K_{A})^{\mathit {n}}} \over {(K_{A})^{n}\ +\ {[{\rm {Z_{active}}}]^{\mathit {n}}}}}} , where k is the maximal transcription rate of gene Y. == Limitations == Because of its assumption that ligand molecules bind to a receptor simultaneously, the Hill equation has been criticized as a physically unrealistic model. Moreover, the Hill coefficient should not be considered a reliable approximation of the number of cooperative ligand binding sites on a receptor except when the binding of the first and subsequent ligands results in extreme positive cooperativity. Unlike more complex models, the relatively simple Hill equation provides little insight into underlying physiological mechanisms of protein-ligand interactions. 
This simplicity, however, is what makes the Hill equation a useful empirical model, since its use requires little a priori knowledge about the properties of either the protein or ligand being studied. Nevertheless, other, more complex models of cooperative binding have been proposed. For more information and examples of such models, see Cooperative binding. Global sensitivity measures such as the Hill coefficient do not characterise the local behaviour of sigmoidal curves; such features are instead captured by the response coefficient measure. Altszyler et al. (2017) have shown that these two ultrasensitivity measures can be linked. == See also == Binding coefficient Bjerrum plot Cooperative binding Gompertz curve Langmuir adsorption model Logistic function Michaelis–Menten kinetics Monod equation == Notes == == References == == Further reading == Dorland's Illustrated Medical Dictionary Coval, ML (December 1970). "Analysis of Hill interaction coefficients and the invalidity of the Kwon and Brown equation". J. Biol. Chem. 245 (23): 6335–6. doi:10.1016/S0021-9258(18)62614-6. PMID 5484812. d'A Heck, Henry (1971). "Statistical theory of cooperative binding to proteins. Hill equation and the binding potential". J. Am. Chem. Soc. 93 (1): 23–29. Bibcode:1971JAChS..93...23H. doi:10.1021/ja00730a004. PMID 5538860. Atkins, Gordon L. (1973). "A simple digital-computer program for estimating the parameter of the Hill Equation". Eur. J. Biochem. 33 (1): 175–180. doi:10.1111/j.1432-1033.1973.tb02667.x. PMID 4691349. Endrenyi, Laszlo; Kwong, F. H. F.; Fajszi, Csaba (1975). "Evaluation of Hill slopes and Hill coefficients when the saturation binding or velocity is not known". Eur. J. Biochem. 51 (2): 317–328. doi:10.1111/j.1432-1033.1975.tb03931.x. PMID 1149734. Voet, Donald; Voet, Judith G. (2004). Biochemistry. Weiss, J. N. (1997). "The Hill equation revisited: uses and misuses". FASEB Journal.
11 (11): 835–841. doi:10.1096/fasebj.11.11.9285481. PMID 9285481. S2CID 827335. Kurganov, B. I.; Lobanov, A. V. (2001). "Criterion for Hill equation validity for description of biosensor calibration curves". Anal. Chim. Acta. 427 (1): 11–19. Bibcode:2001AcAC..427...11K. doi:10.1016/S0003-2670(00)01167-3. Goutelle, Sylvain; Maurin, Michel; Rougier, Florent; Barbaut, Xavier; Bourguignon, Laurent; Ducher, Michel; Maire, Pascal (2008). "The Hill equation: a review of its capabilities in pharmacological modelling". Fundamental & Clinical Pharmacology. 22 (6): 633–648. doi:10.1111/j.1472-8206.2008.00633.x. PMID 19049668. S2CID 4979109. Gesztelyi R; Zsuga J; Kemeny-Beke A; Varga B; Juhasz B; Tosaki A (2012). "The Hill equation and the origin of quantitative pharmacology". Archive for History of Exact Sciences. 66 (4): 427–38. doi:10.1007/s00407-012-0098-5. S2CID 122929930. Colquhoun D (2006). "The quantitative analysis of drug-receptor interactions: a short history". Trends Pharmacol Sci. 27 (3): 149–57. doi:10.1016/j.tips.2006.01.008. PMID 16483674. Rang HP (2006). "The receptor concept: pharmacology's big idea". Br J Pharmacol. 147 (Suppl 1): S9–16. doi:10.1038/sj.bjp.0706457. PMC 1760743. PMID 16402126. == External links == Hill equation calculator
Wikipedia/Hill–Langmuir_equation
IEEE Transactions on Neural Networks and Learning Systems is a monthly peer-reviewed scientific journal published by the IEEE Computational Intelligence Society. It covers the theory, design, and applications of neural networks and related learning systems. According to the Journal Citation Reports, the journal had a 2021 impact factor of 14.255. The journal was established in 1990 by the IEEE Neural Networks Council. == Editors-in-chief == Yongduan Song (Chongqing University), 2022–present Haibo He (University of Rhode Island), 2016–2021 Derong Liu (University of Illinois), 2010–2015 Marios M. Polycarpou (University of Cyprus), 2004–2009 Jacek M. Zurada (University of Louisville), 1998–2003 Robert J. Marks II (Baylor University), 1992–1997 Michael W. Roth (Johns Hopkins University), 1991 Herbert E. Rauch (Lockheed Palo Alto Research Laboratory), 1990 == References == == External links == Official website
Wikipedia/IEEE_Transactions_on_Neural_Networks
In the context of artificial neural networks, the rectifier or ReLU (rectified linear unit) activation function is an activation function defined as the non-negative part of its argument, i.e., the ramp function: ReLU ⁡ ( x ) = x + = max ( 0 , x ) = x + | x | 2 = { x if x > 0 , 0 x ≤ 0 {\displaystyle \operatorname {ReLU} (x)=x^{+}=\max(0,x)={\frac {x+|x|}{2}}={\begin{cases}x&{\text{if }}x>0,\\0&x\leq 0\end{cases}}} where x {\displaystyle x} is the input to a neuron. This is analogous to half-wave rectification in electrical engineering. ReLU is one of the most popular activation functions for artificial neural networks, and finds application in computer vision and speech recognition using deep neural nets and computational neuroscience. == History == The ReLU was first used by Alston Householder in 1941 as a mathematical abstraction of biological neural networks. Kunihiko Fukushima in 1969 used ReLU in the context of visual feature extraction in hierarchical neural networks. Thirty years later, Hahnloser et al. argued that ReLU approximates the biological relationship between neural firing rates and input current, in addition to enabling recurrent neural network dynamics to stabilise under weaker criteria. Prior to 2010, most activation functions used were the logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more numerically efficient counterpart, the hyperbolic tangent. Around 2010, the use of ReLU became common again. Jarrett et al. (2009) noted that rectification by either absolute or ReLU (which they called "positive part") was critical for object recognition in convolutional neural networks (CNNs), specifically because it allows average pooling without neighboring filter outputs cancelling each other out. They hypothesized that the use of sigmoid or tanh was responsible for poor performance in previous CNNs. 
Nair and Hinton (2010) made a theoretical argument that the softplus activation function should be used, in that the softplus function numerically approximates the sum of an exponential number of linear models that share parameters. They then proposed ReLU as a good approximation to it. Specifically, they began by considering a single binary neuron in a Boltzmann machine that takes x {\displaystyle x} as input, and produces 1 as output with probability σ ( x ) = 1 1 + e − x {\displaystyle \sigma (x)={\frac {1}{1+e^{-x}}}} . They then considered extending its range of output by making infinitely many copies of it X 1 , X 2 , X 3 , … {\displaystyle X_{1},X_{2},X_{3},\dots } , all taking the same input, offset by amounts 0.5 , 1.5 , 2.5 , … {\displaystyle 0.5,1.5,2.5,\dots } ; their outputs are then added together as ∑ i = 1 ∞ X i {\displaystyle \sum _{i=1}^{\infty }X_{i}} . They then demonstrated that ∑ i = 1 ∞ X i {\displaystyle \sum _{i=1}^{\infty }X_{i}} is approximately equal to N ( log ⁡ ( 1 + e x ) , σ ( x ) ) {\displaystyle {\mathcal {N}}(\log(1+e^{x}),\sigma (x))} , which is also approximately equal to ReLU ⁡ ( N ( x , σ ( x ) ) ) {\displaystyle \operatorname {ReLU} ({\mathcal {N}}(x,\sigma (x)))} , where N {\displaystyle {\mathcal {N}}} stands for the Gaussian distribution. They also argued for another reason for using ReLU: that it allows "intensity equivariance" in image recognition. That is, multiplying the input image by a constant k {\displaystyle k} multiplies the output by k {\displaystyle k} as well. In contrast, this is false for other activation functions like sigmoid or tanh. They found that ReLU activation allowed good empirical performance in restricted Boltzmann machines. Glorot et al. (2011) argued that ReLU has the following advantages over sigmoid or tanh: ReLU is more similar to biological neurons' responses in their main operating regime. ReLU avoids vanishing gradients. ReLU is cheaper to compute.
ReLU creates sparse representation naturally, because many hidden units output exactly zero for a given input. They also found empirically that deep networks trained with ReLU can achieve strong performance without unsupervised pre-training, especially on large, purely supervised tasks. == Advantages == Advantages of ReLU include: Sparse activation: for example, in a randomly initialized network, only about 50% of hidden units are activated (i.e. have a non-zero output). Better gradient propagation: fewer vanishing gradient problems compared to sigmoidal activation functions that saturate in both directions. Efficiency: only requires comparison and addition. Scale-invariant (homogeneous, or "intensity equivariance"): max ( 0 , a x ) = a max ( 0 , x ) for a ≥ 0 {\displaystyle \max(0,ax)=a\max(0,x){\text{ for }}a\geq 0} . == Potential problems == Possible downsides can include: Non-differentiability at zero (however, it is differentiable anywhere else, and the value of the derivative at zero can be chosen to be 0 or 1 arbitrarily). Not zero-centered: ReLU outputs are always non-negative. This can make it harder for the network to learn during backpropagation, because gradient updates tend to push weights in one direction (positive or negative). Batch normalization can help address this. ReLU is unbounded. Redundancy of the parametrization: Because ReLU is scale-invariant, the network computes the exact same function by scaling the weights and biases in front of a ReLU activation by k {\displaystyle k} , and the weights after by 1 / k {\displaystyle 1/k} . Dying ReLU: ReLU neurons can sometimes be pushed into states in which they become inactive for essentially all inputs. In this state, no gradients flow backward through the neuron, and so the neuron becomes stuck in a perpetually inactive state (it "dies"). This is a form of the vanishing gradient problem. 
In some cases, large numbers of neurons in a network can become stuck in dead states, effectively decreasing the model capacity and potentially even halting the learning process. This problem typically arises when the learning rate is set too high. It may be mitigated by using "leaky" ReLU instead, where a small positive slope is assigned for x < 0 {\displaystyle x<0} . However, depending on the task, performance may be reduced. == Variants == === Piecewise-linear variants === Leaky ReLU allows a small, positive gradient when the unit is inactive, helping to mitigate the vanishing gradient problem. This gradient is defined by a parameter α {\displaystyle \alpha } , typically set to 0.01–0.3. f ( x ) = { x x > 0 , α x x ≤ 0 , f ′ ( x ) = { 1 x > 0 , α x ≤ 0. {\displaystyle f(x)={\begin{cases}x&x>0,\\\alpha x&x\leq 0,\end{cases}}\qquad f'(x)={\begin{cases}1&x>0,\\\alpha &x\leq 0.\end{cases}}} The same function can also be expressed without the piecewise notation as: f ( x ) = 1 + α 2 x + 1 − α 2 | x | {\displaystyle f(x)={\frac {1+\alpha }{2}}x+{\frac {1-\alpha }{2}}|x|} Parametric ReLU (PReLU) takes this idea further by making α {\displaystyle \alpha } a learnable parameter along with the other network parameters. Note that for α ≤ 1 {\displaystyle \alpha \leq 1} , this is equivalent to f ( x ) = max ( x , α x ) {\displaystyle f(x)=\max(x,\alpha x)} and thus has a relation to "maxout" networks. Concatenated ReLU (CReLU) preserves positive and negative phase information: f ( x ) = [ ReLU ⁡ ( x ) , ReLU ⁡ ( − x ) ] . {\displaystyle f(x)=[\operatorname {ReLU} (x),\operatorname {ReLU} (-x)].} === Other non-linear variants === ==== DELU ==== ExtendeD Exponential Linear Unit (DELU) is an activation function which is smoother within the neighborhood of zero and sharper for bigger values, allowing better allocation of neurons in the learning process for higher performance. 
It has been reported that DELU may obtain higher classification accuracy than ReLU and ELU. f ( x ) = { x x > x c , ( e a x − 1 ) / b x ≤ x c f ′ ( x ) = { 1 x > x c , ( a / b ) e a x x ≤ x c {\displaystyle f(x)={\begin{cases}x&x>x_{c},\\(e^{ax}-1)/b&x\leq x_{c}\end{cases}}\qquad f'(x)={\begin{cases}1&x>x_{c},\\(a/b)e^{ax}&x\leq x_{c}\end{cases}}} In these formulas, a {\displaystyle a} , b {\displaystyle b} and x c {\displaystyle x_{c}} are hyperparameters; the original work uses the default values a = 1 {\displaystyle a=1} , b = 2 {\displaystyle b=2} and x c = 1.25643 {\displaystyle x_{c}=1.25643} . ==== Gaussian-error linear unit (GELU) ==== GELU is a smooth approximation to the rectifier: f ( x ) = x Φ ( x ) , {\displaystyle f(x)=x\Phi (x),} f ′ ( x ) = x Φ ′ ( x ) + Φ ( x ) {\displaystyle f'(x)=x\Phi '(x)+\Phi (x)} where Φ ( x ) = P ( X ⩽ x ) {\displaystyle \Phi (x)=P(X\leqslant x)} is the cumulative distribution function of the standard normal distribution. This activation function is illustrated in the figure at the start of this article. It has a non-monotonic "bump" in the region x < 0 and serves as the default activation for models such as BERT. ==== SiLU ==== The SiLU (sigmoid linear unit) or swish function is another smooth approximation which uses the sigmoid function, first introduced in the GELU paper: f ( x ) = x ⋅ sigmoid ⁡ ( x ) , {\displaystyle f(x)=x\cdot \operatorname {sigmoid} (x),} f ′ ( x ) = x ⋅ sigmoid ′ ⁡ ( x ) + sigmoid ⁡ ( x ) {\displaystyle f'(x)=x\cdot \operatorname {sigmoid} '(x)+\operatorname {sigmoid} (x)} ==== Softplus ==== A smooth approximation to the rectifier is the analytic function f ( x ) = ln ⁡ ( 1 + e x ) , f ′ ( x ) = e x 1 + e x = 1 1 + e − x {\displaystyle f(x)=\ln(1+e^{x}),\qquad f'(x)={\frac {e^{x}}{1+e^{x}}}={\frac {1}{1+e^{-x}}}} which is called the softplus or SmoothReLU function.
For large negative x {\displaystyle x} it is roughly ln ⁡ 1 {\displaystyle \ln 1} , so just above 0, while for large positive x {\displaystyle x} it is roughly ln ⁡ ( e x ) {\displaystyle \ln(e^{x})} , so just above x {\displaystyle x} . This function can be approximated as: ln ⁡ ( 1 + e x ) ≈ { ln ⁡ 2 , x = 0 , x 1 − e − x / ln ⁡ 2 , x ≠ 0 {\displaystyle \ln \left(1+e^{x}\right)\approx {\begin{cases}\ln 2,&x=0,\\[6pt]{\frac {x}{1-e^{-x/\ln 2}}},&x\neq 0\end{cases}}} By making the change of variables x = y ln ⁡ ( 2 ) {\displaystyle x=y\ln(2)} , this is equivalent to log 2 ⁡ ( 1 + 2 y ) ≈ { 1 , y = 0 , y 1 − e − y , y ≠ 0 {\displaystyle \log _{2}(1+2^{y})\approx {\begin{cases}1,&y=0,\\[6pt]{\frac {y}{1-e^{-y}}},&y\neq 0\end{cases}}} A sharpness parameter k {\displaystyle k} may be included: f ( x ) = ln ⁡ ( 1 + e k x ) k , f ′ ( x ) = e k x 1 + e k x = 1 1 + e − k x {\displaystyle f(x)={\frac {\ln(1+e^{kx})}{k}},\qquad f'(x)={\frac {e^{kx}}{1+e^{kx}}}={\frac {1}{1+e^{-kx}}}} The derivative of softplus is the logistic function. The logistic sigmoid function is a smooth approximation of the derivative of the rectifier, the Heaviside step function. The multivariable generalization of single-variable softplus is the LogSumExp with the first argument set to zero: L S E 0 + ⁡ ( x 1 , … , x n ) := LSE ⁡ ( 0 , x 1 , … , x n ) = ln ⁡ ( 1 + e x 1 + ⋯ + e x n ) {\displaystyle \operatorname {LSE_{0}} ^{+}(x_{1},\dots ,x_{n}):=\operatorname {LSE} (0,x_{1},\dots ,x_{n})=\ln(1+e^{x_{1}}+\cdots +e^{x_{n}})} The LogSumExp function is LSE ⁡ ( x 1 , … , x n ) = ln ⁡ ( e x 1 + ⋯ + e x n ) {\displaystyle \operatorname {LSE} (x_{1},\dots ,x_{n})=\ln(e^{x_{1}}+\cdots +e^{x_{n}})} and its gradient is the softmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning. ==== ELU ==== Exponential linear units try to make the mean activations closer to zero, which speeds up learning. 
It has been shown that ELUs can obtain higher classification accuracy than ReLUs. f ( x ) = { x x > 0 , α ( e x − 1 ) x ≤ 0 f ′ ( x ) = { 1 x > 0 , α e x x ≤ 0 {\displaystyle f(x)={\begin{cases}x&x>0,\\\alpha \left(e^{x}-1\right)&x\leq 0\end{cases}}\qquad f'(x)={\begin{cases}1&x>0,\\\alpha e^{x}&x\leq 0\end{cases}}} In these formulas, α {\displaystyle \alpha } is a hyperparameter to be tuned with the constraint α ≥ 0 {\displaystyle \alpha \geq 0} . Given the same interpretation of α {\displaystyle \alpha } , ELU can be viewed as a smoothed version of a shifted ReLU (SReLU), which has the form f ( x ) = max ( − α , x ) {\displaystyle f(x)=\max(-\alpha ,x)} . ==== Mish ==== The mish function can also be used as a smooth approximation of the rectifier. It is defined as f ( x ) = x tanh ⁡ ( softplus ⁡ ( x ) ) , {\displaystyle f(x)=x\tanh {\big (}\operatorname {softplus} (x){\big )},} where tanh ⁡ ( x ) {\displaystyle \tanh(x)} is the hyperbolic tangent, and softplus ⁡ ( x ) {\displaystyle \operatorname {softplus} (x)} is the softplus function. Mish is non-monotonic and self-gated. It was inspired by Swish, itself a variant of ReLU. ==== Squareplus ==== Squareplus is the function f ( x ) = x + x 2 + b 2 {\displaystyle f(x)={\frac {x+{\sqrt {x^{2}+b}}}{2}}} where b ≥ 0 {\displaystyle b\geq 0} is a hyperparameter that determines the "size" of the curved region near x = 0 {\displaystyle x=0} . (For example, letting b = 0 {\displaystyle b=0} yields ReLU, and letting b = 4 {\displaystyle b=4} yields the metallic mean function.) Squareplus shares many properties with softplus: It is monotonic, strictly positive, approaches 0 as x → − ∞ {\displaystyle x\to -\infty } , approaches the identity as x → + ∞ {\displaystyle x\to +\infty } , and is C ∞ {\displaystyle C^{\infty }} smooth. However, squareplus can be computed using only algebraic functions, making it well-suited for settings where computational resources or instruction sets are limited. 
Additionally, squareplus requires no special consideration to ensure numerical stability when x {\displaystyle x} is large. == See also == Softmax function Sigmoid function Tobit model Layer (deep learning) == References ==
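Several identities stated in this article — the non-piecewise form of leaky ReLU, its equivalence to max(x, αx) for α ≤ 1, squareplus reducing to ReLU at b = 0, and the derivative of softplus being the logistic function — are easy to verify numerically. A minimal plain-Python sketch (sample points and tolerances are illustrative):

```python
import math

def relu(x):
    return max(x, 0.0)

def leaky_relu(x, a=0.01):
    return x if x > 0 else a * x

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softplus(x):
    # ln(1 + e^x), written in a numerically stable form
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def gelu(x):
    # x * Phi(x), with Phi the standard normal CDF
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def squareplus(x, b=4.0):
    return (x + math.sqrt(x * x + b)) / 2.0

a = 0.1
for x in [-3.0, -0.5, 0.0, 0.7, 2.5]:
    # non-piecewise form of leaky ReLU quoted in the text
    assert abs(leaky_relu(x, a) - ((1 + a) / 2 * x + (1 - a) / 2 * abs(x))) < 1e-12
    # for a <= 1, leaky ReLU equals max(x, a*x)
    assert leaky_relu(x, a) == max(x, a * x)
    # squareplus with b = 0 reduces to ReLU
    assert abs(squareplus(x, 0.0) - relu(x)) < 1e-12
    # derivative of softplus is the logistic function (central difference)
    h = 1e-6
    assert abs((softplus(x + h) - softplus(x - h)) / (2 * h) - sigmoid(x)) < 1e-5

# GELU vanishes at 0 and approaches the identity for large x
assert gelu(0.0) == 0.0
assert abs(gelu(10.0) - 10.0) < 1e-6
```

The softplus here is written as log1p(e^{−|x|}) + max(x, 0), which agrees with ln(1 + e^x) but avoids overflow for large inputs — the kind of numerical consideration the squareplus discussion alludes to.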
Wikipedia/Softplus_function
The generalized logistic function or curve is an extension of the logistic or sigmoid functions. Originally developed for growth modelling, it allows for more flexible S-shaped curves. The function is sometimes named Richards's curve after F. J. Richards, who proposed the general form for the family of models in 1959. == Definition == Richards's curve has the following form: Y ( t ) = A + K − A ( C + Q e − B t ) 1 / ν {\displaystyle Y(t)=A+{K-A \over (C+Qe^{-Bt})^{1/\nu }}} where Y {\displaystyle Y} = weight, height, size etc., and t {\displaystyle t} = time. It has six parameters: A {\displaystyle A} : the left horizontal asymptote; K {\displaystyle K} : the right horizontal asymptote when C = 1 {\displaystyle C=1} . If A = 0 {\displaystyle A=0} and C = 1 {\displaystyle C=1} then K {\displaystyle K} is called the carrying capacity; B {\displaystyle B} : the growth rate; ν > 0 {\displaystyle \nu >0} : affects near which asymptote maximum growth occurs. Q {\displaystyle Q} : is related to the value Y ( 0 ) {\displaystyle Y(0)} C {\displaystyle C} : typically takes a value of 1. Otherwise, the upper asymptote is A + K − A C 1 / ν {\displaystyle A+{K-A \over C^{\,1/\nu }}} The equation can also be written: Y ( t ) = A + K − A ( C + e − B ( t − M ) ) 1 / ν {\displaystyle Y(t)=A+{K-A \over (C+e^{-B(t-M)})^{1/\nu }}} where M {\displaystyle M} can be thought of as a starting time, at which Y ( M ) = A + K − A ( C + 1 ) 1 / ν {\displaystyle Y(M)=A+{K-A \over (C+1)^{1/\nu }}} . Including both Q {\displaystyle Q} and M {\displaystyle M} can be convenient: Y ( t ) = A + K − A ( C + Q e − B ( t − M ) ) 1 / ν {\displaystyle Y(t)=A+{K-A \over (C+Qe^{-B(t-M)})^{1/\nu }}} this representation simplifies the setting of both a starting time and the value of Y {\displaystyle Y} at that time. The logistic function, with maximum growth rate at time M {\displaystyle M} , is the case where Q = ν = 1 {\displaystyle Q=\nu =1} . 
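The defining properties of Richards's curve — the logistic special case Q = ν = 1, the value at the starting time M, and the two horizontal asymptotes — can be checked with a short script; the parameter values below are illustrative:

```python
import math

def richards(t, A, K, B, nu, Q, C=1.0, M=0.0):
    """Generalised logistic (Richards) curve, with both Q and M included."""
    return A + (K - A) / (C + Q * math.exp(-B * (t - M))) ** (1.0 / nu)

# With Q = nu = 1, A = 0 and C = 1 this is the ordinary logistic function
for t in [-2.0, 0.0, 1.5, 4.0]:
    logistic = 3.0 / (1.0 + math.exp(-1.2 * (t - 0.5)))
    assert abs(richards(t, A=0.0, K=3.0, B=1.2, nu=1.0, Q=1.0, M=0.5) - logistic) < 1e-12

# Value at the "starting time" M matches A + (K - A)/(C + Q)^(1/nu)
A, K, B, nu, Q, M = 1.0, 5.0, 0.8, 2.0, 3.0, 2.0
assert abs(richards(M, A, K, B, nu, Q, M=M) - (A + (K - A) / (1.0 + Q) ** (1.0 / nu))) < 1e-12

# Horizontal asymptotes: A on the left, K on the right (for C = 1)
assert abs(richards(-100.0, A, K, B, nu, Q, M=M) - A) < 1e-6
assert abs(richards(100.0, A, K, B, nu, Q, M=M) - K) < 1e-12
```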
== Generalised logistic differential equation == A particular case of the generalised logistic function is: Y ( t ) = K ( 1 + Q e − α ν ( t − t 0 ) ) 1 / ν {\displaystyle Y(t)={K \over (1+Qe^{-\alpha \nu (t-t_{0})})^{1/\nu }}} which is the solution of Richards's differential equation (RDE): Y ′ ( t ) = α ( 1 − ( Y K ) ν ) Y {\displaystyle Y^{\prime }(t)=\alpha \left(1-\left({\frac {Y}{K}}\right)^{\nu }\right)Y} with initial condition Y ( t 0 ) = Y 0 {\displaystyle Y(t_{0})=Y_{0}} where Q = − 1 + ( K Y 0 ) ν {\displaystyle Q=-1+\left({\frac {K}{Y_{0}}}\right)^{\nu }} provided that ν > 0 {\displaystyle \nu >0} and α > 0 {\displaystyle \alpha >0} . The classical logistic differential equation is a particular case of the above equation, with ν = 1 {\displaystyle \nu =1} , whereas the Gompertz curve can be recovered in the limit ν → 0 + {\displaystyle \nu \rightarrow 0^{+}} provided that: α = O ( 1 ν ) {\displaystyle \alpha =O\left({\frac {1}{\nu }}\right)} In fact, writing α = r / ν {\displaystyle \alpha =r/\nu } , for small ν {\displaystyle \nu } it is Y ′ ( t ) = Y r 1 − exp ⁡ ( ν ln ⁡ ( Y K ) ) ν ≈ r Y ln ⁡ ( K Y ) {\displaystyle Y^{\prime }(t)=Yr{\frac {1-\exp \left(\nu \ln \left({\frac {Y}{K}}\right)\right)}{\nu }}\approx rY\ln \left({\frac {K}{Y}}\right)} The RDE models many growth phenomena, arising in fields such as oncology and epidemiology. == Gradient of generalized logistic function == When estimating parameters from data, it is often necessary to compute the partial derivatives of the logistic function with respect to parameters at a given data point t {\displaystyle t} .
For the case where C = 1 {\displaystyle C=1} , ∂ Y ∂ A = 1 − ( 1 + Q e − B ( t − M ) ) − 1 / ν ∂ Y ∂ K = ( 1 + Q e − B ( t − M ) ) − 1 / ν ∂ Y ∂ B = ( K − A ) ( t − M ) Q e − B ( t − M ) ν ( 1 + Q e − B ( t − M ) ) 1 ν + 1 ∂ Y ∂ ν = ( K − A ) ln ⁡ ( 1 + Q e − B ( t − M ) ) ν 2 ( 1 + Q e − B ( t − M ) ) 1 ν ∂ Y ∂ Q = − ( K − A ) e − B ( t − M ) ν ( 1 + Q e − B ( t − M ) ) 1 ν + 1 ∂ Y ∂ M = − ( K − A ) Q B e − B ( t − M ) ν ( 1 + Q e − B ( t − M ) ) 1 ν + 1 {\displaystyle {\begin{aligned}\\{\frac {\partial Y}{\partial A}}&=1-(1+Qe^{-B(t-M)})^{-1/\nu }\\\\{\frac {\partial Y}{\partial K}}&=(1+Qe^{-B(t-M)})^{-1/\nu }\\\\{\frac {\partial Y}{\partial B}}&={\frac {(K-A)(t-M)Qe^{-B(t-M)}}{\nu (1+Qe^{-B(t-M)})^{{\frac {1}{\nu }}+1}}}\\\\{\frac {\partial Y}{\partial \nu }}&={\frac {(K-A)\ln(1+Qe^{-B(t-M)})}{\nu ^{2}(1+Qe^{-B(t-M)})^{\frac {1}{\nu }}}}\\\\{\frac {\partial Y}{\partial Q}}&=-{\frac {(K-A)e^{-B(t-M)}}{\nu (1+Qe^{-B(t-M)})^{{\frac {1}{\nu }}+1}}}\\\\{\frac {\partial Y}{\partial M}}&=-{\frac {(K-A)QBe^{-B(t-M)}}{\nu (1+Qe^{-B(t-M)})^{{\frac {1}{\nu }}+1}}}\\\end{aligned}}} == Special cases == The following functions are specific cases of Richards's curves: Logistic function Gompertz curve Von Bertalanffy function Monomolecular curve == Footnotes == == References == Richards, F. J. (1959). "A Flexible Growth Function for Empirical Use". Journal of Experimental Botany. 10 (2): 290–300. doi:10.1093/jxb/10.2.290. Pella, J. S.; Tomlinson, P. K. (1969). "A Generalised Stock-Production Model". Bull. Inter-Am. Trop. Tuna Comm. 13: 421–496. Lei, Y. C.; Zhang, S. Y. (2004). "Features and Partial Derivatives of Bertalanffy–Richards Growth Model in Forestry". Nonlinear Analysis: Modelling and Control. 9 (1): 65–73. doi:10.15388/NA.2004.9.1.15171.
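These partial derivatives can be validated against central finite differences. A sketch for the ∂Y/∂K and ∂Y/∂Q cases with C = 1 (parameter values are illustrative):

```python
import math

def Y(t, A, K, B, nu, Q, M):
    # C = 1 case of the Richards curve
    return A + (K - A) / (1.0 + Q * math.exp(-B * (t - M))) ** (1.0 / nu)

def dY_dK(t, A, K, B, nu, Q, M):
    return (1.0 + Q * math.exp(-B * (t - M))) ** (-1.0 / nu)

def dY_dQ(t, A, K, B, nu, Q, M):
    E = math.exp(-B * (t - M))
    return -(K - A) * E / (nu * (1.0 + Q * E) ** (1.0 / nu + 1.0))

p = dict(A=0.5, K=4.0, B=1.1, nu=1.7, Q=2.0, M=0.3)
t, h = 1.2, 1e-6

# central finite differences should match the closed forms
num = (Y(t, **{**p, "K": p["K"] + h}) - Y(t, **{**p, "K": p["K"] - h})) / (2 * h)
assert abs(num - dY_dK(t, **p)) < 1e-6

num = (Y(t, **{**p, "Q": p["Q"] + h}) - Y(t, **{**p, "Q": p["Q"] - h})) / (2 * h)
assert abs(num - dY_dQ(t, **p)) < 1e-6
```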
Wikipedia/Generalised_logistic_function
A bell-shaped function or simply 'bell curve' is a mathematical function having a characteristic "bell"-shaped curve. These functions are typically continuous or smooth, asymptotically approach zero for large negative/positive x, and have a single, unimodal maximum at small x. Hence, the integral of a bell-shaped function is typically a sigmoid function. Bell shaped functions are also commonly symmetric. Many common probability distribution functions are bell curves. Some bell shaped functions, such as the Gaussian function and the probability distribution of the Cauchy distribution, can be used to construct sequences of functions with decreasing variance that approach the Dirac delta distribution. Indeed, the Dirac delta can roughly be thought of as a bell curve with variance tending to zero. Some examples include: Gaussian function, the probability density function of the normal distribution. This is the archetypal bell shaped function and is frequently encountered in nature as a consequence of the central limit theorem. f ( x ) = a e − ( x − b ) 2 / ( 2 c 2 ) {\displaystyle f(x)=ae^{-(x-b)^{2}/(2c^{2})}} Fuzzy Logic generalized membership bell-shaped function f ( x ) = 1 1 + | x − c a | 2 b {\displaystyle f(x)={\frac {1}{1+\left|{\frac {x-c}{a}}\right|^{2b}}}} Hyperbolic secant. This is also the derivative of the Gudermannian function. f ( x ) = sech ⁡ ( x ) = 2 e x + e − x {\displaystyle f(x)=\operatorname {sech} (x)={\frac {2}{e^{x}+e^{-x}}}} Witch of Agnesi, the probability density function of the Cauchy distribution. This is also a scaled version of the derivative of the arctangent function. f ( x ) = 8 a 3 x 2 + 4 a 2 {\displaystyle f(x)={\frac {8a^{3}}{x^{2}+4a^{2}}}} Bump function φ b ( x ) = { exp ⁡ b 2 x 2 − b 2 | x | < b , 0 | x | ≥ b . 
{\displaystyle \varphi _{b}(x)={\begin{cases}\exp {\frac {b^{2}}{x^{2}-b^{2}}}&|x|<b,\\0&|x|\geq b.\end{cases}}} Raised cosines type like the raised cosine distribution or the raised-cosine filter f ( x ; μ , s ) = { 1 2 s [ 1 + cos ⁡ ( x − μ s π ) ] for μ − s ≤ x ≤ μ + s , 0 otherwise. {\displaystyle f(x;\mu ,s)={\begin{cases}{\frac {1}{2s}}\left[1+\cos \left({\frac {x-\mu }{s}}\pi \right)\right]&{\text{for }}\mu -s\leq x\leq \mu +s,\\[3pt]0&{\text{otherwise.}}\end{cases}}} Most of the window functions like the Kaiser window The derivative of the logistic function. This is a scaled version of the derivative of the hyperbolic tangent function. f ( x ) = e x ( 1 + e x ) 2 {\displaystyle f(x)={\frac {e^{x}}{\left(1+e^{x}\right)^{2}}}} Some algebraic functions. For example f ( x ) = 1 ( 1 + x 2 ) 3 / 2 {\displaystyle f(x)={\frac {1}{(1+x^{2})^{3/2}}}} == Gallery == == References ==
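The claim that the integral of a bell-shaped function is typically a sigmoid can be illustrated with the logistic-derivative bell curve listed above, whose antiderivative is the logistic function itself; a small numerical sketch (grid and tolerances are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def logistic_bell(x):
    # derivative of the logistic function: a symmetric bell-shaped curve
    e = math.exp(x)
    return e / (1.0 + e) ** 2

# Integrate the bell curve from -L up to x with the trapezoidal rule and
# compare against the sigmoid it should accumulate to.
L, n = 20.0, 20000
dx = 2 * L / n
acc = 0.0
for i in range(n):
    x0, x1 = -L + i * dx, -L + (i + 1) * dx
    acc += 0.5 * (logistic_bell(x0) + logistic_bell(x1)) * dx
    if abs(x1) <= 5.0:
        assert abs(acc - sigmoid(x1)) < 1e-5

# symmetry about 0 and a single maximum there
assert abs(logistic_bell(1.3) - logistic_bell(-1.3)) < 1e-12
assert logistic_bell(0.0) > logistic_bell(0.5) > logistic_bell(1.0)
```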
Wikipedia/Bell_shaped_function
The Van Genuchten–Gupta model is an inverted S-curve applicable to crop yield and soil salinity relations. It is named after Martinus Theodore van Genuchten and Satyandra K. Gupta, based on their work from the 1990s. == Equation == The mathematical expression is: Y = Y m 1 + ( C / C 50 ) P {\displaystyle Y={\frac {Y_{\rm {m}}}{1+(C/C_{50})^{P}}}} where Y is the yield, Ym is the maximum yield of the model, C is salt concentration of the soil, C50 is the C value at 50% yield, and P is an exponent to be found by optimization and maximizing the model's goodness of fit to the data. In the figure: Ym = 3.1, C50 = 12.4, P = 3.75. == Alternative one == As an alternative, the logistic S-function can be used. The mathematical expression is: Y ∧ = 1 1 + exp ⁡ ( A X C + B ) {\displaystyle Y^{\wedge }={\frac {1}{1+\exp(AX^{C}+B)}}} where: Y ∧ = Y − Y n Y m − Y n {\displaystyle Y^{\wedge }={\frac {Y-Y_{\rm {n}}}{Y_{\rm {m}}-Y_{\rm {n}}}}} with Y being the yield, Yn the minimum Y, Ym the maximum Y, X the salt concentration of the soil, while A, B and C are constants to be determined by optimization and maximizing the model's goodness of fit to the data. If the minimum Yn=0 then the expression can be simplified to: Y = Y m 1 + exp ⁡ ( A X C + B ) {\displaystyle Y={\frac {Y_{\rm {m}}}{1+\exp(AX^{C}+B)}}} In the figure: Ym = 3.43, Yn = 0.47, A = 0.112, B = -3.16, C = 1.42. == Alternative two == The third degree or cubic regression also offers a useful alternative. The equation reads: Y = A X 3 + B X 2 + C X + D {\displaystyle Y=AX^{3}+BX^{2}+CX+D} with Y the yield, X the salt concentration of the soil, while A, B, C and D are constants to be determined by the regression. In the figure: A = 0.0017, B = 0.0604, C = 0.3874, D = 2.3788. These values were calculated with Microsoft Excel. The curvature is more pronounced than in the other models. == See also == Maas–Hoffman model == References ==
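The model is straightforward to evaluate; by construction, the yield at C = C50 is half of Ym. A short sketch using the parameter values quoted for the figure:

```python
def vg_gupta(C, Ym, C50, P):
    """Van Genuchten-Gupta yield as a function of soil salinity C."""
    return Ym / (1.0 + (C / C50) ** P)

# Parameters quoted for the figure
Ym, C50, P = 3.1, 12.4, 3.75

# By construction, the yield at C = C50 is half the maximum
assert abs(vg_gupta(C50, Ym, C50, P) - Ym / 2) < 1e-12

# The curve is an inverted S: full yield at zero salinity, decaying after
assert vg_gupta(0.0, Ym, C50, P) == Ym
assert vg_gupta(5.0, Ym, C50, P) > vg_gupta(15.0, Ym, C50, P) > vg_gupta(30.0, Ym, C50, P)
```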
Wikipedia/Van_Genuchten–Gupta_model
BioMed Central (BMC) is a United Kingdom-based, for-profit scientific open access publisher that produces over 250 scientific journals. All its journals are published online only. BioMed Central describes itself as the first and largest open access science publisher. It was founded in 2000 and has been owned by Springer, now Springer Nature, since 2008. == History == BioMed Central was founded in 2000 as part of the Current Science Group (now Science Navigation Group, SNG), a nursery of scientific publishing companies. SNG chairman Vitek Tracz developed the concept for the company after NIH director Harold Varmus's PubMed Central concept for open-access publishing was scaled back. The first director of the company was Jan Velterop. Chemistry Central was established in 2006 and the PhysMath Central journal imprint in 2007. In 2002, the company introduced article processing charges, and these have since been the primary source of revenue. In 2007, Yale University Libraries stopped subsidizing BioMed Central article processing charges for Yale researchers. In October 2008, it was announced that BioMed Central (along with Chemistry Central and PhysMath Central) had been acquired by Springer Science+Business Media, the second largest STM publisher. The Chemistry Central and PhysMath Central brands have since been retired. In November 2008, BioMed Central became an official supporting organisation of Healthcare Information For All. Following the merger of BMC into Springer Nature, BMC journals were gradually converted to the general Springer Nature software. The software migration meant the loss of several features, often related to open science requirements, like the ability to download a machine-readable version of the paper (in XML format), direct download of PDF files and the ability to read articles without cookies. == Journals == BioMed Central's flagship journals include BMC Bioinformatics, BMC Biology, BMC Medicine, Genome Biology and Genome Medicine. 
It also produces the BMC Series of journals covering the fields of biology and medicine. Most of the other journals published by BioMed Central are owned and produced independently by societies and academic editorial boards, with BioMed Central providing the hosting, publishing platform and marketing. All journals are published online; some of the flagship journals have in the past also been available as print subscriptions, such as Arthritis Research & Therapy. Publications in BioMed Central journals are, immediately upon publication, released under the Creative Commons "Attribution" license which grants permission to reuse publications and produce derivative work. The only exceptions to this (as of 2010) were the flagship journals, which reserved rights on review and commentary content; those articles were available to purchase on a subscription or on a pay-per-view basis, becoming freely available (but not fully open access) to all after six months; however, as of January 2015, "no subscription fees apply to these journals or to any articles published in them." === Open peer review === In 2001, BioMed Central was the first publisher to carry out open peer review as default, by openly posting named peer reviewer reports alongside published articles as part of a 'pre-publication history' for all medical journals in the BMC series. As of 2020, 70 BMC journals were operating fully open peer review. === BMC Series === The BMC Series is a collection of several dozen online research journals published by BioMed Central. Like all other BioMed Central journals, they have a policy of open access to the research articles they publish. Between them, they cover all major subject areas within biology and medicine. Two of the journals, BMC Biology and BMC Medicine, have a broad scope, and aim to publish particularly significant research. 
A third journal, BMC Research Notes, publishes scientifically valid research outputs that cannot be considered as full research or methodology articles across all scientific and clinical disciplines, while BMC Proceedings publishes conference proceedings and BMC Medical Research Methodology focuses on methodological approaches to healthcare research (especially methodology of epidemiological research, clinical trials and meta-analysis/systematic review). The other journals specialise on a particular subject area. Due to their free licensing, images from BMC journals can be reused in other places. Most BMC Series journals have an impact factor. As of 2016, for the 53 journals with impact factors, BMC Biology had the highest at 7.98. == Databases == The company also has hosted biomedical databases, including the ISRCTN registry (previously Current Controlled Trials), a Primary Registry of clinical trials in the WHO Registry Network. The Biology Image Library and the Cases Database, a database of medical case reports, were closed in 2014. The company also provided hosting for institutional repositories of publications based on the DSpace platform under the brand Open Repository. The Open Repository activity was sold to Atmire in 2016. == See also == Open Access Scholarly Publishers Association, of which BioMed Central is a founding member Trials (journal) == References == == External links == Official website
Wikipedia/BMC_Medical_Research_Methodology
Variable-order Bayesian network (VOBN) models provide an important extension of both the Bayesian network models and the variable-order Markov models. VOBN models are used in machine learning in general and have shown great potential in bioinformatics applications. These models extend the widely used position weight matrix (PWM) models, Markov models, and Bayesian network (BN) models. In contrast to the BN models, where each random variable depends on a fixed subset of random variables, in VOBN models these subsets may vary based on the specific realization of observed variables. The observed realizations are often called the context and, hence, VOBN models are also known as context-specific Bayesian networks. The flexibility in the definition of conditioning subsets of variables turns out to be a real advantage in classification and analysis applications, as the statistical dependencies between random variables in a sequence of variables (not necessarily adjacent) may be taken into account efficiently, and in a position-specific and context-specific manner. == See also == Markov chain Examples of Markov chains Variable order Markov models Markov process Markov chain Monte Carlo Semi-Markov process Artificial intelligence == References == == External links == VOMBAT: https://www2.informatik.uni-halle.de:8443/VOMBAT/
Wikipedia/Variable-order_Bayesian_network
In machine learning, a deep belief network (DBN) is a generative graphical model, or alternatively a class of deep neural network, composed of multiple layers of latent variables ("hidden units"), with connections between the layers but not between units within each layer. When trained on a set of examples without supervision, a DBN can learn to probabilistically reconstruct its inputs. The layers then act as feature detectors. After this learning step, a DBN can be further trained with supervision to perform classification. DBNs can be viewed as a composition of simple, unsupervised networks such as restricted Boltzmann machines (RBMs) or autoencoders, where each sub-network's hidden layer serves as the visible layer for the next. An RBM is an undirected, generative energy-based model with a "visible" input layer and a hidden layer and connections between but not within layers. This composition leads to a fast, layer-by-layer unsupervised training procedure, where contrastive divergence is applied to each sub-network in turn, starting from the "lowest" pair of layers (the lowest visible layer is a training set). The observation that DBNs can be trained greedily, one layer at a time, led to one of the first effective deep learning algorithms. DBNs have found use in a range of real-life applications (e.g., electroencephalography, drug discovery). == Training == The training method for RBMs proposed by Geoffrey Hinton for use with training "Product of Experts" models is called contrastive divergence (CD). CD provides an approximation to the maximum likelihood method that would ideally be applied for learning the weights.
In training a single RBM, weight updates are performed with gradient descent via the following equation: w i j ( t + 1 ) = w i j ( t ) + η ∂ log ⁡ ( p ( v ) ) ∂ w i j {\displaystyle w_{ij}(t+1)=w_{ij}(t)+\eta {\frac {\partial \log(p(v))}{\partial w_{ij}}}} where, p ( v ) {\displaystyle p(v)} is the probability of a visible vector, which is given by p ( v ) = 1 Z ∑ h e − E ( v , h ) {\displaystyle p(v)={\frac {1}{Z}}\sum _{h}e^{-E(v,h)}} . Z {\displaystyle Z} is the partition function (used for normalizing) and E ( v , h ) {\displaystyle E(v,h)} is the energy function assigned to the state of the network. A lower energy indicates the network is in a more "desirable" configuration. The gradient ∂ log ⁡ ( p ( v ) ) ∂ w i j {\displaystyle {\frac {\partial \log(p(v))}{\partial w_{ij}}}} has the simple form ⟨ v i h j ⟩ data − ⟨ v i h j ⟩ model {\displaystyle \langle v_{i}h_{j}\rangle _{\text{data}}-\langle v_{i}h_{j}\rangle _{\text{model}}} where ⟨ ⋯ ⟩ p {\displaystyle \langle \cdots \rangle _{p}} represent averages with respect to distribution p {\displaystyle p} . The issue arises in sampling ⟨ v i h j ⟩ model {\displaystyle \langle v_{i}h_{j}\rangle _{\text{model}}} because this requires extended alternating Gibbs sampling. CD replaces this step by running alternating Gibbs sampling for n {\displaystyle n} steps (values of n = 1 {\displaystyle n=1} perform well). After n {\displaystyle n} steps, the data are sampled and that sample is used in place of ⟨ v i h j ⟩ model {\displaystyle \langle v_{i}h_{j}\rangle _{\text{model}}} . The CD procedure works as follows: Initialize the visible units to a training vector. Update the hidden units in parallel given the visible units: p ( h j = 1 ∣ V ) = σ ( b j + ∑ i v i w i j ) {\displaystyle p(h_{j}=1\mid {\textbf {V}})=\sigma (b_{j}+\sum _{i}v_{i}w_{ij})} . σ {\displaystyle \sigma } is the sigmoid function and b j {\displaystyle b_{j}} is the bias of h j {\displaystyle h_{j}} . 
Update the visible units in parallel given the hidden units: p ( v i = 1 ∣ H ) = σ ( a i + ∑ j h j w i j ) {\displaystyle p(v_{i}=1\mid {\textbf {H}})=\sigma (a_{i}+\sum _{j}h_{j}w_{ij})} . a i {\displaystyle a_{i}} is the bias of v i {\displaystyle v_{i}} . This is called the "reconstruction" step. Re-update the hidden units in parallel given the reconstructed visible units using the same equation as in step 2. Perform the weight update: Δ w i j ∝ ⟨ v i h j ⟩ data − ⟨ v i h j ⟩ reconstruction {\displaystyle \Delta w_{ij}\propto \langle v_{i}h_{j}\rangle _{\text{data}}-\langle v_{i}h_{j}\rangle _{\text{reconstruction}}} . Once an RBM is trained, another RBM is "stacked" atop it, taking its input from the final trained layer. The new visible layer is initialized to a training vector, and values for the units in the already-trained layers are assigned using the current weights and biases. The new RBM is then trained with the procedure above. This whole process is repeated until the desired stopping criterion is met. Although the approximation of CD to maximum likelihood is crude (does not follow the gradient of any function), it is empirically effective. == See also == Bayesian network Convolutional deep belief network Deep learning Energy based model Stacked Restricted Boltzmann Machine == References == == External links == Hinton, Geoffrey E. (2009-05-31). "Deep belief networks". Scholarpedia. 4 (5): 5947. Bibcode:2009SchpJ...4.5947H. doi:10.4249/scholarpedia.5947. ISSN 1941-6016. "Deep Belief Networks". Deep Learning Tutorials. "Deep Belief Network Example". Deeplearning4j Tutorials. Archived from the original on 2016-10-03. Retrieved 2015-02-22.
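The five-step CD procedure above can be sketched for a tiny RBM; the layer sizes, learning rate, and training pattern below are illustrative, not from the sources cited:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A tiny RBM: 6 visible units, 4 hidden units (sizes are illustrative)
n_v, n_h = 6, 4
W = rng.normal(0.0, 0.1, size=(n_v, n_h))
a = np.zeros(n_v)  # visible biases
b = np.zeros(n_h)  # hidden biases

def cd1_update(v0, lr=0.1):
    """One CD (n = 1) update for a single training vector v0."""
    global W, a, b
    ph0 = sigmoid(b + v0 @ W)                  # step 2: hidden given data
    h0 = (rng.random(n_h) < ph0).astype(float)
    pv1 = sigmoid(a + h0 @ W.T)                # step 3: "reconstruction"
    v1 = (rng.random(n_v) < pv1).astype(float)
    ph1 = sigmoid(b + v1 @ W)                  # step 4: hidden given reconstruction
    # step 5: <v h>_data - <v h>_reconstruction
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    a += lr * (v0 - v1)
    b += lr * (ph0 - ph1)
    return float(np.abs(v0 - pv1).mean())      # reconstruction error

# Train on a single repeated pattern; the reconstruction error should drop
v = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0])
errs = [cd1_update(v) for _ in range(200)]
assert np.mean(errs[-20:]) < np.mean(errs[:20])
```

For clarity the sketch follows the text exactly, sampling binary states at each step; in practice the probabilities are often used directly in the final update to reduce sampling noise.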
Wikipedia/Deep_belief_network
In the mathematical field of graph theory, the intersection number of a graph G = ( V , E ) {\displaystyle G=(V,E)} is the smallest number of elements in a representation of G {\displaystyle G} as an intersection graph of finite sets. In such a representation, each vertex is represented as a set, and two vertices are connected by an edge whenever their sets have a common element. Equivalently, the intersection number is the smallest number of cliques needed to cover all of the edges of G {\displaystyle G} . A set of cliques that cover all edges of a graph is called a clique edge cover or edge clique cover, or even just a clique cover, although the last term is ambiguous: a clique cover can also be a set of cliques that cover all vertices of a graph. Sometimes "covering" is used in place of "cover". As well as being called the intersection number, the minimum number of these cliques has been called the R-content, edge clique cover number, or clique cover number. The problem of computing the intersection number has been called the intersection number problem, the intersection graph basis problem, covering by cliques, the edge clique cover problem, and the keyword conflict problem. Every graph with n {\displaystyle n} vertices and m {\displaystyle m} edges has intersection number at most min ( m , n 2 / 4 ) {\displaystyle \min(m,n^{2}/4)} . The intersection number is NP-hard to compute or approximate, but fixed-parameter tractable. == Definitions == === Intersection graphs === Let F {\displaystyle {\mathcal {F}}} be any family of sets, allowing sets in F {\displaystyle {\mathcal {F}}} to be repeated. Then the intersection graph of F {\displaystyle {\mathcal {F}}} is an undirected graph that has a vertex for each set in F {\displaystyle {\mathcal {F}}} and an edge between each two sets that have a nonempty intersection. Every graph can be represented as an intersection graph in this way. 
The intersection number of the graph is the smallest number k {\displaystyle k} such that there exists a representation of this type for which the union of the sets in F {\displaystyle {\mathcal {F}}} has k {\displaystyle k} elements. The problem of finding an intersection representation of a graph, using a given number of elements, is known as the intersection graph basis problem. === Clique edge covers === An alternative definition of the intersection number of a graph G {\displaystyle G} is that it is the smallest number of cliques in G {\displaystyle G} (complete subgraphs of G {\displaystyle G} ) that together cover all of the edges of G {\displaystyle G} . A set of cliques with this property is known as a clique edge cover or edge clique cover, and for this reason the intersection number is also sometimes called the edge clique cover number. === Equivalence === The equality of the intersection number and the edge clique cover number has a short proof. In one direction, suppose that G {\displaystyle G} is the intersection graph of a family F {\displaystyle {\mathcal {F}}} of sets whose union U {\displaystyle U} has k {\displaystyle k} elements. Then for any element x ∈ U {\displaystyle x\in U} , the sets in F {\displaystyle {\mathcal {F}}} that contain x {\displaystyle x} form a clique in G {\displaystyle G} , because each pair of these sets has a non-empty intersection containing x {\displaystyle x} . Further, the cliques formed in this way cover every edge in G {\displaystyle G} : if two sets in F {\displaystyle {\mathcal {F}}} form an edge by having a non-empty intersection, then that edge is contained in the clique for any element of the intersection. Therefore, the edges of G {\displaystyle G} can be covered by k {\displaystyle k} cliques, one per element of U {\displaystyle U} . 
In the other direction, if a graph G {\displaystyle G} can be covered by k {\displaystyle k} cliques, then each vertex v {\displaystyle v} of G {\displaystyle G} may be represented by a subset of the cliques, the ones that contain vertex v {\displaystyle v} . Two of these subsets, for two vertices u {\displaystyle u} and v {\displaystyle v} , have a nonempty intersection if and only if there is a clique in the intersection that contains both of them, if and only if there is an edge u v {\displaystyle uv} included in one of the covering cliques. == Applications == The representation of a graph as an abstract intersection graph of sets can be used to construct more concrete geometric intersection representations of the same graph. In particular, if a graph has intersection number k {\displaystyle k} , it can be represented as an intersection graph of k {\displaystyle k} -dimensional unit hyperspheres (its sphericity is at most k {\displaystyle k} ). A clique cover can be used as a kind of adjacency labelling scheme for a graph, in which one labels each vertex by a binary value with a bit for each clique, zero if it does not belong to the clique and one if it belongs. Then two vertices are adjacent if and only if the bitwise and of their labels is nonzero. The length of the labels is the intersection number of the graph. When this length is small, the graph can be represented in a computer using only these labels, in less memory than explicit methods such as adjacency lists, and with faster tests for whether two vertices are adjacent. This method was used in an early application of intersection numbers, for labeling a set of keywords so that conflicting keywords could be quickly detected, by E. Kellerman of IBM. For this reason, another name for the problem of computing intersection numbers is the keyword conflict problem. 
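The adjacency-labelling scheme can be sketched in a few lines; the graph (a triangle plus a pendant edge) and its clique edge cover below are illustrative:

```python
# Label each vertex with a bitmask of the cliques that contain it; two
# vertices are adjacent iff the bitwise AND of their labels is nonzero.
edges = {(0, 1), (0, 2), (1, 2), (2, 3)}
cover = [{0, 1, 2}, {2, 3}]   # a clique edge cover with two cliques

labels = {v: 0 for v in range(4)}
for i, clique in enumerate(cover):
    for v in clique:
        labels[v] |= 1 << i   # bit i <-> membership in clique i

def adjacent(u, v):
    return u != v and (labels[u] & labels[v]) != 0

# the labels reproduce the adjacency relation exactly
for u in range(4):
    for v in range(u + 1, 4):
        assert adjacent(u, v) == ((u, v) in edges)
```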
Similarly, in computational geometry, representations based on the intersection number have been considered as a compact representation for visibility graphs, but there exist geometric inputs for which this representation requires a near-quadratic number of cliques. Another class of applications comes from scheduling problems in which multiple users of a shared resource should be scheduled for time slots, in such a way that incompatible requests are never scheduled for the same time slot but all pairs of compatible requests are given at least one time slot together. The intersection number of a graph of compatibilities gives the minimum number of time slots needed for such a schedule: one slot for each clique in a clique cover of the compatibility graph. In the design of compilers for very long instruction word computers, a small clique cover of a graph of incompatible operations can be used to represent their incompatibilities by a small number of artificial resources (one resource per clique), allowing resource-based scheduling techniques to be used to assign operations to instruction slots. Shephard and Vetta observe that the intersection number of any network equals the minimum number of constraints needed in an integer programming formulation of the problem of computing maximum independent sets, in which one has a 0-1 variable per vertex and a constraint that in each clique of a clique cover the variables sum to at most one. They argue that, for the intersection graphs of paths in certain fiber optic communications networks, these intersection numbers are small, explaining the relative ease of solving certain optimization problems in allocating bandwidth on the networks. In statistics and data visualization, edge clique covers of a graph representing statistically indistinguishable pairs of variables are used to produce compact letter displays that assist in visualizing multiple pairwise comparisons. 
These displays are formed by choosing a letter or other visual marker for each clique, and then labeling each variable by the letters for the cliques that it belongs to. This method provides a graphical representation of which variables are indistinguishable: they are indistinguishable if they share at least one letter in their labels. In the analysis of food webs describing predator-prey relationships among animal species, a competition graph or niche overlap graph is an undirected graph in which the vertices represent species, and edges represent pairs of species that both compete for the same prey. These can be derived from a directed acyclic graph representing predator-prey relations by drawing an edge u − v {\displaystyle u-v} in the competition graph whenever there exists a prey species w {\displaystyle w} such that the predator-prey relation graph has edges u → w {\displaystyle u\to w} and v → w {\displaystyle v\to w} . Every competition graph must have at least one isolated vertex, and the competition number of an arbitrary graph represents the smallest number of isolated vertices that could be added to make it into a competition graph. Biologically, if part of a competition graph is observed, then the competition number represents the smallest possible number of unobserved prey species needed to explain it. The competition number is at most equal to the intersection number: one can transform any undirected graph into a competition graph by adding a prey species for each clique in an edge clique cover. However, this relation is not exact, because it is also possible for the predator species to be prey of other species. In a graph with n {\displaystyle n} vertices, at most n − 2 {\displaystyle n-2} of them can be the prey of more than one other species, so the competition number is at least the intersection number minus n − 2 {\displaystyle n-2} . 
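The construction of a competition graph from a predator-prey graph can be sketched as follows. The input format (a dict mapping each species to the set of species it preys on) and all names are illustrative assumptions, not from the article.

```python
from itertools import combinations

# Sketch: build the competition (niche overlap) graph of a food web.
# Two species compete exactly when they share at least one prey species.

def competition_graph(prey):
    """prey: dict mapping each species to the set of species it preys on."""
    edges = set()
    for u, v in combinations(prey, 2):
        if prey[u] & prey[v]:          # some shared prey species w exists
            edges.add(frozenset((u, v)))
    return edges

# Hawks and owls both eat mice, so they compete; no other pair shares prey.
food_web = {"hawk": {"mouse"}, "owl": {"mouse"},
            "mouse": {"grain"}, "grain": set()}
assert competition_graph(food_web) == {frozenset(("hawk", "owl"))}
```

Each shared prey species plays the role of one clique in an edge clique cover of the competition graph, which is the source of the relation between competition number and intersection number described above.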
Edge clique covers have also been used to infer the existence of protein complexes, systems of mutually interacting proteins, from protein–protein interaction networks describing only the pairwise interactions between proteins. More generally, Guillaume and Latapy have argued that, for complex networks of all types, replacing the network by a bipartite graph connecting its vertices to the cliques in a clique cover highlights the structure in the network. == Upper bounds == Any graph with m {\displaystyle m} edges has intersection number at most m {\displaystyle m} . Each edge is itself a two-vertex clique. There are m {\displaystyle m} of these cliques and together they cover all the edges. It is also true that every graph with n {\displaystyle n} vertices has intersection number at most ⌊ n 2 / 4 ⌋ {\displaystyle \lfloor n^{2}/4\rfloor } . More strongly, the edges of every n {\displaystyle n} -vertex graph can be covered by at most ⌊ n 2 / 4 ⌋ {\displaystyle \lfloor n^{2}/4\rfloor } cliques, all of which are either single edges or triangles. A greedy algorithm can find this cover: remove any two adjacent vertices and inductively cover the remaining graph. Restoring the two removed vertices, cover edges to their shared neighbors by triangles, leaving edges to unshared neighbors as two-vertex cliques. The inductive cover has at most ⌊ ( n − 2 ) 2 / 4 ⌋ {\displaystyle \lfloor (n-2)^{2}/4\rfloor } cliques, and the two removed vertices contribute at most n − 1 {\displaystyle n-1} cliques, maximized when all other vertices are unshared neighbors and the edge between the two vertices must be used as a clique. Adding these two quantities gives ⌊ n 2 / 4 ⌋ {\displaystyle \lfloor n^{2}/4\rfloor } cliques total. 
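The greedy argument above translates directly into an algorithm. The sketch below assumes the graph is given as a dict of adjacency sets (an input convention of ours); it repeatedly removes a pair of adjacent vertices, covering their incident edges with triangles (for shared neighbors) and single edges (for the rest).

```python
# Sketch of the greedy floor(n^2/4) cover: every clique emitted is either
# a single edge or a triangle of the input graph.

def greedy_clique_cover(adj):
    """Cover all edges of `adj` (dict: vertex -> set of neighbors) with
    at most floor(n^2/4) cliques, each an edge or a triangle."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    cliques = []
    while True:
        edge = next(((u, v) for u in adj for v in adj[u]), None)
        if edge is None:
            break                                  # no edges left
        u, v = edge
        shared = adj[u] & adj[v]                   # common neighbors -> triangles
        for w in shared:
            cliques.append({u, v, w})
        for w in (adj[u] - adj[v]) - {v}:          # u's private neighbors
            cliques.append({u, w})
        for w in (adj[v] - adj[u]) - {u}:          # v's private neighbors
            cliques.append({v, w})
        if not shared:                             # edge u-v not in any triangle
            cliques.append({u, v})
        for w in adj[u]:                           # delete u and v from the graph
            adj[w].discard(u)
        for w in adj[v]:
            adj[w].discard(v)
        del adj[u], adj[v]
    return cliques

# The 4-cycle is triangle-free, so the cover uses 4 single edges,
# meeting the floor(n^2/4) = 4 bound exactly.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
cover = greedy_clique_cover(cycle)
assert len(cover) <= 4 * 4 // 4
assert all(any({u, v} <= c for c in cover)
           for u in cycle for v in cycle[u])
```

On a triangle the same routine emits the single clique {0, 1, 2}, covering all three edges at once.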
This generalizes Mantel's theorem that a triangle-free graph has at most ⌊ n 2 / 4 ⌋ {\displaystyle \lfloor n^{2}/4\rfloor } edges, for in a triangle-free graph the only optimal clique edge cover has one clique per edge and therefore the intersection number equals the number of edges. An even tighter bound is possible when the number of edges is strictly greater than n 2 4 {\displaystyle {\tfrac {n^{2}}{4}}} . Let p {\displaystyle p} be the number of pairs of vertices that are not connected by an edge in the given graph G {\displaystyle G} , and let t {\displaystyle t} be the unique integer for which ( t − 1 ) t ≤ p < t ( t + 1 ) {\displaystyle (t-1)t\leq p<t(t+1)} . Then the intersection number of G {\displaystyle G} is at most p + t {\displaystyle p+t} . Graphs that are the complement of a sparse graph have small intersection numbers: the intersection number of any n {\displaystyle n} -vertex graph G {\displaystyle G} is at most 2 e 2 ( d + 1 ) 2 ln ⁡ n {\displaystyle 2e^{2}(d+1)^{2}\ln n} , where e {\displaystyle e} is the base of the natural logarithm and d {\displaystyle d} is the maximum degree of the complement graph of G {\displaystyle G} . It follows from deep results on the structure of claw-free graphs that, when a connected n {\displaystyle n} -vertex claw-free graph has at least three independent vertices, it has intersection number at most n {\displaystyle n} . It remains an unsolved problem whether this is true of all claw-free graphs without requiring them to have large independent sets. An important subclass of the claw-free graphs are the line graphs, graphs representing edges and touching pairs of edges of some other graph G {\displaystyle G} . An optimal clique cover of the line graph L ( G ) {\displaystyle L(G)} may be formed with one clique for each triangle in G {\displaystyle G} that has two or three degree-2 vertices, and one clique for each vertex that has degree at least two and is not a degree-two vertex of one of these triangles. 
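The p + t bound described above is easy to evaluate. The sketch below (function name is ours) counts the non-adjacent pairs p, finds the unique t with (t − 1)t ≤ p < t(t + 1), and returns p + t; per the text, this bound is stated for graphs with more than n²/4 edges.

```python
# Sketch: the p + t upper bound on the intersection number, where p is the
# number of non-adjacent vertex pairs.

def intersection_number_bound(n, num_edges):
    p = n * (n - 1) // 2 - num_edges      # pairs not joined by an edge
    t = 0
    while not ((t - 1) * t <= p < t * (t + 1)):
        t += 1                             # t is unique, so this terminates
    return p + t

# The complete graph K_5 has p = 0, forcing t = 1, so the bound is 1,
# matching its true intersection number (a single clique covers everything).
assert intersection_number_bound(5, 10) == 1
# Removing one edge from K_5 gives p = 1 and t = 1, for a bound of 2.
assert intersection_number_bound(5, 9) == 2
```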
The intersection number is the number of cliques of these two types. In the Erdős–Rényi–Gilbert model of random graphs, in which all graphs on n {\displaystyle n} labeled vertices are equally likely (or equivalently, each edge is present or absent, independently of other edges, with probability 1 2 {\displaystyle {\tfrac {1}{2}}} ) the intersection number of an n {\displaystyle n} -vertex random graph is with high probability Θ ( n 2 log 2 ⁡ n ) , {\displaystyle \Theta \left({\frac {n^{2}}{\log ^{2}n}}\right),} smaller by a factor of log 2 ⁡ n {\displaystyle \log ^{2}n} than the number of edges. In these graphs, the maximum cliques have (with high probability) only a logarithmic number of vertices, implying that this many of them are needed to cover all edges. The trickier part of the bound is proving that it is possible to find enough logarithmically sized cliques to cover many edges, allowing the remaining leftover edges to be covered by two-vertex cliques. Much of the early research on intersection numbers involved calculating these numbers on various specific graphs, such as the graphs formed by removing a complete subgraph or a perfect matching from a larger complete graph. == Computational complexity == Testing whether a given graph G {\displaystyle G} has intersection number at most a given number k {\displaystyle k} is NP-complete. Therefore, it is also NP-hard to compute the intersection number of a given graph. In turn, the hardness of the intersection number has been used to prove that it is NP-complete to recognize the squares of split graphs. The problem of computing the intersection number is, however, fixed-parameter tractable: that is, it can be solved in an amount of time bounded by a polynomial in n {\displaystyle n} multiplied by a larger but computable function of the intersection number k {\displaystyle k} .
This may be shown by observing that there are at most 2 k {\displaystyle 2^{k}} distinct closed neighborhoods in the graph – two vertices that belong to the same set of cliques have the same neighborhood – and that the graph formed by selecting one vertex per closed neighborhood has the same intersection number as the original graph. Therefore, in polynomial time the input can be reduced to a smaller kernel with at most 2 k {\textstyle 2^{k}} vertices. Applying a brute-force search over at most 2 k ! {\displaystyle 2^{k}!} assignments of distinct sets of cliques to the remaining vertices gives time double exponential in k {\displaystyle k} . The double-exponential dependence on k {\displaystyle k} cannot be reduced to single exponential by a kernelization of polynomial size, unless the polynomial hierarchy collapses, and if the exponential time hypothesis is true then double-exponential dependence is necessary regardless of whether kernelization is used. On graphs of bounded treewidth, dynamic programming on a tree decomposition of the graph can find the intersection number in linear time, but simpler algorithms based on finite sets of reduction rules do not work. There exists a constant c > 0 {\displaystyle c>0} such that the problem cannot be approximated in polynomial time with an approximation ratio better than n c {\displaystyle n^{c}} . The best approximation ratio that has been found is better than the trivial O ( n 2 ) {\displaystyle O(n^{2})} by only a polylogarithmic factor. Researchers in this area have also investigated the computational efficiency of heuristics, without guarantees on the solution quality they produce, and their behavior on real-world networks. More efficient algorithms are known for certain special classes of graphs. The intersection number of an interval graph is always equal to its number of maximal cliques, which may be computed in polynomial time. 
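The kernelization step can be sketched directly: vertices with the same closed neighborhood N[v] are interchangeable for covering purposes, so keeping one representative per distinct closed neighborhood preserves the intersection number. The function name and input convention below are ours.

```python
# Sketch: reduce a graph to a kernel with at most 2^k vertices by keeping
# one representative vertex per distinct closed neighborhood.

def kernelize(adj):
    """adj: dict mapping each vertex to its set of neighbors."""
    representative = {}
    for v, neighbors in adj.items():
        closed = frozenset(neighbors) | {v}      # the closed neighborhood N[v]
        representative.setdefault(closed, v)     # keep the first vertex seen
    keep = set(representative.values())
    return {v: adj[v] & keep for v in keep}

# In K_4 every vertex has the same closed neighborhood, so the kernel has a
# single vertex; in a 3-vertex path all closed neighborhoods differ.
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
assert len(kernelize(k4)) == 1
path = {0: {1}, 1: {0, 2}, 2: {1}}
assert len(kernelize(path)) == 3
```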
More generally, in chordal graphs, the intersection number may be computed by an algorithm that considers the vertices in an elimination ordering of the graph (an ordering in which each vertex and its later neighbors form a clique) and that, for each vertex v {\displaystyle v} , forms a clique for v {\displaystyle v} and its later neighbors whenever at least one of the edges incident to v {\displaystyle v} is not covered by any earlier clique. It is also possible to find the intersection number in linear time in circular-arc graphs. However, although these graphs have only a polynomial number of cliques to choose among for the cover, having few cliques alone is not enough to make the problem easy: there exist families of graphs with polynomially many cliques for which the intersection number remains NP-hard. The intersection number can also be found in polynomial time for graphs whose maximum degree is five, but is NP-hard for graphs of maximum degree six. On planar graphs, computing the intersection number exactly remains NP-hard, but it has a polynomial-time approximation scheme based on Baker's technique. == See also == Bipartite dimension, the smallest number of bicliques needed to cover all edges of a graph Bound graph, a type of graph characterized by clique edge covers of a special form Clique cover, the NP-hard problem of finding a small number of cliques that cover all vertices of a graph == References == == External links == Weisstein, Eric W., "Intersection Number", MathWorld
Wikipedia/Intersection_number_(graph_theory)
In decision theory, a scoring rule provides evaluation metrics for probabilistic predictions or forecasts. While "regular" loss functions (such as mean squared error) assign a goodness-of-fit score to a predicted value and an observed value, scoring rules assign such a score to a predicted probability distribution and an observed value. On the other hand, a scoring function provides a summary measure for the evaluation of point predictions, i.e. one predicts a property or functional T ( F ) {\displaystyle T(F)} , like the expectation or the median. Scoring rules answer the question "how good is a predicted probability distribution compared to an observation?" Scoring rules that are (strictly) proper are proven to have the lowest expected score if the predicted distribution equals the underlying distribution of the target variable. Although scores may differ for individual observations, predicting the "correct" distributions minimizes the expected score. Scoring rules and scoring functions are often used as "cost functions" or "loss functions" of probabilistic forecasting models. They are evaluated as the empirical mean of a given sample, the "score". Scores of different predictions or models can then be compared to conclude which model is best. For example, consider a model that predicts (based on an input x {\displaystyle x} ) a mean μ ∈ R {\displaystyle \mu \in \mathbb {R} } and standard deviation σ ∈ R + {\displaystyle \sigma \in \mathbb {R} _{+}} . Together, those variables define a Gaussian distribution N ( μ , σ 2 ) {\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})} , in essence predicting the target variable as a probability distribution. A common interpretation of probabilistic models is that they aim to quantify their own predictive uncertainty.
In this example, an observed target variable y ∈ R {\displaystyle y\in \mathbb {R} } is then compared to the predicted distribution N ( μ , σ 2 ) {\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})} and assigned a score L ( N ( μ , σ 2 ) , y ) ∈ R {\displaystyle {\mathcal {L}}({\mathcal {N}}(\mu ,\sigma ^{2}),y)\in \mathbb {R} } . Training on a scoring rule should "teach" a probabilistic model to predict when its uncertainty is low and when it is high, and it should result in calibrated predictions while minimizing the predictive uncertainty. Although the example given concerns the probabilistic forecasting of a real-valued target variable, a variety of scoring rules have been designed with different target variables in mind. Scoring rules exist for binary and categorical probabilistic classification, as well as for univariate and multivariate probabilistic regression. == Definitions == Consider a sample space Ω {\displaystyle \Omega } , a σ-algebra A {\displaystyle {\mathcal {A}}} of subsets of Ω {\displaystyle \Omega } and a convex class F {\displaystyle {\mathcal {F}}} of probability measures on ( Ω , A ) {\displaystyle (\Omega ,{\mathcal {A}})} . A function defined on Ω {\displaystyle \Omega } and taking values in the extended real line, R ¯ = [ − ∞ , ∞ ] {\displaystyle {\overline {\mathbb {R} }}=[-\infty ,\infty ]} , is F {\displaystyle {\mathcal {F}}} -quasi-integrable if it is measurable with respect to A {\displaystyle {\mathcal {A}}} and is quasi-integrable with respect to all F ∈ F {\displaystyle F\in {\mathcal {F}}} . === Probabilistic forecast === A probabilistic forecast is any probability measure F ∈ F {\displaystyle F\in {\mathcal {F}}} , i.e. a distribution of potential future observations.
=== Scoring rule === A scoring rule is any extended real-valued function S : F × Ω → R {\displaystyle \mathbf {S} :{\mathcal {F}}\times \Omega \rightarrow \mathbb {R} } such that S ( F , ⋅ ) {\displaystyle \mathbf {S} (F,\cdot )} is F {\displaystyle {\mathcal {F}}} -quasi-integrable for all F ∈ F {\displaystyle F\in {\mathcal {F}}} . S ( F , y ) {\displaystyle \mathbf {S} (F,y)} represents the loss or penalty when the forecast F ∈ F {\displaystyle F\in {\mathcal {F}}} is issued and the observation y ∈ Ω {\displaystyle y\in \Omega } materializes. === Point forecast === A point forecast is a functional, i.e. a potentially set-valued mapping F → T ( F ) ⊆ Ω {\displaystyle F\rightarrow T(F)\subseteq \Omega } . === Scoring function === A scoring function is any real-valued function S : Ω × Ω → R {\displaystyle S:\Omega \times \Omega \rightarrow \mathbb {R} } where S ( x , y ) {\displaystyle S(x,y)} represents the loss or penalty when the point forecast x ∈ Ω {\displaystyle x\in \Omega } is issued and the observation y ∈ Ω {\displaystyle y\in \Omega } materializes. === Orientation === Scoring rules S ( F , y ) {\displaystyle \mathbf {S} (F,y)} and scoring functions S ( x , y ) {\displaystyle S(x,y)} are negatively (positively) oriented if smaller (larger) values mean better. Here we adhere to negative orientation, hence the association with "loss". === Expected score === The expected score of a prediction F ∈ F {\displaystyle F\in {\mathcal {F}}} under Q ∈ F {\displaystyle Q\in {\mathcal {F}}} is the expected score of the predicted distribution F {\displaystyle F} when observations are sampled from the distribution Q {\displaystyle Q} :
E Y ∼ Q [ S ( F , Y ) ] = ∫ S ( F , ω ) d Q ( ω ) {\displaystyle \mathbb {E} _{Y\sim Q}[S(F,Y)]=\int \mathbf {S} (F,\omega )\mathrm {d} Q(\omega )} === Sample average score === Many probabilistic forecasting models are trained via the sample average score, in which a set of predicted distributions F 1 , … , F n ∈ F {\displaystyle F_{1},\ldots ,F_{n}\in {\mathcal {F}}} is evaluated against a set of observations y 1 , … , y n ∈ Ω {\displaystyle y_{1},\ldots ,y_{n}\in \Omega } . L = 1 n ∑ i = 1 n S ( F i , y i ) {\displaystyle {\mathcal {L}}={\frac {1}{n}}\sum _{i=1}^{n}S(F_{i},y_{i})} == Propriety and consistency == Strictly proper scoring rules and strictly consistent scoring functions encourage honest forecasts by maximization of the expected reward: If a forecaster is given a reward of − S ( F , y ) {\displaystyle -\mathbf {S} (F,y)} if y {\displaystyle y} realizes (e.g. y = r a i n {\displaystyle y=rain} ), then the highest expected reward (lowest score) is obtained by reporting the true probability distribution. === Proper scoring rules === A scoring rule S {\displaystyle \mathbf {S} } is proper relative to F {\displaystyle {\mathcal {F}}} if (assuming negative orientation) its expected score is minimized when the forecasted distribution matches the distribution of the observation: E Y ∼ Q [ S ( Q , Y ) ] ≤ E Y ∼ Q [ S ( F , Y ) ] {\displaystyle \mathbb {E} _{Y\sim Q}[S(Q,Y)]\leq \mathbb {E} _{Y\sim Q}[S(F,Y)]} for all F , Q ∈ F {\displaystyle F,Q\in {\mathcal {F}}} . It is strictly proper if the above inequality holds with equality if and only if F = Q {\displaystyle F=Q} .
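Propriety can be illustrated numerically. For the logarithmic score on a categorical variable, the expected (negatively oriented) score under the true distribution q is the cross entropy H(q, r), which standard information theory shows is minimized exactly at r = q. The sketch below (function name and example numbers are ours) checks this for a few competing forecasts.

```python
import math

# Sketch: the expected negatively oriented log score -E[ln r_Y] under the
# true distribution q is the cross entropy H(q, r), minimized at r = q.

def expected_log_score(q, r):
    return -sum(qi * math.log(ri) for qi, ri in zip(q, r) if qi > 0)

q = [0.6, 0.3, 0.1]                    # true distribution
honest = expected_log_score(q, q)      # score of the honest forecast
for r in ([0.5, 0.4, 0.1], [0.8, 0.1, 0.1], [1/3, 1/3, 1/3]):
    assert expected_log_score(q, r) > honest   # dishonest forecasts score worse
```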
=== Consistent scoring functions === A scoring function S {\displaystyle S} is consistent for the functional T {\displaystyle T} relative to the class F {\displaystyle {\mathcal {F}}} if E Y ∼ F [ S ( t , Y ) ] ≤ E Y ∼ F [ S ( x , Y ) ] {\displaystyle \mathbb {E} _{Y\sim F}[S(t,Y)]\leq \mathbb {E} _{Y\sim F}[S(x,Y)]} for all F ∈ F {\displaystyle F\in {\mathcal {F}}} , all t ∈ T ( F ) {\displaystyle t\in T(F)} and all x ∈ Ω {\displaystyle x\in \Omega } . It is strictly consistent if it is consistent and equality in the above equation implies that x ∈ T ( F ) {\displaystyle x\in T(F)} . === Superior scoring rules === To enforce that correct forecasts are always strictly preferred, Ahmadian et al. (2024) introduced two superior variants: Penalized Brier Score (PBS) and Penalized Logarithmic Loss (PLL), which add a fixed penalty whenever the predicted class ( arg ⁡ max p {\displaystyle \arg \max p} ) differs from the true class ( arg ⁡ max y {\displaystyle \arg \max y} ). - PBS augments the Brier score by adding ( c − 1 ) / c {\displaystyle (c-1)/c} for any misclassification (with c {\displaystyle c} the number of classes). - PLL augments the logarithmic score by adding − log ⁡ ( 1 / c ) {\displaystyle -\log(1/c)} for any misclassification. Despite these penalties, PBS and PLL remain strictly proper. Their expected score is uniquely minimized when the forecast equals the true distribution, satisfying the superior property that every correct classification is scored strictly better than any incorrect one. Note: Neither the standard Brier score nor the logarithmic score satisfies the superior criterion. They remain strictly proper but can assign better scores to incorrect predictions than to certain correct ones, an issue resolved by PBS and PLL. == Example application of scoring rules == An example of probabilistic forecasting is in meteorology where a weather forecaster may give the probability of rain on the next day.
One could note the number of times that a 25% probability was quoted, over a long period, and compare this with the actual proportion of times that rain fell. If the actual percentage was substantially different from the stated probability we say that the forecaster is poorly calibrated. A poorly calibrated forecaster might be encouraged to do better by a bonus system. A bonus system designed around a proper scoring rule will incentivize the forecaster to report probabilities equal to his personal beliefs. In addition to the simple case of a binary decision, such as assigning probabilities to 'rain' or 'no rain', scoring rules may be used for multiple classes, such as 'rain', 'snow', or 'clear', or continuous responses like the amount of rain per day. The image to the right shows an example of a scoring rule, the logarithmic scoring rule, as a function of the probability reported for the event that actually occurred. One way to use this rule would be as a cost based on the probability that a forecaster or algorithm assigns, then checking to see which event actually occurs. == Examples of proper scoring rules == There are an infinite number of scoring rules, including entire parameterized families of strictly proper scoring rules. The ones shown below are simply popular examples. === Categorical variables === For a categorical response variable with m {\displaystyle m} mutually exclusive events, Y ∈ Ω = { 1 , … , m } {\displaystyle Y\in \Omega =\{1,\ldots ,m\}} , a probabilistic forecaster or algorithm will return a probability vector r {\displaystyle \mathbf {r} } with a probability for each of the m {\displaystyle m} outcomes. ==== Logarithmic score ==== The logarithmic scoring rule is a local strictly proper scoring rule. This is also the negative of surprisal, which is commonly used as a scoring criterion in Bayesian inference; the goal is to minimize expected surprise. This scoring rule has strong foundations in information theory. 
L ( r , i ) = ln ⁡ ( r i ) {\displaystyle L(\mathbf {r} ,i)=\ln(r_{i})} Here, the score is calculated as the logarithm of the probability estimate for the actual outcome. That is, a prediction of 80% that correctly proved true would receive a score of ln(0.8) = −0.22. This same prediction also assigns 20% likelihood to the opposite case, and so if the prediction proves false, it would receive a score based on the 20%: ln(0.2) = −1.6. The goal of a forecaster is to maximize the score, and −0.22 is indeed larger than −1.6. If one treats the truth or falsity of the prediction as a variable x with value 1 or 0 respectively, and the expressed probability as p, then one can write the logarithmic scoring rule as x ln(p) + (1 − x) ln(1 − p). Note that any logarithmic base may be used, since strictly proper scoring rules remain strictly proper under linear transformation. That is: L ( r , i ) = log b ⁡ ( r i ) {\displaystyle L(\mathbf {r} ,i)=\log _{b}(r_{i})} is strictly proper for all b > 1 {\displaystyle b>1} . ==== Brier/Quadratic score ==== The quadratic scoring rule is a strictly proper scoring rule Q ( r , i ) = 2 r i − r ⋅ r = 2 r i − ∑ j = 1 C r j 2 {\displaystyle Q(\mathbf {r} ,i)=2r_{i}-\mathbf {r} \cdot \mathbf {r} =2r_{i}-\sum _{j=1}^{C}r_{j}^{2}} where r i {\displaystyle r_{i}} is the probability assigned to the correct answer and C {\displaystyle C} is the number of classes. The Brier score, originally proposed by Glenn W. Brier in 1950, can be obtained by an affine transform from the quadratic scoring rule. B ( r , i ) = ∑ j = 1 C ( y j − r j ) 2 {\displaystyle B(\mathbf {r} ,i)=\sum _{j=1}^{C}(y_{j}-r_{j})^{2}} where y j = 1 {\displaystyle y_{j}=1} when the j {\displaystyle j} th event is correct and y j = 0 {\displaystyle y_{j}=0} otherwise and C {\displaystyle C} is the number of classes.
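These categorical rules can be checked numerically. The sketch below (function names are ours) reproduces the worked numbers from the rain example; note the logarithmic score as written here is positively oriented while the Brier score is negatively oriented.

```python
import math

# Sketch: logarithmic and Brier scores for a single categorical forecast r
# with correct class index i (0-based).

def log_score(r, i):
    """Logarithmic score ln(r_i); larger is better."""
    return math.log(r[i])

def brier_score(r, i):
    """Brier score sum_j (y_j - r_j)^2; smaller is better."""
    return sum((r[j] - (1.0 if j == i else 0.0)) ** 2 for j in range(len(r)))

# The 80% rain forecast from the text, scored against both outcomes:
assert round(log_score([0.8, 0.2], 0), 2) == -0.22    # rain occurs
assert round(log_score([0.8, 0.2], 1), 2) == -1.61    # no rain occurs
assert abs(brier_score([0.8, 0.2], 0) - 0.08) < 1e-12
```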
An important difference between these two rules is that a forecaster should strive to maximize the quadratic score Q {\displaystyle Q} yet minimize the Brier score B {\displaystyle B} . This is due to a negative sign in the linear transformation between them. ==== Spherical score ==== The spherical scoring rule is also a strictly proper scoring rule S ( r , i ) = r i ‖ r ‖ = r i r 1 2 + ⋯ + r C 2 {\displaystyle S(\mathbf {r} ,i)={\frac {r_{i}}{\lVert \mathbf {r} \rVert }}={\frac {r_{i}}{\sqrt {r_{1}^{2}+\cdots +r_{C}^{2}}}}} ==== Ranked Probability Score ==== The ranked probability score (RPS) is a strictly proper scoring rule that can be expressed as: R P S ( r , i ) = ∑ k = 1 C − 1 ( ∑ j = 1 k r j − y j ) 2 {\displaystyle RPS(\mathbf {r} ,i)=\sum _{k=1}^{C-1}\left(\sum _{j=1}^{k}r_{j}-y_{j}\right)^{2}} where y j = 1 {\displaystyle y_{j}=1} when the j {\displaystyle j} th event is correct and y j = 0 {\displaystyle y_{j}=0} otherwise, and C {\displaystyle C} is the number of classes. Unlike the scoring rules above, the ranked probability score takes the distance between classes into account, i.e. classes 1 and 2 are considered closer than classes 1 and 3. The score assigns better scores to probabilistic forecasts with high probabilities assigned to classes close to the correct class. For example, when considering probabilistic forecasts r 1 = ( 0.5 , 0.5 , 0 ) {\displaystyle \mathbf {r} _{1}=(0.5,0.5,0)} and r 2 = ( 0.5 , 0 , 0.5 ) {\displaystyle \mathbf {r} _{2}=(0.5,0,0.5)} , we find that R P S ( r 1 , 1 ) = 0.25 {\displaystyle RPS(\mathbf {r} _{1},1)=0.25} , while R P S ( r 2 , 1 ) = 0.5 {\displaystyle RPS(\mathbf {r} _{2},1)=0.5} , despite both probabilistic forecasts assigning identical probability to the correct class. ==== Comparison of categorical strictly proper scoring rules ==== Shown below on the left is a graphical comparison of the Logarithmic, Quadratic, and Spherical scoring rules for a binary classification problem.
The x-axis indicates the reported probability for the event that actually occurred. It is important to note that each of the scores has a different magnitude and location. The magnitude differences are not relevant, however, as scores remain proper under affine transformation. Therefore, to compare different scores it is necessary to move them to a common scale. A reasonable choice of normalization is shown in the picture on the right, where all scores intersect the points (0.5,0) and (1,1). This ensures that they yield 0 for a uniform distribution (two probabilities of 0.5 each), reflecting no cost or reward for reporting what is often the baseline distribution. All normalized scores below also yield 1 when the true class is assigned a probability of 1. === Univariate continuous variables === The scoring rules listed below aim to evaluate probabilistic predictions when the predicted distributions are univariate continuous probability distributions, i.e. the predicted distributions are defined over a univariate target variable X ∈ R {\displaystyle X\in \mathbb {R} } and have a probability density function f : R → R + {\displaystyle f:\mathbb {R} \to \mathbb {R} _{+}} . ==== Logarithmic score for continuous variables ==== The logarithmic score is a local, strictly proper scoring rule. It is defined as L ( D , y ) = − ln ⁡ ( f D ( y ) ) {\displaystyle L(D,y)=-\ln(f_{D}(y))} where f D {\displaystyle f_{D}} denotes the probability density function of the predicted distribution D {\displaystyle D} . The logarithmic score for continuous variables has strong ties to maximum likelihood estimation. However, in many applications, the continuous ranked probability score is often preferred over the logarithmic score, as the logarithmic score can be heavily influenced by slight deviations in the tail densities of forecasted distributions.
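For a concrete instance of the continuous logarithmic score, consider the Gaussian predictive distribution from the introduction: writing out −ln f(y) for N(μ, σ²) gives the sketch below (the function name is ours). It also illustrates the tail sensitivity just mentioned, in that an overconfident (small σ) forecast is punished heavily for a miss.

```python
import math

# Sketch: the negatively oriented log score -ln f(y) for a Gaussian
# predictive distribution N(mu, sigma^2).

def gaussian_log_score(mu, sigma, y):
    """Smaller is better."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (y - mu) ** 2 / (2 * sigma ** 2)

# An observation near the predicted mean scores better (lower) than one far
# out in the tail, for the same predicted spread:
assert gaussian_log_score(0.0, 1.0, 0.1) < gaussian_log_score(0.0, 1.0, 3.0)
# Overconfidence is punished: a tiny sigma makes a miss extremely costly.
assert gaussian_log_score(0.0, 0.01, 1.0) > gaussian_log_score(0.0, 1.0, 1.0)
```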
==== Continuous ranked probability score ==== The continuous ranked probability score (CRPS) is a strictly proper scoring rule widely used in meteorology. It is defined as C R P S ( D , y ) = ∫ R ( F D ( x ) − H ( x − y ) ) 2 d x {\displaystyle CRPS(D,y)=\int _{\mathbb {R} }(F_{D}(x)-H(x-y))^{2}dx} where F D {\displaystyle F_{D}} is the cumulative distribution function of the forecasted distribution D {\displaystyle D} , H {\displaystyle H} is the Heaviside step function and y ∈ R {\displaystyle y\in \mathbb {R} } is the observation. For distributions with finite first moment, the continuous ranked probability score can be written as: C R P S ( D , y ) = E X ∼ D [ | X − y | ] − 1 2 E X , X ′ ∼ D [ | X − X ′ | ] {\displaystyle CRPS(D,y)=\mathbb {E} _{X\sim D}[|X-y|]-{\frac {1}{2}}\mathbb {E} _{X,X'\sim D}[|X-X'|]} where X {\displaystyle X} and X ′ {\displaystyle X'} are independent random variables, sampled from the distribution D {\displaystyle D} . Furthermore, when the cumulative probability function F {\displaystyle F} is continuous, the continuous ranked probability score can also be written as C R P S ( D , y ) = E X ∼ D [ | X − y | ] + E X ∼ D [ X ] − 2 E X ∼ D [ X ⋅ F D ( X ) ] {\displaystyle CRPS(D,y)=\mathbb {E} _{X\sim D}[|X-y|]+\mathbb {E} _{X\sim D}[X]-2\mathbb {E} _{X\sim D}[X\cdot F_{D}(X)]} The continuous ranked probability score can be seen both as a continuous extension of the ranked probability score and as a generalization of quantile regression. The continuous ranked probability score over the empirical distribution D ^ q {\displaystyle {\hat {D}}_{q}} of an ordered set of points q 1 ≤ … ≤ q n {\displaystyle q_{1}\leq \ldots \leq q_{n}} (i.e.
every point has 1 / n {\displaystyle 1/n} probability of occurring), is equal to twice the mean quantile loss applied on those points with evenly spread quantiles ( τ 1 , … , τ n ) = ( 1 / ( 2 n ) , … , ( 2 n − 1 ) / ( 2 n ) ) {\displaystyle (\tau _{1},\ldots ,\tau _{n})=(1/(2n),\ldots ,(2n-1)/(2n))} : C R P S ( D ^ q , y ) = 2 n ∑ i = 1 n τ i ( y − q i ) + + ( 1 − τ i ) ( q i − y ) + {\displaystyle CRPS\left({\hat {D}}_{q},y\right)={\frac {2}{n}}\sum _{i=1}^{n}\tau _{i}(y-q_{i})_{+}+(1-\tau _{i})(q_{i}-y)_{+}} For many popular families of distributions, closed-form expressions for the continuous ranked probability score have been derived. The continuous ranked probability score has been used as a loss function for artificial neural networks, in which weather forecasts are postprocessed to a Gaussian probability distribution. CRPS has also been adapted to survival analysis to cover censored events. CRPS is also known as the Cramér–von Mises distance and can be seen as an improvement on the Wasserstein distance, which is often used in machine learning; the Cramér distance has also performed better in ordinal regression than the KL divergence or the Wasserstein metric. While CRPS is widely used for evaluating probabilistic forecasts, it has critical theoretical limitations. It has been shown that CRPS can produce systematically misleading evaluations by favoring probabilistic forecasts whose medians are close to the observed outcome, regardless of the actual probability assigned to that region, potentially resulting in higher scores for forecasts that allocate negligible (or even zero) probability mass to the true outcome. Furthermore, CRPS is not invariant under smooth transformations of the forecast variable, and its ranking of forecast systems may reverse under such transformations, raising concerns about its consistency for evaluation purposes.
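For an empirical (sample-based) forecast, the expectation form and the quantile-loss form above can both be computed exactly and should agree. The sketch below (function names are ours) checks this on a small sample.

```python
# Sketch: CRPS of an empirical forecast with equal mass 1/n on each sample
# point, computed two ways: E|X-y| - (1/2)E|X-X'| and the quantile-loss form.

def crps_expectation_form(sample, y):
    n = len(sample)
    term1 = sum(abs(x - y) for x in sample) / n
    term2 = sum(abs(a - b) for a in sample for b in sample) / (2 * n * n)
    return term1 - term2

def crps_quantile_form(sample, y):
    q = sorted(sample)
    n = len(q)
    total = 0.0
    for i, qi in enumerate(q):
        tau = (2 * i + 1) / (2 * n)        # evenly spread quantile levels
        total += tau * max(y - qi, 0) + (1 - tau) * max(qi - y, 0)
    return 2 * total / n

sample, y = [0.0, 1.0, 2.0, 4.0], 1.5
assert abs(crps_expectation_form(sample, y) - crps_quantile_form(sample, y)) < 1e-12
```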
=== Multivariate continuous variables === The scoring rules listed below aim to evaluate probabilistic predictions when the predicted distributions are multivariate continuous probability distributions, i.e. the predicted distributions are defined over a multivariate target variable X ∈ R n {\displaystyle X\in \mathbb {R} ^{n}} and have a probability density function f : R n → R + {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} _{+}} . ==== Multivariate logarithmic score ==== The multivariate logarithmic score is similar to the univariate logarithmic score: L ( D , y ) = − ln ⁡ ( f D ( y ) ) {\displaystyle L(D,y)=-\ln(f_{D}(y))} where f D {\displaystyle f_{D}} denotes the probability density function of the predicted multivariate distribution D {\displaystyle D} . It is a local, strictly proper scoring rule. ==== Hyvärinen scoring rule ==== The Hyvärinen scoring function (of a density p) is defined by s ( p ) = 2 Δ y log ⁡ p ( y ) + ‖ ∇ y log ⁡ p ( y ) ‖ 2 2 {\displaystyle s(p)=2\Delta _{y}\log p(y)+\|\nabla _{y}\log p(y)\|_{2}^{2}} where Δ {\displaystyle \Delta } denotes the Laplacian (the trace of the Hessian) and ∇ {\displaystyle \nabla } denotes the gradient. This scoring rule can be used to computationally simplify parameter inference and to address Bayesian model comparison with arbitrarily vague priors. It has also been used to introduce new information-theoretic quantities beyond the existing information theory.
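In the univariate case the Hyvärinen score has a simple closed form for a Gaussian density: for p = N(μ, σ²), the gradient of log p at y is −(y − μ)/σ² and its Laplacian (second derivative) is −1/σ². The sketch below (function name is ours) evaluates the score from these derivatives; because the score depends on the density only through derivatives of log p, it is insensitive to the normalizing constant.

```python
# Sketch: the Hyvarinen score 2*laplacian(log p) + |grad(log p)|^2 evaluated
# in closed form for a univariate Gaussian density N(mu, sigma^2).

def hyvarinen_score_gaussian(mu, sigma, y):
    grad = -(y - mu) / sigma ** 2        # d/dy log p(y)
    laplacian = -1.0 / sigma ** 2        # d^2/dy^2 log p(y)
    return 2 * laplacian + grad ** 2

# At the mean of a standard Gaussian the gradient vanishes,
# leaving only the Laplacian term.
assert hyvarinen_score_gaussian(0.0, 1.0, 0.0) == -2.0
```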
==== Energy score ==== The energy score is a multivariate extension of the continuous ranked probability score: E S β ( D , Y ) = E X ∼ D [ ‖ X − Y ‖ 2 β ] − 1 2 E X , X ′ ∼ D [ ‖ X − X ′ ‖ 2 β ] {\displaystyle ES_{\beta }(D,Y)=\mathbb {E} _{X\sim D}[\lVert X-Y\rVert _{2}^{\beta }]-{\frac {1}{2}}\mathbb {E} _{X,X'\sim D}[\lVert X-X'\rVert _{2}^{\beta }]} Here, β ∈ ( 0 , 2 ) {\displaystyle \beta \in (0,2)} , ‖ ‖ 2 {\displaystyle \lVert \rVert _{2}} denotes the n {\displaystyle n} -dimensional Euclidean norm, and X , X ′ {\displaystyle X,X'} are independently sampled random variables from the probability distribution D {\displaystyle D} . The energy score is strictly proper for distributions D {\displaystyle D} for which E X ∼ D [ ‖ X ‖ 2 ] {\displaystyle \mathbb {E} _{X\sim D}[\lVert X\rVert _{2}]} is finite. It has been suggested that the energy score is somewhat ineffective when evaluating the intervariable dependency structure of the forecasted multivariate distribution. The energy score is equal to twice the energy distance between the predicted distribution and the empirical distribution of the observation. ==== Variogram score ==== The variogram score of order p {\displaystyle p} is given by: V S p ( D , Y ) = ∑ i , j = 1 n w i j ( | Y i − Y j | p − E X ∼ D [ | X i − X j | p ] ) 2 {\displaystyle VS_{p}(D,Y)=\sum _{i,j=1}^{n}w_{ij}(|Y_{i}-Y_{j}|^{p}-\mathbb {E} _{X\sim D}[|X_{i}-X_{j}|^{p}])^{2}} Here, w i j {\displaystyle w_{ij}} are weights, often set to 1, and p > 0 {\displaystyle p>0} can be arbitrarily chosen, but p = 0.5 , 1 {\displaystyle p=0.5,1} or 2 {\displaystyle 2} are often used. X i {\displaystyle X_{i}} here denotes the i {\displaystyle i} 'th marginal random variable of X {\displaystyle X} . The variogram score is proper for distributions for which the ( 2 p ) {\displaystyle (2p)} 'th moment is finite for all components, but is never strictly proper.
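In practice the energy score is usually estimated from forecast samples. A minimal Monte Carlo sketch (the function name is ours; the naive all-pairs average is a simple, slightly biased estimator of E‖X − X′‖^β):

```python
import numpy as np

def energy_score(samples, y, beta=1.0):
    """Monte Carlo estimate of ES_beta(D, Y) from forecast samples
    X_1..X_m ~ D (rows of `samples`) and the observation y."""
    X = np.atleast_2d(np.asarray(samples, dtype=float))
    y = np.asarray(y, dtype=float)
    term1 = np.mean(np.linalg.norm(X - y, axis=1) ** beta)
    # naive all-pairs estimate of E||X - X'||^beta (includes zero diagonal)
    diffs = X[:, None, :] - X[None, :, :]
    term2 = np.mean(np.linalg.norm(diffs, axis=2) ** beta)
    return term1 - 0.5 * term2
```

A quick sanity check: for a point-mass forecast (all samples equal to x₀) the score reduces to ‖x₀ − Y‖^β, and the score is always non-negative, since it equals half the energy distance between the forecast and the point mass at Y.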
Compared to the energy score, the variogram score is claimed to be more discriminative with respect to the predicted correlation structure. ==== Conditional continuous ranked probability score ==== The conditional continuous ranked probability score (Conditional CRPS or CCRPS) is a family of (strictly) proper scoring rules. Conditional CRPS evaluates a forecasted multivariate distribution D {\displaystyle D} by evaluation of CRPS over a prescribed set of univariate conditional probability distributions of the predicted multivariate distribution: C C R P S T ( D , Y ) = ∑ i = 1 k C R P S ( P X ∼ D ( X v i | X j = Y j for j ∈ C i ) , Y v i ) {\displaystyle CCRPS_{\mathcal {T}}(D,Y)=\sum _{i=1}^{k}CRPS(P_{X\sim D}(X_{v_{i}}|X_{j}=Y_{j}{\text{ for }}j\in {\mathcal {C}}_{i}),Y_{v_{i}})} Here, X i {\displaystyle X_{i}} is the i {\displaystyle i} 'th marginal variable of X ∼ D {\displaystyle X\sim D} , T = ( v i , C i ) i = 1 k {\displaystyle {\mathcal {T}}=(v_{i},{\mathcal {C}}_{i})_{i=1}^{k}} is a set of tuples that defines a conditional specification (with v i ∈ { 1 , … , n } {\displaystyle v_{i}\in \{1,\ldots ,n\}} and C i ⊆ { 1 , … , n } ∖ { v i } {\displaystyle {\mathcal {C}}_{i}\subseteq \{1,\ldots ,n\}\setminus \{v_{i}\}} ), and P X ∼ D ( X v i | X j = Y j for j ∈ C i ) {\displaystyle P_{X\sim D}(X_{v_{i}}|X_{j}=Y_{j}{\text{ for }}j\in {\mathcal {C}}_{i})} denotes the conditional probability distribution for X v i {\displaystyle X_{v_{i}}} given that all variables X j {\displaystyle X_{j}} for j ∈ C i {\displaystyle j\in {\mathcal {C}}_{i}} are equal to their respective observations. In the case that P X ∼ D ( X v i | X j = Y j for j ∈ C i ) {\displaystyle P_{X\sim D}(X_{v_{i}}|X_{j}=Y_{j}{\text{ for }}j\in {\mathcal {C}}_{i})} is ill-defined (i.e. its conditional event has zero likelihood), CRPS scores over this distribution are defined as infinite. 
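As an illustration, conditional CRPS can be written out for a bivariate normal forecast under the chain-rule specification T = {(1, ∅), (2, {1})}, using the known closed-form CRPS of a normal distribution. This is a sketch with hypothetical function names, not a reference implementation:

```python
import math

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a normal forecast N(mu, sigma^2) at y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

def ccrps_bivariate_normal(mu, sigma, rho, y):
    """Conditional CRPS of a bivariate normal forecast under the
    chain-rule specification T = {(1, {}), (2, {1})}: CRPS of the
    first marginal plus CRPS of X2 | X1 = y1."""
    s1 = crps_normal(mu[0], sigma[0], y[0])
    cond_mu = mu[1] + rho * sigma[1] / sigma[0] * (y[0] - mu[0])
    cond_sigma = sigma[1] * math.sqrt(1 - rho ** 2)
    s2 = crps_normal(cond_mu, cond_sigma, y[1])
    return s1 + s2
```

For ρ = 0 the two conditionals decouple and the score is simply the sum of the two marginal CRPS values.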
Conditional CRPS is strictly proper for distributions with finite first moment, if the chain rule is included in the conditional specification, meaning that there exists a permutation ϕ 1 , … , ϕ n {\displaystyle \phi _{1},\ldots ,\phi _{n}} of 1 , … , n {\displaystyle 1,\ldots ,n} such that for all 1 ≤ i ≤ n {\displaystyle 1\leq i\leq n} : ( ϕ i , { ϕ 1 , … , ϕ i − 1 } ) ∈ T {\displaystyle (\phi _{i},\{\phi _{1},\ldots ,\phi _{i-1}\})\in {\mathcal {T}}} . === Interpretation of proper scoring rules === All proper scoring rules are equal to weighted sums (integral with a non-negative weighting functional) of the losses in a set of simple two-alternative decision problems that use the probabilistic prediction, each such decision problem having a particular combination of associated cost parameters for false positive and false negative decisions. A strictly proper scoring rule corresponds to having a nonzero weighting for all possible decision thresholds. Any given proper scoring rule is equal to the expected losses with respect to a particular probability distribution over the decision thresholds; thus the choice of a scoring rule corresponds to an assumption about the probability distribution of decision problems for which the predicted probabilities will ultimately be employed, with for example the quadratic loss (or Brier) scoring rule corresponding to a uniform probability of the decision threshold being anywhere between zero and one. The classification accuracy score (percent classified correctly), a single-threshold scoring rule which is zero or one depending on whether the predicted probability is on the appropriate side of 0.5, is a proper scoring rule but not a strictly proper scoring rule because it is optimized (in expectation) not only by predicting the true probability but by predicting any probability on the same side of 0.5 as the true probability. 
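The contrast in the last sentence can be made concrete with a small numerical sketch (the true probability 0.7 and the grid are illustrative choices): the expected Brier score has a unique minimum at the true probability, while the expected misclassification rate of the 0.5-threshold rule is identical for every prediction on the correct side of 0.5.

```python
import numpy as np

def expected_brier(p_true, q):
    """Expected Brier score of forecasting q when the event occurs
    with true probability p_true."""
    return p_true * (1 - q) ** 2 + (1 - p_true) * q ** 2

def expected_error_rate(p_true, q):
    """Expected misclassification rate of the 0.5-threshold rule."""
    return (1 - p_true) if q > 0.5 else p_true

p = 0.7                                   # illustrative true probability
qs = np.linspace(0.01, 0.99, 99)
best_q = qs[np.argmin([expected_brier(p, q) for q in qs])]   # minimiser: q = p
rates = [expected_error_rate(p, q) for q in (0.55, 0.7, 0.99)]  # all identical
```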
== Characteristics == === Affine transformation === A strictly proper scoring rule, whether binary or multiclass, after an affine transformation remains a strictly proper scoring rule. That is, if S ( r , i ) {\displaystyle S(\mathbf {r} ,i)} is a strictly proper scoring rule then a + b S ( r , i ) {\displaystyle a+bS(\mathbf {r} ,i)} with b ≠ 0 {\displaystyle b\neq 0} is also a strictly proper scoring rule, though if b < 0 {\displaystyle b<0} then the optimization sense of the scoring rule switches between maximization and minimization. === Locality === A proper scoring rule is said to be local if its estimate for the probability of a specific event depends only on the probability of that event. This statement is vague in most descriptions, but in most cases it can be understood to mean that the optimal solution of the scoring problem "at a specific event" is invariant to all changes in the observation distribution that leave the probability of that event unchanged. All binary scores are local, because the probability assigned to the event that did not occur is already determined, so there is no degree of freedom left to vary. Affine functions of the logarithmic scoring rule are the only strictly proper local scoring rules on a finite set that is not binary. === Decomposition === The expectation value of a proper scoring rule S {\displaystyle S} can be decomposed into the sum of three components, called uncertainty, reliability, and resolution, which characterize different attributes of probabilistic forecasts: E ( S ) = U N C + R E L − R E S . {\displaystyle E(S)=\mathrm {UNC} +\mathrm {REL} -\mathrm {RES} .} If a score is proper and negatively oriented (such as the Brier Score), all three terms are nonnegative. The uncertainty component is equal to the expected score of the forecast which constantly predicts the average event frequency.
The reliability component penalizes poorly calibrated forecasts, in which the predicted probabilities do not coincide with the event frequencies. The equations for the individual components depend on the particular scoring rule. For the Brier Score, they are given by U N C = x ¯ ( 1 − x ¯ ) {\displaystyle \mathrm {UNC} ={\bar {x}}(1-{\bar {x}})} R E L = E ( p − π ( p ) ) 2 {\displaystyle \mathrm {REL} =E(p-\pi (p))^{2}} R E S = E ( π ( p ) − x ¯ ) 2 {\displaystyle \mathrm {RES} =E(\pi (p)-{\bar {x}})^{2}} where x ¯ {\displaystyle {\bar {x}}} is the average probability of occurrence of the binary event x {\displaystyle x} , and π ( p ) {\displaystyle \pi (p)} is the conditional event probability, given p {\displaystyle p} , i.e. π ( p ) = P ( x = 1 ∣ p ) {\displaystyle \pi (p)=P(x=1\mid p)} == See also == Coherence Decision rule == Literature == Strictly Proper Scoring Rules, Prediction, and Estimation. Tilmann Gneiting & Adrian E. Raftery. Pages 359–378. https://doi.org/10.1198/016214506000001437 == References == == External links == Video comparing spherical, quadratic and logarithmic scoring rules Local Proper Scoring Rules Scoring Rules and Decision Analysis Education Strictly Proper Scoring Rules Scoring Rules and uncertainty Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules Closed-form expressions of the continuous ranked probability score
Wikipedia/Scoring_function
A graphical model or probabilistic graphical model (PGM) or structured probabilistic model is a probabilistic model for which a graph expresses the conditional dependence structure between random variables. Graphical models are commonly used in probability theory, statistics—particularly Bayesian statistics—and machine learning. == Types of graphical models == Generally, probabilistic graphical models use a graph-based representation as the foundation for encoding a distribution over a multi-dimensional space and a graph that is a compact or factorized representation of a set of independences that hold in the specific distribution. Two branches of graphical representations of distributions are commonly used, namely, Bayesian networks and Markov random fields. Both families encompass the properties of factorization and independences, but they differ in the set of independences they can encode and the factorization of the distribution that they induce. === Undirected Graphical Model === The undirected graph shown may have one of several interpretations; the common feature is that the presence of an edge implies some sort of dependence between the corresponding random variables. From this graph, we might deduce that B, C, and D are all conditionally independent given A. This means that if the value of A is known, then the values of B, C, and D provide no further information about each other. Equivalently (in this case), the joint probability distribution can be factorized as: P [ A , B , C , D ] = f A B [ A , B ] ⋅ f A C [ A , C ] ⋅ f A D [ A , D ] {\displaystyle P[A,B,C,D]=f_{AB}[A,B]\cdot f_{AC}[A,C]\cdot f_{AD}[A,D]} for some non-negative functions f A B , f A C , f A D {\displaystyle f_{AB},f_{AC},f_{AD}} . === Bayesian network === If the network structure of the model is a directed acyclic graph, the model represents a factorization of the joint probability of all random variables. 
More precisely, if the events are X 1 , … , X n {\displaystyle X_{1},\ldots ,X_{n}} then the joint probability satisfies P [ X 1 , … , X n ] = ∏ i = 1 n P [ X i | pa ( X i ) ] {\displaystyle P[X_{1},\ldots ,X_{n}]=\prod _{i=1}^{n}P[X_{i}|{\text{pa}}(X_{i})]} where pa ( X i ) {\displaystyle {\text{pa}}(X_{i})} is the set of parents of node X i {\displaystyle X_{i}} (nodes with edges directed towards X i {\displaystyle X_{i}} ). In other words, the joint distribution factors into a product of conditional distributions. For example, in the directed acyclic graph shown in the Figure this factorization would be P [ A , B , C , D ] = P [ A ] ⋅ P [ B | A ] ⋅ P [ C | A ] ⋅ P [ D | A , C ] {\displaystyle P[A,B,C,D]=P[A]\cdot P[B|A]\cdot P[C|A]\cdot P[D|A,C]} . Any two nodes are conditionally independent given the values of their parents. In general, any two sets of nodes are conditionally independent given a third set if a criterion called d-separation holds in the graph. Local independences and global independences are equivalent in Bayesian networks. This type of graphical model is known as a directed graphical model, Bayesian network, or belief network. Classic machine learning models like hidden Markov models, neural networks and newer models such as variable-order Markov models can be considered special cases of Bayesian networks. One of the simplest Bayesian Networks is the Naive Bayes classifier. === Cyclic Directed Graphical Models === The next figure depicts a graphical model with a cycle. This may be interpreted in terms of each variable 'depending' on the values of its parents in some manner. The particular graph shown suggests a joint probability density that factors as P [ A , B , C , D ] = P [ A ] ⋅ P [ B ] ⋅ P [ C , D | A , B ] {\displaystyle P[A,B,C,D]=P[A]\cdot P[B]\cdot P[C,D|A,B]} , but other interpretations are possible. 
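The DAG factorization P[A,B,C,D] = P[A]·P[B|A]·P[C|A]·P[D|A,C] can be made concrete with a small sketch; the conditional probability tables below are invented for illustration:

```python
import itertools

# Hypothetical CPTs for the DAG with edges A->B, A->C, A->D, C->D
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}   # P_B_given_A[a][b]
P_C_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}
P_D_given_AC = {(0, 0): {0: 0.8, 1: 0.2}, (0, 1): {0: 0.6, 1: 0.4},
                (1, 0): {0: 0.3, 1: 0.7}, (1, 1): {0: 0.1, 1: 0.9}}

def joint(a, b, c, d):
    """P[A,B,C,D] = P[A] P[B|A] P[C|A] P[D|A,C] for the example DAG."""
    return (P_A[a] * P_B_given_A[a][b] * P_C_given_A[a][c]
            * P_D_given_AC[(a, c)][d])

# The product of conditional distributions is itself a distribution:
total = sum(joint(*v) for v in itertools.product([0, 1], repeat=4))
```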
=== Other types === Dependency network where cycles are allowed Tree-augmented classifier or TAN model Targeted Bayesian network learning (TBNL) A factor graph is an undirected bipartite graph connecting variables and factors. Each factor represents a function over the variables it is connected to. This is a helpful representation for understanding and implementing belief propagation. A clique tree or junction tree is a tree of cliques, used in the junction tree algorithm. A chain graph is a graph which may have both directed and undirected edges, but without any directed cycles (i.e. if we start at any vertex and move along the graph respecting the directions of any arrows, we cannot return to the vertex we started from if we have passed an arrow). Both directed acyclic graphs and undirected graphs are special cases of chain graphs, which can therefore provide a way of unifying and generalizing Bayesian and Markov networks. An ancestral graph is a further extension, having directed, bidirected and undirected edges. Random field techniques A Markov random field, also known as a Markov network, is a model over an undirected graph. A graphical model with many repeated subunits can be represented with plate notation. A conditional random field is a discriminative model specified over an undirected graph. A restricted Boltzmann machine is a bipartite generative model specified over an undirected graph. == Applications == Graphical models provide algorithms for discovering and analyzing structure in complex distributions, allowing such distributions to be described succinctly and the relevant information to be extracted and used effectively. Applications of graphical models include causal inference, information extraction, speech recognition, computer vision, decoding of low-density parity-check codes, modeling of gene regulatory networks, gene finding and diagnosis of diseases, and graphical models for protein structure.
== See also == Belief propagation Structural equation model == Notes == == Further reading == === Books and book chapters === Barber, David (2012). Bayesian Reasoning and Machine Learning. Cambridge University Press. ISBN 978-0-521-51814-7. Bishop, Christopher M. (2006). "Chapter 8. Graphical Models" (PDF). Pattern Recognition and Machine Learning. Springer. pp. 359–422. ISBN 978-0-387-31073-2. MR 2247587. Cowell, Robert G.; Dawid, A. Philip; Lauritzen, Steffen L.; Spiegelhalter, David J. (1999). Probabilistic networks and expert systems. Berlin: Springer. ISBN 978-0-387-98767-5. MR 1697175. A more advanced and statistically oriented book Jensen, Finn (1996). An introduction to Bayesian networks. Berlin: Springer. ISBN 978-0-387-91502-9. Pearl, Judea (1988). Probabilistic Reasoning in Intelligent Systems (2nd revised ed.). San Mateo, CA: Morgan Kaufmann. ISBN 978-1-55860-479-7. MR 0965765. A computational reasoning approach, where the relationships between graphs and probabilities were formally introduced. === Journal articles === Edoardo M. Airoldi (2007). "Getting Started in Probabilistic Graphical Models". PLOS Computational Biology. 3 (12): e252. arXiv:0706.2040. Bibcode:2007PLSCB...3..252A. doi:10.1371/journal.pcbi.0030252. PMC 2134967. PMID 18069887. Jordan, M. I. (2004). "Graphical Models". Statistical Science. 19: 140–155. doi:10.1214/088342304000000026. Ghahramani, Zoubin (May 2015). "Probabilistic machine learning and artificial intelligence". Nature. 521 (7553): 452–459. Bibcode:2015Natur.521..452G. doi:10.1038/nature14541. PMID 26017444. S2CID 216356. === Other === Heckerman's Bayes Net Learning Tutorial A Brief Introduction to Graphical Models and Bayesian Networks Sargur Srihari's lecture slides on probabilistic graphical models == External links == Graphical models and Conditional Random Fields Probabilistic Graphical Models taught by Eric Xing at CMU
Wikipedia/Probabilistic_graphical_model
In statistics, econometrics, epidemiology, genetics and related disciplines, causal graphs (also known as path diagrams, causal Bayesian networks or DAGs) are probabilistic graphical models used to encode assumptions about the data-generating process. Causal graphs can be used for communication and for inference. They are complementary to other forms of causal reasoning, for instance using causal equality notation. As communication devices, the graphs provide formal and transparent representation of the causal assumptions that researchers may wish to convey and defend. As inference tools, the graphs enable researchers to estimate effect sizes from non-experimental data, derive testable implications of the assumptions encoded, test for external validity, and manage missing data and selection bias. Causal graphs were first used by the geneticist Sewall Wright under the rubric "path diagrams". They were later adopted by social scientists and, to a lesser extent, by economists. These models were initially confined to linear equations with fixed parameters. Modern developments have extended graphical models to non-parametric analysis, and thus achieved a generality and flexibility that has transformed causal analysis in computer science, epidemiology, and social science. Recent advances include the development of large-scale causality graphs, such as CauseNet, which compiles over 11 million causal relations extracted from web sources to support causal question answering and reasoning. == Construction and terminology == The causal graph can be drawn in the following way. Each variable in the model has a corresponding vertex or node and an arrow is drawn from a variable X to a variable Y whenever Y is judged to respond to changes in X when all other variables are being held constant. Variables connected to Y through direct arrows are called parents of Y, or "direct causes of Y," and are denoted by Pa(Y). 
Causal models often include "error terms" or "omitted factors" which represent all unmeasured factors that influence a variable Y when Pa(Y) are held constant. In most cases, error terms are excluded from the graph. However, if the graph author suspects that the error terms of any two variables are dependent (e.g. the two variables have an unobserved or latent common cause) then a bidirected arc is drawn between them. Thus, the presence of latent variables is taken into account through the correlations they induce between the error terms, as represented by bidirected arcs. == Fundamental tools == A fundamental tool in graphical analysis is d-separation, which allows researchers to determine, by inspection, whether the causal structure implies that two sets of variables are independent given a third set. In recursive models without correlated error terms (sometimes called Markovian), these conditional independences represent all of the model's testable implications. == Example == Suppose we wish to estimate the effect of attending an elite college on future earnings. Simply regressing earnings on college rating will not give an unbiased estimate of the target effect because elite colleges are highly selective, and students attending them are likely to have qualifications for high-earning jobs prior to attending the school. Assuming that the causal relationships are linear, this background knowledge can be expressed in the following structural equation model (SEM) specification. 
Model 1 Q 1 = U 1 C = a ⋅ Q 1 + U 2 Q 2 = c ⋅ C + d ⋅ Q 1 + U 3 S = b ⋅ C + e ⋅ Q 2 + U 4 , {\displaystyle {\begin{aligned}Q_{1}&=U_{1}\\C&=a\cdot Q_{1}+U_{2}\\Q_{2}&=c\cdot C+d\cdot Q_{1}+U_{3}\\S&=b\cdot C+e\cdot Q_{2}+U_{4},\end{aligned}}} where Q 1 {\displaystyle Q_{1}} represents the individual's qualifications prior to college, Q 2 {\displaystyle Q_{2}} represents qualifications after college, C {\displaystyle C} contains attributes representing the quality of the college attended, and S {\displaystyle S} the individual's salary. Figure 1 is a causal graph that represents this model specification. Each variable in the model has a corresponding node or vertex in the graph. Additionally, for each equation, arrows are drawn from the independent variables to the dependent variables. These arrows reflect the direction of causation. In some cases, we may label the arrow with its corresponding structural coefficient as in Figure 1. If Q 1 {\displaystyle Q_{1}} and Q 2 {\displaystyle Q_{2}} are unobserved or latent variables, their influence on C {\displaystyle C} and S {\displaystyle S} can be attributed to their error terms. By removing them, we obtain the following model specification: Model 2 C = U C S = β C + U S {\displaystyle {\begin{aligned}C&=U_{C}\\S&=\beta C+U_{S}\end{aligned}}} The background information specified by Model 1 implies that the error term of S {\displaystyle S} , U S {\displaystyle U_{S}} , is correlated with C's error term, U C {\displaystyle U_{C}} . As a result, we add a bidirected arc between S and C, as in Figure 2. Since U S {\displaystyle U_{S}} is correlated with U C {\displaystyle U_{C}} and, therefore, C {\displaystyle C} , C {\displaystyle C} is endogenous and β {\displaystyle \beta } is not identified in Model 2.
However, if we include the strength of an individual's college application, A {\displaystyle A} , as shown in Figure 3, we obtain the following model: Model 3 Q 1 = U 1 A = a ⋅ Q 1 + U 2 C = b ⋅ A + U 3 Q 2 = e ⋅ Q 1 + d ⋅ C + U 4 S = c ⋅ C + f ⋅ Q 2 + U 5 , {\displaystyle {\begin{aligned}Q_{1}&=U_{1}\\A&=a\cdot Q_{1}+U_{2}\\C&=b\cdot A+U_{3}\\Q_{2}&=e\cdot Q_{1}+d\cdot C+U_{4}\\S&=c\cdot C+f\cdot Q_{2}+U_{5},\end{aligned}}} By removing the latent variables from the model specification we obtain: Model 4 A = a ⋅ Q 1 + U A C = b ⋅ A + U C S = β ⋅ C + U S , {\displaystyle {\begin{aligned}A&=a\cdot Q_{1}+U_{A}\\C&=b\cdot A+U_{C}\\S&=\beta \cdot C+U_{S},\end{aligned}}} with U A {\displaystyle U_{A}} correlated with U S {\displaystyle U_{S}} . Now, β {\displaystyle \beta } is identified and can be estimated using the regression of S {\displaystyle S} on C {\displaystyle C} and A {\displaystyle A} . This can be verified using the single-door criterion, a necessary and sufficient graphical condition for the identification of structural coefficients such as β {\displaystyle \beta } using regression. == References ==
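The identification claim can be checked by simulation. The sketch below (coefficients and standard-normal error terms chosen arbitrarily for illustration) generates data from Model 3 and compares the naive regression of S on C alone with the single-door adjustment for A; the adjusted coefficient recovers the total effect β = c + f·d, while the naive one is biased by the open path through Q1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
a, b, c, d, e, f = 1.0, 0.8, 0.5, 0.6, 0.7, 0.9   # hypothetical coefficients

# Simulate Model 3 with standard-normal error terms
Q1 = rng.normal(size=n)
A = a * Q1 + rng.normal(size=n)
C = b * A + rng.normal(size=n)
Q2 = e * Q1 + d * C + rng.normal(size=n)
S = c * C + f * Q2 + rng.normal(size=n)

beta_true = c + f * d            # total effect of C on S

# Naive regression of S on C alone: biased by C <- A <- Q1 -> Q2 -> S
X_naive = np.column_stack([C, np.ones(n)])
beta_naive = np.linalg.lstsq(X_naive, S, rcond=None)[0][0]

# Adjusting for A (single-door) blocks that path and identifies beta
X_adj = np.column_stack([C, A, np.ones(n)])
beta_adj = np.linalg.lstsq(X_adj, S, rcond=None)[0][0]
```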
Wikipedia/Causal_graph
Feedforward refers to the recognition-inference architecture of neural networks. Artificial neural network architectures are based on inputs multiplied by weights to obtain outputs (inputs-to-output): feedforward. Recurrent neural networks, or neural networks with loops, allow information from later processing stages to feed back to earlier stages for sequence processing. However, at every stage of inference a feedforward multiplication remains the core, essential for backpropagation or backpropagation through time. Thus neural networks cannot contain feedback like negative feedback or positive feedback where the outputs feed back to the very same inputs and modify them, because this forms an infinite loop which is not possible to rewind in time to generate an error signal through backpropagation. This issue and nomenclature appear to be a point of confusion between some computer scientists and scientists in other fields studying brain networks. == Mathematical foundations == === Activation function === The two historically common activation functions are both sigmoids, and are described by y ( v i ) = tanh ⁡ ( v i ) and y ( v i ) = ( 1 + e − v i ) − 1 {\displaystyle y(v_{i})=\tanh(v_{i})~~{\textrm {and}}~~y(v_{i})=(1+e^{-v_{i}})^{-1}} . The first is a hyperbolic tangent that ranges from -1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here y i {\displaystyle y_{i}} is the output of the i {\displaystyle i} th node (neuron) and v i {\displaystyle v_{i}} is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models).
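A minimal sketch of the two sigmoids, including the affine relationship between them (tanh(v) = 2·logistic(2v) − 1), which is why they are "similar in shape":

```python
import math

def logistic(v):
    """Logistic sigmoid, range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def tanh_act(v):
    """Hyperbolic tangent, range (-1, 1)."""
    return math.tanh(v)

# The two sigmoids are affinely related: tanh(v) = 2 * logistic(2v) - 1
```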
In recent developments of deep learning the rectified linear unit (ReLU) is more frequently used as one of the possible ways to overcome the numerical problems related to the sigmoids. === Learning === Learning occurs by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation. We can represent the degree of error in an output node j {\displaystyle j} in the n {\displaystyle n} th data point (training example) by e j ( n ) = d j ( n ) − y j ( n ) {\displaystyle e_{j}(n)=d_{j}(n)-y_{j}(n)} , where d j ( n ) {\displaystyle d_{j}(n)} is the desired target value for n {\displaystyle n} th data point at node j {\displaystyle j} , and y j ( n ) {\displaystyle y_{j}(n)} is the value produced at node j {\displaystyle j} when the n {\displaystyle n} th data point is given as an input. The node weights can then be adjusted based on corrections that minimize the error in the entire output for the n {\displaystyle n} th data point, given by E ( n ) = 1 2 ∑ output node j e j 2 ( n ) {\displaystyle {\mathcal {E}}(n)={\frac {1}{2}}\sum _{{\text{output node }}j}e_{j}^{2}(n)} . Using gradient descent, the change in each weight w i j {\displaystyle w_{ij}} is Δ w j i ( n ) = − η ∂ E ( n ) ∂ v j ( n ) y i ( n ) {\displaystyle \Delta w_{ji}(n)=-\eta {\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}y_{i}(n)} where y i ( n ) {\displaystyle y_{i}(n)} is the output of the previous neuron i {\displaystyle i} , and η {\displaystyle \eta } is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. 
In the previous expression, ∂ E ( n ) ∂ v j ( n ) {\displaystyle {\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}} denotes the partial derivative of the error E ( n ) {\displaystyle {\mathcal {E}}(n)} with respect to the weighted sum v j ( n ) {\displaystyle v_{j}(n)} of the input connections of neuron j {\displaystyle j} . The derivative to be calculated depends on the induced local field v j {\displaystyle v_{j}} , which itself varies. It is easy to prove that for an output node this derivative can be simplified to − ∂ E ( n ) ∂ v j ( n ) = e j ( n ) ϕ ′ ( v j ( n ) ) {\displaystyle -{\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}=e_{j}(n)\phi ^{\prime }(v_{j}(n))} where ϕ ′ {\displaystyle \phi ^{\prime }} is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is − ∂ E ( n ) ∂ v j ( n ) = ϕ ′ ( v j ( n ) ) ∑ k − ∂ E ( n ) ∂ v k ( n ) w k j ( n ) {\displaystyle -{\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}=\phi ^{\prime }(v_{j}(n))\sum _{k}-{\frac {\partial {\mathcal {E}}(n)}{\partial v_{k}(n)}}w_{kj}(n)} . This depends on the change in weights of the k {\displaystyle k} th nodes, which represent the output layer. So to change the hidden layer weights, the output layer weights change according to the derivative of the activation function, and so this algorithm represents a backpropagation of the activation function. == History == === Timeline === Circa 1800, Legendre (1805) and Gauss (1795) created the simplest feedforward network which consists of a single weight layer with linear activation functions. It was trained by the least squares method for minimising mean squared error, also known as linear regression. Legendre and Gauss used it for the prediction of planetary movement from training data.
In 1943, Warren McCulloch and Walter Pitts proposed the binary artificial neuron as a logical model of biological neural networks. In 1958, Frank Rosenblatt proposed the multilayered perceptron model, consisting of an input layer, a hidden layer with randomized weights that did not learn, and an output layer with learnable connections. R. D. Joseph (1960) mentions an even earlier perceptron-like device: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject." In 1960, Joseph also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962, section 16) cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning. In 1965, Alexey Grigorevich Ivakhnenko and Valentin Lapa published the Group Method of Data Handling, the first working deep learning algorithm, a method to train arbitrarily deep neural networks. It is based on layer-by-layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates." It was used to train an eight-layer neural net in 1971. In 1967, Shun'ichi Amari reported the first multilayered neural network trained by stochastic gradient descent, which was able to classify non-linearly separable pattern classes. Amari's student Saito conducted the computer experiments, using a five-layered feedforward network with two learning layers. In 1970, Seppo Linnainmaa published the modern form of backpropagation in his master's thesis (1970). G.M. Ostrovski et al. republished it in 1971.
Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work. In 2003, interest in backpropagation networks returned due to the successes of deep learning being applied to language modelling by Yoshua Bengio with co-authors. === Linear regression === === Perceptron === If using a threshold, i.e. a linear threshold activation function, the resulting linear threshold unit is called a perceptron. (Often the term is used to denote just one of these units.) Multiple parallel non-linear units are able to approximate any continuous function from a compact interval of the real numbers into the interval [−1,1], despite the limited computational power of a single unit with a linear threshold function. Perceptrons can be trained by a simple learning algorithm that is usually called the delta rule. It calculates the errors between calculated output and sample output data, and uses this to create an adjustment to the weights, thus implementing a form of gradient descent. === Multilayer perceptron === A multilayer perceptron (MLP) is a misnomer for a modern feedforward artificial neural network, consisting of fully connected neurons (hence the synonym sometimes used of fully connected network (FCN)), often with a nonlinear kind of activation function, organized in at least three layers, notable for being able to distinguish data that is not linearly separable. == Other feedforward networks == Examples of other feedforward networks include convolutional neural networks and radial basis function networks, which use a different activation function. == See also == Hopfield network Feed-forward Backpropagation Rprop == References == == External links == Feedforward neural networks tutorial Feedforward Neural Network: Example Feedforward Neural Networks: An Introduction
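The delta rule described in the Perceptron section can be sketched on a small linearly separable problem, here the OR function (the learning rate and epoch count are arbitrary illustrative choices):

```python
# Perceptron trained with the delta rule on the (linearly separable) OR function.
def predict(w, x):
    """Linear threshold unit; w[0] is the bias weight."""
    s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1 if s >= 0 else 0

def train(samples, eta=0.1, epochs=20):
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(w, x)        # delta rule error term
            w[0] += eta * error                   # bias update
            for i, xi in enumerate(x):
                w[i + 1] += eta * error * xi      # weight update
    return w

OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train(OR)
```

By the perceptron convergence theorem, the weights settle on a separating threshold after finitely many updates on separable data such as this.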
Wikipedia/Feed-forward_network
"A Logical Calculus of the Ideas Immanent to Nervous Activity" is a 1943 article written by Warren McCulloch and Walter Pitts. The paper, published in the journal The Bulletin of Mathematical Biophysics, proposed a mathematical model of the nervous system as a network of simple logical elements, later known as artificial neurons, or McCulloch-Pitts neurons. These neurons receive inputs, perform a weighted sum, and fire an output signal based on a threshold function. By connecting these units in various configurations, McCulloch and Pitts demonstrated that their model could perform all logical functions. It is a seminal work in cognitive science, computational neuroscience, computer science, and artificial intelligence. It was a foundational result in automata theory. John von Neumann cited it as a significant result. == Mathematics == The artificial neuron used in the original paper is slightly different from the modern version. They considered neural networks that operate in discrete steps of time t = 0 , 1 , … {\displaystyle t=0,1,\dots } . The neural network contains a number of neurons. Let the state of a neuron i {\displaystyle i} at time t {\displaystyle t} be N i ( t ) {\displaystyle N_{i}(t)} . The state of a neuron can either be 0 or 1, standing for "not firing" and "firing". Each neuron also has a firing threshold θ {\displaystyle \theta } , such that it fires if the total input exceeds the threshold. Each neuron can connect to any other neuron (including itself) with positive synapses (excitatory) or negative synapses (inhibitory). That is, each neuron can connect to another neuron with a weight w {\displaystyle w} taking an integer value. A peripheral afferent is a neuron with no incoming synapses. We can regard each neural network as a directed graph, with the nodes being the neurons, and the directed edges being the synapses. A neural network has a circle or a circuit if there exists a directed circle in the graph. 
Let w i j ( t ) {\displaystyle w_{ij}(t)} be the connection weight from neuron j {\displaystyle j} to neuron i {\displaystyle i} at time t {\displaystyle t} , then its next state is N i ( t + 1 ) = H ( ∑ j = 1 n w i j ( t ) N j ( t ) − θ i ( t ) ) , {\displaystyle N_{i}(t+1)=H\left(\sum _{j=1}^{n}w_{ij}(t)N_{j}(t)-\theta _{i}(t)\right),} where H {\displaystyle H} is the Heaviside step function (outputting 1 if the input is greater than or equal to 0, and 0 otherwise). === Symbolic logic === The paper used, as a logical language for describing neural networks, "Language II" from The Logical Syntax of Language by Rudolf Carnap, with some notations taken from Principia Mathematica by Alfred North Whitehead and Bertrand Russell. Language II covers substantial parts of classical mathematics, including real analysis and portions of set theory. To describe a neural network with peripheral afferents N 1 , N 2 , … , N p {\displaystyle N_{1},N_{2},\dots ,N_{p}} and non-peripheral afferents N p + 1 , N p + 2 , … , N n {\displaystyle N_{p+1},N_{p+2},\dots ,N_{n}} , they considered a logical predicate of the form P r ( N 1 , N 2 , … , N p , t ) {\displaystyle Pr(N_{1},N_{2},\dots ,N_{p},t)} where P r {\displaystyle Pr} is a first-order logic predicate function (a function that outputs a boolean), N 1 , … , N p {\displaystyle N_{1},\dots ,N_{p}} are predicates that take t {\displaystyle t} as an argument, and t {\displaystyle t} is the only free variable in the predicate. Intuitively speaking, N 1 , … , N p {\displaystyle N_{1},\dots ,N_{p}} specifies the binary input patterns going into the neural network over all time, and P r ( N 1 , N 2 , … , N p , t ) {\displaystyle Pr(N_{1},N_{2},\dots ,N_{p},t)} is a function that takes some binary input patterns and constructs an output binary pattern P r ( N 1 , N 2 , … , N p , 0 ) , P r ( N 1 , N 2 , … , N p , 1 ) , … {\displaystyle Pr(N_{1},N_{2},\dots ,N_{p},0),Pr(N_{1},N_{2},\dots ,N_{p},1),\dots } .
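The update rule above is easy to simulate. In this sketch (our own toy network, not an example from the paper), neuron 1 excites itself, so a single input pulse latches it on, illustrating how a net with a circle can retain a trace of a past event.

```python
def step(states, weights, thresholds):
    """Synchronous update: N_i(t+1) = H(sum_j w[i][j] * N_j(t) - theta_i)."""
    return [
        1 if sum(w * s for w, s in zip(row, states)) - th >= 0 else 0
        for row, th in zip(weights, thresholds)
    ]

# Neuron 0 is a peripheral afferent we clamp to the input; neuron 1 receives
# it and excites itself, so one pulse latches neuron 1 on indefinitely.
# The weights and thresholds are illustrative, not taken from the paper.
weights = [[0, 0],      # neuron 0: no incoming synapses
           [1, 1]]      # neuron 1: from neuron 0 and from itself
thresholds = [1, 1]

states = [0, 0]
trace = []
for inp in [0, 1, 0, 0, 0]:
    states = [inp, states[1]]           # clamp the peripheral afferent
    states = step(states, weights, thresholds)
    trace.append(states[1])             # a single pulse is remembered
```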
A logical sentence P r ( N 1 , N 2 , … , N p , t ) {\displaystyle Pr(N_{1},N_{2},\dots ,N_{p},t)} is realized by a neural network iff there exists a time-delay T ≥ 0 {\displaystyle T\geq 0} , a neuron i {\displaystyle i} in the network, and an initial state for the non-peripheral neurons N p + 1 ( 0 ) , … , N n ( 0 ) {\displaystyle N_{p+1}(0),\dots ,N_{n}(0)} , such that for any time t {\displaystyle t} , the truth-value of the logical sentence is equal to the state of the neuron i {\displaystyle i} at time t + T {\displaystyle t+T} . That is, ∀ t = 0 , 1 , 2 , … , P r ( N 1 , N 2 , … , N p , t ) = N i ( t + T ) {\displaystyle \forall t=0,1,2,\dots ,\quad Pr(N_{1},N_{2},\dots ,N_{p},t)=N_{i}(t+T)} === Equivalence === In the paper, they considered some alternative definitions of artificial neural networks and showed them to be equivalent: neural networks under one definition realize precisely the same logical sentences as neural networks under another definition. They considered three forms of inhibition: relative inhibition, absolute inhibition, and extinction. The definition above is relative inhibition. By "absolute inhibition" they meant that if any negative synapse fires, then the neuron will not fire. By "extinction" they meant that if at time t {\displaystyle t} , any inhibitory synapse fires on a neuron i {\displaystyle i} , then θ i ( t + j ) = θ i ( 0 ) + b j {\displaystyle \theta _{i}(t+j)=\theta _{i}(0)+b_{j}} for j = 1 , 2 , 3 , … {\displaystyle j=1,2,3,\dots } , until the next time an inhibitory synapse fires on i {\displaystyle i} . It is required that b j = 0 {\displaystyle b_{j}=0} for all large j {\displaystyle j} . Theorems 4 and 5 state that these are equivalent. They considered three forms of excitation: spatial summation, temporal summation, and facilitation. The definition above is spatial summation (which they pictured as having multiple synapses placed close together, so that the effect of their firing sums up).
By "temporal summation" they meant that the total incoming signal is ∑ τ = 0 T ∑ j = 1 n w i j ( t ) N j ( t − τ ) {\displaystyle \sum _{\tau =0}^{T}\sum _{j=1}^{n}w_{ij}(t)N_{j}(t-\tau )} for some T ≥ 1 {\displaystyle T\geq 1} . By "facilitation" they meant the same as extinction, except that b j ≤ 0 {\displaystyle b_{j}\leq 0} . Theorem 6 states that these are equivalent. They considered neural networks that do not change, and those that change by Hebbian learning. That is, they assume that at t = 0 {\displaystyle t=0} , some excitatory synaptic connections are not active. If at any t {\displaystyle t} , both N i ( t ) = 1 , N j ( t ) = 1 {\displaystyle N_{i}(t)=1,N_{j}(t)=1} , then any latent excitatory synapse between i , j {\displaystyle i,j} becomes active. Theorem 7 states that these are equivalent. === Logical expressivity === They considered "temporal propositional expressions" (TPE), which are propositional formulas with one free variable t {\displaystyle t} . For example, N 1 ( t ) ∨ N 2 ( t ) ∧ ¬ N 3 ( t ) {\displaystyle N_{1}(t)\vee N_{2}(t)\wedge \neg N_{3}(t)} is such an expression. Theorem 1 and 2 together showed that neural nets without circles are equivalent to TPE. For neural nets with loops, they noted that "realizable P r {\displaystyle Pr} may involve reference to past events of an indefinite degree of remoteness". These then encodes for sentences like "There was some x such that x was a ψ" or ( ∃ x ) ( ψ x ) {\displaystyle (\exists x)(\psi x)} . Theorems 8 to 10 showed that neural nets with loops can encode all first-order logic with equality and conversely, any looped neural networks is equivalent to a sentence in first-order logic with equality, thus showing that they are equivalent in logical expressiveness. As a remark, they noted that a neural network, if furnished with a tape, scanners, and write-heads, is equivalent to a Turing machine, and conversely, every Turing machine is equivalent to some such neural network. 
Thus, these neural networks are equivalent to Turing computability, Church's lambda-definability, and Kleene's primitive recursiveness. == Context == === Previous work === The paper built upon several previous strands of work. On the symbolic logic side, it built on the previous work by Carnap, Whitehead, and Russell. This side was contributed by Walter Pitts, who had a strong proficiency with symbolic logic. Pitts provided mathematical and logical rigor to McCulloch’s vague ideas on psychons (atoms of psychological events) and circular causality. On the neuroscience side, it built on previous work by the mathematical biology research group centered around Nicolas Rashevsky, of which McCulloch was a member. The paper was published in the Bulletin of Mathematical Biophysics, which Rashevsky founded in 1939. During the late 1930s, Rashevsky's research group was producing papers that were difficult to publish in other journals at the time, so Rashevsky decided to found a new journal exclusively devoted to mathematical biophysics. Also in Rashevsky's group was Alston Scott Householder, who in 1941 published an abstract model of the steady-state activity of biological neural networks. The model, in modern language, is an artificial neural network with a ReLU activation function. In a series of papers, Householder calculated the stable states of very simple networks: a chain, a circle, and a bouquet. Walter Pitts' first two papers formulated a mathematical theory of learning and conditioning. The next three were mathematical developments of Householder’s model. In 1938, at age 15, Pitts ran away from home in Detroit and arrived at the University of Chicago. Later, he walked into Rudolf Carnap's office with Carnap's book filled with corrections and suggested improvements. He started studying under Carnap and attending classes during 1938--1943.
He wrote several early papers on neuronal network modelling and regularly attended Rashevsky's seminars in theoretical biology. The seminar attendants included Gerhard von Bonin and Householder. In 1940, von Bonin introduced Lettvin to McCulloch. In 1942, both Lettvin and Pitts had moved into McCulloch's home. McCulloch had been interested in circular causality from studies of causalgia after amputation, the epileptic activity of the surgically isolated brain, and Lorente de Nó's research showing that recurrent neural networks are needed to explain vestibular nystagmus. He had difficulty with treating circular causality until Pitts demonstrated how it can be treated by the appropriate mathematical tools of modular arithmetic and symbolic logic. Both authors' affiliation in the article was given as "University of Illinois, College of Medicine, Department of Psychiatry at the Illinois Neuropsychiatric Institute, University of Chicago, Chicago, U.S.A." === Subsequent work === It was a foundational result in automata theory. John von Neumann cited it as a significant result. This work led to work on neural networks and their link to finite automata. Kleene introduced the term "regular" for "regular language" in a 1951 technical report, where he proved that regular languages are all that could be generated by neural networks, among other results. The term "regular" was meant to be suggestive of "regularly occurring events" that the neural net automaton must process and respond to. Marvin Minsky was influenced by McCulloch, built an early example of a neural network, SNARC (1951), and did a PhD thesis on neural networks (1954). McCulloch was the chair of the ten Macy conferences (1946--1953) on "Circular Causal and Feedback Mechanisms in Biological and Social Systems". This was a key event in the beginning of cybernetics, and of what later became known as cognitive science. Pitts also attended the conferences.
In the 1943 paper, they described how memories can be formed by a neural network with loops in it, or with alterable synapses, operating over time and implementing logical universals -- "there exists" and "for all". This was generalized for spatial objects, such as geometric figures, in their 1947 paper How we know universals. Norbert Wiener found this significant evidence for a general method by which animals recognize objects: scanning a scene through multiple transformations and finding a canonical representation. He hypothesized that this "scanning" activity is clocked by the alpha wave, which he mistakenly thought was tightly regulated at 10 Hz (instead of the 8--13 Hz that modern research shows). McCulloch worked with Manuel Blum in studying how a neural network can be "logically stable", that is, can implement a boolean function even if the activation thresholds of individual neurons are varied. They were inspired by the problem of how the brain can perform the same functions, such as breathing, under the influence of caffeine or alcohol, which shifts the activation threshold over the entire brain. == See also == Artificial neural network Perceptron Connectionism Principia Mathematica History of artificial neural networks == References ==
Wikipedia/A_Logical_Calculus_of_the_Ideas_Immanent_in_Nervous_Activity
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss. == Examples == === Regret === Leonard J. Savage argued that when using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the circumstances been known and the decision that was in fact taken before they were known. === Quadratic loss function === The use of a quadratic loss function is common, for example when using least squares techniques.
It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is λ ( x ) = C ( t − x ) 2 {\displaystyle \lambda (x)=C(t-x)^{2}\;} for some constant C; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the squared error loss (SEL). Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function. The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used. Because it squares the error, the quadratic loss gives outliers disproportionate importance, so alternatives such as the Huber, log-cosh, and SMAE losses are used when the data contain many large outliers. === 0-1 loss function === In statistics and decision theory, a frequently used loss function is the 0-1 loss function L ( y ^ , y ) = [ y ^ ≠ y ] {\displaystyle L({\hat {y}},y)=\left[{\hat {y}}\neq y\right]} using Iverson bracket notation, i.e. it evaluates to 1 when y ^ ≠ y {\displaystyle {\hat {y}}\neq y} , and 0 otherwise. == Constructing loss and objective functions == In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation.
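The quadratic and 0-1 losses above, together with the Huber loss as one outlier-robust alternative, can be written directly; the Huber threshold delta = 1.0 is an illustrative choice, not from the text.

```python
def quadratic_loss(t, x, C=1.0):
    """Squared error loss: C * (t - x)^2; C makes no difference to a decision."""
    return C * (t - x) ** 2

def zero_one_loss(y_hat, y):
    """Iverson bracket [y_hat != y]: 1 for a misclassification, else 0."""
    return 1 if y_hat != y else 0

def huber_loss(t, x, delta=1.0):
    """Quadratic near the target, linear in the tails, so outliers weigh less."""
    a = abs(t - x)
    return 0.5 * a ** 2 if a <= delta else delta * (a - 0.5 * delta)
```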
In other situations, the decision maker’s preference must be elicited and represented by a scalar-valued function (also called a utility function) in a form suitable for optimization — a problem that Ragnar Frisch highlighted in his Nobel Prize lecture. The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences. In particular, Andranik Tangian showed that the most usable objective functions — quadratic and additive — are determined by a few indifference points. He used this property in models for constructing these objective functions from either ordinal or cardinal data elicited through computer-assisted interviews with decision makers. Among other things, he constructed objective functions to optimally distribute budgets for 16 Westphalian universities and the European subsidies for equalizing unemployment rates among 271 German regions. == Expected loss == In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X. === Statistics === Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms. ==== Frequentist expected loss ==== We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the probability distribution, Pθ, of the observed data, X. This is also referred to as the risk function of the decision rule δ and the parameter θ. Here the decision rule depends on the outcome of X. The risk function is given by: R ( θ , δ ) = E θ ⁡ L ( θ , δ ( X ) ) = ∫ X L ( θ , δ ( x ) ) d P θ ( x ) .
{\displaystyle R(\theta ,\delta )=\operatorname {E} _{\theta }L{\big (}\theta ,\delta (X){\big )}=\int _{X}L{\big (}\theta ,\delta (x){\big )}\,\mathrm {d} P_{\theta }(x).} Here, θ is a fixed but possibly unknown state of nature, X is a vector of observations stochastically drawn from a population, E θ {\displaystyle \operatorname {E} _{\theta }} is the expectation over all population values of X, dPθ is a probability measure over the event space of X (parametrized by θ) and the integral is evaluated over the entire support of X. ==== Bayes Risk ==== In a Bayesian approach, the expectation is calculated using the prior distribution π* of the parameter θ: ρ ( π ∗ , a ) = ∫ Θ ∫ X L ( θ , a ( x ) ) d P ( x | θ ) d π ∗ ( θ ) = ∫ X ∫ Θ L ( θ , a ( x ) ) d π ∗ ( θ | x ) d M ( x ) {\displaystyle \rho (\pi ^{*},a)=\int _{\Theta }\int _{\mathbf {X}}L(\theta ,a({\mathbf {x}}))\,\mathrm {d} P({\mathbf {x}}\vert \theta )\,\mathrm {d} \pi ^{*}(\theta )=\int _{\mathbf {X}}\int _{\Theta }L(\theta ,a({\mathbf {x}}))\,\mathrm {d} \pi ^{*}(\theta \vert {\mathbf {x}})\,\mathrm {d} M({\mathbf {x}})} where m(x) is known as the predictive likelihood wherein θ has been "integrated out," π* (θ | x) is the posterior distribution, and the order of integration has been changed. One then should choose the action a* which minimises this expected loss, which is referred to as Bayes Risk. In the latter equation, the integrand inside dx is known as the Posterior Risk, and minimising it with respect to decision a also minimizes the overall Bayes Risk. This optimal decision, a* is known as the Bayes (decision) Rule - it minimises the average loss over all possible states of nature θ, over all possible (probability-weighted) data outcomes. 
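A small numerical illustration of the Bayes rule under squared error loss (the posterior weights are toy numbers, not from the text): the action minimizing the posterior expected loss is the posterior mean.

```python
posterior = {0: 0.2, 1: 0.5, 2: 0.3}   # hypothetical posterior pi*(theta | x)

def posterior_expected_loss(a):
    """rho(a) = sum over theta of pi*(theta | x) * (theta - a)^2."""
    return sum(p * (theta - a) ** 2 for theta, p in posterior.items())

# grid search over candidate actions; the minimizer is the posterior mean
candidates = [i / 100 for i in range(201)]
bayes_action = min(candidates, key=posterior_expected_loss)
posterior_mean = sum(theta * p for theta, p in posterior.items())
```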
One advantage of the Bayesian approach is that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the actual frequentist optimal decision rule as a function of all possible observations is a much more difficult problem. Of equal importance though, the Bayes Rule reflects consideration of loss outcomes under different states of nature, θ. ==== Examples in statistics ==== For a scalar parameter θ, a decision function whose output θ ^ {\displaystyle {\hat {\theta }}} is an estimate of θ, and a quadratic loss function (squared error loss) L ( θ , θ ^ ) = ( θ − θ ^ ) 2 , {\displaystyle L(\theta ,{\hat {\theta }})=(\theta -{\hat {\theta }})^{2},} the risk function becomes the mean squared error of the estimate, R ( θ , θ ^ ) = E θ ⁡ [ ( θ − θ ^ ) 2 ] . {\displaystyle R(\theta ,{\hat {\theta }})=\operatorname {E} _{\theta }\left[(\theta -{\hat {\theta }})^{2}\right].} An estimator found by minimizing the mean squared error estimates the posterior distribution's mean. In density estimation, the unknown parameter is the probability density itself. The loss function is typically chosen to be a norm in an appropriate function space. For example, for the L2 norm, L ( f , f ^ ) = ‖ f − f ^ ‖ 2 2 , {\displaystyle L(f,{\hat {f}})=\|f-{\hat {f}}\|_{2}^{2}\,,} the risk function becomes the mean integrated squared error R ( f , f ^ ) = E ⁡ ( ‖ f − f ^ ‖ 2 ) . {\displaystyle R(f,{\hat {f}})=\operatorname {E} \left(\|f-{\hat {f}}\|^{2}\right).\,} === Economic choice under uncertainty === In economics, decision-making under uncertainty is often modelled using the von Neumann–Morgenstern utility function of the uncertain variable of interest, such as end-of-period wealth. Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized. == Decision rules == A decision rule makes a choice using an optimality criterion.
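The risk function in the squared-error example can be estimated by simulation. This sketch (our own toy comparison, not from the text) contrasts the sample mean with a shrinkage estimator for a normal mean: each has lower risk for some values of θ, which is why criteria for choosing among decision rules are needed.

```python
import random

def risk(delta, theta, n=10, trials=20000, seed=1):
    """Monte Carlo estimate of R(theta, delta) = E_theta[(theta - delta(X))^2]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = [rng.gauss(theta, 1.0) for _ in range(n)]
        total += (theta - delta(x)) ** 2      # L(theta, delta(x))
    return total / trials

def sample_mean(x):
    return sum(x) / len(x)

def shrunk_mean(x):
    return 0.5 * sum(x) / len(x)              # biased toward zero

r_mean_0, r_shrunk_0 = risk(sample_mean, 0.0), risk(shrunk_mean, 0.0)
r_mean_2, r_shrunk_2 = risk(sample_mean, 2.0), risk(shrunk_mean, 2.0)
# near theta = 0 the shrinkage estimator wins; at theta = 2 its bias dominates
```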
Some commonly used criteria are: Minimax: Choose the decision rule with the lowest worst loss — that is, minimize the worst-case (maximum possible) loss: a r g m i n δ max θ ∈ Θ R ( θ , δ ) . {\displaystyle {\underset {\delta }{\operatorname {arg\,min} }}\ \max _{\theta \in \Theta }\ R(\theta ,\delta ).} Invariance: Choose the decision rule which satisfies an invariance requirement. Choose the decision rule with the lowest average loss (i.e. minimize the expected value of the loss function): a r g m i n δ E θ ∈ Θ ⁡ [ R ( θ , δ ) ] = a r g m i n δ ∫ θ ∈ Θ R ( θ , δ ) p ( θ ) d θ . {\displaystyle {\underset {\delta }{\operatorname {arg\,min} }}\operatorname {E} _{\theta \in \Theta }[R(\theta ,\delta )]={\underset {\delta }{\operatorname {arg\,min} }}\ \int _{\theta \in \Theta }R(\theta ,\delta )\,p(\theta )\,d\theta .} == Selecting a loss function == Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances. A common example involves estimating "location". Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent is risk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. 
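The claim above that the mean is optimal under squared-error loss while the median is optimal under absolute-difference loss is easy to check numerically; the data set here is an illustrative toy with one outlier.

```python
# Numerical check (toy data, not from the text) that the mean minimizes total
# squared error while the median minimizes total absolute error.
data = [1.0, 2.0, 2.0, 3.0, 12.0]            # one outlier pulls the mean up

def total_loss(c, loss):
    return sum(loss(x - c) for x in data)

candidates = [i / 10 for i in range(151)]    # grid of candidate locations
best_sq = min(candidates, key=lambda c: total_loss(c, lambda a: a * a))
best_abs = min(candidates, key=lambda c: total_loss(c, abs))

mean = sum(data) / len(data)                 # 4.0
median = sorted(data)[len(data) // 2]        # 2.0
```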
For risk-averse or risk-loving agents, loss is measured as the negative of a utility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for example mortality or morbidity in the field of public health or safety engineering. For most optimization algorithms, it is desirable to have a loss function that is globally continuous and differentiable. Two very commonly used loss functions are the squared loss, L ( a ) = a 2 {\displaystyle L(a)=a^{2}} , and the absolute loss, L ( a ) = | a | {\displaystyle L(a)=|a|} . However the absolute loss has the disadvantage that it is not differentiable at a = 0 {\displaystyle a=0} . The squared loss has the disadvantage that it has the tendency to be dominated by outliers—when summing over a set of a {\displaystyle a} 's (as in ∑ i = 1 n L ( a i ) {\textstyle \sum _{i=1}^{n}L(a_{i})} ), the final sum tends to be the result of a few particularly large a-values, rather than an expression of the average a-value. The choice of a loss function is not arbitrary. It is very restrictive and sometimes the loss function may be characterized by its desirable properties. Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of i.i.d. observations, the principle of complete information, and some others. W. Edwards Deming and Nassim Nicholas Taleb argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after can not, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. 
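A quick sketch of the outlier-domination point made above: with one large residual among small ones, the summed squared loss is almost entirely the outlier's contribution, while the summed absolute loss is not. The residual values are illustrative.

```python
errors = [0.5, -0.3, 0.4, -0.2, 10.0]        # one outlier among small residuals

squared_total = sum(a ** 2 for a in errors)      # dominated by 10.0 ** 2
absolute_total = sum(abs(a) for a in errors)

outlier_share_sq = 10.0 ** 2 / squared_total     # share of the squared total
outlier_share_abs = 10.0 / absolute_total        # share of the absolute total
```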
In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than the classical smooth, continuous, symmetric, differentiable cases. == See also == Bayesian regret Loss functions for classification Discounted maximum loss Hinge loss Scoring rule Statistical risk == References == == Further reading == Aretz, Kevin; Bartram, Söhnke M.; Pope, Peter F. (April–June 2011). "Asymmetric Loss Functions and the Rationality of Expected Stock Returns" (PDF). International Journal of Forecasting. 27 (2): 413–437. doi:10.1016/j.ijforecast.2009.10.008. SSRN 889323. Berger, James O. (1985). Statistical decision theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. Bibcode:1985sdtb.book.....B. ISBN 978-0-387-96098-2. MR 0804611. Cecchetti, S. (2000). "Making monetary policy: Objectives and rules". Oxford Review of Economic Policy. 16 (4): 43–59. doi:10.1093/oxrep/16.4.43. Horowitz, Ann R. (1987). "Loss functions and public policy". Journal of Macroeconomics. 9 (4): 489–504. doi:10.1016/0164-0704(87)90016-4. Waud, Roger N. (1976). "Asymmetric Policymaker Utility Functions and Optimal Policy under Uncertainty". Econometrica. 44 (1): 53–66. doi:10.2307/1911380. JSTOR 1911380.
Wikipedia/0-1_loss_function
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss. == Examples == === Regret === Leonard J. Savage argued that using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made under circumstances will be known and the decision that was in fact taken before they were known. === Quadratic loss function === The use of a quadratic loss function is common, for example when using least squares techniques. 
It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is λ ( x ) = C ( t − x ) 2 {\displaystyle \lambda (x)=C(t-x)^{2}\;} for some constant C; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the squared error loss (SEL). Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function. The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used. The quadratic loss assigns more importance to outliers than to the true data due to its square nature, so alternatives like the Huber, Log-Cosh and SMAE losses are used when the data has many large outliers. === 0-1 loss function === In statistics and decision theory, a frequently used loss function is the 0-1 loss function L ( y ^ , y ) = [ y ^ ≠ y ] {\displaystyle L({\hat {y}},y)=\left[{\hat {y}}\neq y\right]} using Iverson bracket notation, i.e. it evaluates to 1 when y ^ ≠ y {\displaystyle {\hat {y}}\neq y} , and 0 otherwise. == Constructing loss and objective functions == In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. 
In other situations, the decision maker’s preference must be elicited and represented by a scalar-valued function (called also utility function) in a form suitable for optimization — the problem that Ragnar Frisch has highlighted in his Nobel Prize lecture. The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences. In particular, Andranik Tangian showed that the most usable objective functions — quadratic and additive — are determined by a few indifference points. He used this property in the models for constructing these objective functions from either ordinal or cardinal data that were elicited through computer-assisted interviews with decision makers. Among other things, he constructed objective functions to optimally distribute budgets for 16 Westfalian universities and the European subsidies for equalizing unemployment rates among 271 German regions. == Expected loss == In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X. === Statistics === Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms. ==== Frequentist expected loss ==== We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the probability distribution, Pθ, of the observed data, X. This is also referred to as the risk function of the decision rule δ and the parameter θ. Here the decision rule depends on the outcome of X. The risk function is given by: R ( θ , δ ) = E θ ⁡ L ( θ , δ ( X ) ) = ∫ X L ( θ , δ ( x ) ) d P θ ( x ) . 
{\displaystyle R(\theta ,\delta )=\operatorname {E} _{\theta }L{\big (}\theta ,\delta (X){\big )}=\int _{X}L{\big (}\theta ,\delta (x){\big )}\,\mathrm {d} P_{\theta }(x).} Here, θ is a fixed but possibly unknown state of nature, X is a vector of observations stochastically drawn from a population, {\displaystyle \operatorname {E} _{\theta }} is the expectation over all population values of X, dPθ is a probability measure over the event space of X (parametrized by θ) and the integral is evaluated over the entire support of X. ==== Bayes Risk ==== In a Bayesian approach, the expectation is calculated using the prior distribution π* of the parameter θ: {\displaystyle \rho (\pi ^{*},a)=\int _{\Theta }\int _{\mathbf {X}}L(\theta ,a({\mathbf {x}}))\,\mathrm {d} P({\mathbf {x}}\vert \theta )\,\mathrm {d} \pi ^{*}(\theta )=\int _{\mathbf {X}}\int _{\Theta }L(\theta ,a({\mathbf {x}}))\,\mathrm {d} \pi ^{*}(\theta \vert {\mathbf {x}})\,\mathrm {d} M({\mathbf {x}})} where M(x) is known as the predictive likelihood, in which θ has been "integrated out", π*(θ | x) is the posterior distribution, and the order of integration has been changed. One should then choose the action a* which minimises this expected loss, which is referred to as the Bayes risk. In the latter expression, the inner integral over Θ is known as the posterior risk, and minimising it with respect to the decision a also minimizes the overall Bayes risk. This optimal decision, a*, is known as the Bayes (decision) rule: it minimises the average loss over all possible states of nature θ and over all possible (probability-weighted) data outcomes.
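The Bayes rule can be made concrete with a tiny discrete example. The sketch below uses a hypothetical two-state, two-action problem with a 0-1-style loss (all numbers are invented for illustration): the posterior is computed by Bayes' theorem and the action minimizing the posterior risk is selected.

```python
# Hypothetical discrete decision problem: two states of nature, two actions.
priors = {"theta0": 0.7, "theta1": 0.3}                      # pi*(theta)
likelihood = {("x", "theta0"): 0.2, ("x", "theta1"): 0.9}    # P(x | theta)
loss = {("a0", "theta0"): 0.0, ("a0", "theta1"): 1.0,        # 0-1 loss table
        ("a1", "theta0"): 1.0, ("a1", "theta1"): 0.0}

def bayes_action(x):
    # Posterior pi*(theta | x) by Bayes' theorem.
    joint = {th: likelihood[(x, th)] * priors[th] for th in priors}
    m = sum(joint.values())                 # predictive likelihood M(x)
    posterior = {th: joint[th] / m for th in joint}
    # Posterior risk of each action; the Bayes rule minimizes it.
    post_risk = {a: sum(loss[(a, th)] * posterior[th] for th in posterior)
                 for a in ("a0", "a1")}
    return min(post_risk, key=post_risk.get), post_risk

action, risks = bayes_action("x")
```

Here the data pull the posterior toward theta1 despite the prior favoring theta0, so the Bayes rule picks the action matched to theta1.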
One advantage of the Bayesian approach is that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the actual frequentist optimal decision rule as a function of all possible observations is a much more difficult problem. Of equal importance though, the Bayes rule reflects consideration of loss outcomes under different states of nature, θ. ==== Examples in statistics ==== For a scalar parameter θ, a decision function whose output {\displaystyle {\hat {\theta }}} is an estimate of θ, and a quadratic loss function (squared error loss) {\displaystyle L(\theta ,{\hat {\theta }})=(\theta -{\hat {\theta }})^{2},} the risk function becomes the mean squared error of the estimate, {\displaystyle R(\theta ,{\hat {\theta }})=\operatorname {E} _{\theta }\left[(\theta -{\hat {\theta }})^{2}\right].} An estimator found by minimizing the mean squared error estimates the posterior distribution's mean. In density estimation, the unknown parameter is the probability density itself. The loss function is typically chosen to be a norm in an appropriate function space. For example, for the L2 norm, {\displaystyle L(f,{\hat {f}})=\|f-{\hat {f}}\|_{2}^{2}\,,} the risk function becomes the mean integrated squared error {\displaystyle R(f,{\hat {f}})=\operatorname {E} \left(\|f-{\hat {f}}\|^{2}\right).\,} === Economic choice under uncertainty === In economics, decision-making under uncertainty is often modelled using the von Neumann–Morgenstern utility function of the uncertain variable of interest, such as end-of-period wealth. Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized. == Decision rules == A decision rule makes a choice using an optimality criterion.
Some commonly used criteria are: Minimax: Choose the decision rule with the lowest worst loss — that is, minimize the worst-case (maximum possible) loss: {\displaystyle {\underset {\delta }{\operatorname {arg\,min} }}\ \max _{\theta \in \Theta }\ R(\theta ,\delta ).} Invariance: Choose the decision rule which satisfies an invariance requirement. Bayes: Choose the decision rule with the lowest average loss (i.e. minimize the expected value of the loss function): {\displaystyle {\underset {\delta }{\operatorname {arg\,min} }}\operatorname {E} _{\theta \in \Theta }[R(\theta ,\delta )]={\underset {\delta }{\operatorname {arg\,min} }}\ \int _{\theta \in \Theta }R(\theta ,\delta )\,p(\theta )\,d\theta .} == Selecting a loss function == Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances. A common example involves estimating "location". Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent is risk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth.
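The location example above can be checked by brute force. With a small made-up data set containing one outlier, a grid search over candidate location estimates confirms that the mean minimizes total squared-error loss while the median minimizes total absolute-difference loss:

```python
data = [1.0, 2.0, 3.0, 4.0, 100.0]  # one large outlier

def total_loss(c, loss):
    """Total loss of using location estimate c for every data point."""
    return sum(loss(x - c) for x in data)

# Scan candidate location estimates on a 0.1-spaced grid.
candidates = [i / 10 for i in range(0, 1100)]
best_sq = min(candidates, key=lambda c: total_loss(c, lambda a: a * a))
best_abs = min(candidates, key=lambda c: total_loss(c, abs))

mean = sum(data) / len(data)            # 22.0, dragged up by the outlier
median = sorted(data)[len(data) // 2]   # 3.0, insensitive to the outlier
```

The outlier drags the squared-error-optimal estimate far from the bulk of the data, while the absolute-loss-optimal estimate stays at the median, which is exactly the robustness trade-off discussed in this section.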
For risk-averse or risk-loving agents, loss is measured as the negative of a utility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for example mortality or morbidity in the field of public health or safety engineering. For most optimization algorithms, it is desirable to have a loss function that is globally continuous and differentiable. Two very commonly used loss functions are the squared loss, {\displaystyle L(a)=a^{2}}, and the absolute loss, {\displaystyle L(a)=|a|}. However, the absolute loss has the disadvantage that it is not differentiable at {\displaystyle a=0}. The squared loss has the disadvantage that it tends to be dominated by outliers—when summing over a set of {\displaystyle a}'s (as in {\textstyle \sum _{i=1}^{n}L(a_{i})}), the final sum tends to be the result of a few particularly large a-values, rather than an expression of the average a-value. The choice of a loss function is not arbitrary. It is very restrictive, and sometimes the loss function may be characterized by its desirable properties. Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of i.i.d. observations, the principle of complete information, and some others. W. Edwards Deming and Nassim Nicholas Taleb argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after cannot, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early.
In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than classical smooth, continuous, symmetric, differentiable cases. == See also == Bayesian regret Loss functions for classification Discounted maximum loss Hinge loss Scoring rule Statistical risk == References == == Further reading == Aretz, Kevin; Bartram, Söhnke M.; Pope, Peter F. (April–June 2011). "Asymmetric Loss Functions and the Rationality of Expected Stock Returns" (PDF). International Journal of Forecasting. 27 (2): 413–437. doi:10.1016/j.ijforecast.2009.10.008. SSRN 889323. Berger, James O. (1985). Statistical decision theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. Bibcode:1985sdtb.book.....B. ISBN 978-0-387-96098-2. MR 0804611. Cecchetti, S. (2000). "Making monetary policy: Objectives and rules". Oxford Review of Economic Policy. 16 (4): 43–59. doi:10.1093/oxrep/16.4.43. Horowitz, Ann R. (1987). "Loss functions and public policy". Journal of Macroeconomics. 9 (4): 489–504. doi:10.1016/0164-0704(87)90016-4. Waud, Roger N. (1976). "Asymmetric Policymaker Utility Functions and Optimal Policy under Uncertainty". Econometrica. 44 (1): 53–66. doi:10.2307/1911380. JSTOR 1911380.
Wikipedia/Loss_functions
A wireless ad hoc network (WANET) or mobile ad hoc network (MANET) is a decentralized type of wireless network. The network is ad hoc because it does not rely on a pre-existing infrastructure, such as routers or wireless access points. Instead, each node participates in routing by forwarding data for other nodes. The determination of which nodes forward data is made dynamically on the basis of network connectivity and the routing algorithm in use. Such wireless networks lack the complexities of infrastructure setup and administration, enabling devices to create and join networks "on the fly". Each device in a MANET is free to move independently in any direction, and will therefore change its links to other devices frequently. Each must forward traffic unrelated to its own use, and therefore be a router. The primary challenge in building a MANET is equipping each device to continuously maintain the information required to properly route traffic. This becomes harder as the scale of the MANET increases due to 1) the desire to route packets to/through every other node, 2) the percentage of overhead traffic needed to maintain real-time routing status, 3) each node routing its own goodput independently of, and unaware of, the needs of others, and 4) the fact that all must share limited communication bandwidth, such as a slice of radio spectrum. Such networks may operate by themselves or may be connected to the larger Internet. They may contain one or more different transceivers between nodes. This results in a highly dynamic, autonomous topology. MANETs usually have a routable networking environment on top of a link layer ad hoc network. == History == === Packet radio === The earliest wireless data network was called PRNET, the packet radio network, and was sponsored by the Defense Advanced Research Projects Agency (DARPA) in the early 1970s. Bolt, Beranek and Newman Inc. (BBN) and SRI International designed, built, and experimented with these earliest systems.
Experimenters included Robert Kahn, Jerry Burchfiel, and Ray Tomlinson. Similar experiments took place in the amateur radio community with the AX.25 protocol. These early packet radio systems predated the Internet, and indeed were part of the motivation of the original Internet Protocol suite. Later DARPA experiments included the Survivable Radio Network (SURAN) project, which took place in the 1980s. A successor to these systems was fielded in the mid-1990s for the US Army, and later other nations, as the Near-term digital radio. A third wave of academic and research activity started in the mid-1990s with the advent of inexpensive 802.11 radio cards for personal computers. Current wireless ad hoc networks are designed primarily for military utility. Problems with packet radios were: (1) bulky elements, (2) slow data rates, and (3) the inability to maintain links if mobility was high. The project did not proceed much further until the early 1990s, when wireless ad hoc networks were born. === Early work on MANET === The growth of laptops and 802.11/Wi-Fi wireless networking has made MANETs a popular research topic since the mid-1990s. Many academic papers evaluate protocols and their abilities, assuming varying degrees of mobility within a bounded space, usually with all nodes within a few hops of each other. Different protocols are then evaluated based on measures such as the packet drop rate, the overhead introduced by the routing protocol, end-to-end packet delays, network throughput, ability to scale, etc. In the early 1990s, Charles Perkins from SUN Microsystems USA, and Chai Keong Toh from Cambridge University, separately started to work on a different Internet, that of a wireless ad hoc network. Perkins was working on the dynamic addressing issues. Toh worked on a new routing protocol, which was known as ABR – associativity-based routing.
Perkins eventually proposed DSDV – Destination-Sequenced Distance-Vector routing, which was based on distributed distance vector routing. Toh's proposal was on-demand routing, i.e. routes are discovered on the fly, in real time, as and when needed. ABR was submitted to the IETF as an RFC. ABR was implemented successfully in the Linux OS on Lucent WaveLAN 802.11a enabled laptops, and a practical ad hoc mobile network was thereby proven to be possible in 1999. Another routing protocol known as AODV was subsequently introduced and later proven and implemented in 2005. David Johnson and Dave Maltz proposed DSR – Dynamic Source Routing, which was published as an experimental RFC in 2007. == Applications == The decentralized nature of wireless ad hoc networks makes them suitable for a variety of applications where central nodes cannot be relied on, and may improve the scalability of networks compared to wireless managed networks, though theoretical and practical limits to the overall capacity of such networks have been identified. Minimal configuration and quick deployment make ad hoc networks suitable for emergency situations like natural disasters or military conflicts. The presence of dynamic and adaptive routing protocols enables ad hoc networks to be formed quickly. === Mobile ad hoc networks (MANETs) === A mobile ad hoc network (MANET) is a continuously self-configuring, self-organizing, infrastructure-less network of mobile devices connected without wires. It is sometimes known as an "on-the-fly" network or "spontaneous network". === Vehicular ad hoc networks (VANETs) === VANETs are used for communication between vehicles and roadside equipment. Intelligent vehicular ad hoc networks (InVANETs) apply artificial intelligence to help vehicles behave intelligently during vehicle-to-vehicle collisions and accidents. Vehicles use radio waves to communicate with each other, creating communication networks instantly on the fly as vehicles move along roads.
VANETs need to be secured with lightweight protocols. === Smartphone ad hoc networks (SPANs) === A SPAN leverages existing hardware (primarily Wi-Fi and Bluetooth) and software (protocols) in commercially available smartphones to create peer-to-peer networks without relying on cellular carrier networks, wireless access points, or traditional network infrastructure. SPANs differ from traditional hub-and-spoke networks, such as Wi-Fi Direct, in that they support multi-hop relays and there is no notion of a group leader, so peers can join and leave at will without destroying the network. Apple's iPhone with iOS version 7.0 and higher is capable of multi-peer ad hoc mesh networking. === Wireless mesh networks === Mesh networks take their name from the topology of the resultant network. In a fully connected mesh, each node is connected to every other node, forming a "mesh". A partial mesh, by contrast, has a topology in which some nodes are not connected to others, although this term is seldom used. Wireless ad hoc networks can take the form of mesh networks or other topologies. A wireless ad hoc network does not have a fixed topology, and its connectivity among nodes is totally dependent on the behavior of the devices, their mobility patterns, distance from each other, etc. Hence, wireless mesh networks are a particular type of wireless ad hoc network, with special emphasis on the resultant network topology. While some wireless mesh networks (particularly those within a home) have relatively infrequent mobility and thus infrequent link breaks, other more mobile mesh networks require frequent routing adjustments to account for lost links. === Army tactical MANETs === Military or tactical MANETs are used by military units with emphasis on data rate, real-time requirements, fast re-routing during mobility, data security, radio range, and integration with existing systems.
Common radio waveforms include the US Army's JTRS SRW, Silvus Technologies MN-MIMO Waveform (Mobile Networked MIMO), and Codan DTC MeshUltra Waveform. Ad hoc mobile communications fit this need well, especially given their infrastructureless nature, fast deployment and operation. Military MANETs are used by military units with an emphasis on rapid deployment, infrastructureless, all-wireless networks (no fixed radio towers), robustness (link breaks are no problem), security, range, and instant operation. === Air Force UAV ad hoc networks === Flying ad hoc networks (FANETs) are composed of unmanned aerial vehicles, allowing great mobility and providing connectivity to remote areas. An unmanned aerial vehicle (UAV) is an aircraft with no pilot on board. UAVs can be remotely controlled (i.e., flown by a pilot at a ground control station) or can fly autonomously based on pre-programmed flight plans. Civilian uses of UAVs include modeling 3D terrains, package delivery (logistics), etc. UAVs have also been used by the US Air Force for data collection and situation sensing, without risking the pilot in a foreign unfriendly environment. With wireless ad hoc network technology embedded into the UAVs, multiple UAVs can communicate with each other and work as a team, collaboratively completing a task and mission. If a UAV is destroyed by an enemy, its data can be quickly offloaded wirelessly to other neighboring UAVs. The UAV ad hoc communication network is also sometimes referred to as a UAV instant sky network. More generally, aerial MANETs in UAVs are now (as of 2021) successfully implemented and operational in mini tactical reconnaissance ISR UAVs like the BRAMOR C4EYE from Slovenia. === Navy ad hoc networks === Navy ships traditionally use satellite communications and other maritime radios to communicate with each other or with ground stations back on land. However, such communications are restricted by delays and limited bandwidth.
Wireless ad hoc networks enable ship-area-networks to be formed while at sea, enabling high-speed wireless communications among ships, enhancing their sharing of imaging and multimedia data, and better co-ordination in battlefield operations. Some defense companies (such as Rockwell Collins, Silvus Technologies and Rohde & Schwarz) have produced products that enhance ship-to-ship and ship-to-shore communications. === Sensor networks === Sensors are useful devices that collect information related to a specific parameter, such as noise, temperature, humidity, pressure, etc. Sensors are increasingly connected wirelessly to allow large-scale collection of sensor data. With a large sample of sensor data, analytics processing can be used to make sense out of these data. The connectivity of wireless sensor networks relies on the principles behind wireless ad hoc networks, since sensors can now be deployed without any fixed radio towers, and they can form networks on the fly. "Smart Dust" was one of the early projects, done at UC Berkeley, where tiny radios were used to interconnect smart dust. More recently, mobile wireless sensor networks (MWSNs) have also become an area of academic interest. === Robotics === Efforts have been made to co-ordinate and control a group of robots to undertake collaborative work to complete a task. Centralized control is often based on a "star" approach, where robots take turns talking to the controller station. However, with wireless ad hoc networks, robots can form a communication network on the fly, i.e., robots can now "talk" to each other and collaborate in a distributed fashion. With a network of robots, the robots can communicate among themselves, share local information, and distributively decide how to resolve a task in the most effective and efficient way. === Disaster response === Another civilian use of wireless ad hoc networks is public safety.
At times of disasters (floods, storms, earthquakes, fires, etc.), a quick and instant wireless communication network is necessary. Especially at times of earthquakes, when radio towers have collapsed or been destroyed, wireless ad hoc networks can be formed independently. Firefighters and rescue workers can use ad hoc networks to communicate and rescue those injured. Commercial radios with such capability are available on the market. === Hospital ad hoc network === Wireless ad hoc networks allow sensors, videos, instruments, and other devices to be deployed and interconnected wirelessly for clinic and hospital patient monitoring, doctor and nurse alert notification, and also making sense of such data quickly at fusion points, so that lives can be saved. === Data monitoring and mining === MANETs can be used to facilitate the collection of sensor data for data mining in a variety of applications, such as air pollution monitoring, and different types of architectures can be used for such applications. A key characteristic of such applications is that nearby sensor nodes monitoring an environmental feature typically register similar values. This kind of data redundancy, due to the spatial correlation between sensor observations, inspires techniques for in-network data aggregation and mining. By measuring the spatial correlation between data sampled by different sensors, a wide class of specialized algorithms can be developed to produce more efficient spatial data mining algorithms as well as more efficient routing strategies. Also, researchers have developed performance models for MANETs to apply queueing theory. == Challenges == Several books and works have revealed the technical and research challenges facing wireless ad hoc networks or MANETs.
The advantages for users, the technical difficulties in implementation, and the side effects on radio spectrum pollution can be briefly summarized below: === Advantages for users === The obvious appeal of MANETs is that the network is decentralised and nodes/devices are mobile, that is to say there is no fixed infrastructure, which provides the possibility for numerous applications in different areas such as environmental monitoring, disaster relief and military communications. Since the early 2000s, interest in MANETs has greatly increased, in part due to the fact that mobility can improve network capacity, as shown by Grossglauser and Tse, along with the introduction of new technologies. One main advantage of a decentralised network is that it is typically more robust than a centralised network, due to the multi-hop fashion in which information is relayed. For example, in the cellular network setting, a drop in coverage occurs if a base station stops working; however, the chance of a single point of failure in a MANET is reduced significantly, since the data can take multiple paths. Since the MANET architecture evolves with time, it has the potential to resolve issues such as isolation/disconnection from the network. Further advantages of MANETs over networks with a fixed topology include flexibility (an ad hoc network can be created anywhere with mobile devices), scalability (adding nodes to the network is easy) and lower administration costs (no need to build an infrastructure first). === Implementation difficulties === With a time-evolving network, variations in network performance are to be expected, since there is no fixed architecture (no fixed connections).
Furthermore, since network topology determines interference and thus connectivity, the mobility pattern of devices within the network will impact network performance, possibly resulting in data having to be resent many times (increased delay), and the allocation of network resources such as power remains unclear. Moreover, finding a model that accurately represents human mobility whilst remaining mathematically tractable remains an open problem, due to the large range of factors that influence it. Some typical models used include the random walk, random waypoint and Lévy flight models. === Side effects === Use of unlicensed frequency spectrum, contributing to radio spectrum pollution. == Radios and modulation == Wireless ad hoc networks can operate over different types of radios. All radios use modulation to move information over a certain bandwidth of radio frequencies. Given the need to move large amounts of information quickly over long distances, a MANET radio channel ideally has large bandwidth (e.g. amount of radio spectrum), lower frequencies, and higher power. Given the desire to communicate with many other nodes, ideally simultaneously, many channels are needed. Given that radio spectrum is shared and regulated, there is less bandwidth available at lower frequencies. Processing many radio channels requires many resources. Given the need for mobility, small size and low power consumption are very important. Picking a MANET radio and modulation has many trade-offs; many start with the specific frequency and bandwidth they are allowed to use. Radios can be UHF (300 – 3000 MHz), SHF (3 – 30 GHz), or EHF (30 – 300 GHz). Wi-Fi ad hoc uses the unlicensed ISM 2.4 GHz radios. It can also be used on 5.8 GHz radios. At higher frequencies, such as 300 GHz, absorption of the signal becomes more predominant. Army tactical radios usually employ a variety of VHF, UHF, and SHF radios to provide a variety of communication modes.
In the 800, 900, 1200, 1800 MHz range, cellular radios are predominant. Some cellular radios use ad hoc communications to extend cellular range to areas and devices not reachable by the cellular base station. Next-generation Wi-Fi, known as 802.11ax, provides low delay, high capacity (up to 10 Gbit/s) and a low packet loss rate, offering 12 streams – 8 streams at 5 GHz and 4 streams at 2.4 GHz. IEEE 802.11ax uses 8x8 MU-MIMO, OFDMA, and 80 MHz channels. Hence, 802.11ax has the ability to form high-capacity Wi-Fi ad hoc networks. At 60 GHz, there is another form of Wi-Fi known as WiGig – wireless gigabit. This has the ability to offer up to 7 Gbit/s throughput. Currently, WiGig is targeted to work with 5G cellular networks. Circa 2020, the general consensus finds the 'best' modulation for moving information over higher frequency waves to be orthogonal frequency-division multiplexing, as used in 4G LTE, 5G, and Wi-Fi. == Protocol stack == The challenges affecting MANETs span various layers of the OSI protocol stack. The media access control (MAC) layer has to be improved to resolve collisions and hidden-terminal problems. The network layer routing protocol has to be improved to resolve dynamically changing network topologies and broken routes. The transport layer protocol has to be improved to handle lost or broken connections. The session layer protocol has to deal with discovery of servers and services. A major limitation of mobile nodes is that they have high mobility, causing links to be frequently broken and re-established. Moreover, the bandwidth of a wireless channel is also limited, and nodes operate on limited battery power, which will eventually be exhausted. These factors make the design of a mobile ad hoc network challenging. The cross-layer design deviates from the traditional network design approach in which each layer of the stack operates independently.
Modifying transmission power helps a node to dynamically vary its propagation range at the physical layer, because propagation distance increases with transmission power. This information is passed from the physical layer to the network layer so that it can take optimal decisions in routing protocols. A major advantage of this approach is that it allows access to information between the physical layer and the top layers (MAC and network layer). Some elements of the software stack were developed to allow code updates in situ, i.e., with the nodes embedded in their physical environment and without needing to bring the nodes back into the lab facility. Such software updating relied on an epidemic mode of dissemination of information and had to be done both efficiently (few network transmissions) and quickly. == Routing == Routing in wireless ad hoc networks or MANETs generally falls into three categories, namely: proactive routing, reactive routing, and hybrid routing. === Proactive routing === This type of protocol maintains fresh lists of destinations and their routes by periodically distributing routing tables throughout the network. The main disadvantages of such algorithms are: A large amount of data is exchanged for maintenance. Slow reaction to restructuring and failures. Example: Optimized Link State Routing Protocol (OLSR) ==== Distance vector routing ==== As in a fixed network, nodes maintain routing tables. Distance-vector protocols are based on calculating the direction and distance to any link in a network. "Direction" usually means the next hop address and the exit interface. "Distance" is a measure of the cost to reach a certain node. The least-cost route between any two nodes is the route with minimum distance. Each node maintains a vector (table) of minimum distance to every node. The cost of reaching a destination is calculated using various route metrics.
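The distance-vector idea can be sketched on a hypothetical four-node topology: each node keeps a vector of least-known costs and repeatedly relaxes it with the costs advertised by its direct neighbors (Bellman-Ford relaxation) until nothing changes.

```python
INF = float("inf")
# Hypothetical symmetric link costs between direct neighbors.
neighbors = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 7},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 7, "C": 1},
}
nodes = list(neighbors)
# dist[n][d] = best known cost from node n to destination d.
dist = {n: {d: (0 if d == n else neighbors[n].get(d, INF)) for d in nodes}
        for n in nodes}

changed = True
while changed:            # iterate relaxations until every vector is stable
    changed = False
    for n in nodes:
        for d in nodes:
            # Either keep the current estimate or route through a neighbor.
            best = min([dist[n][d]] +
                       [c + dist[m][d] for m, c in neighbors[n].items()])
            if best < dist[n][d]:
                dist[n][d] = best
                changed = True
```

After convergence, A reaches D at cost 4 via the path A-B-C-D, even though the direct B-D link costs 7, illustrating how each node discovers least-cost routes purely from its neighbors' vectors.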
RIP uses the hop count to the destination, whereas IGRP takes into account other information such as node delay and available bandwidth. === Reactive routing === This type of protocol finds a route based on user and traffic demand by flooding the network with Route Request or Discovery packets. The main disadvantages of such algorithms are: High latency in route finding. Excessive flooding can lead to network clogging. However, clustering can be used to limit flooding. The latency incurred during route discovery is not significant compared to periodic route update exchanges by all nodes in the network. Example: Ad hoc On-Demand Distance Vector Routing (AODV) ==== Flooding ==== Flooding is a simple routing algorithm in which every incoming packet is sent out through every outgoing link except the one it arrived on. Flooding is used in bridging and in systems such as Usenet and peer-to-peer file sharing, and as part of some routing protocols, including OSPF, DVMRP, and those used in wireless ad hoc networks. === Hybrid routing === This type of protocol combines the advantages of proactive and reactive routing. The routing is initially established with some proactively prospected routes and then serves the demand from additionally activated nodes through reactive flooding. The choice of one or the other method requires predetermination for typical cases. The main disadvantages of such algorithms are: The advantage depends on the number of other nodes activated. Reaction to traffic demand depends on the gradient of traffic volume. Example: Zone Routing Protocol (ZRP) === Position-based routing === Position-based routing methods use information on the exact locations of the nodes. This information is obtained, for example, via a GPS receiver. Based on the exact location, the best path between source and destination nodes can be determined.
Example: "Location-Aided Routing in mobile ad hoc networks" (LAR) == Technical requirements for implementation == An ad hoc network is made up of multiple "nodes" connected by "links." Links are influenced by the node's resources (e.g., transmitter power, computing power and memory) and behavioral properties (e.g., reliability), as well as link properties (e.g. length-of-link and signal loss, interference and noise). Since links can be connected or disconnected at any time, a functioning network must be able to cope with this dynamic restructuring, preferably in a way that is timely, efficient, reliable, robust, and scalable. The network must allow any two nodes to communicate by relaying the information via other nodes. A "path" is a series of links that connects two nodes. Various routing methods use one or two paths between any two nodes; flooding methods use all or most of the available paths. == Medium-access control == In most wireless ad hoc networks, the nodes compete for access to a shared wireless medium, often resulting in collisions (interference). Collisions can be handled using centralized scheduling or distributed contention access protocols. Using cooperative wireless communications improves immunity to interference by having the destination node combine self-interference and other-node interference to improve decoding of the desired signals. == Simulation == One key problem in wireless ad hoc networks is foreseeing the variety of possible situations that can occur. As a result, modeling and simulation (M&S) using extensive parameter sweeping and what-if analysis becomes an extremely important paradigm for use in ad hoc networks. One solution is the use of simulation tools like OPNET, NetSim or ns2.
A comparative study of various simulators for VANETs reveals that factors such as constrained road topology, multi-path fading and roadside obstacles, traffic flow models, trip models, varying vehicular speed and mobility, traffic lights, traffic congestion, drivers' behavior, etc., have to be taken into consideration in the simulation process to reflect realistic conditions. === Emulation testbed === In 2009, the U.S. Army Research Laboratory (ARL) and Naval Research Laboratory (NRL) developed a Mobile Ad-Hoc Network emulation testbed, where algorithms and applications were subjected to representative wireless network conditions. The testbed was based on a version of the "MANE" (Mobile Ad hoc Network Emulator) software originally developed by NRL. === Mathematical models === The traditional model is the random geometric graph. Early work included simulating ad hoc mobile networks on sparse and densely connected topologies. Nodes are first scattered randomly in a constrained physical space. Each node then has a predefined fixed cell size (radio range). A node is said to be connected to another node if this neighbor is within its radio range. Nodes are then moved (migrated away) based on a random model, such as random walk or Brownian motion. Different mobility models and numbers of nodes yield different route lengths and hence different numbers of multi-hops. Random geometric graphs consist of a set of nodes placed according to a point process in some usually bounded subset of the n-dimensional plane, mutually coupled according to a Boolean probability mass function of their spatial separation (see e.g. unit disk graphs). The connections between nodes may have different weights to model the difference in channel attenuations. One can then study network observables (such as connectivity, centrality or the degree distribution) from a graph-theoretic perspective. One can further study network protocols and algorithms to improve network throughput and fairness.
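The random geometric graph construction just described — scatter nodes, connect any pair within radio range, then ask graph-theoretic questions such as connectivity — can be sketched with the standard library. The node count, unit-square area and radio range below are arbitrary assumptions for illustration.

```python
import math
import random

def random_geometric_graph(n, radio_range, seed=0):
    """Scatter n nodes uniformly in the unit square and connect any two
    nodes whose distance is within the radio range (unit-disk model)."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pos[i], pos[j]) <= radio_range:
                edges[i].add(j)
                edges[j].add(i)
    return pos, edges

def is_connected(edges):
    """Check connectivity with a depth-first search from node 0."""
    seen, stack = set(), [0]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(edges[node])
    return len(seen) == len(edges)

pos, edges = random_geometric_graph(50, radio_range=0.3)
print(is_connected(edges))
```

Sweeping `radio_range` or `n` and recording how often the graph is connected reproduces the kind of parameter study mentioned above.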
== Security == Most wireless ad hoc networks do not implement any network access control, leaving these networks vulnerable to resource consumption attacks where a malicious node injects packets into the network with the goal of depleting the resources of the nodes relaying the packets. To thwart or prevent such attacks, it is necessary to employ authentication mechanisms that ensure that only authorized nodes can inject traffic into the network. Even with authentication, these networks are vulnerable to packet dropping or delaying attacks, whereby an intermediate node drops the packet or delays it, rather than promptly sending it to the next hop. In a multicast and dynamic environment, establishing temporary 1:1 secure 'sessions' using PKI with every other node is not feasible (as is done with HTTPS, most VPNs, etc. at the transport layer). Instead, a common solution is to use pre-shared keys for symmetric, authenticated encryption at the link layer, for example MACsec using AES-256-GCM. With this method, every properly formatted packet received is authenticated and then passed along for decryption, or dropped. It also means the key(s) in each node must be changed more often and simultaneously (e.g. to avoid reusing an IV). === Trust management === Trust establishment and management in MANETs face challenges due to resource constraints and the complex interdependency of networks. Managing trust in a MANET needs to consider the interactions between the composite cognitive, social, information and communication networks, and take into account the resource constraints (e.g., computing power, energy, bandwidth, time) and dynamics (e.g., topology changes, node mobility, node failure, propagation channel conditions).
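The authenticate-then-drop behavior described in the Security section above can be sketched with Python's standard library. HMAC-SHA-256 stands in here for MACsec's AES-256-GCM (authenticated encryption is not in the standard library), and the key handling and packet format are invented for illustration only.

```python
import hashlib
import hmac
import os

PRE_SHARED_KEY = os.urandom(32)  # illustrative; real nodes share this out of band

def seal(payload: bytes) -> bytes:
    """Append an authentication tag to an outgoing packet."""
    tag = hmac.new(PRE_SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def receive(packet: bytes):
    """Authenticate an incoming packet; return its payload, or None (drop)."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(PRE_SHARED_KEY, payload, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected):
        return payload   # authenticated: pass along for further processing
    return None          # unauthenticated: drop silently

packet = seal(b"hello, next hop")
print(receive(packet))                    # b'hello, next hop'
print(receive(b"forged" + b"\x00" * 32))  # None
```

A real deployment would also encrypt the payload and rotate `PRE_SHARED_KEY` on a schedule, as the text notes.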
Researchers of trust management in MANETs have suggested that such complex interactions require a composite trust metric that captures aspects of communications and social networks, and corresponding trust measurement, trust distribution, and trust management schemes. Continuous monitoring of every node within a MANET is necessary for trust and reliability, but difficult because 1) it is by definition discontinuous, 2) it requires input from the node itself and 3) it requires input from the node's 'nearby' peers. == See also == Ad hoc wireless distribution service Delay-tolerant networking Independent basic service set (IBSS) List of ad hoc routing protocols Mobile wireless sensor network Personal area network (PAN) Smart meter Wi-Fi Direct Wireless community network Wireless mesh network Wireless sensor network == References == == Further reading == Satyajeet, D.; Deshmukh, A. R.; Dorle, S. S. (January 2016). "Heterogeneous Approaches for Cluster based Routing Protocol in Vehicular Ad Hoc Network (VANET)". International Journal of Computer Applications. 134 (12): 1–8. Bibcode:2016IJCA..134l...1S. doi:10.5120/ijca2016908080. Royer, E.; Chai Keong Toh (April 1999). "A Review of Current Routing Protocols for Ad Hoc Mobile Wireless Networks". IEEE Personal Communications. 6 (2): 46–55. CiteSeerX 10.1.1.11.8637. doi:10.1109/98.760423. Mauve, M.; Widmer, J.; Hartenstein, H. (December 2001). "A Survey on Position-Based Routing in Mobile Ad Hoc Networks". IEEE Network. 1 (6): 30–39. CiteSeerX 10.1.1.25.2774. doi:10.1109/65.967595. Djenouri, D.; Kheladi, L.; Badache, N. (October 2005). "A Survey of Security Issues in Mobile Ad hoc and Sensor Networks". IEEE Communications Surveys and Tutorials. 7 (4): 2–28. doi:10.1109/COMST.2005.1593277. S2CID 11135536. Cano, Jose; Cano, Juan-Carlos; Toh, Chai-Keong; Calafate, Carlos T.; Manzoni, Pietro (2010). "EasyMANET: an extensible and configurable platform for service provisioning in MANET environments". IEEE Communications Magazine. 48 (12): 159–167.
doi:10.1109/mcom.2010.5673087. S2CID 20381835. Kahn, Robert E. (January 1977). "The Organization of Computer Resources into a Packet Radio Network". IEEE Transactions on Communications. COM-25 (1): 169–178. doi:10.1109/tcom.1977.1093714. Jubin, J.; Tornow, J. D. (January 1987). "The DARPA Packet Radio Network Protocols". Proceedings of the IEEE. 75 (1): 21–32. Bibcode:1987IEEEP..75...21J. doi:10.1109/proc.1987.13702. S2CID 13345464. Schacham, N.; Westcott, J. (January 1987). "Future directions in packet radio architectures and protocols". Proceedings of the IEEE. 75 (1): 83–99. Bibcode:1987IEEEP..75...83S. doi:10.1109/PROC.1987.13707. S2CID 1779198. == External links == IETF MANET group
Wikipedia/Mobile_ad_hoc_network
Network emulation is a technique for testing the performance of real applications over a virtual network. This is different from network simulation, where virtual models of traffic, network models, channels, and protocols are applied. The aim is to assess performance, predict the impact of change, or otherwise optimize technology decision-making. == Methods of emulation == Network emulation is the act of testing the behavior of a network (5G, wireless, MANETs, etc.) in a lab. A personal computer or virtual machine runs software to perform the network emulation; a dedicated emulation device is sometimes used for link emulation. Networks introduce delay and errors, and drop packets. The primary goal of network emulation is to create an environment whereby users can connect the devices, applications, products, and/or services being tested to validate their performance, stability, or functionality against real-world network scenarios. Once tested in a controlled environment against actual network conditions, users can have confidence that the item being tested will perform as expected. == Emulation, simulation, and traffic generation == Emulation differs from simulation in that a network emulator appears to be a network; end-systems such as computers can be attached to the emulator and will behave as if they are attached to a network. A network emulator mirrors the network which connects end-systems, not the end-systems themselves. Network simulators are typically programs that run on a single computer, take an abstract description of the network traffic such as a flow arrival process, and yield performance statistics such as throughput, delay, loss, etc. These products are typically found in the Development and QA environments of Service Providers, Network Equipment Manufacturers, and Enterprises.
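As a toy illustration of what a link emulator imposes on traffic, the sketch below applies a fixed one-way delay and an independent random drop probability to a list of packet send times. The parameter values are arbitrary assumptions; real emulators such as the ones discussed below operate on live traffic rather than on a list of timestamps.

```python
import random

def emulate_link(send_times, delay, loss_rate, seed=0):
    """Return (arrival_time, packet_index) pairs for the packets that
    survive a link with fixed one-way delay and independent random loss."""
    rng = random.Random(seed)
    delivered = []
    for i, t in enumerate(send_times):
        if rng.random() < loss_rate:
            continue  # packet dropped by the emulated link
        delivered.append((t + delay, i))
    return delivered

# Four packets sent 10 ms apart over a link with 100 ms delay, 25% loss.
sends = [0.00, 0.01, 0.02, 0.03]
print(emulate_link(sends, delay=0.1, loss_rate=0.25))
```

More faithful link models would add jitter, reordering and bandwidth limits in the same per-packet loop.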
== Network emulation software == Software developers typically want to analyze the response time and sensitivity to packet loss of client-server applications and emulate specific network effects (of 5G, smart homes, industrial IoT, military networks, etc.) with different round-trip times, throughputs, bit error rates, and packet drops. Two open-source network emulators are the Common Open Research Emulator (CORE) and the Extendable Mobile Ad hoc Network Emulator (EMANE). They both support operation as network black boxes, i.e. external machines/devices can be hooked up to the emulated network with no knowledge of the emulation. They also support both wired and wireless network emulation with various degrees of fidelity. CORE is more useful for quick network layouts (layer 3 and above) and single-machine emulation, while EMANE is better suited for distributed, high-fidelity, large-scale network emulation (layers 1/2). == Traffic generation software == The network performance under maximum throughput conditions can be analyzed by network traffic measurement in a testbed network, using a network traffic generator such as iperf. The traffic generator sends dummy packets, often with a unique packet identifier, making it possible to keep track of the packet delivery in the network using a network analyzer. == See also == Network simulator Network traffic simulation Traffic generation model == Further reading == Beuran, Razvan (2012). Introduction to Network Emulation. Pan Stanford. ISBN 978-981-4310-91-8. == External links == EMANE - US Navy
Wikipedia/Network_emulation
A traffic generation model is a stochastic model of the traffic flows or data sources in a communication network, for example a cellular network or a computer network. A packet generation model is a traffic generation model of the packet flows or data sources in a packet-switched network. For example, a web traffic model is a model of the data that is sent or received by a user's web-browser. These models are useful during the development of telecommunication technologies, in order to analyse the performance and capacity of various protocols, algorithms and network topologies. == Application == The network performance can be analyzed by network traffic measurement in a testbed network, using a network traffic generator such as iperf, bwping and Mausezahn. The traffic generator sends dummy packets, often with a unique packet identifier, making it possible to keep track of the packet delivery in the network. Numerical analysis using network simulation is often a less expensive approach. An analytical approach using queueing theory may be possible for a simplified traffic model, but is often too complicated if a realistic traffic model is used. == The greedy source model == A simplified packet data model is the greedy source model. It may be useful in analyzing the maximum throughput for best-effort traffic (without any quality-of-service guarantees). Many traffic generators are greedy sources. == Poisson traffic model == Another simplified traditional traffic generation model for packet data is the Poisson process, where the number of incoming packets and/or the packet lengths are modeled as an exponential distribution. When the packet inter-arrival times are exponential and the packet size is constant, the system resembles an M/D/1 queue. When both the packet inter-arrival times and packet sizes are exponential, it is an M/M/1 queue.
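A Poisson packet source as described above can be generated by accumulating exponential inter-arrival times, and for a stable M/M/1 queue the mean number of packets in the system has the closed form ρ/(1−ρ) with load ρ = λ/μ. The rates below are illustrative assumptions.

```python
import random

def poisson_arrivals(rate, horizon, seed=0):
    """Generate the arrival times of a Poisson process with the given
    rate (packets/s) by accumulating exponential inter-arrival times."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

def mm1_mean_in_system(arrival_rate, service_rate):
    """Mean number of packets in a stable M/M/1 system: rho / (1 - rho)."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return rho / (1.0 - rho)

times = poisson_arrivals(rate=100.0, horizon=10.0)
print(len(times))                       # roughly rate * horizon = 1000 packets
print(mm1_mean_in_system(80.0, 100.0))  # rho = 0.8 -> about 4 packets on average
```

Feeding the generated arrival times into a simulated queue and comparing the empirical occupancy against the closed form is a common sanity check for such models.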
== Long-tail traffic models == However, the Poisson traffic model is memoryless, which means that it does not reflect the bursty nature of packet data, also known as long-range dependency. For a more realistic model, a self-similar process such as the Pareto distribution can be used as a long-tail traffic model. == Payload data model == The actual content of the payload data is typically not modeled, but replaced by dummy packets. However, if the payload data is to be analyzed on the receiver side, for example regarding bit-error rate, a Bernoulli process is often assumed, i.e. a random sequence of independent binary numbers. In this case, a channel model reflects channel impairments such as noise, interference and distortion. === 3GPP2 model === One of the 3GPP2 models describes the following types of traffic flows: Downlink: HTTP/TCP FTP/TCP Wireless Application Protocol near real-time Video Voice Uplink: HTTP/TCP FTP/TCP Wireless Application Protocol Voice Mobile Network Gaming The main idea is to partly implement the HTTP, FTP and TCP protocols. For example, an HTTP traffic generator simulates the download of a web-page, consisting of a number of small objects (like images). A TCP stream (which is why a TCP generator is required in this model) is used to download these objects according to the HTTP/1.0 or HTTP/1.1 specifications. These models take into account the details of how these protocols work. The Voice, WAP and Mobile Network Gaming traffic types are modelled in a simpler way. == See also == Channel model Measuring network throughput Mobility model Network emulation Network traffic simulation Network simulation Radio propagation model Queueing theory Packet generator Packet sniffer == References ==
Wikipedia/Traffic_generation_model
A cognitive radio (CR) is a radio that can be programmed and configured dynamically to use the best channels in its vicinity to avoid user interference and congestion. Such a radio automatically detects available channels, then accordingly changes its transmission or reception parameters to allow more concurrent wireless communications in a given band at one location. This process is a form of dynamic spectrum management. == Description == In response to the operator's commands, the cognitive engine is capable of configuring radio-system parameters. These parameters include "waveform, protocol, operating frequency, and networking". This functions as an autonomous unit in the communications environment, exchanging information about the environment with the networks it accesses and other cognitive radios (CRs). A CR "monitors its own performance continuously", in addition to "reading the radio's outputs"; it then uses this information to "determine the RF environment, channel conditions, link performance, etc.", and adjusts the "radio's settings to deliver the required quality of service subject to an appropriate combination of user requirements, operational limitations, and regulatory constraints". Some "smart radio" proposals combine wireless mesh network—dynamically changing the path messages take between two given nodes using cooperative diversity; cognitive radio—dynamically changing the frequency band used by messages between two consecutive nodes on the path; and software-defined radio—dynamically changing the protocol used by message between two consecutive nodes. == History == The concept of cognitive radio was first proposed by Joseph Mitola III in a seminar at KTH Royal Institute of Technology in Stockholm in 1998 and published in an article by Mitola and Gerald Q. Maguire, Jr. in 1999. 
It was a novel approach in wireless communications, which Mitola later described as: The point in which wireless personal digital assistants (PDAs) and the related networks are sufficiently computationally intelligent about radio resources and related computer-to-computer communications to detect user communications needs as a function of use context, and to provide radio resources and wireless services most appropriate to those needs. Cognitive radio is considered a goal towards which a software-defined radio platform should evolve: a fully reconfigurable wireless transceiver which automatically adapts its communication parameters to network and user demands. Traditional regulatory structures have been built for an analog model and are not optimized for cognitive radio. Regulatory bodies in the world (including the Federal Communications Commission in the United States and Ofcom in the United Kingdom) as well as different independent measurement campaigns found that most radio frequency spectrum was inefficiently utilized. Cellular network bands are overloaded in most parts of the world, but other frequency bands (such as military, amateur radio and paging frequencies) are insufficiently utilized. Independent studies performed in some countries confirmed that observation, and concluded that spectrum utilization depends on time and place. Moreover, fixed spectrum allocation prevents rarely used frequencies (those assigned to specific services) from being used, even when any unlicensed users would not cause noticeable interference to the assigned service. Regulatory bodies in the world have been considering whether to allow unlicensed users in licensed bands if they would not cause any interference to licensed users. These initiatives have focused cognitive-radio research on dynamic spectrum access. The first cognitive radio wireless regional area network standard, IEEE 802.22, was developed by the IEEE 802 LAN/MAN Standard Committee (LMSC) and published in 2011.
This standard uses geolocation and spectrum sensing for spectral awareness. Geolocation combines with a database of licensed transmitters in the area to identify available channels for use by the cognitive radio network. Spectrum sensing observes the spectrum and identifies occupied channels. IEEE 802.22 was designed to utilize the unused frequencies or fragments of time in a location. This white space is unused television channels in the geolocated areas. However, cognitive radio cannot occupy the same unused space all the time. As spectrum availability changes, the network adapts to prevent interference with licensed transmissions. == Terminology == Depending on transmission and reception parameters, there are two main types of cognitive radio: Full Cognitive Radio (Mitola radio), in which every possible parameter observable by a wireless node (or network) is considered. Spectrum-Sensing Cognitive Radio, in which only the radio-frequency spectrum is considered. Other types are dependent on parts of the spectrum available for cognitive radio: Licensed-Band Cognitive Radio, capable of using bands assigned to licensed users (except for unlicensed bands, such as the U-NII band or the ISM band). The IEEE 802.22 working group is developing a standard for wireless regional area network (WRAN), which will operate on unused television channels, also known as TV white spaces. Unlicensed-Band Cognitive Radio, which can only utilize unlicensed parts of the radio frequency (RF) spectrum. One such system is described in the IEEE 802.15 Task Group 2 specifications, which focus on the coexistence of IEEE 802.11 and Bluetooth. Spectrum mobility: Process by which a cognitive-radio user changes its frequency of operation. Cognitive-radio networks aim to use the spectrum in a dynamic manner by allowing radio terminals to operate in the best available frequency band, maintaining seamless communication requirements during transitions to better spectrum. 
Spectrum sharing: Spectrum-sharing cognitive radio networks allow cognitive radio users to share the spectrum bands of the licensed-band users. However, the cognitive radio users have to restrict their transmit power so that the interference caused to the licensed-band users is kept below a certain threshold. Sensing-based spectrum sharing: In sensing-based spectrum sharing cognitive radio networks, cognitive radio users first listen to the spectrum allocated to the licensed users to detect the state of the licensed users. Based on the detection results, cognitive radio users decide their transmission strategies. If the licensed users are not using the bands, cognitive radio users will transmit over those bands. If the licensed users are using the bands, cognitive radio users share the spectrum bands with the licensed users by restricting their transmit power. Database-enabled spectrum sharing: In this modality of spectrum sharing, cognitive radio users are required to access a white space database prior to being allowed, or denied, access to the shared spectrum. The white space database contains algorithms, mathematical models and local regulations to predict the spectrum utilization in a geographical area and to infer the risk of interference posed to incumbent services by a cognitive radio user accessing the shared spectrum. If the white space database judges that destructive interference to incumbents would occur, the cognitive radio user is denied access to the shared spectrum. == Technology == Although cognitive radio was initially thought of as a software-defined radio extension (full cognitive radio), most research work focuses on spectrum-sensing cognitive radio (particularly in the TV bands). The chief problem in spectrum-sensing cognitive radio is designing high-quality spectrum-sensing devices and algorithms for exchanging spectrum-sensing data between nodes.
It has been shown that a simple energy detector cannot guarantee the accurate detection of signal presence, calling for more sophisticated spectrum sensing techniques and requiring information about spectrum sensing to be regularly exchanged between nodes. Increasing the number of cooperating sensing nodes decreases the probability of false detection. Filling free RF bands adaptively, using OFDMA, is a possible approach. Timo A. Weiss and Friedrich K. Jondral of the University of Karlsruhe proposed a spectrum pooling system, in which free bands (sensed by nodes) were immediately filled by OFDMA subbands. Applications of spectrum-sensing cognitive radio include emergency-network and WLAN higher throughput and transmission-distance extensions. The evolution of cognitive radio toward cognitive networks is underway; the concept of cognitive networks is to intelligently organize a network of cognitive radios. === Functions === The main functions of cognitive radios are: Power Control: Power control is usually used for spectrum sharing CR systems to maximize the capacity of secondary users with interference power constraints to protect the primary users. Spectrum sensing: Detecting unused spectrum and sharing it, without harmful interference to other users; an important requirement of the cognitive-radio network is to sense empty spectrum. Detecting primary users is the most efficient way to detect empty spectrum. Spectrum-sensing techniques may be grouped into three categories: Transmitter detection: Cognitive radios must have the capability to determine if a signal from a primary transmitter is locally present in a certain spectrum. There are several proposed approaches to transmitter detection: Matched filter detection Energy detection: Energy detection is a spectrum sensing method that detects the presence/absence of a signal just by measuring the received signal power. This signal detection approach is quite easy and convenient for practical implementation. 
To implement an energy detector, however, noise variance information is required. It has been shown that an imperfect knowledge of the noise power (noise uncertainty) may lead to the phenomenon of the SNR wall, an SNR level below which the energy detector cannot reliably detect any transmitted signal, even when increasing the observation time. It has also been shown that the SNR wall is not caused by the presence of a noise uncertainty itself, but by an insufficient refinement of the noise power estimation while the observation time increases. Cyclostationary-feature detection: These types of spectrum sensing algorithms are motivated by the fact that most man-made communication signals, such as BPSK, QPSK, AM and OFDM, exhibit cyclostationary behavior, whereas noise signals (typically white noise) do not. These detectors are therefore robust against noise variance uncertainty. The aim of such detectors is to exploit the cyclostationary nature of man-made communication signals buried in noise; their main decision statistic is based on the non-zero values of the cyclic spectral density (CSD) of the primary signal. Cyclostationary detectors can be either single-cycle or multi-cycle. Wideband spectrum sensing: refers to spectrum sensing over a large spectral bandwidth, typically hundreds of MHz or even several GHz. Since current ADC technology cannot afford the high sampling rate with high resolution, wideband sensing requires alternative techniques, e.g., compressive sensing and sub-Nyquist sampling.
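A toy version of the energy detector discussed above: estimate the average received power over a block of samples and compare it with a threshold derived from the (assumed known) noise variance. The signal amplitude, noise power and threshold margin are illustrative assumptions, and this sketch ignores exactly the noise-uncertainty issue the text describes.

```python
import math
import random

def energy_detect(samples, noise_var, margin=2.0):
    """Declare a signal present if the average sample power exceeds
    margin * noise_var (the noise variance is assumed known)."""
    avg_power = sum(x * x for x in samples) / len(samples)
    return avg_power > margin * noise_var

rng = random.Random(42)
noise_var = 1.0
n = 2000
# Noise-only observation: the average power is close to the noise variance.
noise_only = [rng.gauss(0.0, math.sqrt(noise_var)) for _ in range(n)]
# A strong sinusoidal 'primary user' buried in the same kind of noise.
signal = [3.0 * math.sin(0.1 * k) + rng.gauss(0.0, 1.0) for k in range(n)]

print(energy_detect(noise_only, noise_var))  # False: noise alone stays below the threshold
print(energy_detect(signal, noise_var))      # True: the signal raises the received power
```

With a weak signal, or an underestimated `noise_var`, the same detector fails, which is the intuition behind the SNR wall.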
Cooperative detection: Refers to spectrum-sensing methods where information from multiple cognitive-radio users is incorporated for primary-user detection. Interference-based detection. Null-space based CR: With the aid of multiple antennas, the CR detects the null-space of the primary user and then transmits within the null-space, such that its subsequent transmission causes less interference to the primary user. Spectrum management: Capturing the best available spectrum to meet user communication requirements, while not creating undue interference to other (primary) users. Cognitive radios should decide on the best spectrum band (of all bands available) to meet quality of service requirements; therefore, spectrum-management functions are required for cognitive radios. Spectrum-management functions are classified as: Spectrum analysis Spectrum decision The practical implementation of spectrum-management functions is a complex and multifaceted issue, since it must address a variety of technical and legal requirements. An example of the former is choosing an appropriate sensing threshold to detect other users, while the latter is exemplified by the need to meet the rules and regulations set out for radio spectrum access in international (ITU radio regulations) and national (telecommunications law) legislation. Artificial-intelligence-based algorithms for dynamic spectrum allocation and interference management, aimed at reducing harmful interference to other services and networks, will be a key technology enabler towards 6G. This will pave the way for more flexibility in the management and regulation of the radioelectric spectrum. === Intelligent antenna (IA) === An intelligent antenna (or smart antenna) is an antenna technology that uses spatial beam-formation and spatial coding to cancel interference; however, applications are emerging for extension to intelligent multiple or cooperative-antenna arrays for application to complex communication environments.
Cognitive radio, by comparison, allows user terminals to sense whether a portion of the spectrum is being used in order to share spectrum with neighbor users. The following table compares the two: Note that both techniques can be combined as illustrated in many contemporary transmission scenarios. Cooperative MIMO (CO-MIMO) combines both techniques. == Applications == Cognitive Radio (CR) can sense its environment and, without the intervention of the user, can adapt to the user's communications needs while conforming to FCC rules in the United States. In theory, the amount of spectrum is infinite; practically, for propagation and other reasons it is finite because of the desirability of certain spectrum portions. Assigned spectrum is far from being fully utilized, and efficient spectrum use is a growing concern; CR offers a solution to this problem. A CR can intelligently detect whether any portion of the spectrum is in use, and can temporarily use it without interfering with the transmissions of other users. According to Bruce Fette, "Some of the radio's other cognitive abilities include determining its location, sensing spectrum use by neighboring devices, changing frequency, adjusting output power or even altering transmission parameters and characteristics. All of these capabilities, and others yet to be realized, will provide wireless spectrum users with the ability to adapt to real-time spectrum conditions, offering regulators, licenses and the general public flexible, efficient and comprehensive use of the spectrum". 
Examples of applications include: the application of CR networks to emergency and public safety communications by utilizing white space; the potential of CR networks for executing dynamic spectrum access (DSA); and the application of CR networks to military action such as chemical, biological, radiological and nuclear attack detection and investigation, command and control, obtaining information of battle damage evaluations, battlefield surveillance, intelligence assistance, and targeting. They are also proven to be helpful in establishing Medical Body Area Networks, which can be utilized in omnipresent patient monitoring that aids in immediately notifying the doctors regarding vital information of patients such as sugar level, blood pressure, blood oxygen and electrocardiogram (ECG), etc. This gives the additional advantage of reducing the risk of infections and also increases the patient's mobility. Cognitive radio is also applicable to wireless sensor networks, where packet relaying can take place using primary and secondary queues to forward packets without delays and with minimum power consumption. == Simulation of CR networks == At present, modeling & simulation is the only paradigm which allows the simulation of the complex behavior of cognitive radio networks in a given environment. Network simulators like OPNET, NetSim, MATLAB and ns2 can be used to simulate a cognitive radio network. CogNS is an open-source NS2-based simulation framework for cognitive radio networks. Areas of research using network simulators include: Spectrum sensing & incumbent detection Spectrum allocation Measurement and/or modeling of spectrum usage Efficiency of spectrum utilization Network Simulator 3 (ns-3) is also a viable option for simulating CR. ns-3 can also be used to emulate and experiment with CR networks with the aid of commodity hardware such as Atheros WiFi devices.
== Future plans == The success of the unlicensed band in accommodating a range of wireless devices and services has led the FCC to consider opening further bands for unlicensed use. In contrast, the licensed bands are underutilized due to static frequency allocation. Realizing that CR technology has the potential to exploit the inefficiently utilized licensed bands without causing interference to incumbent users, the FCC released a Notice of Proposed Rule Making which would allow unlicensed radios to operate in the TV-broadcast bands. The IEEE 802.22 working group, formed in November 2004, is tasked with defining the air-interface standard for wireless regional area networks (based on CR sensing) for the operation of unlicensed devices in the spectrum allocated to TV service. To comply with later FCC regulations on unlicensed utilization of TV spectrum, IEEE 802.22 has defined interfaces to the mandatory TV White Space Database in order to avoid interference to incumbent services. Although spectrum geolocation databases reduce receiver complexity and the probability of interference (for instance from sensing errors or hidden nodes), this comes at the cost of lower spectrum-utilization efficiency, as the databases cannot capture a fine-grained quantification of spectrum utilization and are not updated in real time. Collaborative sensing and distributed spectrum management based on artificial intelligence could contribute in the future towards a better balance between spectrum-utilization efficiency and interference mitigation.
== See also == Channel allocation schemes Channel-dependent scheduling Cognitive network LTE Advanced Network Simulator OFDMA Radio resource management (RRM) White spaces (radio) White spaces (database) Software-defined radio == References == == External links == Berkeley Wireless Research Center Cognitive Radio Workshop – first workshop on cognitive radio; its focus was mainly on research issues within the topic Center for Wireless Telecommunications (CWT), Virginia Tech Cognitive Radio Technologies Proceeding of Federal Communications Commission – Federal Communications Commission rules on cognitive radio IEEE DySPAN Conference
Wikipedia/Cognitive_Radio_Networks
The noisy channel model is a framework used in spell checkers, question answering, speech recognition, and machine translation. In this model, the goal is to find the intended word given a word where the letters have been scrambled in some manner. == In spell-checking == See Chapter B of. Given an alphabet Σ {\displaystyle \Sigma } , let Σ ∗ {\displaystyle \Sigma ^{*}} be the set of all finite strings over Σ {\displaystyle \Sigma } . Let the dictionary D {\displaystyle D} of valid words be some subset of Σ ∗ {\displaystyle \Sigma ^{*}} , i.e., D ⊆ Σ ∗ {\displaystyle D\subseteq \Sigma ^{*}} . The noisy channel is the matrix Γ w s = Pr ( s | w ) {\displaystyle \Gamma _{ws}=\Pr(s|w)} , where w ∈ D {\displaystyle w\in D} is the intended word and s ∈ Σ ∗ {\displaystyle s\in \Sigma ^{*}} is the scrambled word that was actually received. The goal of the noisy channel model is to find the intended word given the scrambled word that was received. The decision function σ : Σ ∗ → D {\displaystyle \sigma :\Sigma ^{*}\to D} is a function that, given a scrambled word, returns the intended word. Methods of constructing a decision function include the maximum likelihood rule, the maximum a posteriori rule, and the minimum distance rule. In some cases, it may be better to accept the scrambled word as the intended word rather than attempt to find an intended word in the dictionary. For example, the word schönfinkeling may not be in the dictionary, but might in fact be the intended word. === Example === Consider the English alphabet Σ = { a , b , c , . . . , y , z , A , B , . . . , Z , . . . } {\displaystyle \Sigma =\{a,b,c,...,y,z,A,B,...,Z,...\}} . Some subset D ⊆ Σ ∗ {\displaystyle D\subseteq \Sigma ^{*}} makes up the dictionary of valid English words. 
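The maximum a posteriori decision rule described above can be sketched concretely. This is a minimal toy example: the dictionary, the priors Pr(w), and the channel probabilities Pr(s|w) below are made-up illustrative numbers, not estimates from any corpus.

```python
# Toy noisy-channel spell checker using the maximum a posteriori rule:
# sigma(s) = argmax over dictionary words w of Pr(s|w) * Pr(w).
# All probabilities here are invented for illustration.

# Prior Pr(w): how common each dictionary word is.
prior = {"letter": 0.6, "later": 0.3, "litter": 0.1}

# Channel Gamma[w][s] = Pr(s|w): probability that the intended word w
# is received as the scrambled string s.
channel = {
    "letter": {"leter": 0.05, "letter": 0.90},
    "later":  {"leter": 0.02, "later": 0.95},
    "litter": {"leter": 0.01, "litter": 0.95},
}

def decide(s):
    """Return the dictionary word maximizing Pr(s|w) * Pr(w)."""
    return max(prior, key=lambda w: channel[w].get(s, 0.0) * prior[w])

print(decide("leter"))  # "letter": 0.05 * 0.6 beats 0.02 * 0.3 and 0.01 * 0.1
```

Here the common word "letter" wins even though "later" has a similar edit distance, because the decision rule weighs the channel probability by the prior.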
There are several mistakes that may occur while typing, including: missing letters, e.g., leter instead of letter; accidental letter additions, e.g., misstake instead of mistake; swapped letters, e.g., recieved instead of received; and replaced letters, e.g., fimite instead of finite. To construct the noisy channel matrix Γ {\displaystyle \Gamma } , we must consider the probability of each mistake, given the intended word ( Pr ( s | w ) {\displaystyle \Pr(s|w)} for all w ∈ D {\displaystyle w\in D} and s ∈ Σ ∗ {\displaystyle s\in \Sigma ^{*}} ). These probabilities may be gathered, for example, by considering the Damerau–Levenshtein distance between s {\displaystyle s} and w {\displaystyle w} or by comparing the draft of an essay with one that has been manually edited for spelling. == In machine translation == One naturally wonders if the problem of translation could conceivably be treated as a problem in cryptography. When I look at an article in Russian, I say: 'This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.' See chapter 1, and chapter 25 of. Suppose we want to translate a foreign language to English. We could model P ( E | F ) {\displaystyle P(E|F)} directly, i.e., the probability that we have English sentence E given foreign sentence F, and then pick the most likely one E ^ = arg ⁡ max E P ( E | F ) {\displaystyle {\hat {E}}=\arg \max _{E}P(E|F)} . 
However, by Bayes' law, we have the equivalent equation: E ^ = argmax E ∈ English P ( F ∣ E ) ⏞ translation model P ( E ) ⏞ language model {\displaystyle {\hat {E}}={\underset {E\in {\text{ English }}}{\operatorname {argmax} }}\overbrace {P(F\mid E)} ^{\text{translation model }}\overbrace {P(E)} ^{\text{language model}}} The benefit of the noisy-channel model is in terms of data: if collecting a parallel corpus is costly, then we would have only a small parallel corpus, so we could train only a moderately good English-to-foreign translation model and a moderately good foreign-to-English translation model. However, we can collect a large corpus in the foreign language only, and a large corpus in the English language only, to train two good language models. Combining these four models, we immediately get a good English-to-foreign translator and a good foreign-to-English translator. The cost of the noisy-channel model is that using Bayesian inference is more costly than using a translation model directly. Instead of reading out the most likely translation by arg ⁡ max E P ( E | F ) {\displaystyle \arg \max _{E}P(E|F)} , the system has to read out predictions from both the translation model and the language model, multiply them, and search for the highest product. == In speech recognition == Speech recognition can be thought of as translating from a sound-language to a text-language. Consequently, we have T ^ = argmax T ∈ Text P ( S ∣ T ) ⏞ speech model P ( T ) ⏞ language model {\displaystyle {\hat {T}}={\underset {T\in {\text{ Text }}}{\operatorname {argmax} }}\overbrace {P(S\mid T)} ^{\text{speech model }}\overbrace {P(T)} ^{\text{language model}}} where P ( S | T ) {\displaystyle P(S|T)} is the probability that a speech sound S is produced if the speaker intends to say text T. Intuitively, this equation states that the most likely text is one that is both likely in the language and produces the speech sound with high probability. 
The utility of the noisy-channel model is not in modeling capacity: theoretically, any noisy-channel model can be replicated by a direct P ( T | S ) {\displaystyle P(T|S)} model. However, the noisy-channel model factors the problem into two parts, each suited to its task, and is consequently generally better behaved. When humans speak, they do not produce the sound directly: the text to be spoken is first produced in the language centers of the brain, and then translated into sound by the motor cortex, vocal cords, and other parts of the body. The noisy-channel model mirrors this generative process, and so it is a natural fit; this is borne out by the practical success of the noisy-channel model in speech recognition. === Example === Consider the sound-language sentence (written in IPA for English) S = aɪ wʊd laɪk wʌn tuː. There are three possible texts T 1 , T 2 , T 3 {\displaystyle T_{1},T_{2},T_{3}} : T 1 = {\displaystyle T_{1}=} I would like one to. T 2 = {\displaystyle T_{2}=} I would like one too. T 3 = {\displaystyle T_{3}=} I would like one two. that are equally likely, in the sense that P ( S | T 1 ) = P ( S | T 2 ) = P ( S | T 3 ) {\displaystyle P(S|T_{1})=P(S|T_{2})=P(S|T_{3})} . With a good English language model, we would have P ( T 2 ) > P ( T 1 ) > P ( T 3 ) {\displaystyle P(T_{2})>P(T_{1})>P(T_{3})} , since the second sentence is grammatical, the first is not quite grammatical but close to a grammatical sentence (such as "I would like one to [go]."), while the third is far from grammatical. Consequently, the noisy-channel model would output T 2 {\displaystyle T_{2}} as the best transcription. == See also == Coding theory == References ==
Wikipedia/Noisy_channel_model
Variational Bayesian methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning. They are typically used in complex statistical models consisting of observed variables (usually termed "data") as well as unknown parameters and latent variables, with various sorts of relationships among the three types of random variables, as might be described by a graphical model. As is typical in Bayesian inference, the parameters and latent variables are grouped together as "unobserved variables". Variational Bayesian methods are primarily used for two purposes: To provide an analytical approximation to the posterior probability of the unobserved variables, in order to do statistical inference over these variables. To derive a lower bound for the marginal likelihood (sometimes called the evidence) of the observed data (i.e. the marginal probability of the data given the model, with marginalization performed over unobserved variables). This is typically used for performing model selection, the general idea being that a higher marginal likelihood for a given model indicates a better fit of the data by that model and hence a greater probability that the model in question was the one that generated the data. (See also the Bayes factor article.) In the former purpose (that of approximating a posterior probability), variational Bayes is an alternative to Monte Carlo sampling methods (particularly Markov chain Monte Carlo methods such as Gibbs sampling) for taking a fully Bayesian approach to statistical inference over complex distributions that are difficult to evaluate directly or sample from. In particular, whereas Monte Carlo techniques provide a numerical approximation to the exact posterior using a set of samples, variational Bayes provides a locally optimal, exact analytical solution to an approximation of the posterior. 
Variational Bayes can be seen as an extension of the expectation–maximization (EM) algorithm from maximum likelihood (ML) or maximum a posteriori (MAP) estimation of the single most probable value of each parameter to fully Bayesian estimation which computes (an approximation to) the entire posterior distribution of the parameters and latent variables. As in EM, it finds a set of optimal parameter values, and it has the same alternating structure as does EM, based on a set of interlocked (mutually dependent) equations that cannot be solved analytically. For many applications, variational Bayes produces solutions of comparable accuracy to Gibbs sampling at greater speed. However, deriving the set of equations used to update the parameters iteratively often requires a large amount of work compared with deriving the comparable Gibbs sampling equations. This is the case even for many models that are conceptually quite simple, as is demonstrated below in the case of a basic non-hierarchical model with only two parameters and no latent variables. == Mathematical derivation == === Problem === In variational inference, the posterior distribution over a set of unobserved variables Z = { Z 1 … Z n } {\displaystyle \mathbf {Z} =\{Z_{1}\dots Z_{n}\}} given some data X {\displaystyle \mathbf {X} } is approximated by a so-called variational distribution, Q ( Z ) : {\displaystyle Q(\mathbf {Z} ):} P ( Z ∣ X ) ≈ Q ( Z ) . {\displaystyle P(\mathbf {Z} \mid \mathbf {X} )\approx Q(\mathbf {Z} ).} The distribution Q ( Z ) {\displaystyle Q(\mathbf {Z} )} is restricted to belong to a family of distributions of simpler form than P ( Z ∣ X ) {\displaystyle P(\mathbf {Z} \mid \mathbf {X} )} (e.g. a family of Gaussian distributions), selected with the intention of making Q ( Z ) {\displaystyle Q(\mathbf {Z} )} similar to the true posterior, P ( Z ∣ X ) {\displaystyle P(\mathbf {Z} \mid \mathbf {X} )} . 
The similarity (or dissimilarity) is measured in terms of a dissimilarity function d ( Q ; P ) {\displaystyle d(Q;P)} and hence inference is performed by selecting the distribution Q ( Z ) {\displaystyle Q(\mathbf {Z} )} that minimizes d ( Q ; P ) {\displaystyle d(Q;P)} . === KL divergence === The most common type of variational Bayes uses the Kullback–Leibler divergence (KL-divergence) of Q from P as the choice of dissimilarity function. This choice makes this minimization tractable. The KL-divergence is defined as D K L ( Q ∥ P ) ≜ ∑ Z Q ( Z ) log ⁡ Q ( Z ) P ( Z ∣ X ) . {\displaystyle D_{\mathrm {KL} }(Q\parallel P)\triangleq \sum _{\mathbf {Z} }Q(\mathbf {Z} )\log {\frac {Q(\mathbf {Z} )}{P(\mathbf {Z} \mid \mathbf {X} )}}.} Note that Q and P are reversed from what one might expect. This use of reversed KL-divergence is conceptually similar to the expectation–maximization algorithm. (Using the KL-divergence in the other way produces the expectation propagation algorithm.) === Intractability === Variational techniques are typically used to form an approximation for: P ( Z ∣ X ) = P ( X ∣ Z ) P ( Z ) P ( X ) = P ( X ∣ Z ) P ( Z ) ∫ Z P ( X , Z ′ ) d Z ′ {\displaystyle P(\mathbf {Z} \mid \mathbf {X} )={\frac {P(\mathbf {X} \mid \mathbf {Z} )P(\mathbf {Z} )}{P(\mathbf {X} )}}={\frac {P(\mathbf {X} \mid \mathbf {Z} )P(\mathbf {Z} )}{\int _{\mathbf {Z} }P(\mathbf {X} ,\mathbf {Z} ')\,d\mathbf {Z} '}}} The marginalization over Z {\displaystyle \mathbf {Z} } to calculate P ( X ) {\displaystyle P(\mathbf {X} )} in the denominator is typically intractable, because, for example, the search space of Z {\displaystyle \mathbf {Z} } is combinatorially large. Therefore, we seek an approximation, using Q ( Z ) ≈ P ( Z ∣ X ) {\displaystyle Q(\mathbf {Z} )\approx P(\mathbf {Z} \mid \mathbf {X} )} . 
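The asymmetry of the KL-divergence noted above (reversed KL versus forward KL) can be checked numerically. A minimal sketch for two small discrete distributions, with values chosen arbitrarily:

```python
import math

# Forward vs. reverse KL-divergence for two discrete distributions,
# illustrating that D_KL(Q || P) != D_KL(P || Q) in general.

def kl(q, p):
    """D_KL(q || p) = sum_z q(z) * log(q(z) / p(z)), for strictly positive dists."""
    return sum(qz * math.log(qz / pz) for qz, pz in zip(q, p))

P = [0.7, 0.2, 0.1]   # stand-in for the true posterior over three states of Z
Q = [0.5, 0.3, 0.2]   # a variational approximation

print(kl(Q, P), kl(P, Q))  # both non-negative, but the two numbers differ
```

Variational Bayes minimizes the first quantity, kl(Q, P); expectation propagation minimizes the second.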
=== Evidence lower bound === Given that P ( Z ∣ X ) = P ( X , Z ) P ( X ) {\displaystyle P(\mathbf {Z} \mid \mathbf {X} )={\frac {P(\mathbf {X} ,\mathbf {Z} )}{P(\mathbf {X} )}}} , the KL-divergence above can also be written as D K L ( Q ∥ P ) = ∑ Z Q ( Z ) [ log ⁡ Q ( Z ) P ( Z , X ) + log ⁡ P ( X ) ] = ∑ Z Q ( Z ) [ log ⁡ Q ( Z ) − log ⁡ P ( Z , X ) ] + ∑ Z Q ( Z ) [ log ⁡ P ( X ) ] {\displaystyle {\begin{array}{rl}D_{\mathrm {KL} }(Q\parallel P)&=\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log {\frac {Q(\mathbf {Z} )}{P(\mathbf {Z} ,\mathbf {X} )}}+\log P(\mathbf {X} )\right]\\&=\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]+\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log P(\mathbf {X} )\right]\end{array}}} Because P ( X ) {\displaystyle P(\mathbf {X} )} is a constant with respect to Z {\displaystyle \mathbf {Z} } and ∑ Z Q ( Z ) = 1 {\displaystyle \sum _{\mathbf {Z} }Q(\mathbf {Z} )=1} because Q ( Z ) {\displaystyle Q(\mathbf {Z} )} is a distribution, we have D K L ( Q ∥ P ) = ∑ Z Q ( Z ) [ log ⁡ Q ( Z ) − log ⁡ P ( Z , X ) ] + log ⁡ P ( X ) {\displaystyle D_{\mathrm {KL} }(Q\parallel P)=\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]+\log P(\mathbf {X} )} which, according to the definition of expected value (for a discrete random variable), can be written as follows D K L ( Q ∥ P ) = E Q [ log ⁡ Q ( Z ) − log ⁡ P ( Z , X ) ] + log ⁡ P ( X ) {\displaystyle D_{\mathrm {KL} }(Q\parallel P)=\mathbb {E} _{\mathbf {Q} }\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]+\log P(\mathbf {X} )} which can be rearranged to become log ⁡ P ( X ) = D K L ( Q ∥ P ) − E Q [ log ⁡ Q ( Z ) − log ⁡ P ( Z , X ) ] = D K L ( Q ∥ P ) + L ( Q ) {\displaystyle {\begin{array}{rl}\log P(\mathbf {X} )&=D_{\mathrm {KL} }(Q\parallel P)-\mathbb {E} _{\mathbf {Q} }\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]\\&=D_{\mathrm {KL} }(Q\parallel 
P)+{\mathcal {L}}(Q)\end{array}}} As the log-evidence log ⁡ P ( X ) {\displaystyle \log P(\mathbf {X} )} is fixed with respect to Q {\displaystyle Q} , maximizing the final term L ( Q ) {\displaystyle {\mathcal {L}}(Q)} minimizes the KL divergence of Q {\displaystyle Q} from P {\displaystyle P} . By appropriate choice of Q {\displaystyle Q} , L ( Q ) {\displaystyle {\mathcal {L}}(Q)} becomes tractable to compute and to maximize. Hence we have both an analytical approximation Q {\displaystyle Q} for the posterior P ( Z ∣ X ) {\displaystyle P(\mathbf {Z} \mid \mathbf {X} )} , and a lower bound L ( Q ) {\displaystyle {\mathcal {L}}(Q)} for the log-evidence log ⁡ P ( X ) {\displaystyle \log P(\mathbf {X} )} (since the KL-divergence is non-negative). The lower bound L ( Q ) {\displaystyle {\mathcal {L}}(Q)} is known as the (negative) variational free energy in analogy with thermodynamic free energy because it can also be expressed as a negative energy E Q ⁡ [ log ⁡ P ( Z , X ) ] {\displaystyle \operatorname {E} _{Q}[\log P(\mathbf {Z} ,\mathbf {X} )]} plus the entropy of Q {\displaystyle Q} . The term L ( Q ) {\displaystyle {\mathcal {L}}(Q)} is also known as Evidence Lower Bound, abbreviated as ELBO, to emphasize that it is a lower (worst-case) bound on the log-evidence of the data. === Proofs === By the generalized Pythagorean theorem of Bregman divergence, of which KL-divergence is a special case, it can be shown that: D K L ( Q ∥ P ) ≥ D K L ( Q ∥ Q ∗ ) + D K L ( Q ∗ ∥ P ) , ∀ Q ∗ ∈ C {\displaystyle D_{\mathrm {KL} }(Q\parallel P)\geq D_{\mathrm {KL} }(Q\parallel Q^{*})+D_{\mathrm {KL} }(Q^{*}\parallel P),\forall Q^{*}\in {\mathcal {C}}} where C {\displaystyle {\mathcal {C}}} is a convex set and the equality holds if: Q = Q ∗ ≜ arg ⁡ min Q ∈ C D K L ( Q ∥ P ) . 
{\displaystyle Q=Q^{*}\triangleq \arg \min _{Q\in {\mathcal {C}}}D_{\mathrm {KL} }(Q\parallel P).} In this case, the global minimizer Q ∗ ( Z ) = q ∗ ( Z 1 ∣ Z 2 ) q ∗ ( Z 2 ) = q ∗ ( Z 2 ∣ Z 1 ) q ∗ ( Z 1 ) , {\displaystyle Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})q^{*}(\mathbf {Z} _{2})=q^{*}(\mathbf {Z} _{2}\mid \mathbf {Z} _{1})q^{*}(\mathbf {Z} _{1}),} with Z = { Z 1 , Z 2 } , {\displaystyle \mathbf {Z} =\{\mathbf {Z_{1}} ,\mathbf {Z_{2}} \},} can be found as follows: q ∗ ( Z 2 ) = P ( X ) ζ ( X ) P ( Z 2 ∣ X ) exp ⁡ ( D K L ( q ∗ ( Z 1 ∣ Z 2 ) ∥ P ( Z 1 ∣ Z 2 , X ) ) ) = 1 ζ ( X ) exp ⁡ E q ∗ ( Z 1 ∣ Z 2 ) ( log ⁡ P ( Z , X ) q ∗ ( Z 1 ∣ Z 2 ) ) , {\displaystyle {\begin{array}{rl}q^{*}(\mathbf {Z} _{2})&={\frac {P(\mathbf {X} )}{\zeta (\mathbf {X} )}}{\frac {P(\mathbf {Z} _{2}\mid \mathbf {X} )}{\exp(D_{\mathrm {KL} }(q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})\parallel P(\mathbf {Z} _{1}\mid \mathbf {Z} _{2},\mathbf {X} )))}}\\&={\frac {1}{\zeta (\mathbf {X} )}}\exp \mathbb {E} _{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}\left(\log {\frac {P(\mathbf {Z} ,\mathbf {X} )}{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}}\right),\end{array}}} in which the normalizing constant is: ζ ( X ) = P ( X ) ∫ Z 2 P ( Z 2 ∣ X ) exp ⁡ ( D K L ( q ∗ ( Z 1 ∣ Z 2 ) ∥ P ( Z 1 ∣ Z 2 , X ) ) ) = ∫ Z 2 exp ⁡ E q ∗ ( Z 1 ∣ Z 2 ) ( log ⁡ P ( Z , X ) q ∗ ( Z 1 ∣ Z 2 ) ) . 
{\displaystyle {\begin{array}{rl}\zeta (\mathbf {X} )&=P(\mathbf {X} )\int _{\mathbf {Z} _{2}}{\frac {P(\mathbf {Z} _{2}\mid \mathbf {X} )}{\exp(D_{\mathrm {KL} }(q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})\parallel P(\mathbf {Z} _{1}\mid \mathbf {Z} _{2},\mathbf {X} )))}}\\&=\int _{\mathbf {Z} _{2}}\exp \mathbb {E} _{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}\left(\log {\frac {P(\mathbf {Z} ,\mathbf {X} )}{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}}\right).\end{array}}} The term ζ ( X ) {\displaystyle \zeta (\mathbf {X} )} is often called the evidence lower bound (ELBO) in practice, since P ( X ) ≥ ζ ( X ) = exp ⁡ ( L ( Q ∗ ) ) {\displaystyle P(\mathbf {X} )\geq \zeta (\mathbf {X} )=\exp({\mathcal {L}}(Q^{*}))} , as shown above. By interchanging the roles of Z 1 {\displaystyle \mathbf {Z} _{1}} and Z 2 , {\displaystyle \mathbf {Z} _{2},} we can iteratively compute the approximated q ∗ ( Z 1 ) {\displaystyle q^{*}(\mathbf {Z} _{1})} and q ∗ ( Z 2 ) {\displaystyle q^{*}(\mathbf {Z} _{2})} of the true model's marginals P ( Z 1 ∣ X ) {\displaystyle P(\mathbf {Z} _{1}\mid \mathbf {X} )} and P ( Z 2 ∣ X ) , {\displaystyle P(\mathbf {Z} _{2}\mid \mathbf {X} ),} respectively. Although this iterative scheme is guaranteed to converge monotonically, the converged Q ∗ {\displaystyle Q^{*}} is only a local minimizer of D K L ( Q ∥ P ) {\displaystyle D_{\mathrm {KL} }(Q\parallel P)} . If the constrained space C {\displaystyle {\mathcal {C}}} is confined within independent space, i.e. q ∗ ( Z 1 ∣ Z 2 ) = q ∗ ( Z 1 ) , {\displaystyle q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})=q^{*}(\mathbf {Z_{1}} ),} the above iterative scheme will become the so-called mean field approximation Q ∗ ( Z ) = q ∗ ( Z 1 ) q ∗ ( Z 2 ) , {\displaystyle Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1})q^{*}(\mathbf {Z} _{2}),} as shown below. 
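The decomposition log P(X) = D_KL(Q ∥ P) + L(Q) can be verified numerically for a small discrete latent variable Z. In this sketch, the joint values P(Z, X) for the fixed observed X are made-up numbers:

```python
import math

# Numerical check of log P(X) = D_KL(Q || P(.|X)) + L(Q) for discrete Z.
# The joint values P(Z = z, X) for the fixed observed X are invented.

p_joint = [0.10, 0.25, 0.05]            # P(Z = z, X) for z = 0, 1, 2
p_X = sum(p_joint)                      # evidence P(X) = 0.40
posterior = [p / p_X for p in p_joint]  # P(Z | X)

Q = [0.3, 0.5, 0.2]                     # an arbitrary variational distribution

kl = sum(q * math.log(q / p) for q, p in zip(Q, posterior))
elbo = sum(q * (math.log(pj) - math.log(q)) for q, pj in zip(Q, p_joint))

assert abs(math.log(p_X) - (kl + elbo)) < 1e-12  # the identity holds exactly
assert elbo <= math.log(p_X)                     # ELBO is a lower bound
```

Since the KL term is non-negative, the ELBO never exceeds the log-evidence, and maximizing it over Q tightens the bound exactly as the text describes.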
== Mean field approximation == The variational distribution Q ( Z ) {\displaystyle Q(\mathbf {Z} )} is usually assumed to factorize over some partition of the latent variables, i.e. for some partition of the latent variables Z {\displaystyle \mathbf {Z} } into Z 1 … Z M {\displaystyle \mathbf {Z} _{1}\dots \mathbf {Z} _{M}} , Q ( Z ) = ∏ i = 1 M q i ( Z i ∣ X ) {\displaystyle Q(\mathbf {Z} )=\prod _{i=1}^{M}q_{i}(\mathbf {Z} _{i}\mid \mathbf {X} )} It can be shown using the calculus of variations (hence the name "variational Bayes") that the "best" distribution q j ∗ {\displaystyle q_{j}^{*}} for each of the factors q j {\displaystyle q_{j}} (in terms of the distribution minimizing the KL divergence, as described above) satisfies: q j ∗ ( Z j ∣ X ) = e E q − j ∗ ⁡ [ ln ⁡ p ( Z , X ) ] ∫ e E q − j ∗ ⁡ [ ln ⁡ p ( Z , X ) ] d Z j {\displaystyle q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )={\frac {e^{\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]}}{\int e^{\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]}\,d\mathbf {Z} _{j}}}} where E q − j ∗ ⁡ [ ln ⁡ p ( Z , X ) ] {\displaystyle \operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]} is the expectation of the logarithm of the joint probability of the data and latent variables, taken with respect to q ∗ {\displaystyle q^{*}} over all variables not in the partition: refer to Lemma 4.1 of for a derivation of the distribution q j ∗ ( Z j ∣ X ) {\displaystyle q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )} . 
In practice, we usually work in terms of logarithms, i.e.: ln ⁡ q j ∗ ( Z j ∣ X ) = E q − j ∗ ⁡ [ ln ⁡ p ( Z , X ) ] + constant {\displaystyle \ln q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )=\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]+{\text{constant}}} The constant in the above expression is related to the normalizing constant (the denominator in the expression above for q j ∗ {\displaystyle q_{j}^{*}} ) and is usually reinstated by inspection, as the rest of the expression can usually be recognized as being a known type of distribution (e.g. Gaussian, gamma, etc.). Using the properties of expectations, the expression E q − j ∗ ⁡ [ ln ⁡ p ( Z , X ) ] {\displaystyle \operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]} can usually be simplified into a function of the fixed hyperparameters of the prior distributions over the latent variables and of expectations (and sometimes higher moments such as the variance) of latent variables not in the current partition (i.e. latent variables not included in Z j {\displaystyle \mathbf {Z} _{j}} ). This creates circular dependencies between the parameters of the distributions over variables in one partition and the expectations of variables in the other partitions. This naturally suggests an iterative algorithm, much like EM (the expectation–maximization algorithm), in which the expectations (and possibly higher moments) of the latent variables are initialized in some fashion (perhaps randomly), and then the parameters of each distribution are computed in turn using the current values of the expectations, after which the expectation of the newly computed distribution is set appropriately according to the computed parameters. An algorithm of this sort is guaranteed to converge. 
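The iterative scheme just described can be sketched for a tiny model with two binary latent variables, where each factor update follows ln q_j*(z_j) = E over the other factor of ln p plus a constant. The joint table below is a made-up illustration (the observed data are folded into the joint values):

```python
import math

# Coordinate-ascent mean-field updates for a toy joint p(z1, z2) over two
# binary latent variables. The table values are invented for illustration.

p = {(0, 0): 0.30, (0, 1): 0.20, (1, 0): 0.10, (1, 1): 0.40}

def normalize(w):
    s = sum(w)
    return [v / s for v in w]

def update(q_other, axis):
    """Update one factor given the other: q_j(z) proportional to exp E_{q_-j}[ln p]."""
    weights = []
    for z in (0, 1):
        e = sum(q_other[z2] * math.log(p[(z, z2) if axis == 0 else (z2, z)])
                for z2 in (0, 1))
        weights.append(math.exp(e))
    return normalize(weights)

q1, q2 = [0.5, 0.5], [0.5, 0.5]   # initialize the factors arbitrarily
for _ in range(50):
    q1 = update(q2, 0)
    q2 = update(q1, 1)

print(q1, q2)  # converged factors of the mean-field approximation q1(z1) q2(z2)
```

Each sweep re-estimates one factor from the current expectations under the other, exactly the circular-dependency structure described above; on this tiny problem the iteration reaches a fixed point quickly.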
In other words, for each of the partitions of variables, by simplifying the expression for the distribution over the partition's variables and examining the distribution's functional dependency on the variables in question, the family of the distribution can usually be determined (which in turn determines the value of the constant). The formula for the distribution's parameters will be expressed in terms of the prior distributions' hyperparameters (which are known constants), but also in terms of expectations of functions of variables in other partitions. Usually these expectations can be simplified into functions of expectations of the variables themselves (i.e. the means); sometimes expectations of squared variables (which can be related to the variance of the variables), or expectations of higher powers (i.e. higher moments) also appear. In most cases, the other variables' distributions will be from known families, and the formulas for the relevant expectations can be looked up. However, those formulas depend on those distributions' parameters, which depend in turn on the expectations about other variables. The result is that the formulas for the parameters of each variable's distributions can be expressed as a series of equations with mutual, nonlinear dependencies among the variables. Usually, it is not possible to solve this system of equations directly. However, as described above, the dependencies suggest a simple iterative algorithm, which in most cases is guaranteed to converge. An example will make this process clearer. == A duality formula for variational inference == The following theorem is referred to as a duality formula for variational inference. It explains some important properties of the variational distributions used in variational Bayes methods. Theorem Consider two probability spaces ( Θ , F , P ) {\displaystyle (\Theta ,{\mathcal {F}},P)} and ( Θ , F , Q ) {\displaystyle (\Theta ,{\mathcal {F}},Q)} with Q ≪ P {\displaystyle Q\ll P} . 
Assume that there is a common dominating probability measure λ {\displaystyle \lambda } such that P ≪ λ {\displaystyle P\ll \lambda } and Q ≪ λ {\displaystyle Q\ll \lambda } . Let h {\displaystyle h} denote any real-valued random variable on ( Θ , F , P ) {\displaystyle (\Theta ,{\mathcal {F}},P)} that satisfies h ∈ L 1 ( P ) {\displaystyle h\in L_{1}(P)} . Then the following equality holds log ⁡ E P [ exp ⁡ h ] = sup Q ≪ P { E Q [ h ] − D KL ( Q ∥ P ) } . {\displaystyle \log E_{P}[\exp h]={\text{sup}}_{Q\ll P}\{E_{Q}[h]-D_{\text{KL}}(Q\parallel P)\}.} Further, the supremum on the right-hand side is attained if and only if it holds q ( θ ) p ( θ ) = exp ⁡ h ( θ ) E P [ exp ⁡ h ] , {\displaystyle {\frac {q(\theta )}{p(\theta )}}={\frac {\exp h(\theta )}{E_{P}[\exp h]}},} almost surely with respect to probability measure Q {\displaystyle Q} , where p ( θ ) = d P / d λ {\displaystyle p(\theta )=dP/d\lambda } and q ( θ ) = d Q / d λ {\displaystyle q(\theta )=dQ/d\lambda } denote the Radon–Nikodym derivatives of the probability measures P {\displaystyle P} and Q {\displaystyle Q} with respect to λ {\displaystyle \lambda } , respectively. == A basic example == Consider a simple non-hierarchical Bayesian model consisting of a set of i.i.d. observations from a Gaussian distribution, with unknown mean and variance. In the following, we work through this model in great detail to illustrate the workings of the variational Bayes method. For mathematical convenience, in the following example we work in terms of the precision — i.e. the reciprocal of the variance (or in a multivariate Gaussian, the inverse of the covariance matrix) — rather than the variance itself. (From a theoretical standpoint, precision and variance are equivalent since there is a one-to-one correspondence between the two.) === The mathematical model === We place conjugate prior distributions on the unknown mean μ {\displaystyle \mu } and precision τ {\displaystyle \tau } , i.e. 
the mean also follows a Gaussian distribution while the precision follows a gamma distribution. In other words: τ ∼ Gamma ⁡ ( a 0 , b 0 ) μ | τ ∼ N ( μ 0 , ( λ 0 τ ) − 1 ) { x 1 , … , x N } ∼ N ( μ , τ − 1 ) N = number of data points {\displaystyle {\begin{aligned}\tau &\sim \operatorname {Gamma} (a_{0},b_{0})\\\mu |\tau &\sim {\mathcal {N}}(\mu _{0},(\lambda _{0}\tau )^{-1})\\\{x_{1},\dots ,x_{N}\}&\sim {\mathcal {N}}(\mu ,\tau ^{-1})\\N&={\text{number of data points}}\end{aligned}}} The hyperparameters μ 0 , λ 0 , a 0 {\displaystyle \mu _{0},\lambda _{0},a_{0}} and b 0 {\displaystyle b_{0}} in the prior distributions are fixed, given values. They can be set to small positive numbers to give broad prior distributions indicating ignorance about the prior distributions of μ {\displaystyle \mu } and τ {\displaystyle \tau } . We are given N {\displaystyle N} data points X = { x 1 , … , x N } {\displaystyle \mathbf {X} =\{x_{1},\ldots ,x_{N}\}} and our goal is to infer the posterior distribution q ( μ , τ ) = p ( μ , τ ∣ x 1 , … , x N ) {\displaystyle q(\mu ,\tau )=p(\mu ,\tau \mid x_{1},\ldots ,x_{N})} of the parameters μ {\displaystyle \mu } and τ . 
{\displaystyle \tau .} === The joint probability === The joint probability of all variables can be rewritten as p ( X , μ , τ ) = p ( X ∣ μ , τ ) p ( μ ∣ τ ) p ( τ ) {\displaystyle p(\mathbf {X} ,\mu ,\tau )=p(\mathbf {X} \mid \mu ,\tau )p(\mu \mid \tau )p(\tau )} where the individual factors are p ( X ∣ μ , τ ) = ∏ n = 1 N N ( x n ∣ μ , τ − 1 ) p ( μ ∣ τ ) = N ( μ ∣ μ 0 , ( λ 0 τ ) − 1 ) p ( τ ) = Gamma ⁡ ( τ ∣ a 0 , b 0 ) {\displaystyle {\begin{aligned}p(\mathbf {X} \mid \mu ,\tau )&=\prod _{n=1}^{N}{\mathcal {N}}(x_{n}\mid \mu ,\tau ^{-1})\\p(\mu \mid \tau )&={\mathcal {N}}\left(\mu \mid \mu _{0},(\lambda _{0}\tau )^{-1}\right)\\p(\tau )&=\operatorname {Gamma} (\tau \mid a_{0},b_{0})\end{aligned}}} where N ( x ∣ μ , σ 2 ) = 1 2 π σ 2 e − ( x − μ ) 2 2 σ 2 Gamma ⁡ ( τ ∣ a , b ) = 1 Γ ( a ) b a τ a − 1 e − b τ {\displaystyle {\begin{aligned}{\mathcal {N}}(x\mid \mu ,\sigma ^{2})&={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{\frac {-(x-\mu )^{2}}{2\sigma ^{2}}}\\\operatorname {Gamma} (\tau \mid a,b)&={\frac {1}{\Gamma (a)}}b^{a}\tau ^{a-1}e^{-b\tau }\end{aligned}}} === Factorized approximation === Assume that q ( μ , τ ) = q ( μ ) q ( τ ) {\displaystyle q(\mu ,\tau )=q(\mu )q(\tau )} , i.e. that the posterior distribution factorizes into independent factors for μ {\displaystyle \mu } and τ {\displaystyle \tau } . This type of assumption underlies the variational Bayesian method. The true posterior distribution does not in fact factor this way (in fact, in this simple case, it is known to be a Gaussian-gamma distribution), and hence the result we obtain will be an approximation. 
=== Derivation of q(μ) === Then ln ⁡ q μ ∗ ( μ ) = E τ ⁡ [ ln ⁡ p ( X ∣ μ , τ ) + ln ⁡ p ( μ ∣ τ ) + ln ⁡ p ( τ ) ] + C = E τ ⁡ [ ln ⁡ p ( X ∣ μ , τ ) ] + E τ ⁡ [ ln ⁡ p ( μ ∣ τ ) ] + E τ ⁡ [ ln ⁡ p ( τ ) ] + C = E τ ⁡ [ ln ⁡ ∏ n = 1 N N ( x n ∣ μ , τ − 1 ) ] + E τ ⁡ [ ln ⁡ N ( μ ∣ μ 0 , ( λ 0 τ ) − 1 ) ] + C 2 = E τ ⁡ [ ln ⁡ ∏ n = 1 N τ 2 π e − ( x n − μ ) 2 τ 2 ] + E τ ⁡ [ ln ⁡ λ 0 τ 2 π e − ( μ − μ 0 ) 2 λ 0 τ 2 ] + C 2 = E τ ⁡ [ ∑ n = 1 N ( 1 2 ( ln ⁡ τ − ln ⁡ 2 π ) − ( x n − μ ) 2 τ 2 ) ] + E τ ⁡ [ 1 2 ( ln ⁡ λ 0 + ln ⁡ τ − ln ⁡ 2 π ) − ( μ − μ 0 ) 2 λ 0 τ 2 ] + C 2 = E τ ⁡ [ ∑ n = 1 N − ( x n − μ ) 2 τ 2 ] + E τ ⁡ [ − ( μ − μ 0 ) 2 λ 0 τ 2 ] + E τ ⁡ [ ∑ n = 1 N 1 2 ( ln ⁡ τ − ln ⁡ 2 π ) ] + E τ ⁡ [ 1 2 ( ln ⁡ λ 0 + ln ⁡ τ − ln ⁡ 2 π ) ] + C 2 = E τ ⁡ [ ∑ n = 1 N − ( x n − μ ) 2 τ 2 ] + E τ ⁡ [ − ( μ − μ 0 ) 2 λ 0 τ 2 ] + C 3 = − E τ ⁡ [ τ ] 2 { ∑ n = 1 N ( x n − μ ) 2 + λ 0 ( μ − μ 0 ) 2 } + C 3 {\displaystyle {\begin{aligned}\ln q_{\mu }^{*}(\mu )&=\operatorname {E} _{\tau }\left[\ln p(\mathbf {X} \mid \mu ,\tau )+\ln p(\mu \mid \tau )+\ln p(\tau )\right]+C\\&=\operatorname {E} _{\tau }\left[\ln p(\mathbf {X} \mid \mu ,\tau )\right]+\operatorname {E} _{\tau }\left[\ln p(\mu \mid \tau )\right]+\operatorname {E} _{\tau }\left[\ln p(\tau )\right]+C\\&=\operatorname {E} _{\tau }\left[\ln \prod _{n=1}^{N}{\mathcal {N}}\left(x_{n}\mid \mu ,\tau ^{-1}\right)\right]+\operatorname {E} _{\tau }\left[\ln {\mathcal {N}}\left(\mu \mid \mu _{0},(\lambda _{0}\tau )^{-1}\right)\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\ln \prod _{n=1}^{N}{\sqrt {\frac {\tau }{2\pi }}}e^{-{\frac {(x_{n}-\mu )^{2}\tau }{2}}}\right]+\operatorname {E} _{\tau }\left[\ln {\sqrt {\frac {\lambda _{0}\tau }{2\pi }}}e^{-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}}\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}\left({\frac {1}{2}}(\ln \tau -\ln 2\pi )-{\frac {(x_{n}-\mu )^{2}\tau }{2}}\right)\right]+\operatorname {E} _{\tau }\left[{\frac {1}{2}}(\ln \lambda _{0}+\ln \tau 
-\ln 2\pi )-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}-{\frac {(x_{n}-\mu )^{2}\tau }{2}}\right]+\operatorname {E} _{\tau }\left[-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}\right]+\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}{\frac {1}{2}}(\ln \tau -\ln 2\pi )\right]+\operatorname {E} _{\tau }\left[{\frac {1}{2}}(\ln \lambda _{0}+\ln \tau -\ln 2\pi )\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}-{\frac {(x_{n}-\mu )^{2}\tau }{2}}\right]+\operatorname {E} _{\tau }\left[-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}\right]+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right\}+C_{3}\end{aligned}}} In the above derivation, C {\displaystyle C} , C 2 {\displaystyle C_{2}} and C 3 {\displaystyle C_{3}} refer to values that are constant with respect to μ {\displaystyle \mu } . Note that the term E τ ⁡ [ ln ⁡ p ( τ ) ] {\displaystyle \operatorname {E} _{\tau }[\ln p(\tau )]} is not a function of μ {\displaystyle \mu } and will have the same value regardless of the value of μ {\displaystyle \mu } . Hence in line 3 we can absorb it into the constant term at the end. We do the same thing in line 7. The last line is simply a quadratic polynomial in μ {\displaystyle \mu } . Since this is the logarithm of q μ ∗ ( μ ) {\displaystyle q_{\mu }^{*}(\mu )} , we can see that q μ ∗ ( μ ) {\displaystyle q_{\mu }^{*}(\mu )} itself is a Gaussian distribution. 
With a certain amount of tedious math (expanding the squares inside of the braces, separating out and grouping the terms involving μ {\displaystyle \mu } and μ 2 {\displaystyle \mu ^{2}} and completing the square over μ {\displaystyle \mu } ), we can derive the parameters of the Gaussian distribution: ln ⁡ q μ ∗ ( μ ) = − E τ ⁡ [ τ ] 2 { ∑ n = 1 N ( x n − μ ) 2 + λ 0 ( μ − μ 0 ) 2 } + C 3 = − E τ ⁡ [ τ ] 2 { ∑ n = 1 N ( x n 2 − 2 x n μ + μ 2 ) + λ 0 ( μ 2 − 2 μ 0 μ + μ 0 2 ) } + C 3 = − E τ ⁡ [ τ ] 2 { ( ∑ n = 1 N x n 2 ) − 2 ( ∑ n = 1 N x n ) μ + ( ∑ n = 1 N μ 2 ) + λ 0 μ 2 − 2 λ 0 μ 0 μ + λ 0 μ 0 2 } + C 3 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n ) μ + ( ∑ n = 1 N x n 2 ) + λ 0 μ 0 2 } + C 3 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n ) μ } + C 4 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) ( λ 0 + N ) μ } + C 4 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) ( μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) μ ) } + C 4 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) ( μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) μ + ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) 2 − ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) 2 ) } + C 4 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) ( μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) μ + ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) 2 ) } + C 5 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) ( μ − λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) 2 } + C 5 = − 1 2 ( λ 0 + N ) E τ ⁡ [ τ ] ( μ − λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) 2 + C 5 {\displaystyle {\begin{aligned}\ln q_{\mu }^{*}(\mu )&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\sum _{n=1}^{N}(x_{n}^{2}-2x_{n}\mu +\mu ^{2})+\lambda _{0}(\mu ^{2}-2\mu _{0}\mu +\mu _{0}^{2})\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\left(\sum _{n=1}^{N}x_{n}^{2}\right)-2\left(\sum _{n=1}^{N}x_{n}\right)\mu +\left(\sum _{n=1}^{N}\mu ^{2}\right)+\lambda _{0}\mu 
^{2}-2\lambda _{0}\mu _{0}\mu +\lambda _{0}\mu _{0}^{2}\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\mu ^{2}-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu +\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\mu ^{2}-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu \right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)(\lambda _{0}+N)\mu \right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)\mu \right)\right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)\mu +\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}-\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}\right)\right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)\mu +\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}\right)\right\}+C_{5}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu -{\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}\right\}+C_{5}\\&=-{\frac {1}{2}}(\lambda _{0}+N)\operatorname {E} _{\tau }[\tau ]\left(\mu -{\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}+C_{5}\end{aligned}}} Note that all of the above steps can be shortened by using the formula for the sum of two quadratics. 
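For reference, the sum-of-two-quadratics identity alluded to here can be written out explicitly (a standard algebraic identity; the symbols a, b, c, d below are generic placeholders, not the model hyperparameters):

```latex
a(x-b)^{2} + c(x-d)^{2}
  \;=\; (a+c)\left(x - \frac{ab + cd}{a+c}\right)^{2} \;+\; \frac{ac}{a+c}\,(b-d)^{2}
```

Writing ∑ n = 1 N ( x n − μ ) 2 = N ( μ − x ¯ ) 2 + constant and then applying the identity with a = N, b = x ¯, c = λ 0, d = μ 0 reproduces the completed square above in one step, with the leftover term absorbed into the constant.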
In other words: q μ ∗ ( μ ) ∼ N ( μ ∣ μ N , λ N − 1 ) μ N = λ 0 μ 0 + N x ¯ λ 0 + N λ N = ( λ 0 + N ) E τ ⁡ [ τ ] x ¯ = 1 N ∑ n = 1 N x n {\displaystyle {\begin{aligned}q_{\mu }^{*}(\mu )&\sim {\mathcal {N}}(\mu \mid \mu _{N},\lambda _{N}^{-1})\\\mu _{N}&={\frac {\lambda _{0}\mu _{0}+N{\bar {x}}}{\lambda _{0}+N}}\\\lambda _{N}&=(\lambda _{0}+N)\operatorname {E} _{\tau }[\tau ]\\{\bar {x}}&={\frac {1}{N}}\sum _{n=1}^{N}x_{n}\end{aligned}}} === Derivation of q(τ) === The derivation of q τ ∗ ( τ ) {\displaystyle q_{\tau }^{*}(\tau )} is similar to above, although we omit some of the details for the sake of brevity. ln ⁡ q τ ∗ ( τ ) = E μ ⁡ [ ln ⁡ p ( X ∣ μ , τ ) + ln ⁡ p ( μ ∣ τ ) ] + ln ⁡ p ( τ ) + constant = ( a 0 − 1 ) ln ⁡ τ − b 0 τ + 1 2 ln ⁡ τ + N 2 ln ⁡ τ − τ 2 E μ ⁡ [ ∑ n = 1 N ( x n − μ ) 2 + λ 0 ( μ − μ 0 ) 2 ] + constant {\displaystyle {\begin{aligned}\ln q_{\tau }^{*}(\tau )&=\operatorname {E} _{\mu }[\ln p(\mathbf {X} \mid \mu ,\tau )+\ln p(\mu \mid \tau )]+\ln p(\tau )+{\text{constant}}\\&=(a_{0}-1)\ln \tau -b_{0}\tau +{\frac {1}{2}}\ln \tau +{\frac {N}{2}}\ln \tau -{\frac {\tau }{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]+{\text{constant}}\end{aligned}}} Exponentiating both sides, we can see that q τ ∗ ( τ ) {\displaystyle q_{\tau }^{*}(\tau )} is a gamma distribution. 
Specifically: q τ ∗ ( τ ) ∼ Gamma ⁡ ( τ ∣ a N , b N ) a N = a 0 + N + 1 2 b N = b 0 + 1 2 E μ ⁡ [ ∑ n = 1 N ( x n − μ ) 2 + λ 0 ( μ − μ 0 ) 2 ] {\displaystyle {\begin{aligned}q_{\tau }^{*}(\tau )&\sim \operatorname {Gamma} (\tau \mid a_{N},b_{N})\\a_{N}&=a_{0}+{\frac {N+1}{2}}\\b_{N}&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]\end{aligned}}} === Algorithm for computing the parameters === Let us recap the conclusions from the previous sections: q μ ∗ ( μ ) ∼ N ( μ ∣ μ N , λ N − 1 ) μ N = λ 0 μ 0 + N x ¯ λ 0 + N λ N = ( λ 0 + N ) E τ ⁡ [ τ ] x ¯ = 1 N ∑ n = 1 N x n {\displaystyle {\begin{aligned}q_{\mu }^{*}(\mu )&\sim {\mathcal {N}}(\mu \mid \mu _{N},\lambda _{N}^{-1})\\\mu _{N}&={\frac {\lambda _{0}\mu _{0}+N{\bar {x}}}{\lambda _{0}+N}}\\\lambda _{N}&=(\lambda _{0}+N)\operatorname {E} _{\tau }[\tau ]\\{\bar {x}}&={\frac {1}{N}}\sum _{n=1}^{N}x_{n}\end{aligned}}} and q τ ∗ ( τ ) ∼ Gamma ⁡ ( τ ∣ a N , b N ) a N = a 0 + N + 1 2 b N = b 0 + 1 2 E μ ⁡ [ ∑ n = 1 N ( x n − μ ) 2 + λ 0 ( μ − μ 0 ) 2 ] {\displaystyle {\begin{aligned}q_{\tau }^{*}(\tau )&\sim \operatorname {Gamma} (\tau \mid a_{N},b_{N})\\a_{N}&=a_{0}+{\frac {N+1}{2}}\\b_{N}&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]\end{aligned}}} In each case, the parameters for the distribution over one of the variables depend on expectations taken with respect to the other variable. 
We can expand the expectations, using the standard formulas for the expectations of moments of the Gaussian and gamma distributions: E ⁡ [ τ ∣ a N , b N ] = a N b N E ⁡ [ μ ∣ μ N , λ N − 1 ] = μ N E ⁡ [ X 2 ] = Var ⁡ ( X ) + ( E ⁡ [ X ] ) 2 E ⁡ [ μ 2 ∣ μ N , λ N − 1 ] = λ N − 1 + μ N 2 {\displaystyle {\begin{aligned}\operatorname {E} [\tau \mid a_{N},b_{N}]&={\frac {a_{N}}{b_{N}}}\\\operatorname {E} \left[\mu \mid \mu _{N},\lambda _{N}^{-1}\right]&=\mu _{N}\\\operatorname {E} \left[X^{2}\right]&=\operatorname {Var} (X)+(\operatorname {E} [X])^{2}\\\operatorname {E} \left[\mu ^{2}\mid \mu _{N},\lambda _{N}^{-1}\right]&=\lambda _{N}^{-1}+\mu _{N}^{2}\end{aligned}}} Applying these formulas to the above equations is trivial in most cases, but the equation for b N {\displaystyle b_{N}} takes more work: b N = b 0 + 1 2 E μ ⁡ [ ∑ n = 1 N ( x n − μ ) 2 + λ 0 ( μ − μ 0 ) 2 ] = b 0 + 1 2 E μ ⁡ [ ( λ 0 + N ) μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n ) μ + ( ∑ n = 1 N x n 2 ) + λ 0 μ 0 2 ] = b 0 + 1 2 [ ( λ 0 + N ) E μ ⁡ [ μ 2 ] − 2 ( λ 0 μ 0 + ∑ n = 1 N x n ) E μ ⁡ [ μ ] + ( ∑ n = 1 N x n 2 ) + λ 0 μ 0 2 ] = b 0 + 1 2 [ ( λ 0 + N ) ( λ N − 1 + μ N 2 ) − 2 ( λ 0 μ 0 + ∑ n = 1 N x n ) μ N + ( ∑ n = 1 N x n 2 ) + λ 0 μ 0 2 ] {\displaystyle {\begin{aligned}b_{N}&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]\\&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[(\lambda _{0}+N)\mu ^{2}-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu +\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\\&=b_{0}+{\frac {1}{2}}\left[(\lambda _{0}+N)\operatorname {E} _{\mu }[\mu ^{2}]-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\operatorname {E} _{\mu }[\mu ]+\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\\&=b_{0}+{\frac {1}{2}}\left[(\lambda _{0}+N)\left(\lambda _{N}^{-1}+\mu _{N}^{2}\right)-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu _{N}+\left(\sum 
_{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\\\end{aligned}}} We can then write the parameter equations as follows, without any expectations: μ N = λ 0 μ 0 + N x ¯ λ 0 + N λ N = ( λ 0 + N ) a N b N x ¯ = 1 N ∑ n = 1 N x n a N = a 0 + N + 1 2 b N = b 0 + 1 2 [ ( λ 0 + N ) ( λ N − 1 + μ N 2 ) − 2 ( λ 0 μ 0 + ∑ n = 1 N x n ) μ N + ( ∑ n = 1 N x n 2 ) + λ 0 μ 0 2 ] {\displaystyle {\begin{aligned}\mu _{N}&={\frac {\lambda _{0}\mu _{0}+N{\bar {x}}}{\lambda _{0}+N}}\\\lambda _{N}&=(\lambda _{0}+N){\frac {a_{N}}{b_{N}}}\\{\bar {x}}&={\frac {1}{N}}\sum _{n=1}^{N}x_{n}\\a_{N}&=a_{0}+{\frac {N+1}{2}}\\b_{N}&=b_{0}+{\frac {1}{2}}\left[(\lambda _{0}+N)\left(\lambda _{N}^{-1}+\mu _{N}^{2}\right)-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu _{N}+\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\end{aligned}}} Note that there are circular dependencies among the formulas for λ N {\displaystyle \lambda _{N}} and b N {\displaystyle b_{N}} . This naturally suggests an EM-like algorithm: Compute ∑ n = 1 N x n {\displaystyle \sum _{n=1}^{N}x_{n}} and ∑ n = 1 N x n 2 . {\displaystyle \sum _{n=1}^{N}x_{n}^{2}.} Use these values to compute μ N {\displaystyle \mu _{N}} and a N . {\displaystyle a_{N}.} Initialize λ N {\displaystyle \lambda _{N}} to some arbitrary value. Use the current value of λ N , {\displaystyle \lambda _{N},} along with the known values of the other parameters, to compute b N {\displaystyle b_{N}} . Use the current value of b N , {\displaystyle b_{N},} along with the known values of the other parameters, to compute λ N {\displaystyle \lambda _{N}} . Repeat the last two steps until convergence (i.e. until neither value has changed more than some small amount). We then have values for the hyperparameters of the approximating distributions of the posterior parameters, which we can use to compute any properties we want of the posterior — e.g. 
its mean and variance, a 95% highest-density region (the smallest interval that includes 95% of the total probability), etc. It can be shown that this algorithm is guaranteed to converge to a local maximum. Note also that the posterior distributions have the same form as the corresponding prior distributions. We did not assume this; the only assumption we made was that the distributions factorize, and the form of the distributions followed naturally. It turns out (see below) that the fact that the posterior distributions have the same form as the prior distributions is not a coincidence, but a general result whenever the prior distributions are members of the exponential family, which is the case for most of the standard distributions. == Further discussion == === Step-by-step recipe === The above example shows the method by which the variational-Bayesian approximation to a posterior probability density in a given Bayesian network is derived: Describe the network with a graphical model, identifying the observed variables (data) X {\displaystyle \mathbf {X} } and unobserved variables (parameters Θ {\displaystyle {\boldsymbol {\Theta }}} and latent variables Z {\displaystyle \mathbf {Z} } ) and their conditional probability distributions. Variational Bayes will then construct an approximation to the posterior probability p ( Z , Θ ∣ X ) {\displaystyle p(\mathbf {Z} ,{\boldsymbol {\Theta }}\mid \mathbf {X} )} . The approximation has the basic property that it is a factorized distribution, i.e. a product of two or more independent distributions over disjoint subsets of the unobserved variables. Partition the unobserved variables into two or more subsets, over which the independent factors will be derived. There is no universal procedure for doing this; creating too many subsets yields a poor approximation, while creating too few makes the entire variational Bayes procedure intractable. 
Typically, the first split is to separate the parameters and latent variables; often, this is enough by itself to produce a tractable result. Assume that the partitions are called Z 1 , … , Z M {\displaystyle \mathbf {Z} _{1},\ldots ,\mathbf {Z} _{M}} . For a given partition Z j {\displaystyle \mathbf {Z} _{j}} , write down the formula for the best approximating distribution q j ∗ ( Z j ∣ X ) {\displaystyle q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )} using the basic equation ln ⁡ q j ∗ ( Z j ∣ X ) = E i ≠ j ⁡ [ ln ⁡ p ( Z , X ) ] + constant {\displaystyle \ln q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )=\operatorname {E} _{i\neq j}[\ln p(\mathbf {Z} ,\mathbf {X} )]+{\text{constant}}} . Fill in the formula for the joint probability distribution using the graphical model. Any component conditional distributions that don't involve any of the variables in Z j {\displaystyle \mathbf {Z} _{j}} can be ignored; they will be folded into the constant term. Simplify the formula and apply the expectation operator, following the above example. Ideally, this should simplify into expectations of basic functions of variables not in Z j {\displaystyle \mathbf {Z} _{j}} (e.g. first or second raw moments, expectation of a logarithm, etc.). In order for the variational Bayes procedure to work well, these expectations should generally be expressible analytically as functions of the parameters and/or hyperparameters of the distributions of these variables. In all cases, these expectation terms are constants with respect to the variables in the current partition. The functional form of the formula with respect to the variables in the current partition indicates the type of distribution. In particular, exponentiating the formula generates the probability density function (PDF) of the distribution (or at least, something proportional to it, with unknown normalization constant). 
In order for the overall method to be tractable, it should be possible to recognize the functional form as belonging to a known distribution. Significant mathematical manipulation may be required to convert the formula into a form that matches the PDF of a known distribution. When this can be done, the normalization constant can be reinstated by definition, and equations for the parameters of the known distribution can be derived by extracting the appropriate parts of the formula. When all expectations can be replaced analytically with functions of variables not in the current partition, and the PDF can be put into a form that allows identification with a known distribution, the result is a set of equations expressing the values of the optimum parameters as functions of the parameters of variables in other partitions. When this procedure can be applied to all partitions, the result is a set of mutually linked equations specifying the optimum values of all parameters. An expectation–maximization (EM) type procedure is then applied, picking an initial value for each parameter and then iterating through a series of steps, where at each step we cycle through the equations, updating each parameter in turn. This is guaranteed to converge. === Most important points === Due to all of the mathematical manipulations involved, it is easy to lose track of the big picture. The important things are: The idea of variational Bayes is to construct an analytical approximation to the posterior probability of the set of unobserved variables (parameters and latent variables), given the data. This means that the form of the solution is similar to other Bayesian inference methods, such as Gibbs sampling — i.e. a distribution that seeks to describe everything that is known about the variables. As in other Bayesian methods — but unlike e.g. in expectation–maximization (EM) or other maximum likelihood methods — both types of unobserved variables (i.e.
parameters and latent variables) are treated the same, i.e. as random variables. Estimates for the variables can then be derived in the standard Bayesian ways, e.g. calculating the mean of the distribution to get a single point estimate or deriving a credible interval, highest density region, etc. "Analytical approximation" means that a formula can be written down for the posterior distribution. The formula generally consists of a product of well-known probability distributions, each of which factorizes over a set of unobserved variables (i.e. it is conditionally independent of the other variables, given the observed data). This formula is not the true posterior distribution, but an approximation to it; in particular, it will generally agree fairly closely in the lowest moments of the unobserved variables, e.g. the mean and variance. The result of all of the mathematical manipulations is (1) the identity of the probability distributions making up the factors, and (2) mutually dependent formulas for the parameters of these distributions. The actual values of these parameters are computed numerically, through an alternating iterative procedure much like EM. === Compared with expectation–maximization (EM) === Variational Bayes (VB) is often compared with expectation–maximization (EM). The actual numerical procedure is quite similar, in that both are alternating iterative procedures that successively converge on optimum parameter values. The initial steps to derive the respective procedures are also vaguely similar, both starting out with formulas for probability densities and both involving significant amounts of mathematical manipulations. However, there are a number of differences. Most important is what is being computed. 
EM computes point estimates of the posterior distribution of those random variables that can be categorized as "parameters", but only estimates of the actual posterior distributions of the latent variables (at least in "soft EM", and often only when the latent variables are discrete). The point estimates computed are the modes of these parameters; no other information is available. VB, on the other hand, computes estimates of the actual posterior distribution of all variables, both parameters and latent variables. When point estimates need to be derived, generally the mean is used rather than the mode, as is normal in Bayesian inference. Concomitant with this, the parameters computed in VB do not have the same significance as those in EM. EM computes optimum values of the parameters of the Bayes network itself. VB computes optimum values of the parameters of the distributions used to approximate the parameters and latent variables of the Bayes network. For example, a typical Gaussian mixture model will have parameters for the mean and variance of each of the mixture components. EM would directly estimate optimum values for these parameters. VB, however, would first fit a distribution to these parameters — typically in the form of a prior distribution, e.g. a normal-scaled inverse gamma distribution — and would then compute values for the parameters of this prior distribution, i.e. essentially hyperparameters. In this case, VB would compute optimum estimates of the four parameters of the normal-scaled inverse gamma distribution that describes the joint distribution of the mean and variance of the component.
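As a concrete illustration of this difference, the iterative procedure derived in the univariate Gaussian example above can be sketched in code. This is a minimal sketch: the synthetic data and the hyperparameter values are illustrative assumptions, not part of the derivation.

```python
import random

# Synthetic data (illustrative assumption): N draws from a Gaussian.
random.seed(0)
N = 500
data = [random.gauss(5.0, 2.0) for _ in range(N)]

# Prior hyperparameters (illustrative values).
mu0, lambda0, a0, b0 = 0.0, 1.0, 1.0, 1.0

# Sufficient statistics, computed once.
s1 = sum(data)                  # sum of x_n
s2 = sum(v * v for v in data)   # sum of x_n^2
xbar = s1 / N

# Parameters with closed-form values.
mu_N = (lambda0 * mu0 + N * xbar) / (lambda0 + N)
a_N = a0 + (N + 1) / 2

# Iterate the circularly dependent updates for b_N and lambda_N.
lambda_N = 1.0                  # arbitrary initialization
for _ in range(100):
    # b_N uses E[mu] = mu_N and E[mu^2] = 1/lambda_N + mu_N^2.
    b_N = b0 + 0.5 * ((lambda0 + N) * (1.0 / lambda_N + mu_N ** 2)
                      - 2.0 * (lambda0 * mu0 + s1) * mu_N
                      + s2 + lambda0 * mu0 ** 2)
    new_lambda_N = (lambda0 + N) * a_N / b_N   # uses E[tau] = a_N / b_N
    if abs(new_lambda_N - lambda_N) < 1e-10:
        lambda_N = new_lambda_N
        break
    lambda_N = new_lambda_N

# Posterior-mean point estimates of mu and tau.
print(mu_N, a_N / b_N)
```

With these illustrative values, the posterior mean of μ lands close to the sample mean and E[τ] = a_N / b_N close to the reciprocal of the sample variance; unlike EM, the output is a full set of hyperparameters (μ_N, λ_N, a_N, b_N) describing approximating distributions, not just point estimates.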
== A more complex example == Imagine a Bayesian Gaussian mixture model described as follows: π ∼ SymDir ⁡ ( K , α 0 ) Λ i = 1 … K ∼ W ( W 0 , ν 0 ) μ i = 1 … K ∼ N ( μ 0 , ( β 0 Λ i ) − 1 ) z [ i = 1 … N ] ∼ Mult ⁡ ( 1 , π ) x i = 1 … N ∼ N ( μ z i , Λ z i − 1 ) K = number of mixing components N = number of data points {\displaystyle {\begin{aligned}\mathbf {\pi } &\sim \operatorname {SymDir} (K,\alpha _{0})\\\mathbf {\Lambda } _{i=1\dots K}&\sim {\mathcal {W}}(\mathbf {W} _{0},\nu _{0})\\\mathbf {\mu } _{i=1\dots K}&\sim {\mathcal {N}}(\mathbf {\mu } _{0},(\beta _{0}\mathbf {\Lambda } _{i})^{-1})\\\mathbf {z} [i=1\dots N]&\sim \operatorname {Mult} (1,\mathbf {\pi } )\\\mathbf {x} _{i=1\dots N}&\sim {\mathcal {N}}(\mathbf {\mu } _{z_{i}},{\mathbf {\Lambda } _{z_{i}}}^{-1})\\K&={\text{number of mixing components}}\\N&={\text{number of data points}}\end{aligned}}} Note: SymDir() is the symmetric Dirichlet distribution of dimension K {\displaystyle K} , with the hyperparameter for each component set to α 0 {\displaystyle \alpha _{0}} . The Dirichlet distribution is the conjugate prior of the categorical distribution or multinomial distribution. W ( ) {\displaystyle {\mathcal {W}}()} is the Wishart distribution, which is the conjugate prior of the precision matrix (inverse covariance matrix) for a multivariate Gaussian distribution. Mult() is a multinomial distribution over a single observation (equivalent to a categorical distribution). The state space is a "one-of-K" representation, i.e., a K {\displaystyle K} -dimensional vector in which one of the elements is 1 (specifying the identity of the observation) and all other elements are 0. N ( ) {\displaystyle {\mathcal {N}}()} is the Gaussian distribution, in this case specifically the multivariate Gaussian distribution. 
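To make the generative process concrete, the model can be sketched as a sampler in the one-dimensional special case, where the Wishart prior on each precision reduces to a gamma distribution; all numeric hyperparameter values here are illustrative assumptions.

```python
import random

random.seed(1)
K, N = 3, 10                  # mixing components and data points
alpha0 = 1.0                  # symmetric Dirichlet hyperparameter
mu0, beta0 = 0.0, 1.0         # prior mean and scaling of the component means
nu0, W0 = 3.0, 1.0            # Wishart parameters; in 1-D, W(W0, nu0) = Gamma(nu0/2, scale 2*W0)

# pi ~ SymDir(K, alpha0), sampled via normalized Gamma draws.
g = [random.gammavariate(alpha0, 1.0) for _ in range(K)]
total = sum(g)
pi = [gi / total for gi in g]

# Lambda_k ~ Wishart (a gamma distribution in 1-D),
# mu_k ~ N(mu0, 1/(beta0 * Lambda_k)).
lam = [random.gammavariate(nu0 / 2.0, 2.0 * W0) for _ in range(K)]
mu = [random.gauss(mu0, (beta0 * lk) ** -0.5) for lk in lam]

# z_n ~ Mult(1, pi), stored as an index rather than a one-of-K vector;
# x_n ~ N(mu_{z_n}, 1/Lambda_{z_n}).
z = random.choices(range(K), weights=pi, k=N)
x = [random.gauss(mu[zn], lam[zn] ** -0.5) for zn in z]
print(list(zip(z, x)))
```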
The interpretation of the above variables is as follows: X = { x 1 , … , x N } {\displaystyle \mathbf {X} =\{\mathbf {x} _{1},\dots ,\mathbf {x} _{N}\}} is the set of N {\displaystyle N} data points, each of which is a D {\displaystyle D} -dimensional vector distributed according to a multivariate Gaussian distribution. Z = { z 1 , … , z N } {\displaystyle \mathbf {Z} =\{\mathbf {z} _{1},\dots ,\mathbf {z} _{N}\}} is a set of latent variables, one per data point, specifying which mixture component the corresponding data point belongs to, using a "one-of-K" vector representation with components z n k {\displaystyle z_{nk}} for k = 1 … K {\displaystyle k=1\dots K} , as described above. π {\displaystyle \mathbf {\pi } } is the mixing proportions for the K {\displaystyle K} mixture components. μ i = 1 … K {\displaystyle \mathbf {\mu } _{i=1\dots K}} and Λ i = 1 … K {\displaystyle \mathbf {\Lambda } _{i=1\dots K}} specify the parameters (mean and precision) associated with each mixture component. 
The joint probability of all variables can be rewritten as p ( X , Z , π , μ , Λ ) = p ( X ∣ Z , μ , Λ ) p ( Z ∣ π ) p ( π ) p ( μ ∣ Λ ) p ( Λ ) {\displaystyle p(\mathbf {X} ,\mathbf {Z} ,\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )=p(\mathbf {X} \mid \mathbf {Z} ,\mathbf {\mu } ,\mathbf {\Lambda } )p(\mathbf {Z} \mid \mathbf {\pi } )p(\mathbf {\pi } )p(\mathbf {\mu } \mid \mathbf {\Lambda } )p(\mathbf {\Lambda } )} where the individual factors are p ( X ∣ Z , μ , Λ ) = ∏ n = 1 N ∏ k = 1 K N ( x n ∣ μ k , Λ k − 1 ) z n k p ( Z ∣ π ) = ∏ n = 1 N ∏ k = 1 K π k z n k p ( π ) = Γ ( K α 0 ) Γ ( α 0 ) K ∏ k = 1 K π k α 0 − 1 p ( μ ∣ Λ ) = ∏ k = 1 K N ( μ k ∣ μ 0 , ( β 0 Λ k ) − 1 ) p ( Λ ) = ∏ k = 1 K W ( Λ k ∣ W 0 , ν 0 ) {\displaystyle {\begin{aligned}p(\mathbf {X} \mid \mathbf {Z} ,\mathbf {\mu } ,\mathbf {\Lambda } )&=\prod _{n=1}^{N}\prod _{k=1}^{K}{\mathcal {N}}(\mathbf {x} _{n}\mid \mathbf {\mu } _{k},\mathbf {\Lambda } _{k}^{-1})^{z_{nk}}\\p(\mathbf {Z} \mid \mathbf {\pi } )&=\prod _{n=1}^{N}\prod _{k=1}^{K}\pi _{k}^{z_{nk}}\\p(\mathbf {\pi } )&={\frac {\Gamma (K\alpha _{0})}{\Gamma (\alpha _{0})^{K}}}\prod _{k=1}^{K}\pi _{k}^{\alpha _{0}-1}\\p(\mathbf {\mu } \mid \mathbf {\Lambda } )&=\prod _{k=1}^{K}{\mathcal {N}}(\mathbf {\mu } _{k}\mid \mathbf {\mu } _{0},(\beta _{0}\mathbf {\Lambda } _{k})^{-1})\\p(\mathbf {\Lambda } )&=\prod _{k=1}^{K}{\mathcal {W}}(\mathbf {\Lambda } _{k}\mid \mathbf {W} _{0},\nu _{0})\end{aligned}}} where N ( x ∣ μ , Σ ) = 1 ( 2 π ) D / 2 1 | Σ | 1 / 2 exp ⁡ { − 1 2 ( x − μ ) T Σ − 1 ( x − μ ) } W ( Λ ∣ W , ν ) = B ( W , ν ) | Λ | ( ν − D − 1 ) / 2 exp ⁡ ( − 1 2 Tr ⁡ ( W − 1 Λ ) ) B ( W , ν ) = | W | − ν / 2 { 2 ν D / 2 π D ( D − 1 ) / 4 ∏ i = 1 D Γ ( ν + 1 − i 2 ) } − 1 D = dimensionality of each data point {\displaystyle {\begin{aligned}{\mathcal {N}}(\mathbf {x} \mid \mathbf {\mu } ,\mathbf {\Sigma } )&={\frac {1}{(2\pi )^{D/2}}}{\frac {1}{|\mathbf {\Sigma } |^{1/2}}}\exp \left\{-{\frac {1}{2}}(\mathbf {x} -\mathbf {\mu } 
)^{\rm {T}}\mathbf {\Sigma } ^{-1}(\mathbf {x} -\mathbf {\mu } )\right\}\\{\mathcal {W}}(\mathbf {\Lambda } \mid \mathbf {W} ,\nu )&=B(\mathbf {W} ,\nu )|\mathbf {\Lambda } |^{(\nu -D-1)/2}\exp \left(-{\frac {1}{2}}\operatorname {Tr} (\mathbf {W} ^{-1}\mathbf {\Lambda } )\right)\\B(\mathbf {W} ,\nu )&=|\mathbf {W} |^{-\nu /2}\left\{2^{\nu D/2}\pi ^{D(D-1)/4}\prod _{i=1}^{D}\Gamma \left({\frac {\nu +1-i}{2}}\right)\right\}^{-1}\\D&={\text{dimensionality of each data point}}\end{aligned}}} Assume that q ( Z , π , μ , Λ ) = q ( Z ) q ( π , μ , Λ ) {\displaystyle q(\mathbf {Z} ,\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )=q(\mathbf {Z} )q(\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )} . Then ln ⁡ q ∗ ( Z ) = E π , μ , Λ ⁡ [ ln ⁡ p ( X , Z , π , μ , Λ ) ] + constant = E π ⁡ [ ln ⁡ p ( Z ∣ π ) ] + E μ , Λ ⁡ [ ln ⁡ p ( X ∣ Z , μ , Λ ) ] + constant = ∑ n = 1 N ∑ k = 1 K z n k ln ⁡ ρ n k + constant {\displaystyle {\begin{aligned}\ln q^{*}(\mathbf {Z} )&=\operatorname {E} _{\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } }[\ln p(\mathbf {X} ,\mathbf {Z} ,\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )]+{\text{constant}}\\&=\operatorname {E} _{\mathbf {\pi } }[\ln p(\mathbf {Z} \mid \mathbf {\pi } )]+\operatorname {E} _{\mathbf {\mu } ,\mathbf {\Lambda } }[\ln p(\mathbf {X} \mid \mathbf {Z} ,\mathbf {\mu } ,\mathbf {\Lambda } )]+{\text{constant}}\\&=\sum _{n=1}^{N}\sum _{k=1}^{K}z_{nk}\ln \rho _{nk}+{\text{constant}}\end{aligned}}} where we have defined ln ⁡ ρ n k = E ⁡ [ ln ⁡ π k ] + 1 2 E ⁡ [ ln ⁡ | Λ k | ] − D 2 ln ⁡ ( 2 π ) − 1 2 E μ k , Λ k ⁡ [ ( x n − μ k ) T Λ k ( x n − μ k ) ] {\displaystyle \ln \rho _{nk}=\operatorname {E} [\ln \pi _{k}]+{\frac {1}{2}}\operatorname {E} [\ln |\mathbf {\Lambda } _{k}|]-{\frac {D}{2}}\ln(2\pi )-{\frac {1}{2}}\operatorname {E} _{\mathbf {\mu } _{k},\mathbf {\Lambda } _{k}}[(\mathbf {x} _{n}-\mathbf {\mu } _{k})^{\rm {T}}\mathbf {\Lambda } _{k}(\mathbf {x} _{n}-\mathbf {\mu } _{k})]} Exponentiating both sides of the 
formula for ln ⁡ q ∗ ( Z ) {\displaystyle \ln q^{*}(\mathbf {Z} )} yields q ∗ ( Z ) ∝ ∏ n = 1 N ∏ k = 1 K ρ n k z n k {\displaystyle q^{*}(\mathbf {Z} )\propto \prod _{n=1}^{N}\prod _{k=1}^{K}\rho _{nk}^{z_{nk}}} Requiring that this be normalized ends up requiring that the ρ n k {\displaystyle \rho _{nk}} sum to 1 over all values of k {\displaystyle k} , yielding q ∗ ( Z ) = ∏ n = 1 N ∏ k = 1 K r n k z n k {\displaystyle q^{*}(\mathbf {Z} )=\prod _{n=1}^{N}\prod _{k=1}^{K}r_{nk}^{z_{nk}}} where r n k = ρ n k ∑ j = 1 K ρ n j {\displaystyle r_{nk}={\frac {\rho _{nk}}{\sum _{j=1}^{K}\rho _{nj}}}} In other words, q ∗ ( Z ) {\displaystyle q^{*}(\mathbf {Z} )} is a product of single-observation multinomial distributions, and factors over each individual z n {\displaystyle \mathbf {z} _{n}} , which is distributed as a single-observation multinomial distribution with parameters r n k {\displaystyle r_{nk}} for k = 1 … K {\displaystyle k=1\dots K} . Furthermore, we note that E ⁡ [ z n k ] = r n k {\displaystyle \operatorname {E} [z_{nk}]=r_{nk}\,} which is a standard result for categorical distributions. Now, considering the factor q ( π , μ , Λ ) {\displaystyle q(\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )} , note that it automatically factors into q ( π ) ∏ k = 1 K q ( μ k , Λ k ) {\displaystyle q(\mathbf {\pi } )\prod _{k=1}^{K}q(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})} due to the structure of the graphical model defining our Gaussian mixture model, which is specified above. 
Then, ln ⁡ q ∗ ( π ) = ln ⁡ p ( π ) + E Z ⁡ [ ln ⁡ p ( Z ∣ π ) ] + constant = ( α 0 − 1 ) ∑ k = 1 K ln ⁡ π k + ∑ n = 1 N ∑ k = 1 K r n k ln ⁡ π k + constant {\displaystyle {\begin{aligned}\ln q^{*}(\mathbf {\pi } )&=\ln p(\mathbf {\pi } )+\operatorname {E} _{\mathbf {Z} }[\ln p(\mathbf {Z} \mid \mathbf {\pi } )]+{\text{constant}}\\&=(\alpha _{0}-1)\sum _{k=1}^{K}\ln \pi _{k}+\sum _{n=1}^{N}\sum _{k=1}^{K}r_{nk}\ln \pi _{k}+{\text{constant}}\end{aligned}}} Taking the exponential of both sides, we recognize q ∗ ( π ) {\displaystyle q^{*}(\mathbf {\pi } )} as a Dirichlet distribution q ∗ ( π ) ∼ Dir ⁡ ( α ) {\displaystyle q^{*}(\mathbf {\pi } )\sim \operatorname {Dir} (\mathbf {\alpha } )\,} where α k = α 0 + N k {\displaystyle \alpha _{k}=\alpha _{0}+N_{k}\,} where N k = ∑ n = 1 N r n k {\displaystyle N_{k}=\sum _{n=1}^{N}r_{nk}\,} Finally ln ⁡ q ∗ ( μ k , Λ k ) = ln ⁡ p ( μ k , Λ k ) + ∑ n = 1 N E ⁡ [ z n k ] ln ⁡ N ( x n ∣ μ k , Λ k − 1 ) + constant {\displaystyle \ln q^{*}(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})=\ln p(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})+\sum _{n=1}^{N}\operatorname {E} [z_{nk}]\ln {\mathcal {N}}(\mathbf {x} _{n}\mid \mathbf {\mu } _{k},\mathbf {\Lambda } _{k}^{-1})+{\text{constant}}} Grouping and reading off terms involving μ k {\displaystyle \mathbf {\mu } _{k}} and Λ k {\displaystyle \mathbf {\Lambda } _{k}} , the result is a Gaussian-Wishart distribution given by q ∗ ( μ k , Λ k ) = N ( μ k ∣ m k , ( β k Λ k ) − 1 ) W ( Λ k ∣ W k , ν k ) {\displaystyle q^{*}(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})={\mathcal {N}}(\mathbf {\mu } _{k}\mid \mathbf {m} _{k},(\beta _{k}\mathbf {\Lambda } _{k})^{-1}){\mathcal {W}}(\mathbf {\Lambda } _{k}\mid \mathbf {W} _{k},\nu _{k})} given the definitions β k = β 0 + N k m k = 1 β k ( β 0 μ 0 + N k x ¯ k ) W k − 1 = W 0 − 1 + N k S k + β 0 N k β 0 + N k ( x ¯ k − μ 0 ) ( x ¯ k − μ 0 ) T ν k = ν 0 + N k N k = ∑ n = 1 N r n k x ¯ k = 1 N k ∑ n = 1 N r n k x n S k = 1 N k ∑ n = 1 N r n k ( x n − x ¯ 
k ) ( x n − x ¯ k ) T {\displaystyle {\begin{aligned}\beta _{k}&=\beta _{0}+N_{k}\\\mathbf {m} _{k}&={\frac {1}{\beta _{k}}}(\beta _{0}\mathbf {\mu } _{0}+N_{k}{\bar {\mathbf {x} }}_{k})\\\mathbf {W} _{k}^{-1}&=\mathbf {W} _{0}^{-1}+N_{k}\mathbf {S} _{k}+{\frac {\beta _{0}N_{k}}{\beta _{0}+N_{k}}}({\bar {\mathbf {x} }}_{k}-\mathbf {\mu } _{0})({\bar {\mathbf {x} }}_{k}-\mathbf {\mu } _{0})^{\rm {T}}\\\nu _{k}&=\nu _{0}+N_{k}\\N_{k}&=\sum _{n=1}^{N}r_{nk}\\{\bar {\mathbf {x} }}_{k}&={\frac {1}{N_{k}}}\sum _{n=1}^{N}r_{nk}\mathbf {x} _{n}\\\mathbf {S} _{k}&={\frac {1}{N_{k}}}\sum _{n=1}^{N}r_{nk}(\mathbf {x} _{n}-{\bar {\mathbf {x} }}_{k})(\mathbf {x} _{n}-{\bar {\mathbf {x} }}_{k})^{\rm {T}}\end{aligned}}} Finally, notice that these functions require the values of r n k {\displaystyle r_{nk}} , which make use of ρ n k {\displaystyle \rho _{nk}} , which is defined in turn based on E ⁡ [ ln ⁡ π k ] {\displaystyle \operatorname {E} [\ln \pi _{k}]} , E ⁡ [ ln ⁡ | Λ k | ] {\displaystyle \operatorname {E} [\ln |\mathbf {\Lambda } _{k}|]} , and E μ k , Λ k ⁡ [ ( x n − μ k ) T Λ k ( x n − μ k ) ] {\displaystyle \operatorname {E} _{\mathbf {\mu } _{k},\mathbf {\Lambda } _{k}}[(\mathbf {x} _{n}-\mathbf {\mu } _{k})^{\rm {T}}\mathbf {\Lambda } _{k}(\mathbf {x} _{n}-\mathbf {\mu } _{k})]} . 
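These update equations can be sketched in code. The sketch below assumes one-dimensional data (so that W k and S k are scalars) and takes a responsibility matrix r as given; the function name and variable names are illustrative, not part of the model specification.

```python
def m_step(x, r, mu0, beta0, W0, nu0):
    """Update the Gaussian-Wishart parameters from responsibilities r[n][k].

    x: list of scalar data points; r: per-point lists of responsibilities.
    Returns per-component (beta_k, m_k, W_k, nu_k) in the 1-D case.
    """
    K = len(r[0])
    params = []
    for k in range(K):
        # Soft-count statistics N_k, xbar_k, S_k.
        N_k = sum(r[n][k] for n in range(len(x)))
        xbar_k = sum(r[n][k] * x[n] for n in range(len(x))) / N_k
        S_k = sum(r[n][k] * (x[n] - xbar_k) ** 2 for n in range(len(x))) / N_k
        # Parameter updates, following the equations above.
        beta_k = beta0 + N_k
        m_k = (beta0 * mu0 + N_k * xbar_k) / beta_k
        W_k_inv = (1.0 / W0 + N_k * S_k
                   + (beta0 * N_k / (beta0 + N_k)) * (xbar_k - mu0) ** 2)
        nu_k = nu0 + N_k
        params.append((beta_k, m_k, 1.0 / W_k_inv, nu_k))
    return params

# Tiny example: four points hard-assigned to two components.
x = [0.0, 0.2, 5.0, 5.2]
r = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
print(m_step(x, r, mu0=0.0, beta0=1.0, W0=1.0, nu0=3.0))
```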
Now that we have determined the distributions over which these expectations are taken, we can derive formulas for them: E μ k , Λ k ⁡ [ ( x n − μ k ) T Λ k ( x n − μ k ) ] = D β k − 1 + ν k ( x n − m k ) T W k ( x n − m k ) ln ⁡ Λ ~ k ≡ E ⁡ [ ln ⁡ | Λ k | ] = ∑ i = 1 D ψ ( ν k + 1 − i 2 ) + D ln ⁡ 2 + ln ⁡ | W k | ln ⁡ π ~ k ≡ E ⁡ [ ln ⁡ | π k | ] = ψ ( α k ) − ψ ( ∑ i = 1 K α i ) {\displaystyle {\begin{aligned}\operatorname {E} _{\mathbf {\mu } _{k},\mathbf {\Lambda } _{k}}[(\mathbf {x} _{n}-\mathbf {\mu } _{k})^{\rm {T}}\mathbf {\Lambda } _{k}(\mathbf {x} _{n}-\mathbf {\mu } _{k})]&=D\beta _{k}^{-1}+\nu _{k}(\mathbf {x} _{n}-\mathbf {m} _{k})^{\rm {T}}\mathbf {W} _{k}(\mathbf {x} _{n}-\mathbf {m} _{k})\\\ln {\widetilde {\Lambda }}_{k}&\equiv \operatorname {E} [\ln |\mathbf {\Lambda } _{k}|]=\sum _{i=1}^{D}\psi \left({\frac {\nu _{k}+1-i}{2}}\right)+D\ln 2+\ln |\mathbf {W} _{k}|\\\ln {\widetilde {\pi }}_{k}&\equiv \operatorname {E} \left[\ln |\pi _{k}|\right]=\psi (\alpha _{k})-\psi \left(\sum _{i=1}^{K}\alpha _{i}\right)\end{aligned}}} These results lead to r n k ∝ π ~ k Λ ~ k 1 / 2 exp ⁡ { − D 2 β k − ν k 2 ( x n − m k ) T W k ( x n − m k ) } {\displaystyle r_{nk}\propto {\widetilde {\pi }}_{k}{\widetilde {\Lambda }}_{k}^{1/2}\exp \left\{-{\frac {D}{2\beta _{k}}}-{\frac {\nu _{k}}{2}}(\mathbf {x} _{n}-\mathbf {m} _{k})^{\rm {T}}\mathbf {W} _{k}(\mathbf {x} _{n}-\mathbf {m} _{k})\right\}} These can be converted from proportional to absolute values by normalizing over k {\displaystyle k} so that the corresponding values sum to 1. 
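The normalization step can be sketched as follows. Here log_rho[n][k] stands for ln ρ n k as computed from the expectations above (a hypothetical input in this sketch), and the max-subtraction (log-sum-exp) trick guards against floating-point underflow when the ρ n k are tiny.

```python
import math

def responsibilities(log_rho):
    """Normalize ln(rho_nk) values into responsibilities r_nk summing to 1 over k."""
    r = []
    for row in log_rho:
        m = max(row)                       # subtract the max for numerical stability
        w = [math.exp(v - m) for v in row]
        s = sum(w)
        r.append([wi / s for wi in w])
    return r

# Illustrative log-rho values for two points and three components.
r = responsibilities([[-1.0, -2.0, -3.0], [-10.0, -0.5, -0.5]])
print(r)
```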
Note that: The update equations for the parameters β k {\displaystyle \beta _{k}} , m k {\displaystyle \mathbf {m} _{k}} , W k {\displaystyle \mathbf {W} _{k}} and ν k {\displaystyle \nu _{k}} of the variables μ k {\displaystyle \mathbf {\mu } _{k}} and Λ k {\displaystyle \mathbf {\Lambda } _{k}} depend on the statistics N k {\displaystyle N_{k}} , x ¯ k {\displaystyle {\bar {\mathbf {x} }}_{k}} , and S k {\displaystyle \mathbf {S} _{k}} , and these statistics in turn depend on r n k {\displaystyle r_{nk}} . The update equations for the parameters α 1 … K {\displaystyle \alpha _{1\dots K}} of the variable π {\displaystyle \mathbf {\pi } } depend on the statistic N k {\displaystyle N_{k}} , which depends in turn on r n k {\displaystyle r_{nk}} . The update equation for r n k {\displaystyle r_{nk}} has a direct circular dependence on β k {\displaystyle \beta _{k}} , m k {\displaystyle \mathbf {m} _{k}} , W k {\displaystyle \mathbf {W} _{k}} and ν k {\displaystyle \nu _{k}} as well as an indirect circular dependence on W k {\displaystyle \mathbf {W} _{k}} , ν k {\displaystyle \nu _{k}} and α 1 … K {\displaystyle \alpha _{1\dots K}} through π ~ k {\displaystyle {\widetilde {\pi }}_{k}} and Λ ~ k {\displaystyle {\widetilde {\Lambda }}_{k}} . This suggests an iterative procedure that alternates between two steps: An E-step that computes the value of r n k {\displaystyle r_{nk}} using the current values of all the other parameters. An M-step that uses the new value of r n k {\displaystyle r_{nk}} to compute new values of all the other parameters. Note that these steps correspond closely with the standard EM algorithm to derive a maximum likelihood or maximum a posteriori (MAP) solution for the parameters of a Gaussian mixture model. The responsibilities r n k {\displaystyle r_{nk}} in the E step correspond closely to the posterior probabilities of the latent variables given the data, i.e. 
p ( Z ∣ X ) {\displaystyle p(\mathbf {Z} \mid \mathbf {X} )} ; the computation of the statistics N k {\displaystyle N_{k}} , x ¯ k {\displaystyle {\bar {\mathbf {x} }}_{k}} , and S k {\displaystyle \mathbf {S} _{k}} corresponds closely to the computation of corresponding "soft-count" statistics over the data; and the use of those statistics to compute new values of the parameters corresponds closely to the use of soft counts to compute new parameter values in normal EM over a Gaussian mixture model. == Exponential-family distributions == Note that in the previous example, once the distribution over unobserved variables was assumed to factorize into distributions over the "parameters" and distributions over the "latent data", the derived "best" distribution for each variable was in the same family as the corresponding prior distribution over the variable. This is a general result that holds true for all prior distributions derived from the exponential family. == See also == Variational message passing: a modular algorithm for variational Bayesian inference. Variational autoencoder: an artificial neural network belonging to the families of probabilistic graphical models and Variational Bayesian methods. Expectation–maximization algorithm: a related approach which corresponds to a special case of variational Bayesian inference. Generalized filtering: a variational filtering scheme for nonlinear state space models. Calculus of variations: the field of mathematical analysis that deals with maximizing or minimizing functionals. Maximum entropy discrimination: a variational inference framework that allows for introducing and accounting for additional large-margin constraints. == References == == External links == The on-line textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay provides an introduction to variational methods (p. 422). A Tutorial on Variational Bayes. Fox, C. and Roberts, S. 2012.
Artificial Intelligence Review, doi:10.1007/s10462-011-9236-8. Variational-Bayes Repository: a repository of research papers, software, and links related to the use of variational methods for approximate Bayesian learning up to 2003. Variational Algorithms for Approximate Bayesian Inference, by M. J. Beal includes comparisons of EM to Variational Bayesian EM and derivations of several models including Variational Bayesian HMMs. High-Level Explanation of Variational Inference by Jason Eisner may be worth reading before a more mathematically detailed treatment. Copula Variational Bayes inference via information geometry (pdf) by Tran, V.H. 2018. This paper is primarily written for students. Via Bregman divergence, the paper shows that Variational Bayes is simply a generalized Pythagorean projection of the true model onto an arbitrarily correlated (copula) distributional space, of which the independent space is merely a special case. An in-depth introduction to Variational Bayes. Nguyen, D. 2023
Wikipedia/Variational_inference
Super-resolution imaging (SR) is a class of techniques that improve the resolution of an imaging system. In optical SR the diffraction limit of systems is transcended, while in geometrical SR the resolution of digital imaging sensors is enhanced. In some radar and sonar imaging applications (e.g. magnetic resonance imaging (MRI), high-resolution computed tomography), subspace decomposition-based methods (e.g. MUSIC) and compressed sensing-based algorithms (e.g., SAMV) are employed to achieve SR over the standard periodogram algorithm. Super-resolution imaging techniques are used in general image processing and in super-resolution microscopy. == Basic concepts == Because some of the ideas surrounding super-resolution raise fundamental issues, there is a need at the outset to examine the relevant physical and information-theoretical principles: Diffraction limit: The detail of a physical object that an optical instrument can reproduce in an image has limits that are mandated by laws of physics, whether formulated by the diffraction equations in the wave theory of light or equivalently the uncertainty principle for photons in quantum mechanics. Information transfer can never be increased beyond this boundary, but packets outside the limits can be cleverly swapped for (or multiplexed with) some inside it. One does not so much “break” as “run around” the diffraction limit. New procedures probing electromagnetic disturbances at the molecular level (in the so-called near field) remain fully consistent with Maxwell's equations. Spatial-frequency domain: A succinct expression of the diffraction limit is given in the spatial-frequency domain. In Fourier optics light distributions are expressed as superpositions of a series of grating light patterns in a range of fringe widths, technically spatial frequencies.
It is generally taught that diffraction theory stipulates an upper limit, the cut-off spatial-frequency, beyond which pattern elements fail to be transferred into the optical image, i.e., are not resolved. But in fact what is set by diffraction theory is the width of the passband, not a fixed upper limit. No laws of physics are broken when a spatial frequency band beyond the cut-off spatial frequency is swapped for one inside it: this has long been implemented in dark-field microscopy. Nor are information-theoretical rules broken when superimposing several bands; disentangling them in the received image requires assumptions of object invariance during multiple exposures, i.e., the substitution of one kind of uncertainty for another. Information: When the term super-resolution is used in techniques of inferring object details from statistical treatment of the image within standard resolution limits, for example, averaging multiple exposures, it involves an exchange of one kind of information (extracting signal from noise) for another (the assumption that the target has remained invariant). Resolution and localization: True resolution involves the distinction of whether a target, e.g. a star or a spectral line, is single or double, ordinarily requiring separable peaks in the image. When a target is known to be single, its location can be determined with higher precision than the image width by finding the centroid (center of gravity) of its image light distribution. The word ultra-resolution had been proposed for this process but it did not catch on, and the high-precision localization procedure is typically referred to as super-resolution. The technical achievements of enhancing the performance of image-forming and -sensing devices now classified as super-resolution exploit these principles to the fullest, but always stay within the bounds imposed by the laws of physics and information theory.
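The localization procedure described under "Resolution and localization" can be sketched numerically: sample a blurred image of a single point source on a pixel grid and take the centroid of the recorded intensities. The blur width and source position below are arbitrary demo values, and a Gaussian is used as a stand-in for a real point-spread function.

```python
import math

def centroid(intensities):
    """Center of gravity of a sampled 1-D light distribution, in pixel units."""
    total = sum(intensities)
    return sum(i * v for i, v in enumerate(intensities)) / total

# A single point source at a sub-pixel position, blurred by a Gaussian-shaped
# spread function and sampled on a 15-pixel grid (pixel width = 1).
true_pos, sigma = 7.3, 1.5
pixels = [math.exp(-((i - true_pos) ** 2) / (2 * sigma ** 2)) for i in range(15)]
estimate = centroid(pixels)
```

Although the blur spans several pixels, the centroid recovers the source position to a small fraction of a pixel, provided all the light really does come from a single source.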
== Techniques == === Optical or diffractive super-resolution === Substituting spatial-frequency bands: Though the bandwidth allowable by diffraction is fixed, it can be positioned anywhere in the spatial-frequency spectrum. Dark-field illumination in microscopy is an example. See also aperture synthesis. ==== Multiplexing spatial-frequency bands ==== An image is formed using the normal passband of the optical device. Then some known light structure, for example a set of light fringes that need not even be within the passband, is superimposed on the target. The image now contains components resulting from the combination of the target and the superimposed light structure, e.g. moiré fringes, and carries information about target detail which simple unstructured illumination does not. The “superresolved” components, however, need disentangling to be revealed. For an example, see structured illumination (figure to left). ==== Multiple parameter use within traditional diffraction limit ==== If a target has no special polarization or wavelength properties, two polarization states or non-overlapping wavelength regions can be used to encode target details, one in a spatial-frequency band inside the cut-off limit, the other beyond it. Both would use normal passband transmission but are then separately decoded to reconstitute target structure with extended resolution. ==== Probing near-field electromagnetic disturbance ==== The usual discussion of super-resolution involved conventional imagery of an object by an optical system. But modern technology allows probing the electromagnetic disturbance within molecular distances of the source, which has superior resolution properties; see also evanescent waves and the development of the superlens. === Geometrical or image-processing super-resolution === ==== Multi-exposure image noise reduction ==== When an image is degraded by noise, there can be more detail in the average of many exposures, even within the diffraction limit.
See example on the right. ==== Single-frame deblurring ==== Known defects in a given imaging situation, such as defocus or aberrations, can sometimes be mitigated in whole or in part by suitable spatial-frequency filtering of even a single image. Such procedures all stay within the diffraction-mandated passband, and do not extend it. ==== Sub-pixel image localization ==== The location of a single source can be determined by computing the "center of gravity" (centroid) of the light distribution extending over several adjacent pixels (see figure on the left). Provided that there is enough light, this can be achieved with arbitrary precision, much better than the pixel width of the detecting apparatus and the resolution limit for deciding whether the source is single or double. This technique, which requires the presupposition that all the light comes from a single source, is the basis of what has become known as super-resolution microscopy, e.g. stochastic optical reconstruction microscopy (STORM), where fluorescent probes attached to molecules give nanoscale distance information. It is also the mechanism underlying visual hyperacuity. ==== Bayesian induction beyond traditional diffraction limit ==== Some object features, though beyond the diffraction limit, may be known to be associated with other object features that are within the limits and hence contained in the image. Then conclusions can be drawn, using statistical methods, from the available image data about the presence of the full object. The classical example is Toraldo di Francia's proposition of judging whether an image is that of a single or double star by determining whether its width exceeds the spread from a single star. This can be achieved at separations well below the classical resolution bounds, and requires the prior limitation to the choice "single or double?"
The approach can take the form of extrapolating the image in the frequency domain, by assuming that the object is an analytic function, and that we can know the function values exactly in some interval. This method is severely limited by the ever-present noise in digital imaging systems, but it can work for radar, astronomy, microscopy or magnetic resonance imaging. More recently, a fast single image super-resolution algorithm based on a closed-form solution to ℓ 2 − ℓ 2 {\displaystyle \ell _{2}-\ell _{2}} problems has been proposed and demonstrated to accelerate most of the existing Bayesian super-resolution methods significantly. == Aliasing == Geometrical SR reconstruction algorithms are possible if and only if the input low-resolution images have been under-sampled and therefore contain aliasing. Because of this aliasing, the high-frequency content of the desired reconstruction image is embedded in the low-frequency content of each of the observed images. Given a sufficient number of observation images, and if the set of observations varies in phase (i.e. if the images of the scene are shifted by a sub-pixel amount), then the phase information can be used to separate the aliased high-frequency content from the true low-frequency content, and the full-resolution image can be accurately reconstructed. In practice, this frequency-based approach is not used for reconstruction, but even in the case of spatial approaches (e.g. shift-add fusion), the presence of aliasing is still a necessary condition for SR reconstruction. == Technical implementations == There are many variants of SR, both single-frame and multiple-frame. Multiple-frame SR uses the sub-pixel shifts between multiple low-resolution images of the same scene. It creates an improved-resolution image by fusing information from all the low-resolution images, and the created higher-resolution images are better descriptions of the scene. Single-frame SR methods attempt to magnify the image without producing blur.
These methods use other parts of the low-resolution images, or other unrelated images, to guess what the high-resolution image should look like. Algorithms can also be divided by their domain: frequency or space domain. Originally, super-resolution methods worked well only on grayscale images, but researchers have found methods to adapt them to color camera images. Recently, the use of super-resolution for 3D data has also been shown. == Research == There is promising research on using deep convolutional networks to perform super-resolution. In particular, work has been demonstrated showing the transformation of a 20x microscope image of pollen grains into a 1500x scanning electron microscope image using such a network. While this technique can increase the information content of an image, there is no guarantee that the upscaled features exist in the original image, and deep convolutional upscalers should not be used in analytical applications with ambiguous inputs. These methods can hallucinate image features, which can make them unsafe for medical use. == See also == Optical resolution Oversampling Video super-resolution Single-particle trajectory Superoscillation == References == === Other related work === Curtis, Craig H.; Milster, Tom D. (October 1992). "Analysis of Superresolution in Magneto-Optic Data Storage Devices". Applied Optics. 31 (29): 6272–6279. Bibcode:1992ApOpt..31.6272M. doi:10.1364/AO.31.006272. PMID 20733840. Zalevsky, Z.; Mendlovic, D. (2003). Optical Superresolution. Springer. ISBN 978-0-387-00591-1. Caron, J.N. (September 2004). "Rapid supersampling of multiframe sequences by use of blind deconvolution". Optics Letters. 29 (17): 1986–1988. Bibcode:2004OptL...29.1986C. doi:10.1364/OL.29.001986. PMID 15455755. Clement, G.T.; Huttunen, J.; Hynynen, K. (2005). "Superresolution ultrasound imaging using back-projected reconstruction". Journal of the Acoustical Society of America. 118 (6): 3953–3960. Bibcode:2005ASAJ..118.3953C. doi:10.1121/1.2109167.
PMID 16419839. Geisler, W.S.; Perry, J.S. (2011). "Statistics for optimal point prediction in natural images". Journal of Vision. 11 (12): 14. doi:10.1167/11.12.14. PMC 5144165. PMID 22011382. Cheung, V.; Frey, B. J.; Jojic, N. (20–25 June 2005). Video epitomes (PDF). Conference on Computer Vision and Pattern Recognition (CVPR). Vol. 1. pp. 42–49. doi:10.1109/CVPR.2005.366. Bertero, M.; Boccacci, P. (October 2003). "Super-resolution in computational imaging". Micron. 34 (6–7): 265–273. doi:10.1016/s0968-4328(03)00051-9. PMID 12932769. Borman, S.; Stevenson, R. (1998). "Spatial Resolution Enhancement of Low-Resolution Image Sequences – A Comprehensive Review with Directions for Future Research" (Technical report). University of Notre Dame. Borman, S.; Stevenson, R. (1998). Super-resolution from image sequences — a review (PDF). Midwest Symposium on Circuits and Systems. Park, S. C.; Park, M. K.; Kang, M. G. (May 2003). "Super-resolution image reconstruction: a technical overview". IEEE Signal Processing Magazine. 20 (3): 21–36. Bibcode:2003ISPM...20...21P. doi:10.1109/MSP.2003.1203207. S2CID 12320918. Farsiu, S.; Robinson, D.; Elad, M.; Milanfar, P. (August 2004). "Advances and Challenges in Super-Resolution". International Journal of Imaging Systems and Technology. 14 (2): 47–57. doi:10.1002/ima.20007. S2CID 12351561. Elad, M.; Hel-Or, Y. (August 2001). "Fast Super-Resolution Reconstruction Algorithm for Pure Translational Motion and Common Space-Invariant Blur". IEEE Transactions on Image Processing. 10 (8): 1187–1193. Bibcode:2001ITIP...10.1187E. CiteSeerX 10.1.1.11.2502. doi:10.1109/83.935034. PMID 18255535. Irani, M.; Peleg, S. (June 1990). Super Resolution From Image Sequences (PDF). International Conference on Pattern Recognition. Vol. 2. pp. 115–120. Sroubek, F.; Cristobal, G.; Flusser, J. (2007). "A Unified Approach to Superresolution and Multichannel Blind Deconvolution". IEEE Transactions on Image Processing. 16 (9): 2322–2332. 
Bibcode:2007ITIP...16.2322S. doi:10.1109/TIP.2007.903256. PMID 17784605. S2CID 6367149. Calabuig, Alejandro; Micó, Vicente; Garcia, Javier; Zalevsky, Zeev; Ferreira, Carlos (March 2011). "Single-exposure super-resolved interferometric microscopy by red–green–blue multiplexing". Optics Letters. 36 (6): 885–887. Bibcode:2011OptL...36..885C. doi:10.1364/OL.36.000885. PMID 21403717. Chan, Wai-San; Lam, Edmund; Ng, Michael K.; Mak, Giuseppe Y. (September 2007). "Super-resolution reconstruction in a computational compound-eye imaging system". Multidimensional Systems and Signal Processing. 18 (2–3): 83–101. Bibcode:2007MSySP..18...83C. doi:10.1007/s11045-007-0022-3. S2CID 16452552. Ng, Michael K.; Shen, Huanfeng; Lam, Edmund Y.; Zhang, Liangpei (2007). "A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video". EURASIP Journal on Advances in Signal Processing. 2007: 074585. Bibcode:2007EJASP2007..104N. doi:10.1155/2007/74585. hdl:10722/73871. Glasner, D.; Bagon, S.; Irani, M. (October 2009). Super-Resolution from a Single Image (PDF). International Conference on Computer Vision (ICCV).; "example and results". Ben-Ezra, M.; Lin, Zhouchen; Wilburn, B.; Zhang, Wei (July 2011). "Penrose Pixels for Super-Resolution" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 33 (7): 1370–1383. CiteSeerX 10.1.1.174.8804. doi:10.1109/TPAMI.2010.213. PMID 21135446. S2CID 184868. Berliner, L.; Buffa, A. (2011). "Super-resolution variable-dose imaging in digital radiography: quality and dose reduction with a fluoroscopic flat-panel detector". Int J Comput Assist Radiol Surg. 6 (5): 663–673. doi:10.1007/s11548-011-0545-9. PMID 21298404. Timofte, R.; De Smet, V.; Van Gool, L. (November 2014). A+: Adjusted Anchored Neighborhood Regression for Fast Super-Resolution (PDF). 12th Asian Conference on Computer Vision (ACCV).; "codes and data". Huang, J.-B; Singh, A.; Ahuja, N. (June 2015). 
Single Image Super-Resolution from Transformed Self-Exemplars. IEEE Conference on Computer Vision and Pattern Recognition.; "project page". Christensen-Jeffries, T.; Couture, O.; Dayton, P.A.; Eldar, Y.C.; Hynynen, K.; Kiessling, F.; O’Reilly, M.; Pinton, G.F.; Schmitz, G.; Tang, M.-X.; Tanter, M.; Van Sloun, R.J.G. (2020). "Super-resolution Ultrasound Imaging". Ultrasound Med. Biol. 46 (4): 865–891. doi:10.1016/j.ultrasmedbio.2019.11.013. PMC 8388823. PMID 31973952.
Wikipedia/Super-resolution
In version control systems, a repository is a data structure that stores metadata for a set of files or directory structure. Depending on whether the version control system in use is distributed, like Git or Mercurial, or centralized, like Subversion, CVS, or Perforce, the whole set of information in the repository may be duplicated on every user's system or may be maintained on a single server. Some of the metadata that a repository contains includes, among other things, a historical record of changes in the repository, a set of commit objects, and a set of references to commit objects, called heads. The main purpose of a repository is to store a set of files, as well as the history of changes made to those files. Exactly how each version control system handles storing those changes, however, differs greatly. For instance, Subversion in the past relied on a database instance but has since moved to storing its changes directly on the filesystem. These differences in storage techniques have generally led to diverse uses of version control by different groups, depending on their needs. == Overview == In software engineering, a version control system is used to keep track of versions of a set of files, usually to allow multiple developers to collaborate on a project. The repository keeps track of the files in the project, which is represented as a graph. A distributed version control system is made up of central and branch repositories. A central repository exists on the server. To make changes to it, a developer first works on a branch repository and then commits the change to the central repository. == Forges == A code forge is a web interface to a version control system. A user can commonly browse repositories and their constituent files on the page itself.
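The commit-objects-and-heads structure described above can be modeled in a few lines. This is a toy sketch, not how any real version control system stores data: class and field names are hypothetical, and real systems address commits by content hash rather than by direct object reference.

```python
class Commit:
    """A commit object: a snapshot of file contents plus a link to its parent."""
    def __init__(self, snapshot, message, parent=None):
        self.snapshot = dict(snapshot)   # filename -> contents at this commit
        self.message = message
        self.parent = parent             # previous commit, or None for the root

class Repository:
    """A toy repository: commit objects plus named references ("heads")."""
    def __init__(self):
        self.heads = {"main": None}      # branch name -> tip commit

    def commit(self, branch, snapshot, message):
        # The new commit's parent is the current head of the branch.
        self.heads[branch] = Commit(snapshot, message,
                                    parent=self.heads.get(branch))
        return self.heads[branch]

    def history(self, branch):
        # Walk parent links from the branch head back to the root commit.
        out, c = [], self.heads.get(branch)
        while c is not None:
            out.append(c.message)
            c = c.parent
        return out

repo = Repository()
repo.commit("main", {"README": "v1"}, "initial commit")
repo.commit("main", {"README": "v2"}, "update README")
```

Walking the parent links from a head reconstructs the historical record of changes, which is exactly the metadata role the article describes.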
=== Static web hosting === While forges are mainly used to perform version control operations, some forges allow users to host static web pages by uploading their source code (such as HTML and JavaScript, but not PHP) to a repository. This is usually done in order to provide documentation or a landing page for a software project. The use of repositories as a place to upload web documents allows version control to be integrated, and additionally allows quick iteration because changes are pushed through the version control system instead of having to upload the files through a protocol like FTP. Examples of this kind of service include GitHub Pages and GitLab Pages. == See also == Sandbox (software development) Software repository Codebase Git Forge (software) Comparison of source-code-hosting facilities == References ==
Wikipedia/Repository_(version_control)
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data). Especially in high-dimensional optimization problems this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate. The basic idea behind stochastic approximation can be traced back to the Robbins–Monro algorithm of the 1950s. Today, stochastic gradient descent has become an important optimization method in machine learning. == Background == Both statistical estimation and machine learning consider the problem of minimizing an objective function that has the form of a sum: Q ( w ) = 1 n ∑ i = 1 n Q i ( w ) , {\displaystyle Q(w)={\frac {1}{n}}\sum _{i=1}^{n}Q_{i}(w),} where the parameter w {\displaystyle w} that minimizes Q ( w ) {\displaystyle Q(w)} is to be estimated. Each summand function Q i {\displaystyle Q_{i}} is typically associated with the i {\displaystyle i} -th observation in the data set (used for training). In classical statistics, sum-minimization problems arise in least squares and in maximum-likelihood estimation (for independent observations). The general class of estimators that arise as minimizers of sums are called M-estimators. However, in statistics, it has been long recognized that requiring even local minimization is too restrictive for some problems of maximum-likelihood estimation. Therefore, contemporary statistical theorists often consider stationary points of the likelihood function (or zeros of its derivative, the score function, and other estimating equations). The sum-minimization problem also arises for empirical risk minimization. 
There, Q i ( w ) {\displaystyle Q_{i}(w)} is the value of the loss function at i {\displaystyle i} -th example, and Q ( w ) {\displaystyle Q(w)} is the empirical risk. When used to minimize the above function, a standard (or "batch") gradient descent method would perform the following iterations: w := w − η ∇ Q ( w ) = w − η n ∑ i = 1 n ∇ Q i ( w ) . {\displaystyle w:=w-\eta \,\nabla Q(w)=w-{\frac {\eta }{n}}\sum _{i=1}^{n}\nabla Q_{i}(w).} The step size is denoted by η {\displaystyle \eta } (sometimes called the learning rate in machine learning) and here " := {\displaystyle :=} " denotes the update of a variable in the algorithm. In many cases, the summand functions have a simple form that enables inexpensive evaluations of the sum-function and the sum gradient. For example, in statistics, one-parameter exponential families allow economical function-evaluations and gradient-evaluations. However, in other cases, evaluating the sum-gradient may require expensive evaluations of the gradients from all summand functions. When the training set is enormous and no simple formulas exist, evaluating the sums of gradients becomes very expensive, because evaluating the gradient requires evaluating all the summand functions' gradients. To economize on the computational cost at every iteration, stochastic gradient descent samples a subset of summand functions at every step. This is very effective in the case of large-scale machine learning problems. == Iterative method == In stochastic (or "on-line") gradient descent, the true gradient of Q ( w ) {\displaystyle Q(w)} is approximated by a gradient at a single sample: w := w − η ∇ Q i ( w ) . {\displaystyle w:=w-\eta \,\nabla Q_{i}(w).} As the algorithm sweeps through the training set, it performs the above update for each training sample. Several passes can be made over the training set until the algorithm converges. If this is done, the data can be shuffled for each pass to prevent cycles. 
Typical implementations may use an adaptive learning rate so that the algorithm converges. In pseudocode, stochastic gradient descent can be presented as follows:
Choose an initial vector of parameters w and learning rate η.
Repeat until an approximate minimum is obtained:
    Randomly shuffle the samples in the training set.
    For i = 1, 2, ..., n, do:
        w := w − η ∇Qi(w).
A compromise between computing the true gradient and the gradient at a single sample is to compute the gradient against more than one training sample (called a "mini-batch") at each step. This can perform significantly better than the "true" stochastic gradient descent described above, because the code can make use of vectorization libraries rather than computing each step separately, as was first shown in the work where this approach was called "the bunch-mode back-propagation algorithm". It may also result in smoother convergence, as the gradient computed at each step is averaged over more training samples. The convergence of stochastic gradient descent has been analyzed using the theories of convex minimization and of stochastic approximation. Briefly, when the learning rates η {\displaystyle \eta } decrease with an appropriate rate, and subject to relatively mild assumptions, stochastic gradient descent converges almost surely to a global minimum when the objective function is convex or pseudoconvex, and otherwise converges almost surely to a local minimum. This is in fact a consequence of the Robbins–Siegmund theorem. == Linear regression == Suppose we want to fit a straight line y ^ = w 1 + w 2 x {\displaystyle {\hat {y}}=w_{1}+w_{2}x} to a training set with observations ( ( x 1 , y 1 ) , ( x 2 , y 2 ) … , ( x n , y n ) ) {\displaystyle ((x_{1},y_{1}),(x_{2},y_{2})\ldots ,(x_{n},y_{n}))} and corresponding estimated responses ( y ^ 1 , y ^ 2 , … , y ^ n ) {\displaystyle ({\hat {y}}_{1},{\hat {y}}_{2},\ldots ,{\hat {y}}_{n})} using least squares. The objective function to be minimized is Q ( w ) = ∑ i = 1 n Q i ( w ) = ∑ i = 1 n ( y ^ i − y i ) 2 = ∑ i = 1 n ( w 1 + w 2 x i − y i ) 2 .
{\displaystyle Q(w)=\sum _{i=1}^{n}Q_{i}(w)=\sum _{i=1}^{n}\left({\hat {y}}_{i}-y_{i}\right)^{2}=\sum _{i=1}^{n}\left(w_{1}+w_{2}x_{i}-y_{i}\right)^{2}.} The last line in the above pseudocode for this specific problem will become: [ w 1 w 2 ] ← [ w 1 w 2 ] − η [ ∂ ∂ w 1 ( w 1 + w 2 x i − y i ) 2 ∂ ∂ w 2 ( w 1 + w 2 x i − y i ) 2 ] = [ w 1 w 2 ] − η [ 2 ( w 1 + w 2 x i − y i ) 2 x i ( w 1 + w 2 x i − y i ) ] . {\displaystyle {\begin{bmatrix}w_{1}\\w_{2}\end{bmatrix}}\leftarrow {\begin{bmatrix}w_{1}\\w_{2}\end{bmatrix}}-\eta {\begin{bmatrix}{\frac {\partial }{\partial w_{1}}}(w_{1}+w_{2}x_{i}-y_{i})^{2}\\{\frac {\partial }{\partial w_{2}}}(w_{1}+w_{2}x_{i}-y_{i})^{2}\end{bmatrix}}={\begin{bmatrix}w_{1}\\w_{2}\end{bmatrix}}-\eta {\begin{bmatrix}2(w_{1}+w_{2}x_{i}-y_{i})\\2x_{i}(w_{1}+w_{2}x_{i}-y_{i})\end{bmatrix}}.} Note that in each iteration or update step, the gradient is only evaluated at a single x i {\displaystyle x_{i}} . This is the key difference between stochastic gradient descent and batched gradient descent. In general, given a linear regression y ^ = ∑ k ∈ 1 : m w k x k {\displaystyle {\hat {y}}=\sum _{k\in 1:m}w_{k}x_{k}} problem, stochastic gradient descent behaves differently when m < n {\displaystyle m<n} (underparameterized) and m ≥ n {\displaystyle m\geq n} (overparameterized). In the overparameterized case, stochastic gradient descent converges to arg ⁡ min w : w T x k = y k ∀ k ∈ 1 : n ‖ w − w 0 ‖ {\displaystyle \arg \min _{w:w^{T}x_{k}=y_{k}\forall k\in 1:n}\|w-w_{0}\|} . That is, SGD converges to the interpolation solution with minimum distance from the starting w 0 {\displaystyle w_{0}} . This is true even when the learning rate remains constant. In the underparameterized case, SGD does not converge if learning rate remains constant. == History == In 1951, Herbert Robbins and Sutton Monro introduced the earliest stochastic approximation methods, preceding stochastic gradient descent. 
Building on this work one year later, Jack Kiefer and Jacob Wolfowitz published an optimization algorithm very close to stochastic gradient descent, using central differences as an approximation of the gradient. Later in the 1950s, Frank Rosenblatt used SGD to optimize his perceptron model, demonstrating the first applicability of stochastic gradient descent to neural networks. Backpropagation was first described in 1986, with stochastic gradient descent being used to efficiently optimize parameters across neural networks with multiple hidden layers. Soon after, another improvement was developed: mini-batch gradient descent, where small batches of data are substituted for single samples. In 1997, the practical performance benefits from vectorization achievable with such small batches were first explored, paving the way for efficient optimization in machine learning. As of 2023, this mini-batch approach remains the norm for training neural networks, balancing the benefits of stochastic gradient descent with those of full-batch gradient descent. By the 1980s, momentum had already been introduced, and was added to SGD optimization techniques in 1986. However, these optimization techniques assumed constant hyperparameters, i.e. a fixed learning rate and momentum parameter. In the 2010s, adaptive approaches to applying SGD with a per-parameter learning rate were introduced with AdaGrad (for "Adaptive Gradient") in 2011 and RMSprop (for "Root Mean Square Propagation") in 2012. In 2014, Adam (for "Adaptive Moment Estimation") was published, applying the adaptive approaches of RMSprop to momentum; many improvements and branches of Adam were then developed, such as AdamW and Adamax. Within machine learning, approaches to optimization in 2023 are dominated by Adam-derived optimizers.
TensorFlow and PyTorch, by far the most popular machine learning libraries, as of 2023 largely only include Adam-derived optimizers, as well as predecessors to Adam such as RMSprop and classic SGD. PyTorch also partially supports Limited-memory BFGS, a line-search method, but only for single-device setups without parameter groups. == Notable applications == Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks. Its use has also been reported in the Geophysics community, specifically in applications of Full Waveform Inversion (FWI). Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for training linear regression models, originally under the name ADALINE. Another stochastic gradient descent algorithm is the least mean squares (LMS) adaptive filter. == Extensions and variants == Many improvements on the basic stochastic gradient descent algorithm have been proposed and used. In particular, in machine learning, the need to set a learning rate (step size) has been recognized as problematic. Setting this parameter too high can cause the algorithm to diverge; setting it too low makes it slow to converge. A conceptually simple extension of stochastic gradient descent makes the learning rate a decreasing function ηt of the iteration number t, giving a learning rate schedule, so that the first iterations cause large changes in the parameters, while the later ones do only fine-tuning. Such schedules have been known since the work of MacQueen on k-means clustering. Practical guidance on choosing the step size in several variants of SGD is given by Spall.
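To make the learning-rate-schedule idea concrete, here is a minimal NumPy sketch of per-sample SGD on a least-squares fit with a decreasing step size ηt = η0/√t. The data, base rate, schedule, and iteration count are illustrative assumptions, not prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares data: y = 2 + 3x plus a little noise (illustrative).
x = rng.normal(size=100)
y = 2.0 + 3.0 * x + 0.1 * rng.normal(size=100)

w = np.zeros(2)   # w[0] = intercept w1, w[1] = slope w2
eta0 = 0.05       # base learning rate (illustrative choice)

for t in range(1, 2001):
    i = rng.integers(len(x))                        # draw one sample per step
    err = w[0] + w[1] * x[i] - y[i]                 # prediction error on that sample
    grad = np.array([2.0 * err, 2.0 * err * x[i]])  # gradient of (w1 + w2*x_i - y_i)^2
    w -= (eta0 / np.sqrt(t)) * grad                 # decreasing schedule eta_t = eta0 / sqrt(t)

print(w)  # w approaches [2, 3]
```

The early iterations take large steps and the later ones fine-tune, which is exactly the behavior the schedule is meant to produce.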
=== Implicit updates (ISGD) === As mentioned earlier, classical stochastic gradient descent is generally sensitive to learning rate η. Fast convergence requires large learning rates but this may induce numerical instability. The problem can be largely solved by considering implicit updates whereby the stochastic gradient is evaluated at the next iterate rather than the current one: w new := w old − η ∇ Q i ( w new ) . {\displaystyle w^{\text{new}}:=w^{\text{old}}-\eta \,\nabla Q_{i}(w^{\text{new}}).} This equation is implicit since w new {\displaystyle w^{\text{new}}} appears on both sides of the equation. It is a stochastic form of the proximal gradient method since the update can also be written as: w new := arg ⁡ min w { Q i ( w ) + 1 2 η ‖ w − w old ‖ 2 } . {\displaystyle w^{\text{new}}:=\arg \min _{w}\left\{Q_{i}(w)+{\frac {1}{2\eta }}\left\|w-w^{\text{old}}\right\|^{2}\right\}.} As an example, consider least squares with features x 1 , … , x n ∈ R p {\displaystyle x_{1},\ldots ,x_{n}\in \mathbb {R} ^{p}} and observations y 1 , … , y n ∈ R {\displaystyle y_{1},\ldots ,y_{n}\in \mathbb {R} } . We wish to solve: min w ∑ j = 1 n ( y j − x j ′ w ) 2 , {\displaystyle \min _{w}\sum _{j=1}^{n}\left(y_{j}-x_{j}'w\right)^{2},} where x j ′ w = x j 1 w 1 + x j , 2 w 2 + . . . + x j , p w p {\displaystyle x_{j}'w=x_{j1}w_{1}+x_{j,2}w_{2}+...+x_{j,p}w_{p}} indicates the inner product. Note that x {\displaystyle x} could have "1" as the first element to include an intercept. Classical stochastic gradient descent proceeds as follows: w new = w old + η ( y i − x i ′ w old ) x i {\displaystyle w^{\text{new}}=w^{\text{old}}+\eta \left(y_{i}-x_{i}'w^{\text{old}}\right)x_{i}} where i {\displaystyle i} is uniformly sampled between 1 and n {\displaystyle n} . Although theoretical convergence of this procedure happens under relatively mild assumptions, in practice the procedure can be quite unstable. 
In particular, when η {\displaystyle \eta } is misspecified so that I − η x i x i ′ {\displaystyle I-\eta x_{i}x_{i}'} has large absolute eigenvalues with high probability, the procedure may diverge numerically within a few iterations. In contrast, implicit stochastic gradient descent (shortened as ISGD) can be solved in closed-form as: w new = w old + η 1 + η ‖ x i ‖ 2 ( y i − x i ′ w old ) x i . {\displaystyle w^{\text{new}}=w^{\text{old}}+{\frac {\eta }{1+\eta \left\|x_{i}\right\|^{2}}}\left(y_{i}-x_{i}'w^{\text{old}}\right)x_{i}.} This procedure will remain numerically stable virtually for all η {\displaystyle \eta } as the learning rate is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and normalized least mean squares filter (NLMS). Even though a closed-form solution for ISGD is only possible in least squares, the procedure can be efficiently implemented in a wide range of models. Specifically, suppose that Q i ( w ) {\displaystyle Q_{i}(w)} depends on w {\displaystyle w} only through a linear combination with features x i {\displaystyle x_{i}} , so that we can write ∇ w Q i ( w ) = − q ( x i ′ w ) x i {\displaystyle \nabla _{w}Q_{i}(w)=-q(x_{i}'w)x_{i}} , where q ( ) ∈ R {\displaystyle q()\in \mathbb {R} } may depend on x i , y i {\displaystyle x_{i},y_{i}} as well but not on w {\displaystyle w} except through x i ′ w {\displaystyle x_{i}'w} . Least squares obeys this rule, and so does logistic regression, and most generalized linear models. For instance, in least squares, q ( x i ′ w ) = y i − x i ′ w {\displaystyle q(x_{i}'w)=y_{i}-x_{i}'w} , and in logistic regression q ( x i ′ w ) = y i − S ( x i ′ w ) {\displaystyle q(x_{i}'w)=y_{i}-S(x_{i}'w)} , where S ( u ) = e u / ( 1 + e u ) {\displaystyle S(u)=e^{u}/(1+e^{u})} is the logistic function. 
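The contrast between the classical update and the closed-form implicit update for least squares can be sketched numerically. In the script below (data, learning rate, and iteration count are illustrative assumptions) both updates use the same deliberately large η; the classical iterates blow up while the normalized ISGD step stays stable:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 2
X = rng.normal(size=(n, p))
w_true = np.array([1.0, -2.0])
y = X @ w_true + 0.01 * rng.normal(size=n)

eta = 5.0  # deliberately too large for the classical update

w_sgd = np.zeros(p)   # classical (explicit) iterates
w_isgd = np.zeros(p)  # implicit iterates
for _ in range(300):
    i = rng.integers(n)
    xi = X[i]
    # classical update: w + eta * (y_i - x_i'w) x_i
    w_sgd = w_sgd + eta * (y[i] - xi @ w_sgd) * xi
    # implicit update in closed form: step normalized by 1 + eta * ||x_i||^2
    w_isgd = w_isgd + eta / (1.0 + eta * (xi @ xi)) * (y[i] - xi @ w_isgd) * xi

print(np.linalg.norm(w_sgd - w_true))   # grows without bound (divergence)
print(np.linalg.norm(w_isgd - w_true))  # stays small
```

This mirrors the LMS-versus-NLMS comparison mentioned above: the implicit step behaves like a normalized filter update.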
In Poisson regression, q ( x i ′ w ) = y i − e x i ′ w {\displaystyle q(x_{i}'w)=y_{i}-e^{x_{i}'w}} , and so on. In such settings, ISGD is simply implemented as follows. Let f ( ξ ) = η q ( x i ′ w old + ξ ‖ x i ‖ 2 ) {\displaystyle f(\xi )=\eta q(x_{i}'w^{\text{old}}+\xi \|x_{i}\|^{2})} , where ξ {\displaystyle \xi } is scalar. Then, ISGD is equivalent to: w new = w old + ξ ∗ x i , where ξ ∗ = f ( ξ ∗ ) . {\displaystyle w^{\text{new}}=w^{\text{old}}+\xi ^{\ast }x_{i},~{\text{where}}~\xi ^{\ast }=f(\xi ^{\ast }).} The scaling factor ξ ∗ ∈ R {\displaystyle \xi ^{\ast }\in \mathbb {R} } can be found through the bisection method since in most regular models, such as the aforementioned generalized linear models, function q ( ) {\displaystyle q()} is decreasing, and thus the search bounds for ξ ∗ {\displaystyle \xi ^{\ast }} are [ min ( 0 , f ( 0 ) ) , max ( 0 , f ( 0 ) ) ] {\displaystyle [\min(0,f(0)),\max(0,f(0))]} . === Momentum === Further proposals include the momentum method or the heavy ball method, which in ML context appeared in Rumelhart, Hinton and Williams' paper on backpropagation learning and borrowed the idea from Soviet mathematician Boris Polyak's 1964 article on solving functional equations. 
Stochastic gradient descent with momentum remembers the update Δw at each iteration, and determines the next update as a linear combination of the gradient and the previous update: Δ w := α Δ w − η ∇ Q i ( w ) {\displaystyle \Delta w:=\alpha \Delta w-\eta \,\nabla Q_{i}(w)} w := w + Δ w {\displaystyle w:=w+\Delta w} that leads to: w := w − η ∇ Q i ( w ) + α Δ w {\displaystyle w:=w-\eta \,\nabla Q_{i}(w)+\alpha \Delta w} where the parameter w {\displaystyle w} which minimizes Q ( w ) {\displaystyle Q(w)} is to be estimated, η {\displaystyle \eta } is a step size (sometimes called the learning rate in machine learning) and α {\displaystyle \alpha } is an exponential decay factor between 0 and 1 that determines the relative contribution of the current gradient and earlier gradients to the weight change. The name momentum stems from an analogy to momentum in physics: the weight vector w {\displaystyle w} , thought of as a particle traveling through parameter space, incurs acceleration from the gradient of the loss ("force"). Unlike in classical stochastic gradient descent, it tends to keep traveling in the same direction, preventing oscillations. Momentum has been used successfully by computer scientists in the training of artificial neural networks for several decades. The momentum method is closely related to underdamped Langevin dynamics, and may be combined with simulated annealing. In mid-1980s the method was modified by Yurii Nesterov to use the gradient predicted at the next point, and the resulting so-called Nesterov Accelerated Gradient was sometimes used in ML in the 2010s. === Averaging === Averaged stochastic gradient descent, invented independently by Ruppert and Polyak in the late 1980s, is ordinary stochastic gradient descent that records an average of its parameter vector over time. That is, the update is the same as for ordinary stochastic gradient descent, but the algorithm also keeps track of w ¯ = 1 t ∑ i = 0 t − 1 w i . 
{\displaystyle {\bar {w}}={\frac {1}{t}}\sum _{i=0}^{t-1}w_{i}.} When optimization is done, this averaged parameter vector takes the place of w. === AdaGrad === AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative. Examples of such applications include natural language processing and image recognition. It still has a base learning rate η, but this is multiplied with the elements of a vector {Gj,j} which is the diagonal of the outer product matrix G = ∑ τ = 1 t g τ g τ T {\displaystyle G=\sum _{\tau =1}^{t}g_{\tau }g_{\tau }^{\mathsf {T}}} where g τ = ∇ Q i ( w ) {\displaystyle g_{\tau }=\nabla Q_{i}(w)} , the gradient, at iteration τ. The diagonal is given by G j , j = ∑ τ = 1 t g τ , j 2 . {\displaystyle G_{j,j}=\sum _{\tau =1}^{t}g_{\tau ,j}^{2}.} This vector essentially stores a historical sum of gradient squares by dimension and is updated after every iteration. The formula for an update is now w := w − η d i a g ( G ) − 1 2 ⊙ g {\displaystyle w:=w-\eta \,\mathrm {diag} (G)^{-{\frac {1}{2}}}\odot g} or, written as per-parameter updates, w j := w j − η G j , j g j . {\displaystyle w_{j}:=w_{j}-{\frac {\eta }{\sqrt {G_{j,j}}}}g_{j}.} Each {G(i,i)} gives rise to a scaling factor for the learning rate that applies to a single parameter wi. Since the denominator in this factor, G i = ∑ τ = 1 t g τ 2 {\textstyle {\sqrt {G_{i}}}={\sqrt {\sum _{\tau =1}^{t}g_{\tau }^{2}}}} is the ℓ2 norm of previous derivatives, extreme parameter updates get dampened, while parameters that get few or small updates receive higher learning rates. 
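The per-parameter AdaGrad update above can be sketched in a few lines. The quadratic objective, base rate η, and the small ε added to the denominator for numerical safety are illustrative assumptions (ε is not part of the formula above):

```python
import numpy as np

def adagrad_step(w, g, G, eta=0.5, eps=1e-8):
    """One AdaGrad update: accumulate squared gradients, scale each coordinate."""
    G = G + g * g                         # G_{j,j} += g_j^2
    w = w - eta / (np.sqrt(G) + eps) * g  # per-parameter rate eta / sqrt(G_{j,j})
    return w, G

# Hypothetical badly scaled quadratic: Q(w) = w1^2 + 10 * w2^2.
w = np.array([5.0, 5.0])
G = np.zeros(2)
for _ in range(1000):
    g = np.array([2.0 * w[0], 20.0 * w[1]])  # exact gradient used as the "stochastic" g
    w, G = adagrad_step(w, g, G)

print(w)  # both coordinates shrink toward 0 at a similar pace despite the scaling
```

Because each coordinate is divided by the root of its own accumulated squared gradients, the steeply scaled coordinate is dampened and the two make comparable progress.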
While designed for convex problems, AdaGrad has been successfully applied to non-convex optimization. === RMSProp === RMSProp (for Root Mean Square Propagation) is a method invented in 2012 by James Martens and Ilya Sutskever, at the time both PhD students in Geoffrey Hinton's group, in which the learning rate is, like in Adagrad, adapted for each of the parameters. The idea is to divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight. Unusually, it was not published in an article but merely described in a Coursera lecture. First, the running average is calculated in terms of the mean square, v ( w , t ) := γ v ( w , t − 1 ) + ( 1 − γ ) ( ∇ Q i ( w ) ) 2 {\displaystyle v(w,t):=\gamma v(w,t-1)+\left(1-\gamma \right)\left(\nabla Q_{i}(w)\right)^{2}} where γ {\displaystyle \gamma } is the forgetting factor. The concept of storing the historical gradient as a sum of squares is borrowed from Adagrad, but "forgetting" is introduced to solve Adagrad's diminishing learning rates in non-convex problems by gradually decreasing the influence of old data. The parameters are then updated as w := w − η v ( w , t ) ∇ Q i ( w ) {\displaystyle w:=w-{\frac {\eta }{\sqrt {v(w,t)}}}\nabla Q_{i}(w)} RMSProp has shown good adaptation of learning rate in different applications. RMSProp can be seen as a generalization of Rprop and can work with mini-batches as well, as opposed to only full batches. === Adam === Adam (short for Adaptive Moment Estimation) is a 2014 update to the RMSProp optimizer combining it with the main feature of the Momentum method.
In this optimization algorithm, running averages with exponential forgetting of both the gradients and the second moments of the gradients are used. Given parameters w ( t ) {\displaystyle w^{(t)}} and a loss function L ( t ) {\displaystyle L^{(t)}} , where t {\displaystyle t} indexes the current training iteration (indexed at 1 {\displaystyle 1} ), Adam's parameter update is given by: m w ( t ) := β 1 m w ( t − 1 ) + ( 1 − β 1 ) ∇ w L ( t − 1 ) {\displaystyle m_{w}^{(t)}:=\beta _{1}m_{w}^{(t-1)}+\left(1-\beta _{1}\right)\nabla _{w}L^{(t-1)}} v w ( t ) := β 2 v w ( t − 1 ) + ( 1 − β 2 ) ( ∇ w L ( t − 1 ) ) 2 {\displaystyle v_{w}^{(t)}:=\beta _{2}v_{w}^{(t-1)}+\left(1-\beta _{2}\right)\left(\nabla _{w}L^{(t-1)}\right)^{2}} m ^ w ( t ) = m w ( t ) 1 − β 1 t {\displaystyle {\hat {m}}_{w}^{(t)}={\frac {m_{w}^{(t)}}{1-\beta _{1}^{t}}}} v ^ w ( t ) = v w ( t ) 1 − β 2 t {\displaystyle {\hat {v}}_{w}^{(t)}={\frac {v_{w}^{(t)}}{1-\beta _{2}^{t}}}} w ( t ) := w ( t − 1 ) − η m ^ w ( t ) v ^ w ( t ) + ε {\displaystyle w^{(t)}:=w^{(t-1)}-\eta {\frac {{\hat {m}}_{w}^{(t)}}{{\sqrt {{\hat {v}}_{w}^{(t)}}}+\varepsilon }}} where ε {\displaystyle \varepsilon } is a small scalar (e.g. 10 − 8 {\displaystyle 10^{-8}} ) used to prevent division by 0, and β 1 {\displaystyle \beta _{1}} (e.g. 0.9) and β 2 {\displaystyle \beta _{2}} (e.g. 0.999) are the forgetting factors for gradients and second moments of gradients, respectively. Squaring and square-rooting is done element-wise. As the exponential moving averages of the gradient m w ( t ) {\displaystyle m_{w}^{(t)}} and the squared gradient v w ( t ) {\displaystyle v_{w}^{(t)}} are initialized with a vector of 0's, there would be a bias towards zero in the first training iterations. A factor 1 1 − β 1 / 2 t {\displaystyle {\tfrac {1}{1-\beta _{1/2}^{t}}}} is introduced to compensate this bias and get better estimates m ^ w ( t ) {\displaystyle {\hat {m}}_{w}^{(t)}} and v ^ w ( t ) {\displaystyle {\hat {v}}_{w}^{(t)}} . 
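The five update equations above can be transcribed directly into code. The sketch below applies them to a hypothetical one-dimensional loss L(w) = (w − 3)²; the loss, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

def adam_step(w, grad, m, v, t, eta=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update, transcribing the equations above (t starts at 1)."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment running average
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment running average
    m_hat = m / (1 - beta1 ** t)             # bias correction for the first moment
    v_hat = v / (1 - beta2 ** t)             # bias correction for the second moment
    w = w - eta * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Hypothetical 1-D loss L(w) = (w - 3)^2, with gradient 2(w - 3).
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 3001):
    w, m, v = adam_step(w, 2.0 * (w - 3.0), m, v, t)

print(w)  # close to the minimizer 3
```

Note how the bias corrections matter most in the first iterations, when m and v are still close to their zero initialization.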
The initial proof establishing the convergence of Adam was incomplete, and subsequent analysis has revealed that Adam does not converge for all convex objectives. Despite this, Adam continues to be used due to its strong performance in practice. ==== Variants ==== The popularity of Adam inspired many variants and enhancements. Some examples include:
Nesterov-enhanced gradients: NAdam, FASFA
Varying interpretations of second-order information: Powerpropagation, AdaSqrt
Using the infinity norm: AdaMax
AMSGrad, which improves convergence over Adam by using the maximum of past squared gradients instead of the exponential average; AdamX further improves convergence over AMSGrad
AdamW, which improves the weight decay
=== Sign-based stochastic gradient descent === Even though sign-based optimization goes back to the aforementioned Rprop, in 2018 researchers tried to simplify Adam by discarding the magnitude of the stochastic gradient and considering only its sign. === Backtracking line search === Backtracking line search is another variant of gradient descent. It is based on a condition known as the Armijo–Goldstein condition. Backtracking line search and adaptive SGD both allow learning rates to change at each iteration; however, the manner of the change is different. Backtracking line search uses function evaluations to check Armijo's condition, and in principle the loop in the algorithm for determining the learning rates can be long and unknown in advance. Adaptive SGD does not need a loop in determining learning rates. On the other hand, adaptive SGD does not guarantee the "descent property" – which backtracking line search enjoys – which is that f ( x n + 1 ) ≤ f ( x n ) {\displaystyle f(x_{n+1})\leq f(x_{n})} for all n.
If the gradient of the cost function is globally Lipschitz continuous, with Lipschitz constant L, and the learning rate is chosen of the order 1/L, then the standard version of SGD is a special case of backtracking line search. === Second-order methods === A stochastic analogue of the standard (deterministic) Newton–Raphson algorithm (a "second-order" method) provides an asymptotically optimal or near-optimal form of iterative optimization in the setting of stochastic approximation. A method that uses direct measurements of the Hessian matrices of the summands in the empirical risk function was developed by Byrd, Hansen, Nocedal, and Singer. However, directly determining the required Hessian matrices for optimization may not be possible in practice. Practical and theoretically sound methods for second-order versions of SGD that do not require direct Hessian information are given by Spall and others. (A less efficient method based on finite differences, instead of simultaneous perturbations, is given by Ruppert.) Another approach to approximating the Hessian matrix is to replace it with the Fisher information matrix, which transforms the usual gradient into the natural gradient. These methods not requiring direct Hessian information are based on either values of the summands in the above empirical risk function or values of the gradients of the summands (i.e., the SGD inputs). In particular, second-order optimality is asymptotically achievable without direct calculation of the Hessian matrices of the summands in the empirical risk function. When the objective is a nonlinear least-squares loss Q ( w ) = 1 n ∑ i = 1 n Q i ( w ) = 1 n ∑ i = 1 n ( m ( w ; x i ) − y i ) 2 , {\displaystyle Q(w)={\frac {1}{n}}\sum _{i=1}^{n}Q_{i}(w)={\frac {1}{n}}\sum _{i=1}^{n}(m(w;x_{i})-y_{i})^{2},} where m ( w ; x i ) {\displaystyle m(w;x_{i})} is the predictive model (e.g., a deep neural network), the objective's structure can be exploited to estimate 2nd order information using gradients only.
The resulting methods are simple and often effective. == Approximations in continuous time == For small learning rate η {\textstyle \eta } stochastic gradient descent ( w n ) n ∈ N 0 {\textstyle (w_{n})_{n\in \mathbb {N} _{0}}} can be viewed as a discretization of the gradient flow ODE d d t W t = − ∇ Q ( W t ) {\displaystyle {\frac {d}{dt}}W_{t}=-\nabla Q(W_{t})} subject to additional stochastic noise. This approximation is only valid on a finite time-horizon in the following sense: assume that all the coefficients Q i {\textstyle Q_{i}} are sufficiently smooth. Let T > 0 {\textstyle T>0} and g : R d → R {\textstyle g:\mathbb {R} ^{d}\to \mathbb {R} } be a sufficiently smooth test function. Then, there exists a constant C > 0 {\textstyle C>0} such that for all η > 0 {\textstyle \eta >0} max k = 0 , … , ⌊ T / η ⌋ | E [ g ( w k ) ] − g ( W k η ) | ≤ C η , {\displaystyle \max _{k=0,\dots ,\lfloor T/\eta \rfloor }\left|\mathbb {E} [g(w_{k})]-g(W_{k\eta })\right|\leq C\eta ,} where E {\textstyle \mathbb {E} } denotes taking the expectation with respect to the random choice of indices in the stochastic gradient descent scheme. Since this approximation does not capture the random fluctuations around the mean behavior of stochastic gradient descent, solutions to stochastic differential equations (SDEs) have been proposed as limiting objects.
More precisely, the solution to the SDE d W t = − ∇ ( Q ( W t ) + 1 4 η | ∇ Q ( W t ) | 2 ) d t + η Σ ( W t ) 1 / 2 d B t , {\displaystyle dW_{t}=-\nabla \left(Q(W_{t})+{\tfrac {1}{4}}\eta |\nabla Q(W_{t})|^{2}\right)dt+{\sqrt {\eta }}\Sigma (W_{t})^{1/2}dB_{t},} for Σ ( w ) = 1 n 2 ( ∑ i = 1 n Q i ( w ) − Q ( w ) ) ( ∑ i = 1 n Q i ( w ) − Q ( w ) ) T {\displaystyle \Sigma (w)={\frac {1}{n^{2}}}\left(\sum _{i=1}^{n}Q_{i}(w)-Q(w)\right)\left(\sum _{i=1}^{n}Q_{i}(w)-Q(w)\right)^{T}} where d B t {\textstyle dB_{t}} denotes the Itô integral with respect to a Brownian motion, is a more precise approximation in the sense that there exists a constant C > 0 {\textstyle C>0} such that max k = 0 , … , ⌊ T / η ⌋ | E [ g ( w k ) ] − E [ g ( W k η ) ] | ≤ C η 2 . {\displaystyle \max _{k=0,\dots ,\lfloor T/\eta \rfloor }\left|\mathbb {E} [g(w_{k})]-\mathbb {E} [g(W_{k\eta })]\right|\leq C\eta ^{2}.} However, this SDE only approximates the one-point motion of stochastic gradient descent. For an approximation of the stochastic flow one has to consider SDEs with infinite-dimensional noise. == See also ==
Backtracking line search
Broken Neural Scaling Law
Coordinate descent – changes one coordinate at a time, rather than one example
Linear classifier
Online machine learning
Stochastic hill climbing
Stochastic variance reduction
== Notes == == References == == Further reading ==
Bottou, Léon (2004), "Stochastic Learning", Advanced Lectures on Machine Learning, LNAI, vol. 3176, Springer, pp. 146–168, ISBN 978-3-540-23122-6
Buduma, Nikhil; Locascio, Nicholas (2017), "Beyond Gradient Descent", Fundamentals of Deep Learning: Designing Next-Generation Machine Intelligence Algorithms, O'Reilly, ISBN 9781491925584
LeCun, Yann A.; Bottou, Léon; Orr, Genevieve B.; Müller, Klaus-Robert (2012), "Efficient BackProp", Neural Networks: Tricks of the Trade, Springer, pp. 9–48, ISBN 978-3-642-35288-1
Spall, James C.
(2003), Introduction to Stochastic Search and Optimization, Wiley, ISBN 978-0-471-33052-3 == External links == "Gradient Descent, How Neural Networks Learn". 3Blue1Brown. October 16, 2017. Archived from the original on 2021-12-22 – via YouTube. Goh (April 4, 2017). "Why Momentum Really Works". Distill. 2 (4). doi:10.23915/distill.00006. Interactive paper explaining momentum.
Wikipedia/Adam_(optimization_algorithm)
The Open Neural Network Exchange (ONNX) [ˈɒnɪks] is an open-source artificial intelligence ecosystem of technology companies and research organizations that establish open standards for representing machine learning algorithms and software tools to promote innovation and collaboration in the AI sector. ONNX is available on GitHub. == History == ONNX was originally named Toffee and was developed by the PyTorch team at Facebook. In September 2017 it was renamed to ONNX and announced by Facebook and Microsoft. Later, IBM, Huawei, Intel, AMD, Arm and Qualcomm announced support for the initiative. In October 2017, Microsoft announced that it would add its Cognitive Toolkit and Project Brainwave platform to the initiative. In November 2019 ONNX was accepted as graduate project in Linux Foundation AI. In October 2020 Zetane Systems became a member of the ONNX ecosystem. == Intent == The initiative targets: === Framework interoperability === Allow developers to more easily move between frameworks, some of which may be more desirable for specific phases of the development process, such as fast training, network architecture flexibility or inferencing on mobile devices. === Shared optimization === Allow hardware vendors and others to improve the performance of artificial neural networks of multiple frameworks at once by targeting the ONNX representation. == Contents == ONNX provides definitions of an extensible computation graph model, built-in operators and standard data types, focused on inferencing (evaluation). Each computation dataflow graph is a list of nodes that form an acyclic graph. Nodes have inputs and outputs. Each node is a call to an operator. Metadata documents the graph. Built-in operators are to be available on each ONNX-supporting framework. 
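To make the graph description concrete, here is a toy pure-Python illustration (not the real ONNX API or file format) of a dataflow graph as a list of operator nodes with named inputs and outputs, evaluated in order; the operator names mimic ONNX's built-in Add, Mul, and Relu:

```python
# Toy stand-in for an ONNX-style graph (illustrative, not the onnx library).
ops = {
    "Add": lambda a, b: a + b,
    "Mul": lambda a, b: a * b,
    "Relu": lambda a: max(a, 0.0),
}

# Each node: (op_type, input names, output name) -- together an acyclic dataflow graph.
graph = [
    ("Mul", ["x", "w"], "xw"),
    ("Add", ["xw", "b"], "z"),
    ("Relu", ["z"], "y"),
]

def run(graph, feeds):
    """Evaluate the node list in order, threading named values through the graph."""
    values = dict(feeds)
    for op_type, inputs, output in graph:
        values[output] = ops[op_type](*(values[name] for name in inputs))
    return values

result = run(graph, {"x": 2.0, "w": -3.0, "b": 10.0})
print(result["y"])  # Relu(2 * -3 + 10) = 4.0
```

A real ONNX model stores the same kind of node list, plus operator schemas, typed inputs/outputs, and metadata, in a serialized protocol-buffer file.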
== See also ==
Neural Network Exchange Format
Comparison of deep learning software
Predictive Model Markup Language—an XML-based predictive model interchange format
PicklingTools—an open-source collection of tools for allowing C++ and Python systems to share information quickly and easily.
== References == == External links ==
Boyd, Eric (2017-09-07). "Microsoft and Facebook create open ecosystem for AI model interoperability – Microsoft Cognitive Toolkit". Microsoft Cognitive Toolkit. Retrieved 2017-10-11.
onnx: Open Neural Network Exchange, Open Neural Network Exchange, 2017-10-11, retrieved 2017-10-11
Wikipedia/Open_Neural_Network_Exchange
In information theory, the cross-entropy between two probability distributions p {\displaystyle p} and q {\displaystyle q} , over the same underlying set of events, measures the average number of bits needed to identify an event drawn from the set when the coding scheme used for the set is optimized for an estimated probability distribution q {\displaystyle q} , rather than the true distribution p {\displaystyle p} . == Definition == The cross-entropy of the distribution q {\displaystyle q} relative to a distribution p {\displaystyle p} over a given set is defined as follows: H ( p , q ) = − E p ⁡ [ log ⁡ q ] , {\displaystyle H(p,q)=-\operatorname {E} _{p}[\log q],} where E p ⁡ [ ⋅ ] {\displaystyle \operatorname {E} _{p}[\cdot ]} is the expected value operator with respect to the distribution p {\displaystyle p} . The definition may be formulated using the Kullback–Leibler divergence D K L ( p ∥ q ) {\displaystyle D_{\mathrm {KL} }(p\parallel q)} , the divergence of p {\displaystyle p} from q {\displaystyle q} (also known as the relative entropy of p {\displaystyle p} with respect to q {\displaystyle q} ): H ( p , q ) = H ( p ) + D K L ( p ∥ q ) , {\displaystyle H(p,q)=H(p)+D_{\mathrm {KL} }(p\parallel q),} where H ( p ) {\displaystyle H(p)} is the entropy of p {\displaystyle p} . For discrete probability distributions p {\displaystyle p} and q {\displaystyle q} with the same support X {\displaystyle {\mathcal {X}}} , this means H ( p , q ) = − ∑ x ∈ X p ( x ) log ⁡ q ( x ) . {\displaystyle H(p,q)=-\sum _{x\in {\mathcal {X}}}p(x)\,\log q(x).} The situation for continuous distributions is analogous. We have to assume that p {\displaystyle p} and q {\displaystyle q} are absolutely continuous with respect to some reference measure r {\displaystyle r} (usually r {\displaystyle r} is a Lebesgue measure on a Borel σ-algebra). Let P {\displaystyle P} and Q {\displaystyle Q} be probability density functions of p {\displaystyle p} and q {\displaystyle q} with respect to r {\displaystyle r} .
Then − ∫ X P ( x ) log ⁡ Q ( x ) d x = E p ⁡ [ − log ⁡ Q ] , {\displaystyle -\int _{\mathcal {X}}P(x)\,\log Q(x)\,\mathrm {d} x=\operatorname {E} _{p}[-\log Q],} and therefore H ( p , q ) = − ∫ X P ( x ) log ⁡ Q ( x ) d x . {\displaystyle H(p,q)=-\int _{\mathcal {X}}P(x)\,\log Q(x)\,\mathrm {d} x.} NB: The notation H ( p , q ) {\displaystyle H(p,q)} is also used for a different concept, the joint entropy of p {\displaystyle p} and q {\displaystyle q} . == Motivation == In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value x i {\displaystyle x_{i}} out of a set of possibilities { x 1 , … , x n } {\displaystyle \{x_{1},\ldots ,x_{n}\}} can be seen as representing an implicit probability distribution q ( x i ) = ( 1 2 ) ℓ i {\displaystyle q(x_{i})=\left({\frac {1}{2}}\right)^{\ell _{i}}} over { x 1 , … , x n } {\displaystyle \{x_{1},\ldots ,x_{n}\}} , where ℓ i {\displaystyle \ell _{i}} is the length of the code for x i {\displaystyle x_{i}} in bits. Therefore, cross-entropy can be interpreted as the expected message-length per datum when a wrong distribution q {\displaystyle q} is assumed while the data actually follows a distribution p {\displaystyle p} . That is why the expectation is taken over the true probability distribution p {\displaystyle p} and not q . {\displaystyle q.} Indeed, the expected message-length under the true distribution p {\displaystyle p} is E p ⁡ [ ℓ ] = − E p ⁡ [ ln ⁡ q ( x ) ln ⁡ ( 2 ) ] = − E p ⁡ [ log 2 ⁡ q ( x ) ] = − ∑ x i p ( x i ) log 2 ⁡ q ( x i ) = − ∑ x p ( x ) log 2 ⁡ q ( x ) = H ( p , q ) . {\displaystyle {\begin{aligned}\operatorname {E} _{p}[\ell ]&=-\operatorname {E} _{p}\left[{\frac {\ln {q(x)}}{\ln(2)}}\right]\\[1ex]&=-\operatorname {E} _{p}\left[\log _{2}{q(x)}\right]\\[1ex]&=-\sum _{x_{i}}p(x_{i})\,\log _{2}q(x_{i})\\[1ex]&=-\sum _{x}p(x)\,\log _{2}q(x)=H(p,q).\end{aligned}}} == Estimation == There are many situations where cross-entropy needs to be measured but the distribution of p {\displaystyle p} is unknown.
An example is language modeling, where a model is created based on a training set T {\displaystyle T} , and then its cross-entropy is measured on a test set to assess how accurate the model is in predicting the test data. In this example, p {\displaystyle p} is the true distribution of words in any corpus, and q {\displaystyle q} is the distribution of words as predicted by the model. Since the true distribution is unknown, cross-entropy cannot be directly calculated. In these cases, an estimate of cross-entropy is calculated using the following formula: H ( T , q ) = − ∑ i = 1 N 1 N log 2 ⁡ q ( x i ) {\displaystyle H(T,q)=-\sum _{i=1}^{N}{\frac {1}{N}}\log _{2}q(x_{i})} where N {\displaystyle N} is the size of the test set, and q ( x ) {\displaystyle q(x)} is the probability of event x {\displaystyle x} estimated from the training set. In other words, q ( x i ) {\displaystyle q(x_{i})} is the probability estimate of the model that the i-th word of the text is x i {\displaystyle x_{i}} . The sum is averaged over the N {\displaystyle N} words of the test. This is a Monte Carlo estimate of the true cross-entropy, where the test set is treated as samples from p ( x ) {\displaystyle p(x)} . == Relation to maximum likelihood == The cross entropy arises in classification problems when introducing a logarithm in the guise of the log-likelihood function. The section is concerned with the subject of estimation of the probability of different possible discrete outcomes. To this end, denote a parametrized family of distributions by q θ {\displaystyle q_{\theta }} , with θ {\displaystyle \theta } subject to the optimization effort. Consider a given finite sequence of N {\displaystyle N} values x i {\displaystyle x_{i}} from a training set, obtained from conditionally independent sampling. 
The likelihood assigned to any considered parameter θ {\displaystyle \theta } of the model is then given by the product over all probabilities q θ ( X = x i ) {\displaystyle q_{\theta }(X=x_{i})} . Repeated occurrences are possible, leading to equal factors in the product. If the count of occurrences of the value equal to x i {\displaystyle x_{i}} (for some index i {\displaystyle i} ) is denoted by # x i {\displaystyle \#x_{i}} , then the frequency of that value equals # x i / N {\displaystyle \#x_{i}/N} . Denote the latter by p ( X = x i ) {\displaystyle p(X=x_{i})} , as it may be understood as empirical approximation to the probability distribution underlying the scenario. Further denote by P P := e H ( p , q θ ) {\displaystyle PP:={\mathrm {e} }^{H(p,q_{\theta })}} the perplexity, which can be seen to equal ∏ x i q θ ( X = x i ) − p ( X = x i ) {\textstyle \prod _{x_{i}}q_{\theta }(X=x_{i})^{-p(X=x_{i})}} by the calculation rules for the logarithm, and where the product is over the values without double counting. So L ( θ ; x ) = ∏ i q θ ( X = x i ) = ∏ x i q θ ( X = x i ) # x i = P P − N = e − N ⋅ H ( p , q θ ) {\displaystyle {\mathcal {L}}(\theta ;{\mathbf {x} })=\prod _{i}q_{\theta }(X=x_{i})=\prod _{x_{i}}q_{\theta }(X=x_{i})^{\#x_{i}}=PP^{-N}={\mathrm {e} }^{-N\cdot H(p,q_{\theta })}} or log ⁡ L ( θ ; x ) = − N ⋅ H ( p , q θ ) . {\displaystyle \log {\mathcal {L}}(\theta ;{\mathbf {x} })=-N\cdot H(p,q_{\theta }).} Since the logarithm is a monotonically increasing function, it does not affect extremization. So observe that the likelihood maximization amounts to minimization of the cross-entropy. == Cross-entropy minimization == Cross-entropy minimization is frequently used in optimization and rare-event probability estimation. 
When comparing a distribution q {\displaystyle q} against a fixed reference distribution p {\displaystyle p} , cross-entropy and KL divergence are identical up to an additive constant (since p {\displaystyle p} is fixed): H ( p , q ) = D K L ( p ∥ q ) + H ( p ) {\displaystyle H(p,q)=D_{\mathrm {KL} }(p\parallel q)+\mathrm {H} (p)} . According to Gibbs' inequality, both take on their minimal values when p = q {\displaystyle p=q} , which is 0 {\displaystyle 0} for KL divergence, and H ( p ) {\displaystyle \mathrm {H} (p)} for cross-entropy. In the engineering literature, the principle of minimizing KL divergence (Kullback's "Principle of Minimum Discrimination Information") is often called the Principle of Minimum Cross-Entropy (MCE), or Minxent. However, as discussed in the article Kullback–Leibler divergence, sometimes the distribution q {\displaystyle q} is the fixed prior reference distribution, and the distribution p {\displaystyle p} is optimized to be as close to q {\displaystyle q} as possible, subject to some constraint. In this case the two minimizations are not equivalent. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by restating cross-entropy to be D K L ( p ∥ q ) {\displaystyle D_{\mathrm {KL} }(p\parallel q)} , rather than H ( p , q ) {\displaystyle H(p,q)} . In fact, cross-entropy is another name for relative entropy; see Cover and Thomas and Good. On the other hand, H ( p , q ) {\displaystyle H(p,q)} does not agree with the literature and can be misleading. == Cross-entropy loss function and logistic regression == Cross-entropy can be used to define a loss function in machine learning and optimization. Mao, Mohri, and Zhong (2023) give an extensive analysis of the properties of the family of cross-entropy loss functions in machine learning, including theoretical learning guarantees and extensions to adversarial learning. The true probability p i {\displaystyle p_{i}} is the true label, and the given distribution q i {\displaystyle q_{i}} is the predicted value of the current model.
This is also known as the log loss (or logarithmic loss or logistic loss); the terms "log loss" and "cross-entropy loss" are used interchangeably. More specifically, consider a binary regression model which can be used to classify observations into two possible classes (often simply labelled 0 {\displaystyle 0} and 1 {\displaystyle 1} ). The output of the model for a given observation, given a vector of input features x {\displaystyle x} , can be interpreted as a probability, which serves as the basis for classifying the observation. In logistic regression, the probability is modeled using the logistic function g ( z ) = 1 / ( 1 + e − z ) {\displaystyle g(z)=1/(1+e^{-z})} where z {\displaystyle z} is some function of the input vector x {\displaystyle x} , commonly just a linear function. The probability of the output y = 1 {\displaystyle y=1} is given by q y = 1 = y ^ ≡ g ( w ⋅ x ) = 1 1 + e − w ⋅ x , {\displaystyle q_{y=1}={\hat {y}}\equiv g(\mathbf {w} \cdot \mathbf {x} )={\frac {1}{1+e^{-\mathbf {w} \cdot \mathbf {x} }}},} where the vector of weights w {\displaystyle \mathbf {w} } is optimized through some appropriate algorithm such as gradient descent. Similarly, the complementary probability of finding the output y = 0 {\displaystyle y=0} is simply given by q y = 0 = 1 − y ^ . {\displaystyle q_{y=0}=1-{\hat {y}}.} Having set up our notation, p ∈ { y , 1 − y } {\displaystyle p\in \{y,1-y\}} and q ∈ { y ^ , 1 − y ^ } {\displaystyle q\in \{{\hat {y}},1-{\hat {y}}\}} , we can use cross-entropy to get a measure of dissimilarity between p {\displaystyle p} and q {\displaystyle q} : H ( p , q ) = − ∑ i p i log ⁡ q i = − y log ⁡ y ^ − ( 1 − y ) log ⁡ ( 1 − y ^ ) . {\displaystyle {\begin{aligned}H(p,q)&=-\sum _{i}p_{i}\log q_{i}\\[1ex]&=-y\log {\hat {y}}-(1-y)\log(1-{\hat {y}}).\end{aligned}}} Logistic regression typically optimizes the log loss for all the observations on which it is trained, which is the same as optimizing the average cross-entropy in the sample. 
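A minimal sketch of this setup in plain Python (the weights and feature values below are made-up numbers for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_loss(y, y_hat):
    """Cross-entropy between the true label y in {0, 1} and the
    predicted probability y_hat = q(y = 1)."""
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

# Hypothetical weights and input features.
w = [0.4, -0.3]
x = [1.0, 2.0]
y_hat = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))  # q(y=1 | x)

# The loss is small when y_hat is close to the true label.
print(log_loss(1, y_hat), log_loss(0, y_hat))
```

Averaging `log_loss` over a training set gives exactly the sample cross-entropy described above.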
Other loss functions that penalize errors differently can also be used for training, resulting in models with different final test accuracy. For example, suppose we have N {\displaystyle N} samples with each sample indexed by n = 1 , … , N {\displaystyle n=1,\dots ,N} . The average of the loss function is then given by: J ( w ) = 1 N ∑ n = 1 N H ( p n , q n ) = − 1 N ∑ n = 1 N [ y n log ⁡ y ^ n + ( 1 − y n ) log ⁡ ( 1 − y ^ n ) ] , {\displaystyle {\begin{aligned}J(\mathbf {w} )&={\frac {1}{N}}\sum _{n=1}^{N}H(p_{n},q_{n})\\&=-{\frac {1}{N}}\sum _{n=1}^{N}\ \left[y_{n}\log {\hat {y}}_{n}+(1-y_{n})\log(1-{\hat {y}}_{n})\right],\end{aligned}}} where y ^ n ≡ g ( w ⋅ x n ) = 1 / ( 1 + e − w ⋅ x n ) {\displaystyle {\hat {y}}_{n}\equiv g(\mathbf {w} \cdot \mathbf {x} _{n})=1/(1+e^{-\mathbf {w} \cdot \mathbf {x} _{n}})} , with g ( z ) {\displaystyle g(z)} the logistic function as before. The logistic loss is sometimes called cross-entropy loss or log loss. (In this case, the binary label is often denoted by {−1,+1}.) Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared-error loss for linear regression. That is, define the design matrix X = ( 1 x 11 … x 1 p 1 x 21 ⋯ x 2 p ⋮ ⋮ ⋮ 1 x n 1 ⋯ x n p ) ∈ R n × ( p + 1 ) , {\displaystyle X={\begin{pmatrix}1&x_{11}&\dots &x_{1p}\\1&x_{21}&\cdots &x_{2p}\\\vdots &\vdots &&\vdots \\1&x_{n1}&\cdots &x_{np}\\\end{pmatrix}}\in \mathbb {R} ^{n\times (p+1)},} y i ^ = f ^ ( x i 1 , … , x i p ) = 1 1 + exp ⁡ ( − β 0 − β 1 x i 1 − ⋯ − β p x i p ) , {\displaystyle {\hat {y_{i}}}={\hat {f}}(x_{i1},\dots ,x_{ip})={\frac {1}{1+\exp(-\beta _{0}-\beta _{1}x_{i1}-\dots -\beta _{p}x_{ip})}},} L ( β ) = − ∑ i = 1 N [ y i log ⁡ y ^ i + ( 1 − y i ) log ⁡ ( 1 − y ^ i ) ] . {\displaystyle L({\boldsymbol {\beta }})=-\sum _{i=1}^{N}\left[y_{i}\log {\hat {y}}_{i}+(1-y_{i})\log(1-{\hat {y}}_{i})\right].} Then we have the result ∂ ∂ β L ( β ) = X T ( Y ^ − Y ) .
{\displaystyle {\frac {\partial }{\partial {\boldsymbol {\beta }}}}L({\boldsymbol {\beta }})=X^{T}({\hat {Y}}-Y).} The proof is as follows. For any y ^ i {\displaystyle {\hat {y}}_{i}} , we have ∂ ∂ β 0 ln ⁡ 1 1 + e − β 0 + k 0 = e − β 0 + k 0 1 + e − β 0 + k 0 , {\displaystyle {\frac {\partial }{\partial \beta _{0}}}\ln {\frac {1}{1+e^{-\beta _{0}+k_{0}}}}={\frac {e^{-\beta _{0}+k_{0}}}{1+e^{-\beta _{0}+k_{0}}}},} ∂ ∂ β 0 ln ⁡ ( 1 − 1 1 + e − β 0 + k 0 ) = − 1 1 + e − β 0 + k 0 , {\displaystyle {\frac {\partial }{\partial \beta _{0}}}\ln \left(1-{\frac {1}{1+e^{-\beta _{0}+k_{0}}}}\right)={\frac {-1}{1+e^{-\beta _{0}+k_{0}}}},} ∂ ∂ β 0 L ( β ) = − ∑ i = 1 N [ y i ⋅ e − β 0 + k 0 1 + e − β 0 + k 0 − ( 1 − y i ) 1 1 + e − β 0 + k 0 ] = − ∑ i = 1 N [ y i − y ^ i ] = ∑ i = 1 N ( y ^ i − y i ) , {\displaystyle {\begin{aligned}{\frac {\partial }{\partial \beta _{0}}}L({\boldsymbol {\beta }})&=-\sum _{i=1}^{N}\left[{\frac {y_{i}\cdot e^{-\beta _{0}+k_{0}}}{1+e^{-\beta _{0}+k_{0}}}}-(1-y_{i}){\frac {1}{1+e^{-\beta _{0}+k_{0}}}}\right]\\&=-\sum _{i=1}^{N}\left[y_{i}-{\hat {y}}_{i}\right]=\sum _{i=1}^{N}({\hat {y}}_{i}-y_{i}),\end{aligned}}} ∂ ∂ β 1 ln ⁡ 1 1 + e − β 1 x i 1 + k 1 = x i 1 e k 1 e β 1 x i 1 + e k 1 , {\displaystyle {\frac {\partial }{\partial \beta _{1}}}\ln {\frac {1}{1+e^{-\beta _{1}x_{i1}+k_{1}}}}={\frac {x_{i1}e^{k_{1}}}{e^{\beta _{1}x_{i1}}+e^{k_{1}}}},} ∂ ∂ β 1 ln ⁡ [ 1 − 1 1 + e − β 1 x i 1 + k 1 ] = − x i 1 e β 1 x i 1 e β 1 x i 1 + e k 1 , {\displaystyle {\frac {\partial }{\partial \beta _{1}}}\ln \left[1-{\frac {1}{1+e^{-\beta _{1}x_{i1}+k_{1}}}}\right]={\frac {-x_{i1}e^{\beta _{1}x_{i1}}}{e^{\beta _{1}x_{i1}}+e^{k_{1}}}},} ∂ ∂ β 1 L ( β ) = − ∑ i = 1 N x i 1 ( y i − y ^ i ) = ∑ i = 1 N x i 1 ( y ^ i − y i ) . {\displaystyle {\frac {\partial }{\partial \beta _{1}}}L({\boldsymbol {\beta }})=-\sum _{i=1}^{N}x_{i1}(y_{i}-{\hat {y}}_{i})=\sum _{i=1}^{N}x_{i1}({\hat {y}}_{i}-y_{i}).} In a similar way, we eventually obtain the desired result. 
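The closed-form gradient X^T(Ŷ − Y) can also be checked numerically against finite differences. The following plain-Python sketch uses made-up data (features, labels, and coefficients are arbitrary illustration values):

```python
import math

def y_hat(beta, row):
    # logistic prediction with intercept beta[0]
    z = beta[0] + sum(b * v for b, v in zip(beta[1:], row))
    return 1.0 / (1.0 + math.exp(-z))

def loss(beta, X, y):
    # L(beta) = -sum_i [ y_i ln yhat_i + (1 - y_i) ln(1 - yhat_i) ]
    return -sum(yi * math.log(y_hat(beta, xi)) + (1 - yi) * math.log(1 - y_hat(beta, xi))
                for xi, yi in zip(X, y))

def grad(beta, X, y):
    # closed form X^T (yhat - y), with an implicit intercept column of ones
    g = [0.0] * len(beta)
    for xi, yi in zip(X, y):
        r = y_hat(beta, xi) - yi
        g[0] += r
        for j, v in enumerate(xi, start=1):
            g[j] += v * r
    return g

# Made-up data: three observations, two features.
X = [[1.0, 2.0], [0.5, -1.0], [-1.5, 0.3]]
y = [1, 0, 1]
beta = [0.1, -0.2, 0.3]
eps = 1e-6
numeric = [(loss([b + (eps if k == j else 0.0) for k, b in enumerate(beta)], X, y)
            - loss(beta, X, y)) / eps for j in range(len(beta))]
analytic = grad(beta, X, y)
print(all(abs(a - n) < 1e-4 for a, n in zip(analytic, numeric)))  # True
```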
== Amended cross-entropy == It may be beneficial to train an ensemble of models that have diversity, such that when they are combined, their predictive accuracy is augmented. Assuming a simple ensemble of K {\displaystyle K} classifiers is assembled via averaging the outputs, then the amended cross-entropy is given by e k = H ( p , q k ) − λ K ∑ j ≠ k H ( q j , q k ) {\displaystyle e^{k}=H(p,q^{k})-{\frac {\lambda }{K}}\sum _{j\neq k}H(q^{j},q^{k})} where e k {\displaystyle e^{k}} is the cost function of the k t h {\displaystyle k^{th}} classifier, q k {\displaystyle q^{k}} is the output probability of the k t h {\displaystyle k^{th}} classifier, p {\displaystyle p} is the true probability to be estimated, and λ {\displaystyle \lambda } is a parameter between 0 and 1 that defines the 'diversity' that we would like to establish among the ensemble. When λ = 0 {\displaystyle \lambda =0} we want each classifier to do its best regardless of the ensemble and when λ = 1 {\displaystyle \lambda =1} we would like the classifier to be as diverse as possible. == See also == Cross-entropy method Logistic regression Conditional entropy Kullback–Leibler distance Maximum-likelihood estimation Mutual information Perplexity == References == == Further reading == de Boer, Kroese, D.P., Mannor, S. and Rubinstein, R.Y. (2005). A tutorial on the cross-entropy method. Annals of Operations Research 134 (1), 19–67.
Wikipedia/Cross_entropy
Differential evolution (DE) is an evolutionary algorithm to optimize a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics as they make few or no assumptions about the optimized problem and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee an optimal solution is ever found. DE is used for multidimensional real-valued functions but does not use the gradient of the problem being optimized, which means DE does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. DE can therefore also be used on optimization problems that are not even continuous, are noisy, change over time, etc. DE optimizes a problem by maintaining a population of candidate solutions and creating new candidate solutions by combining existing ones according to its simple formulae, and then keeping whichever candidate solution has the best score or fitness on the optimization problem at hand. In this way, the optimization problem is treated as a black box that merely provides a measure of quality given a candidate solution and the gradient is therefore not needed. == History == Storn and Price introduced Differential Evolution in 1995. Books have been published on theoretical and practical aspects of using DE in parallel computing, multiobjective optimization, constrained optimization, and the books also contain surveys of application areas. Surveys on the multi-faceted research aspects of DE can be found in journal articles. == Algorithm == A basic variant of the DE algorithm works by having a population of candidate solutions (called agents). These agents are moved around in the search-space by using simple mathematical formulae to combine the positions of existing agents from the population.
If the new position of an agent is an improvement then it is accepted and forms part of the population, otherwise the new position is simply discarded. The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered. Formally, let f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } be the fitness function which must be minimized (note that maximization can be performed by considering the function h := − f {\displaystyle h:=-f} instead). The function takes a candidate solution as argument in the form of a vector of real numbers. It produces a real number as output which indicates the fitness of the given candidate solution. The gradient of f {\displaystyle f} is not known. The goal is to find a solution m {\displaystyle \mathbf {m} } for which f ( m ) ≤ f ( p ) {\displaystyle f(\mathbf {m} )\leq f(\mathbf {p} )} for all p {\displaystyle \mathbf {p} } in the search-space, which means that m {\displaystyle \mathbf {m} } is the global minimum. Let x ∈ R n {\displaystyle \mathbf {x} \in \mathbb {R} ^{n}} designate a candidate solution (agent) in the population. The basic DE algorithm can then be described as follows: Choose the parameters NP ≥ 4 {\displaystyle {\text{NP}}\geq 4} , CR ∈ [ 0 , 1 ] {\displaystyle {\text{CR}}\in [0,1]} , and F ∈ [ 0 , 2 ] {\displaystyle F\in [0,2]} . NP : NP {\displaystyle {\text{NP}}} is the population size, i.e. the number of candidate agents or "parents". CR : The parameter CR ∈ [ 0 , 1 ] {\displaystyle {\text{CR}}\in [0,1]} is called the crossover probability. F : The parameter F ∈ [ 0 , 2 ] {\displaystyle F\in [0,2]} is called the differential weight. Typical settings are N P = 10 n {\displaystyle NP=10n} , C R = 0.9 {\displaystyle CR=0.9} and F = 0.8 {\displaystyle F=0.8} . Optimization performance may be greatly impacted by these choices; see below. Initialize all agents x {\displaystyle \mathbf {x} } with random positions in the search-space. 
Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following: For each agent x {\displaystyle \mathbf {x} } in the population do: Pick three agents a , b {\displaystyle \mathbf {a} ,\mathbf {b} } , and c {\displaystyle \mathbf {c} } from the population at random, they must be distinct from each other as well as from agent x {\displaystyle \mathbf {x} } . ( a {\displaystyle \mathbf {a} } is called the "base" vector.) Pick a random index R ∈ { 1 , … , n } {\displaystyle R\in \{1,\ldots ,n\}} where n {\displaystyle n} is the dimensionality of the problem being optimized. Compute the agent's potentially new position y = [ y 1 , … , y n ] {\displaystyle \mathbf {y} =[y_{1},\ldots ,y_{n}]} as follows: For each i ∈ { 1 , … , n } {\displaystyle i\in \{1,\ldots ,n\}} , pick a uniformly distributed random number r i ∼ U ( 0 , 1 ) {\displaystyle r_{i}\sim U(0,1)} If r i < C R {\displaystyle r_{i}<CR} or i = R {\displaystyle i=R} then set y i = a i + F × ( b i − c i ) {\displaystyle y_{i}=a_{i}+F\times (b_{i}-c_{i})} otherwise set y i = x i {\displaystyle y_{i}=x_{i}} . (Index position R {\displaystyle R} is replaced for certain.) If f ( y ) ≤ f ( x ) {\displaystyle f(\mathbf {y} )\leq f(\mathbf {x} )} then replace the agent x {\displaystyle \mathbf {x} } in the population with the improved or equal candidate solution y {\displaystyle \mathbf {y} } . Pick the agent from the population that has the best fitness and return it as the best found candidate solution. == Parameter selection == The choice of DE parameters NP {\displaystyle {\text{NP}}} , CR {\displaystyle {\text{CR}}} and F {\displaystyle F} can have a large impact on optimization performance. Selecting the DE parameters that yield good performance has therefore been the subject of much research. Rules of thumb for parameter selection were devised by Storn et al. and Liu and Lampinen. 
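The basic algorithm described above (sometimes labelled DE/rand/1/bin) can be sketched compactly in Python; the sphere objective, bounds, and iteration budget below are illustration choices, with the typical settings F = 0.8 and CR = 0.9 mentioned earlier:

```python
import random

def differential_evolution(f, bounds, NP=40, CR=0.9, F=0.8, iters=200, seed=0):
    """Basic DE: mutate with a + F*(b - c), binomial crossover, greedy selection."""
    rng = random.Random(seed)
    n = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    for _ in range(iters):
        for i, x in enumerate(pop):
            # three agents, distinct from each other and from x
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            R = rng.randrange(n)  # index replaced for certain
            y = [a[k] + F * (b[k] - c[k]) if (rng.random() < CR or k == R) else x[k]
                 for k in range(n)]
            if f(y) <= f(x):  # greedy selection: keep the better candidate
                pop[i] = y
    return min(pop, key=f)

# Minimize the sphere function f(x) = sum x_k^2, whose optimum is the origin.
best = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
print(sum(t * t for t in best))  # close to 0
```

Note the gradient of the objective is never used; the function is treated purely as a black box, as described above.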
Mathematical convergence analysis regarding parameter selection was done by Zaharie. == Constraint handling == Differential evolution can be utilized for constrained optimization as well. A common method involves modifying the target function to include a penalty for any violation of constraints, expressed as: f ~ ( x ) = f ( x ) + ρ × C V ( x ) {\displaystyle {\tilde {f}}(x)=f(x)+\rho \times \mathrm {CV} (x)} . Here, C V ( x ) {\displaystyle \mathrm {CV} (x)} represents either a constraint violation (an L1 penalty) or the square of a constraint violation (an L2 penalty). This method, however, has certain drawbacks. One significant challenge is the appropriate selection of the penalty coefficient ρ {\displaystyle \rho } . If ρ {\displaystyle \rho } is set too low, it may not effectively enforce constraints. Conversely, if it's too high, it can greatly slow down or even halt the convergence process. Despite these challenges, this approach remains widely used due to its simplicity and because it doesn't require altering the differential evolution algorithm itself. There are alternative strategies, such as projecting onto a feasible set or reducing dimensionality, which can be used for box-constrained or linearly constrained cases. However, in the context of general nonlinear constraints, the most reliable methods typically involve penalty functions. == Variants == Variants of the DE algorithm are continually being developed in an effort to improve optimization performance. The following directions of development can be outlined: New schemes for performing crossover and mutation of agents Various strategies for handling constraints Adaptive strategies that dynamically adjust population size, F and CR parameters Specialized algorithms for large-scale optimization Multi-objective and many-objective algorithms Techniques for handling binary/integer variables == See also == Artificial bee colony algorithm CMA-ES Evolution strategy Genetic algorithm == References ==
Wikipedia/Differential_evolution
In artificial immune systems, clonal selection algorithms are a class of algorithms inspired by the clonal selection theory of acquired immunity, which explains how B and T lymphocytes improve their response to antigens over time, a process called affinity maturation. These algorithms focus on the Darwinian attributes of the theory where selection is inspired by the affinity of antigen-antibody interactions, reproduction is inspired by cell division, and variation is inspired by somatic hypermutation. Clonal selection algorithms are most commonly applied to optimization and pattern recognition domains, some of which resemble parallel hill climbing and the genetic algorithm without the recombination operator. == Techniques == CLONALG: The CLONal selection ALGorithm AIRS: The Artificial Immune Recognition System BCA: The B-Cell Algorithm == See also == Artificial immune system Biologically inspired computing Computational immunology Computational intelligence Evolutionary computation Immunocomputing Natural computation Swarm intelligence == Notes == == External links == Clonal Selection Pseudo code on AISWeb CLONALG in Matlab developed by Leandro de Castro and Fernando Von Zuben Optimization Algorithm Toolkit in Java developed by Jason Brownlee which includes the following clonal selection algorithms: Adaptive Clonal Selection (ACS), Optimization Immune Algorithm (opt-IMMALG), Optimization Immune Algorithm (opt-IA), Clonal Selection Algorithm (CLONALG, CLONALG1, CLONALG2), B-Cell Algorithm (BCA), Cloning, Information Gain, Aging (CLIGA), Immunological Algorithm (IA) AIRS in C++ developed by Andrew Watkins BCA in C++ developed by Johnny Kelsey
Wikipedia/Clonal_selection_algorithm
Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation. == History == Early Ideas The ideas behind biological computing trace back to 1936 and the first description of an abstract computer, which is now known as a Turing machine. Turing first described the abstract construct using a biological specimen. Turing imagined a mathematician who has three important attributes. He always has a pencil with an eraser, an unlimited supply of paper, and a working set of eyes. The eyes allow the mathematician to see and perceive any symbols written on the paper, while the pencil allows him to write and erase any symbols that he wants. Lastly, the unlimited paper allows him to store anything he wants in memory. Using these ideas, he was able to describe an abstraction of the modern digital computer. However, Turing noted that anything that can perform these functions can be considered such a machine, and that even electricity should not be required to describe digital computation and machine thinking in general. Neural Networks First described in 1943 by Warren McCulloch and Walter Pitts, neural networks are a prevalent example of biological systems inspiring the creation of computer algorithms. They first mathematically described that a system of simplistic neurons was able to produce simple logical operations such as logical conjunction, disjunction and negation. They further showed that a system of neural networks can be used to carry out any calculation that requires finite memory. Around 1970, research on neural networks slowed down, and many consider a 1969 book by Marvin Minsky and Seymour Papert to be the main cause.
Their book showed that neural network models were able to model only systems based on Boolean functions that are true only after a certain threshold value. Such functions are also known as threshold functions. The book also showed that a large number of systems cannot be represented as such, meaning that a large number of systems cannot be modeled by neural networks. Another book, by David Rumelhart and James McClelland in 1986, brought neural networks back to the spotlight by demonstrating the back-propagation algorithm, which allowed the development of multi-layered neural networks that did not adhere to those limits. Ant Colonies Douglas Hofstadter in 1979 described an idea of a biological system capable of performing intelligent calculations even though the individuals comprising the system might not be intelligent. More specifically, he gave the example of an ant colony that can carry out intelligent tasks together even though each individual ant cannot, exhibiting what is called "emergent behavior." Azimi et al. in 2009 showed that their "ant colony" clustering algorithm is able to determine the number of clusters and produce final clusters highly competitive with those of traditional algorithms. Lastly, Hölder and Wilson in 2009 concluded, using historical data, that ants have evolved to function as a single "superorganism" colony. This is an important result, since it suggests that group-selection evolutionary algorithms, coupled with algorithms similar to the "ant colony" algorithm, can potentially be used to develop more powerful algorithms.
== Areas of research == Some areas of study in biologically inspired computing, and their biological counterparts: === Population Based Bio-Inspired Algorithms === Bio-inspired computing, which work on a population of possible solutions in the context of evolutionary algorithms or in the context of swarm intelligence algorithms, are subdivided into Population Based Bio-Inspired Algorithms (PBBIA). They include Evolutionary Algorithms, Particle Swarm Optimization, Ant colony optimization algorithms and Artificial bee colony algorithms. ==== Virtual Insect Example ==== Bio-inspired computing can be used to train a virtual insect. The insect is trained to navigate in an unknown terrain for finding food equipped with six simple rules: turn right for target-and-obstacle left; turn left for target-and-obstacle right; turn left for target-left-obstacle-right; turn right for target-right-obstacle-left; turn left for target-left without obstacle; turn right for target-right without obstacle. The virtual insect controlled by the trained spiking neural network can find food after training in any unknown terrain. After several generations of rule application it is usually the case that some forms of complex behaviour emerge. Complexity gets built upon complexity until the result is something markedly complex, and quite often completely counterintuitive from what the original rules would be expected to produce (see complex systems). For this reason, when modeling the neural network, it is necessary to accurately model an in vivo network, by live collection of "noise" coefficients that can be used to refine statistical inference and extrapolation as system complexity increases. Natural evolution is a good analogy to this method–the rules of evolution (selection, recombination/reproduction, mutation and more recently transposition) are in principle simple rules, yet over millions of years have produced remarkably complex organisms. 
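The six navigation rules quoted above can be encoded as a simple decision function (a toy sketch for illustration; in the cited work this behavior is produced by a trained spiking neural network, not a hand-coded table):

```python
def steer(target, obstacle):
    """One-step steering decision for the virtual insect.
    `target` is 'left' or 'right'; `obstacle` is 'left', 'right', or None.
    Returns the turn direction, 'left' or 'right'."""
    if obstacle is None:
        # no obstacle: turn toward the target
        return 'left' if target == 'left' else 'right'
    if target == obstacle:
        # target and obstacle on the same side: turn away from both
        return 'right' if target == 'left' else 'left'
    # target and obstacle on opposite sides: turn toward the target
    return 'left' if target == 'left' else 'right'

print(steer('left', None))     # 'left'
print(steer('left', 'left'))   # 'right'
print(steer('left', 'right'))  # 'left'
```

Even with a rule set this small, repeated application over many steps in a cluttered terrain can produce the kind of emergent trajectory complexity discussed above.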
A similar technique is used in genetic algorithms. === Brain-inspired computing === Brain-inspired computing refers to computational models and methods that are mainly based on the mechanism of the brain, rather than completely imitating the brain. The goal is to enable the machine to realize the various cognitive abilities and coordination mechanisms of human beings in a brain-inspired manner, and finally to achieve or exceed human intelligence levels. ==== Research ==== Artificial intelligence researchers are now aware of the benefits of learning from the brain's information processing mechanism, and the progress of brain science and neuroscience provides the necessary basis for artificial intelligence to do so. Brain and neuroscience researchers are also trying to apply the understanding of brain information processing to a wider range of scientific fields. The development of the discipline benefits from the push of information and smart technology; in turn, brain science and neuroscience will inspire the next generation of the transformation of information technology. ==== The influence of brain science on Brain-inspired computing ==== Advances in brain science and neuroscience, especially with the help of new technologies and new equipment, allow researchers to obtain multi-scale, multi-type biological evidence of the brain through different experimental methods, and they are trying to reveal the structure of bio-intelligence from different aspects and functional bases. From microscopic neurons, synaptic working mechanisms and their characteristics, to the mesoscopic network connection model, to the links between macroscopic brain regions and their synergistic characteristics, the multi-scale structure and functional mechanisms of brains derived from these experimental and mechanistic studies will provide important inspiration for building future brain-inspired computing models.
==== Brain-inspired chip ==== Broadly speaking, brain-inspired chip refers to a chip designed with reference to the structure of human brain neurons and the cognitive mode of the human brain. The "neuromorphic chip" is a brain-inspired chip that focuses on the design of the chip structure with reference to the human brain neuron model and its tissue structure, and it represents a major direction of brain-inspired chip research. Along with the rise and development of “brain plans” in various countries, a large number of research results on neuromorphic chips have emerged, which have received extensive international attention and are well known to the academic community and the industry. Examples include the EU-backed SpiNNaker and BrainScaleS, Stanford's Neurogrid, IBM's TrueNorth, and Qualcomm's Zeroth. TrueNorth is a brain-inspired chip that IBM has been developing for nearly 10 years. The US DARPA program has been funding IBM to develop pulsed neural network chips for intelligent processing since 2008. In 2011, IBM first developed two cognitive silicon prototypes by simulating brain structures that could learn and process information like the brain. Each neuron of a brain-inspired chip is cross-connected with massive parallelism. In 2014, IBM released a second-generation brain-inspired chip called "TrueNorth." Compared with the first-generation brain-inspired chips, the performance of the TrueNorth chip increased dramatically: the number of neurons increased from 256 to 1 million, and the number of programmable synapses increased from 262,144 to 256 million, with a total power consumption for synaptic operations of 70 mW and a power density of 20 mW per square centimeter. At the same time, the TrueNorth core occupies only 1/15 the volume of the first-generation chip's core. At present, IBM has developed a prototype of a neuron computer that uses 16 TrueNorth chips with real-time video processing capabilities.
The TrueNorth chip's exceptional specifications caused a great stir in the academic world upon its release. In 2012, the Institute of Computing Technology of the Chinese Academy of Sciences (CAS) and France's Inria collaborated to develop the "Cambrian", the world's first chip to support a deep neural network processor architecture. The work won best-paper awards at ASPLOS and MICRO, top international conferences in the field of computer architecture, and its design method and performance have been recognized internationally. The chip stands as an outstanding representative of this direction of brain-inspired chip research. ==== Unclear Brain mechanism cognition ==== The human brain is a product of evolution. Although its structure and information processing mechanism are constantly optimized, compromises in the evolution process are inevitable. The cranial nervous system is a multi-scale structure. There are still several important open problems in the mechanism of information processing at each scale, such as the fine connection structure at the neuron scale and the mechanism of brain-scale feedback. Even simulations with a comprehensive count of neurons and synapses reach only about 1/1000 of the scale of the human brain, and such study remains very difficult at the current level of scientific research. Recent advances in brain simulation have linked individual variability in human cognitive processing speed and fluid intelligence to the balance of excitation and inhibition in structural brain networks, functional connectivity, winner-take-all decision-making and attractor working memory.
==== Unclear Brain-inspired computational models and algorithms ==== Future research on cognitive brain computing models will need to model the brain's information processing system based on the results of multi-scale brain neural system data analysis, construct brain-inspired multi-scale neural network computing models, and simulate the brain's multi-modal intelligent behavioral abilities, such as perception, self-learning, memory, and choice, at multiple scales. Machine learning algorithms are not flexible: they require high-quality sample data that is manually labeled on a large scale, and training the models entails a lot of computational overhead. Brain-inspired artificial intelligence still lacks advanced cognitive ability and inferential learning ability. ==== Constrained Computational architecture and capabilities ==== Most existing brain-inspired chips are still based on von Neumann architecture research, and most chip manufacturing still uses traditional semiconductor materials. Neural chips borrow only the most basic unit of brain information processing; mechanisms such as the fusion of storage and computation, the pulse discharge mechanism, the connection mechanism between neurons, and the interplay between information processing units at different scales have not yet been integrated into the study of brain-inspired computing architectures. An important international trend now is to develop neural computing components such as memristors, memcapacitors, and sensors based on new materials such as nanomaterials, thus supporting the construction of more complex brain-inspired computing architectures. The development of brain-inspired computers and large-scale brain computing systems based on brain-inspired chips also requires a corresponding software environment to support their wide application.
== See also == Applications of artificial intelligence Behavior based robotics Bioinformatics Bionics Cognitive architecture Cognitive modeling Cognitive science Connectionism Digital morphogenesis Digital organism Fuzzy logic Gene expression programming Genetic algorithm Genetic programming Gerald Edelman Janine Benyus Learning classifier system Mark A. O'Neill Mathematical biology Mathematical model Natural computation Neuroevolution Olaf Sporns Organic computing Unconventional computing Lists List of emerging technologies Outline of artificial intelligence == References == == Further reading == (the following are presented in ascending order of complexity and depth, with those new to the field suggested to start from the top) "Nature-Inspired Algorithms" "Biologically Inspired Computing" "Digital Biology", Peter J. Bentley. "First International Symposium on Biologically Inspired Computing" Emergence: The Connected Lives of Ants, Brains, Cities and Software, Steven Johnson. Dr. Dobb's Journal, Apr-1991. (Issue theme: Biocomputing) Turtles, Termites and Traffic Jams, Mitchel Resnick. Understanding Nonlinear Dynamics, Daniel Kaplan and Leon Glass. Ridge, E.; Kudenko, D.; Kazakov, D.; Curry, E. (2005). "Moving Nature-Inspired Algorithms to Parallel, Asynchronous and Decentralised Environments". Self-Organization and Autonomic Informatics (I). 135: 35–49. CiteSeerX 10.1.1.64.3403. Swarms and Swarm Intelligence by Michael G. Hinchey, Roy Sterritt, and Chris Rouff, Fundamentals of Natural Computing: Basic Concepts, Algorithms, and Applications, L. N. de Castro, Chapman & Hall/CRC, June 2006. "The Computational Beauty of Nature", Gary William Flake. MIT Press. 1998, hardcover ed.; 2000, paperback ed. An in-depth discussion of many of the topics and underlying themes of bio-inspired computing. Kevin M. Passino, Biomimicry for Optimization, Control, and Automation, Springer-Verlag, London, UK, 2005. Recent Developments in Biologically Inspired Computing, L. N. 
de Castro and F. J. Von Zuben, Idea Group Publishing, 2004. Nancy Forbes, Imitation of Life: How Biology is Inspiring Computing, MIT Press, Cambridge, MA 2004. M. Blowers and A. Sisti, Evolutionary and Bio-inspired Computation: Theory and Applications, SPIE Press, 2007. X. S. Yang, Z. H. Cui, R. B. Xiao, A. H. Gandomi, M. Karamanoglu, Swarm Intelligence and Bio-Inspired Computation: Theory and Applications, Elsevier, 2013. "Biologically Inspired Computing Lecture Notes", Luis M. Rocha The portable UNIX programming system (PUPS) and CANTOR: a computational environment for dynamical representation and analysis of complex neurobiological data, Mark A. O'Neill, and Claus-C Hilgetag, Phil Trans R Soc Lond B 356 (2001), 1259–1276 "Going Back to our Roots: Second Generation Biocomputing", J. Timmis, M. Amos, W. Banzhaf, and A. Tyrrell, Journal of Unconventional Computing 2 (2007) 349–378. Neumann, Frank; Witt, Carsten (2010). Bioinspired computation in combinatorial optimization. Algorithms and their computational complexity. Natural Computing Series. Berlin: Springer-Verlag. ISBN 978-3-642-16543-6. Zbl 1223.68002. Brabazon, Anthony; O’Neill, Michael (2006). Biologically inspired algorithms for financial modelling. Natural Computing Series. Berlin: Springer-Verlag. ISBN 978-3-540-26252-7. Zbl 1117.91030. C-M. Pintea, 2014, Advances in Bio-inspired Computing for Combinatorial Optimization Problem, Springer ISBN 978-3-642-40178-7 "PSA: A novel optimization algorithm based on survival rules of porcellio scaber", Y. Zhang and S.
Li == External links == Nature Inspired Computing and Engineering (NICE) Group, University of Surrey, UK ALife Project in Sussex Biologically Inspired Computation for Chemical Sensing Neurochem Project AND Corporation Centre of Excellence for Research in Computational Intelligence and Applications Birmingham, UK BiSNET: Biologically-inspired architecture for Sensor NETworks BiSNET/e: A Cognitive Sensor Networking Architecture with Evolutionary Multiobjective Optimization Biologically inspired neural networks NCRA UCD, Dublin Ireland The PUPS/P3 Organic Computing Environment for Linux SymbioticSphere: A Biologically-inspired Architecture for Scalable, Adaptive and Survivable Network Systems The runner-root algorithm Bio-inspired Wireless Networking Team (BioNet) Biologically Inspired Intelligence
Wikipedia/Population_Based_Bio-Inspired_Algorithms
A cellular evolutionary algorithm (cEA) is a kind of evolutionary algorithm (EA) in which individuals cannot mate arbitrarily; instead, each one interacts only with its closest neighbors, on which a basic EA is applied (selection, variation, replacement). The cellular model simulates natural evolution from the point of view of the individual, which encodes a tentative solution to an optimization, learning, or search problem. The essential idea of this model is to provide the EA population with a special structure defined as a connected graph, in which each vertex is an individual that communicates with its nearest neighbors. In particular, individuals are conceptually set in a toroidal mesh and are only allowed to recombine with close individuals. This leads to a kind of locality known as "isolation by distance". The set of potential mates of an individual is called its "neighborhood". It is known that, in this kind of algorithm, similar individuals tend to cluster and create niches, and these groups operate as if they were separate sub-populations (islands). There is no clear borderline between adjacent groups, and close niches can easily be colonized by competitive niches, potentially merging solution contents during the process, while more distant niches are affected more slowly. == Introduction == A cellular evolutionary algorithm (cEA) usually evolves a structured bidimensional grid of individuals, although other topologies are also possible. In this grid, clusters of similar individuals are naturally created during evolution, promoting exploration at their boundaries, while exploitation is mainly performed by direct competition and merging inside them. The grid is usually a 2D toroidal structure, although the number of dimensions can be easily extended (to 3D) or reduced (to 1D, e.g. a ring). The neighborhood of a particular point of the grid (where an individual is placed) is defined in terms of the Manhattan distance from it to others in the population.
Each point of the grid has a neighborhood that overlaps the neighborhoods of nearby individuals. In the basic algorithm, all the neighborhoods have the same size and identical shapes. The two most commonly used neighborhoods are L5, also called the Von Neumann or NEWS (North, East, West and South) neighborhood, and C9, also known as the Moore neighborhood. Here, L stands for "linear" while C stands for "compact". In cEAs, the individuals can only interact with their neighbors in the reproductive cycle where the variation operators are applied. This reproductive cycle is executed inside the neighborhood of each individual and, generally, consists of selecting two parents among its neighbors according to a certain criterion, applying the variation operators to them (recombination and mutation, for example), and replacing the considered individual with the newly created offspring according to a given criterion, for instance, replacing it if the offspring represents a better solution than the considered individual.
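The L5 (Von Neumann/NEWS) and C9 (Moore) neighborhoods on a toroidal grid can be computed with simple modular arithmetic. A minimal sketch (function names and the coordinate convention are illustrative):

```python
def l5_neighborhood(x, y, width, height):
    """Von Neumann (NEWS) neighborhood of cell (x, y) on a toroidal grid:
    the cell itself plus its North, East, West and South neighbors (5 cells)."""
    return [
        (x, y),
        (x, (y - 1) % height),           # North
        ((x + 1) % width, y),            # East
        ((x - 1) % width, y),            # West
        (x, (y + 1) % height),           # South
    ]

def c9_neighborhood(x, y, width, height):
    """Moore neighborhood: the 3x3 block of cells centred on (x, y) (9 cells)."""
    return [((x + dx) % width, (y + dy) % height)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
```

Wrapping every coordinate with the modulo operator is what turns the rectangular grid into a torus, so cells on one border are neighbors of cells on the opposite border.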
In asynchronous cEAs the order in which the individuals in the grid are updated changes depending on the choice of criterion: line sweep, fixed random sweep, new random sweep, and uniform choice. All four proceed using the newly computed individual (or the original if better) for the computations of its neighbors. The overlap of the neighborhoods provides the cEA with an implicit mechanism of solution migration. Since the best solutions spread smoothly through the whole population, genetic diversity in the population is preserved longer than in non-structured EAs. This soft dispersion of the best solutions through the population is one of the main reasons for the good tradeoff between exploration and exploitation that cEAs achieve during the search. This tradeoff (and by extension the level of genetic diversity along the evolution) can be tuned, for instance, by modifying the size of the neighborhood used, as the degree of overlap between the neighborhoods grows with the size of the neighborhood. A cEA can be seen as a cellular automaton (CA) with probabilistic rewritable rules, where the alphabet of the CA is equivalent to the potential number of solutions of the problem. Hence, knowledge from research in CAs can be applied to cEAs. == Parallelism == Cellular EAs are very amenable to parallelism, and are thus usually found in the literature of parallel metaheuristics. In particular, fine-grain parallelism can be used to assign an independent thread of execution to every individual, thus allowing the whole cEA to run on a concurrent or truly parallel hardware platform. In this way, large time reductions can be obtained when running cEAs on FPGAs or GPUs. However, it is important to stress that cEAs are a model of search, in many senses different from traditional EAs. Also, they can be run on sequential and parallel platforms alike, reinforcing the fact that the model and the implementation are two different concepts.
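A synchronous generation of the kind described above can be sketched as follows. This is a minimal sketch maximizing a toy one-variable fitness; the choices of selection, recombination, mutation, and replacement operators are illustrative:

```python
import random

def cea_generation(grid, fitness, sigma=0.1):
    """One synchronous generation of a cellular EA on a toroidal grid,
    maximizing `fitness`: a temporary grid is filled first, then it
    replaces the old population wholesale."""
    h, w = len(grid), len(grid[0])
    new_grid = [[None] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # NEWS (L5) neighbors of cell (i, j), wrapping around the torus
            neigh = [grid[(i - 1) % h][j], grid[(i + 1) % h][j],
                     grid[i][(j - 1) % w], grid[i][(j + 1) % w]]
            # selection: take the two fittest neighbors as parents
            p1, p2 = sorted(neigh, key=fitness, reverse=True)[:2]
            # variation: intermediate recombination plus Gaussian mutation
            child = (p1 + p2) / 2 + random.gauss(0, sigma)
            # replacement: keep the offspring only if it improves the cell
            new_grid[i][j] = child if fitness(child) > fitness(grid[i][j]) else grid[i][j]
    return new_grid

random.seed(1)
fit = lambda x: -x * x          # toy objective, maximum at x = 0
grid = [[random.uniform(-5, 5) for _ in range(8)] for _ in range(8)]
for _ in range(30):
    grid = cea_generation(grid, fit)
best = max((v for row in grid for v in row), key=fit)
```

Building the whole temporary grid before replacing anything is exactly what makes the update synchronous; an asynchronous variant would write each new individual back into `grid` immediately, in sweep order.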
See here for a complete description of the fundamentals for the understanding, design, and application of cEAs. == See also == Cellular automaton Dual-phase evolution Enrique Alba Evolutionary algorithm Metaheuristic Parallel metaheuristic == References == E. Alba, B. Dorronsoro, Cellular Genetic Algorithms, Springer-Verlag, ISBN 978-0-387-77609-5, 2008 A.J. Nebro, J.J. Durillo, F. Luna, B. Dorronsoro, E. Alba, MOCell: A New Cellular Genetic Algorithm for Multiobjective Optimization, International Journal of Intelligent Systems, 24:726-746, 2009 E. Alba, B. Dorronsoro, F. Luna, A.J. Nebro, P. Bouvry, L. Hogie, A Cellular Multi-Objective Genetic Algorithm for Optimal Broadcasting Strategy in Metropolitan MANETs, Computer Communications, 30(4):685-697, 2007 E. Alba, B. Dorronsoro, Computing Nine New Best-So-Far Solutions for Capacitated VRP with a Cellular GA, Information Processing Letters, Elsevier, 98(6):225-230, 30 June 2006 M. Giacobini, M. Tomassini, A. Tettamanzi, E. Alba, The Selection Intensity in Cellular Evolutionary Algorithms for Regular Lattices, IEEE Transactions on Evolutionary Computation, IEEE Press, 9(5):489-505, 2005 E. Alba, B. Dorronsoro, The Exploration/Exploitation Tradeoff in Dynamic Cellular Genetic Algorithms, IEEE Transactions on Evolutionary Computation, IEEE Press, 9(2)126-142, 2005 == External links == The site on Cellular Evolutionary Algorithms NEO Research Group at University of Málaga, Spain Archived 2018-09-28 at the Wayback Machine
Wikipedia/Cellular_evolutionary_algorithm
In mathematical optimization and computer science, a feasible region, feasible set, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints. This is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down. For example, consider the problem of minimizing the function x 2 + y 4 {\displaystyle x^{2}+y^{4}} with respect to the variables x {\displaystyle x} and y , {\displaystyle y,} subject to 1 ≤ x ≤ 10 {\displaystyle 1\leq x\leq 10} and 5 ≤ y ≤ 12. {\displaystyle 5\leq y\leq 12.\,} Here the feasible set is the set of pairs (x, y) in which the value of x is at least 1 and at most 10 and the value of y is at least 5 and at most 12. The feasible set of the problem is separate from the objective function, which states the criterion to be optimized and which in the above example is x 2 + y 4 . {\displaystyle x^{2}+y^{4}.} In many problems, the feasible set reflects a constraint that one or more variables must be non-negative. In pure integer programming problems, the feasible set is the set of integers (or some subset thereof). In linear programming problems, the feasible set is a convex polytope: a region in multidimensional space whose boundaries are formed by hyperplanes and whose corners are vertices. Constraint satisfaction is the process of finding a point in the feasible region. == Convex feasible set == A convex feasible set is one in which a line segment connecting any two feasible points goes through only other feasible points, and not through any points outside the feasible set. 
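Using the example above (the box 1 ≤ x ≤ 10, 5 ≤ y ≤ 12 with objective x^2 + y^4), membership of the feasible set and the convexity property can be checked directly. A minimal sketch (names are illustrative):

```python
def feasible(x, y):
    """Membership test for the example's feasible set: 1 <= x <= 10, 5 <= y <= 12."""
    return 1 <= x <= 10 and 5 <= y <= 12

def objective(x, y):
    """The objective x**2 + y**4 is defined separately from the feasible set."""
    return x ** 2 + y ** 4

# Convexity check: sampled points on the segment between two feasible points
# must all be feasible (here the set is a box, hence convex).
p, q = (1.0, 5.0), (10.0, 12.0)
convex_ok = all(feasible(p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
                for t in (k / 20 for k in range(21)))
```

Sampling the segment only demonstrates convexity; proving it for a general set requires the line-segment property to hold for every pair of feasible points.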
Convex feasible sets arise in many types of problems, including linear programming problems, and they are of particular interest because, if the problem has a convex objective function that is to be minimized, it will generally be easier to solve in the presence of a convex feasible set and any local optimum will also be a global optimum. == No feasible set == If the constraints of an optimization problem are mutually contradictory, there are no points that satisfy all the constraints and thus the feasible region is the empty set. In this case the problem has no solution and is said to be infeasible. == Bounded and unbounded feasible sets == Feasible sets may be bounded or unbounded. For example, the feasible set defined by the constraint set {x ≥ 0, y ≥ 0} is unbounded because in some directions there is no limit on how far one can go and still be in the feasible region. In contrast, the feasible set formed by the constraint set {x ≥ 0, y ≥ 0, x + 2y ≤ 4} is bounded because the extent of movement in any direction is limited by the constraints. In linear programming problems with n variables, a necessary but insufficient condition for the feasible set to be bounded is that the number of constraints be at least n + 1 (as illustrated by the above example). If the feasible set is unbounded, there may or may not be an optimum, depending on the specifics of the objective function. For example, if the feasible region is defined by the constraint set {x ≥ 0, y ≥ 0}, then the problem of maximizing x + y has no optimum since any candidate solution can be improved upon by increasing x or y; yet if the problem is to minimize x + y, then there is an optimum (specifically at (x, y) = (0, 0)). == Candidate solution == In optimization and other branches of mathematics, and in search algorithms (a topic in computer science), a candidate solution is a member of the set of possible solutions in the feasible region of a given problem. 
A candidate solution does not have to be a likely or reasonable solution to the problem—it is simply in the set that satisfies all constraints; that is, it is in the set of feasible solutions. Algorithms for solving various types of optimization problems often narrow the set of candidate solutions down to a subset of the feasible solutions, whose points remain as candidate solutions while the other feasible solutions are henceforth excluded as candidates. The space of all candidate solutions, before any feasible points have been excluded, is called the feasible region, feasible set, search space, or solution space. This is the set of all possible solutions that satisfy the problem's constraints. Constraint satisfaction is the process of finding a point in the feasible set. === Genetic algorithm === In the case of the genetic algorithm, the candidate solutions are the individuals in the population being evolved by the algorithm. === Calculus === In calculus, an optimal solution is sought using the first derivative test: the first derivative of the function being optimized is equated to zero, and any values of the choice variable(s) that satisfy this equation are viewed as candidate solutions (while those that do not are ruled out as candidates). There are several ways in which a candidate solution might not be an actual solution. First, it might give a minimum when a maximum is being sought (or vice versa), and second, it might give neither a minimum nor a maximum but rather a saddle point or an inflection point, at which a temporary pause in the local rise or fall of the function occurs. Such candidate solutions may be able to be ruled out by use of the second derivative test, the satisfaction of which is sufficient for the candidate solution to be at least locally optimal. Third, a candidate solution may be a local optimum but not a global optimum. 
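As a worked instance of the first and second derivative tests, take f(x) = x^3 - 3x: the first derivative test gives the candidates x = -1 and x = 1 (the roots of f'(x) = 3x^2 - 3 = 0), and the second derivative f''(x) = 6x classifies them. A minimal sketch with the derivatives computed by hand:

```python
# f(x) = x**3 - 3*x,  f'(x) = 3*x**2 - 3,  f''(x) = 6*x
def classify(x):
    """Second derivative test at a candidate solution of f'(x) = 0."""
    f2 = 6 * x                      # value of f''(x) at the candidate
    if f2 > 0:
        return "local minimum"
    if f2 < 0:
        return "local maximum"
    return "inconclusive"           # saddle/inflection: needs further analysis

candidates = [-1.0, 1.0]            # roots of f'(x) = 3*x**2 - 3
labels = {x: classify(x) for x in candidates}
```

Both candidates here are only local optima: f is unbounded in both directions, which illustrates the third caveat above (a candidate may be locally but not globally optimal).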
In taking antiderivatives of monomials of the form x n , {\displaystyle x^{n},} the candidate solution using Cavalieri's quadrature formula would be 1 n + 1 x n + 1 + C . {\displaystyle {\tfrac {1}{n+1}}x^{n+1}+C.} This candidate solution is in fact correct except when n = − 1. {\displaystyle n=-1.} === Linear programming === In the simplex method for solving linear programming problems, a vertex of the feasible polytope is selected as the initial candidate solution and is tested for optimality; if it is rejected as the optimum, an adjacent vertex is considered as the next candidate solution. This process is continued until a candidate solution is found to be the optimum. == References ==
Wikipedia/Candidate_solution
The promoter based genetic algorithm (PBGA) is a genetic algorithm for neuroevolution developed by F. Bellas and R.J. Duro in the Integrated Group for Engineering Research (GII) at the University of Coruña, in Spain. It evolves variable-size feedforward artificial neural networks (ANNs) that are encoded into sequences of genes for constructing a basic ANN unit. Each of these blocks is preceded by a gene promoter acting as an on/off switch that determines whether that particular unit will be expressed or not. == PBGA basics == The basic unit in the PBGA is a neuron with all of its inbound connections, as represented in the following figure: The genotype of a basic unit is a set of real-valued weights followed by the parameters of the neuron, preceded by an integer-valued field that determines the promoter gene value and, consequently, the expression of the unit. By concatenating units of this type the whole network can be constructed. With this encoding, information that is not expressed is still carried by the genotype during evolution but is shielded from direct selective pressure, thus maintaining diversity in the population, which was a design premise for this algorithm. Therefore, a clear difference is established between the search space and the solution space, permitting information learned and encoded into the genotypic representation to be preserved by disabling promoter genes. == Results == The PBGA was originally presented within the field of autonomous robotics, in particular for the real-time learning of environment models by the robot. It has been used inside the Multilevel Darwinist Brain (MDB) cognitive mechanism developed in the GII for on-line learning in real robots. In another paper it is shown that applying the PBGA together with an external memory that stores the successfully obtained world models is an optimal strategy for adaptation in dynamic environments.
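The promoter mechanism described in the basics section can be sketched as follows: each unit carries a promoter flag along with its weights and parameters, and only units whose promoter is switched on are expressed in the phenotype, while silent units remain in the genotype, shielded from direct selective pressure. This is a minimal illustrative sketch, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    promoter: bool                    # on/off switch (the promoter gene)
    weights: list                     # inbound connection weights of the neuron
    bias: float = 0.0                 # neuron parameter(s)

def express(genotype):
    """Build the phenotype network: only units whose promoter gene is on."""
    return [u for u in genotype if u.promoter]

genotype = [
    Unit(True, [0.5, -1.2], 0.1),
    Unit(False, [2.0, 0.3], -0.4),    # silent: carried along, not expressed
    Unit(True, [-0.7, 0.9]),
]
phenotype = express(genotype)
```

Because fitness is evaluated on the phenotype only, mutating a promoter flag back on can reintroduce previously learned structure, which is the diversity-preservation effect the article describes.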
Recently, the PBGA has provided results that outperform other neuroevolutionary algorithms in non-stationary problems, where the fitness function varies in time. == References == == External links == Grupo Integrado de Ingeniería Francisco Bellas’ website Richard J. Duro’s website
Wikipedia/Promoter_based_genetic_algorithm
Natural evolution strategies (NES) are a family of numerical optimization algorithms for black box problems. Similar in spirit to evolution strategies, they iteratively update the (continuous) parameters of a search distribution by following the natural gradient towards higher expected fitness. == Method == The general procedure is as follows: the parameterized search distribution is used to produce a batch of search points, and the fitness function is evaluated at each such point. The distribution’s parameters (which include strategy parameters) allow the algorithm to adaptively capture the (local) structure of the fitness function. For example, in the case of a Gaussian distribution, this comprises the mean and the covariance matrix. From the samples, NES estimates a search gradient on the parameters towards higher expected fitness. NES then performs a gradient ascent step along the natural gradient, a second order method which, unlike the plain gradient, renormalizes the update with respect to uncertainty. This step is crucial, since it prevents oscillations, premature convergence, and undesired effects stemming from a given parameterization. The entire process reiterates until a stopping criterion is met. All members of the NES family operate based on the same principles. They differ in the type of probability distribution and the gradient approximation method used. Different search spaces require different search distributions; for example, in low dimensionality it can be highly beneficial to model the full covariance matrix. In high dimensions, on the other hand, a more scalable alternative is to limit the covariance to the diagonal only. In addition, highly multi-modal search spaces may benefit from more heavy-tailed distributions (such as Cauchy, as opposed to the Gaussian). A last distinction arises between distributions where we can analytically compute the natural gradient, and more general distributions where we need to estimate it from samples. 
=== Search gradients === Let θ {\displaystyle \theta } denote the parameters of the search distribution π ( x | θ ) {\displaystyle \pi (x\,|\,\theta )} and f ( x ) {\displaystyle f(x)} the fitness function evaluated at x {\displaystyle x} . NES then pursues the objective of maximizing the expected fitness under the search distribution J ( θ ) = E θ ⁡ [ f ( x ) ] = ∫ f ( x ) π ( x | θ ) d x {\displaystyle J(\theta )=\operatorname {E} _{\theta }[f(x)]=\int f(x)\;\pi (x\,|\,\theta )\;dx} through gradient ascent. The gradient can be rewritten as ∇ θ J ( θ ) = ∇ θ ∫ f ( x ) π ( x | θ ) d x {\displaystyle \nabla _{\theta }J(\theta )=\nabla _{\theta }\int f(x)\;\pi (x\,|\,\theta )\;dx} = ∫ f ( x ) ∇ θ π ( x | θ ) d x {\displaystyle =\int f(x)\;\nabla _{\theta }\pi (x\,|\,\theta )\;dx} = ∫ f ( x ) ∇ θ π ( x | θ ) π ( x | θ ) π ( x | θ ) d x {\displaystyle =\int f(x)\;\nabla _{\theta }\pi (x\,|\,\theta )\;{\frac {\pi (x\,|\,\theta )}{\pi (x\,|\,\theta )}}\;dx} = ∫ [ f ( x ) ∇ θ log ⁡ π ( x | θ ) ] π ( x | θ ) d x {\displaystyle =\int {\Big [}f(x)\;\nabla _{\theta }\log \pi (x\,|\,\theta ){\Big ]}\;\pi (x\,|\,\theta )\;dx} = E θ ⁡ [ f ( x ) ∇ θ log ⁡ π ( x | θ ) ] {\displaystyle =\operatorname {E} _{\theta }\left[f(x)\;\nabla _{\theta }\log \pi (x\,|\,\theta )\right]} that is, the expected value of f ( x ) {\displaystyle f(x)} times the log-derivatives at x {\displaystyle x} . In practice, it is possible to use the Monte Carlo approximation based on a finite number of λ {\displaystyle \lambda } samples ∇ θ J ( θ ) ≈ 1 λ ∑ k = 1 λ f ( x k ) ∇ θ log ⁡ π ( x k | θ ) {\displaystyle \nabla _{\theta }J(\theta )\approx {\frac {1}{\lambda }}\sum _{k=1}^{\lambda }f(x_{k})\;\nabla _{\theta }\log \pi (x_{k}\,|\,\theta )} . 
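This Monte Carlo estimator can be checked numerically on a small example: for π(x|θ) = N(θ, 1) the log-derivative is ∇θ log π(x|θ) = x − θ, and for f(x) = −(x − 3)² the exact gradient of the expected fitness at θ = 0 is −2(θ − 3) = 6. A minimal sketch (sample size and seed are illustrative):

```python
import random

random.seed(0)

def search_gradient(f, theta, n=5000):
    """Monte Carlo estimate of the search gradient for pi(x|theta) = N(theta, 1),
    whose log-derivative is d/dtheta log pi(x|theta) = x - theta."""
    return sum(f(x) * (x - theta)
               for x in (random.gauss(theta, 1.0) for _ in range(n))) / n

g = search_gradient(lambda x: -(x - 3.0) ** 2, theta=0.0)
```

The estimate `g` should be close to the exact value 6, i.e. the gradient points from θ = 0 toward the optimum of the expected fitness at θ = 3.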
Finally, the parameters of the search distribution can be updated iteratively θ ← θ + η ∇ θ J ( θ ) {\displaystyle \theta \leftarrow \theta +\eta \nabla _{\theta }J(\theta )} === Natural gradient ascent === Instead of using the plain stochastic gradient for updates, NES follows the natural gradient, which has been shown to possess numerous advantages over the plain (vanilla) gradient, e.g.: the gradient direction is independent of the parameterization of the search distribution; the update magnitudes are automatically adjusted based on uncertainty, in turn speeding up convergence on plateaus and ridges. The NES update is therefore θ ← θ + η F − 1 ∇ θ J ( θ ) {\displaystyle \theta \leftarrow \theta +\eta \mathbf {F} ^{-1}\nabla _{\theta }J(\theta )} , where F {\displaystyle \mathbf {F} } is the Fisher information matrix. The Fisher matrix can sometimes be computed exactly; otherwise it is estimated from samples, reusing the log-derivatives ∇ θ log ⁡ π ( x | θ ) {\displaystyle \nabla _{\theta }\log \pi (x|\theta )} . === Fitness shaping === NES utilizes rank-based fitness shaping in order to render the algorithm more robust and invariant under monotonically increasing transformations of the fitness function. For this purpose, the fitness of the population is transformed into a set of utility values u 1 ≥ ⋯ ≥ u λ {\displaystyle u_{1}\geq \dots \geq u_{\lambda }} . Let x i {\displaystyle x_{i}} denote the ith best individual. Replacing fitness with utility, the gradient estimate becomes ∇ θ J ( θ ) = ∑ k = 1 λ u k ∇ θ log ⁡ π ( x k | θ ) {\displaystyle \nabla _{\theta }J(\theta )=\sum _{k=1}^{\lambda }u_{k}\;\nabla _{\theta }\log \pi (x_{k}\,|\,\theta )} . The choice of utility function is a free parameter of the algorithm.
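Combining the search gradient, fitness shaping, and (for a Gaussian with adapted mean and step size) the natural gradient yields a separable NES. A minimal one-dimensional sketch, in which centered-rank utilities stand in for the usual utility function and all constants are illustrative:

```python
import math
import random

random.seed(0)

def snes_1d(f, mu, sigma, pop=20, iters=200, eta_mu=1.0, eta_sigma=0.1):
    """Separable NES in one dimension, maximizing f.

    Samples x = mu + sigma * s with s ~ N(0, 1); utilities are centered
    ranks (a simplification of the usual rank-based shaping)."""
    for _ in range(iters):
        s = [random.gauss(0, 1) for _ in range(pop)]
        fit = [f(mu + sigma * sk) for sk in s]
        # fitness shaping: rank the samples, best gets the largest utility
        order = sorted(range(pop), key=lambda k: fit[k])
        u = [0.0] * pop
        for rank, k in enumerate(order):
            u[k] = (rank / (pop - 1) - 0.5) / pop   # centered ranks, scaled
        # natural-gradient steps in the Gaussian's local coordinates
        grad_mu = sum(u[k] * s[k] for k in range(pop))
        grad_sigma = sum(u[k] * (s[k] ** 2 - 1) for k in range(pop))
        mu += eta_mu * sigma * grad_mu
        sigma *= math.exp(eta_sigma / 2.0 * grad_sigma)
    return mu, sigma

mu, sigma = snes_1d(lambda x: -x * x, mu=5.0, sigma=1.0)  # optimum at x = 0
```

The mean drifts toward the optimum while the step size first supports exploration and then shrinks as the samples concentrate, which is the uncertainty-adjusted behavior the natural gradient provides.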
=== Pseudocode ===
input: f , θ i n i t {\displaystyle f,\;\;\theta _{init}}
1 repeat
2 for k = 1 … λ {\displaystyle k=1\ldots \lambda } do // λ is the population size
3 draw sample x k ∼ π ( ⋅ | θ ) {\displaystyle x_{k}\sim \pi (\cdot |\theta )}
4 evaluate fitness f ( x k ) {\displaystyle f(x_{k})}
5 calculate log-derivatives ∇ θ log ⁡ π ( x k | θ ) {\displaystyle \nabla _{\theta }\log \pi (x_{k}|\theta )}
6 end
7 assign the utilities u k {\displaystyle u_{k}} // based on rank
8 estimate the gradient ∇ θ J ← 1 λ ∑ k = 1 λ u k ⋅ ∇ θ log ⁡ π ( x k | θ ) {\displaystyle \nabla _{\theta }J\leftarrow {\frac {1}{\lambda }}\sum _{k=1}^{\lambda }u_{k}\cdot \nabla _{\theta }\log \pi (x_{k}|\theta )}
9 estimate F ← 1 λ ∑ k = 1 λ ∇ θ log ⁡ π ( x k | θ ) ∇ θ log ⁡ π ( x k | θ ) ⊤ {\displaystyle \mathbf {F} \leftarrow {\frac {1}{\lambda }}\sum _{k=1}^{\lambda }\nabla _{\theta }\log \pi (x_{k}|\theta )\nabla _{\theta }\log \pi (x_{k}|\theta )^{\top }} // or compute it exactly
10 update parameters θ ← θ + η ⋅ F − 1 ∇ θ J {\displaystyle \theta \leftarrow \theta +\eta \cdot \mathbf {F} ^{-1}\nabla _{\theta }J} // η is the learning rate
11 until stopping criterion is met
== See also == Evolutionary computation Covariance matrix adaptation evolution strategy (CMA-ES) == Bibliography == D. Wierstra, T. Schaul, J. Peters and J. Schmidhuber (2008). Natural Evolution Strategies. IEEE Congress on Evolutionary Computation (CEC). Y. Sun, D. Wierstra, T. Schaul and J. Schmidhuber (2009). Stochastic Search using the Natural Gradient. International Conference on Machine Learning (ICML). T. Glasmachers, T. Schaul, Y. Sun, D. Wierstra and J. Schmidhuber (2010). Exponential Natural Evolution Strategies. Genetic and Evolutionary Computation Conference (GECCO). T. Schaul, T. Glasmachers and J. Schmidhuber (2011). High Dimensions and Heavy Tails for Natural Evolution Strategies. Genetic and Evolutionary Computation Conference (GECCO). T. Schaul (2012).
Natural Evolution Strategies Converge on Sphere Functions. Genetic and Evolutionary Computation Conference (GECCO). == External links == Collection of NES implementations in different languages
Wikipedia/Natural_evolution_strategy
Cultural algorithms (CA) are a branch of evolutionary computation where there is a knowledge component, called the belief space, in addition to the population component. In this sense, cultural algorithms can be seen as an extension of a conventional genetic algorithm. Cultural algorithms were introduced by Reynolds (see references). == Belief space == The belief space of a cultural algorithm is divided into distinct categories. These categories represent different domains of knowledge that the population has of the search space. The belief space is updated after each iteration by the best individuals of the population. The best individuals can be selected using a fitness function that assesses the performance of each individual in the population, much like in genetic algorithms. === List of belief space categories ===
Normative knowledge: A collection of desirable value ranges for the individuals in the population component, e.g. acceptable behavior for the agents in the population.
Domain specific knowledge: Information about the domain of the problem the cultural algorithm is applied to.
Situational knowledge: Specific examples of important events, e.g. successful/unsuccessful solutions.
Temporal knowledge: History of the search space, e.g. the temporal patterns of the search process.
Spatial knowledge: Information about the topography of the search space.
== Population == The population component of the cultural algorithm is approximately the same as that of the genetic algorithm. == Communication protocol == Cultural algorithms require an interface between the population and the belief space. The best individuals of the population can update the belief space via the update function. Also, the knowledge categories of the belief space can affect the population component via the influence function. The influence function can affect the population by altering the genome or the actions of the individuals.
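The interplay of the accept (update) and influence functions can be sketched for real-parameter minimization with a belief space holding only normative knowledge (value ranges of the best individuals). This is a minimal illustrative sketch; the operator choices and constants are assumptions, not part of Reynolds' formulation:

```python
import random

random.seed(0)

def cultural_algorithm(fitness, bounds=(-10.0, 10.0), pop_size=30,
                       generations=60, elite=5):
    """Minimize `fitness` with a toy cultural algorithm: the belief space
    stores normative knowledge (the value range spanned by the best
    individuals) and influences where offspring are generated."""
    lo, hi = bounds
    population = [random.uniform(lo, hi) for _ in range(pop_size)]
    belief = {"lo": lo, "hi": hi}                  # normative knowledge
    for _ in range(generations):
        population.sort(key=fitness)               # ascending: best first
        best = population[:elite]
        # accept/update function: the best individuals update the belief space
        belief["lo"], belief["hi"] = min(best), max(best)
        # influence function: offspring are drawn inside the believed range
        span = max(belief["hi"] - belief["lo"], 1e-6)
        offspring = [random.uniform(belief["lo"] - 0.1 * span,
                                    belief["hi"] + 0.1 * span)
                     for _ in range(pop_size - elite)]
        population = best + offspring              # elitist replacement
    return min(population, key=fitness)

best = cultural_algorithm(lambda x: (x - 2.0) ** 2)   # minimum at x = 2
```

The believed range contracts around promising regions generation by generation, which is how the knowledge component steers the population-level search.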
== Pseudocode for cultural algorithms ==
Initialize population space (choose initial population)
Initialize belief space (e.g. set domain specific knowledge and normative value-ranges)
Repeat until termination condition is met:
Perform actions of the individuals in population space
Evaluate each individual by using the fitness function
Select the parents to reproduce a new generation of offspring
Let the belief space alter the genome of the offspring by using the influence function
Update the belief space by using the accept function (this is done by letting the best individuals affect the belief space)
== Applications == Various optimization problems Social simulation Real-parameter optimization == See also == Artificial intelligence Artificial life Evolutionary computation Genetic algorithm Harmony search Machine learning Memetic algorithm Memetics Metaheuristic Social simulation Sociocultural evolution Stochastic optimization Swarm intelligence == References == Robert G. Reynolds, Ziad Kobti, Tim Kohler: Agent-Based Modeling of Cultural Change in Swarm Using Cultural Algorithms R. G. Reynolds, “An Introduction to Cultural Algorithms,” in Proceedings of the 3rd Annual Conference on Evolutionary Programming, World Scientific Publishing, pp 131–139, 1994. Robert G. Reynolds, Bin Peng. Knowledge Learning and Social Swarms in Cultural Systems. Journal of Mathematical Sociology. 29:1-18, 2005 Reynolds, R. G., and Ali, M. Z, “Embedding a Social Fabric Component into Cultural Algorithms Toolkit for an Enhanced Knowledge-Driven Engineering Optimization”, International Journal of Intelligent Computing and Cybernetics (IJICC), Vol. 1, No 4, pp. 356–378, 2008 Reynolds, R G., and Ali, M Z., Exploring Knowledge and Population Swarms via an Agent-Based Cultural Algorithms Simulation Toolkit (CAT), in proceedings of IEEE Congress on Computational Intelligence 2007.
Wikipedia/Cultural_algorithm
In mathematical optimization, the Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960, which is used as a performance test problem for optimization algorithms. It is also known as Rosenbrock's valley or Rosenbrock's banana function. The global minimum is inside a long, narrow, parabolic-shaped flat valley. Finding the valley is trivial; converging to the global minimum, however, is difficult. The function is defined by f ( x , y ) = ( a − x ) 2 + b ( y − x 2 ) 2 {\displaystyle f(x,y)=(a-x)^{2}+b(y-x^{2})^{2}} It has a global minimum at ( x , y ) = ( a , a 2 ) {\displaystyle (x,y)=(a,a^{2})} , where f ( x , y ) = 0 {\displaystyle f(x,y)=0} . Usually, these parameters are set such that a = 1 {\displaystyle a=1} and b = 100 {\displaystyle b=100} . Only in the trivial case where a = 0 {\displaystyle a=0} is the function symmetric, with the minimum at the origin. == Multidimensional generalizations == Two variants are commonly encountered. One is the sum of N / 2 {\displaystyle N/2} uncoupled 2D Rosenbrock problems, and is defined only for even N {\displaystyle N} : f ( x ) = f ( x 1 , x 2 , … , x N ) = ∑ i = 1 N / 2 [ 100 ( x 2 i − 1 2 − x 2 i ) 2 + ( x 2 i − 1 − 1 ) 2 ] . {\displaystyle f(\mathbf {x} )=f(x_{1},x_{2},\dots ,x_{N})=\sum _{i=1}^{N/2}\left[100(x_{2i-1}^{2}-x_{2i})^{2}+(x_{2i-1}-1)^{2}\right].} This variant has predictably simple solutions. A second, more involved variant is f ( x ) = ∑ i = 1 N − 1 [ 100 ( x i + 1 − x i 2 ) 2 + ( 1 − x i ) 2 ] where x = ( x 1 , … , x N ) ∈ R N . {\displaystyle f(\mathbf {x} )=\sum _{i=1}^{N-1}[100(x_{i+1}-x_{i}^{2})^{2}+(1-x_{i})^{2}]\quad {\mbox{where}}\quad \mathbf {x} =(x_{1},\ldots ,x_{N})\in \mathbb {R} ^{N}.} It has exactly one minimum for N = 3 {\displaystyle N=3} (at ( 1 , 1 , 1 ) {\displaystyle (1,1,1)} ) and exactly two minima for 4 ≤ N ≤ 7 {\displaystyle 4\leq N\leq 7} —the global minimum at ( 1 , 1 , . . .
, 1 ) {\displaystyle (1,1,...,1)} and a local minimum near x ^ = ( − 1 , 1 , … , 1 ) {\displaystyle {\hat {\mathbf {x} }}=(-1,1,\dots ,1)} . This result is obtained by setting the gradient of the function equal to zero, noticing that the resulting equation is a rational function of x {\displaystyle x} . For small N {\displaystyle N} the polynomials can be determined exactly and Sturm's theorem can be used to determine the number of real roots, while the roots can be bounded in the region of | x i | < 2.4 {\displaystyle |x_{i}|<2.4} . For larger N {\displaystyle N} this method breaks down due to the size of the coefficients involved. == Stationary points == Many of the stationary points of the function exhibit a regular pattern when plotted. This structure can be exploited to locate them. == Optimization examples == The Rosenbrock function can be efficiently optimized by adapting appropriate coordinate system without using any gradient information and without building local approximation models (in contrast to many derivate-free optimizers). The following figure illustrates an example of 2-dimensional Rosenbrock function optimization by adaptive coordinate descent from starting point x 0 = ( − 3 , − 4 ) {\displaystyle x_{0}=(-3,-4)} . The solution with the function value 10 − 10 {\displaystyle 10^{-10}} can be found after 325 function evaluations. Using the Nelder–Mead method from starting point x 0 = ( − 1 , 1 ) {\displaystyle x_{0}=(-1,1)} with a regular initial simplex a minimum is found with function value 1.36 ⋅ 10 − 10 {\displaystyle 1.36\cdot 10^{-10}} after 185 function evaluations. The figure below visualizes the evolution of the algorithm. == See also == Test functions for optimization == References == == External links == Rosenbrock function plot in 3D Weisstein, Eric W. "Rosenbrock Function". MathWorld.
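As a quick check of the definitions above, here is a short Python sketch of the 2-D function with the usual parameters a = 1, b = 100, together with the two multidimensional variants (function names are mine):

```python
def rosenbrock(x, y, a=1.0, b=100.0):
    # 2-D Rosenbrock: (a - x)^2 + b (y - x^2)^2, minimum at (a, a^2).
    return (a - x) ** 2 + b * (y - x * x) ** 2

def rosenbrock_uncoupled(xs):
    # Sum of N/2 independent 2-D problems; N must be even.
    assert len(xs) % 2 == 0
    return sum(rosenbrock(xs[i], xs[i + 1]) for i in range(0, len(xs), 2))

def rosenbrock_coupled(xs):
    # The chained variant with N-1 coupled terms.
    return sum(100.0 * (xs[i + 1] - xs[i] ** 2) ** 2 + (1.0 - xs[i]) ** 2
               for i in range(len(xs) - 1))

print(rosenbrock(1.0, 1.0))                       # global minimum: 0.0
print(rosenbrock_coupled([1.0] * 5))              # 0.0 at (1, ..., 1)
print(rosenbrock_coupled([-1.0, 1.0, 1.0, 1.0]))  # 4.0, near the local minimum for N = 4
```

The last line illustrates the local minimum near (−1, 1, ..., 1): the point is not stationary itself, but its small function value shows the second basin the text describes.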
Wikipedia/Rosenbrock_function
Evolution strategy (ES) from computer science is a subclass of evolutionary algorithms, which serves as an optimization technique. It uses the major genetic operators mutation, recombination and selection of parents.

== History ==
The 'evolution strategy' optimization technique was created in the early 1960s and developed further in the 1970s and later by Ingo Rechenberg, Hans-Paul Schwefel and their co-workers.

== Methods ==
Evolution strategies use natural problem-dependent representations, so problem space and search space are identical. In common with evolutionary algorithms, the operators are applied in a loop. An iteration of the loop is called a generation. The sequence of generations is continued until a termination criterion is met. The special feature of the ES is the self-adaptation of mutation step sizes and the coevolution associated with it. The ES is briefly presented here in its standard form, noting that there are many variants. The real-valued chromosome contains, in addition to the n decision variables, n′ mutation step sizes σ_j, where 1 ≤ j ≤ n′ ≤ n. Often one mutation step size is used for all decision variables, or each decision variable has its own step size. Mate selection to produce λ offspring is random, i.e. independent of fitness. First, new mutation step sizes are generated per mating by intermediate recombination of the parental σ_j, with subsequent mutation as follows:

σ′_j = σ_j · exp(N(0,1) − N_j(0,1))

where N(0,1) is a normally distributed random variable with mean 0 and standard deviation 1. The draw N(0,1) applies to all σ′_j, while N_j(0,1) is newly determined for each σ′_j. Next, discrete recombination of the decision variables is followed by a mutation using the new mutation step sizes as standard deviations of the normal distribution. The new decision variables x′_j are calculated as follows:

x′_j = x_j + N_j(0, σ′_j)

This results in an evolutionary search on two levels: first, at the problem level itself, and second, at the mutation step size level. In this way, it can be ensured that the ES searches for its target in ever finer steps. However, there is also the danger that larger invalid areas in the search space can only be skipped with difficulty.

== Variants ==
The ES knows two variants of best selection for the generation of the next parent population (μ – number of parents, λ – number of offspring):

(μ, λ): The μ best offspring are used for the next generation (usually μ = λ/2).
(μ + λ): The best are selected from the union of the μ parents and λ offspring.

Bäck and Schwefel recommend that the value of λ should be approximately seven times μ, whereby μ must not be chosen too small because of the strong selection pressure. Suitable values for μ are application-dependent and must be determined experimentally.

The selection of the next generation in evolution strategies is deterministic and based only on the fitness rankings, not on the actual fitness values. The resulting algorithm is therefore invariant with respect to monotonic transformations of the objective function. The simplest and oldest evolution strategy, (1 + 1), operates on a population of size two: the current point (parent) and the result of its mutation. Only if the mutant's fitness is at least as good as that of the parent does it become the parent of the next generation; otherwise the mutant is disregarded. More generally, λ mutants can be generated and compete with the parent, called (1 + λ). In (1, λ) the best mutant becomes the parent of the next generation while the current parent is always disregarded. For some of these variants, proofs of linear convergence (in a stochastic sense) have been derived on unimodal objective functions.

Individual step sizes for each coordinate, or correlations between coordinates, which are essentially defined by an underlying covariance matrix, are controlled in practice either by self-adaptation or by covariance matrix adaptation (CMA-ES). When the mutation step is drawn from a multivariate normal distribution using an evolving covariance matrix, it has been hypothesized that this adapted matrix approximates the inverse Hessian of the search landscape. This hypothesis has been proven for a static model relying on a quadratic approximation. In 2025, Chen et al. proposed a multi-agent evolution strategy for consensus-based distributed optimization, in which a novel step-size adaptation method helps multiple agents control the step size cooperatively.
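The two-level update described above can be sketched as a minimal (μ + λ)-ES in Python. The sphere objective, the parameter values, and the damping factor 0.3 inside the exponent are illustrative assumptions added for numerical stability; the text's formula uses the raw difference of the two normal draws.

```python
import math
import random

def sphere(x):
    # Stand-in objective to minimise (an assumption).
    return sum(v * v for v in x)

def es_plus(mu=3, lam=21, n=2, generations=200, seed=1):
    rng = random.Random(seed)
    # Each individual: (decision variables, per-variable step sizes sigma_j)
    parents = [([rng.uniform(-5, 5) for _ in range(n)], [1.0] * n)
               for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            pa, pb = rng.sample(parents, 2)
            # Intermediate recombination of step sizes, then log-normal
            # self-adaptation: one common draw g for all sigma_j, one fresh
            # draw per j (damped by 0.3 -- an added assumption).
            g = rng.gauss(0, 1)
            sigma = [0.5 * (sa + sb) * math.exp(0.3 * (g - rng.gauss(0, 1)))
                     for sa, sb in zip(pa[1], pb[1])]
            # Discrete recombination of decision variables, then mutation
            # with the new step sizes as standard deviations.
            x = [(xa if rng.random() < 0.5 else xb) + rng.gauss(0, s)
                 for xa, xb, s in zip(pa[0], pb[0], sigma)]
            offspring.append((x, sigma))
        # (mu + lambda) selection: rank parents and offspring together,
        # using fitness ranks only.
        parents = sorted(parents + offspring,
                         key=lambda ind: sphere(ind[0]))[:mu]
    return parents[0]

best_x, best_sigma = es_plus()
print(sphere(best_x))
```

Because parents survive into the selection pool, the best fitness is monotonically non-increasing, which is what makes plus-selection robust against occasional bad step-size mutations.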
== See also ==
Covariance matrix adaptation evolution strategy (CMA-ES)
Derivative-free optimization
Evolutionary computation
Genetic algorithm
Natural evolution strategy
Evolutionary game theory

== References ==

== Bibliography ==
Ingo Rechenberg (1971): Evolutionsstrategie – Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (PhD thesis). Reprinted by Frommann-Holzboog (1973). ISBN 3-7728-1642-8
Hans-Paul Schwefel (1974): Numerische Optimierung von Computer-Modellen (PhD thesis). Reprinted by Birkhäuser (1977).
Hans-Paul Schwefel: Evolution and Optimum Seeking. New York: Wiley & Sons, 1995. ISBN 0-471-57148-2
H.-G. Beyer and H.-P. Schwefel: Evolution Strategies: A Comprehensive Introduction. Natural Computing, 1(1):3–52, 2002.
Hans-Georg Beyer: The Theory of Evolution Strategies. Springer, 2001. ISBN 3-540-67297-4
Ingo Rechenberg: Evolutionsstrategie '94. Stuttgart: Frommann-Holzboog, 1994. ISBN 3-7728-1642-8
J. Klockgether and H. P. Schwefel (1970): Two-Phase Nozzle and Hollow Core Jet Experiments. AEG-Forschungsinstitut, MHD Staustrahlrohr Project Group, Berlin, Federal Republic of Germany. Proceedings of the 11th Symposium on Engineering Aspects of Magnetohydrodynamics, Caltech, Pasadena, Cal., 24–26 March 1970.
M. Emmerich, O. M. Shir, and H. Wang: Evolution Strategies. In: Handbook of Heuristics, pp. 1–31. Springer International Publishing (2018).

== Research centers ==
Bionics & Evolutiontechnique at Technische Universität Berlin
Chair of Algorithm Engineering (Ls11) – TU Dortmund University
Collaborative Research Center 531 – TU Dortmund University
Wikipedia/Evolution_strategy
In computer science and operations research, the bees algorithm is a population-based search algorithm which was developed by Pham, Ghanbarzadeh et al. in 2005. It mimics the food foraging behaviour of honey bee colonies. In its basic version the algorithm performs a kind of neighbourhood search combined with global search, and can be used for both combinatorial optimization and continuous optimization. The only condition for the application of the bees algorithm is that some measure of distance between the solutions is defined. The effectiveness and specific abilities of the bees algorithm have been proven in a number of studies. == Metaphor == A colony of honey bees can extend itself over long distances (over 14 km) and in multiple directions simultaneously to harvest nectar or pollen from multiple food sources (flower patches). A small fraction of the colony constantly searches the environment looking for new flower patches. These scout bees move randomly in the area surrounding the hive, evaluating the profitability (net energy yield) of the food sources encountered. When they return to the hive, the scouts deposit the food harvested. Those individuals that found a highly profitable food source go to an area in the hive called the “dance floor”, and perform a ritual known as the waggle dance. Through the waggle dance a scout bee communicates the location of its discovery to idle onlookers, which join in the exploitation of the flower patch. Since the length of the dance is proportional to the scout’s rating of the food source, more foragers get recruited to harvest the best rated flower patches. After dancing, the scout returns to the food source it discovered to collect more food. As long as they are evaluated as profitable, rich food sources will be advertised by the scouts when they return to the hive. Recruited foragers may waggle dance as well, increasing the recruitment for highly rewarding flower patches. 
Thanks to this autocatalytic process, the bee colony is able to quickly switch the focus of the foraging effort to the most profitable flower patches.

== Algorithm ==
The bees algorithm mimics the foraging strategy of honey bees to look for the best solution to an optimisation problem. Each candidate solution is thought of as a food source (flower), and a population (colony) of n agents (bees) is used to search the solution space. Each time an artificial bee visits a flower (lands on a solution), it evaluates its profitability (fitness). The bees algorithm consists of an initialisation procedure and a main search cycle which is iterated for a given number T of times, or until a solution of acceptable fitness is found. Each search cycle is composed of five procedures: recruitment, local search, neighbourhood shrinking, site abandonment, and global search.

Pseudocode for the standard bees algorithm:

 1 for i = 1, ..., ns
     i   scout[i] = Initialise_scout()
     ii  flower_patch[i] = Initialise_flower_patch(scout[i])
 2 do until stopping_condition = TRUE
     i   Recruitment()
     ii  for i = 1, ..., na
         1 flower_patch[i] = Local_search(flower_patch[i])
         2 flower_patch[i] = Site_abandonment(flower_patch[i])
         3 flower_patch[i] = Neighbourhood_shrinking(flower_patch[i])
     iii for i = nb, ..., ns
         1 flower_patch[i] = Global_search(flower_patch[i])

In the initialisation routine ns scout bees are randomly placed in the search space and evaluate the fitness of the solutions where they land. For each solution, a neighbourhood (called a flower patch) is delimited. In the recruitment procedure, the scouts that visited the nb ≤ ns fittest solutions (best sites) perform the waggle dance; that is, they recruit foragers to search the neighbourhoods of the most promising solutions further. The scouts that located the very best ne ≤ nb solutions (elite sites) recruit nre foragers each, whilst the remaining nb − ne scouts recruit nrb ≤ nre foragers each.
Thus, the number of foragers recruited depends on the profitability of the food source. In the local search procedure, the recruited foragers are randomly scattered within the flower patches enclosing the solutions visited by the scouts (local exploitation). If any of the foragers in a flower patch lands on a solution of higher fitness than the solution visited by the scout, that forager becomes the new scout. If no forager finds a solution of higher fitness, the size of the flower patch is shrunk (neighbourhood shrinking procedure). Usually, flower patches are initially defined over a large area, and their size is gradually shrunk by the neighbourhood shrinking procedure. As a result, the scope of the local exploration is progressively focused on the area immediately close to the local fitness best. If no improvement in fitness is recorded in a given flower patch for a pre-set number of search cycles, the local maximum of fitness is considered found, the patch is abandoned (site abandonment), and a new scout is randomly generated. As in biological bee colonies, a small number of scouts keeps exploring the solution space looking for new regions of high fitness (global search). The global search procedure re-initialises the last ns − nb flower patches with randomly generated solutions. At the end of one search cycle, the scout population is again composed of ns scouts: nb scouts produced by the local search procedure (some of which may have been re-initialised by the site abandonment procedure), and ns − nb scouts generated by the global search procedure. The total artificial bee colony size is n = ne·nre + (nb − ne)·nrb + ns (elite site foragers + remaining best site foragers + scouts) bees.
These variants include (but are not limited to) fuzzy or enhanced BA (EBA), grouped BA (GBA), hybrid modified BA (MBA) and so on. The pseudo-code for the grouped BA (GBA) is as follows. == See also == Ant colony optimization algorithms Artificial bee colony algorithm Evolutionary computation Lévy flight foraging hypothesis Manufacturing Engineering Centre Mathematical optimization Metaheuristic Particle swarm optimization Swarm intelligence == References == == External links == The bees algorithm website Boffins put dancing bees to work – BBC News The bees algorithm workshop
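The standard search cycle described above can be sketched in Python for continuous minimisation. The objective, the parameter values, and the shrink factor are illustrative assumptions, and the site-abandonment step is omitted for brevity:

```python
import random

def objective(x):
    # Stand-in fitness to minimise (an assumption): lower is better.
    return sum(v * v for v in x)

def bees_algorithm(dim=2, ns=10, nb=5, ne=2, nre=4, nrb=2,
                   cycles=100, patch0=2.0, seed=0):
    rng = random.Random(seed)
    # ns scouts placed at random; each gets a flower patch (neighbourhood) size
    scouts = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(ns)]
    patch = [patch0] * ns
    for _ in range(cycles):
        # Rank sites by fitness so indices 0..nb-1 are the best sites
        order = sorted(range(ns), key=lambda i: objective(scouts[i]))
        scouts = [scouts[i] for i in order]
        patch = [patch[i] for i in order]
        for i in range(nb):                    # local search on best sites
            foragers = nre if i < ne else nrb  # elite sites recruit more
            best = scouts[i]
            for _ in range(foragers):
                cand = [v + rng.uniform(-patch[i], patch[i])
                        for v in scouts[i]]
                if objective(cand) < objective(best):
                    best = cand
            if objective(best) < objective(scouts[i]):
                scouts[i] = best               # forager becomes the new scout
            else:
                patch[i] *= 0.8                # neighbourhood shrinking
        for i in range(nb, ns):                # global search: fresh scouts
            scouts[i] = [rng.uniform(-5, 5) for _ in range(dim)]
            patch[i] = patch0
    return min(scouts, key=objective)

best = bees_algorithm()
print(objective(best))
```

Note how the two exploration modes coexist: the nb best sites are refined by recruited foragers, while the remaining ns − nb scouts keep sampling the space at random.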
Wikipedia/Bees_algorithm
A schema (pl.: schemata) is a template in computer science used in the field of genetic algorithms that identifies a subset of strings with similarities at certain string positions. Schemata are a special case of cylinder sets, forming a basis for a product topology on strings. In other words, schemata can be used to generate a topology on a space of strings.

== Description ==
For example, consider binary strings of length 6. The schema 1**0*1 describes the set of all words of length 6 with 1's at the first and sixth positions and a 0 at the fourth position. The * is a wildcard symbol, which means that positions 2, 3 and 5 can have a value of either 1 or 0. The order of a schema is defined as the number of fixed positions in the template, while the defining length δ(H) is the distance between the first and last specific positions. The order of 1**0*1 is 3 and its defining length is 5. The fitness of a schema is the average fitness of all strings matching the schema. The fitness of a string is a measure of the value of the encoded problem solution, as computed by a problem-specific evaluation function.

=== Length ===
The length of a schema H, called N(H), is defined as the total number of nodes in the schema. N(H) is also equal to the number of nodes in the programs matching H.

=== Disruption ===
If the child of an individual that matches schema H does not itself match H, the schema is said to have been disrupted.

== Propagation of schema ==
In evolutionary computing such as genetic algorithms and genetic programming, propagation refers to the inheritance of characteristics of one generation by the next. For example, a schema is propagated if individuals in the current generation match it and so do those in the next generation. Those in the next generation may be (but do not have to be) children of parents who matched it.
== The Expansion and Compression Operators ==
Recently, schemata have been studied using order theory. Two basic operators are defined for schemata: expansion and compression. The expansion maps a schema onto the set of words which it represents, while the compression maps a set of words onto a schema.

In the following definitions, Σ denotes an alphabet, Σ^l denotes all words of length l over the alphabet Σ, and Σ_* denotes the alphabet Σ with the extra symbol *. Σ_*^l denotes all schemata of length l over the alphabet Σ_*, as well as the empty schema ε_*.

For any schema s ∈ Σ_*^l, the following operator ↑s, called the expansion of s, maps s to a subset of words in Σ^l:

↑s := { b ∈ Σ^l | b_i = s_i or s_i = * for each i ∈ {1, ..., l} }

where the subscript i denotes the character at position i in a word or schema. When s = ε_*, then ↑s = ∅. More simply put, ↑s is the set of all words in Σ^l that can be made by exchanging the * symbols in s with symbols from Σ. For example, if Σ = {0, 1}, l = 3 and s = 10*, then ↑s = {100, 101}.
Conversely, for any A ⊆ Σ^l, we define ↓A, called the compression of A, which maps A onto a schema s ∈ Σ_*^l:

↓A := s

where s is a schema of length l such that the symbol at position i in s is determined in the following way: if x_i = y_i for all x, y ∈ A, then s_i = x_i; otherwise s_i = *. If A = ∅, then ↓A = ε_*. One can think of this operator as stacking up all the items in A: if all elements in a column are equivalent, the symbol at that position in s takes this value; otherwise there is a wildcard symbol. For example, let A = {100, 000, 010}; then ↓A = **0.

Schemata can be partially ordered. For any a, b ∈ Σ_*^l we say a ≤ b if and only if ↑a ⊆ ↑b. It follows that ≤ is a partial ordering on a set of schemata, from the reflexivity, antisymmetry and transitivity of the subset relation. For example, ε_* ≤ 11 ≤ 1* ≤ **. This is because ↑ε_* ⊆ ↑11 ⊆ ↑1* ⊆ ↑**, i.e. ∅ ⊆ {11} ⊆ {11, 10} ⊆ {11, 10, 01, 00}.
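The expansion and compression operators translate directly into Python. The sketch below fixes the binary alphabet {0, 1} and represents schemata as strings over {'0', '1', '*'}; the function names are mine, and None stands in for the empty schema ε_*:

```python
from itertools import product

ALPHABET = "01"

def expand(s):
    """Expansion: all words obtained by filling each '*' from the alphabet."""
    choices = [ALPHABET if c == "*" else c for c in s]
    return {"".join(w) for w in product(*choices)}

def compress(words):
    """Compression: per-position symbol where all words agree, else '*'."""
    words = list(words)
    if not words:
        return None  # stands in for the empty schema epsilon_*
    return "".join(col[0] if len(set(col)) == 1 else "*"
                   for col in zip(*words))

def leq(a, b):
    """Partial order on schemata: a <= b iff expand(a) subset of expand(b)."""
    return expand(a) <= expand(b)

print(expand("10*"))                    # the set {100, 101}
print(compress({"100", "000", "010"}))  # the schema **0
print(leq("11", "1*"), leq("1*", "**"))
```

Note that compress(expand(s)) recovers s for any schema without redundant wildcards, which reflects the Galois connection between the two operators mentioned below.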
The compression and expansion operators form a Galois connection, where ↓ is the lower adjoint and ↑ the upper adjoint.

== The Schematic Completion and The Schematic Lattice ==
For a set A ⊆ Σ^l, the process of calculating the compression on each subset of A, that is {↓X | X ⊆ A}, is called the schematic completion of A, denoted S(A). For example, let A = {110, 100, 001, 000}. The schematic completion of A results in the following set:

S(A) = {001, 100, 000, 110, 00*, *00, 1*0, **0, *0*, ***, ε_*}

The poset (S(A), ≤) always forms a complete lattice, called the schematic lattice. The schematic lattice is similar to the concept lattice found in formal concept analysis.

== See also ==
Holland's schema theorem
Formal concept analysis

== References ==
Wikipedia/Schema_(genetic_algorithms)