Ping Zhang is a mathematician specializing in graph theory. She is a professor of mathematics at Western Michigan University and the author of multiple textbooks on graph theory and mathematical proof.
Zhang earned a master's degree in 1989 from the University of Jordan, working there on ring theory with Hasan Al-Ezeh.
She completed her Ph.D. in 1995 at Michigan State University. Her dissertation, in algebraic combinatorics, was Subposets of Boolean Algebras, and was supervised by Bruce Sagan.
After a short-term position at the University of Texas at El Paso, she joined the Western Michigan faculty in 1996.
== Books ==
Zhang is the author of:
Mathematical Proofs: A Transition to Advanced Mathematics (with Gary Chartrand and A. D. Polimeni, Addison-Wesley, 2002; 2nd ed., 2007; 3rd ed., 2012)
Introduction to Graph Theory (with Gary Chartrand, McGraw-Hill, 2004; Chinese ed., 2006); revised as A First Course in Graph Theory (Dover, 2012)
Chromatic Graph Theory (with Gary Chartrand, CRC Press, 2008)
Graphs & Digraphs (by Gary Chartrand and Linda Lesniak, with Zhang added as a co-author on the 5th ed., CRC Press, 2010)
Discrete Mathematics (with Gary Chartrand, Waveland Press, 2011)
Covering Walks in Graphs (with Futaba Fujie, Springer, 2014)
Color-Induced Graph Colorings (Springer, 2015)
The Fascinating World of Graph Theory (with Arthur T. Benjamin and Gary Chartrand, Princeton University Press, 2015)
A Kaleidoscopic View of Graph Colorings (Springer, 2016)
How to Label a Graph (with Gary Chartrand and Cooroo Egan, Springer, 2019) MR3932147
Irregularity in Graphs (with Akbar Ali and Gary Chartrand, Springer, 2021) MR4292275
She is also the co-editor of:
Handbook of Graph Theory (originally edited by Jonathan L. Gross and Jay Yellen, with Zhang added as a co-editor on the 2nd ed., CRC Press, 2013)
== References ==
Source: Wikipedia/Ping_Zhang_(graph_theorist)
In the mathematical theory of directed graphs, a graph is said to be strongly connected if every vertex is reachable from every other vertex. The strongly connected components of a directed graph form a partition into subgraphs that are themselves strongly connected. It is possible to test the strong connectivity of a graph, or to find its strongly connected components, in linear time (that is, Θ(V + E)).
== Definitions ==
A directed graph is called strongly connected if there is a path in each direction between each pair of vertices of the graph. That is, a path exists from the first vertex in the pair to the second, and another path exists from the second vertex to the first.
In a directed graph G that may not itself be strongly connected, a pair of vertices u and v are said to be strongly connected to each other if there is a path in each direction between them.
The binary relation of being strongly connected is an equivalence relation, and the induced subgraphs of its equivalence classes are called strongly connected components.
Equivalently, a strongly connected component of a directed graph G is a subgraph that is strongly connected, and is maximal with this property: no additional edges or vertices from G can be included in the subgraph without breaking its property of being strongly connected. The collection of strongly connected components forms a partition of the set of vertices of G. A strongly connected component C is called trivial when C consists of a single vertex which is not connected to itself with an edge, and non-trivial otherwise.
If each strongly connected component is contracted to a single vertex, the resulting graph is a directed acyclic graph, the condensation of G. A directed graph is acyclic if and only if it has no strongly connected subgraphs with more than one vertex, because a directed cycle is strongly connected and every non-trivial strongly connected component contains at least one directed cycle.
== Algorithms ==
=== DFS-based linear-time algorithms ===
Several algorithms based on depth-first search compute strongly connected components in linear time.
Kosaraju's algorithm uses two passes of depth-first search. The first, in the original graph, is used to choose the order in which the outer loop of the second depth-first search tests vertices for having been visited already and recursively explores them if not. The second depth-first search is on the transpose graph of the original graph, and each recursive exploration finds a single new strongly connected component. It is named after S. Rao Kosaraju, who described it (but did not publish his results) in 1978; Micha Sharir later published it in 1981.
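The two passes described above can be sketched in Python. This is a minimal recursive version for illustration (variable names and the graph representation are my own, not from a particular source):

```python
from collections import defaultdict

def kosaraju_scc(vertices, edges):
    """Strongly connected components via two depth-first searches."""
    graph = defaultdict(list)      # original graph
    transpose = defaultdict(list)  # the same edges, reversed
    for u, v in edges:
        graph[u].append(v)
        transpose[v].append(u)

    # Pass 1: record vertices in order of DFS completion (finish time).
    visited, order = set(), []
    def dfs1(u):
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                dfs1(v)
        order.append(u)
    for u in vertices:
        if u not in visited:
            dfs1(u)

    # Pass 2: explore the transpose graph in reverse finish order;
    # each new exploration collects exactly one strongly connected component.
    assigned, components = set(), []
    def dfs2(u, comp):
        assigned.add(u)
        comp.append(u)
        for v in transpose[u]:
            if v not in assigned:
                dfs2(v, comp)
    for u in reversed(order):
        if u not in assigned:
            comp = []
            dfs2(u, comp)
            components.append(comp)
    return components
```

For example, `kosaraju_scc([1, 2, 3, 4], [(1, 2), (2, 1), (2, 3), (3, 4), (4, 3)])` separates the cycle {1, 2} from the cycle {3, 4}.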
Tarjan's strongly connected components algorithm, published by Robert Tarjan in 1972, performs a single pass of depth-first search. It maintains a stack of vertices that have been explored by the search but not yet assigned to a component, and calculates "low numbers" of each vertex (an index number of the highest ancestor reachable in one step from a descendant of the vertex) which it uses to determine when a set of vertices should be popped off the stack into a new component.
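Tarjan's single-pass scheme, with its explicit stack and low numbers, can be sketched as follows (a compact recursive version; the representation of the graph as a successor dict is an illustrative choice):

```python
def tarjan_scc(vertices, graph):
    """Tarjan's algorithm; graph maps each vertex to a list of successors."""
    index = {}      # discovery order of each vertex
    lowlink = {}    # smallest index reachable from the vertex's DFS subtree
    stack, on_stack = [], set()
    components = []
    counter = [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:   # back edge to a vertex not yet in a component
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:   # v is the root of a component:
            comp = []                # pop the stack down to v
            while True:
                w = stack.pop()
                on_stack.remove(w)
                comp.append(w)
                if w == v:
                    break
            components.append(comp)

    for v in vertices:
        if v not in index:
            strongconnect(v)
    return components
```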
The path-based strong component algorithm uses a depth-first search, like Tarjan's algorithm, but with two stacks. One of the stacks is used to keep track of the vertices not yet assigned to components, while the other keeps track of the current path in the depth-first search tree. The first linear time version of this algorithm was published by Edsger W. Dijkstra in 1976.
Kosaraju's algorithm is conceptually the simplest of the three, but Tarjan's algorithm and the path-based algorithm each require only one depth-first search rather than two.
=== Reachability-based algorithms ===
Previous linear-time algorithms are based on depth-first search which is generally considered hard to parallelize. Fleischer et al. in 2000 proposed a divide-and-conquer approach based on reachability queries, and such algorithms are usually called reachability-based SCC algorithms. The idea of this approach is to pick a random pivot vertex and apply forward and backward reachability queries from this vertex. The two queries partition the vertex set into 4 subsets: vertices reached by both, either one, or none of the searches. One can show that a strongly connected component has to be contained in one of the subsets. The vertex subset reached by both searches forms a strongly connected component, and the algorithm then recurses on the other 3 subsets.
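The divide-and-conquer scheme can be sketched as below. This is a sequential illustration of the partitioning idea only (in practice the two reachability searches would run as parallel BFS); the graph representation and pivot choice are illustrative:

```python
from collections import defaultdict, deque

def scc_reach(vertices, edges):
    """SCCs by recursive forward/backward reachability from a pivot."""
    succ, pred = defaultdict(list), defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)

    def reachable(start, adj, allowed):
        """BFS restricted to the `allowed` vertex set."""
        seen, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in allowed and v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen

    def recurse(verts):
        if not verts:
            return []
        pivot = next(iter(verts))
        fwd = reachable(pivot, succ, verts)  # vertices the pivot can reach
        bwd = reachable(pivot, pred, verts)  # vertices that can reach the pivot
        scc = fwd & bwd                      # one whole strongly connected component
        # No SCC straddles the three remaining subsets, so recurse on each.
        return ([scc] + recurse(fwd - scc) + recurse(bwd - scc)
                      + recurse(verts - fwd - bwd))

    return recurse(set(vertices))
```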
The expected sequential running time of this algorithm is shown to be O(n log n), a factor of O(log n) more than the classic algorithms. The parallelism comes from: (1) the reachability queries can be parallelized more easily (e.g. by a breadth-first search (BFS), and it can be fast if the diameter of the graph is small); and (2) the independence between the subtasks in the divide-and-conquer process.
This algorithm performs well on real-world graphs, but does not have a theoretical guarantee on parallelism (for example, if a graph has no edges, the algorithm requires O(n) levels of recursion).
Blelloch et al. in 2016 showed that if the reachability queries are applied in a random order, the cost bound of O(n log n) still holds. Furthermore, the queries can then be batched in a prefix-doubling manner (i.e., 1, 2, 4, 8 queries per round) and run simultaneously in one round. The overall span of this algorithm is log² n reachability queries, which is probably the optimal parallelism that can be achieved using the reachability-based approach.
=== Generating random strongly connected graphs ===
Peter M. Maurer describes an algorithm for generating random strongly connected graphs, based on a modification of an algorithm for strong connectivity augmentation, the problem of adding as few edges as possible to make a graph strongly connected. When used in conjunction with the Gilbert or Erdős–Rényi models with node relabelling, the algorithm is capable of generating any strongly connected graph on n nodes, without restriction on the kinds of structures that can be generated.
== Applications ==
Algorithms for finding strongly connected components may be used to solve 2-satisfiability problems (systems of Boolean variables with constraints on the values of pairs of variables): as Aspvall, Plass & Tarjan (1979) showed, a 2-satisfiability instance is unsatisfiable if and only if there is a variable v such that v and its negation are both contained in the same strongly connected component of the implication graph of the instance.
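The satisfiability test from the Aspvall–Plass–Tarjan criterion can be sketched as follows. This checks satisfiability only (their full algorithm also extracts a satisfying assignment from the component order); the literal encoding and the embedded two-pass SCC routine are illustrative choices:

```python
from collections import defaultdict

def two_sat(num_vars, clauses):
    """clauses: list of (a, b), literals encoded as +/-(1..num_vars);
    (a, b) means the disjunction a OR b.  Returns True iff satisfiable:
    no variable may share an SCC of the implication graph with its negation."""
    graph, transpose = defaultdict(list), defaultdict(list)
    def add(u, v):
        graph[u].append(v)
        transpose[v].append(u)
    for a, b in clauses:
        add(-a, b)   # not a  implies  b
        add(-b, a)   # not b  implies  a

    lits = [l for v in range(1, num_vars + 1) for l in (v, -v)]

    # Kosaraju-style SCC labelling of the implication graph.
    visited, order = set(), []
    def dfs1(u):
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                dfs1(v)
        order.append(u)
    for u in lits:
        if u not in visited:
            dfs1(u)

    comp = {}
    def dfs2(u, label):
        comp[u] = label
        for v in transpose[u]:
            if v not in comp:
                dfs2(v, label)
    for u in reversed(order):
        if u not in comp:
            dfs2(u, u)

    return all(comp[v] != comp[-v] for v in range(1, num_vars + 1))
```

For instance, the clauses (x1 ∨ x2), (¬x1 ∨ x2), (x1 ∨ ¬x2) are satisfiable, while (x1) together with (¬x1) is not.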
Strongly connected components are also used to compute the Dulmage–Mendelsohn decomposition, a classification of the edges of a bipartite graph, according to whether or not they can be part of a perfect matching in the graph.
== Related results ==
A directed graph is strongly connected if and only if it has an ear decomposition, a partition of the edges into a sequence of directed paths and cycles such that the first subgraph in the sequence is a cycle, and each subsequent subgraph is either a cycle sharing one vertex with previous subgraphs, or a path sharing its two endpoints with previous subgraphs.
According to Robbins' theorem, an undirected graph may be oriented in such a way that it becomes strongly connected, if and only if it is 2-edge-connected. One way to prove this result is to find an ear decomposition of the underlying undirected graph and then orient each ear consistently.
== See also ==
Clique (graph theory)
Connected component (graph theory)
Modular decomposition
Weak component
== References ==
== External links ==
Java implementation for computation of strongly connected components in the jBPT library (see StronglyConnectedComponents class).
C++ implementation of strongly connected components
Source: Wikipedia/Condensation_(graph_theory)
In software development, distributed version control (also known as distributed revision control) is a form of version control in which the complete codebase, including its full history, is mirrored on every developer's computer. Compared to centralized version control, this enables automatic management of branching and merging, speeds up most operations (except pushing and fetching), improves the ability to work offline, and does not rely on a single location for backups. Git, the world's most popular version control system, is a distributed version control system.
In 2010, software development author Joel Spolsky described distributed version control systems as "possibly the biggest advance in software development technology in the [past] ten years".
== Distributed vs. centralized ==
Distributed version control systems (DVCS) use a peer-to-peer approach to version control, as opposed to the client–server approach of centralized systems. Distributed revision control synchronizes repositories by transferring patches from peer to peer. There is no single central version of the codebase; instead, each user has a working copy and the full change history.
Advantages of DVCS (compared with centralized systems) include:
Allows users to work productively when not connected to a network.
Common operations (such as commits, viewing history, and reverting changes) are faster for DVCS, because there is no need to communicate with a central server. With DVCS, communication is necessary only when sharing changes among other peers.
Allows private work, so users can use their changes even for early drafts they do not want to publish.
Working copies effectively function as remote backups, which avoids relying on one physical machine as a single point of failure.
Allows various development models to be used, such as using development branches or a Commander/Lieutenant model.
Permits centralized control of the "release version" of the project
On FOSS software projects it is much easier to create a project fork from a project that is stalled because of leadership conflicts or design disagreements.
Disadvantages of DVCS (compared with centralized systems) include:
Initial checkout of a repository is slower as compared to checkout in a centralized version control system, because all branches and revision history are copied to the local machine by default.
The lack of the locking mechanisms that are part of most centralized VCS; locking still plays an important role for non-mergeable binary files such as graphic assets, or for overly complex single-file binary or XML packages (e.g., office documents, PowerBI files, SQL Server Data Tools BI packages).
Additional storage required for every user to have a complete copy of the complete codebase history.
Increased exposure of the code base since every participant has a locally vulnerable copy.
Some originally centralized systems now offer some distributed features. For example, Team Foundation Server and Visual Studio Team Services host both centralized and distributed version control repositories by hosting Git.
Similarly, some distributed systems now offer features that mitigate the issues of checkout times and storage costs, such as the Virtual File System for Git developed by Microsoft to work with very large codebases, which exposes a virtual file system that downloads files to local storage only as they are needed.
== Work model ==
A distributed model is generally better suited for large projects with partly independent developers, such as the Linux kernel, because it allows developers to work in independent branches and apply changes that can later be committed, audited, and merged (or rejected) by others. This model offers greater flexibility and permits the creation and adaptation of custom source code branches (forks) whose purpose might differ from the original project. It also lets developers clone an existing code repository and work on it from a local environment where changes are tracked and committed to the local repository, allowing better tracking of changes before they are committed to the master branch of the repository. Such an approach enables developers to work in local and disconnected branches, making it more convenient for larger distributed teams.
=== Central and branch repositories ===
In a truly distributed project, such as Linux, every contributor maintains their own version of the project, with different contributors hosting their own respective versions and pulling in changes from other users as needed, so that a general consensus emerges from multiple different nodes. This also makes the process of "forking" easy: all that is required is for one contributor to stop accepting pull requests from other contributors, letting the codebases gradually grow apart.
This arrangement, however, can be difficult to maintain, resulting in many projects choosing to shift to a paradigm in which one contributor is the universal "upstream", a repository from whom changes are almost always pulled. Under this paradigm, development is somewhat recentralized, as every project now has a central repository that is informally considered as the official repository, managed by the project maintainers collectively. While distributed version control systems make it easy for new developers to "clone" a copy of any other contributor's repository, in a central model, new developers always clone the central repository to create identical local copies of the code base. Under this system, code changes in the central repository are periodically synchronized with the local repository, and once the development is done, the change should be integrated into the central repository as soon as possible.
Organizations utilizing this centralized pattern often choose to host the central repository on a third-party service like GitHub, which offers not only more reliable uptime than self-hosted repositories, but can also add centralized features like issue trackers and continuous integration.
=== Pull requests ===
Contributions to a source code repository that uses a distributed version control system are commonly made by means of a pull request, also known as a merge request. The contributor requests that the project maintainer pull the source code change, hence the name "pull request". The maintainer merges the pull request if they decide the contribution should become part of the code base.
The developer creates a pull request to notify maintainers of a new change; a comment thread is associated with each pull request. This allows for focused discussion of code changes. Submitted pull requests are visible to anyone with repository access. A pull request can be accepted or rejected by maintainers.
Once the pull request is reviewed and approved, it is merged into the repository. Depending on the established workflow, the code may need to be tested before being included into official release. Therefore, some projects contain a special branch for merging untested pull requests. Other projects run an automated test suite on every pull request, using a continuous integration tool, and the reviewer checks that any new code has appropriate test coverage.
== History ==
The first open-source DVCSs included Arch, Monotone, and Darcs. However, open-source DVCSs were not widely popular until the release of Git and Mercurial.
BitKeeper was used in the development of the Linux kernel from 2002 to 2005. The development of Git, now the world's most popular version control system, was prompted by the decision of the company that made BitKeeper to rescind the free license that Linus Torvalds and some other Linux kernel developers had previously taken advantage of.
== See also ==
== References ==
== External links ==
Essay on various revision control systems, especially the section "Centralized vs. Decentralized SCM"
Introduction to distributed version control systems - IBM Developer Works article
Source: Wikipedia/Distributed_revision_control
In graph theory, an arborescence is a directed graph where there exists a vertex r (called the root) such that, for any other vertex v, there is exactly one directed walk from r to v (noting that the root r is unique). An arborescence is thus the directed-graph form of a rooted tree, understood here as an undirected graph. An arborescence is also a directed rooted tree in which all edges point away from the root; a number of other equivalent characterizations exist.
Every arborescence is a directed acyclic graph (DAG), but not every DAG is an arborescence.
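The defining property can be verified directly: a digraph is an arborescence exactly when it has a single in-degree-zero root, every other vertex has in-degree one, and every vertex is reachable from the root. A sketch (the vertex/edge-list representation is an illustrative choice):

```python
from collections import defaultdict, deque

def is_arborescence(vertices, edges):
    """True iff there is a root r with exactly one directed path to every vertex."""
    indeg = {v: 0 for v in vertices}
    succ = defaultdict(list)
    for u, v in edges:
        indeg[v] += 1
        succ[u].append(v)

    # Exactly one in-degree-0 root; every other vertex has in-degree exactly 1.
    roots = [v for v in vertices if indeg[v] == 0]
    if len(roots) != 1 or any(indeg[v] != 1 for v in vertices if v != roots[0]):
        return False

    # Every vertex must be reachable from the root.  Together with the
    # in-degree condition this rules out cycles: a cycle's vertices would be
    # unreachable from the root.
    seen, queue = {roots[0]}, deque([roots[0]])
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(vertices)
```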
== Definition ==
The term arborescence comes from French. Some authors object to it on the grounds that it is cumbersome to spell. There are many synonyms for arborescence in graph theory, including directed rooted tree, out-arborescence, out-tree, and even branching. Rooted tree itself has been defined by some authors as a directed graph.
=== Further definitions ===
Furthermore, some authors define an arborescence to be a spanning directed tree of a given digraph. The same can be said about some of its synonyms, especially branching. Other authors use branching to denote a forest of arborescences, with the latter notion defined in the broader sense given at the beginning of this article, but a variation with both notions of the spanning flavor is also encountered.
It's also possible to define a useful notion by reversing all the edges of an arborescence, i.e. making them all point in the direction of the root rather than away from it. Such digraphs are also designated by a variety of terms, such as in-tree or anti-arborescence. W. T. Tutte distinguishes between the two cases by using the phrases arborescence diverging from [some root] and arborescence converging to [some root].
The number of rooted trees (or arborescences) with n nodes is given by the sequence:
0, 1, 1, 2, 4, 9, 20, 48, 115, 286, 719, 1842, 4766, 12486, ... (sequence A000081 in the OEIS).
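These counts can be computed with the standard Euler-transform recurrence for OEIS A000081 (the recurrence itself is an assumption supplied here, not stated in the text above): a(n+1) = (1/n)·Σ_{k=1..n} s(k)·a(n−k+1), where s(k) = Σ_{d|k} d·a(d).

```python
def rooted_trees(n_max):
    """Counts of rooted trees on 0..n_max nodes (OEIS A000081)."""
    a = [0] * (n_max + 1)
    if n_max >= 1:
        a[1] = 1
    for n in range(1, n_max):
        total = 0
        for k in range(1, n + 1):
            # s(k) = sum over divisors d of k of d * a(d)
            s = sum(d * a[d] for d in range(1, k + 1) if k % d == 0)
            total += s * a[n - k + 1]
        a[n + 1] = total // n   # the division is always exact
    return a

print(rooted_trees(10))  # [0, 1, 1, 2, 4, 9, 20, 48, 115, 286, 719]
```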
== See also ==
Edmonds' algorithm
Multitree
== References ==
== External links ==
Weisstein, Eric W. "Arborescence". MathWorld.
Weisstein, Eric W. "Rooted Tree". MathWorld.
Source: Wikipedia/Arborescence_(graph_theory)
In graph theory, a moral graph is used to find the equivalent undirected form of a directed acyclic graph. It is a key step of the junction tree algorithm, used in belief propagation on graphical models.
The moralized counterpart of a directed acyclic graph is formed by adding edges between all pairs of non-adjacent nodes that have a common child, and then making all edges in the graph undirected. Equivalently, a moral graph of a directed acyclic graph G is an undirected graph in which each node of the original G is now connected to its Markov blanket. The name stems from the fact that, in a moral graph, two nodes that have a common child are required to be married by sharing an edge.
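The moralization procedure above (marry co-parents, then drop edge directions) can be sketched as follows, assuming the DAG is given as a parent → children adjacency dict (an illustrative representation):

```python
from itertools import combinations

def moralize(dag):
    """dag: dict mapping each parent to a list of its children.
    Returns the moral graph as a set of undirected edges (frozensets)."""
    parents = {}
    for u, children in dag.items():
        for v in children:
            parents.setdefault(v, set()).add(u)

    moral = set()
    # Keep every original edge, forgetting its direction.
    for u, children in dag.items():
        for v in children:
            moral.add(frozenset((u, v)))
    # "Marry" every pair of parents that share a child.
    for v, ps in parents.items():
        for a, b in combinations(sorted(ps), 2):
            moral.add(frozenset((a, b)))
    return moral
```

On the classic v-structure a → c ← b (plus c → d), the parents a and b get married, while d's single parent c needs no new edge.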
Moralization may also be applied to mixed graphs, called in this context "chain graphs". In a chain graph, a connected component of the undirected subgraph is called a chain. Moralization adds an undirected edge between any two vertices that both have outgoing edges to the same chain, and then forgets the orientation of the directed edges of the graph.
== Weakly recursively simplicial ==
A graph is weakly recursively simplicial if it has a simplicial vertex and the subgraph after removing a simplicial vertex and some edges (possibly none) between its neighbours is weakly recursively simplicial. A graph is moral if and only if it is weakly recursively simplicial.
A chordal graph (a.k.a., recursive simplicial) is a special case of weakly recursively simplicial when no edge is removed during the elimination process. Therefore, a chordal graph is also moral. But a moral graph is not necessarily chordal.
== Recognising moral graphs ==
Unlike chordal graphs that can be recognised in polynomial time, Verma & Pearl (1993) proved that deciding whether or not a graph is moral is NP-complete.
== See also ==
D-separation
Tree decomposition
== References ==
== External links ==
M. Studeny: On mathematical description of probabilistic conditional independence structures
Source: Wikipedia/Moral_graph
A language model is a model of the human brain's ability to produce natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.
Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.
== History ==
Noam Chomsky did pioneering work on language models in the 1950s by developing a theory of formal grammars.
In 1980, statistical approaches were explored and found to be more useful for many purposes than rule-based formal grammars. Discrete representations like word n-gram language models, with probabilities for discrete combinations of words, made significant advances.
In the 2000s, continuous representations for words, such as word embeddings, began to replace discrete representations. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words closer in the vector space are expected to be similar in meaning; such vectors also capture common relationships between pairs of words, like plurality or gender.
== Pure statistical models ==
In 1980, the first significant statistical language model was proposed, and during the decade IBM performed ‘Shannon-style’ experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text.
=== Models based on word n-grams ===
=== Exponential ===
Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. The equation is

{\displaystyle P(w_{m}\mid w_{1},\ldots ,w_{m-1})={\frac {1}{Z(w_{1},\ldots ,w_{m-1})}}\exp(a^{T}f(w_{1},\ldots ,w_{m}))}

where {\displaystyle Z(w_{1},\ldots ,w_{m-1})} is the partition function, {\displaystyle a} is the parameter vector, and {\displaystyle f(w_{1},\ldots ,w_{m})} is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on {\displaystyle a} or some form of regularization.
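With indicator features, the equation above reduces to a softmax over the scores of the features that fire. A toy numerical sketch (the feature set, weights, and vocabulary below are illustrative assumptions, not a trained model):

```python
import math

def maxent_prob(word, history, vocab, weights):
    """P(word | history) for a toy maximum entropy model whose features are
    indicators of (previous word, current word) pairs."""
    def score(w):
        # a^T f(w_1, ..., w_m): with indicator features, the dot product
        # reduces to summing the weights of the features that fire.
        return weights.get((history[-1], w), 0.0)
    # Z(w_1, ..., w_{m-1}): the partition function, normalizing over the vocabulary.
    z = sum(math.exp(score(w)) for w in vocab)
    return math.exp(score(word)) / z

vocab = ["cat", "dog", "mat"]
weights = {("the", "cat"): 1.2, ("the", "mat"): 0.4}
p_cat = maxent_prob("cat", ["on", "the"], vocab, weights)
```

The probabilities over the vocabulary sum to one, and words whose features carry larger weights receive higher probability.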
The log-bilinear model is another example of an exponential language model.
=== Skip-gram model ===
== Neural models ==
=== Recurrent neural network ===
Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models). Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, further causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net.
=== Large language models ===
Although large language models sometimes match human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not learn, but fail to learn patterns that humans typically do.
== Evaluation and benchmarks ==
Evaluation of the quality of language models is mostly done by comparison to human-created sample benchmarks derived from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves.
Various data sets have been developed for use in evaluating language processing systems. These include:
Massive Multitask Language Understanding (MMLU)
Corpus of Linguistic Acceptability
GLUE benchmark
Microsoft Research Paraphrase Corpus
Multi-Genre Natural Language Inference
Question Natural Language Inference
Quora Question Pairs
Recognizing Textual Entailment
Semantic Textual Similarity Benchmark
SQuAD question answering Test
Stanford Sentiment Treebank
Winograd NLI
BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs
== See also ==
== References ==
== Further reading ==
Source: Wikipedia/Language_models
Physics-informed neural networks (PINNs), also referred to as theory-trained neural networks (TTNs), are a type of universal function approximator that can embed the knowledge of any physical laws governing a given dataset, described by partial differential equations (PDEs), into the learning process. Low data availability for some biological and engineering problems limits the robustness of conventional machine learning models used for these applications. The prior knowledge of general physical laws acts in the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the generalizability of the function approximation. This way, embedding this prior information into a neural network enhances the information content of the available data, facilitating the learning algorithm to capture the right solution and to generalize well even with a low amount of training examples.
== Function approximation ==
Most of the physical laws that govern the dynamics of a system can be described by partial differential equations. For example, the Navier–Stokes equations are a set of partial differential equations derived from the conservation laws (i.e., conservation of mass, momentum, and energy) that govern fluid mechanics. The solution of the Navier–Stokes equations with appropriate initial and boundary conditions allows the quantification of flow dynamics in a precisely defined geometry. However, these equations cannot be solved exactly and therefore numerical methods must be used (such as finite differences, finite elements and finite volumes). In this setting, these governing equations must be solved while accounting for prior assumptions, linearization, and adequate time and space discretization.
Recently, solving the governing partial differential equations of physical phenomena using deep learning has emerged as a new field of scientific machine learning (SciML), leveraging the universal approximation theorem and high expressivity of neural networks. In general, deep neural networks could approximate any high-dimensional function given that sufficient training data are supplied. However, such networks do not consider the physical characteristics underlying the problem, and the level of approximation accuracy provided by them is still heavily dependent on careful specifications of the problem geometry as well as the initial and boundary conditions. Without this preliminary information, the solution is not unique and may lose physical correctness. On the other hand, physics-informed neural networks (PINNs) leverage governing physical equations in neural network training. Namely, PINNs are designed to be trained to satisfy the given training data as well as the imposed governing equations. In this fashion, a neural network can be guided with training data that do not necessarily need to be large and complete. Potentially, an accurate solution of partial differential equations can be found without knowing the boundary conditions. Therefore, with some knowledge about the physical characteristics of the problem and some form of training data (even sparse and incomplete), PINN may be used for finding an optimal solution with high fidelity.
PINNs allow for addressing a wide range of problems in computational science and represent a pioneering technology leading to the development of new classes of numerical solvers for PDEs. PINNs can be thought of as a meshfree alternative to traditional approaches (e.g., CFD for fluid dynamics) and as a new data-driven approach for model inversion and system identification. Notably, a trained PINN can be used to predict values on simulation grids of different resolutions without needing to be retrained. In addition, PINNs exploit automatic differentiation (AD) to compute the derivatives required in the partial differential equations, a class of differentiation techniques widely used to differentiate neural networks and assessed to be superior to numerical or symbolic differentiation.
== Modeling and computation ==
A general nonlinear partial differential equation can be written as

{\displaystyle u_{t}+N[u;\lambda ]=0,\quad x\in \Omega ,\quad t\in [0,T]}

where {\displaystyle u(t,x)} denotes the solution, {\displaystyle N[\cdot ;\lambda ]} is a nonlinear operator parameterized by {\displaystyle \lambda }, and {\displaystyle \Omega } is a subset of {\displaystyle \mathbb {R} ^{D}}. This general form of governing equations summarizes a wide range of problems in mathematical physics, such as conservation laws, diffusion processes, advection-diffusion systems, and kinetic equations. Given noisy measurements of a generic dynamic system described by the equation above, PINNs can be designed to solve two classes of problems:
data-driven solution
data-driven discovery of partial differential equations.
=== Data-driven solution of partial differential equations ===
The data-driven solution of a PDE computes the hidden state {\displaystyle u(t,x)} of the system given boundary data and/or measurements {\displaystyle z} and fixed model parameters {\displaystyle \lambda }. We solve

{\displaystyle u_{t}+N[u]=0,\quad x\in \Omega ,\quad t\in [0,T].}
By defining the residual f(t, x) as

f := u_t + N[u],

and approximating u(t, x) by a deep neural network that can be differentiated using automatic differentiation, we obtain a PINN. The parameters of u(t, x) and f(t, x) can then be learned by minimizing the total loss function L_tot:

L_tot = L_u + L_f.
Here, L_u = ‖u − z‖_Γ is the error between the PINN u(t, x) and the set of boundary conditions and measured data on the set of points Γ where the boundary conditions and data are defined, and L_f = ‖f‖_Γ is the mean-squared error of the residual function. This second term encourages the PINN to learn the structural information expressed by the partial differential equation during the training process.
This approach has been used to yield computationally efficient physics-informed surrogate models with applications in the forecasting of physical processes, model predictive control, multi-physics and multi-scale modeling, and simulation. It has been shown to converge to the solution of the PDE.
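As a minimal illustration of this loss composition, the following sketch solves the linear special case u' + u = 0 with u(0) = 1 (i.e. N[u] = u), whose exact solution is e^{-t}. It replaces the neural network with a polynomial surrogate so that minimizing L_u + L_f reduces to linear least squares; the collocation grid, polynomial degree, and data-term weight are illustrative choices, not prescribed by the method.

```python
import numpy as np

# Physics-informed least-squares sketch for u' + u = 0, u(0) = 1.
# A degree-10 polynomial u(t) = sum_k c_k t^k stands in for the network,
# so the combined loss L_u + L_f minimizes in closed form.
degree = 10
t_col = np.linspace(0.0, 2.0, 50)                    # collocation points for L_f

Phi = np.vander(t_col, degree + 1, increasing=True)  # u(t_i) design matrix
dPhi = np.zeros_like(Phi)
for k in range(1, degree + 1):
    dPhi[:, k] = k * t_col ** (k - 1)                # u'(t_i) design matrix

# Residual term f = u' + u  ->  rows (dPhi + Phi) c ≈ 0
A_f = dPhi + Phi
b_f = np.zeros(len(t_col))

# Data/boundary term u(0) = 1, weighted heavily (weight is an assumption)
A_u = np.vander(np.array([0.0]), degree + 1, increasing=True) * 100.0
b_u = np.array([100.0])

# Jointly minimize L_u + L_f via least squares
A = np.vstack([A_f, A_u])
b = np.concatenate([b_f, b_u])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

u_hat = Phi @ c
err = np.max(np.abs(u_hat - np.exp(-t_col)))         # compare with exact e^{-t}
print(err)
```

With a genuinely nonlinear operator N[u] the loss is no longer quadratic in the parameters, which is where gradient-based training of a neural network surrogate becomes necessary.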
=== Data-driven discovery of partial differential equations ===
Given noisy and incomplete measurements z of the state of the system, data-driven discovery of a PDE computes the unknown state u(t, x) and learns the model parameters λ that best describe the observed data:

u_t + N[u; λ] = 0,  x ∈ Ω,  t ∈ [0, T].
By defining f(t, x) as

f := u_t + N[u; λ],

and approximating u(t, x) by a deep neural network, f(t, x) results in a PINN. This network can be differentiated using automatic differentiation. The parameters of u(t, x) and f(t, x), together with the parameter λ of the differential operator, can then be learned by minimizing the total loss function L_tot:

L_tot = L_u + L_f.
Here, L_u = ‖u − z‖_Γ, with u and z the state solution and the measurements at the sparse locations Γ, respectively, and L_f = ‖f‖_Γ is the residual loss. This second term requires the structured information represented by the partial differential equation to be satisfied during the training process.
This strategy allows for discovering dynamic models described by nonlinear PDEs assembling computationally efficient and fully differentiable surrogate models that may find application in predictive forecasting, control, and data assimilation.
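As a toy illustration of this discovery setting, the sketch below recovers the unknown parameter λ of the ODE u_t + λu = 0 from noisy samples of its solution. A polynomial fit stands in for the neural surrogate, and the residual loss ‖u_t + λu‖² then has a closed-form minimizer in λ; the true λ = 1.5, noise level, and surrogate degree are all illustrative assumptions.

```python
import numpy as np

# Discover lambda in u' + lambda*u = 0 from noisy data z of u(t) = e^{-1.5 t}.
rng = np.random.default_rng(0)
lam_true = 1.5
t = np.linspace(0.0, 2.0, 200)
z = np.exp(-lam_true * t) + 0.01 * rng.standard_normal(t.size)  # noisy measurements

# Surrogate model fitted to the data (stands in for the neural network)
deg = 8
c = np.polyfit(t, z, deg)
u = np.polyval(c, t)                  # surrogate state
du = np.polyval(np.polyder(c), t)     # its derivative, here in closed form

# Minimizing ||du + lambda*u||^2 over lambda gives lambda = -<du,u>/<u,u>
lam_hat = -np.dot(du, u) / np.dot(u, u)
print(lam_hat)
```

In a full PINN, the surrogate and λ are optimized jointly by gradient descent on L_u + L_f rather than in this two-stage closed form.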
== Physics-informed neural networks for piece-wise function approximation ==
Standard PINNs struggle to approximate PDEs with strong non-linearity or sharp gradients, which commonly occur in practical fluid flow problems. Piece-wise approximation is a long-standing practice in numerical approximation. Because they can approximate strong non-linearities, extremely lightweight PINNs can be used to solve PDEs over many small discrete subdomains, which increases accuracy substantially and also decreases the computational load. DPINN (distributed physics-informed neural networks) and DPIELM (distributed physics-informed extreme learning machines) are generalizable space-time domain discretizations for better approximation. DPIELM is an extremely fast and lightweight approximator with competitive accuracy. Domain scaling on top has a special effect. Another school of thought is discretization for parallel computation, to leverage the available computational resources.
XPINN is a generalized space-time domain decomposition approach for physics-informed neural networks (PINNs) to solve nonlinear partial differential equations on arbitrarily complex geometries. XPINN further pushes the boundaries of both PINNs and conservative PINNs (cPINNs), a spatial domain decomposition approach in the PINN framework tailored to conservation laws. Compared to PINN, the XPINN method has larger representation and parallelization capacity because it deploys multiple neural networks in smaller subdomains. Unlike cPINN, XPINN can be extended to any type of PDE. Moreover, the domain can be decomposed in any arbitrary way (in space and time), which is not possible in cPINN. Thus, XPINN offers both space and time parallelization, thereby reducing the training cost more effectively. XPINN is particularly effective for large-scale problems (involving large data sets) as well as for high-dimensional problems where a single-network PINN is not adequate. Rigorous bounds on the errors resulting from the approximation of nonlinear PDEs (the incompressible Navier–Stokes equations) with PINNs and XPINNs have been proved. However, work on DPINN questions the use of residual (flux) matching at the domain interfaces, as it hardly seems to improve the optimization.
== Physics-informed neural networks and theory of functional connections ==
In the PINN framework, initial and boundary conditions are not analytically satisfied, and thus need to be included in the loss function of the network to be learned simultaneously with the unknown functions of the differential equation (DE). Having competing objectives during training can lead to unbalanced gradients when using gradient-based techniques, which causes PINNs to often struggle to accurately learn the underlying DE solution. This drawback is overcome by using functional interpolation techniques such as the theory of functional connections (TFC) constrained expression in the Deep-TFC framework, which reduces the solution search space of constrained problems to the subspace of neural networks that analytically satisfies the constraints. A further improvement of the PINN and functional interpolation approach is given by the Extreme Theory of Functional Connections (X-TFC) framework, where a single-layer neural network and the extreme learning machine training algorithm are employed. X-TFC allows improving the accuracy and performance of regular PINNs, and its robustness and reliability have been demonstrated for stiff problems, optimal control, aerospace, and rarefied gas dynamics applications.
== Physics-informed PointNet (PIPN) for multiple sets of irregular geometries ==
Regular PINNs are only able to obtain the solution of a forward or inverse problem on a single geometry. It means that for any new geometry (computational domain), one must retrain a PINN. This limitation of regular PINNs imposes high computational costs, specifically for a comprehensive investigation of geometric parameters in industrial designs. Physics-informed PointNet (PIPN) is fundamentally the result of a combination of PINN's loss function with PointNet. In fact, instead of using a simple fully connected neural network, PIPN uses PointNet as the core of its neural network. PointNet has been primarily designed for deep learning of 3D object classification and segmentation by the research group of Leonidas J. Guibas. PointNet extracts geometric features of input computational domains in PIPN. Thus, PIPN is able to solve governing equations on multiple computational domains (rather than only a single domain) with irregular geometries, simultaneously. The effectiveness of PIPN has been shown for incompressible flow, heat transfer and linear elasticity.
== Physics-informed neural networks (PINNs) for inverse computations ==
Physics-informed neural networks (PINNs) have proven particularly effective in solving inverse problems within differential equations, demonstrating their applicability across science, engineering, and economics. They have shown useful for solving inverse problems in a variety of fields, including nano-optics, topology optimization/characterization, multiphase flow in porous media, and high-speed fluid flow. PINNs have demonstrated flexibility when dealing with noisy and uncertain observation datasets. They also demonstrated clear advantages in the inverse calculation of parameters for multi-fidelity datasets, meaning datasets with different quality, quantity, and types of observations. Uncertainties in calculations can be evaluated using ensemble-based or Bayesian-based calculations.
PINNs can also be used in connection with symbolic regression to discover mathematical expressions, in connection with the discovery of parameters and functions. One example of such an application is a study on the chemical ageing of cellulose insulation material: PINNs are first used to discover a parameter for a set of ordinary differential equations (ODEs) and then a function solution, which is subsequently used to find a better-fitting expression via symbolic regression over a combination of operators.
== Physics-informed neural networks for elasticity problems ==
An ensemble of physics-informed neural networks has been applied to solving plane elasticity problems. Surrogate networks are intended for the unknown functions, namely the components of the strain and stress tensors and the unknown displacement field. A residual network provides the residuals of the partial differential equations (PDEs) and of the boundary conditions. The computational approach is based on principles of artificial intelligence. This approach can be extended to nonlinear elasticity problems, where the constitutive equations are nonlinear. PINNs can also be used for Kirchhoff plate bending problems with transverse distributed loads and for contact models with elastic Winkler foundations.
== Physics-informed neural networks (PINNs) with backward stochastic differential equation ==
Deep backward stochastic differential equation method is a numerical method that combines deep learning with Backward stochastic differential equation (BSDE) to solve high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods like finite difference methods or Monte Carlo simulations, which struggle with the curse of dimensionality. Deep BSDE methods use neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden. Additionally, integrating Physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws into the neural network architecture, ensuring solutions adhere to governing stochastic differential equations, resulting in more accurate and reliable solutions.
== Physics-informed neural networks for biology ==
An extension or adaptation of PINNs are biologically-informed neural networks (BINNs). BINNs introduce two key adaptations to the typical PINN framework: (i) the mechanistic terms of the governing PDE are replaced by neural networks, and (ii) the loss function L_tot is modified to include L_constr, a term used to incorporate domain-specific knowledge that helps enforce biological applicability. For (i), this adaptation has the advantage of relaxing the need to specify the governing differential equation a priori, either explicitly or by using a library of candidate terms. Additionally, this approach circumvents the potential issue of misspecifying regularization terms in stricter theory-informed cases.
A natural example of BINNs can be found in cell dynamics, where the cell density u(x, t) is governed by a reaction-diffusion equation with diffusion and growth functions D(u) and G(u), respectively:

u_t = ∇ · (D(u) ∇u) + G(u) u,  x ∈ Ω,  t ∈ [0, T]
In this case, a component of L_constr could be ‖D‖_Γ for D < D_min or D > D_max, which penalizes values of D that fall outside a biologically relevant diffusion range defined by D_min ≤ D ≤ D_max. Furthermore, the BINN architecture, when utilizing multilayer perceptrons (MLPs), functions as follows: an MLP is used to construct u_MLP(x, t) from the model inputs (x, t), serving as a surrogate model for the cell density u(x, t). This surrogate is then fed into two additional MLPs, D_MLP(u_MLP) and G_MLP(u_MLP), which model the diffusion and growth functions. Automatic differentiation can then be applied to compute the necessary derivatives of u_MLP, D_MLP, and G_MLP to form the governing reaction-diffusion equation.
Note that since u_MLP is a surrogate for the cell density, it may contain errors, particularly in regions where the PDE is not fully satisfied. Therefore, the reaction-diffusion equation may instead be solved numerically, for instance using a method-of-lines approach.
== Limitations ==
Translation and discontinuous behavior are hard to approximate using PINNs. PINNs can fail on differential equations with even slight advective dominance, where the asymptotic behaviour causes the method to break down; such PDEs can sometimes be handled by rescaling variables.
This difficulty in training of PINNs in advection-dominated PDEs can be explained by the Kolmogorov n–width of the solution.
They also struggle with coupled systems of dynamical equations and hence have not been successful in solving chaotic equations. One reason behind the failure of regular PINNs is the soft-constraining of Dirichlet and Neumann boundary conditions, which poses a multi-objective optimization problem that requires manually weighting the loss terms.
More generally, posing the solution of a PDE as an optimization problem brings with it all the problems that are faced in the world of optimization, the major one being getting stuck in local optima.
== References ==
== External links ==
Physics Informed Neural Network
PINN – repository to implement physics-informed neural network in Python
XPINN – repository to implement extended physics-informed neural network (XPINN) in Python
PIPN – repository to implement physics-informed PointNet (PIPN) in Python
In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. An MEMM is a discriminative model that extends a standard maximum entropy classifier by assuming that the unknown values to be learnt are connected in a Markov chain rather than being conditionally independent of each other. MEMMs find applications in natural language processing, specifically in part-of-speech tagging and information extraction.
== Model ==
Suppose we have a sequence of observations O_1, …, O_n that we seek to tag with labels S_1, …, S_n so as to maximize the conditional probability P(S_1, …, S_n | O_1, …, O_n). In a MEMM, this probability is factored into Markov transition probabilities, where the probability of transitioning to a particular label depends only on the observation at that position and the previous position's label:
P(S_1, …, S_n | O_1, …, O_n) = ∏_{t=1}^{n} P(S_t | S_{t−1}, O_t).
Each of these transition probabilities comes from the same general distribution P(s | s′, o). For each possible value s′ of the previous label, the probability of a certain label s is modeled in the same way as a maximum entropy classifier:
P(s | s′, o) = P_{s′}(s | o) = (1 / Z(o, s′)) exp(Σ_a λ_a f_a(o, s)).
Here, the f_a(o, s) are real-valued or categorical feature functions, and Z(o, s′) is a normalization term ensuring that the distribution sums to one. This form for the distribution corresponds to the maximum entropy probability distribution satisfying the constraint that the empirical expectation for each feature is equal to the expectation given the model:
E_e[f_a(o, s)] = E_p[f_a(o, s)]  for all a.
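A single MEMM transition distribution can be sketched as a small softmax over feature scores. In the sketch below the feature functions and weights are purely illustrative (not from any trained model), and for brevity the weights are shared across previous states, whereas a full MEMM conditions them on s′.

```python
import math

# One maximum-entropy transition distribution P_{s'}(s | o).
states = ['NOUN', 'VERB']

def features(o, s):
    # f_a(o, s): indicator features pairing an observation test with a label
    return [
        1.0 if o[0].isupper() and s == 'NOUN' else 0.0,
        1.0 if o.endswith('ing') and s == 'VERB' else 0.0,
    ]

weights = [1.2, 0.8]  # lambda_a (illustrative values)

def transition(o, s_prev):
    # In a full MEMM the weights would depend on s_prev; shared here.
    scores = {s: math.exp(sum(w * f for w, f in zip(weights, features(o, s))))
              for s in states}
    Z = sum(scores.values())          # normalizer Z(o, s')
    return {s: v / Z for s, v in scores.items()}

dist = transition('Running', 'NOUN')
print(dist)   # a proper distribution over the two labels
```

Both features fire for the word "Running", so the label probabilities are the softmax of the two weights.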
The parameters λ_a can be estimated using generalized iterative scaling. Furthermore, a variant of the Baum–Welch algorithm, which is used for training HMMs, can be used to estimate parameters when the training data has incomplete or missing labels.
The optimal state sequence S_1, …, S_n can be found using a Viterbi algorithm very similar to the one used for HMMs. The dynamic program uses the forward probability:
α_{t+1}(s) = Σ_{s′ ∈ S} α_t(s′) P_{s′}(s | o_{t+1}).
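The forward recursion can be sketched directly. Here the transition tables P_{s′}(s | o) and the initial state distribution are hand-specified toy values (in a real MEMM they would come from trained maximum-entropy models), with two states and two observation symbols.

```python
import numpy as np

# P[s_prev][o] is the vector P( . | s_prev, o) over next states; rows sum to 1.
P = {
    0: {'a': np.array([0.9, 0.1]), 'b': np.array([0.4, 0.6])},
    1: {'a': np.array([0.3, 0.7]), 'b': np.array([0.2, 0.8])},
}
alpha0 = np.array([0.5, 0.5])   # assumed initial state distribution

def forward(obs):
    # alpha_{t+1}(s) = sum_{s'} alpha_t(s') * P_{s'}(s | o_{t+1})
    alpha = alpha0.copy()
    for o in obs:
        alpha = sum(alpha[sp] * P[sp][o] for sp in (0, 1))
    return alpha

alpha = forward(['a', 'b'])
print(alpha)   # a proper distribution over the two states
```

Because each transition distribution is locally normalized, the forward vector stays a probability distribution at every step, which is also the root of the label bias problem discussed below.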
== Strengths and weaknesses ==
An advantage of MEMMs over HMMs for sequence tagging is that they offer increased freedom in choosing features to represent observations. In sequence tagging situations, it is useful to use domain knowledge to design special-purpose features. In the original paper introducing MEMMs, the authors write that "when trying to extract previously unseen company names from a newswire article, the identity of a word alone is not very predictive; however, knowing that the word is capitalized, that it is a noun, that it is used in an appositive, and that it appears near the top of the article would all be quite predictive (in conjunction with the context provided by the state-transition structure)." Useful sequence tagging features, such as these, are often non-independent. Maximum entropy models do not assume independence between features, but generative observation models used in HMMs do. Therefore, MEMMs allow the user to specify many correlated, but informative features.
Another advantage of MEMMs versus HMMs and conditional random fields (CRFs) is that training can be considerably more efficient. In HMMs and CRFs, one needs to use some version of the forward–backward algorithm as an inner loop in training. However, in MEMMs, estimating the parameters of the maximum-entropy distributions used for the transition probabilities can be done for each transition distribution in isolation.
A drawback of MEMMs is that they potentially suffer from the "label bias problem," where states with low-entropy transition distributions "effectively ignore their observations." Conditional random fields were designed to overcome this weakness, which had already been recognised in the context of neural network-based Markov models in the early 1990s.
Another source of label bias is that training is always done with respect to known previous tags, so the model struggles at test time when there is uncertainty in the previous tag.
== References == | Wikipedia/Maximum-entropy_Markov_model |
A generative adversarial network (GAN) is a class of machine learning frameworks and a prominent framework for approaching generative artificial intelligence. The concept was initially developed by Ian Goodfellow and his colleagues in June 2014. In a GAN, two neural networks compete with each other in the form of a zero-sum game, where one agent's gain is another agent's loss.
Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proved useful for semi-supervised learning, fully supervised learning, and reinforcement learning.
The core idea of a GAN is based on the "indirect" training through the discriminator, another neural network that can tell how "realistic" the input seems, which itself is also being updated dynamically. This means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner.
GANs are similar to mimicry in evolutionary biology, with an evolutionary arms race between the two networks.
== Definition ==
=== Mathematical ===
The original GAN is defined as the following game:
Each probability space (Ω, μ_ref) defines a GAN game.
There are two players: the generator and the discriminator.
The generator's strategy set is P(Ω), the set of all probability measures μ_G on Ω.
The discriminator's strategy set is the set of Markov kernels μ_D : Ω → P[0, 1], where P[0, 1] is the set of probability measures on [0, 1].
The GAN game is a zero-sum game, with objective function

L(μ_G, μ_D) := E_{x∼μ_ref, y∼μ_D(x)}[ln y] + E_{x∼μ_G, y∼μ_D(x)}[ln(1 − y)].

The generator aims to minimize the objective, and the discriminator aims to maximize it.
The generator's task is to approach μ_G ≈ μ_ref, that is, to match its own output distribution as closely as possible to the reference distribution. The discriminator's task is to output a value close to 1 when the input appears to come from the reference distribution, and a value close to 0 when it appears to come from the generator distribution.
=== In practice ===
The generative network generates candidates while the discriminative network evaluates them. The contest operates in terms of data distributions. Typically, the generative network learns to map from a latent space to a data distribution of interest, while the discriminative network distinguishes candidates produced by the generator from the true data distribution. The generative network's training objective is to increase the error rate of the discriminative network (i.e., "fool" the discriminator network by producing novel candidates that the discriminator thinks are not synthesized (are part of the true data distribution)).
A known dataset serves as the initial training data for the discriminator. Training involves presenting it with samples from the training dataset until it achieves acceptable accuracy. The generator is trained based on whether it succeeds in fooling the discriminator. Typically, the generator is seeded with randomized input that is sampled from a predefined latent space (e.g. a multivariate normal distribution). Thereafter, candidates synthesized by the generator are evaluated by the discriminator. Independent backpropagation procedures are applied to both networks so that the generator produces better samples, while the discriminator becomes more skilled at flagging synthetic samples. When used for image generation, the generator is typically a deconvolutional neural network, and the discriminator is a convolutional neural network.
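The alternating update scheme above can be sketched in one dimension, stripped of neural networks: real data are N(3, 1), the generator G(z) = θ + z merely shifts standard normal noise, and the discriminator is a logistic function D(x) = σ(ax + b). All hyperparameters and the non-saturating generator objective are illustrative choices, not from the original setup.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

theta = 0.0          # generator parameter: G(z) = theta + z
a, b = 0.1, 0.0      # discriminator parameters: D(x) = sigmoid(a*x + b)
lr_d, lr_g, batch = 0.1, 0.02, 256

for _ in range(5000):
    real = 3.0 + rng.standard_normal(batch)      # samples from mu_ref
    fake = theta + rng.standard_normal(batch)    # samples from mu_G

    # Discriminator: gradient ascent on E_real[ln D] + E_fake[ln(1 - D)]
    d_real, d_fake = sigmoid(a * real + b), sigmoid(a * fake + b)
    a += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: ascent on the non-saturating objective E_fake[ln D]
    fake = theta + rng.standard_normal(batch)
    d_fake = sigmoid(a * fake + b)
    theta += lr_g * np.mean(1 - d_fake) * a

print(theta)   # drifts toward the real-data mean, 3
```

The generator never sees real samples directly; it only receives gradient signal through the discriminator, which is the "indirect" training described above.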
=== Relation to other statistical machine learning methods ===
GANs are implicit generative models, which means that they do not explicitly model the likelihood function nor provide a means for finding the latent variable corresponding to a given sample, unlike alternatives such as flow-based generative models.
Compared to fully visible belief networks such as WaveNet and PixelRNN and autoregressive models in general, GANs can generate one complete sample in one pass, rather than multiple passes through the network.
Compared to Boltzmann machines and linear ICA, there is no restriction on the type of function used by the network.
Since neural networks are universal approximators, GANs are asymptotically consistent. Variational autoencoders might also be universal approximators, but this had not been proven as of 2017.
== Mathematical properties ==
=== Measure-theoretic considerations ===
This section provides some of the mathematical theory behind these methods.
In modern probability theory based on measure theory, a probability space also needs to be equipped with a σ-algebra. As a result, a more rigorous definition of the GAN game would make the following changes:

Each probability space (Ω, B, μ_ref) defines a GAN game.
The generator's strategy set is P(Ω, B), the set of all probability measures μ_G on the measure space (Ω, B).
The discriminator's strategy set is the set of Markov kernels μ_D : (Ω, B) → P([0, 1], B([0, 1])), where B([0, 1]) is the Borel σ-algebra on [0, 1].

Since issues of measurability never arise in practice, these will not concern us further.
=== Choice of the strategy set ===
In the most generic version of the GAN game described above, the strategy set for the discriminator contains all Markov kernels μ_D : Ω → P[0, 1], and the strategy set for the generator contains arbitrary probability distributions μ_G on Ω.

However, as shown below, the optimal discriminator strategy against any μ_G is deterministic, so there is no loss of generality in restricting the discriminator's strategies to deterministic functions D : Ω → [0, 1]. In most applications, D is a deep neural network function.
As for the generator, while μ_G could theoretically be any computable probability distribution, in practice it is usually implemented as a pushforward μ_G = μ_Z ∘ G^{−1}. That is, start with a random variable z ∼ μ_Z, where μ_Z is a probability distribution that is easy to sample from (such as the uniform or the Gaussian distribution), then define a function G : Ω_Z → Ω. The distribution μ_G is then the distribution of G(z).
Consequently, the generator's strategy is usually defined as just G, leaving z ∼ μ_Z implicit. In this formalism, the GAN game objective is

L(G, D) := E_{x∼μ_ref}[ln D(x)] + E_{z∼μ_Z}[ln(1 − D(G(z)))].
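The pushforward construction μ_G = μ_Z ∘ G^{−1} can be checked numerically: with z ∼ N(0, 1) and the illustrative affine generator G(z) = 2z + 1, the pushforward distribution is N(1, 4), so samples of G(z) should have mean 1 and standard deviation 2.

```python
import numpy as np

# Sampling from mu_G by pushing samples of mu_Z through G.
rng = np.random.default_rng(42)
z = rng.standard_normal(200_000)   # z ~ mu_Z = N(0, 1)
samples = 2.0 * z + 1.0            # G(z) = 2z + 1, so samples ~ N(1, 4)

print(samples.mean(), samples.std())
```

This is exactly how a trained generator is used at inference time: draw latent noise and push it through G.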
=== Generative reparametrization ===
The GAN architecture has two main components. One is casting optimization into a game, of the form min_G max_D L(G, D), which differs from the usual kind of optimization, of the form min_θ L(θ). The other is the decomposition of μ_G into μ_Z ∘ G^{−1}, which can be understood as a reparametrization trick.
To see its significance, one must compare GAN with previous methods for learning generative models, which were plagued with "intractable probabilistic computations that arise in maximum likelihood estimation and related strategies".
At the same time, Kingma and Welling and Rezende et al. developed the same idea of reparametrization into a general stochastic backpropagation method. Among its first applications was the variational autoencoder.
=== Move order and strategic equilibria ===
In the original paper, as well as most subsequent papers, it is usually assumed that the generator moves first and the discriminator moves second, giving the following minimax game:

min_{μ_G} max_{μ_D} L(μ_G, μ_D) := E_{x∼μ_ref, y∼μ_D(x)}[ln y] + E_{x∼μ_G, y∼μ_D(x)}[ln(1 − y)].
If both the generator's and the discriminator's strategy sets are spanned by a finite number of strategies, then by the minimax theorem

min_{μ_G} max_{μ_D} L(μ_G, μ_D) = max_{μ_D} min_{μ_G} L(μ_G, μ_D);

that is, the move order does not matter.
However, since the strategy sets are both not finitely spanned, the minimax theorem does not apply, and the idea of an "equilibrium" becomes delicate. To wit, there are the following different concepts of equilibrium:
Equilibrium when the generator moves first and the discriminator moves second:

μ̂_G ∈ arg min_{μ_G} max_{μ_D} L(μ_G, μ_D),  μ̂_D ∈ arg max_{μ_D} L(μ̂_G, μ_D).

Equilibrium when the discriminator moves first and the generator moves second:

μ̂_D ∈ arg max_{μ_D} min_{μ_G} L(μ_G, μ_D),  μ̂_G ∈ arg min_{μ_G} L(μ_G, μ̂_D).

Nash equilibrium (μ̂_D, μ̂_G), which is stable under simultaneous moves:

μ̂_D ∈ arg max_{μ_D} L(μ̂_G, μ_D),  μ̂_G ∈ arg min_{μ_G} L(μ_G, μ̂_D).
For general games, these equilibria do not have to agree, or even to exist. For the original GAN game, these equilibria all exist, and are all equal. However, for more general GAN games, these do not necessarily exist, or agree.
=== Main theorems for GAN game ===
The original GAN paper proved the following two theorems:
Interpretation: For any fixed generator strategy {\displaystyle \mu _{G}}, the optimal discriminator keeps track of the likelihood ratio between the reference distribution and the generator distribution:
{\displaystyle {\frac {D(x)}{1-D(x)}}={\frac {d\mu _{\text{ref}}}{d\mu _{G}}}(x)={\frac {\mu _{\text{ref}}(dx)}{\mu _{G}(dx)}};\quad D(x)=\sigma (\ln \mu _{\text{ref}}(dx)-\ln \mu _{G}(dx))}
where {\displaystyle \sigma } is the logistic function.
In particular, if the prior probability for an image {\displaystyle x} to come from the reference distribution is equal to {\displaystyle {\frac {1}{2}}}, then {\displaystyle D(x)} is just the posterior probability that {\displaystyle x} came from the reference distribution:
{\displaystyle D(x)=\Pr(x{\text{ came from reference distribution}}\mid x).}
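This optimum can be verified pointwise: at each x the discriminator picks y = D(x) to maximize a ln y + b ln(1 − y), with a = μ_ref(dx) and b = μ_G(dx), and the maximizer is y = a/(a + b), which is exactly the likelihood-ratio form above. A minimal numerical check (the density values a, b below are made up for illustration):

```python
import numpy as np

def pointwise_objective(y, a, b):
    """Discriminator's pointwise payoff a*ln(y) + b*ln(1-y)."""
    return a * np.log(y) + b * np.log(1 - y)

# Hypothetical density values of mu_ref and mu_G at a single point x.
a, b = 0.7, 0.2

ys = np.linspace(1e-6, 1 - 1e-6, 100001)
y_best = ys[np.argmax(pointwise_objective(ys, a, b))]

assert abs(y_best - a / (a + b)) < 1e-4   # the optimum sits at a/(a+b)
# D*(x)/(1 - D*(x)) recovers the likelihood ratio d(mu_ref)/d(mu_G):
assert abs(y_best / (1 - y_best) - a / b) < 1e-2
```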
== Training and evaluating GAN ==
=== Training ===
==== Unstable convergence ====
While the GAN game has a unique global equilibrium point when both the generator and discriminator have access to their entire strategy sets, the equilibrium is no longer guaranteed when they have a restricted strategy set.
In practice, the generator has access only to measures of form {\displaystyle \mu _{Z}\circ G_{\theta }^{-1}}, where {\displaystyle G_{\theta }} is a function computed by a neural network with parameters {\displaystyle \theta }, and {\displaystyle \mu _{Z}} is an easily sampled distribution, such as the uniform or normal distribution. Similarly, the discriminator has access only to functions of form {\displaystyle D_{\zeta }}, a function computed by a neural network with parameters {\displaystyle \zeta }. These restricted strategy sets take up a vanishingly small proportion of their entire strategy sets.
Further, even if an equilibrium still exists, it can only be found by searching in the high-dimensional space of all possible neural network functions. The standard strategy of using gradient descent to find the equilibrium often does not work for GAN, and often the game "collapses" into one of several failure modes. To improve the convergence stability, some training strategies start with an easier task, such as generating low-resolution images or simple images (one object with uniform background), and gradually increase the difficulty of the task during training. This essentially translates to applying a curriculum learning scheme.
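The instability of gradient descent can be seen already in the simplest adversarial objective L(x, y) = xy, whose unique equilibrium is (0, 0): simultaneous gradient descent-ascent spirals outward rather than converging. A toy illustration (not an actual GAN):

```python
import numpy as np

eta = 0.1          # shared learning rate
x, y = 1.0, 1.0    # x minimizes, y maximizes; the equilibrium is (0, 0)
radii = [np.hypot(x, y)]

for _ in range(100):
    gx, gy = y, x                         # gradients of L(x, y) = x * y
    x, y = x - eta * gx, y + eta * gy     # simultaneous descent-ascent
    radii.append(np.hypot(x, y))

# Each step multiplies the distance to equilibrium by sqrt(1 + eta^2) > 1,
# so the iterates spiral away from (0, 0) instead of converging.
assert radii[-1] > radii[0]
assert abs(radii[-1] / radii[0] - (1 + eta**2) ** 50) < 1e-6
```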
==== Mode collapse ====
GANs often suffer from mode collapse where they fail to generalize properly, missing entire modes from the input data. For example, a GAN trained on the MNIST dataset containing many samples of each digit might only generate pictures of digit 0. This was termed "the Helvetica scenario".
One way this can happen is if the generator learns too fast compared to the discriminator. If the discriminator {\displaystyle D} is held constant, then the optimal generator would only output elements of {\displaystyle \arg \max _{x}D(x)}. For example, if during GAN training on the MNIST dataset the discriminator for a few epochs prefers the digit 0 slightly over the other digits, the generator may seize the opportunity to generate only the digit 0, and then be unable to escape the local minimum after the discriminator improves.
Some researchers perceive the root problem to be a weak discriminative network that fails to notice the pattern of omission, while others assign blame to a bad choice of objective function. Many solutions have been proposed, but it is still an open problem.
Even the state-of-the-art architecture, BigGAN (2019), could not avoid mode collapse. The authors resorted to "allowing collapse to occur at the later stages of training, by which time a model is sufficiently trained to achieve good results".
==== Two time-scale update rule ====
The two time-scale update rule (TTUR) is proposed to make GAN convergence more stable by making the learning rate of the generator lower than that of the discriminator. The authors argued that the generator should move slower than the discriminator, so that it does not "drive the discriminator steadily into new regions without capturing its gathered information".
They proved that a general class of games that included the GAN game, when trained under TTUR, "converges under mild assumptions to a stationary local Nash equilibrium".
They also proposed using the Adam stochastic optimization to avoid mode collapse, as well as the Fréchet inception distance for evaluating GAN performances.
==== Vanishing gradient ====
Conversely, if the discriminator learns too fast compared to the generator, then the discriminator could almost perfectly distinguish {\displaystyle \mu _{G_{\theta }},\mu _{\text{ref}}}. In that case, the generator {\displaystyle G_{\theta }} could be stuck with a very high loss no matter which direction it changes its {\displaystyle \theta }, meaning that the gradient {\displaystyle \nabla _{\theta }L(G_{\theta },D_{\zeta })} would be close to zero. The generator then cannot learn, a case of the vanishing gradient problem.
Intuitively, the discriminator is too good: no small step improves the generator's payoff, and gradient descent considers only small steps, so the generator makes no progress.
One important method for solving this problem is the Wasserstein GAN.
=== Evaluation ===
GANs are usually evaluated by Inception score (IS), which measures how varied the generator's outputs are (as classified by an image classifier, usually Inception-v3), or Fréchet inception distance (FID), which measures how similar the generator's outputs are to a reference set (as classified by a learned image featurizer, such as Inception-v3 without its final layer). Many papers that propose new GAN architectures for image generation report how their architectures break the state of the art on FID or IS.
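Both scores have simple closed forms once classifier outputs or features are in hand: the Inception score is the exponential of the mean KL divergence between each per-image label distribution p(y|x) and the marginal p(y), and FID is the Fréchet (2-Wasserstein) distance between two Gaussians fitted to the feature vectors. A minimal sketch with made-up classifier outputs and diagonal covariances (real implementations use Inception-v3 features and full covariance matrices):

```python
import numpy as np

def inception_score(p_yx):
    """p_yx: (n_images, n_classes) rows of classifier probabilities."""
    p_y = p_yx.mean(axis=0)                       # marginal label distribution
    kl = (p_yx * (np.log(p_yx) - np.log(p_y))).sum(axis=1)
    return np.exp(kl.mean())

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between Gaussians with diagonal covariances."""
    return ((mu1 - mu2) ** 2 + var1 + var2 - 2 * np.sqrt(var1 * var2)).sum()

# Sharp and diverse predictions -> high IS (the maximum is the class count).
confident = np.eye(3) * 0.97 + 0.01     # each "image" strongly one class
uniform = np.full((3, 3), 1 / 3)        # uninformative predictions -> IS = 1

assert inception_score(confident) > inception_score(uniform)
assert abs(inception_score(uniform) - 1.0) < 1e-9

# Identical feature distributions -> FID = 0; any mean shift -> FID > 0.
mu, var = np.zeros(4), np.ones(4)
assert abs(fid_diagonal(mu, var, mu, var)) < 1e-12
assert fid_diagonal(mu + 1.0, var, mu, var) > 0
```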
Another evaluation method is the Learned Perceptual Image Patch Similarity (LPIPS), which starts with a learned image featurizer {\displaystyle f_{\theta }:{\text{Image}}\to \mathbb {R} ^{n}} and finetunes it by supervised learning on a set of {\displaystyle (x,x',\operatorname {perceptual~difference} (x,x'))}, where {\displaystyle x} is an image, {\displaystyle x'} is a perturbed version of it, and {\displaystyle \operatorname {perceptual~difference} (x,x')} is how much they differ, as reported by human subjects. The model is finetuned so that it can approximate {\displaystyle \|f_{\theta }(x)-f_{\theta }(x')\|\approx \operatorname {perceptual~difference} (x,x')}. This finetuned model is then used to define {\displaystyle \operatorname {LPIPS} (x,x'):=\|f_{\theta }(x)-f_{\theta }(x')\|}.
Other evaluation methods are reviewed in the literature.
== Variants ==
There is a veritable zoo of GAN variants. Some of the most prominent are as follows:
=== Conditional GAN ===
Conditional GANs are similar to standard GANs except they allow the model to conditionally generate samples based on additional information. For example, if we want to generate a cat face given a dog picture, we could use a conditional GAN.
The generator in a GAN game generates {\displaystyle \mu _{G}}, a probability distribution on the probability space {\displaystyle \Omega }. This leads to the idea of a conditional GAN, where instead of generating one probability distribution on {\displaystyle \Omega }, the generator generates a different probability distribution {\displaystyle \mu _{G}(c)} on {\displaystyle \Omega } for each given class label {\displaystyle c}.
For example, for generating images that look like ImageNet, the generator should be able to generate a picture of cat when given the class label "cat".
In the original paper, the authors noted that GAN can be trivially extended to conditional GAN by providing the labels to both the generator and the discriminator.
Concretely, the conditional GAN game is just the GAN game with class labels provided:
{\displaystyle L(\mu _{G},D):=\operatorname {E} _{c\sim \mu _{C},x\sim \mu _{\text{ref}}(c)}[\ln D(x,c)]+\operatorname {E} _{c\sim \mu _{C},x\sim \mu _{G}(c)}[\ln(1-D(x,c))]}
where {\displaystyle \mu _{C}} is a probability distribution over classes, {\displaystyle \mu _{\text{ref}}(c)} is the probability distribution of real images of class {\displaystyle c}, and {\displaystyle \mu _{G}(c)} is the probability distribution of images generated by the generator when given class label {\displaystyle c}.
In 2017, a conditional GAN learned to generate 1000 image classes of ImageNet.
=== GANs with alternative architectures ===
The GAN game is a general framework and can be run with any reasonable parametrization of the generator {\displaystyle G} and discriminator {\displaystyle D}. In the original paper, the authors demonstrated it using multilayer perceptron networks and convolutional neural networks. Many alternative architectures have been tried.
Deep convolutional GAN (DCGAN): For both generator and discriminator, uses only deep networks consisting entirely of convolution-deconvolution layers, that is, fully convolutional networks.
Self-attention GAN (SAGAN): Starts with the DCGAN, then adds residually-connected standard self-attention modules to the generator and discriminator.
Variational autoencoder GAN (VAEGAN): Uses a variational autoencoder (VAE) for the generator.
Transformer GAN (TransGAN): Uses the pure transformer architecture for both the generator and discriminator, entirely devoid of convolution-deconvolution layers.
Flow-GAN: Uses flow-based generative model for the generator, allowing efficient computation of the likelihood function.
=== GANs with alternative objectives ===
Many GAN variants are merely obtained by changing the loss functions for the generator and discriminator.
Original GAN:
We recast the original GAN objective into a form more convenient for comparison:
{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{G}}[\ln D(x)]-\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln(1-D(x))]\\\min _{G}L_{G}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\end{cases}}}
Original GAN, non-saturating loss:
This objective for generator was recommended in the original paper for faster convergence.
{\displaystyle L_{G}=\operatorname {E} _{x\sim \mu _{G}}[\ln D(x)]}
The effect of using this objective is analyzed in Section 2.2.2 of Arjovsky et al.
Original GAN, maximum likelihood:
{\displaystyle L_{G}=\operatorname {E} _{x\sim \mu _{G}}[({\exp }\circ \sigma ^{-1}\circ D)(x)]}
where {\displaystyle \sigma } is the logistic function. When the discriminator is optimal, the generator gradient is the same as in maximum likelihood estimation, even though GAN cannot perform maximum likelihood estimation itself.
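The practical difference between the saturating and non-saturating generator losses lies in their gradients when the discriminator confidently rejects a sample (D(x) near 0), the typical early-training regime. A small numerical comparison, writing each loss as a function of d = D(x) with signs chosen so that the generator minimizes both:

```python
import numpy as np

d = 1e-4  # discriminator output on a generated sample: confidently "fake"
h = 1e-8  # step size for numerical differentiation

saturating = lambda d: np.log(1 - d)       # original generator loss
non_saturating = lambda d: -np.log(d)      # same optimum, different gradient

grad_sat = (saturating(d + h) - saturating(d)) / h
grad_ns = (non_saturating(d + h) - non_saturating(d)) / h

# The saturating loss yields an almost-flat gradient (about -1), while the
# non-saturating loss yields a large gradient (about -1/d = -10000), so the
# generator still receives a strong learning signal early in training.
assert abs(grad_sat) < 2.0
assert abs(grad_ns) > 1000.0
```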
Hinge loss GAN:
{\displaystyle L_{D}=-\operatorname {E} _{x\sim \mu _{\text{ref}}}\left[\min \left(0,-1+D(x)\right)\right]-\operatorname {E} _{x\sim \mu _{G}}\left[\min \left(0,-1-D\left(x\right)\right)\right]}
{\displaystyle L_{G}=-\operatorname {E} _{x\sim \mu _{G}}[D(x)]}
Least squares GAN:
{\displaystyle L_{D}=\operatorname {E} _{x\sim \mu _{\text{ref}}}[(D(x)-b)^{2}]+\operatorname {E} _{x\sim \mu _{G}}[(D(x)-a)^{2}]}
{\displaystyle L_{G}=\operatorname {E} _{x\sim \mu _{G}}[(D(x)-c)^{2}]}
where {\displaystyle a,b,c} are parameters to be chosen. The authors recommended {\displaystyle a=-1,b=1,c=0}.
=== Wasserstein GAN (WGAN) ===
The Wasserstein GAN modifies the GAN game at two points:
The discriminator's strategy set is the set of measurable functions of type {\displaystyle D:\Omega \to \mathbb {R} } with bounded Lipschitz norm: {\displaystyle \|D\|_{L}\leq K}, where {\displaystyle K} is a fixed positive constant.
The objective is
{\displaystyle L_{WGAN}(\mu _{G},D):=\operatorname {E} _{x\sim \mu _{G}}[D(x)]-\operatorname {E} _{x\sim \mu _{\text{ref}}}[D(x)]}
One of its purposes is to solve the problem of mode collapse (see above). The authors claim "In no experiment did we see evidence of mode collapse for the WGAN algorithm".
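In one dimension the Wasserstein-1 distance between two equal-size samples is the mean absolute difference of their sorted values, and for a pure shift the 1-Lipschitz critic D(x) = x already attains it in the WGAN objective. A minimal numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=10_000)   # stand-in for mu_ref samples
gen = ref + 3.0                           # generator distribution: a pure shift

# 1-D Wasserstein-1 distance between equal-size samples: mean absolute
# difference of the sorted values.
w1 = np.abs(np.sort(gen) - np.sort(ref)).mean()

# The 1-Lipschitz critic D(x) = x attains this value in the WGAN objective
# E_{x~mu_G}[D(x)] - E_{x~mu_ref}[D(x)].
critic_value = gen.mean() - ref.mean()

assert abs(w1 - 3.0) < 1e-9          # a shift by 3 gives W1 = 3 exactly here
assert abs(critic_value - w1) < 1e-9
```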
=== GANs with more than two players ===
==== Adversarial autoencoder ====
An adversarial autoencoder (AAE) is more autoencoder than GAN. The idea is to start with a plain autoencoder, but train a discriminator to discriminate the latent vectors from a reference distribution (often the normal distribution).
==== InfoGAN ====
In conditional GAN, the generator receives both a noise vector {\displaystyle z} and a label {\displaystyle c}, and produces an image {\displaystyle G(z,c)}. The discriminator receives image-label pairs {\displaystyle (x,c)}, and computes {\displaystyle D(x,c)}.
When the training dataset is unlabeled, conditional GAN does not work directly.
The idea of InfoGAN is to decree that every latent vector in the latent space can be decomposed as {\displaystyle (z,c)}: an incompressible noise part {\displaystyle z} and an informative label part {\displaystyle c}, and to encourage the generator to comply with the decree by maximizing {\displaystyle I(c,G(z,c))}, the mutual information between {\displaystyle c} and {\displaystyle G(z,c)}, while making no demands on the mutual information between {\displaystyle z} and {\displaystyle G(z,c)}.
Unfortunately, {\displaystyle I(c,G(z,c))} is intractable in general. The key idea of InfoGAN is Variational Mutual Information Maximization: indirectly maximize it by maximizing a lower bound
{\displaystyle {\hat {I}}(G,Q)=\mathbb {E} _{z\sim \mu _{Z},c\sim \mu _{C}}[\ln Q(c\mid G(z,c))];\quad I(c,G(z,c))\geq \sup _{Q}{\hat {I}}(G,Q)}
where {\displaystyle Q} ranges over all Markov kernels of type {\displaystyle Q:\Omega _{Y}\to {\mathcal {P}}(\Omega _{C})}.
.
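This variational bound can be verified exactly in a small discrete example: for any kernel Q, E[ln Q(c∣y)] ≤ E[ln P(c∣y)] = I(c; y) − H(c) ≤ I(c; y), with the supremum over Q attained at the true posterior. A sketch with a made-up joint distribution:

```python
import numpy as np

# Joint distribution p(c, y) over 2 code values x 3 outputs (made-up numbers).
p = np.array([[0.30, 0.10, 0.10],
              [0.05, 0.25, 0.20]])
p_c = p.sum(axis=1)            # marginal over the code c
p_y = p.sum(axis=0)            # marginal over the output y

mi = (p * np.log(p / np.outer(p_c, p_y))).sum()   # I(c; y)
h_c = -(p_c * np.log(p_c)).sum()                  # H(c)

posterior = p / p_y            # true P(c | y): the optimal choice of Q
bound_opt = (p * np.log(posterior)).sum()         # E[ln Q(c|y)] at Q = posterior

uniform_q = np.full_like(p, 0.5)                  # a deliberately bad Q
bound_bad = (p * np.log(uniform_q)).sum()

assert bound_bad <= bound_opt + 1e-12      # the sup over Q is at the posterior
assert abs(bound_opt - (mi - h_c)) < 1e-12 # E[ln P(c|y)] = I(c;y) - H(c)
assert bound_opt <= mi + 1e-12             # hence I >= sup_Q E[ln Q(c|y)]
```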
The InfoGAN game is defined by three probability spaces:
{\displaystyle (\Omega _{X},\mu _{\text{ref}})}, the space of reference images.
{\displaystyle (\Omega _{Z},\mu _{Z})}, the fixed random noise generator.
{\displaystyle (\Omega _{C},\mu _{C})}, the fixed random information generator.
There are 3 players in 2 teams: generator, Q, and discriminator. The generator and Q are on one team, and the discriminator on the other team.
The objective function is
{\displaystyle L(G,Q,D)=L_{GAN}(G,D)-\lambda {\hat {I}}(G,Q)}
where {\displaystyle L_{GAN}(G,D)=\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]+\operatorname {E} _{z\sim \mu _{Z}}[\ln(1-D(G(z,c)))]} is the original GAN game objective, and
{\displaystyle {\hat {I}}(G,Q)=\mathbb {E} _{z\sim \mu _{Z},c\sim \mu _{C}}[\ln Q(c\mid G(z,c))]}
The generator-Q team aims to minimize the objective, and the discriminator aims to maximize it:
{\displaystyle \min _{G,Q}\max _{D}L(G,Q,D)}
==== Bidirectional GAN (BiGAN) ====
The standard GAN generator is a function of type {\displaystyle G:\Omega _{Z}\to \Omega _{X}}, that is, a mapping from a latent space {\displaystyle \Omega _{Z}} to the image space {\displaystyle \Omega _{X}}. This can be understood as a "decoding" process, whereby every latent vector {\displaystyle z\in \Omega _{Z}} is a code for an image {\displaystyle x\in \Omega _{X}}, and the generator performs the decoding. This naturally leads to the idea of training another network that performs "encoding", creating an autoencoder out of the encoder-generator pair.
Already in the original paper, the authors noted that "Learned approximate inference can be performed by training an auxiliary network to predict {\displaystyle z} given {\displaystyle x}". The bidirectional GAN architecture performs exactly this.
The BiGAN game is defined by two probability spaces:
{\displaystyle (\Omega _{X},\mu _{X})}, the space of reference images.
{\displaystyle (\Omega _{Z},\mu _{Z})}, the latent space.
There are 3 players in 2 teams: generator, encoder, and discriminator. The generator and encoder are on one team, and the discriminator on the other team.
The generator's strategies are functions {\displaystyle G:\Omega _{Z}\to \Omega _{X}}, and the encoder's strategies are functions {\displaystyle E:\Omega _{X}\to \Omega _{Z}}. The discriminator's strategies are functions {\displaystyle D:\Omega _{X}\times \Omega _{Z}\to [0,1]}, since it judges image-latent pairs.
.
The objective function is
{\displaystyle L(G,E,D)=\mathbb {E} _{x\sim \mu _{X}}[\ln D(x,E(x))]+\mathbb {E} _{z\sim \mu _{Z}}[\ln(1-D(G(z),z))]}
The generator-encoder team aims to minimize the objective, and the discriminator aims to maximize it:
{\displaystyle \min _{G,E}\max _{D}L(G,E,D)}
In the paper, they gave a more abstract definition of the objective as:
{\displaystyle L(G,E,D)=\mathbb {E} _{(x,z)\sim \mu _{E,X}}[\ln D(x,z)]+\mathbb {E} _{(x,z)\sim \mu _{G,Z}}[\ln(1-D(x,z))]}
where {\displaystyle \mu _{E,X}(dx,dz)=\mu _{X}(dx)\cdot \delta _{E(x)}(dz)} is the probability distribution on {\displaystyle \Omega _{X}\times \Omega _{Z}} obtained by pushing {\displaystyle \mu _{X}} forward via {\displaystyle x\mapsto (x,E(x))}, and {\displaystyle \mu _{G,Z}(dx,dz)=\delta _{G(z)}(dx)\cdot \mu _{Z}(dz)} is the probability distribution on {\displaystyle \Omega _{X}\times \Omega _{Z}} obtained by pushing {\displaystyle \mu _{Z}} forward via {\displaystyle z\mapsto (G(z),z)}.
Applications of bidirectional models include semi-supervised learning, interpretable machine learning, and neural machine translation.
==== CycleGAN ====
CycleGAN is an architecture for performing translations between two domains, such as between photos of horses and photos of zebras, or photos of night cities and photos of day cities.
The CycleGAN game is defined by two probability spaces {\displaystyle (\Omega _{X},\mu _{X}),(\Omega _{Y},\mu _{Y})}, corresponding to the two domains to translate between.
There are 4 players in 2 teams: generators {\displaystyle G_{X}:\Omega _{X}\to \Omega _{Y},G_{Y}:\Omega _{Y}\to \Omega _{X}}, and discriminators {\displaystyle D_{X}:\Omega _{X}\to [0,1],D_{Y}:\Omega _{Y}\to [0,1]}.
The objective function is
{\displaystyle L(G_{X},G_{Y},D_{X},D_{Y})=L_{GAN}(G_{X},D_{X})+L_{GAN}(G_{Y},D_{Y})+\lambda L_{cycle}(G_{X},G_{Y})}
where {\displaystyle \lambda } is a positive adjustable parameter, {\displaystyle L_{GAN}} is the GAN game objective, and {\displaystyle L_{cycle}} is the cycle consistency loss:
{\displaystyle L_{cycle}(G_{X},G_{Y})=E_{x\sim \mu _{X}}\|G_{X}(G_{Y}(x))-x\|+E_{y\sim \mu _{Y}}\|G_{Y}(G_{X}(y))-y\|}
The generators aim to minimize the objective, and the discriminators aim to maximize it:
{\displaystyle \min _{G_{X},G_{Y}}\max _{D_{X},D_{Y}}L(G_{X},G_{Y},D_{X},D_{Y})}
Unlike previous work such as pix2pix, which requires paired training data, CycleGAN requires no paired data. To train a pix2pix model to turn a summer scenery photo into a winter scenery photo and back, the dataset must contain pairs of the same place in summer and winter, shot at the same angle; CycleGAN needs only a set of summer scenery photos and an unrelated set of winter scenery photos.
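The cycle consistency term is straightforward to estimate from samples: translate each image to the other domain and back, and penalize the distance to the original. A minimal NumPy sketch in which simple invertible pixel maps stand in for the learned generator networks:

```python
import numpy as np

def cycle_loss(G_X, G_Y, xs, ys):
    """E_x ||G_X(G_Y(x)) - x|| + E_y ||G_Y(G_X(y)) - y||, with an L1 norm."""
    fwd = np.abs(G_X(G_Y(xs)) - xs).sum(axis=1).mean()
    bwd = np.abs(G_Y(G_X(ys)) - ys).sum(axis=1).mean()
    return fwd + bwd

# Toy "translators" between a dark domain X and a bright domain Y; these
# stand in for the generator networks a real CycleGAN would learn.
to_bright = lambda x: x + 0.5
to_dark = lambda y: y - 0.5

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 0.4, size=(8, 16))   # batch of flattened X images
ys = rng.uniform(0.6, 1.0, size=(8, 16))   # batch of flattened Y images

# Exact inverses => zero cycle loss.
assert cycle_loss(to_bright, to_dark, xs, ys) < 1e-9
# A lossy translator (it destroys information) => positive cycle loss.
assert cycle_loss(to_bright, lambda y: np.zeros_like(y), xs, ys) > 0
```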
=== GANs with particularly large or small scales ===
==== BigGAN ====
The BigGAN is essentially a self-attention GAN trained on a large scale (up to 80 million parameters) to generate large images of ImageNet (up to 512 x 512 resolution), with numerous engineering tricks to make it converge.
==== Invertible data augmentation ====
When there is insufficient training data, the reference distribution {\displaystyle \mu _{\text{ref}}} cannot be well-approximated by the empirical distribution given by the training dataset. In such cases, data augmentation can be applied, to allow training GAN on smaller datasets. Naïve data augmentation, however, brings its own problems.
Consider the original GAN game, slightly reformulated as follows:
{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\\\min _{G}L_{G}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\end{cases}}}
Now we use data augmentation by randomly sampling semantic-preserving transforms {\displaystyle T:\Omega \to \Omega } and applying them to the dataset, to obtain the reformulated GAN game:
{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}},T\sim \mu _{\text{trans}}}[\ln D(T(x))]-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\\\min _{G}L_{G}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\end{cases}}}
This is equivalent to a GAN game with a different distribution {\displaystyle \mu _{\text{ref}}'}, sampled by {\displaystyle T(x)}, with {\displaystyle x\sim \mu _{\text{ref}},T\sim \mu _{\text{trans}}}. For example, if {\displaystyle \mu _{\text{ref}}} is the distribution of images in ImageNet, and {\displaystyle \mu _{\text{trans}}} samples identity-transform with probability 0.5 and horizontal-reflection with probability 0.5, then {\displaystyle \mu _{\text{ref}}'} is the distribution of images in ImageNet and horizontally-reflected ImageNet, combined.
The result of such training would be a generator that mimics {\displaystyle \mu _{\text{ref}}'}. For example, it would generate images that look like they are randomly cropped, if the data augmentation uses random cropping.
The solution is to apply data augmentation to both generated and real images:
{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}},T\sim \mu _{\text{trans}}}[\ln D(T(x))]-\operatorname {E} _{x\sim \mu _{G},T\sim \mu _{\text{trans}}}[\ln(1-D(T(x)))]\\\min _{G}L_{G}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{G},T\sim \mu _{\text{trans}}}[\ln(1-D(T(x)))]\end{cases}}}
The authors demonstrated high-quality generation using datasets of just 100 pictures.
The StyleGAN-2-ADA paper points out a further requirement on data augmentation: it must be invertible. Continuing the example of generating ImageNet pictures, if the data augmentation is "randomly rotate the picture by 0, 90, 180, 270 degrees with equal probability", then there is no way for the generator to know which is the true orientation: consider two generators {\displaystyle G,G'} such that for any latent {\displaystyle z}, the generated image {\displaystyle G(z)} is a 90-degree rotation of {\displaystyle G'(z)}. They would have exactly the same expected loss, so neither is preferred over the other.
The solution is to use only invertible data augmentation: instead of "randomly rotate the picture by 0, 90, 180, 270 degrees with equal probability", use "randomly rotate the picture by 90, 180, 270 degrees with 0.1 probability each, and keep the picture as it is with 0.7 probability". This way, the generator is still rewarded for keeping images oriented the same way as un-augmented ImageNet pictures.
Abstractly, the effect of randomly sampling transformations {\displaystyle T:\Omega \to \Omega } from the distribution {\displaystyle \mu _{\text{trans}}} is to define a Markov kernel {\displaystyle K_{\text{trans}}:\Omega \to {\mathcal {P}}(\Omega )}. Then, the data-augmented GAN game pushes the generator to find some {\displaystyle {\hat {\mu }}_{G}\in {\mathcal {P}}(\Omega )} such that
{\displaystyle K_{\text{trans}}*\mu _{\text{ref}}=K_{\text{trans}}*{\hat {\mu }}_{G}}
where {\displaystyle *} is the Markov kernel convolution.
A data-augmentation method is defined to be invertible if its Markov kernel {\displaystyle K_{\text{trans}}} satisfies
{\displaystyle K_{\text{trans}}*\mu =K_{\text{trans}}*\mu '\implies \mu =\mu '\quad \forall \mu ,\mu '\in {\mathcal {P}}(\Omega )}
Immediately by definition, composing multiple invertible data-augmentation methods yields another invertible method. Also by definition, if the data-augmentation method is invertible, then using it in a GAN game does not change the optimal strategy {\displaystyle {\hat {\mu }}_{G}} for the generator, which is still {\displaystyle \mu _{\text{ref}}}.
There are two prototypical examples of invertible Markov kernels:
Discrete case: invertible stochastic matrices, when {\displaystyle \Omega } is finite.
For example, if {\displaystyle \Omega =\{\uparrow ,\downarrow ,\leftarrow ,\rightarrow \}} is the set of four images of an arrow pointing in 4 directions, and the data augmentation is "randomly rotate the picture by 90, 180, 270 degrees with probability {\displaystyle p} each, and keep the picture as it is with probability {\displaystyle (1-3p)}", then the Markov kernel {\displaystyle K_{\text{trans}}} can be represented as a stochastic matrix:
{\displaystyle [K_{\text{trans}}]={\begin{bmatrix}(1-3p)&p&p&p\\p&(1-3p)&p&p\\p&p&(1-3p)&p\\p&p&p&(1-3p)\end{bmatrix}}}
and {\displaystyle K_{\text{trans}}} is an invertible kernel iff {\displaystyle [K_{\text{trans}}]} is an invertible matrix, that is, {\displaystyle p\neq 1/4}.
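The condition p ≠ 1/4 can be confirmed directly by building the matrix and checking its determinant; the eigenvalues of this circulant matrix are 1 (once) and 1 − 4p (three times). A quick NumPy check, which also recovers a clean distribution from its augmented version when the kernel is invertible:

```python
import numpy as np

def rotation_kernel(p):
    """Stochastic matrix: keep with prob 1-3p, rotate to each of the other
    three orientations with prob p."""
    K = np.full((4, 4), p)
    np.fill_diagonal(K, 1 - 3 * p)
    return K

assert abs(np.linalg.det(rotation_kernel(0.25))) < 1e-12   # p = 1/4: singular
assert abs(np.linalg.det(rotation_kernel(0.10))) > 1e-6    # p != 1/4: invertible

# An invertible kernel lets us recover the clean distribution mu from the
# augmented one (distributions as vectors, the kernel acting by K^T).
mu = np.array([0.7, 0.1, 0.1, 0.1])
K = rotation_kernel(0.10)
augmented = K.T @ mu
recovered = np.linalg.solve(K.T, augmented)
assert np.allclose(recovered, mu)
```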
Continuous case: the gaussian kernel, when {\displaystyle \Omega =\mathbb {R} ^{n}} for some {\displaystyle n\geq 1}.
For example, if {\displaystyle \Omega =\mathbb {R} ^{256^{2}}} is the space of 256x256 images, and the data-augmentation method is "generate a gaussian noise {\displaystyle z\sim {\mathcal {N}}(0,I_{256^{2}})}, then add {\displaystyle \epsilon z} to the image", then {\displaystyle K_{\text{trans}}} is just convolution by the density function of {\displaystyle {\mathcal {N}}(0,\epsilon ^{2}I_{256^{2}})}. This is invertible, because convolution by a gaussian is convolution by the heat kernel, so given any {\displaystyle \mu \in {\mathcal {P}}(\mathbb {R} ^{n})}, the convolved distribution {\displaystyle K_{\text{trans}}*\mu } can be obtained by heating up {\displaystyle \mathbb {R} ^{n}} precisely according to {\displaystyle \mu }, then waiting for time {\displaystyle \epsilon ^{2}/4}. With that, we can recover {\displaystyle \mu } by running the heat equation backwards in time for {\displaystyle \epsilon ^{2}/4}.
More examples of invertible data augmentations are found in the paper.
==== SinGAN ====
SinGAN pushes data augmentation to the limit, by using only a single image as training data and performing data augmentation on it. The GAN architecture is adapted to this training method by using a multi-scale pipeline.
The generator {\displaystyle G} is decomposed into a pyramid of generators {\displaystyle G=G_{1}\circ G_{2}\circ \cdots \circ G_{N}}, with the lowest one generating the image {\displaystyle G_{N}(z_{N})} at the lowest resolution. The generated image is then scaled up to {\displaystyle r(G_{N}(z_{N}))} and fed to the next level to generate an image {\displaystyle G_{N-1}(z_{N-1}+r(G_{N}(z_{N})))} at a higher resolution, and so on. The discriminator is decomposed into a pyramid as well.
=== StyleGAN series ===
The StyleGAN family is a series of architectures published by Nvidia's research division.
==== Progressive GAN ====
Progressive GAN is a method for stably training GAN for large-scale image generation, growing a GAN generator from small to large scale in a pyramidal fashion. Like SinGAN, it decomposes the generator as {\displaystyle G=G_{1}\circ G_{2}\circ \cdots \circ G_{N}}, and the discriminator as {\displaystyle D=D_{1}\circ D_{2}\circ \cdots \circ D_{N}}.
During training, at first only {\displaystyle G_{N},D_{N}} are used in a GAN game to generate 4x4 images. Then {\displaystyle G_{N-1},D_{N-1}} are added to reach the second stage of the GAN game, to generate 8x8 images, and so on, until we reach a GAN game to generate 1024x1024 images.
To avoid shock between stages of the GAN game, each new layer is "blended in" (Figure 2 of the paper). For example, this is how the second stage GAN game starts:
Just before, the GAN game consists of the pair
G
N
,
D
N
{\displaystyle G_{N},D_{N}}
generating and discriminating 4x4 images.
Just after, the GAN game consists of the pair ((1 − α) + α·G_{N−1}) ∘ u ∘ G_N, D_N ∘ d ∘ ((1 − α) + α·D_{N−1}) generating and discriminating 8x8 images. Here, the functions u, d are image up- and down-sampling functions, and α is a blend-in factor (much like an alpha in image compositing) that smoothly glides from 0 to 1.
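The fade-in of a new stage can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the Nvidia implementation; `upsample` and `block` are hypothetical toy stand-ins for the real image upsampler and the newly added generator block.

```python
import numpy as np

def fade_in(x_low, new_block, upsample, alpha):
    """Blend a newly added generator block into the output, as in
    ((1 - alpha) + alpha * G_new) after upsampling: at alpha=0 the new
    block is bypassed; at alpha=1 it fully replaces the skip path."""
    up = upsample(x_low)
    return (1.0 - alpha) * up + alpha * new_block(up)

# Toy stand-ins (hypothetical): nearest-neighbour 2x upsampling and an
# additive "block", so the blend is easy to check by hand.
upsample = lambda x: np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
block = lambda x: x + 1.0  # pretend newly added layer

x = np.zeros((4, 4))
out0 = fade_in(x, block, upsample, alpha=0.0)  # pure upsampled skip path
out1 = fade_in(x, block, upsample, alpha=1.0)  # pure new block
```

During training α is swept from 0 to 1, so the new layer's output is introduced gradually rather than all at once.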
==== StyleGAN-1 ====
StyleGAN-1 is designed as a combination of Progressive GAN with neural style transfer.
The key architectural choice of StyleGAN-1 is a progressive growth mechanism, similar to Progressive GAN. Each generated image starts as a constant 4×4×512 array, which is repeatedly passed through style blocks. Each style block applies a "style latent vector" via an affine transform ("adaptive instance normalization"), similar to how neural style transfer uses the Gramian matrix. It then adds noise and normalizes (subtracting the mean, then dividing by the standard deviation).
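The per-channel normalize-then-restyle step can be sketched as follows. This is a simplified illustration of adaptive instance normalization, assuming the style network has already mapped the latent vector to a per-channel scale and bias; the noise injection is omitted.

```python
import numpy as np

def adain(feature_map, style_scale, style_bias, eps=1e-5):
    """Adaptive instance normalization (sketch): normalize each channel
    of one sample to zero mean / unit variance over its spatial axes,
    then apply a per-channel affine transform derived from the style."""
    # feature_map: (channels, height, width)
    mean = feature_map.mean(axis=(1, 2), keepdims=True)
    std = feature_map.std(axis=(1, 2), keepdims=True)
    normalized = (feature_map - mean) / (std + eps)
    return style_scale[:, None, None] * normalized + style_bias[:, None, None]

rng = np.random.default_rng(0)
fm = rng.normal(5.0, 3.0, size=(4, 8, 8))       # hypothetical feature map
out = adain(fm, np.ones(4), np.zeros(4))         # identity style
```

With scale 1 and bias 0 the block simply whitens each channel; a real style vector shifts and rescales each channel differently.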
At training time, usually only one style latent vector is used per image generated, but sometimes two ("mixing regularization") in order to encourage each style block to independently perform its stylization without expecting help from other style blocks (since they might receive an entirely different style latent vector).
After training, multiple style latent vectors can be fed into each style block. Those fed to the lower layers control the large-scale styles, and those fed to the higher layers control the fine-detail styles.
Style-mixing between two images x, x′ can be performed as well. First, run a gradient descent to find z, z′ such that G(z) ≈ x and G(z′) ≈ x′. This is called "projecting an image back to style latent space". Then, z can be fed to the lower style blocks, and z′ to the higher style blocks, to generate a composite image that has the large-scale style of x, and the fine-detail style of x′. Multiple images can also be composed this way.
==== StyleGAN-2 ====
StyleGAN-2 improves upon StyleGAN-1, by using the style latent vector to transform the convolution layer's weights instead, thus solving the "blob" problem.
This was updated by the StyleGAN-2-ADA ("ADA" stands for "adaptive"), which uses invertible data augmentation as described above. It also tunes the amount of data augmentation applied by starting at zero, and gradually increasing it until an "overfitting heuristic" reaches a target level, thus the name "adaptive".
==== StyleGAN-3 ====
StyleGAN-3 improves upon StyleGAN-2 by solving the "texture sticking" problem, which can be seen in the official videos. They analyzed the problem using the Nyquist–Shannon sampling theorem, and argued that the layers in the generator learned to exploit the high-frequency signal in the pixels they operate upon.
To solve this, they proposed imposing strict lowpass filters between each generator's layers, so that the generator is forced to operate on the pixels in a way faithful to the continuous signals they represent, rather than operate on them as merely discrete signals. They further imposed rotational and translational invariance by using more signal filters. The resulting StyleGAN-3 is able to solve the texture sticking problem, as well as generating images that rotate and translate smoothly.
== Other uses ==
Other than for generative and discriminative modelling of data, GANs have been used for other things.
GANs have been used for transfer learning to enforce the alignment of the latent feature space, such as in deep reinforcement learning. This works by feeding the embeddings of the source and target task to the discriminator which tries to guess the context. The resulting loss is then (inversely) backpropagated through the encoder.
== Applications ==
=== Science ===
Iteratively reconstruct astronomical images
Simulate gravitational lensing for dark matter research.
Model the distribution of dark matter in a particular direction in space and to predict the gravitational lensing that will occur.
Model high energy jet formation and showers through calorimeters of high-energy physics experiments.
Approximate bottlenecks in computationally expensive simulations of particle physics experiments. Applications in the context of present and proposed CERN experiments have demonstrated the potential of these methods for accelerating simulation and/or improving simulation fidelity.
Reconstruct velocity and scalar fields in turbulent flows.
GAN-generated molecules were validated experimentally in mice.
=== Medical ===
One of the major concerns in medical imaging is preserving patient privacy. For this reason, researchers often face difficulties in obtaining medical images for their research purposes. GANs have been used to generate synthetic medical images, such as MRI and PET images, to address this challenge.
GANs can be used to detect glaucomatous images, aiding early diagnosis, which is essential to avoid partial or total loss of vision.
GANs have been used to create forensic facial reconstructions of deceased historical figures.
=== Malicious ===
Concerns have been raised about the potential use of GAN-based human image synthesis for sinister purposes, e.g., to produce fake, possibly incriminating, photographs and videos.
GANs can be used to generate unique, realistic profile photos of people who do not exist, in order to automate creation of fake social media profiles.
In 2019 the state of California considered and passed on October 3, 2019, the bill AB-602, which bans the use of human image synthesis technologies to make fake pornography without the consent of the people depicted, and bill AB-730, which prohibits distribution of manipulated videos of a political candidate within 60 days of an election. Both bills were authored by Assembly member Marc Berman and signed by Governor Gavin Newsom. The laws went into effect in 2020.
DARPA's Media Forensics program studies ways to counteract fake media, including fake media produced using GANs.
=== Fashion, art and advertising ===
GANs can be used to generate art; The Verge wrote in March 2019 that "The images created by GANs have become the defining look of contemporary AI art." GANs can also be used to
inpaint photographs
generate fashion models, shadows, photorealistic renders of interior design, industrial design, shoes, etc. Such networks were reported to be used by Facebook.
Some have used GANs for artistic creativity, as in the "creative adversarial network". A GAN, trained on a set of 15,000 portraits from WikiArt from the 14th to the 19th century, created the 2018 painting Edmond de Belamy, which sold for US$432,500.
GANs were used by the video game modding community to up-scale low-resolution 2D textures in old video games by recreating them in 4k or higher resolutions via image training, and then down-sampling them to fit the game's native resolution (resembling supersampling anti-aliasing).
In 2020, Artbreeder was used to create the main antagonist in the sequel to the psychological web horror series Ben Drowned. The author would later go on to praise GAN applications for their ability to help generate assets for independent artists who are short on budget and manpower.
In May 2020, Nvidia researchers taught an AI system (termed "GameGAN") to recreate the game of Pac-Man simply by watching it being played.
In August 2019, a large dataset consisting of 12,197 MIDI songs each with paired lyrics and melody alignment was created for neural melody generation from lyrics using conditional GAN-LSTM (refer to sources at GitHub AI Melody Generation from Lyrics).
=== Miscellaneous ===
GANs have been used to
show how an individual's appearance might change with age.
reconstruct 3D models of objects from images,
generate novel objects as 3D point clouds,
model patterns of motion in video.
inpaint missing features in maps, transfer map styles in cartography or augment street view imagery.
use feedback to generate images and replace image search systems.
visualize the effect that climate change will have on specific houses.
reconstruct an image of a person's face after listening to their voice.
produce videos of a person speaking, given only a single photo of that person.
perform recurrent sequence generation.
== History ==
In 1991, Juergen Schmidhuber published "artificial curiosity", neural networks in a zero-sum game. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. GANs can be regarded as a case where the environmental reaction is 1 or 0 depending on whether the first network's output is in a given set.
Other people had similar ideas but did not develop them similarly. An idea involving adversarial networks was published in a 2010 blog post by Olli Niemitalo. This idea was never implemented and did not involve stochasticity in the generator and thus was not a generative model. It is now known as a conditional GAN or cGAN. An idea similar to GANs was used to model animal behavior by Li, Gauci and Gross in 2013.
Another inspiration for GANs was noise-contrastive estimation, which uses the same loss function as GANs and which Goodfellow studied during his PhD in 2010–2014.
Adversarial machine learning has other uses besides generative modeling and can be applied to models other than neural networks. In control theory, adversarial learning based on neural networks was used in 2006 to train robust controllers in a game theoretic sense, by alternating the iterations between a minimizer policy, the controller, and a maximizer policy, the disturbance.
In 2017, a GAN was used for image enhancement focusing on realistic textures rather than pixel-accuracy, producing a higher image quality at high magnification. In 2017, the first faces were generated. These were exhibited in February 2018 at the Grand Palais. Faces generated by StyleGAN in 2019 drew comparisons with Deepfakes.
== See also ==
Artificial intelligence art – Visual media created with AI
Deepfake – Realistic artificially generated media
Deep learning – Branch of machine learning
Diffusion model – Deep learning algorithm
Generative artificial intelligence – Subset of AI using generative models
Synthetic media – Artificial production, manipulation, and modification of data and media by automated means
== References ==
== External links ==
Knight, Will. "5 Big Predictions for Artificial Intelligence in 2017". MIT Technology Review. Retrieved January 5, 2017.
Karras, Tero; Laine, Samuli; Aila, Timo (2018). "A Style-Based Generator Architecture for Generative Adversarial Networks". arXiv:1812.04948 [cs.NE].
This Person Does Not Exist – photorealistic images of people who do not exist, generated by StyleGAN
This Cat Does Not Exist Archived March 5, 2019, at the Wayback Machine – photorealistic images of cats who do not exist, generated by StyleGAN
Wang, Zhengwei; She, Qi; Ward, Tomas E. (2019). "Generative Adversarial Networks in Computer Vision: A Survey and Taxonomy". arXiv:1906.01529 [cs.LG]. | Wikipedia/Generative_adversarial_networks |
An energy-based model (EBM) (also called Canonical Ensemble Learning or Learning via Canonical Ensemble – CEL and LCE, respectively) is an application of canonical ensemble formulation from statistical physics for learning from data. The approach prominently appears in generative artificial intelligence.
EBMs provide a unified framework for many probabilistic and non-probabilistic approaches to such learning, particularly for training graphical and other structured models.
An EBM learns the characteristics of a target dataset and generates a similar but larger dataset. EBMs detect the latent variables of a dataset and generate new datasets with a similar distribution.
Energy-based generative neural networks are a class of generative models that aim to learn explicit probability distributions of data in the form of energy-based models, whose energy functions are parameterized by modern deep neural networks.
Boltzmann machines are a special form of energy-based models with a specific parametrization of the energy.
== Description ==
For a given input x, the model describes an energy E_θ(x) such that the Boltzmann distribution P_θ(x) = exp(−βE_θ(x))/Z(θ) is a probability (density), and typically β = 1.
Since the normalization constant Z(θ) := ∫_{x∈X} exp(−βE_θ(x)) dx (also known as the partition function) depends on all the Boltzmann factors of all possible inputs x, it cannot be easily computed or reliably estimated during training simply using standard maximum likelihood estimation.
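For intuition, in low dimensions the partition function can be brute-forced on a grid; it is exactly this integral that becomes intractable when x is, say, an image. The quadratic energy below is a hypothetical toy whose true value Z = √(2π) is known.

```python
import numpy as np

def partition_function(energy, xs):
    """Riemann-sum approximation of Z = integral of exp(-E(x)) dx on a
    1-D grid; feasible only because the domain is tiny."""
    dx = xs[1] - xs[0]
    return np.sum(np.exp(-energy(xs))) * dx

# Toy energy E(x) = x^2/2 with beta = 1: p(x) is the standard normal
# density up to normalization, so Z = sqrt(2*pi) ≈ 2.5066.
xs = np.linspace(-10.0, 10.0, 100001)
Z = partition_function(lambda x: 0.5 * x**2, xs)
```

In d dimensions the same grid would need 100001^d points, which is why training has to avoid computing Z directly.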
However, for maximizing the likelihood during training, the gradient of the log-likelihood of a single training example x is given by using the chain rule:
∂_θ log P_θ(x) = E_{x′∼P_θ}[∂_θ E_θ(x′)] − ∂_θ E_θ(x)   (∗)
The expectation in the above formula for the gradient can be approximately estimated by drawing samples x′ from the distribution P_θ using Markov chain Monte Carlo (MCMC).
Early energy-based models, such as the 2003 Boltzmann machine by Hinton, estimated this expectation via blocked Gibbs sampling. Newer approaches make use of more efficient Stochastic Gradient Langevin Dynamics (LD), drawing samples using:
x′_0 ∼ P_0,  x′_{i+1} = x′_i − (α/2) ∂E_θ(x′_i)/∂x′_i + ε,
where ε ∼ N(0, α). A replay buffer of past values x′_i is used with LD to initialize the optimization module.
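The Langevin update can be sketched directly. This is an unadjusted-Langevin toy rather than a full training loop, and the quadratic energy is a hypothetical example whose stationary distribution is approximately standard normal.

```python
import numpy as np

def langevin_sample(grad_energy, x0, alpha, n_steps, rng):
    """x'_{i+1} = x'_i - (alpha/2) * dE/dx(x'_i) + eps, with
    eps ~ N(0, alpha), i.e. noise standard deviation sqrt(alpha)."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        eps = rng.normal(0.0, np.sqrt(alpha), size=x.shape)
        x = x - 0.5 * alpha * grad_energy(x) + eps
    return x

# Toy energy E(x) = x^2/2, so grad E(x) = x; 1000 chains run in parallel,
# each started from 0 (standing in for the replay buffer / P_0).
rng = np.random.default_rng(0)
samples = langevin_sample(lambda x: x, np.zeros(1000), alpha=0.01,
                          n_steps=500, rng=rng)
```

In an actual EBM, `grad_energy` would be the gradient of the network's energy with respect to the input, obtained by automatic differentiation.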
The parameters θ of the neural network are therefore trained in a generative manner via MCMC-based maximum likelihood estimation: the learning process follows an "analysis by synthesis" scheme, where within each learning iteration, the algorithm samples the synthesized examples from the current model by a gradient-based MCMC method (e.g., Langevin dynamics or Hybrid Monte Carlo), and then updates the parameters θ based on the difference between the training examples and the synthesized ones – see equation (∗). This process can be interpreted as an alternating mode-seeking and mode-shifting process, and also has an adversarial interpretation.
Essentially, the model learns a function E_θ that associates low energies to correct values, and higher energies to incorrect values.
After training, given a converged energy model E_θ, the Metropolis–Hastings algorithm can be used to draw new samples. The acceptance probability is given by:
P_acc(x_i → x*) = min(1, P_θ(x*)/P_θ(x_i)).
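Because the ratio P_θ(x*)/P_θ(x_i) equals exp(E_θ(x_i) − E_θ(x*)), the intractable partition function Z(θ) cancels, which is what makes Metropolis–Hastings usable with a learned energy. A minimal random-walk sketch on a hypothetical quadratic energy:

```python
import numpy as np

def metropolis_hastings(energy, x0, proposal_std, n_steps, rng):
    """Random-walk Metropolis-Hastings: accept x* with probability
    min(1, exp(E(x_i) - E(x*))); Z(theta) cancels in the ratio."""
    x = x0
    e_x = energy(x)
    chain = []
    for _ in range(n_steps):
        x_new = x + rng.normal(0.0, proposal_std)
        e_new = energy(x_new)
        if rng.random() < min(1.0, np.exp(e_x - e_new)):
            x, e_x = x_new, e_new
        chain.append(x)
    return np.array(chain)

# Toy energy E(x) = x^2/2, whose Boltzmann distribution is N(0, 1).
chain = metropolis_hastings(lambda x: 0.5 * x**2, 0.0, 1.0, 20000,
                            np.random.default_rng(1))
```

For a trained EBM, `energy` would simply be a forward pass of the network, with proposals drawn in input space.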
== History ==
The term "energy-based models" was first coined in a 2003 JMLR paper where the authors defined a generalisation of independent components analysis to the overcomplete setting using EBMs.
Other early work on EBMs proposed models that represented energy as a composition of latent and observable variables.
== Characteristics ==
EBMs demonstrate useful properties:
Simplicity and stability–The EBM is the only object that needs to be designed and trained. Separate networks need not be trained to ensure balance.
Adaptive computation time–An EBM can generate sharp, diverse samples or (more quickly) coarse, less diverse samples. Given infinite time, this procedure produces true samples.
Flexibility–In Variational Autoencoders (VAE) and flow-based models, the generator learns a map from a continuous space to a (possibly) discontinuous space containing different data modes. EBMs can learn to assign low energies to disjoint regions (multiple modes).
Adaptive generation–EBM generators are implicitly defined by the probability distribution, and automatically adapt as the distribution changes (without training), allowing EBMs to address domains where generator training is impractical, as well as minimizing mode collapse and avoiding spurious modes from out-of-distribution samples.
Compositionality–Individual models are unnormalized probability distributions, allowing models to be combined through product of experts or other hierarchical techniques.
== Experimental results ==
On image datasets such as CIFAR-10 and ImageNet 32x32, an EBM model generated high-quality images relatively quickly. It supported combining features learned from one type of image for generating other types of images. It was able to generalize using out-of-distribution datasets, outperforming flow-based and autoregressive models. EBM was relatively resistant to adversarial perturbations, behaving better than models explicitly trained against them with training for classification.
== Applications ==
Target applications include natural language processing, robotics and computer vision.
The first energy-based generative neural network is the generative ConvNet, proposed in 2016 for image patterns, in which the neural network is a convolutional neural network. The model has been generalized to various domains to learn distributions of videos and 3D voxels, and has been made more effective in subsequent variants. These models have proven useful for data generation (e.g., image synthesis, video synthesis, 3D shape synthesis, etc.), data recovery (e.g., recovering videos with missing pixels or image frames, 3D super-resolution, etc.), and data reconstruction (e.g., image reconstruction and linear interpolation).
== Alternatives ==
EBMs compete with techniques such as variational autoencoders (VAEs), generative adversarial networks (GANs) or normalizing flows.
== Extensions ==
=== Joint energy-based models ===
Joint energy-based models (JEM), proposed in 2020 by Grathwohl et al., allow any classifier with softmax output to be interpreted as an energy-based model. The key observation is that such a classifier is trained to predict the conditional probability
p_θ(y|x) = e^{f_θ(x)[y]} / Σ_{j=1}^K e^{f_θ(x)[j]}  for y = 1, …, K and f_θ = (f_1, …, f_K) ∈ R^K,
where f_θ(x)[y] is the y-th index of the logits f_θ corresponding to class y.
Without any change to the logits it was proposed to reinterpret the logits to describe a joint probability density:
p_θ(y, x) = e^{f_θ(x)[y]} / Z(θ),
with unknown partition function Z(θ) and energy E_θ(x, y) = −f_θ(x)[y].
By marginalization, we obtain the unnormalized density
p_θ(x) = Σ_y p_θ(y, x) = Σ_y e^{f_θ(x)[y]} / Z(θ) =: exp(−E_θ(x)),
therefore
E_θ(x) = −log(Σ_y e^{f_θ(x)[y]} / Z(θ)),
so that any classifier can be used to define an energy function E_θ(x).
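In code, the energy is just a negative log-sum-exp of the classifier's logits (up to the additive constant log Z(θ), which does not depend on x and is dropped below), while p_θ(y|x) remains the ordinary softmax. A small sketch, assuming a classifier that outputs a logit vector:

```python
import numpy as np

def softmax(logits):
    # conditional p(y|x): the classifier's output is unchanged
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def jem_energy(logits):
    """E(x) = -log sum_y exp(f(x)[y]), computed stably by shifting by
    the max logit; the x-independent constant log Z(theta) is dropped."""
    m = logits.max()
    return -(m + np.log(np.exp(logits - m).sum()))

logits = np.array([1.0, 2.0, 3.0])  # hypothetical f_theta(x)
```

The same logits thus serve double duty: softmax gives the class posterior, and their log-sum-exp gives an unnormalized density over inputs.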
== See also ==
Empirical likelihood
Posterior predictive distribution
Contrastive learning
== Literature ==
Implicit Generation and Generalization in Energy-Based Models Yilun Du, Igor Mordatch https://arxiv.org/abs/1903.08689
Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One, Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky https://arxiv.org/abs/1912.03263
== References ==
== External links ==
"CIAR NCAP Summer School". www.cs.toronto.edu. Retrieved 2019-12-27.
Dayan, Peter; Hinton, Geoffrey; Neal, Radford; Zemel, Richard S. (1999), "Helmholtz Machine", Unsupervised Learning, The MIT Press, doi:10.7551/mitpress/7011.003.0017, ISBN 978-0-262-28803-3
Hinton, Geoffrey E. (August 2002). "Training Products of Experts by Minimizing Contrastive Divergence". Neural Computation. 14 (8): 1771–1800. doi:10.1162/089976602760128018. ISSN 0899-7667. PMID 12180402. S2CID 207596505.
Salakhutdinov, Ruslan; Hinton, Geoffrey (2009-04-15). "Deep Boltzmann Machines". Artificial Intelligence and Statistics: 448–455. | Wikipedia/Energy_based_model |
In statistics, a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population. However, while problems associated with "mixture distributions" relate to deriving the properties of the overall population from those of the sub-populations, "mixture models" are used to make statistical inferences about the properties of the sub-populations given only observations on the pooled population, without sub-population identity information. Mixture models are used for clustering, under the name model-based clustering, and also for density estimation.
Mixture models should not be confused with models for compositional data, i.e., data whose components are constrained to sum to a constant value (1, 100%, etc.). However, compositional models can be thought of as mixture models, where members of the population are sampled at random. Conversely, mixture models can be thought of as compositional models, where the total size of the population has been normalized to 1.
== Structure ==
=== General mixture model ===
A typical finite-dimensional mixture model is a hierarchical model consisting of the following components:
N random variables that are observed, each distributed according to a mixture of K components, with the components belonging to the same parametric family of distributions (e.g., all normal, all Zipfian, etc.) but with different parameters
N random latent variables specifying the identity of the mixture component of each observation, each distributed according to a K-dimensional categorical distribution
A set of K mixture weights, which are probabilities that sum to 1.
A set of K parameters, each specifying the parameter of the corresponding mixture component. In many cases, each "parameter" is actually a set of parameters. For example, if the mixture components are Gaussian distributions, there will be a mean and variance for each component. If the mixture components are categorical distributions (e.g., when each observation is a token from a finite alphabet of size V), there will be a vector of V probabilities summing to 1.
In addition, in a Bayesian setting, the mixture weights and parameters will themselves be random variables, and prior distributions will be placed over the variables. In such a case, the weights are typically viewed as a K-dimensional random vector drawn from a Dirichlet distribution (the conjugate prior of the categorical distribution), and the parameters will be distributed according to their respective conjugate priors.
Mathematically, a basic parametric mixture model can be described as follows:
K = number of mixture components
N = number of observations
θ_{i=1…K} = parameter of distribution of observation associated with component i
φ_{i=1…K} = mixture weight, i.e., prior probability of a particular component i
φ = K-dimensional vector composed of all the individual φ_{1…K}; must sum to 1
z_{i=1…N} = component of observation i
x_{i=1…N} = observation i
F(x|θ) = probability distribution of an observation, parametrized on θ
z_{i=1…N} ∼ Categorical(φ)
x_{i=1…N} | z_{i=1…N} ∼ F(θ_{z_i})
In a Bayesian setting, all parameters are associated with random variables, as follows:
K, N = as above
θ_{i=1…K}, φ_{i=1…K}, φ = as above
z_{i=1…N}, x_{i=1…N}, F(x|θ) = as above
α = shared hyperparameter for component parameters
β = shared hyperparameter for mixture weights
H(θ|α) = prior probability distribution of component parameters, parametrized on α
θ_{i=1…K} ∼ H(θ|α)
φ ∼ Symmetric-Dirichlet_K(β)
z_{i=1…N} | φ ∼ Categorical(φ)
x_{i=1…N} | z_{i=1…N}, θ_{i=1…K} ∼ F(θ_{z_i})
This characterization uses F and H to describe arbitrary distributions over observations and parameters, respectively. Typically H will be the conjugate prior of F. The two most common choices of F are Gaussian aka "normal" (for real-valued observations) and categorical (for discrete observations). Other common possibilities for the distribution of the mixture components are:
Binomial distribution, for the number of "positive occurrences" (e.g., successes, yes votes, etc.) given a fixed number of total occurrences
Multinomial distribution, similar to the binomial distribution, but for counts of multi-way occurrences (e.g., yes/no/maybe in a survey)
Negative binomial distribution, for binomial-type observations but where the quantity of interest is the number of failures before a given number of successes occurs
Poisson distribution, for the number of occurrences of an event in a given period of time, for an event that is characterized by a fixed rate of occurrence
Exponential distribution, for the time before the next event occurs, for an event that is characterized by a fixed rate of occurrence
Log-normal distribution, for positive real numbers that are assumed to grow exponentially, such as incomes or prices
Multivariate normal distribution (aka multivariate Gaussian distribution), for vectors of correlated outcomes that are individually Gaussian-distributed
Multivariate Student's t-distribution, for vectors of heavy-tailed correlated outcomes
A vector of Bernoulli-distributed values, corresponding, e.g., to a black-and-white image, with each value representing a pixel; see the handwriting-recognition example below
=== Specific examples ===
==== Gaussian mixture model ====
A typical non-Bayesian Gaussian mixture model looks like this:
K, N = as above
φ_{i=1…K}, φ = as above
z_{i=1…N}, x_{i=1…N} = as above
θ_{i=1…K} = {μ_{i=1…K}, σ²_{i=1…K}}
μ_{i=1…K} = mean of component i
σ²_{i=1…K} = variance of component i
z_{i=1…N} ∼ Categorical(φ)
x_{i=1…N} ∼ N(μ_{z_i}, σ²_{z_i})
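The generative process above (draw a component label, then draw the observation from that component) can be sketched directly; the weights and component parameters below are hypothetical.

```python
import numpy as np

def sample_gmm(weights, means, stds, n, rng):
    """Sample n observations from a 1-D Gaussian mixture:
    z_i ~ Categorical(weights), then x_i ~ N(means[z_i], stds[z_i]^2)."""
    z = rng.choice(len(weights), size=n, p=weights)
    x = rng.normal(np.asarray(means)[z], np.asarray(stds)[z])
    return x, z

rng = np.random.default_rng(0)
x, z = sample_gmm([0.3, 0.7], [-2.0, 3.0], [0.5, 1.0], 10000, rng)
# By linearity, E[x] = 0.3*(-2) + 0.7*3 = 1.5
```

In the non-Bayesian model the latent labels z are discarded when the data are observed; inference must then recover them (or their posterior) from x alone.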
A Bayesian version of a Gaussian mixture model is as follows:
K, N = as above
φ_{i=1…K}, φ = as above
z_{i=1…N}, x_{i=1…N} = as above
θ_{i=1…K} = {μ_{i=1…K}, σ²_{i=1…K}}
μ_{i=1…K} = mean of component i
σ²_{i=1…K} = variance of component i
μ_0, λ, ν, σ²_0 = shared hyperparameters
μ_{i=1…K} ∼ N(μ_0, λσ²_i)
σ²_{i=1…K} ∼ Inverse-Gamma(ν, σ²_0)
φ ∼ Symmetric-Dirichlet_K(β)
z_{i=1…N} ∼ Categorical(φ)
x_{i=1…N} ∼ N(μ_{z_i}, σ²_{z_i})
==== Multivariate Gaussian mixture model ====
A Bayesian Gaussian mixture model is commonly extended to fit a vector of unknown parameters (denoted in bold), or multivariate normal distributions. In a multivariate distribution (i.e. one modelling a vector
x
{\displaystyle {\boldsymbol {x}}}
with N random variables) one may model a vector of parameters (such as several observations of a signal or patches within an image) using a Gaussian mixture model prior distribution on the vector of estimates given by
p
(
θ
)
=
∑
i
=
1
K
ϕ
i
N
(
μ
i
,
Σ
i
)
{\displaystyle p({\boldsymbol {\theta }})=\sum _{i=1}^{K}\phi _{i}{\mathcal {N}}({\boldsymbol {\mu }}_{i},{\boldsymbol {\Sigma }}_{i})}
where the ith vector component is characterized by normal distributions with weights
ϕ
i
{\displaystyle \phi _{i}}
, means
μ
i
{\displaystyle {\boldsymbol {\mu }}_{i}}
and covariance matrices
Σ
i
{\displaystyle {\boldsymbol {\Sigma }}_{i}}
. To incorporate this prior into a Bayesian estimation, the prior is multiplied with the known distribution
p
(
x
|
θ
)
{\displaystyle p({\boldsymbol {x|\theta }})}
of the data
x
{\displaystyle {\boldsymbol {x}}}
conditioned on the parameters
θ
{\displaystyle {\boldsymbol {\theta }}}
to be estimated. With this formulation, the posterior distribution
p
(
θ
|
x
)
{\displaystyle p({\boldsymbol {\theta |x}})}
is also a Gaussian mixture model of the form
{\displaystyle p({\boldsymbol {\theta |x}})=\sum _{i=1}^{K}{\tilde {\phi }}_{i}{\mathcal {N}}({\boldsymbol {{\tilde {\mu }}_{i}}},{\boldsymbol {\tilde {\Sigma }}}_{i})}
with new parameters {\displaystyle {\tilde {\phi }}_{i},{\boldsymbol {\tilde {\mu }}}_{i}} and {\displaystyle {\boldsymbol {\tilde {\Sigma }}}_{i}} that are updated using the EM algorithm.
Although EM-based parameter updates are well-established, providing the initial estimates for these parameters is currently an area of active research. Note that this formulation yields a closed-form solution to the complete posterior distribution. Estimates of the random variable {\displaystyle {\boldsymbol {\theta }}} may be obtained via one of several estimators, such as the mean or maximum of the posterior distribution.
Such distributions are useful for assuming patch-wise shapes of images and clusters, for example. In the case of image representation, each Gaussian may be tilted, expanded, and warped according to the covariance matrices {\displaystyle {\boldsymbol {\Sigma }}_{i}}. One Gaussian distribution of the set is fit to each patch (usually of size 8×8 pixels) in the image. Notably, any distribution of points around a cluster (see k-means) may be accurately modelled given enough Gaussian components, but rarely more than K=20 components are needed to accurately model a given image distribution or cluster of data.
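To illustrate how the covariance matrices tilt and expand individual components, the following NumPy sketch samples from a two-component bivariate Gaussian mixture and evaluates its density; the weights, means, and covariances are made-up illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two illustrative 2-D components: weights, means, and covariance matrices.
phi = np.array([0.6, 0.4])
mu = np.array([[0.0, 0.0], [4.0, 4.0]])
Sigma = np.array([[[2.0, 1.2], [1.2, 1.0]],   # tilted, elongated Gaussian
                  [[0.5, 0.0], [0.0, 0.5]]])  # small isotropic Gaussian

# Sample N points from the mixture: pick a component, then draw from it.
N = 1000
z = rng.choice(2, size=N, p=phi)
x = np.array([rng.multivariate_normal(mu[k], Sigma[k]) for k in z])

# Mixture density at a point: weighted sum of the component densities.
def gmm_pdf(point):
    total = 0.0
    for k in range(2):
        d = point - mu[k]
        inv = np.linalg.inv(Sigma[k])
        det = np.linalg.det(Sigma[k])
        total += phi[k] * np.exp(-0.5 * d @ inv @ d) / (2 * np.pi * np.sqrt(det))
    return total
```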
==== Categorical mixture model ====
A typical non-Bayesian mixture model with categorical observations looks like this:
{\displaystyle K,N:} as above
{\displaystyle \phi _{i=1\dots K},{\boldsymbol {\phi }}:} as above
{\displaystyle z_{i=1\dots N},x_{i=1\dots N}:} as above
{\displaystyle V:} dimension of categorical observations, e.g., size of word vocabulary
{\displaystyle \theta _{i=1\dots K,j=1\dots V}:} probability for component {\displaystyle i} of observing item {\displaystyle j}
{\displaystyle {\boldsymbol {\theta }}_{i=1\dots K}:} vector of dimension {\displaystyle V,} composed of {\displaystyle \theta _{i,1\dots V};} must sum to 1
The random variables:
{\displaystyle {\begin{array}{lcl}z_{i=1\dots N}&\sim &\operatorname {Categorical} ({\boldsymbol {\phi }})\\x_{i=1\dots N}&\sim &{\text{Categorical}}({\boldsymbol {\theta }}_{z_{i}})\end{array}}}
A typical Bayesian mixture model with categorical observations looks like this:
{\displaystyle K,N:} as above
{\displaystyle \phi _{i=1\dots K},{\boldsymbol {\phi }}:} as above
{\displaystyle z_{i=1\dots N},x_{i=1\dots N}:} as above
{\displaystyle V:} dimension of categorical observations, e.g., size of word vocabulary
{\displaystyle \theta _{i=1\dots K,j=1\dots V}:} probability for component {\displaystyle i} of observing item {\displaystyle j}
{\displaystyle {\boldsymbol {\theta }}_{i=1\dots K}:} vector of dimension {\displaystyle V,} composed of {\displaystyle \theta _{i,1\dots V};} must sum to 1
{\displaystyle \alpha :} shared concentration hyperparameter of {\displaystyle {\boldsymbol {\theta }}} for each component
{\displaystyle \beta :} concentration hyperparameter of {\displaystyle {\boldsymbol {\phi }}}
The random variables:
{\displaystyle {\begin{array}{lcl}{\boldsymbol {\phi }}&\sim &\operatorname {Symmetric-Dirichlet} _{K}(\beta )\\{\boldsymbol {\theta }}_{i=1\dots K}&\sim &{\text{Symmetric-Dirichlet}}_{V}(\alpha )\\z_{i=1\dots N}&\sim &\operatorname {Categorical} ({\boldsymbol {\phi }})\\x_{i=1\dots N}&\sim &{\text{Categorical}}({\boldsymbol {\theta }}_{z_{i}})\end{array}}}
== Examples ==
=== A financial model ===
Financial returns often behave differently in normal situations and during crisis times. A mixture model for return data seems reasonable. Sometimes the model used is a jump-diffusion model, or a mixture of two normal distributions. See Financial economics § Challenges and criticism and Financial risk management § Banking for further context.
=== House prices ===
Assume that we observe the prices of N different houses. Different types of houses in different neighborhoods will have vastly different prices, but the price of a particular type of house in a particular neighborhood (e.g., three-bedroom house in moderately upscale neighborhood) will tend to cluster fairly closely around the mean. One possible model of such prices would be to assume that the prices are accurately described by a mixture model with K different components, each distributed as a normal distribution with unknown mean and variance, with each component specifying a particular combination of house type/neighborhood. Fitting this model to observed prices, e.g., using the expectation-maximization algorithm, would tend to cluster the prices according to house type/neighborhood and reveal the spread of prices in each type/neighborhood. (Note that for values such as prices or incomes that are guaranteed to be positive and which tend to grow exponentially, a log-normal distribution might actually be a better model than a normal distribution.)
=== Topics in a document ===
Assume that a document is composed of N different words from a total vocabulary of size V, where each word corresponds to one of K possible topics. The distribution of such words could be modelled as a mixture of K different V-dimensional categorical distributions. A model of this sort is commonly termed a topic model. Note that expectation maximization applied to such a model will typically fail to produce realistic results, due (among other things) to the excessive number of parameters. Some sorts of additional assumptions are typically necessary to get good results. Typically two sorts of additional components are added to the model:
A prior distribution is placed over the parameters describing the topic distributions, using a Dirichlet distribution with a concentration parameter that is set significantly below 1, so as to encourage sparse distributions (where only a small number of words have significantly non-zero probabilities).
Some sort of additional constraint is placed over the topic identities of words, to take advantage of natural clustering.
For example, a Markov chain could be placed on the topic identities (i.e., the latent variables specifying the mixture component of each observation), corresponding to the fact that nearby words belong to similar topics. (This results in a hidden Markov model, specifically one where a prior distribution is placed over state transitions that favors transitions that stay in the same state.)
Another possibility is the latent Dirichlet allocation model, which divides up the words into D different documents and assumes that in each document only a small number of topics occur with any frequency.
=== Handwriting recognition ===
The following example is based on an example in Christopher M. Bishop, Pattern Recognition and Machine Learning.
Imagine that we are given an N×N black-and-white image that is known to be a scan of a hand-written digit between 0 and 9, but we don't know which digit is written. We can create a mixture model with {\displaystyle K=10} different components, where each component is a vector of size {\displaystyle N^{2}} of Bernoulli distributions (one per pixel). Such a model can be trained with the expectation-maximization algorithm on an unlabeled set of hand-written digits, and will effectively cluster the images according to the digit being written. The same model could then be used to recognize the digit of another image simply by holding the parameters constant, computing the probability of the new image for each possible digit (a trivial calculation), and returning the digit that generated the highest probability.
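The recognition step described above amounts to comparing per-component Bernoulli log-likelihoods. A minimal sketch, in which randomly generated pixel probabilities stand in for genuinely trained parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "trained" parameters: 10 components, each a vector of N^2
# Bernoulli pixel probabilities (random here, purely for illustration).
n_pixels = 8 * 8
theta = rng.uniform(0.05, 0.95, size=(10, n_pixels))  # theta[k, p] = P(pixel p black | digit k)
phi = np.full(10, 0.1)                                # uniform mixture weights

def classify(image):
    """Return the digit whose Bernoulli component gives the image the highest probability."""
    # log P(image | k) = sum_p [ image_p log theta_{k,p} + (1 - image_p) log(1 - theta_{k,p}) ]
    log_lik = image @ np.log(theta.T) + (1 - image) @ np.log(1 - theta.T)
    return int(np.argmax(np.log(phi) + log_lik))

# Sample a synthetic "image" from component 7 and classify it.
image = (rng.uniform(size=n_pixels) < theta[7]).astype(float)
digit = classify(image)
```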
=== Assessing projectile accuracy (a.k.a. circular error probable, CEP) ===
Mixture models apply in the problem of directing multiple projectiles at a target (as in air, land, or sea defense applications), where the physical and/or statistical characteristics of the projectiles differ within the multiple projectiles. An example might be shots from multiple munitions types or shots from multiple locations directed at one target. The combination of projectile types may be characterized as a Gaussian mixture model. Further, a well-known measure of accuracy for a group of projectiles is the circular error probable (CEP), which is the number R such that, on average, half of the group of projectiles falls within the circle of radius R about the target point. The mixture model can be used to determine (or estimate) the value R. The mixture model properly captures the different types of projectiles.
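Under an assumed two-type Gaussian mixture of impact points, R can be estimated by Monte Carlo as the median miss distance; the weights and dispersions below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical mix: 70% shots from a precise munition, 30% from a less precise one.
phi = np.array([0.7, 0.3])
sigmas = np.array([1.0, 3.0])          # per-axis dispersion (std dev) of each type

# Monte Carlo: sample impact points around the target (at the origin).
N = 100_000
z = rng.choice(2, size=N, p=phi)       # which munition type produced each shot
impacts = rng.normal(scale=sigmas[z][:, None], size=(N, 2))
radii = np.linalg.norm(impacts, axis=1)

# CEP: the radius R such that half of the shots fall within R of the target.
cep = np.median(radii)
```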
=== Direct and indirect applications ===
The financial example above is one direct application of the mixture model, a situation in which we assume an underlying mechanism so that each observation belongs to one of some number of different sources or categories. This underlying mechanism may or may not, however, be observable. In this form of mixture, each of the sources is described by a component probability density function, and its mixture weight is the probability that an observation comes from this component.
In an indirect application of the mixture model we do not assume such a mechanism. The mixture model is simply used for its mathematical flexibility. For example, a mixture of two normal distributions with different means may result in a density with two modes, which is not modeled by standard parametric distributions. Another example is the ability of mixture distributions to model fatter tails than the basic Gaussian ones, making them a candidate for modeling more extreme events.
=== Predictive maintenance ===
Mixture model-based clustering is also predominantly used in identifying the state of a machine in predictive maintenance. Density plots are used to analyze the density of high-dimensional features. If multi-modal densities are observed, then it is assumed that a finite set of densities is formed by a finite set of normal mixtures. A multivariate Gaussian mixture model is used to cluster the feature data into k groups, where k represents each state of the machine. The machine state can be a normal state, a power-off state, or a faulty state. Each formed cluster can be diagnosed using techniques such as spectral analysis. In recent years, this has also been widely used in other areas such as early fault detection.
=== Fuzzy image segmentation ===
In image processing and computer vision, traditional image segmentation models often assign to one pixel only one exclusive pattern. In fuzzy or soft segmentation, any pattern can have certain "ownership" over any single pixel. If the patterns are Gaussian, fuzzy segmentation naturally results in Gaussian mixtures. Combined with other analytic or geometric tools (e.g., phase transitions over diffusive boundaries), such spatially regularized mixture models could lead to more realistic and computationally efficient segmentation methods.
=== Point set registration ===
Probabilistic mixture models such as Gaussian mixture models (GMM) are used to resolve point set registration problems in image processing and computer vision fields. For pair-wise point set registration, one point set is regarded as the centroids of mixture models, and the other point set is regarded as data points (observations). State-of-the-art methods include coherent point drift (CPD) and Student's t-distribution mixture models (TMM). Recent research demonstrates the superiority of hybrid mixture models (e.g. combining the Student's t-distribution and the Watson/Bingham distributions to model spatial positions and axis orientations separately) compared to CPD and TMM, in terms of inherent robustness, accuracy and discriminative capacity.
== Identifiability ==
Identifiability refers to the existence of a unique characterization for any one of the models in the class (family) being considered. Estimation procedures may not be well-defined and asymptotic theory may not hold if a model is not identifiable.
=== Example ===
Let J be the class of all binomial distributions with n = 2. Then a mixture of two members of J would have
{\displaystyle {\begin{aligned}p_{0}&=\pi {\left(1-\theta _{1}\right)}^{2}+\left(1-\pi \right){\left(1-\theta _{2}\right)}^{2}\\[1ex]p_{1}&=2\pi \theta _{1}\left(1-\theta _{1}\right)+2\left(1-\pi \right)\theta _{2}\left(1-\theta _{2}\right)\end{aligned}}}
and p2 = 1 − p0 − p1. Clearly, given p0 and p1, it is not possible to determine the above mixture model uniquely, as there are three parameters (π, θ1, θ2) to be determined.
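The non-identifiability is easy to check numerically: relabeling the two components, (π, θ1, θ2) → (1 − π, θ2, θ1), yields exactly the same (p0, p1, p2), so the three parameters cannot be recovered from the observable probabilities. A short sketch:

```python
import numpy as np

def binom2_mixture(pi, t1, t2):
    """(p0, p1, p2) for a mixture of two binomial(n = 2) distributions."""
    p0 = pi * (1 - t1) ** 2 + (1 - pi) * (1 - t2) ** 2
    p1 = 2 * pi * t1 * (1 - t1) + 2 * (1 - pi) * t2 * (1 - t2)
    return p0, p1, 1 - p0 - p1

# Two distinct parameter triples giving exactly the same mixture distribution:
a = binom2_mixture(0.3, 0.2, 0.7)
b = binom2_mixture(0.7, 0.7, 0.2)
```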
=== Definition ===
Consider a mixture of parametric distributions of the same class. Let
{\displaystyle J=\{f(\cdot ;\theta ):\theta \in \Omega \}}
be the class of all component distributions. Then the convex hull K of J defines the class of all finite mixture of distributions in J:
{\displaystyle K=\left\{p(\cdot ):p(\cdot )=\sum _{i=1}^{n}a_{i}f_{i}(\cdot ;\theta _{i}),a_{i}>0,\sum _{i=1}^{n}a_{i}=1,f_{i}(\cdot ;\theta _{i})\in J\ \forall i,n\right\}}
K is said to be identifiable if all its members are unique, that is, given two members p and p′ in K, being mixtures of k distributions and k′ distributions respectively in J, we have p = p′ if and only if, first of all, k = k′ and secondly we can reorder the summations such that ai = ai′ and fi = fi′ for all i.
== Parameter estimation and system identification ==
Parametric mixture models are often used when we know the distribution Y and we can sample from X, but we would like to determine the ai and θi values. Such situations can arise in studies in which we sample from a population that is composed of several distinct subpopulations.
It is common to think of probability mixture modeling as a missing data problem. One way to understand this is to assume that the data points under consideration have "membership" in one of the distributions we are using to model the data. When we start, this membership is unknown, or missing. The job of estimation is to devise appropriate parameters for the model functions we choose, with the connection to the data points being represented as their membership in the individual model distributions.
A variety of approaches to the problem of mixture decomposition have been proposed, many of which focus on maximum likelihood methods such as expectation maximization (EM) or maximum a posteriori estimation (MAP). Generally these methods consider separately the questions of system identification and parameter estimation; methods to determine the number and functional form of components within a mixture are distinguished from methods to estimate the corresponding parameter values. Some notable departures are the graphical methods as outlined in Tarter and Lock and more recently minimum message length (MML) techniques such as Figueiredo and Jain and to some extent the moment matching pattern analysis routines suggested by McWilliam and Loh (2009).
=== Expectation maximization (EM) ===
Expectation maximization (EM) is seemingly the most popular technique used to determine the parameters of a mixture with an a priori given number of components. This is a particular way of implementing maximum likelihood estimation for this problem. EM is of particular appeal for finite normal mixtures where closed-form expressions are possible such as in the following iterative algorithm by Dempster et al. (1977)
{\displaystyle w_{s}^{(j+1)}={\frac {1}{N}}\sum _{t=1}^{N}h_{s}^{(j)}(t)}
{\displaystyle \mu _{s}^{(j+1)}={\frac {\sum _{t=1}^{N}h_{s}^{(j)}(t)x^{(t)}}{\sum _{t=1}^{N}h_{s}^{(j)}(t)}}}
{\displaystyle \Sigma _{s}^{(j+1)}={\frac {\sum _{t=1}^{N}h_{s}^{(j)}(t)[x^{(t)}-\mu _{s}^{(j+1)}][x^{(t)}-\mu _{s}^{(j+1)}]^{\top }}{\sum _{t=1}^{N}h_{s}^{(j)}(t)}}}
with the posterior probabilities
{\displaystyle h_{s}^{(j)}(t)={\frac {w_{s}^{(j)}p_{s}(x^{(t)};\mu _{s}^{(j)},\Sigma _{s}^{(j)})}{\sum _{i=1}^{n}w_{i}^{(j)}p_{i}(x^{(t)};\mu _{i}^{(j)},\Sigma _{i}^{(j)})}}.}
Thus on the basis of the current estimate for the parameters, the conditional probability for a given observation x(t) being generated from state s is determined for each t = 1, …, N; N being the sample size. The parameters are then updated such that the new component weights correspond to the average conditional probability, and each component mean and covariance is the component-specific weighted average of the mean and covariance of the entire sample.
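The iteration just described translates directly into code. A NumPy sketch of one EM step for an n-component multivariate normal mixture, followed by a demonstration on synthetic two-component data (the data and initialization are illustrative):

```python
import numpy as np

def em_step(x, w, mu, Sigma):
    """One EM iteration for an n-component Gaussian mixture.

    x: (N, d) data; w: (n,) weights; mu: (n, d) means; Sigma: (n, d, d) covariances.
    """
    N, d = x.shape
    n = len(w)
    # E step: posterior probabilities h_s(t) of observation t coming from state s.
    dens = np.empty((n, N))
    for s in range(n):
        diff = x - mu[s]
        inv = np.linalg.inv(Sigma[s])
        norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma[s]))
        dens[s] = w[s] * np.exp(-0.5 * np.einsum('td,de,te->t', diff, inv, diff)) / norm
    h = dens / dens.sum(axis=0)
    # M step: the three update equations.
    w_new = h.mean(axis=1)                            # average conditional probability
    mu_new = (h @ x) / h.sum(axis=1)[:, None]         # weighted means
    Sigma_new = np.empty_like(Sigma)
    for s in range(n):
        diff = x - mu_new[s]
        Sigma_new[s] = np.einsum('t,td,te->de', h[s], diff, diff) / h[s].sum()
    return w_new, mu_new, Sigma_new

# Demonstration on synthetic data from two well-separated components.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(5.0, 1.0, (200, 2))])
w, mu = np.array([0.5, 0.5]), np.array([[0.0, 1.0], [4.0, 4.0]])
Sigma = np.array([np.eye(2), np.eye(2)])
for _ in range(30):
    w, mu, Sigma = em_step(x, w, mu, Sigma)
```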
Dempster also showed that each successive EM iteration will not decrease the likelihood, a property not shared by other gradient based maximization techniques. Moreover, EM naturally embeds within it constraints on the probability vector, and for sufficiently large sample sizes positive definiteness of the covariance iterates. This is a key advantage since explicitly constrained methods incur extra computational costs to check and maintain appropriate values. Theoretically EM is a first-order algorithm and as such converges slowly to a fixed-point solution. Redner and Walker (1984) make this point arguing in favour of superlinear and second order Newton and quasi-Newton methods and reporting slow convergence in EM on the basis of their empirical tests. They do concede that convergence in likelihood was rapid even if convergence in the parameter values themselves was not. The relative merits of EM and other algorithms vis-à-vis convergence have been discussed in other literature.
Other common objections to the use of EM are that it has a propensity to spuriously identify local maxima, as well as displaying sensitivity to initial values. One may address these problems by evaluating EM at several initial points in the parameter space but this is computationally costly and other approaches, such as the annealing EM method of Ueda and Nakano (1998) (in which the initial components are essentially forced to overlap, providing a less heterogeneous basis for initial guesses), may be preferable.
Figueiredo and Jain note that convergence to 'meaningless' parameter values obtained at the boundary (where regularity conditions break down, e.g., Ghosh and Sen (1985)) is frequently observed when the number of model components exceeds the optimal/true one. On this basis they suggest a unified approach to estimation and identification in which the initial n is chosen to greatly exceed the expected optimal value. Their optimization routine is constructed via a minimum message length (MML) criterion that effectively eliminates a candidate component if there is insufficient information to support it. In this way it is possible to systematize reductions in n and consider estimation and identification jointly.
==== The expectation step ====
With initial guesses for the parameters of our mixture model, "partial membership" of each data point in each constituent distribution is computed by calculating expectation values for the membership variables of each data point. That is, for each data point xj and distribution Yi, the membership value yi, j is:
{\displaystyle y_{i,j}={\frac {a_{i}f_{Y}(x_{j};\theta _{i})}{f_{X}(x_{j})}}.}
==== The maximization step ====
With expectation values in hand for group membership, plug-in estimates are recomputed for the distribution parameters.
The mixing coefficients ai are the means of the membership values over the N data points.
{\displaystyle a_{i}={\frac {1}{N}}\sum _{j=1}^{N}y_{i,j}}
The component model parameters θi are also calculated by expectation maximization using data points xj that have been weighted using the membership values. For example, if θ is a mean μ
{\displaystyle \mu _{i}={\frac {\sum _{j}y_{i,j}x_{j}}{\sum _{j}y_{i,j}}}.}
With new estimates for ai and the θi's, the expectation step is repeated to recompute new membership values. The entire procedure is repeated until model parameters converge.
=== Markov chain Monte Carlo ===
As an alternative to the EM algorithm, the mixture model parameters can be deduced using posterior sampling as indicated by Bayes' theorem. This is still regarded as an incomplete data problem in which membership of data points is the missing data. A two-step iterative procedure known as Gibbs sampling can be used.
The previous example of a mixture of two Gaussian distributions can demonstrate how the method works. As before, initial guesses of the parameters for the mixture model are made. Instead of computing partial memberships for each elemental distribution, a membership value for each data point is drawn from a Bernoulli distribution (that is, it will be assigned to either the first or the second Gaussian). The Bernoulli parameter θ is determined for each data point on the basis of one of the constituent distributions. Draws from the distribution generate membership associations for each data point. Plug-in estimators can then be used as in the M step of EM to generate a new set of mixture model parameters, and the binomial draw step repeated.
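A sketch of this two-step Gibbs procedure for a mixture of two Gaussians with unknown means; for simplicity the variances are assumed known (unit), the data are synthetic, and the parameter updates are the plug-in estimates described above:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data from a hypothetical two-Gaussian mixture (unit variances).
x = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])

# Initial guesses for the unknown means and mixing weight.
mu = np.array([-1.0, 1.0])
pi = 0.5

for _ in range(200):
    # Step 1: draw a membership for each point from a Bernoulli distribution
    # whose parameter is the current posterior probability of component 1.
    p1 = pi * np.exp(-0.5 * (x - mu[1]) ** 2)
    p0 = (1 - pi) * np.exp(-0.5 * (x - mu[0]) ** 2)
    z = rng.uniform(size=x.size) < p1 / (p0 + p1)   # True -> assigned to component 1

    # Step 2: plug-in parameter updates from the sampled memberships.
    pi = z.mean()
    if (~z).any():
        mu[0] = x[~z].mean()
    if z.any():
        mu[1] = x[z].mean()
```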
=== Moment matching ===
The method of moment matching is one of the oldest techniques for determining the mixture parameters, dating back to Karl Pearson's seminal work of 1894.
In this approach the parameters of the mixture are determined such that the composite distribution has moments matching some given value. In many instances extraction of solutions to the moment equations may present non-trivial algebraic or computational problems. Moreover, numerical analysis by Day has indicated that such methods may be inefficient compared to EM. Nonetheless, there has been renewed interest in this method, e.g., Craigmile and Titterington (1998) and Wang.
McWilliam and Loh (2009) consider the characterisation of a hyper-cuboid normal mixture copula in large dimensional systems for which EM would be computationally prohibitive. Here a pattern analysis routine is used to generate multivariate tail-dependencies consistent with a set of univariate and (in some sense) bivariate moments. The performance of this method is then evaluated using equity log-return data with Kolmogorov–Smirnov test statistics suggesting a good descriptive fit.
=== Spectral method ===
Some problems in mixture model estimation can be solved using spectral methods. In particular it becomes useful if data points xi are points in high-dimensional real space, and the hidden distributions are known to be log-concave (such as the Gaussian distribution or the exponential distribution).
Spectral methods of learning mixture models are based on the use of the singular value decomposition of a matrix which contains the data points. The idea is to consider the top k singular vectors, where k is the number of distributions to be learned. Projecting each data point onto the linear subspace spanned by those vectors groups points originating from the same distribution very close together, while points from different distributions stay far apart.
One distinctive feature of the spectral method is that it allows us to prove that if the distributions satisfy a certain separation condition (e.g., not too close), then the estimated mixture will be very close to the true one with high probability.
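A minimal sketch of the projection step only (not a full learning algorithm), on synthetic data from k = 2 separated high-dimensional Gaussians:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data: n points from each of k = 2 separated Gaussians in d dimensions.
d, n, k = 50, 200, 2
centers = np.zeros((2, d))
centers[1] = 8.0 / np.sqrt(d)          # total separation 8, spread over all coordinates
labels = np.array([0] * n + [1] * n)
X = rng.normal(centers[labels], 1.0)   # (2n, d) data matrix, unit noise

# Project onto the span of the top-k right singular vectors of the (centered) data.
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
proj = X @ Vt[:k].T                    # low-dimensional representation of each point
```

In the projection, points from the same distribution land close together while the two clusters remain well separated along the leading singular direction.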
=== Graphical methods ===
Tarter and Lock describe a graphical approach to mixture identification in which a kernel function is applied to an empirical frequency plot so as to reduce intra-component variance. In this way one may more readily identify components having differing means. While this λ-method does not require prior knowledge of the number or functional form of the components, its success does rely on the choice of the kernel parameters, which to some extent implicitly embeds assumptions about the component structure.
=== Other methods ===
Some of these methods can even provably learn mixtures of heavy-tailed distributions, including those with infinite variance (see links to papers below). In this setting, EM-based methods would not work, since the expectation step would diverge due to the presence of outliers.
=== A simulation ===
To simulate a sample of size N from a mixture of distributions Fi, i = 1 to n, with probabilities pi (where ∑i pi = 1):
Generate N random numbers from a categorical distribution of size n with probabilities pi for i = 1 to n. These tell you which of the Fi each of the N values will come from. Denote by mi the quantity of random numbers assigned to the ith category.
For each i, generate mi random numbers from the Fi distribution.
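The two steps above can be sketched as follows; the component distributions Fi and the weights pi are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative mixture: three components F_i with weights p_i (summing to 1).
N = 1000
p = np.array([0.5, 0.3, 0.2])
samplers = [lambda m: rng.normal(0.0, 1.0, m),       # F_1: standard normal
            lambda m: rng.exponential(2.0, m),       # F_2: exponential
            lambda m: rng.uniform(-5.0, 5.0, m)]     # F_3: uniform

# Step 1: categorical draws decide how many values come from each F_i.
m = rng.multinomial(N, p)                            # m[i] = count for the ith category

# Step 2: draw m[i] values from each F_i, then pool (shuffled, so order carries no signal).
sample = np.concatenate([f(mi) for f, mi in zip(samplers, m)])
rng.shuffle(sample)
```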
== Extensions ==
In a Bayesian setting, additional levels can be added to the graphical model defining the mixture model. For example, in the common latent Dirichlet allocation topic model, the observations are sets of words drawn from D different documents and the K mixture components represent topics that are shared across documents. Each document has a different set of mixture weights, which specify the topics prevalent in that document. All sets of mixture weights share common hyperparameters.
A very common extension is to connect the latent variables defining the mixture component identities into a Markov chain, instead of assuming that they are independent identically distributed random variables. The resulting model is termed a hidden Markov model and is one of the most common sequential hierarchical models. Numerous extensions of hidden Markov models have been developed; see the resulting article for more information.
== History ==
Mixture distributions and the problem of mixture decomposition, that is the identification of its constituent components and the parameters thereof, has been cited in the literature as far back as 1846 (Quetelet in McLachlan, 2000) although common reference is made to the work of Karl Pearson (1894) as the first author to explicitly address the decomposition problem in characterising non-normal attributes of forehead to body length ratios in female shore crab populations. The motivation for this work was provided by the zoologist Walter Frank Raphael Weldon who had speculated in 1893 (in Tarter and Lock) that asymmetry in the histogram of these ratios could signal evolutionary divergence. Pearson's approach was to fit a univariate mixture of two normals to the data by choosing the five parameters of the mixture such that the empirical moments matched that of the model.
While his work was successful in identifying two potentially distinct sub-populations and in demonstrating the flexibility of mixtures as a moment matching tool, the formulation required the solution of a 9th degree (nonic) polynomial which at the time posed a significant computational challenge.
Subsequent works focused on addressing these problems, but it was not until the advent of the modern computer and the popularisation of Maximum Likelihood (MLE) parameterisation techniques that research really took off. Since that time there has been a vast body of research on the subject spanning areas such as fisheries research, agriculture, botany, economics, medicine, genetics, psychology, palaeontology, electrophoresis, finance, geology and zoology.
== See also ==
=== Mixture ===
Mixture density
Mixture (probability)
Flexible Mixture Model (FMM)
Subspace Gaussian mixture model
Giry monad
=== Hierarchical models ===
Graphical model
Hierarchical Bayes model
=== Outlier detection ===
RANSAC
== References ==
== Further reading ==
=== Books on mixture models ===
Everitt, B.S.; Hand, D.J. (1981). Finite mixture distributions. Chapman & Hall. ISBN 978-0-412-22420-1.
Lindsay, B. G. (1995). Mixture Models: Theory, Geometry, and Applications. NSF-CBMS Regional Conference Series in Probability and Statistics. Vol. 5. Hayward: Institute of Mathematical Statistics.
Marin, J.M.; Mengersen, K.; Robert, C. P. (2011). "Bayesian modelling and inference on mixtures of distributions" (PDF). In Dey, D.; Rao, C.R. (eds.). Essential Bayesian models. Handbook of statistics: Bayesian thinking - modeling and computation. Vol. 25. Elsevier. ISBN 9780444537324.
McLachlan, G.J.; Peel, D. (2000). Finite Mixture Models. Wiley. ISBN 978-0-471-00626-8.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 16.1. Gaussian Mixture Models and k-Means Clustering". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
Titterington, D.; Smith, A.; Makov, U. (1985). Statistical Analysis of Finite Mixture Distributions. Wiley. ISBN 978-0-471-90763-3.
Yao, W.; Xiang, S. (2024). Mixture Models: Parametric, Semiparametric, and New Directions. Chapman & Hall/CRC Press. ISBN 978-0367481827.
=== Application of Gaussian mixture models ===
Reynolds, D.A.; Rose, R.C. (January 1995). "Robust text-independent speaker identification using Gaussian mixture speaker models". IEEE Transactions on Speech and Audio Processing. 3 (1): 72–83. doi:10.1109/89.365379. S2CID 7319345.
Permuter, H.; Francos, J.; Jermyn, I.H. (2003). Gaussian mixture models of texture and colour for image database retrieval. IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings (ICASSP '03). doi:10.1109/ICASSP.2003.1199538.
Permuter, Haim; Francos, Joseph; Jermyn, Ian (2006). "A study of Gaussian mixture models of color and texture features for image classification and segmentation" (PDF). Pattern Recognition. 39 (4): 695–706. Bibcode:2006PatRe..39..695P. doi:10.1016/j.patcog.2005.10.028. S2CID 8530776.
Lemke, Wolfgang (2005). Term Structure Modeling and Estimation in a State Space Framework. Springer Verlag. ISBN 978-3-540-28342-3.
Brigo, Damiano; Mercurio, Fabio (2001). Displaced and Mixture Diffusions for Analytically-Tractable Smile Models. Mathematical Finance – Bachelier Congress 2000. Proceedings. Springer Verlag.
Brigo, Damiano; Mercurio, Fabio (June 2002). "Lognormal-mixture dynamics and calibration to market volatility smiles". International Journal of Theoretical and Applied Finance. 5 (4): 427. CiteSeerX 10.1.1.210.4165. doi:10.1142/S0219024902001511.
Spall, J. C.; Maryak, J. L. (1992). "A feasible Bayesian estimator of quantiles for projectile accuracy from non-i.i.d. data". Journal of the American Statistical Association. 87 (419): 676–681. doi:10.1080/01621459.1992.10475269. JSTOR 2290205.
Alexander, Carol (December 2004). "Normal mixture diffusion with uncertain volatility: Modelling short- and long-term smile effects" (PDF). Journal of Banking & Finance. 28 (12): 2957–80. doi:10.1016/j.jbankfin.2003.10.017.
Stylianou, Yannis; Pantazis, Yannis; Calderero, Felipe; Larroy, Pedro; Severin, Francois; Schimke, Sascha; Bonal, Rolando; Matta, Federico; Valsamakis, Athanasios (2005). GMM-Based Multimodal Biometric Verification (PDF).
Chen, J.; Adebomi, O.E.; Olusayo, O.S.; Kulesza, W. (2010). The Evaluation of the Gaussian Mixture Probability Hypothesis Density approach for multi-target tracking. IEEE International Conference on Imaging Systems and Techniques, 2010. doi:10.1109/IST.2010.5548541.
== External links ==
Nielsen, Frank (23 March 2012). "K-MLE: A fast algorithm for learning statistical mixture models". 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 869–872. arXiv:1203.5181. Bibcode:2012arXiv1203.5181N. doi:10.1109/ICASSP.2012.6288022. ISBN 978-1-4673-0046-9. S2CID 935615.
The SOCR demonstrations of EM and Mixture Modeling
Mixture modelling page (and the Snob program for Minimum Message Length (MML) applied to finite mixture models), maintained by D.L. Dowe.
PyMix – Python Mixture Package, algorithms and data structures for a broad variety of mixture model based data mining applications in Python
sklearn.mixture – A module from the scikit-learn Python library for learning Gaussian Mixture Models (and sampling from them), previously packaged with SciPy and now packaged as a SciKit
GMM.m Matlab code for GMM Implementation
GPUmix C++ implementation of Bayesian Mixture Models using EM and MCMC with 100x speed acceleration using GPGPU.
[2] Matlab code for GMM Implementation using EM algorithm
[3] jMEF: A Java open source library for learning and processing mixtures of exponential families (using duality with Bregman divergences). Includes a Matlab wrapper.
Very Fast and clean C implementation of the Expectation Maximization (EM) algorithm for estimating Gaussian Mixture Models (GMMs).
mclust is an R package for mixture modeling.
dpgmm Pure Python Dirichlet process Gaussian mixture model implementation (variational).
Gaussian Mixture Models Blog post on Gaussian Mixture Models trained via Expectation Maximization, with an implementation in Python. | Wikipedia/Gaussian_mixture_model |
In general, a function approximation problem asks us to select a function among a well-defined class that closely matches ("approximates") a target function in a task-specific way. The need for function approximations arises in many branches of applied mathematics, and computer science in particular , such as predicting the growth of microbes in microbiology. Function approximations are used where theoretical models are unavailable or hard to compute.
One can distinguish two major classes of function approximation problems:
First, for known target functions approximation theory is the branch of numerical analysis that investigates how certain known functions (for example, special functions) can be approximated by a specific class of functions (for example, polynomials or rational functions) that often have desirable properties (inexpensive computation, continuity, integral and limit values, etc.).
Second, the target function, call it g, may be unknown; instead of an explicit formula, only a set of points of the form (x, g(x)) is provided. Depending on the structure of the domain and codomain of g, several techniques for approximating g may be applicable. For example, if g is an operation on the real numbers, techniques of interpolation, extrapolation, regression analysis, and curve fitting can be used. If the codomain (range or target set) of g is a finite set, one is dealing with a classification problem instead.
To some extent, the different problems (regression, classification, fitness approximation) have received a unified treatment in statistical learning theory, where they are viewed as supervised learning problems.
== References ==
== See also ==
Approximation theory
Fitness approximation
Kriging
Least squares (function approximation)
Radial basis function network | Wikipedia/Target_function |
Proximal gradient (forward backward splitting) methods for learning is an area of research in optimization and statistical learning theory which studies algorithms for a general class of convex regularization problems where the regularization penalty may not be differentiable. One such example is
ℓ
1
{\displaystyle \ell _{1}}
regularization (also known as Lasso) of the form
min
w
∈
R
d
1
n
∑
i
=
1
n
(
y
i
−
⟨
w
,
x
i
⟩
)
2
+
λ
‖
w
‖
1
,
where
x
i
∈
R
d
and
y
i
∈
R
.
{\displaystyle \min _{w\in \mathbb {R} ^{d}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-\langle w,x_{i}\rangle )^{2}+\lambda \|w\|_{1},\quad {\text{ where }}x_{i}\in \mathbb {R} ^{d}{\text{ and }}y_{i}\in \mathbb {R} .}
Proximal gradient methods offer a general framework for solving regularization problems from statistical learning theory with penalties that are tailored to a specific problem application. Such customized penalties can help to induce certain structure in problem solutions, such as sparsity (in the case of lasso) or group structure (in the case of group lasso).
== Relevant background ==
Proximal gradient methods are applicable in a wide variety of scenarios for solving convex optimization problems of the form
min
x
∈
H
F
(
x
)
+
R
(
x
)
,
{\displaystyle \min _{x\in {\mathcal {H}}}F(x)+R(x),}
where
F
{\displaystyle F}
is convex and differentiable with Lipschitz continuous gradient,
R
{\displaystyle R}
is a convex, lower semicontinuous function which is possibly nondifferentiable, and
H
{\displaystyle {\mathcal {H}}}
is some set, typically a Hilbert space. The usual criterion of
x
{\displaystyle x}
minimizes
F
(
x
)
+
R
(
x
)
{\displaystyle F(x)+R(x)}
if and only if
∇
(
F
+
R
)
(
x
)
=
0
{\displaystyle \nabla (F+R)(x)=0}
in the convex, differentiable setting is now replaced by
0
∈
∂
(
F
+
R
)
(
x
)
,
{\displaystyle 0\in \partial (F+R)(x),}
where
∂
φ
{\displaystyle \partial \varphi }
denotes the subdifferential of a real-valued, convex function
φ
{\displaystyle \varphi }
.
Given a convex function
φ
:
H
→
R
{\displaystyle \varphi :{\mathcal {H}}\to \mathbb {R} }
an important operator to consider is its proximal operator
prox
φ
:
H
→
H
{\displaystyle \operatorname {prox} _{\varphi }:{\mathcal {H}}\to {\mathcal {H}}}
defined by
prox
φ
(
u
)
=
arg
min
x
∈
H
φ
(
x
)
+
1
2
‖
u
−
x
‖
2
2
,
{\displaystyle \operatorname {prox} _{\varphi }(u)=\operatorname {arg} \min _{x\in {\mathcal {H}}}\varphi (x)+{\frac {1}{2}}\|u-x\|_{2}^{2},}
which is well-defined because of the strict convexity of the
ℓ
2
{\displaystyle \ell _{2}}
norm. The proximal operator can be seen as a generalization of a projection.
We see that the proximity operator is important because
x
∗
{\displaystyle x^{*}}
is a minimizer to the problem
min
x
∈
H
F
(
x
)
+
R
(
x
)
{\displaystyle \min _{x\in {\mathcal {H}}}F(x)+R(x)}
if and only if
x
∗
=
prox
γ
R
(
x
∗
−
γ
∇
F
(
x
∗
)
)
,
{\displaystyle x^{*}=\operatorname {prox} _{\gamma R}\left(x^{*}-\gamma \nabla F(x^{*})\right),}
where
γ
>
0
{\displaystyle \gamma >0}
is any positive real number.
=== Moreau decomposition ===
One important technique related to proximal gradient methods is the Moreau decomposition, which decomposes the identity operator as the sum of two proximity operators. Namely, let
φ
:
X
→
R
{\displaystyle \varphi :{\mathcal {X}}\to \mathbb {R} }
be a lower semicontinuous, convex function on a vector space
X
{\displaystyle {\mathcal {X}}}
. We define its Fenchel conjugate
φ
∗
:
X
→
R
{\displaystyle \varphi ^{*}:{\mathcal {X}}\to \mathbb {R} }
to be the function
φ
∗
(
u
)
:=
sup
x
∈
X
⟨
x
,
u
⟩
−
φ
(
x
)
.
{\displaystyle \varphi ^{*}(u):=\sup _{x\in {\mathcal {X}}}\langle x,u\rangle -\varphi (x).}
The general form of Moreau's decomposition states that for any
x
∈
X
{\displaystyle x\in {\mathcal {X}}}
and any
γ
>
0
{\displaystyle \gamma >0}
that
x
=
prox
γ
φ
(
x
)
+
γ
prox
φ
∗
/
γ
(
x
/
γ
)
,
{\displaystyle x=\operatorname {prox} _{\gamma \varphi }(x)+\gamma \operatorname {prox} _{\varphi ^{*}/\gamma }(x/\gamma ),}
which for
γ
=
1
{\displaystyle \gamma =1}
implies that
x
=
prox
φ
(
x
)
+
prox
φ
∗
(
x
)
{\displaystyle x=\operatorname {prox} _{\varphi }(x)+\operatorname {prox} _{\varphi ^{*}}(x)}
. The Moreau decomposition can be seen to be a generalization of the usual orthogonal decomposition of a vector space, analogous with the fact that proximity operators are generalizations of projections.
In certain situations it may be easier to compute the proximity operator for the conjugate
φ
∗
{\displaystyle \varphi ^{*}}
instead of the function
φ
{\displaystyle \varphi }
, and therefore the Moreau decomposition can be applied. This is the case for group lasso.
== Lasso regularization ==
Consider the regularized empirical risk minimization problem with square loss and with the
ℓ
1
{\displaystyle \ell _{1}}
norm as the regularization penalty:
min
w
∈
R
d
1
n
∑
i
=
1
n
(
y
i
−
⟨
w
,
x
i
⟩
)
2
+
λ
‖
w
‖
1
,
{\displaystyle \min _{w\in \mathbb {R} ^{d}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-\langle w,x_{i}\rangle )^{2}+\lambda \|w\|_{1},}
where
x
i
∈
R
d
and
y
i
∈
R
.
{\displaystyle x_{i}\in \mathbb {R} ^{d}{\text{ and }}y_{i}\in \mathbb {R} .}
The
ℓ
1
{\displaystyle \ell _{1}}
regularization problem is sometimes referred to as lasso (least absolute shrinkage and selection operator). Such
ℓ
1
{\displaystyle \ell _{1}}
regularization problems are interesting because they induce sparse solutions, that is, solutions
w
{\displaystyle w}
to the minimization problem have relatively few nonzero components. Lasso can be seen to be a convex relaxation of the non-convex problem
min
w
∈
R
d
1
n
∑
i
=
1
n
(
y
i
−
⟨
w
,
x
i
⟩
)
2
+
λ
‖
w
‖
0
,
{\displaystyle \min _{w\in \mathbb {R} ^{d}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-\langle w,x_{i}\rangle )^{2}+\lambda \|w\|_{0},}
where
‖
w
‖
0
{\displaystyle \|w\|_{0}}
denotes the
ℓ
0
{\displaystyle \ell _{0}}
"norm", which is the number of nonzero entries of the vector
w
{\displaystyle w}
. Sparse solutions are of particular interest in learning theory for interpretability of results: a sparse solution can identify a small number of important factors.
=== Solving for L1 proximity operator ===
For simplicity we restrict our attention to the problem where
λ
=
1
{\displaystyle \lambda =1}
. To solve the problem
min
w
∈
R
d
1
n
∑
i
=
1
n
(
y
i
−
⟨
w
,
x
i
⟩
)
2
+
‖
w
‖
1
,
{\displaystyle \min _{w\in \mathbb {R} ^{d}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-\langle w,x_{i}\rangle )^{2}+\|w\|_{1},}
we consider our objective function in two parts: a convex, differentiable term
F
(
w
)
=
1
n
∑
i
=
1
n
(
y
i
−
⟨
w
,
x
i
⟩
)
2
{\displaystyle F(w)={\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-\langle w,x_{i}\rangle )^{2}}
and a convex function
R
(
w
)
=
‖
w
‖
1
{\displaystyle R(w)=\|w\|_{1}}
. Note that
R
{\displaystyle R}
is not strictly convex.
Let us compute the proximity operator for
R
(
w
)
{\displaystyle R(w)}
. First we find an alternative characterization of the proximity operator
prox
R
(
x
)
{\displaystyle \operatorname {prox} _{R}(x)}
as follows:
u
=
prox
R
(
x
)
⟺
0
∈
∂
(
R
(
u
)
+
1
2
‖
u
−
x
‖
2
2
)
⟺
0
∈
∂
R
(
u
)
+
u
−
x
⟺
x
−
u
∈
∂
R
(
u
)
.
{\displaystyle {\begin{aligned}u=\operatorname {prox} _{R}(x)\iff &0\in \partial \left(R(u)+{\frac {1}{2}}\|u-x\|_{2}^{2}\right)\\\iff &0\in \partial R(u)+u-x\\\iff &x-u\in \partial R(u).\end{aligned}}}
For
R
(
w
)
=
‖
w
‖
1
{\displaystyle R(w)=\|w\|_{1}}
it is easy to compute
∂
R
(
w
)
{\displaystyle \partial R(w)}
: the
i
{\displaystyle i}
th entry of
∂
R
(
w
)
{\displaystyle \partial R(w)}
is precisely
∂
|
w
i
|
=
{
1
,
w
i
>
0
−
1
,
w
i
<
0
[
−
1
,
1
]
,
w
i
=
0.
{\displaystyle \partial |w_{i}|={\begin{cases}1,&w_{i}>0\\-1,&w_{i}<0\\\left[-1,1\right],&w_{i}=0.\end{cases}}}
Using the recharacterization of the proximity operator given above, for the choice of
R
(
w
)
=
‖
w
‖
1
{\displaystyle R(w)=\|w\|_{1}}
and
γ
>
0
{\displaystyle \gamma >0}
we have that
prox
γ
R
(
x
)
{\displaystyle \operatorname {prox} _{\gamma R}(x)}
is defined entrywise by
(
prox
γ
R
(
x
)
)
i
=
{
x
i
−
γ
,
x
i
>
γ
0
,
|
x
i
|
≤
γ
x
i
+
γ
,
x
i
<
−
γ
,
{\displaystyle \left(\operatorname {prox} _{\gamma R}(x)\right)_{i}={\begin{cases}x_{i}-\gamma ,&x_{i}>\gamma \\0,&|x_{i}|\leq \gamma \\x_{i}+\gamma ,&x_{i}<-\gamma ,\end{cases}}}
which is known as the soft thresholding operator
S
γ
(
x
)
=
prox
γ
‖
⋅
‖
1
(
x
)
{\displaystyle S_{\gamma }(x)=\operatorname {prox} _{\gamma \|\cdot \|_{1}}(x)}
.
=== Fixed point iterative schemes ===
To finally solve the lasso problem we consider the fixed point equation shown earlier:
x
∗
=
prox
γ
R
(
x
∗
−
γ
∇
F
(
x
∗
)
)
.
{\displaystyle x^{*}=\operatorname {prox} _{\gamma R}\left(x^{*}-\gamma \nabla F(x^{*})\right).}
Given that we have computed the form of the proximity operator explicitly, then we can define a standard fixed point iteration procedure. Namely, fix some initial
w
0
∈
R
d
{\displaystyle w^{0}\in \mathbb {R} ^{d}}
, and for
k
=
1
,
2
,
…
{\displaystyle k=1,2,\ldots }
define
w
k
+
1
=
S
γ
(
w
k
−
γ
∇
F
(
w
k
)
)
.
{\displaystyle w^{k+1}=S_{\gamma }\left(w^{k}-\gamma \nabla F\left(w^{k}\right)\right).}
Note here the effective trade-off between the empirical error term
F
(
w
)
{\displaystyle F(w)}
and the regularization penalty
R
(
w
)
{\displaystyle R(w)}
. This fixed point method has decoupled the effect of the two different convex functions which comprise the objective function into a gradient descent step (
w
k
−
γ
∇
F
(
w
k
)
{\displaystyle w^{k}-\gamma \nabla F\left(w^{k}\right)}
) and a soft thresholding step (via
S
γ
{\displaystyle S_{\gamma }}
).
Convergence of this fixed point scheme is well-studied in the literature and is guaranteed under appropriate choice of step size
γ
{\displaystyle \gamma }
and loss function (such as the square loss taken here). Accelerated methods were introduced by Nesterov in 1983 which improve the rate of convergence under certain regularity assumptions on
F
{\displaystyle F}
. Such methods have been studied extensively in previous years.
For more general learning problems where the proximity operator cannot be computed explicitly for some regularization term
R
{\displaystyle R}
, such fixed point schemes can still be carried out using approximations to both the gradient and the proximity operator.
== Practical considerations ==
There have been numerous developments within the past decade in convex optimization techniques which have influenced the application of proximal gradient methods in statistical learning theory. Here we survey a few important topics which can greatly improve practical algorithmic performance of these methods.
=== Adaptive step size ===
In the fixed point iteration scheme
w
k
+
1
=
prox
γ
R
(
w
k
−
γ
∇
F
(
w
k
)
)
,
{\displaystyle w^{k+1}=\operatorname {prox} _{\gamma R}\left(w^{k}-\gamma \nabla F\left(w^{k}\right)\right),}
one can allow variable step size
γ
k
{\displaystyle \gamma _{k}}
instead of a constant
γ
{\displaystyle \gamma }
. Numerous adaptive step size schemes have been proposed throughout the literature. Applications of these schemes suggest that these can offer substantial improvement in number of iterations required for fixed point convergence.
=== Elastic net (mixed norm regularization) ===
Elastic net regularization offers an alternative to pure
ℓ
1
{\displaystyle \ell _{1}}
regularization. The problem of lasso (
ℓ
1
{\displaystyle \ell _{1}}
) regularization involves the penalty term
R
(
w
)
=
‖
w
‖
1
{\displaystyle R(w)=\|w\|_{1}}
, which is not strictly convex. Hence, solutions to
min
w
F
(
w
)
+
R
(
w
)
,
{\displaystyle \min _{w}F(w)+R(w),}
where
F
{\displaystyle F}
is some empirical loss function, need not be unique. This is often avoided by the inclusion of an additional strictly convex term, such as an
ℓ
2
{\displaystyle \ell _{2}}
norm regularization penalty. For example, one can consider the problem
min
w
∈
R
d
1
n
∑
i
=
1
n
(
y
i
−
⟨
w
,
x
i
⟩
)
2
+
λ
(
(
1
−
μ
)
‖
w
‖
1
+
μ
‖
w
‖
2
2
)
,
{\displaystyle \min _{w\in \mathbb {R} ^{d}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-\langle w,x_{i}\rangle )^{2}+\lambda \left((1-\mu )\|w\|_{1}+\mu \|w\|_{2}^{2}\right),}
where
x
i
∈
R
d
and
y
i
∈
R
.
{\displaystyle x_{i}\in \mathbb {R} ^{d}{\text{ and }}y_{i}\in \mathbb {R} .}
For
0
<
μ
≤
1
{\displaystyle 0<\mu \leq 1}
the penalty term
λ
(
(
1
−
μ
)
‖
w
‖
1
+
μ
‖
w
‖
2
2
)
{\displaystyle \lambda \left((1-\mu )\|w\|_{1}+\mu \|w\|_{2}^{2}\right)}
is now strictly convex, and hence the minimization problem now admits a unique solution. It has been observed that for sufficiently small
μ
>
0
{\displaystyle \mu >0}
, the additional penalty term
μ
‖
w
‖
2
2
{\displaystyle \mu \|w\|_{2}^{2}}
acts as a preconditioner and can substantially improve convergence while not adversely affecting the sparsity of solutions.
== Exploiting group structure ==
Proximal gradient methods provide a general framework which is applicable to a wide variety of problems in statistical learning theory. Certain problems in learning can often involve data which has additional structure that is known a priori. In the past several years there have been new developments which incorporate information about group structure to provide methods which are tailored to different applications. Here we survey a few such methods.
=== Group lasso ===
Group lasso is a generalization of the lasso method when features are grouped into disjoint blocks. Suppose the features are grouped into blocks
{
w
1
,
…
,
w
G
}
{\displaystyle \{w_{1},\ldots ,w_{G}\}}
. Here we take as a regularization penalty
R
(
w
)
=
∑
g
=
1
G
‖
w
g
‖
2
,
{\displaystyle R(w)=\sum _{g=1}^{G}\|w_{g}\|_{2},}
which is the sum of the
ℓ
2
{\displaystyle \ell _{2}}
norm on corresponding feature vectors for the different groups. A similar proximity operator analysis as above can be used to compute the proximity operator for this penalty. Where the lasso penalty has a proximity operator which is soft thresholding on each individual component, the proximity operator for the group lasso is soft thresholding on each group. For the group
w
g
{\displaystyle w_{g}}
we have that proximity operator of
λ
γ
(
∑
g
=
1
G
‖
w
g
‖
2
)
{\displaystyle \lambda \gamma \left(\sum _{g=1}^{G}\|w_{g}\|_{2}\right)}
is given by
S
~
λ
γ
(
w
g
)
=
{
w
g
−
λ
γ
w
g
‖
w
g
‖
2
,
‖
w
g
‖
2
>
λ
γ
0
,
‖
w
g
‖
2
≤
λ
γ
{\displaystyle {\widetilde {S}}_{\lambda \gamma }(w_{g})={\begin{cases}w_{g}-\lambda \gamma {\frac {w_{g}}{\|w_{g}\|_{2}}},&\|w_{g}\|_{2}>\lambda \gamma \\0,&\|w_{g}\|_{2}\leq \lambda \gamma \end{cases}}}
where
w
g
{\displaystyle w_{g}}
is the
g
{\displaystyle g}
th group.
In contrast to lasso, the derivation of the proximity operator for group lasso relies on the Moreau decomposition. Here the proximity operator of the conjugate of the group lasso penalty becomes a projection onto the ball of a dual norm.
=== Other group structures ===
In contrast to the group lasso problem, where features are grouped into disjoint blocks, it may be the case that grouped features are overlapping or have a nested structure. Such generalizations of group lasso have been considered in a variety of contexts. For overlapping groups one common approach is known as latent group lasso which introduces latent variables to account for overlap. Nested group structures are studied in hierarchical structure prediction and with directed acyclic graphs.
== See also ==
Convex analysis
Proximal gradient method
Regularization
Statistical learning theory
== References == | Wikipedia/Proximal_gradient_methods_for_learning |
In mathematics (including combinatorics, linear algebra, and dynamical systems), a linear recurrence with constant coefficients: ch. 17 : ch. 10 (also known as a linear recurrence relation or linear difference equation) sets equal to 0 a polynomial that is linear in the various iterates of a variable—that is, in the values of the elements of a sequence. The polynomial's linearity means that each of its terms has degree 0 or 1. A linear recurrence denotes the evolution of some variable over time, with the current time period or discrete moment in time denoted as t, one period earlier denoted as t − 1, one period later as t + 1, etc.
The solution of such an equation is a function of t, and not of any iterate values, giving the value of the iterate at any time. To find the solution it is necessary to know the specific values (known as initial conditions) of n of the iterates, and normally these are the n iterates that are oldest. The equation or its variable is said to be stable if from any set of initial conditions the variable's limit as time goes to infinity exists; this limit is called the steady state.
Difference equations are used in a variety of contexts, such as in economics to model the evolution through time of variables such as gross domestic product, the inflation rate, the exchange rate, etc. They are used in modeling such time series because values of these variables are only measured at discrete intervals. In econometric applications, linear difference equations are modeled with stochastic terms in the form of autoregressive (AR) models and in models such as vector autoregression (VAR) and autoregressive moving average (ARMA) models that combine AR with other features.
== Definitions ==
A linear recurrence with constant coefficients is an equation of the following form, written in terms of parameters a1, ..., an and b:
{\displaystyle y_{t}=a_{1}y_{t-1}+\cdots +a_{n}y_{t-n}+b,}
or equivalently as
{\displaystyle y_{t+n}=a_{1}y_{t+n-1}+\cdots +a_{n}y_{t}+b.}
The positive integer n is called the order of the recurrence and denotes the longest time lag between iterates. The equation is called homogeneous if b = 0 and nonhomogeneous if b ≠ 0.
If the equation is homogeneous, the coefficients determine the characteristic polynomial (also "auxiliary polynomial" or "companion polynomial")
{\displaystyle p(\lambda )=\lambda ^{n}-a_{1}\lambda ^{n-1}-a_{2}\lambda ^{n-2}-\cdots -a_{n}}
whose roots play a crucial role in finding and understanding the sequences satisfying the recurrence.
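As a concrete illustration, the characteristic roots can be computed numerically; a minimal Python sketch using NumPy (the Fibonacci recurrence below is just an example):

```python
import numpy as np

def characteristic_roots(a):
    """Roots of p(lam) = lam^n - a[0]*lam^(n-1) - ... - a[n-1],
    where a = [a_1, ..., a_n] are the recurrence coefficients."""
    # numpy.roots expects coefficients ordered from highest to lowest degree.
    return np.roots([1.0] + [-c for c in a])

# Fibonacci recurrence y_t = y_{t-1} + y_{t-2}: p(lam) = lam^2 - lam - 1.
roots = sorted(characteristic_roots([1, 1]))
print(roots)  # roots near -0.618 and 1.618 (the golden ratio)
```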
== Conversion to homogeneous form ==
If b ≠ 0, the equation
{\displaystyle y_{t}=a_{1}y_{t-1}+\cdots +a_{n}y_{t-n}+b}
is said to be nonhomogeneous. To solve this equation it is convenient to convert it to homogeneous form, with no constant term. This is done by first finding the equation's steady state value—a value y* such that, if n successive iterates all had this value, so would all future values. This value is found by setting all values of y equal to y* in the difference equation, and solving, thus obtaining
{\displaystyle y^{*}={\frac {b}{1-a_{1}-\cdots -a_{n}}}}
assuming the denominator is not 0. If it is zero, the steady state does not exist.
Given the steady state, the difference equation can be rewritten in terms of deviations of the iterates from the steady state, as
{\displaystyle \left(y_{t}-y^{*}\right)=a_{1}\left(y_{t-1}-y^{*}\right)+\cdots +a_{n}\left(y_{t-n}-y^{*}\right)}
which has no constant term, and which can be written more succinctly as
{\displaystyle x_{t}=a_{1}x_{t-1}+\cdots +a_{n}x_{t-n}}
where x equals y − y*. This is the homogeneous form.
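As a numeric sanity check of the steady-state computation, one can verify that starting every iterate at y* keeps the sequence there; a sketch with arbitrarily chosen coefficients:

```python
def steady_state(a, b):
    """y* = b / (1 - a_1 - ... - a_n), assuming the denominator is nonzero."""
    denom = 1.0 - sum(a)
    if denom == 0:
        raise ValueError("no steady state: 1 - a_1 - ... - a_n is zero")
    return b / denom

# Example: y_t = 0.5*y_{t-1} + 0.2*y_{t-2} + 3, so y* = 3 / (1 - 0.7) = 10.
a, b = [0.5, 0.2], 3.0
ys = steady_state(a, b)
print(ys)  # about 10 (floating-point roundoff aside)

# Iterating the recurrence from (y*, y*) stays at y*, confirming that the
# deviation form x_t = y_t - y* has no constant term.
y = [ys, ys]
for _ in range(5):
    y.append(a[0] * y[-1] + a[1] * y[-2] + b)
print(y[-1])
```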
If there is no steady state, the difference equation
{\displaystyle y_{t}=a_{1}y_{t-1}+\cdots +a_{n}y_{t-n}+b}
can be combined with its equivalent form
{\displaystyle y_{t-1}=a_{1}y_{t-2}+\cdots +a_{n}y_{t-(n+1)}+b}
to obtain (by solving both for b)
{\displaystyle y_{t}-a_{1}y_{t-1}-\cdots -a_{n}y_{t-n}=y_{t-1}-a_{1}y_{t-2}-\cdots -a_{n}y_{t-(n+1)}}
in which like terms can be combined to give a homogeneous equation of one order higher than the original.
== Solution example for small orders ==
The roots of the characteristic polynomial play a crucial role in finding and understanding the sequences satisfying the recurrence. If there are d distinct roots r1, r2, ..., rd, then each solution to the recurrence takes the form
{\displaystyle a_{n}=k_{1}r_{1}^{n}+k_{2}r_{2}^{n}+\cdots +k_{d}r_{d}^{n},}
where the coefficients ki are determined in order to fit the initial conditions of the recurrence. When the same roots occur multiple times, the terms in this formula corresponding to the second and later occurrences of the same root are multiplied by increasing powers of n. For instance, if the characteristic polynomial can be factored as (x − r)^3, with the same root r occurring three times, then the solution would take the form
{\displaystyle a_{n}=k_{1}r^{n}+k_{2}nr^{n}+k_{3}n^{2}r^{n}.}
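A quick numeric check of the distinct-root formula on the Fibonacci recurrence a_n = a_{n-1} + a_{n-2}; the coefficients below are the standard Binet values fitted to a_0 = 0, a_1 = 1:

```python
import math

phi = (1 + math.sqrt(5)) / 2   # root r1
psi = (1 - math.sqrt(5)) / 2   # root r2
k1, k2 = 1 / math.sqrt(5), -1 / math.sqrt(5)  # fit a_0 = 0, a_1 = 1

def closed_form(n):
    return k1 * phi ** n + k2 * psi ** n

# Compare against direct iteration of the recurrence.
a = [0, 1]
for n in range(2, 20):
    a.append(a[-1] + a[-2])
print([round(closed_form(n)) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```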
=== Order 1 ===
For order 1, the recurrence
{\displaystyle a_{n}=ra_{n-1}}
has the solution a_n = r^n with a_0 = 1, and the most general solution is a_n = kr^n with a_0 = k. The characteristic polynomial equated to zero (the characteristic equation) is simply t − r = 0.
=== Order 2 ===
Solutions to such recurrence relations of higher order are found by systematic means, often using the fact that a_n = r^n is a solution for the recurrence exactly when t = r is a root of the characteristic polynomial. This can be approached directly or using generating functions (formal power series) or matrices.
Consider, for example, a recurrence relation of the form
{\displaystyle a_{n}=Aa_{n-1}+Ba_{n-2}.}
When does it have a solution of the same general form as a_n = r^n? Substituting this guess (ansatz) in the recurrence relation, we find that
{\displaystyle r^{n}=Ar^{n-1}+Br^{n-2}}
must be true for all n > 1.
Dividing through by r^{n−2}, we get that all these equations reduce to the same thing:
{\displaystyle {\begin{aligned}r^{2}&=Ar+B,\\r^{2}-Ar-B&=0,\end{aligned}}}
which is the characteristic equation of the recurrence relation. Solve for r to obtain the two roots λ1, λ2: these roots are known as the characteristic roots or eigenvalues of the characteristic equation. Different solutions are obtained depending on the nature of the roots: If these roots are distinct, we have the general solution
{\displaystyle a_{n}=C\lambda _{1}^{n}+D\lambda _{2}^{n}}
while if they are identical (when A^2 + 4B = 0), we have
{\displaystyle a_{n}=C\lambda ^{n}+Dn\lambda ^{n}}
This is the most general solution; the two constants C and D can be chosen based on two given initial conditions a0 and a1 to produce a specific solution.
In the case of complex eigenvalues (which also gives rise to complex values for the solution parameters C and D), the use of complex numbers can be eliminated by rewriting the solution in trigonometric form. In this case we can write the eigenvalues as
{\displaystyle \lambda _{1},\lambda _{2}=\alpha \pm \beta i.}
Then it can be shown that
{\displaystyle a_{n}=C\lambda _{1}^{n}+D\lambda _{2}^{n}}
can be rewritten as: 576–585
{\displaystyle a_{n}=2M^{n}\left(E\cos(\theta n)+F\sin(\theta n)\right)=2GM^{n}\cos(\theta n-\delta ),}
where
{\displaystyle {\begin{array}{lcl}M={\sqrt {\alpha ^{2}+\beta ^{2}}}&\cos(\theta )={\tfrac {\alpha }{M}}&\sin(\theta )={\tfrac {\beta }{M}}\\C,D=E\mp Fi&&\\G={\sqrt {E^{2}+F^{2}}}&\cos(\delta )={\tfrac {E}{G}}&\sin(\delta )={\tfrac {F}{G}}\end{array}}}
Here E and F (or equivalently, G and δ) are real constants which depend on the initial conditions.
λ
1
+
λ
2
=
2
α
=
A
,
{\displaystyle \lambda _{1}+\lambda _{2}=2\alpha =A,}
λ
1
⋅
λ
2
=
α
2
+
β
2
=
−
B
,
{\displaystyle \lambda _{1}\cdot \lambda _{2}=\alpha ^{2}+\beta ^{2}=-B,}
one may simplify the solution given above as
{\displaystyle a_{n}=(-B)^{\frac {n}{2}}\left(E\cos(\theta n)+F\sin(\theta n)\right),}
where a1 and a2 are the initial conditions and
{\displaystyle {\begin{aligned}E&={\frac {-Aa_{1}+a_{2}}{B}}\\F&=-i{\frac {A^{2}a_{1}-Aa_{2}+2a_{1}B}{B{\sqrt {A^{2}+4B}}}}\\\theta &=\arccos \left({\frac {A}{2{\sqrt {-B}}}}\right)\end{aligned}}}
In this way there is no need to solve for λ1 and λ2.
In all cases—real distinct eigenvalues, real duplicated eigenvalues, and complex conjugate eigenvalues—the equation is stable (that is, the variable a converges to a fixed value [specifically, zero]) if and only if both eigenvalues are smaller than one in absolute value. In this second-order case, this condition on the eigenvalues can be shown to be equivalent to |A| < 1 − B < 2, which is equivalent to |B| < 1 and |A| < 1 − B.
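The equivalence between the coefficient condition and the root-modulus condition is easy to spot-check numerically; a sketch with illustrative coefficient pairs:

```python
import numpy as np

def stable_by_condition(A, B):
    # Second-order stability criterion: |A| < 1 - B < 2.
    return abs(A) < 1 - B < 2

def stable_by_roots(A, B):
    # Direct check: both roots of r^2 - A*r - B inside the unit circle.
    return all(abs(r) < 1 for r in np.roots([1, -A, -B]))

# Cases cover real distinct, explosive real, complex conjugate, and
# explosive complex roots, respectively.
for A, B in [(0.5, 0.3), (1.2, 0.5), (0.2, -0.9), (0.0, -1.1)]:
    assert stable_by_condition(A, B) == stable_by_roots(A, B)
print("conditions agree")
```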
== General solution ==
=== Characteristic polynomial and roots ===
Solving the homogeneous equation
{\displaystyle x_{t}=a_{1}x_{t-1}+\cdots +a_{n}x_{t-n}}
involves first solving its characteristic polynomial
{\displaystyle \lambda ^{n}=a_{1}\lambda ^{n-1}+\cdots +a_{n-2}\lambda ^{2}+a_{n-1}\lambda +a_{n}}
for its characteristic roots λ1, ..., λn. These roots can be solved for algebraically if n ≤ 4, but not necessarily otherwise. If the solution is to be used numerically, all the roots of this characteristic equation can be found by numerical methods. However, for use in a theoretical context it may be that the only information required about the roots is whether any of them are greater than or equal to 1 in absolute value.
It may be that all the roots are real or instead there may be some that are complex numbers. In the latter case, all the complex roots come in complex conjugate pairs.
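The conjugate pairing of non-real roots can be observed numerically; a sketch for an arbitrary third-order example:

```python
import numpy as np

# y_t = 0.5*y_{t-1} - 0.25*y_{t-2} + 0.1*y_{t-3}
a = [0.5, -0.25, 0.1]
roots = np.roots([1.0] + [-c for c in a])
print(roots)

complex_roots = [r for r in roots if abs(r.imag) > 1e-12]
# Non-real roots of a real polynomial come in conjugate pairs, so their
# count is even and each one's conjugate also appears among the roots.
assert len(complex_roots) % 2 == 0
for r in complex_roots:
    assert any(abs(r - s.conjugate()) < 1e-9 for s in complex_roots)
print(max(abs(r) for r in roots))  # the largest modulus governs long-run behavior
```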
=== Solution with distinct characteristic roots ===
If all the characteristic roots are distinct, the solution of the homogeneous linear recurrence
{\displaystyle x_{t}=a_{1}x_{t-1}+\cdots +a_{n}x_{t-n}}
can be written in terms of the characteristic roots as
{\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{n}\lambda _{n}^{t}}
where the coefficients ci can be found by invoking the initial conditions. Specifically, for each time period for which an iterate value is known, this value and its corresponding value of t can be substituted into the solution equation to obtain a linear equation in the n as-yet-unknown parameters; n such equations, one for each initial condition, can be solved simultaneously for the n parameter values. If all characteristic roots are real, then all the coefficient values ci will also be real; but with non-real complex roots, in general some of these coefficients will also be non-real.
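These n equations for the coefficients form a Vandermonde system in the characteristic roots; a sketch of the procedure for an arbitrary third-order example with distinct roots:

```python
import numpy as np

a = [2.0, 1.0, -2.0]   # x_t = 2*x_{t-1} + x_{t-2} - 2*x_{t-3}; roots 1, -1, 2
x0 = [1.0, 0.0, 4.0]   # initial conditions x_0, x_1, x_2

lam = np.roots([1.0] + [-c for c in a])                       # characteristic roots
V = np.array([[l ** t for l in lam] for t in range(len(a))])  # Vandermonde rows
c = np.linalg.solve(V, x0)                                    # coefficients c_1..c_n

def closed_form(t):
    return (c * lam ** t).sum().real

# Check the closed form against direct iteration.
seq = list(x0)
for _ in range(10):
    seq.append(sum(ai * seq[-1 - i] for i, ai in enumerate(a)))
print(all(abs(closed_form(t) - seq[t]) < 1e-6 for t in range(len(seq))))  # True
```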
==== Converting complex solution to trigonometric form ====
If there are complex roots, they come in conjugate pairs and so do the complex terms in the solution equation. If two of these complex terms are c_jλ_j^t and c_{j+1}λ_{j+1}^t, the roots λ_j can be written as
{\displaystyle \lambda _{j},\lambda _{j+1}=\alpha \pm \beta i=M\left({\frac {\alpha }{M}}\pm {\frac {\beta }{M}}i\right)}
where i is the imaginary unit and M is the modulus of the roots:
{\displaystyle M={\sqrt {\alpha ^{2}+\beta ^{2}}}.}
Then the two complex terms in the solution equation can be written as
{\displaystyle {\begin{aligned}c_{j}\lambda _{j}^{t}+c_{j+1}\lambda _{j+1}^{t}&=M^{t}\left(c_{j}\left({\frac {\alpha }{M}}+{\frac {\beta }{M}}i\right)^{t}+c_{j+1}\left({\frac {\alpha }{M}}-{\frac {\beta }{M}}i\right)^{t}\right)\\[6pt]&=M^{t}\left(c_{j}\left(\cos \theta +i\sin \theta \right)^{t}+c_{j+1}\left(\cos \theta -i\sin \theta \right)^{t}\right)\\[6pt]&=M^{t}{\bigl (}c_{j}\left(\cos \theta t+i\sin \theta t\right)+c_{j+1}\left(\cos \theta t-i\sin \theta t\right){\bigr )}\end{aligned}}}
where θ is the angle whose cosine is α/M and whose sine is β/M; the last equality here made use of de Moivre's formula.
Now the process of finding the coefficients cj and cj+1 guarantees that they are also complex conjugates, which can be written as γ ± δi. Using this in the last equation gives this expression for the two complex terms in the solution equation:
{\displaystyle 2M^{t}\left(\gamma \cos \theta t-\delta \sin \theta t\right)}
which can also be written as
{\displaystyle 2{\sqrt {\gamma ^{2}+\delta ^{2}}}M^{t}\cos(\theta t+\psi )}
where ψ is the angle whose cosine is γ/√(γ² + δ²) and whose sine is δ/√(γ² + δ²).
==== Cyclicity ====
Depending on the initial conditions, even with all roots real the iterates can experience a transitory tendency to go above and below the steady state value. But true cyclicity involves a permanent tendency to fluctuate, and this occurs if there is at least one pair of complex conjugate characteristic roots. This can be seen in the trigonometric form of their contribution to the solution equation, involving cos θt and sin θt.
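The onset of permanent fluctuation can be checked numerically. The sketch below uses the hypothetical recurrence x_t = x_{t−1} − x_{t−2}, whose conjugate roots have modulus M = 1 and angle θ = π/3, so the iterates oscillate forever with period 2π/θ = 6:

```python
import cmath

# Characteristic equation of x_t = x_{t-1} - x_{t-2}: lambda^2 - lambda + 1 = 0.
# (Hypothetical coefficients chosen so the roots form a complex conjugate pair.)
a, b, c = 1.0, -1.0, 1.0
disc = cmath.sqrt(b * b - 4 * a * c)   # purely imaginary: roots are complex
lam1 = (-b + disc) / (2 * a)           # (1 + i*sqrt(3)) / 2

M = abs(lam1)                          # modulus of the conjugate pair (here 1)
theta = cmath.phase(lam1)              # angle with cos(theta) = alpha/M (here pi/3)

# Iterate: with modulus exactly 1 the fluctuations persist indefinitely,
# repeating with period 2*pi/theta = 6.
seq = [0.0, 1.0]
for _ in range(2, 14):
    seq.append(seq[-1] - seq[-2])
period_ok = all(abs(seq[t + 6] - seq[t]) < 1e-9 for t in range(8))
```

The sequence cycles through 0, 1, 1, 0, −1, −1 without damping or growth, exactly the constant-amplitude case described above.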
=== Solution with duplicate characteristic roots ===
In the second-order case, if the two roots are identical (λ1 = λ2), they can both be denoted as λ and a solution may be of the form
{\displaystyle x_{t}=c_{1}\lambda ^{t}+c_{2}t\lambda ^{t}.}
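The c₁λ^t + c₂tλ^t form can be verified on a hypothetical repeated-root example; the sketch below fits both coefficients from the initial conditions and compares against direct iteration:

```python
# Repeated-root example: x_t = 4 x_{t-1} - 4 x_{t-2} has characteristic
# equation (lambda - 2)^2 = 0, i.e. lambda = 2 with multiplicity two.
# (Hypothetical numbers chosen for illustration.)
lam = 2.0
x0, x1 = 1.0, 4.0

# Fit c1 * lam^t + c2 * t * lam^t to the two initial conditions:
#   t = 0:  c1                 = x0
#   t = 1:  c1*lam + c2*lam    = x1
c1 = x0
c2 = (x1 - c1 * lam) / lam

seq = [x0, x1]
for _ in range(2, 10):
    seq.append(4 * seq[-1] - 4 * seq[-2])

ok = all(abs((c1 + c2 * t) * lam ** t - seq[t]) < 1e-9 for t in range(10))
```

With these initial conditions c₁ = c₂ = 1, so x_t = (1 + t)2^t.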
=== Solution by conversion to matrix form ===
An alternative solution method involves converting the nth-order difference equation to a first-order matrix difference equation. This is accomplished by writing w_{1,t} = y_t, w_{2,t} = y_{t−1} = w_{1,t−1}, w_{3,t} = y_{t−2} = w_{2,t−1}, and so on. Then the original single nth-order equation
{\displaystyle y_{t}=a_{1}y_{t-1}+a_{2}y_{t-2}+\cdots +a_{n}y_{t-n}+b}
can be replaced by the following n first-order equations:
{\displaystyle {\begin{aligned}w_{1,t}&=a_{1}w_{1,t-1}+a_{2}w_{2,t-1}+\cdots +a_{n}w_{n,t-1}+b\\w_{2,t}&=w_{1,t-1}\\&\,\,\,\vdots \\w_{n,t}&=w_{n-1,t-1}.\end{aligned}}}
Defining the vector wi as
{\displaystyle \mathbf {w} _{i}={\begin{bmatrix}w_{1,i}\\w_{2,i}\\\vdots \\w_{n,i}\end{bmatrix}}}
this can be put in matrix form as
{\displaystyle \mathbf {w} _{t}=\mathbf {A} \mathbf {w} _{t-1}+\mathbf {b} }
Here A is an n × n matrix in which the first row contains a1, ..., an and all other rows have a single 1 with all other elements being 0, and b is a column vector with first element b and with the rest of its elements being 0.
This matrix equation can be solved using the methods in the article Matrix difference equation.
In the homogeneous case, y_i is a para-permanent of a lower triangular matrix.
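The equivalence between the scalar recurrence and its first-order matrix form can be checked directly. In this sketch (hypothetical coefficients), the companion matrix A and forcing vector b are built as described, with a₁, a₂ in the first row and a shifted identity below, and iterated alongside the scalar equation:

```python
# Companion-matrix form of y_t = a1*y_{t-1} + a2*y_{t-2} + b
# (hypothetical coefficients chosen for illustration).
a1, a2, b = 0.5, 0.25, 1.0

A = [[a1, a2],     # first row holds the recurrence coefficients
     [1.0, 0.0]]   # shifted identity: w_{2,t} = w_{1,t-1}
bvec = [b, 0.0]    # forcing term enters only the first component

def step(w):
    # w_t = A w_{t-1} + b
    return [A[0][0] * w[0] + A[0][1] * w[1] + bvec[0],
            A[1][0] * w[0] + A[1][1] * w[1] + bvec[1]]

# Compare with iterating the scalar recurrence directly.
y = [0.0, 0.0]          # y_0 = y_1 = 0 (hypothetical start)
w = [y[1], y[0]]        # w_t stacks (y_t, y_{t-1})
for t in range(2, 12):
    y.append(a1 * y[-1] + a2 * y[-2] + b)
    w = step(w)
ok = abs(w[0] - y[-1]) < 1e-9 and abs(w[1] - y[-2]) < 1e-9
```

Both iterations produce identical trajectories, as the construction guarantees.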
=== Solution using generating functions ===
The recurrence
{\displaystyle y_{t}=a_{1}y_{t-1}+\cdots +a_{n}y_{t-n}+b,}
can be solved using the theory of generating functions. First, we write {\textstyle Y(x)=\sum _{t\geq 0}y_{t}x^{t}}. The recurrence is then equivalent to the following generating function equation:
{\displaystyle Y(x)=a_{1}xY(x)+a_{2}x^{2}Y(x)+\cdots +a_{n}x^{n}Y(x)+{\frac {b}{1-x}}+p(x)}
where p(x) is a polynomial of degree at most n − 1 correcting the initial terms.
From this equation we can solve to get
{\displaystyle Y(x)=\left({\frac {b}{1-x}}+p(x)\right)\cdot {\frac {1}{1-a_{1}x-a_{2}x^{2}-\cdots -a_{n}x^{n}}}.}
In other words, not worrying about the exact coefficients, Y(x) can be expressed as a rational function
{\displaystyle Y(x)={\frac {f(x)}{g(x)}}.}
The closed form can then be derived via partial fraction decomposition. Specifically, if the generating function is written as
{\displaystyle {\frac {f(x)}{g(x)}}=\sum _{i}{\frac {f_{i}(x)}{(x-r_{i})^{m_{i}}}}}
then the polynomial p(x) determines the initial set of corrections z(n), the denominator (x − r_i)^{m_i} determines the exponential term r_i^n, and the degree m_i together with the numerator f_i(x) determine the polynomial coefficient k_i(n).
=== Relation to solution to differential equations ===
The method for solving linear differential equations is similar to the method above: the "intelligent guess" (ansatz) for linear differential equations with constant coefficients is e^{λx}, where λ is a complex number that is determined by substituting the guess into the differential equation.
This is not a coincidence. Considering the Taylor series of the solution to a linear differential equation:
{\displaystyle \sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}}
it can be seen that the coefficients of the series are given by the n-th derivative of f(x) evaluated at the point a. The differential equation provides a linear difference equation relating these coefficients.
This equivalence can be used to quickly solve for the recurrence relationship for the coefficients in the power series solution of a linear differential equation.
The rule of thumb (for equations in which the polynomial multiplying the first term is non-zero at zero) is that:
{\displaystyle y^{[k]}\to f[n+k]}
and more generally
{\displaystyle x^{m}y^{[k]}\to n(n-1)\cdots (n-m+1)f[n+k-m]}
Example: The recurrence relationship for the Taylor series coefficients of the equation:
{\displaystyle (x^{2}+3x-4)y^{[3]}-(3x+1)y^{[2]}+2y=0}
is given by
{\displaystyle n(n-1)f[n+1]+3nf[n+2]-4f[n+3]-3nf[n+1]-f[n+2]+2f[n]=0}
or
{\displaystyle -4f[n+3]+(3n-1)f[n+2]+n(n-4)f[n+1]+2f[n]=0.}
This example shows how problems generally solved using the power series solution method taught in normal differential equation classes can be solved in a much easier way.
Example: The differential equation
{\displaystyle ay''+by'+cy=0}
has solution
{\displaystyle y=e^{\lambda x},}
where λ is a root of the characteristic polynomial aλ² + bλ + c = 0.
The conversion of the differential equation to a difference equation of the Taylor coefficients is
{\displaystyle af[n+2]+bf[n+1]+cf[n]=0.}
It is easy to see that the n-th derivative of e^{λx} evaluated at 0 is λ^n, and this sequence satisfies the difference equation exactly when aλ² + bλ + c = 0.
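The correspondence can be checked numerically: for hypothetical coefficients a, b, c, a root λ of the characteristic polynomial yields derivative values f[n] = λ^n that satisfy the Taylor-coefficient difference equation:

```python
# For y'' - 3y' + 2y = 0 (a=1, b=-3, c=2; hypothetical coefficients), the
# Taylor-coefficient relation is a*f[n+2] + b*f[n+1] + c*f[n] = 0.  A root
# lambda of a*lam^2 + b*lam + c = 0 gives f[n] = lambda^n, since the n-th
# derivative of e^(lambda*x) at 0 is lambda^n.
a, b, c = 1.0, -3.0, 2.0
lam = 2.0
assert abs(a * lam ** 2 + b * lam + c) < 1e-9   # lambda is a characteristic root

f = [lam ** n for n in range(10)]               # n-th derivatives at 0
residuals = [a * f[n + 2] + b * f[n + 1] + c * f[n] for n in range(8)]
ok = all(abs(r) < 1e-9 for r in residuals)
```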
==== Solving with z-transforms ====
Certain difference equations - in particular, linear constant coefficient difference equations - can be solved using z-transforms. The z-transforms are a class of integral transforms that lead to more convenient algebraic manipulations and more straightforward solutions. There are cases in which obtaining a direct solution would be all but impossible, yet solving the problem via a thoughtfully chosen integral transform is straightforward.
== Stability ==
In the solution equation
{\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{n}\lambda _{n}^{t},}
a term with real characteristic roots converges to 0 as t grows indefinitely large if the absolute value of the characteristic root is less than 1. If the absolute value equals 1, the term will stay constant as t grows if the root is +1 but will fluctuate between two values if the root is −1. If the absolute value of the root is greater than 1 the term will become larger and larger over time. A pair of terms with complex conjugate characteristic roots will converge to 0 with dampening fluctuations if the absolute value of the modulus M of the roots is less than 1; if the modulus equals 1 then constant amplitude fluctuations in the combined terms will persist; and if the modulus is greater than 1, the combined terms will show fluctuations of ever-increasing magnitude.
Thus the evolving variable x will converge to 0 if all of the characteristic roots have magnitude less than 1.
If the largest root has absolute value 1, neither convergence to 0 nor divergence to infinity will occur. If all roots with magnitude 1 are real and positive, x will converge to the sum of their constant terms ci; unlike in the stable case, this converged value depends on the initial conditions; different starting points lead to different points in the long run. If any root is −1, its term will contribute permanent fluctuations between two values. If any of the unit-magnitude roots are complex then constant-amplitude fluctuations of x will persist.
Finally, if any characteristic root has magnitude greater than 1, then x will diverge to infinity as time goes to infinity, or will fluctuate between increasingly large positive and negative values.
A theorem of Issai Schur states that all roots have magnitude less than 1 (the stable case) if and only if the determinants in a particular sequence are all positive.
If a non-homogeneous linear difference equation has been converted to homogeneous form which has been analyzed as above, then the stability and cyclicality properties of the original non-homogeneous equation will be the same as those of the derived homogeneous form, with convergence in the stable case being to the steady-state value y* instead of to 0.
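For the second-order case, the stability classification above can be sketched directly in code: compute the roots of the characteristic equation and compare the largest magnitude to 1 (coefficients below are hypothetical):

```python
import cmath

def classify(a1, a2):
    """Classify x_t = a1*x_{t-1} + a2*x_{t-2} by the roots of its
    characteristic equation lambda^2 - a1*lambda - a2 = 0."""
    disc = cmath.sqrt(a1 * a1 + 4 * a2)
    roots = [(a1 + disc) / 2, (a1 - disc) / 2]
    r = max(abs(lam) for lam in roots)
    if r < 1:
        return "stable"        # every term converges to 0
    if r > 1:
        return "divergent"     # some term grows without bound
    return "borderline"        # a unit-magnitude root: neither occurs

stable = classify(0.5, 0.25)       # roots approx. 0.81 and -0.31
divergent = classify(1.0, 1.0)     # Fibonacci-type: golden-ratio root > 1
borderline = classify(1.0, 0.0)    # root +1: a constant term persists
```

The same magnitude test extends to higher orders once the roots of the degree-n characteristic polynomial are computed.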
== See also ==
Recurrence relation
Linear differential equation
Skolem–Mahler–Lech theorem
Skolem problem
== References == | Wikipedia/Linear_difference_equation |
The Harrow–Hassidim–Lloyd (HHL) algorithm is a quantum algorithm for numerically solving a system of linear equations, designed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations.
The algorithm is one of the main fundamental algorithms expected to provide a speedup over their classical counterparts, along with Shor's factoring algorithm and Grover's search algorithm. Provided the linear system is sparse and has a low condition number κ, and that the user is interested in the result of a scalar measurement on the solution vector, instead of the values of the solution vector itself, then the algorithm has a runtime of O(log(N)κ²), where N is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in O(Nκ) (or O(N√κ) for positive semidefinite matrices).
An implementation of the quantum algorithm for linear systems of equations was first demonstrated in 2013 by three independent publications. The demonstrations consisted of simple linear equations on specially designed quantum devices. The first demonstration of a general-purpose version of the algorithm appeared in 2018.
Due to the prevalence of linear systems in virtually all areas of science and engineering, the quantum algorithm for linear systems of equations has the potential for widespread applicability.
== Procedure ==
The HHL algorithm tackles the following problem: given an N × N Hermitian matrix A and a unit vector b ∈ ℝ^N, prepare the quantum state |x⟩ corresponding to the vector x ∈ ℝ^N that solves the linear system Ax = b. More precisely, the goal is to prepare a state |x⟩ whose amplitudes equal the elements of x. This means, in particular, that the algorithm cannot be used to efficiently retrieve the vector x itself. It does, however, allow one to efficiently compute expectation values of the form ⟨x|M|x⟩ for some observable M.
First, the algorithm represents the vector b as a quantum state of the form:
{\displaystyle |b\rangle =\sum _{i=1}^{N}b_{i}|i\rangle .}
Next, Hamiltonian simulation techniques are used to apply the unitary operator e^{iAt} to |b⟩ for a superposition of different times t. The ability to decompose |b⟩ into the eigenbasis of A and to find the corresponding eigenvalues λ_j is facilitated by the use of quantum phase estimation.
The state of the system after this decomposition is approximately:
{\displaystyle \sum _{j=1}^{N}\beta _{j}|u_{j}\rangle |\lambda _{j}\rangle ,}
where |u_j⟩ is the eigenvector basis of A, and {\displaystyle |b\rangle =\sum _{j=1}^{N}\beta _{j}|u_{j}\rangle }.
We would then like to perform the linear map taking |λ_j⟩ to Cλ_j^{−1}|λ_j⟩, where C is a normalizing constant. The linear mapping operation is not unitary and thus will require a number of repetitions, as it has some probability of failing. After it succeeds, we uncompute the |λ_j⟩ register and are left with a state proportional to:
{\displaystyle \sum _{j=1}^{N}\beta _{j}\lambda _{j}^{-1}|u_{j}\rangle =A^{-1}|b\rangle =|x\rangle ,}
where |x⟩ is a quantum-mechanical representation of the desired solution vector x. To read out all components of x would require the procedure be repeated at least N times. However, it is often the case that one is not interested in x itself, but rather some expectation value of a linear operator M acting on x. By mapping M to a quantum-mechanical operator and performing the quantum measurement corresponding to M, we obtain an estimate of the expectation value ⟨x|M|x⟩. This allows for a wide variety of features of the vector x to be extracted, including normalization, weights in different parts of the state space, and moments, without actually computing all the values of the solution vector x.
== Explanation ==
=== Initialization ===
Firstly, the algorithm requires that the matrix A be Hermitian so that it can be converted into a unitary operator. In the case where A is not Hermitian, define
{\displaystyle \mathbf {C} ={\begin{bmatrix}0&A\\A^{\dagger }&0\end{bmatrix}}.}
As C is Hermitian, the algorithm can now be used to solve {\displaystyle Cy={\begin{bmatrix}b\\0\end{bmatrix}}} to obtain {\displaystyle y={\begin{bmatrix}0\\x\end{bmatrix}}}.
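The Hermitian dilation can be verified directly on a small example. The sketch below (hypothetical real 2 × 2 non-symmetric A, so the dagger is just the transpose; a plain Gaussian-elimination helper stands in for the solver) builds C and checks that solving Cy = (b, 0) returns y = (0, x) with Ax = b:

```python
def solve(M, v):
    """Plain Gauss-Jordan elimination with partial pivoting (real matrices)."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Non-Hermitian (here: non-symmetric real) A, hypothetical numbers.
A = [[1.0, 2.0],
     [0.0, 1.0]]
At = [[A[j][i] for j in range(2)] for i in range(2)]   # A^T = A^dagger for real A

# Dilation C = [[0, A], [A^T, 0]] and right-hand side (b, 0).
C = [[0.0, 0.0] + A[0],
     [0.0, 0.0] + A[1],
     At[0] + [0.0, 0.0],
     At[1] + [0.0, 0.0]]
b = [1.0, 1.0]
y = solve(C, b + [0.0, 0.0])

x = y[2:]   # the bottom block of y carries the solution of A x = b
```

For this A and b the solution is x = (−1, 1), and the top block of y vanishes as claimed.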
Secondly, the algorithm requires an efficient procedure to prepare |b⟩, the quantum representation of b. It is assumed that there exists some linear operator B that can take some arbitrary quantum state |initial⟩ to |b⟩ efficiently, or that this algorithm is a subroutine in a larger algorithm and is given |b⟩ as input. Any error in the preparation of state |b⟩ is ignored.
Finally, the algorithm assumes that the state |ψ₀⟩ can be prepared efficiently, where
{\displaystyle |\psi _{0}\rangle :={\sqrt {2/T}}\sum _{\tau =0}^{T-1}\sin \pi \left({\tfrac {\tau +{\tfrac {1}{2}}}{T}}\right)|\tau \rangle }
for some large T. The coefficients of |ψ₀⟩ are chosen to minimize a certain quadratic loss function which induces error in the U_invert subroutine described below.
=== Hamiltonian simulation ===
Hamiltonian simulation is used to transform the Hermitian matrix A into a unitary operator, which can then be applied at will. This is possible if A is s-sparse and efficiently row computable, meaning it has at most s nonzero entries per row and, given a row index, these entries can be computed in time O(s). Under these assumptions, quantum Hamiltonian simulation allows e^{iAt} to be simulated in time O(log(N)s²t).
=== Uinvert subroutine ===
The key subroutine to the algorithm, denoted U_invert, is defined as follows and incorporates a phase estimation subroutine:
1. Prepare |ψ₀⟩^C on register C.
2. Apply the conditional Hamiltonian evolution ∑_{τ=0}^{T−1} |τ⟩⟨τ|^C ⊗ e^{iAτt₀/T}.
3. Apply the Fourier transform to the register C. Denote the resulting basis states with |k⟩ for k = 0, ..., T − 1. Define λ_k := 2πk/t₀.
4. Adjoin a three-dimensional register S in the state
{\displaystyle |h(\lambda _{k})\rangle ^{S}:={\sqrt {1-f(\lambda _{k})^{2}-g(\lambda _{k})^{2}}}|\mathrm {nothing} \rangle ^{S}+f(\lambda _{k})|\mathrm {well} \rangle ^{S}+g(\lambda _{k})|\mathrm {ill} \rangle ^{S},}
5. Reverse steps 1–3, uncomputing any garbage produced along the way.
The phase estimation procedure in steps 1–3 allows for the estimation of eigenvalues of A up to error ε.
The ancilla register in step 4 is necessary to construct a final state with inverted eigenvalues corresponding to the diagonalized inverse of A. In this register, the functions f and g are called filter functions. The states 'nothing', 'well' and 'ill' are used to instruct the loop body on how to proceed; 'nothing' indicates that the desired matrix inversion has not yet taken place, 'well' indicates that the inversion has taken place and the loop should halt, and 'ill' indicates that part of |b⟩ is in the ill-conditioned subspace of A and the algorithm will not be able to produce the desired inversion. Producing a state proportional to the inverse of A requires 'well' to be measured, after which the overall state of the system collapses to the desired state by the extended Born rule.
=== Main loop ===
The body of the algorithm follows the amplitude amplification procedure: starting with U_invert B|initial⟩, the following operation is repeatedly applied:
{\displaystyle U_{\mathrm {invert} }BR_{\mathrm {init} }B^{\dagger }U_{\mathrm {invert} }^{\dagger }R_{\mathrm {succ} },}
where
{\displaystyle R_{\mathrm {succ} }=I-2|\mathrm {well} \rangle \langle \mathrm {well} |}
and
{\displaystyle R_{\mathrm {init} }=I-2|\mathrm {initial} \rangle \langle \mathrm {initial} |.}
After each repetition, S is measured and will produce a value of 'nothing', 'well', or 'ill' as described above. This loop is repeated until |well⟩ is measured, which occurs with a probability p. Rather than repeating 1/p times to minimize error, amplitude amplification is used to achieve the same error resilience using only O(1/√p) repetitions.
=== Scalar measurement ===
After successfully measuring 'well' on S, the system will be in a state proportional to:
{\displaystyle \sum _{j=1}^{N}\beta _{j}\lambda _{j}^{-1}|u_{j}\rangle =A^{-1}|b\rangle =|x\rangle .}
Finally, we perform the quantum-mechanical operator corresponding to M and obtain an estimate of the value of ⟨x|M|x⟩.
== Run time analysis ==
=== Classical efficiency ===
The best classical algorithm which produces the actual solution vector x is Gaussian elimination, which runs in O(N³) time.
If A is s-sparse and positive semi-definite, then the conjugate gradient method can be used to find the solution vector x in O(Nsκ) time by minimizing the quadratic function |Ax − b|².
When only a summary statistic of the solution vector x is needed, as is the case for the quantum algorithm for linear systems of equations, a classical computer can find an estimate of x†Mx in O(N√κ) time.
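For reference, the conjugate gradient method mentioned above fits in a few lines; this is the textbook classical iteration on a hypothetical symmetric positive-definite system, not a quantum subroutine:

```python
def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    """Textbook conjugate gradient for a symmetric positive-definite A
    (plain Python lists; tol bounds the squared residual norm)."""
    n = len(b)

    def matvec(v):
        return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

    def dot(u, v):
        return sum(a * c for a, c in zip(u, v))

    x = [0.0] * n
    r = b[:]                 # residual b - A x (x starts at 0)
    p = r[:]                 # search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Hypothetical sparse SPD example.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = conjugate_gradient(A, b)
residual = max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i]) for i in range(3))
```

In exact arithmetic CG terminates in at most N iterations; its practical convergence rate degrades as the condition number κ grows, which is the dependence quoted above.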
=== Quantum efficiency ===
The runtime of the quantum algorithm for solving systems of linear equations originally proposed by Harrow et al. was shown to be O(κ² log(N)/ε), where ε > 0 is the error parameter and κ is the condition number of A. This was subsequently improved to O(κ log³(κ) log(N)/ε³) by Andris Ambainis, and a quantum algorithm with runtime polynomial in log(1/ε) was developed by Childs et al. Since the HHL algorithm maintains its logarithmic scaling in N only for sparse or low-rank matrices, Wossnig et al. extended the HHL algorithm based on a quantum singular value estimation technique and provided a linear system algorithm for dense matrices which runs in O(√N log(N)κ²) time, compared to the O(N log(N)κ²) of the standard HHL algorithm.
=== Optimality ===
An important factor in the performance of the matrix inversion algorithm is the condition number κ, which represents the ratio of A's largest and smallest eigenvalues. As the condition number increases, the ease with which the solution vector can be found using gradient descent methods such as the conjugate gradient method decreases, as A becomes closer to a matrix which cannot be inverted and the solution vector becomes less stable. This algorithm assumes that all singular values of the matrix A lie between 1/κ and 1, in which case the claimed run-time proportional to κ² will be achieved. Therefore, the speedup over classical algorithms is increased further when κ is poly(log(N)).
If the run-time of the algorithm were made poly-logarithmic in κ, then problems solvable on n qubits could be solved in poly(n) time, causing the complexity class BQP to be equal to PSPACE.
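For a small symmetric matrix the condition number can be computed directly from the quadratic formula; the sketch below (hypothetical matrices, the second nearly singular) shows how κ blows up as the matrix approaches non-invertibility:

```python
import math

def condition_number_sym2(A):
    """kappa for a symmetric 2x2 matrix: ratio of largest to smallest
    eigenvalue magnitude (eigenvalues via the quadratic formula)."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr - 4 * det)   # always real for symmetric A
    lams = [abs((tr + disc) / 2), abs((tr - disc) / 2)]
    return max(lams) / min(lams)

well = condition_number_sym2([[2.0, 1.0], [1.0, 2.0]])     # eigenvalues 3 and 1
ill = condition_number_sym2([[1.0, 1.0], [1.0, 1.0001]])   # nearly singular
```

The well-conditioned example gives κ = 3, while the nearly singular one gives κ on the order of 10⁴, illustrating why the κ² factor in the runtime matters.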
== Error analysis ==
Performing the Hamiltonian simulation, which is the dominant source of error, is done by simulating e^{iAt}. Assuming that A is s-sparse, this can be done with an error bounded by a constant ε, which will translate to the additive error achieved in the output state |x⟩.
The phase estimation step errs by O(1/t₀) in estimating λ, which translates into a relative error of O(1/(λt₀)) in λ⁻¹. If λ ≥ 1/κ, taking t₀ = O(κ/ε) induces a final error of ε. This requires that the overall run-time efficiency be increased proportional to O(1/ε) to minimize error.
== Experimental realization ==
While there does not yet exist a quantum computer that can truly offer a speedup over a classical computer, implementation of a "proof of concept" remains an important milestone in the development of a new quantum algorithm. Demonstrating the quantum algorithm for linear systems of equations remained a challenge for years after its proposal until 2013 when it was demonstrated by Cai et al., Barz et al. and Pan et al. in parallel.
=== Cai et al. ===
Published in Physical Review Letters 110, 230501 (2013), Cai et al. reported an experimental demonstration of the simplest meaningful instance of this algorithm, that is, solving 2 × 2 linear equations for various input vectors. The quantum circuit is optimized and compiled into a linear optical network with four photonic quantum bits (qubits) and four controlled logic gates, which is used to coherently implement every subroutine for this algorithm. For various input vectors, the quantum computer gives solutions for the linear equations with reasonably high precision, ranging from fidelities of 0.825 to 0.993.
=== Barz et al. ===
On February 5, 2013, Stefanie Barz and co-workers demonstrated the quantum algorithm for linear systems of equations on a photonic quantum computing architecture. This implementation used two consecutive entangling gates on the same pair of polarization-encoded qubits. Two separately controlled NOT gates were realized where the successful operation of the first was heralded by a measurement of two ancillary photons. Barz et al. found that the fidelity in the obtained output state ranged from 64.7% to 98.1% due to the influence of higher-order emissions from spontaneous parametric down-conversion.
=== Pan et al. ===
On February 8, 2013, Pan et al. reported a proof-of-concept experimental demonstration of the quantum algorithm using a 4-qubit nuclear magnetic resonance quantum information processor. The implementation was tested using simple linear systems of only 2 variables. Across three experiments they obtain the solution vector with over 96% fidelity.
=== Wen et al. ===
Another experimental demonstration using NMR for solving an 8 × 8 system was reported by Wen et al. in 2018, using the algorithm developed by Subaşı et al.
== Applications ==
Quantum computers are devices that harness quantum mechanics to perform computations in ways that classical computers cannot. For certain problems, quantum algorithms supply exponential speedups over their classical counterparts, the most famous example being Shor's factoring algorithm. Few such exponential speedups are known, and those that are (such as the use of quantum computers to simulate other quantum systems) have so far found limited practical use due to the current small size of quantum computers. This algorithm provides an exponentially faster method of estimating features of the solution of a set of linear equations, which is a problem ubiquitous in science and engineering, both on its own and as a subroutine in more complex problems.
=== Electromagnetic scattering ===
Clader et al. provided a preconditioned version of the linear systems algorithm that offered two advances. First, they demonstrated how a preconditioner could be included within the quantum algorithm. This expands the class of problems that can achieve the promised exponential speedup, since the scaling of HHL and the best classical algorithms are both polynomial in the condition number. The second advance was the demonstration of how to use HHL to solve for the radar cross-section of a complex shape. This was one of the first end-to-end examples of how to use HHL to solve a concrete problem exponentially faster than the best known classical algorithm.
=== Linear differential equation solving ===
Dominic Berry proposed a new algorithm for solving linear time dependent differential equations as an extension of the quantum algorithm for solving linear systems of equations. Berry provides an efficient algorithm for solving the full-time evolution under sparse linear differential equations on a quantum computer.
=== Nonlinear differential equation solving ===
Two groups proposed efficient algorithms for numerically integrating dissipative nonlinear ordinary differential equations. Liu et al. utilized Carleman linearization technique for second order equations and Lloyd et al. employed a mean field linearization method inspired by nonlinear Schrödinger equation for general order nonlinearities. The resulting linear equations are solved using quantum algorithms for linear differential equations.
=== Finite element method ===
The finite element method uses large systems of linear equations to find approximate solutions to various physical and mathematical models. Montanaro and Pallister demonstrate that the HHL algorithm, when applied to certain FEM problems, can achieve a polynomial quantum speedup. They suggest that an exponential speedup is not possible for problems with fixed dimensions in which the solution meets certain smoothness conditions.
Quantum speedups for the finite element method are higher for problems which include solutions with higher-order derivatives and large spatial dimensions. For example, problems in many-body dynamics require the solution of equations containing derivatives on orders scaling with the number of bodies, and some problems in computational finance, such as Black-Scholes models, require large spatial dimensions.
=== Least-squares fitting ===
Wiebe et al. provide a new quantum algorithm to determine the quality of a least-squares fit in which a continuous function is used to approximate a set of discrete points by extending the quantum algorithm for linear systems of equations. As the number of discrete points increases, the time required to produce a least-squares fit using even a quantum computer running a quantum state tomography algorithm becomes very large. Wiebe et al. find that in many cases, their algorithm can efficiently find a concise approximation of the data points, eliminating the need for the higher-complexity tomography algorithm.
=== Machine learning and big data analysis ===
Machine learning is the study of systems that can identify trends in data. Tasks in machine learning frequently involve manipulating and classifying a large volume of data in high-dimensional vector spaces. The runtime of classical machine learning algorithms is limited by a polynomial dependence on both the volume of data and the dimensions of the space. Quantum computers are capable of manipulating high-dimensional vectors using tensor product spaces and thus are well-suited platforms for machine learning algorithms.
The quantum algorithm for linear systems of equations has been applied to a support vector machine, which is an optimized linear or non-linear binary classifier. A support vector machine can be used for supervised machine learning, in which training set of already classified data is available, or unsupervised machine learning, in which all data given to the system is unclassified. Rebentrost et al. show that a quantum support vector machine can be used for big data classification and achieve an exponential speedup over classical computers.
In June 2018, Zhao et al. developed an algorithm for performing Bayesian training of deep neural networks in quantum computers with an exponential speedup over classical training due to the use of the quantum algorithm for linear systems of equations, providing also the first general-purpose implementation of the algorithm to be run in cloud-based quantum computers.
=== Finance ===
Proposals for using HHL in finance include solving partial differential equations for the Black–Scholes equation and determining portfolio optimization via a Markowitz solution.
=== Quantum chemistry ===
In 2023, Baskaran et al. proposed applying the HHL algorithm to quantum chemistry calculations via the linearized coupled cluster (LCC) method. The connection between the HHL algorithm and the LCC method stems from the fact that the latter can be recast as a system of linear equations. A key factor that makes this approach useful for quantum chemistry is that the number of state-register qubits is the natural logarithm of the number of excitations, offering an exponential suppression in the number of required qubits compared to the variational quantum eigensolver or the quantum phase estimation algorithms. This leads to a 'coexistence across scales', where in a given quantum computing era HHL-LCC could be applied to much larger systems, whereas QPE-CASCI could be employed for smaller molecular systems but with better accuracy in predicting molecular properties. On the algorithmic side, the authors introduce the 'AdaptHHL' approach, which circumvents the need to expend a ~O(N³) classical overhead associated with fixing a value for the parameter 'c' in the controlled-rotation module of the algorithm.
== Implementation difficulties ==
Recognizing the importance of the HHL algorithm in the field of quantum machine learning, Scott Aaronson analyzes the caveats and factors that could limit the actual quantum advantage of the algorithm.
The solution vector, |b⟩, has to be efficiently prepared as a quantum state. If the vector is not close to uniform, the state preparation is likely to be costly, and if it takes O(n^c) steps the exponential advantage of HHL would vanish.
The QPE step calls for generating the unitary e^{iAt} and applying it in a controlled fashion. The efficiency of this step depends on the matrix A being sparse and 'well conditioned' (low κ). Otherwise, the application of e^{iAt} would grow as O(n^c) and, once again, the algorithm's quantum advantage would vanish.
Lastly, the solution vector, |x⟩, is not readily accessible. The HHL algorithm enables learning only a 'summary' of the vector, namely the result of measuring the expectation value ⟨x|M|x⟩ of an operator M. If the actual values of x are needed, HHL would have to be repeated O(n) times, killing the exponential speed-up. However, three ways of avoiding a full readout have been proposed: first, if only some properties of the solution are needed; second, if the results are needed only to feed downstream matrix operations; third, if only a sample of the solution is needed.
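A classical numerical sketch of the quantities these caveats concern (the Hermitian matrix, right-hand side, and observable below are arbitrary illustrations, not quantum code):

```python
import numpy as np

# Classical stand-ins for the quantities in the HHL caveats.
rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
A = A + A.T                       # Hermitian, as HHL assumes

# Conditioning caveat: kappa = sigma_max / sigma_min.  HHL's cost grows
# with kappa, so an ill-conditioned A erases the quantum advantage.
kappa = np.linalg.cond(A)

# Readout caveat: HHL yields |x> only as a quantum state; what is
# efficiently extractable is a scalar summary such as <x|M|x>, not the
# n individual entries of x.
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)
x = x / np.linalg.norm(x)         # HHL prepares the *normalized* solution
M = np.diag(np.arange(n, dtype=float))   # an arbitrary observable
summary = x @ M @ x               # the expectation <x|M|x>
```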
== See also ==
Differentiable programming
== References == | Wikipedia/HHL_Algorithm |
Siebel Systems, Inc. was an American software company principally engaged in the design, development, marketing, and support of customer relationship management (CRM) applications—notably Siebel CRM.
The company was founded by Thomas Siebel and Patricia House in 1993. At first known mainly for its sales force automation products, the company expanded into the broader CRM market. By the late 1990s, Siebel Systems was the dominant CRM vendor, peaking at 45% market share in 2002.
On September 12, 2005, Oracle Corporation announced it had agreed to buy Siebel Systems for $5.8 billion. "Siebel" is now a brand name owned by Oracle Corporation.
Siebel Systems is Oracle's on-premises CRM system, and Oracle's cloud applications for CRM are Oracle Advertising and Customer Experience (CX).
== History ==
Siebel Systems, Inc. began with sales force automation software, then expanded into marketing and customer service applications, including CRM. From the time it was founded in 1993, the company grew quickly.
Benefiting from the explosive growth of the CRM market in the late 1990s, Siebel Systems was named the fastest growing company in the United States in 1999 by Fortune magazine.
=== Thomas Siebel, Pat House ===
Siebel's "first experience with sales technology was in the late 1980s, when he worked for Oracle." At the time, Siebel Systems co-founder Patricia House was also working at Oracle. Siebel left Oracle to try his hand at a startup; in 1992 House left Oracle as well, and together they worked on what became Siebel Systems in 1993.
== Key dates ==
1993: Siebel Systems, Inc. is founded.
1995: Siebel delivers Siebel Sales Enterprise software for sales force automation.
1995: Siebel 2.0 (Release end of 1995)
Siebel Customer Relationship Management (CRM)
Siebel Sales Enterprise
1996: Siebel becomes a publicly traded company.
1997: Siebel 3.0 (Release Feb 1997)
1998: Siebel 98
1998: Siebel Systems acquires Scopus Technology, Inc. "for its customer-service and support products."
1999: Siebel 99
2000: Siebel 6 (also known as Siebel 2000)
2000: Revenue surpasses the $1 billion mark.
2001: Siebel 7.0 (Released 2001, was the first web-based version)
2002: Siebel 7.5 (Released in 2002)
2004: Siebel 7.7 (Released in 2004)
2005: Siebel 7.8 (Released in 2005)
2006: Oracle acquires Siebel Systems.
2007: Oracle Siebel 8.0 (Released in 2007)
2007: Oracle Business Intelligence Enterprise Edition Plus (released 2007)
2007: Oracle Business Intelligence Applications (Formerly Siebel Analytics) (released 2007)
2008: Oracle Siebel 8.1 (Released in 2008)
2011: Oracle Siebel 8.2 (Released in 2011)
Oracle Sales Cloud
Oracle Fusion CRM
Oracle CRM On Demand
2015: Oracle Siebel 15.0 (Released 11 May 2015)
2016: Oracle Siebel 16.0 (Released 29 Apr 2016)
2017: Oracle Siebel 17.0 (Released 31 Jul 2017)
2018: Oracle Siebel 18.0 (Released 23 Jan 2018)
2019: Oracle Siebel 19.0 (Released 21 Jan 2019)
2020: Oracle Siebel 20.0 (Released 21 Jan 2020)
2021: Oracle Siebel 21.0 (Released 21 Jan 2021)
2022: Oracle Siebel 22.0 (Released 21 Jan 2022)
== See also ==
Oracle Advertising and Customer Experience (CX)
Oracle CRM
== References ==
== External links ==
Official website
Siebel Developer’s Reference | Wikipedia/Siebel_Systems |
Interactive Systems Corporation (styled INTERACTIVE Systems Corporation, abbreviated ISC) was a US-based software company and the first vendor of the Unix operating system outside AT&T, operating from Santa Monica, California. It was founded in 1977 by Peter G. Weiner, a RAND Corporation researcher who had previously founded the Yale University computer science department and had been the Ph.D. advisor to Brian Kernighan, one of Unix's developers at AT&T. Weiner was joined by Heinz Lycklama, also a veteran of AT&T and previously the author of a Version 6 Unix port to the LSI-11 computer.
ISC was acquired by the Eastman Kodak Company in 1988, which sold its ISC Unix operating system assets to Sun Microsystems on September 26, 1991. Kodak sold the remaining parts of ISC to SHL Systemhouse Inc in 1993.
Several former ISC staff founded Segue Software which partnered with Lotus Development to develop the Unix version of Lotus 1-2-3 and with Peter Norton Computing to develop the Unix version of the Norton Utilities.
== Products ==
ISC's 1977 offering, IS/1, was a Version 6 Unix variant enhanced for office automation running on the PDP-11. IS/3 and IS/5 were enhanced versions of Unix System III and System V for PDP-11 and VAX. ISC Unix ports to the IBM PC included a variant of System III, developed under contract to IBM, known as PC/IX (Personal Computer Interactive eXecutive, also abbreviated PC-IX), with later versions branded 386/ix and finally INTERACTIVE UNIX System V/386 (based on System V Release 3.2). ISC was AT&T's "Principal Publisher" for System V.4 on the Intel platform. ISC was also involved in the development of VM/IX (Unix as a guest OS in VM/370) and enhancements to IX/370 (a TSS/370-based Unix system that IBM originally developed jointly with AT&T ca. 1980). They also developed AIX 1.0 (Advanced Interactive eXecutive) for the IBM RT PC, again under contract to IBM, although IBM awarded the development contracts for AIX version 2, AIX/386, and AIX/370 to the competing Locus Computing Corporation.
=== PC/IX ===
Although observers in the early 1980s expected that IBM would choose Microsoft Xenix or a version from AT&T Corporation as the Unix for its microcomputer, PC/IX was the first Unix implementation for the IBM PC XT available directly from IBM. According to Bob Blake, the PC/IX product manager for IBM, their "primary objective was to make a credible Unix system - [...] not try to 'IBM-ize' the product. PC-IX is System III Unix." PC/IX was not, however, the first Unix port to the XT: Venix/86 preceded PC/IX by about a year, although it was based on the older Version 7 Unix.
The main addition to PC/IX was the INed screen editor from ISC. INed offered multiple windows and context-sensitive help, paragraph justification and margin changes, although it was not a fully fledged word processor. PC/IX omitted the System III FORTRAN compiler and the tar file archiver, and did not add BSD tools like vi or the C shell. One reason for not porting these was that in PC/IX, individual applications were limited to a single segment of 64 kB of RAM.
To achieve good filesystem performance, PC/IX addressed the XT hard drive directly, rather than doing this through the BIOS, which gave it a significant speed advantage compared to MS-DOS. Because of the lack of true memory protection in the 8086 and 8088 chips, IBM only sold single-user licenses for PC/IX.
The PC/IX distribution came on 19 floppy disks and was accompanied by a 1,800-page manual. Installed, PC/IX took approximately 4.5 MB of disk space. An editorial by Bill Machrone in PC Magazine at the time of PC/IX's launch flagged the $900 price as a show-stopper, given its lack of compatibility with MS-DOS applications. PC/IX was not a commercial success, although BYTE in August 1984 described it as "a complete, usable single-user implementation that does what can be done with the 8088", noting that PC/IX on the PC outperformed Venix on the PDP-11/23.
=== INTERACTIVE UNIX System ===
PC/IX was succeeded by 386/ix in 1985, a System VR3 derivative. Later versions were termed INTERACTIVE UNIX System V/386 and based on System V 3.2, though with elements of BSD added.
After its acquisition of Interactive, Sun Microsystems continued to maintain INTERACTIVE UNIX System, offering it as a low-end alternative to its System V.4-based Solaris, even when the latter had been ported to x86-based desktop machines. This version of the INTERACTIVE UNIX System was praised by a PC Magazine reviewer for its stability. The last version was "System V/386 Release 3.2 Version 4.1.1", released in July 1998. Official support ended on July 23, 2006, five years after Sun withdrew the product from sale.
Until version ISA 3.0.1, INTERACTIVE UNIX System supported only 16 MB of RAM. In the next versions, it supported 256 MB RAM and the PCI bus. EISA versions always supported 256 MB RAM.
== See also ==
Coherent (operating system)
== Notes ==
== References ==
== Further reading ==
William B. Twitty (1984). UNIX on the IBM PC. Prentice Hall. ISBN 978-0-13-939075-3. Covers and compares PC/IX, Xenix, and Venix.
Maurice J. Bach, The Design of the UNIX Operating System, ISBN 0-13-201799-7, Prentice Hall, 1986.
"IBM has snubbed both Microsoft's multimillion dollar investment in Xenix and AT&T's determination to establish System V as the dominant version of Unix" (InfoWorld, 20 Feb 1984)
IBM's latest hot potato (PC Mag 20 Mar 1984)
== External links ==
Interactive Unix Documentation | Wikipedia/Interactive_Systems_Corporation |
Wind River Systems, Inc., also known as Wind River (trademarked as Wndrvr), is an Alameda, California–based company, subsidiary of Aptiv PLC. The company develops embedded system and cloud software consisting of real-time operating systems software, industry-specific software, simulation technology, development tools and middleware.
== History ==
Wind River Systems was formed by a partnership of Jerry Fiddler and Dave Wilner. Until 1981, Fiddler had worked at Berkeley Lab writing software for control systems, and wanted to pursue a career in computer generated music, which he funded through a consultancy business focused on real-time operating systems. His early clients included the National Football League and film director Francis Ford Coppola, for whom he designed a unique film editing system. Wilner, a former colleague at Berkeley Lab, joined Fiddler to form Wind River Systems in 1983.
In 2009, Wind River was acquired by Intel. In 2018, Intel spun out its Wind River division, which was then acquired by TPG Capital. On January 11, 2022, Wind River announced that it was acquired by Aptiv, an auto parts company, for $4.3 billion in cash.
The company's key milestones include:
1983: Wind River is incorporated in 1983 with each partner contributing $3,000 and a desk to the business. The company was named for Wind River, Wyoming, where Fiddler had vacationed that year
1987: Wind River introduces VxWorks, a leading real-time operating system for embedded devices.
1995: VxWorks launches into space on the NASA Clementine moon probe. Also, the Tornado integrated development environment is launched and wins EDN's Embedded Development Software Innovation of the Year award as the first graphically oriented development environment for embedded systems
1997: VxWorks, the real-time operating system for NASA's Mars Pathfinder mission, lands on Mars
1999: Acquisition of one of their major competitors, Integrated Systems Inc., makers of pSOS. Wind River has since discontinued the pSOS product line and has recommended existing pSOS customers move to VxWorks.
2001: Wind River Systems acquired Belgian software company Eonic Systems, the developer of Virtuoso RTOS for DSPs. In November 2015, Wind River Systems renamed the operating system to Rocket, made it open-source and royalty-free. In 2016, Rocket was incorporated into Zephyr RTOS hosted by Linux Foundation.
2004: Wind River officially enters the embedded Linux market with a Carrier Grade Linux platform targeting the networking and communications infrastructure industry. Also, NASA's Mars Exploration Rovers, Spirit and Opportunity, powered by VxWorks, land on Mars. Wind River helped in manufacturing the IntelliStar for The Weather Channel; the IntelliStar is used at cable headends to insert local weather into The Weather Channel's national programming.
2007: Wind River joins Google's Open Handset Alliance as an original Linux commercialization partner.
2008: Wind River establishes the embedded Linux market share lead with greater than 30 percent of total market revenue, four years after entering the market.
2009: Intel acquires Wind River for approximately $884 million and it becomes a wholly owned subsidiary of Intel. Wind River launches a commercial Android software platform. Wind River becomes a founding member of the GENIVI Alliance, now called COVESA (Connected Vehicle Systems Alliance).
2010: Wind River adds Simics, a full system simulator, to its product portfolio. VxWorks becomes the first RTOS to be certified under Wurldtech's Achilles certification program, a standard for industrial cyber security. Wind River partners with Intel and the Linux Foundation to create the Yocto Project, an open source collaboration project providing templates, tools and methods to help developers create embedded Linux-based systems.
2012: NASA Jet Propulsion Laboratory (JPL) successfully lands Mars Science Laboratory rover Curiosity, powered by Wind River technology. Wind River debuts software platform targeted at gateways and hubs for the Internet of things.
2013: Wind River becomes part of Intel's Internet of Things Group (IOTG), but remains a wholly owned subsidiary. Barry Mainz assumes the position of President.
2014: Wind River introduces its software for network functions virtualization (NFV) applications, as well as its next-generation VxWorks platform reinvented for the Internet of Things.
2014: Wind River fined $750,000 by Bureau of Industry and Security for exporting encryption technology to countries including Israel and South Korea.
2015: the company was accused of repeated trademark and licensing violations of the Grsecurity project, which as response has restricted its code to commercial partners only.
2016: Intel announced that it intended to fully integrate Wind River into one of its divisions (thus ending Wind River's status as a wholly owned subsidiary), although the scheduled completion date of this action has not been made public. Barry Mainz left the company to become president and CEO of MobileIron, and Jim Douglas assumes the position of President.
2018: Intel divested Wind River Systems to alternative asset fund manager TPG under undisclosed terms.
2018: Ford selects Wind River Over-the-Air Update Technology.
2018: NASA's InSight lands on Mars with VxWorks operating system.
2019: Wind River became the first OpenChain 2.0 conformant organization.
2020: Kevin Dallas named as CEO and member of the board of directors.
2020: Verizon uses Wind River's software infrastructure for its deployment of virtualized 5G RAN.
2020: Wind River becomes first and only to achieve The Open Group FACE Conformance for Linux.
2021: Perseverance Mars becomes fourth Mars rover running VxWorks operating system.
2021: Vodafone selects Wind River as a partner to build Europe's first commercial open RAN network.
2022: Wind River was acquired by Aptiv from TPG Capital for $4.3 billion in cash.
2024: Wind River Chief Product Officer Avijit Sinha named as president.
== Products ==
Among the company's products are the VxWorks real-time operating system, the Wind River Linux operating system, and the Eclipse-based Wind River Workbench IDE. VxWorks began as an add-on to the VRTX operating system in the early 1980s. Wind River Workbench superseded the previous Tornado environment.
=== VxWorks ===
VxWorks is the original flagship product of Wind River. It is a real-time operating system (RTOS) intended for embedded and critical infrastructure devices and systems. It supports multicore processors, 32-bit and 64-bit, for several architectures including ARM, Intel, and Power and has over one hundred board support packages (BSPs) for different hardware systems. VxWorks is a real time and deterministic operating system.
=== Wind River Linux ===
Wind River's Linux product is source code and a build system that generate runtime images suitable for embedded devices.
Historically, Wind River Linux has supported a variety of architectures, including ARM, MIPS, PowerPC, IA32 and SPARC.
The current release of Wind River Linux supports a variety of ARM, IA32, and IA64 platforms with both standard and realtime (PREEMPT RT) kernels.
Wind River charges a subscription to provide commercial bug and CVE fixes for their Linux products. Pricing is project-based, with a flat fee for each solution built on top of Wind River Linux. There is no per-device subscription or royalty.
The key capabilities for Wind River Linux are 10 year commercial support life, complete customization including kernel changes, reproducible customizations, wide range of hardware support through Board Support Packages (BSP) that are ported, maintained, and tested by Wind River.
==== Early history ====
In 2004, Wind River announced a partnership with Red Hat to create a new Linux-based distribution for embedded devices. Wind River has since ended its partnership with Red Hat and now ships its own Linux distribution optimized for embedded Linux development.
Wind River released the first version of its embedded Linux distribution, Platform for Network Equipment - Linux Edition (PNE-LE) 1.0, in 2005. It was registered against the Carrier Grade Linux 2.0 specification and supported IA32 and PPC architectures. Other platforms were added in subsequent releases, with General Purpose Platform - Linux Edition (GPP-LE) and Platform for Consumer Devices - Linux Edition (PCD-LE) starting in version 1.4. In 2013 Wind River announced Wind River Linux 6.0.
Wind River Systems acquired FSMLabs embedded technology in February 2007 and made a version available as Wind River Real-Time Core for Wind River Linux. As of August 2011, Wind River has discontinued the Wind River Real-Time Core product line, effectively ending commercial support for the RTLinux product.
On August 7, 2007, Palm Inc. announced it would use Wind River Systems Linux for its (later aborted) Palm Foleo.
In 2008, Wind River announced cooperation with BMW, Intel and Magneti Marelli for development of a Linux-based open-source platform to control in-car electronics, which was extended in the GENIVI Alliance in 2009.
==== Yocto Project ====
In 2012, Wind River introduced a version of Linux that was developed from the Yocto Project open source development infrastructure and achieved Yocto project compatible registration. All subsequent releases of Wind River Linux are based on the Yocto Project.
==== Wind River Linux release history ====
Wind River has historically released a new Wind River Linux LTS (Long Term Support) about every year that are generally based on the then current Linux Kernel LTS release and the latest Yocto Project release.
=== Wind River Linux Distro ===
In 2022, Wind River launched a new product, Wind River Linux Distro, that is a binary Linux distribution based on the Wind River Linux source-based product.
The Distro is intended for embedded solution developers who need a commercially supported Linux for their project but do not need the extensive customization capabilities of the Yocto Project-based Wind River Linux. The key features are quick time to value, customization via tools such as the Linux Assembly Tool and an RPM package feed, and updates via OSTree.
Developers can download a free version of the Wind River Linux Distro by going to https://www.windriver.com/products/linux/download
A number of hardware platforms are enabled by the Distro. Commercial support is currently available for a subset of the enabled platforms.
=== Simics ===
Simics is a full-system simulator used by software developers to simulate the hardware of complex electronic systems.
=== Wind River Studio ===
Wind River Studio is a cloud-native platform for the deployment and servicing of mission-critical intelligent edge systems.
== Acquisitions ==
1991: Assets of ITRA (Vannes, France)
1997: DSP Foundry (WiSP RTOS for Motorola DSP563xx family)
1999: Integrated Systems Inc. (pSOS+)
2000: Merge staff of Dragonfly Software Consulting
2000: Embedded Support Tools Corp. (ESTC)
2000: ICEsoft (Bergen, Norway)
2000: AudeSi Technologies Inc. (Calgary, Alberta, Canada)
2001: Eonic Systems (Virtuoso RTOS)
2001: Berkeley Software Design Inc. (BSDI)
2005: ScopeTools business unit from Real-Time Innovations
2006: Interpeak AB (Stockholm, Sweden)
2007: Assets of FSMLabs (Socorro, New Mexico, United States)
2008: MIZI (Seoul, South Korea)
2009: Tilcon Software Limited (Ottawa, Ontario, Canada)
2010: Virtutech (Stockholm, Sweden)
2011: Switch++ (Santa Clara, United States)
2016: Arynga
2020: Star Labs
== References ==
== Further reading ==
Dataindustrier AB
Lord of the Toasters an article from Wired magazine
"Wind River's Linux Transformation". Archived from the original on 2013-01-20. Retrieved 2005-12-13.{{cite web}}: CS1 maint: bot: original URL status unknown (link) an article from CNET
== External links ==
Wind River Systems company website | Wikipedia/Wind_River_Systems |
Cobalt Networks was a maker of low-cost Linux-based servers and server appliances based in Mountain View, California. The company had 1,900 end user customers in more than 70 countries.
During the dot-com bubble, the company had a market capitalization of $6 billion despite only $22 million in annual revenue.
In 2000, the company was acquired by Sun Microsystems and in December 2003, Sun shut down the Cobalt product line.
Cobalt was considered a pioneering server appliance vendor, the first to market a 1 RU rackmounted server, and was credited by the founder of RLX Technologies as paving the way for blade servers.
== History ==
The company was founded in 1996 by Vivek Mehra as Cobalt Microserver. In June 1998, the company changed its name to Cobalt Networks, Inc.
The company introduced products as follows:
On November 5, 1999, the company became a public company via an initial public offering. Its stock price rose as much as 618% above its $22/share initial price.
On March 23, 2000, the company announced the acquisition of Chilisoft from Charlie Crystle for 1.15 million shares of Cobalt common stock, then valued at $69.9 million.
In September 2000, Sun Microsystems announced the acquisition of the company for $2 billion in stock. The acquisition was completed on December 7, 2000.
Many disgruntled engineers left the company in the months following the acquisition.
In December 2003, Sun shut down the Cobalt product line.
== References == | Wikipedia/Cobalt_Networks |
The Stanford University Network, also known as SUN, SUNet or SU-Net is the campus computer network for Stanford University.
== History ==
Stanford Research Institute, formerly part of Stanford but on a separate campus, was the site of one of the four original ARPANET nodes. Later ARPANET nodes were located in the Stanford Artificial Intelligence Laboratory, the Computer Science Department, and the Stanford University Medical Center. In late 1979, the Xerox Palo Alto Research Center donated equipment including Xerox Alto computers, a laser printer, and file server connected by Ethernet local area network technology.
A router based on a PDP-11 computer from Digital Equipment Corporation, running software from MIT, was used to connect the Ethernet to the ARPANET. The PARC Universal Packet protocol was initially used on the local parts of the network, which ran the experimental version of Ethernet with a data rate under 3 megabits/second. As the TCP/IP protocols evolved through the 1980s, a TCP/IP network was built on the main campus, extending to other departments and connecting many other computers. This network was called the Stanford University Network, or SUN. Today, the campus network is referred to as SUNet.
Andy Bechtolsheim, a Stanford graduate student at the time, designed a SUN workstation for use on the network in 1980. It was inspired by the Alto, but used a more modular design powered by a Motorola 68000 processor interfaced to other circuit boards using Multibus. The workstations were used by researchers to develop the V-System and other projects. Bechtolsheim licensed the design to become the basis of the products of Sun Microsystems (whose name was a pun based on the SUN acronym).
The CPU board could be configured with Bechtolsheim's experimental Ethernet boards, or commercial 10 megabit/second boards made by 3Com or others to act as a router. These routers were called Blue Boxes for the color of their case. The routers were developed and deployed by a group of students, faculty, and staff, including Len Bosack who was in charge of the computer science department's computers, and Sandy Lerner who was the Director of Computer Facilities for the Stanford University Graduate School of Business. All told there were about two dozen Blue Boxes scattered across campus. This original router design formed the base of the first Cisco Systems router hardware products, founded by Bosack and Lerner (who were married at the time).
The original router software was called NOS, Network Operating System, written by William Yeager, a staff research engineer at Stanford's medical school. Distinguishing features of NOS were that it was written in C and that it was multi-tasking capable; this allowed additional network interfaces and additional features to be easily added as new tasks. NOS was the basis of Cisco's IOS operating system. In 1987, Stanford licensed the router software and two computer boards to Cisco, after investigations by Stanford staff members such as Les Earnest.
== References ==
== External links ==
Router man, an interview with William Yeager | Wikipedia/Stanford_University_Network |
Floating Point Systems, Inc. (FPS), was a Beaverton, Oregon vendor of attached array processors and minisupercomputers. The company was founded in 1970 by former Tektronix engineer Norm Winningstad, with partners Tom Prince, Frank Bouton and Robert Carter. Carter was a salesman for Data General Corp. who persuaded Bouton and Prince to leave Tektronix to start the new company. Winningstad was the fourth partner.
== History ==
The original goal of the company was to supply economical, but high-performance, floating-point coprocessors for minicomputers. In 1976, the AP-120B array processor was produced. This was soon followed by a unit for larger systems and IBM mainframes, the FPS AP-190. In 1981, the follow-on FPS-164 was produced, followed by the FPS-264, which had the same architecture. This was five times faster, using ECL instead of TTL chips.
These processors were widely used as attached processors for scientific applications in reflection seismology, physical chemistry, NSA cryptology and other disciplines requiring large numbers of computations. Attached array processors were usually used in facilities where larger supercomputers were either not needed or not affordable. Hundreds if not thousands of FPS boxes were delivered and highly regarded. FPS's primary competition up to this time was IBM (3838 array processor) and CSP Inc.
Cornell University, led by physicist Kenneth G. Wilson, made a supercomputer proposal to NSF with IBM to produce a processor array of FPS boxes attached to an IBM mainframe with the name lCAP.
=== Parallel processing ===
In 1986, the T-Series hypercube computers using INMOS transputers and Weitek floating-point processors was introduced. The T stood for "Tesseract". Unfortunately, parallel processing was still in its infancy and the software tools and libraries for the T-Series did not facilitate customers' parallel programming. I/O was also difficult, so the T-Series was discontinued, a mistake costing tens of millions of dollars that was nearly fatal to FPS. A few dozen T-series were delivered.
=== Celerity acquisition; acquisition by Cray ===
In 1988, FPS acquired the assets of Celerity Computing of San Diego, California, renaming itself as FPS Computing. Celerity's product lines were further developed by FPS, the Celerity 6000 minisupercomputer being developed into the FPS Model 500 series.
FPS was acquired by Cray in 1991 for $3.25 million, and their products became the S-MP and APP product lines of Cray Research.
The S-MP was a SPARC-based multiprocessor server (based on the Model 500); the MCP was a matrix co-processor array based on eighty-four Intel i860 processors. After Cray purchased FPS, it changed the group's direction by making them Cray Research Superservers, Inc., later becoming the Cray Business Systems Division (Cray BSD). The MCP was renamed the Cray APP. The S-MP architecture was not developed further. Instead, it was replaced by the Cray Superserver 6400 (CS6400), which was derived indirectly from a collaboration between Sun Microsystems and Xerox PARC.
=== Acquisition by SGI and Sun ===
Silicon Graphics acquired Cray Research in 1996, and shortly afterward the Cray BSD business unit along with the CS6400 product line was sold to Sun Microsystems for an undisclosed amount (acknowledged later by a Sun executive to be "significantly less than $100 million"). Sun was then able to bring to market the follow-on to the CS6400 which Cray BSD was developing at the time, codenamed Starfire, launching it as the Ultra Enterprise 10000 multiprocessor server. This system was followed by the Sun Fire 15K and Sun Fire 25K. These systems allowed Sun to become a first tier vendor in the large server market. In January 2010, Sun was acquired by Oracle Corporation.
== See also ==
Glen Culler
Cydrome
Multiflow
== References ==
== External links ==
1986 news about FPS - Daily Journal of Commerce
The History of the Development of Parallel Computing
Howard Thrailkill FPS Computing: A History of Firsts
Gordon Bell. "A Brief History of Supercomputing" | Wikipedia/Cray_Business_Systems_Division |
NetDynamics Application Server was an early Java-based integrated software platform. The product was developed by NetDynamics, a Silicon Valley start-up company founded in 1995 by Zack Rinat and Ofer Ben-Shachar. Unlike other early application server competitors, NetDynamics chose Java as the development language for the platform.
As Java became the dominant development language for web-based applications, NetDynamics experienced significant revenue growth in 1997 and 1998. However, the product soon encountered problems due to the relative immaturity of Java and the rush to release new product versions in a rapidly changing marketplace. Believing that the new JDBC API was too immature, NetDynamics created a proprietary database development API based on a product from Rogue Wave Software.
NetDynamics, Inc. was acquired by Sun Microsystems in July 1998. The application server software, together with the Netscape Application Server, was the basis for Sun's iPlanet Application Server offering.
== Competition ==
NetDynamics initially competed against Bluestone, an application server based on the C programming language, and the Kiva application server. By mid-1998, a new competitor, WebLogic, Inc., released a Java application server that was compliant with JDBC and another emerging standard, JavaBeans. WebLogic Application Server eventually won out in the marketplace. The explosive growth of the application server business caught the attention of larger information technology companies, and the early application server start-up companies were all eventually acquired: Hewlett-Packard purchased Bluestone, Sun Microsystems acquired NetDynamics, Netscape Communications Corporation bought out Kiva, and WebLogic was acquired by BEA Systems. Only WebLogic survives as a product; Oracle Corporation acquired BEA in 2008, and the product was renamed Oracle WebLogic.
== See also ==
iPlanet
Comparison of application servers
== References == | Wikipedia/NetDynamics_Application_Server |
Oracle Labs (formerly Sun Microsystems Laboratories, or Sun Labs) is a research and development branch of Oracle Corporation. The labs were created when Oracle acquired Sun Microsystems. Sun Labs was established in 1990 by Ivan Sutherland and Robert Sproull. The initial locations were in Menlo Park, California and Burlington, Massachusetts, United States.
Oracle Labs has locations in Redwood Shores, California; Burlington, Massachusetts; Cambridge, UK; Brisbane, Australia; Linz, Austria; Zürich, Switzerland; and Casablanca, Morocco.
== Sun ==
Sun Labs worked in areas such as asynchronous circuits, optical communications, new web technologies, Java technologies, and computer networks. James G. Mitchell directed the labs starting in 2000.
When asked in 2007 why Sun continued to put resources into research as the market turned to commodity pricing, chief executive Jonathan I. Schwartz said, "You need to spend enormous amounts of money to differentiate."
== Oracle ==
In 2010 after Sun was purchased by Oracle Corporation, it became Oracle Labs.
Sproull was appointed director in June 2006 and remained in that role as of April 2011.
== References ==
== External links ==
Official website | Wikipedia/Sun_Microsystems_Laboratories |
Network equipment providers (NEPs) – sometimes called telecommunications equipment manufacturers (TEMs) – sell products and services to communication service providers such as fixed or mobile operators, as well as to enterprise customers. NEP technology enables calls on mobile phones, Internet surfing, joining conference calls, and watching video on demand through IPTV (Internet Protocol TV). The history of the NEPs goes back to the mid-19th century, when the first telegraph networks were set up. Some of these players still exist today.
== Telecommunications equipment manufacturers ==
The terminology of the traditional telecommunications industry has rapidly evolved during the Information Age. The terms "Network" and "Telecoms" are often used interchangeably. The same is true for "provider" and "manufacturer". Historically, NEPs sell integrated hardware/software systems to carriers such as NTT DoCoMo, AT&T, Sprint, and so on. They purchase hardware from TEMs (telecom equipment manufacturers), such as Vertiv, Kontron, and NEC, to name a few. TEMs are responsible for manufacturing the hardware, devices, and equipment the telecommunications industry requires. The distinction between NEP and TEM is sometimes blurred, because all the following phrases may imply NEP:
Telecommunications equipment provider
Telecommunications equipment industry
Telecommunications equipment company
Telecommunications equipment manufacturer (TEM)
Telecommunications equipment technology
Network equipment provider (NEP)
Network equipment industry
Network equipment companies
Network equipment manufacturer
Network equipment technology
== Services ==
This is a highly competitive industry that includes telephone, cable, and data services segments. Products and services include:
Mobile networks like GSM (Global System for Mobile Communication), Enhanced Data Rates for GSM Evolution (EDGE) or GPRS (General Packet Radio Service). Networks of this kind are typically also known as 2G and 2.5G networks. The 3G mobile networks are based on UMTS (Universal Mobile Telecommunications System), which allows much higher data rates than 2G or 2.5G.
Fixed networks which are typically based on PSTN (Public Switched Telephone Network).
Enterprise networks, like Unified Communication infrastructure
Internet infrastructures, like routers and switches
== Companies ==
Some providers in each customer segment are:
Majority of revenues from service providers:
Alcatel-Lucent
Ericsson
Huawei
Samsung
TP-Link
D-Link
Juniper Networks
NEC
Nokia Networks
Ciena
ZTE
Majority of revenues from enterprise customers:
Avaya
Cisco
Motorola
Unify
The NEPs have recently undergone significant consolidation and M&A activity, for example, the joint venture of Nokia and Siemens (Nokia Siemens Networks), the acquisition of Marconi by Ericsson, the merger between Alcatel and Lucent, and numerous acquisitions by Cisco.
A look at the financial performance of these players according to the segment they serve creates a diverse picture.
== Power balance in the NEP ecosystem ==
NEPs face high pressure from old & new rivals and a stronger, more consolidated customer base.
Threat of New entrants:
The growing importance of software applications has led to the entry of new players like system integrators (SIs) and other ISVs. (Some NEPs consider SIs competitors for selected network services, i.e., the application, services, and control layers of the network.)
In the area of managed and hosted services, NEPs are likely to face competition from new players like Google due to lower entry barriers
Bargaining Power of Suppliers:
Increasing standardization and commoditization of network components leads to more competition among component suppliers, thus lowering their bargaining position.
Overcapacities have led to lower bargaining power of Semiconductor suppliers
As more standardized networks components are expected to be used for NGNs, a shift in the current supplier structure may balance the bargaining between suppliers and NEPs
Bargaining Power of Buyers:
Consolidation among communication service providers due to convergence leads to greater dependence on a few large clients, which means higher bargaining strength of customers
Due to pressures on their profitability, service providers are increasingly looking at lowering their operating costs and capital expenditures (lowering cost per subscriber), and this is putting pressures on NEPs margins.
Enterprises increasingly demand end-to-end solutions through a single vendor for their Unified Communication needs
Threat of Substitution:
Switch from PSTN to Next-Generation Network
Increasing use of standardized network components (COTS) compared to more proprietary equipment
Software to increasingly replace traditional network components
== Open Source Age ==
The SCOPE Alliance was an influential non-profit network equipment provider (NEP) industry group that aimed to standardize "carrier-grade" systems for telecom in the Information Age. It succeeded in accelerating the NEP transformation towards carrier-grade open-source hardware, operating systems, middleware, virtualization, and cloud technology.
== NFV, SDN, 5G, Cloud transformation Age ==
From 2010 onwards, Telecom carriers (NEP customers) wanted direct involvement in driving transformation. The NEP-only SCOPE Alliance was retired, as the industry combined forces on Service Availability, ETSI Network function virtualization standardization, Software-defined networking adoption, and 5G network slicing initiatives.
== References ==
== External links ==
IBM study related to the NEP industry | Wikipedia/Network_Equipment_Provider |
"The Network is the Computer" is a slogan that was originally coined by John Gage for Sun Microsystems in 1984. Contrary to popular belief, the slogan was not coined by Scott McNealy. Wired dubbed the phrase a "truism of Silicon Valley".
Sun employee Larry Wake said of the slogan, "When Sun originated that tag line in the early 1980s, it was actually quite audacious. It was a stake in the ground [stating] ‘Computers should be networked, or they're… not computers. Well, at least, you're missing their potential by a country mile.'"
== History ==
The first slogan used by Sun was "Open Systems for Open Minds"; "The Network is the Computer" was Sun's tagline for decades. According to Sun's former director of CAD/CAM marketing, the slogan meant that your desktop computer was one window into the network, and that with the appropriate software, other people's computing power could be used by offloading work to them. When Oracle bought Sun in 2010, the slogan was not discussed, used, or defended. In July 2019, content delivery network provider Cloudflare bought the rights to the expired trademark. John Gage stated in an interview with John Graham-Cumming that he was fine with Cloudflare having bought the rights, because it meant Sun's efforts were successful.
== See also ==
Eric Schmidt § Schmidt's Law
== Further reading ==
Perry, T.S. (February 2004). "John Gage: he is the network". IEEE Spectrum. 41 (2): 32–33. doi:10.1109/MSPEC.2004.1265128. ISSN 1939-9340. S2CID 42375939.
== References ==
== External links ==
"The Network is the Computer" at the United States Patent and Trademark Office (USPTO) database | Wikipedia/The_Network_is_the_Computer |
Oracle APEX (Oracle Application Express) is a low-code application development platform developed by Oracle Corporation. APEX is used for developing and deploying cloud, mobile and desktop applications. It has a web-based integrated development environment (IDE) that includes tools such as wizards, drag-and-drop layout builders, and property editors.
== Background ==
APEX is a feature of the Oracle Database. It is a part of the Oracle Cloud within the Autonomous Database Cloud Services and the stand-alone APEX Application Development service.
Oracle APEX has had name changes since its creation in 2000, including:
Flows
Oracle Platform
Project Marvel
HTML DB
Application Express (APEX) aka Oracle APEX
== History ==
APEX was created by Oracle developer Michael Hichwa following his earlier project, WebDB. While building an internal web calendar, Hichwa collaborated with fellow Oracle employee Joel Kallman to develop Flows. Together, they co-developed the web calendar, adding features to Flows as they needed them to develop the calendar. Early builds of Flows had no front end, so all changes to an application were made in SQL*Plus via insert, update and delete commands.
With what would have been version 5.2, the numbering system was changed to align with the year and quarter of the release, and the version was renamed 18.1. This change is consistent with Oracle's change in numbering nomenclature.
== Low-code environment ==
Oracle APEX is a low-code development platform, a type of environment that traces its origins to fourth-generation programming languages and rapid application development (RAD) tools.
APEX allows users to build web applications with a "no code" graphical user interface. However, when the requirements are more complex, APEX allows the extension of the low-code objects through a declarative framework. This framework lets the developer define custom logic, business rules, and user interfaces. The developer can do this through the inclusion of SQL, PL/SQL, HTML, JavaScript, or CSS as well as APEX plug-ins.
== Security ==
APEX applications are subject to the same level of application security risks as other web-based applications built on more direct technologies such as PHP, ASP.NET and Java. However, since APEX 4.0, the Application Builder interface has included a utility called Advisor, which provides a basic assessment of an application’s security posture.
The two main vulnerabilities that affect APEX applications are SQL injection and cross-site scripting (XSS).
=== SQL injection ===
APEX applications inherently use PL/SQL constructs as the base server-side language and access data via PL/SQL blocks. An APEX application will use PL/SQL to implement authorization and to conditionally display web page elements. Because of this, APEX applications can suffer from an SQL injection when these PL/SQL blocks do not correctly validate and handle malicious user input.
Oracle implemented a special variable type for APEX called Substitution Variables (with a syntax of "&NAME."); however, these are insecure and can lead to SQL injections. When an injection occurs within a PL/SQL block, an attacker can inject an arbitrary number of queries or statements to execute. Escaping special characters and using bind variables can reduce, but not remove, XSS and SQL injection vulnerabilities.
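APEX's server-side language is PL/SQL, but the mechanics of SQL injection and bind variables are language-agnostic. The sketch below is a generic Python illustration (using sqlite3 as a stand-in database; the table and values are hypothetical, not from APEX) of why input spliced into statement text is dangerous while bind variables are not:

```python
import sqlite3

# Hypothetical data for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

malicious = "nobody' OR '1'='1"

# Vulnerable: the input is concatenated into the statement text, so the
# attacker's quote characters rewrite the query's logic.
unsafe_sql = "SELECT name FROM users WHERE name = '%s'" % malicious
leaked = conn.execute(unsafe_sql).fetchall()
print(leaked)  # every row leaks, not just 'nobody'

# Safe: a bind variable (the ? placeholder) keeps the input as pure data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(safe)  # [] -- no user is literally named that
```

In APEX, the analogous safe pattern is referencing page items as bind variables (e.g. :P1_NAME) inside PL/SQL blocks, rather than splicing substitution variables into the statement text.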
=== Cross-site scripting (XSS) ===
XSS vulnerabilities arise in APEX applications just like in other web application languages. To counteract this, Oracle provides the htf.escape_sc() function to replace literal characters with HTML entity names and avoid undesired behaviors.
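htf.escape_sc() is a PL/SQL function, but the escaping idea is the same as any HTML entity-encoding routine. A small Python analogue using the standard library's html.escape (the input string is only an illustration):

```python
import html

# A user-supplied string that would execute as markup if echoed raw.
user_input = '<script>alert("xss")</script>'

# Entity-encoding replaces the markup-significant characters with their
# HTML entity names, so the browser renders them as literal text.
escaped = html.escape(user_input)
print(escaped)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

The output contains no raw angle brackets, so the string can be written into a page without being interpreted as a script element.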
A developer can use authorization schemes to manage access to resources like pages and items within an APEX application. To ensure proper security, these schemes must be consistently applied across all relevant resources. An example of inconsistent access control arises when an authorization scheme is applied to a button item but not to the process linked to that button. This inconsistency could allow a user to trigger the process directly via JavaScript, bypassing the button entirely.
== Third-party libraries ==
Developers may improve and extend APEX applications by using third-party libraries, among them jQuery Mobile (HTML5-based user interface), jQuery UI (user interface for the web), AnyChart (JavaScript/HTML5 charts), CKEditor (web text editor), and others. Oracle states that applying the latest APEX patches updates the external libraries bundled with the platform in tandem, which in principle enhances application stability and security. However, many of the libraries are updated more frequently than APEX patches are released, requiring developers to monitor and manually apply updates as necessary to maintain compatibility and security.
== APEX and Oracle Database Express Edition (XE) ==
Oracle APEX can be run inside Oracle Database Express Edition (XE), a free entry-level database. Although the functionality of APEX isn't intentionally limited when running on XE, the limitations of the database engine may prevent some APEX features from functioning. Furthermore, Oracle XE has limits for CPU, memory, and disk usage.
== See also ==
Oracle SQL Developer
Jam.py
== References ==
== Bibliography ==
== External links ==
Official website | Wikipedia/Oracle_Application_Express |
Afara Websystems Inc. was a Sunnyvale, California, USA server company whose goal was to build servers around a custom high-throughput CPU architecture, "developing IP traffic management systems that will bring quality-of-service to the next generation of IP access infrastructure." The word "Afara" means "bridge" in the West African Yoruba language.
== History ==
The company was founded by Kunle Olukotun, a Stanford University professor. The first employee was hired by Raza Foundries board member Atul Kapadia. Neil Sadaranganey, hired out of RealNetworks, was the sole business person at Afara Websystems. Subsequently, Les Kohn (employee #2), a microprocessor designer who had worked on Sun Microsystems' UltraSPARC, Intel's i860 and i960, and National Semiconductor's Swordfish, took the basic idea and developed a product plan.
Olukotun was talking with people running data centers in 2000 and understood the problem of those centers running out of power and space. Olukotun believed that multiple processors on a chip, in conjunction with multi-threading, could resolve those problems. He searched for venture capital support on the basis that a new architecture could lead to a 10x performance increase in server processing capabilities. Pierre Lamond, a partner at Sequoia Capital, introduced Olukotun to microprocessor architect Les Kohn, who had designed microprocessors for Sun, Intel and National Semiconductor (where Kohn had worked for Lamond). Kohn introduced Fermi Wang, one of his colleagues at C-Cube Microsystems, to be the acting CEO and lead the company. It was a classic Silicon Valley startup: the headcount grew to 100, with 95 engineers focused on engineering development and one marketing director.
Two meetings with venture capitalists were scheduled for September 11, 2001. The meetings, in New York City, were interrupted by the terrorist attack on the World Trade Center, but one of them resumed two days later. Available capital for funding the server company had vanished, as the economy started to dip into a new recession in 2001.
Rick Hetherington left Sun to create a start-up company, and Sequoia Capital introduced him to Olukotun. When Hetherington's startup failed, he returned to Sun, where he wrote memos to Mike Splain, CTO of the processor group at Sun, encouraging a technology acquisition of Afara Websystems. Hetherington became Chief Architect for Horizontal Systems at Sun, the group that developed and sold servers for data centers and Web systems.
Although SPARC-based computers systems are almost exclusively deployed with Solaris, Afara instead planned to use Linux on the SPARC-based processor they developed.
The search for venture capital continued, since creating a server company requires substantial resources, but little was available during the recession following 9/11. Afara began negotiations with Sun Microsystems, and the acquisition was consummated in July 2002. The new acquisition fell under the umbrella of Fred DeSantis, the vice president of engineering for horizontal systems at Sun. During the due-diligence process, Brian Sutphin sensed from the executives he was interacting with that Afara did not have any alternate sources of funding (Fermi Wang, the acting CEO, had mentioned that there were no term sheets on the table), and reduced the offer from the high hundreds of millions of dollars to less than $500 million.
== Contributions and impact ==
The project included many technology contributions among Linux, Solaris and SPARC. The Afara CPU used a SPARC port of Debian GNU/Linux initially. Debian GNU/Linux contributions to Afara Websystems' former CPU architecture continued to grow, including commercial support for Ubuntu, a Debian GNU/Linux-based operating system. Afara Websystems' former platform direction seemed further validated when Sun hired Ian Murdock, founder of the Debian distribution, to head operating system platform strategy and cross-pollinate Solaris with a new OS packaging technology similar to that of Debian GNU/Linux.
The new CPU architecture of Afara Websystems, which became known as "Niagara", had enough merit to cause a competing internal Sun project under DeSantis' organization, called "Honeybee", to be canceled.
Pressure was placed on the computing industry to add cores and threads. While competing microprocessor vendors were designing dual-core chips with two threads per core, the original "Niagara" architecture was a more radical design: an eight-core processor with four threads per core.
The new family of SPARC microprocessors, trademarked by Sun as "CoolThreads", was released with model names of UltraSPARC T1 (2005), UltraSPARC T2 (2007), UltraSPARC T2 Plus (2008) and the further derivative UltraSPARC T3 (2010). While SPARC is an open instruction set architecture, with vendors building their own processors to an open specification defined by SPARC International, this new family of microprocessors was not only created to the open specification but also had a free implementation: people could download the source code and manufacture the processors independently.
For web-serving loads, Sun catapulted to having the uncontested fastest single processor on the planet in December 2005, performing 7x faster than the closest Intel server, and it remained consistently the highest-throughput web server, with the closest competition being 2x-3x slower (socket-to-socket comparison) as of mid-2009.
Oracle Corporation announced its intention to acquire Sun in April 2009, a deal which closed in January 2010. By the end of 2010, market competitors started to release similar products with multiple cores, a less radical approach to threading, but with similar performance characteristics. Oracle continued the radical approach of the original Afara SPARC architecture (large numbers of threads per large number of simple cores) with the release of the SPARC T3 processor in September 2010 - the first 16 core commodity central processing unit, yielding another top performance benchmark, but only by a slim margin.
Olukotun returned to Stanford University to head its "Pervasive Parallelism Lab" in 2008, to help shape the future of software as he did with hardware.
Fermi Wang and Les Kohn founded Ambarella with a focus on high definition video capture and delivery markets.
== References == | Wikipedia/Afara_Websystems |
The acquisition of Sun Microsystems by Oracle Corporation was completed on January 27, 2010. After the acquisition was completed, Oracle, only a software vendor prior to the merger, owned Sun's hardware product lines, such as SPARC Enterprise, as well as Sun's software product lines, including the Java programming language.
Concerns about Sun's position as a competitor to Oracle were raised by antitrust regulators, open source advocates, customers, and employees over the acquisition. The European Commission delayed the acquisition for several months over questions about Oracle's plans for MySQL, Sun's competitor to Oracle Database. The DG COMP of the European Commission finally approved the takeover, apparently pressured by the U.S. DOJ Antitrust Division to do so, according to a diplomatic cable leaked in September 2011.
== History ==
In 2006, it was disclosed that Sun and Apple had discussed a merger on multiple occasions.
In late 2008, Sun was approached by IBM to discuss a possible merger. At about the same time, Sun also began discussions with another company, widely rumored but not confirmed to be Hewlett-Packard, about a potential acquisition. By March 2009, talks had stalled between Sun and both IBM and the other potential suitor.
On April 20, 2009, Sun and Oracle announced that they had entered into a definitive agreement under which Oracle would acquire Sun for $9.50 a share in cash. Net of Sun's cash and debt, this amounted to a $5.6 billion offer from Oracle. Sun's shareholders voted to approve the proposal on July 16, 2009, although the deal was still subject to regulatory approvals. The terms of the agreement between Oracle and Sun included dependencies on the antitrust laws of "the United States and Canada, European Union, China, Israel, Switzerland, Russia, Australia, Turkey, Korea, Japan, Mexico and South Africa".
On August 20, 2009, the U.S. government, pursuant to the Clayton Antitrust Act, approved Oracle's purchase of Sun.
On September 3, 2009, the European Commission announced that it would not immediately approve the deal, but would instead perform a second round of investigation, focusing on the implications of Oracle's control of MySQL (acquired by Sun in 2008).
On October 20, 2009, Sun filed with the U.S. Securities and Exchange Commission (SEC) its intention to cut 3,000 jobs globally over the next 12 months, citing losses caused by delays in the acquisition process.
On November 6, in its 10-Q filing for the 1st quarter of the 2010 fiscal year, Sun announced a 25% total revenue decrease compared to the 1st quarter of the previous year, due to "economic downturn, the uncertainty associated with our proposed acquisition by Oracle, increased competition and delays in customer purchasing decisions".
On January 21, 2010, EU Competition Commissioner Neelie Kroes announced unconditional approval of the deal.
On January 27, 2010, Oracle announced that it had completed the acquisition.
== Resignations ==
Several notable engineers resigned following the acquisition, including James Gosling, the creator of Java (resigned April 2010); Tim Bray, the creator of XML (resigned February 2010); Kohsuke Kawaguchi, lead developer of Hudson (resigned April 2010); and Bryan Cantrill, the co-creator of DTrace (resigned July 2010).
While the deal was still pending regulatory approval, the JRuby team collectively resigned from Sun and moved to Engine Yard.
In early 2010, the Drizzle DBMS team collectively resigned from Sun and moved to Rackspace.
Most of Sun's executive management team, including CEO Jonathan Schwartz, resigned immediately after the acquisition was complete. John Fowler, Executive VP of Sun's systems group, remained at Oracle as Executive Vice President of Hardware Engineering.
Simon Phipps, Sun's Chief Open Source Officer, left the company in March 2010.
== OpenSolaris and Solaris ==
In early 2010, troubling signals began to emerge concerning the future of OpenSolaris, including its absence from Oracle product roadmaps.
In August 2010, a leaked internal memo indicated that Oracle would no longer release OpenSolaris distributions, including the long-delayed pending release, OpenSolaris 2010.05. The same memo announced that Oracle would no longer release Solaris source code as it was developed, instead only publishing it after the release of each Solaris version. Since Oracle was no longer supporting all the development of an open version of Solaris, the OpenSolaris Governing Board disbanded shortly after this was revealed, ending the project. Independent development continues with the Illumos fork.
On September 2, 2017, Phipps reported that Oracle had laid off virtually all of its Solaris core development staff, interpreting it as a sign that Oracle no longer intends to support future development of the platform.
== MySQL petition and forks ==
A major issue discussed in media and considered by the EU Commission was Oracle's acquisition of MySQL, an open-source competitor to Oracle acquired by Sun in February 2008, as part of the deal.
In response, several forks were made with the intent to ensure the future success of MySQL despite being purchased by its biggest competitor. These include Drizzle (discontinued) and MariaDB (actively developed). Monty Widenius, one of the founders of MySQL, also started a petition asking that MySQL either be divested to a third party, or have its licensing changed to be less restrictive than the previous GPL terms it operated under prior to and during its ownership by Sun.
== Java Android lawsuit ==
Oracle filed a patent infringement lawsuit against Google over its use of Java in the Android platform. Android apps run in the Dalvik virtual machine. The apps are written in Java but are compiled into Dalvik's custom bytecode format, which is incompatible with standard Java runtime environments. Google thus avoided licensing fees associated with J2ME, the mobile version of Java. However, aspects of the Dalvik system are very similar to the Java technology patented by Sun and now Oracle.
The court found that Oracle's primary copyright claim, based on the Java Application Programming Interface (API), failed because the portions Google reused were not copyrightable. Google was found liable for a small amount of literal code copying. Oracle was limited to statutory damages for these claims. The jury found that Google did not infringe Oracle's patents.
Oracle appealed to the Federal Circuit, and Google filed a cross-appeal on the literal copying claim. The hearing was held on December 4, 2013, and the judgement was released on May 9, 2014. The circuit court reversed the district court on the central issue, holding that the "structure, sequence and organization" of an API was copyrightable. It also ruled for Oracle regarding the small amount of literal copying, holding that it was not de minimis. The case was remanded back to the district court for reconsideration of the fair use defense.
A jury determined in 2016 that Google's use of Oracle's APIs was legal under the copyright law's fair use doctrine. Oracle appealed the decision. On March 27, 2018, an appeals court ruled Google violated copyright laws when it used Oracle's open-source Java software to build the Android platform in 2009. "There is nothing fair about taking a copyrighted work verbatim and using it for the same purpose and function as the original in a competing platform," a panel of three Federal Circuit judges concluded.
The Supreme Court issued its decision on April 5, 2021. In a 6–2 majority, the Court ruled that Google's use of the Java APIs was within the bounds of fair use, reversing the Federal Circuit Appeals Court ruling and remanding the case for further hearing.
== Apache Software Foundation resignations ==
The Apache Software Foundation resigned its seat on the Java SE/EE Executive Committee due to Oracle's refusal to provide a technology compatibility kit (TCK) to the ASF for its Apache Harmony open-source implementation of Java.
== OpenOffice resignations and forks ==
After Oracle ended OpenSolaris, some members of the similarly open source OpenOffice.org Project became worried about their project's future with Oracle. They formed The Document Foundation and created the LibreOffice fork. The LibreOffice brand was hoped to be provisional, as Oracle had been invited to join The Document Foundation and donate the OpenOffice.org brand.
In response, Oracle demanded that all members of the OpenOffice.org Community Council involved with The Document Foundation step down from the council, citing a conflict of interest. Many community members decided to leave for LibreOffice, which already had the support of Red Hat, Novell, Google, and Canonical. LibreOffice produced its first release in January 2011.
In June 2011 Oracle contributed the OpenOffice.org trademarks and source code to the Apache Software Foundation, which Apache re-licensed under the Apache License. IBM donated the Lotus Symphony codebase to the Apache Software Foundation in 2012. The developer pool for the Apache project was seeded by IBM employees, and Symphony codebase was included in Apache OpenOffice.
== Hudson/Jenkins fork ==
During November 2010, an issue arose in the Hudson community with respect to the infrastructure used. This grew to encompass questions over the stewardship and control by Oracle. Negotiations between the principal project contributors and Oracle took place. There were many areas of agreement, but a key sticking point was the trademarked name "Hudson", after Oracle claimed the right to the name and applied for a trademark in December 2010. As a result, on January 11, 2011, a call for votes was made to change the project name from "Hudson" to "Jenkins". The proposal was overwhelmingly approved by community vote on January 29, 2011, creating the Jenkins project. On February 1, 2011, Oracle said that they intended to continue development of Hudson, and considered Jenkins a fork rather than a rename. Jenkins and Hudson therefore continue as two independent projects, each claiming the other is the fork.
== Grid Engine ==
Oracle Grid Engine (previously Sun Grid Engine) was changed to a close-source commercial-only product.
== Program closures ==
Project Kenai, a SourceForge-like project for Java apps, was migrated to Java.net by Oracle.
Project Darkstar, a project to investigate and create solutions for issues in massive online gaming environments, was closed by Oracle on February 2, 2010.
== Customer relations ==
Oracle has changed the software support model to also require hardware support. The new policy states "when acquiring technical support, all hardware systems must be supported (e.g., Oracle Premier Support for Systems or Oracle Premier Support for Operating Systems) or unsupported."
In March 2010 the Solaris 10 download license changed to limit unpaid use to 90 days.
== Virtualization ==
In 2013, Oracle stopped development of several former Sun virtualization solutions, including Virtual Desktop Infrastructure (VDI), Sun Ray, and Oracle Virtual Desktop Client. Two other virtualization technologies acquired from Sun, Oracle Secure Global Desktop and VirtualBox, remained as products.
== See also ==
Acquisition of the IBM PC business by Lenovo
== References == | Wikipedia/Acquisition_of_Sun_Microsystems_by_Oracle_Corporation |
Lighthouse Design Ltd. was an American software company that operated from 1989 to 1996. Lighthouse developed software for NeXT computers running the NeXTSTEP operating system. The company was founded in 1989 by Alan Chung, Roger Rosner, Jonathan Schwartz, Kevin Steele and Brian Skinner, in Bethesda, Maryland. Lighthouse later moved to San Mateo, California. In 1996, Lighthouse was acquired by Sun Microsystems.
== History ==
Two of the first products developed at Lighthouse were Diagram! and Exploder.
Diagram! was a drawing tool, originally called BLT (for Box-and-Line Tool), in which objects (boxes) are connected together using "smart links" (lines) to construct diagrams such as flow charts.
Exploder was a programming tool for storing Objective-C objects in a relational database. Lighthouse marketed Diagram! directly, and in 1991 spun off Exploder into a new startup, Persistence Software. Persistence Software went public with an IPO on June 25, 1999.
Lighthouse went on to develop and acquire more software products, and marketed an office suite for NeXTSTEP, which included ParaSheet (a traditional spreadsheet), Quantrix (a spreadsheet program based on Lotus Improv), Diagram!, TaskMaster (a project management program), WetPaint (an image editing/retouching program), LightPlan (an OMT-based computer data modeling tool, based on Diagram!), and Concurrence (a presentation program).
In the early 1990s, Sun Microsystems entered a major partnership with NeXT to develop OpenStep, essentially a cross-platform version of the "upper layers" of the NeXTSTEP operating system. OpenStep would provide a NeXT-like system running on top of any suitably powerful underlying operating system, in Sun's case, Solaris. Sun planned a distributed computing environment, with users running OpenStep on the desktop, and the transaction processing occurring on servers in the back-office. The two would communicate with NeXT's Portable Distributed Objects technology, which was known as Distributed Objects Everywhere (DOE), later released as NEO.
In mid-1996, Sun purchased Lighthouse for $22 million, turning them into their in-house OpenStep applications group. At the time, Scott McNealy had visions of turning Sun into a powerhouse that would compete head-to-head with Microsoft, and an office applications suite was a requirement for any such plan. Lighthouse's applications were not up to par with Microsoft Office as a whole, but certainly could have been developed into a direct competitor with additional development.
But even as the purchase of Lighthouse was going through, Sun was already turning their attention from DOE/NEO on the back-end and OpenStep on the front-end to "Java everywhere". Java was seen as a better solution to infiltrating Sun into the applications market, as it ran on all platforms, not just those supported by OpenStep. Lighthouse was soon moved into the JavaSoft division, becoming the Java Applications Group.
A problem with this move was that porting Lighthouse's OpenStep applications, written in Objective-C, to Java would have been almost impossible. Additionally, Sun was worried that releasing its own suite would make third-party developers less interested in the platform (see Claris), as they would have to compete with Sun directly in the office application space. Some attempts were made: LightPlan was ported to Java and released as JavaPlan (and also switched from OMT to UML). Sun eventually gave up on the idea, if it ever entertained it seriously in the first place, abandoning the office application market for many years.
Later, OmniGroup cloned Diagram! as OmniGraffle, which conceptually operates in much the same way as Diagram! and the original BLT.
It was not until 1999 that Sun once again entered this market. Oddly, it did so not with a Java suite, but by purchasing the C++-based StarOffice suite. According to Jonathan Schwartz, the former chief executive officer of Lighthouse, the Lighthouse application suite would probably never again be offered to the public.
Lighthouse co-founder Schwartz continued to move up through the ranks at Sun, becoming the head of its software division in 2002, and in April 2006 was named Sun's CEO and President.
== See also ==
OmniWeb
== References ==
== External links ==
Archive of Lighthouse Design's products. Accessed on June 6, 2011. | Wikipedia/Lighthouse_Design |
Applied Materials, Inc. is an American corporation that supplies equipment, services and software for the manufacture of semiconductor (integrated circuit) chips for electronics, flat panel displays for computers, smartphones, televisions, and solar products. The company also supplies equipment to produce coatings for flexible electronics, packaging and other applications. The company is headquartered in Santa Clara, California, and is the second largest supplier of semiconductor equipment in the world based on revenue behind Dutch company ASML.
== History ==
Founded in 1967 by Michael A. McNeilly and others, Applied Materials went public in 1972 on the National Association of Securities Dealers Automated Quotations (NASDAQ), a then-recently established stock exchange. In subsequent years, the company diversified, until James C. Morgan became CEO in 1976 and returned the company's focus to its core business of semiconductor manufacturing equipment. By 1978, sales increased by 17%.
In 1984, Applied Materials became the first U.S. semiconductor equipment manufacturer to open its own technology center in Japan, and the first semiconductor equipment company to operate a service center in China. In 1987, Applied introduced a chemical vapor deposition (CVD) machine called the Precision 5000, which differed from existing machines by incorporating diverse processes into a single machine that had multiple process chambers.
In 1992, the corporation settled a lawsuit with three former employees for an estimated $600,000. The suit complained that the employees were driven out of the company after complaining about the courses Applied Scholastics had been hired to teach there.
In 1993, the Applied Materials' Precision 5000 was inducted into the Smithsonian Institution's permanent collection of Information Age technology.
In November 1996, Applied Materials acquired two Israeli companies for an aggregate amount of $285 million: Opal Technologies and Orbot Instruments for $175 million and $110 million in cash, respectively. Orbot produces systems for inspecting patterned silicon wafers for yield enhancement during the semiconductor manufacturing process, as well as systems for inspecting masks used during the patterning process. Opal develops and manufactures high-speed metrology systems used by semiconductor manufacturers to verify critical dimensions during the production of integrated circuits.
In 2000, Etec Systems, Inc. was purchased. On June 27, 2001, Applied Materials acquired Israeli company Oramir Semiconductor Equipment Ltd., a supplier of laser cleaning technologies for semiconductor wafers, in a purchase business combination for $21 million in cash.
In January 2008, Applied Materials purchased Baccini, an Italian company and designer of tools used in manufacturing solar cells.
In 2009, Applied Materials opened its Solar Technology Center, the world's largest commercial solar energy research and development facility, in Xi'an, China.
Applied Materials acquired Semitool Inc. in December 2009, and announced its acquisition of Varian Semiconductor in May 2011. Applied Materials then announced a planned merger with Tokyo Electron on September 24, 2013. If it had been approved by government regulators, the proposed combined company, to be called Eteris, would have been the world's largest supplier of semiconductor processing equipment, with a total market value of $29 billion. However, on April 27, 2015, Applied Materials announced that its merger with Tokyo Electron had been scrapped due to antitrust concerns and fears of dominating the semiconductor equipment industry.
In 2015, Applied Materials left the solar wafer sawing and the solar ion implantation businesses.
Applied Materials was named among FORTUNE World's Most Admired Companies in 2018.
In 2019, Applied Materials announced its intention to buy semiconductor equipment manufacturer (and former Hitachi group member) Kokusai Electric Corporation from private equity firm KKR for $2.2 billion, but terminated the deal in March 2021 citing delays in getting approval from China's regulator.
In November 2023, Applied Materials was reported to be under criminal investigation by the United States Department of Justice for routing equipment to Semiconductor Manufacturing International Corporation via South Korea in violation of US sanctions.
== Finances ==
For the fiscal year 2021, Applied Materials reported earnings of US$5.888 billion, with an annual revenue of US$23.063 billion, a 34% increase over the previous fiscal year. Applied Materials' market capitalization was valued at over US$36.6 billion in November 2018.
== Organization ==
Applied is organized into three major business sectors: Semiconductor Products, Applied Global Services, and Display and Adjacent Markets. Applied Materials also operates a venture investing arm called Applied Ventures.
=== Semiconductor Products ===
The company develops and manufactures equipment used in the wafer fabrication steps of creating a semiconductor device, including atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), rapid thermal processing (RTP), chemical mechanical polishing (CMP), etch, ion implantation and wafer inspection. The company acquired Semitool for this group in late 2009. In 2019, Applied Materials agreed to buy semiconductor manufacturer Kokusai for $2.2 billion.
=== Applied Global Services ===
The Applied Global Services (AGS) group offers equipment installation support and warranty extended support, as well as maintenance support. AGS also offers new and refurbished equipment, as well as upgrades and enhancements for installed base equipment. This sector also includes automation software for manufacturing environments.
=== Display and Adjacent Markets ===
This sector combined an existing business unit with the display business of Applied Films Corporation, acquired in mid-2006.
The manufacturing process for TFT LCDs (thin film transistor liquid crystal displays), commonly employed in computer monitors and televisions, is similar to that employed for integrated circuits. In cleanroom environments both TFT-LCD and integrated circuit production use photolithography, chemical and physical vapor deposition, and testing.
=== Energy and Environmental Solutions (former sector) ===
In 2006, the company acquired Applied Films, a glass coating and web coating business. Also in 2006, Applied announced it was entering the solar manufacturing equipment business. The solar, glass and web businesses were organized into the company's Energy and Environmental Solutions (EES) sector.
In 2007, Applied Materials announced the Applied SunFab thin film photovoltaic module production line, with single or tandem junction capability. SunFab applies silicon thin film layers to glass substrate that then produce electricity when exposed to sunlight. In 2009, the company's SunFab line was certified by the International Electrotechnical Commission (IEC). In 2010, Applied announced that it was abandoning the thin film market and closing down their SunFab division. Also in 2007, the company acquired privately held, Switzerland-based HCT Shaping Systems SA, a specialist in wafer sawing tools for both solar and semiconductor wafer manufacture, paying approximately $475 million.
In 2008, Applied acquired privately held, Italy-based Baccini SpA for $330M, a company that worked in the metallization steps of solar cell manufacturing. The company was listed at the top of VLSI Research's list of suppliers of photovoltaic manufacturing equipment for 2008, with sales of $797M.
Since July 2016 the Energy and Environmental Solutions sector is no longer reported separately. Remaining solar business activities have been included in "Corporate and Others".
== Locations ==
Applied moved into its Bowers Avenue headquarters in Santa Clara, California, in 1974 and operates in Europe, Japan, Canada, the United States, Israel, China, Italy, India, Korea, Southeast Asia, Singapore and Taiwan.
== Management ==
Chairman of the Board of Directors: Thomas J. Iannotti
President and chief executive officer: Gary E. Dickerson
Chief Financial Officer: Brice Hill
Chief Technology Officer: Omkaram Nalamasu
== See also ==
Lam Research
San Francisco Bay Area portal
Companies portal
== References ==
== External links ==
Official website
Business data for Applied Materials, Inc.: | Wikipedia/Applied_Materials |
Montalvo Systems was a Silicon Valley start-up reportedly working on an asymmetrical, x86 capable processor similar to the Cell microprocessor. The processor was to use high-performance cores for performance-intensive threads, and delegate minor tasks to the simpler cores to save silicon and power. Matt Perry, former Transmeta CEO, was CEO and president of Montalvo; Peter Song, founder of failed x86 manufacturer MemoryLogix, was chief architect. Greg Favor (former NexGen/AMD) was responsible for chip microarchitecture and Carlos Puchol (former architect for power management at Transmeta and Nvidia) was system and power architect. Another founding member, Kevin Lawton, of bochs (x86 emulation) and plex86 (x86 virtualization) fame, was the processor simulator architect.
The official description of business from Montalvo's security filings was:
A fabless semiconductor company developing ultra low-power system-on-chips for mobile devices.
As of 24 April 2008, Sun Microsystems had acquired the company's assets for an undisclosed sum.
== Locations ==
Headquarters were in Santa Clara, California, next door to the remnants of Transmeta and near Intel and Sun. It had offices in Boulder, Colorado and Bangalore, India. According to news reports, it had close to 300 employees.
In March 2008 news broke that Montalvo was seeking funds to avoid shutdown. According to a news article released on March 31, Montalvo had laid off two-thirds of its engineers. At the same time, rumors surfaced that Sun Microsystems was in talks to buy Montalvo. About three weeks later, on 24 April 2008, The Register confirmed the rumors to be true.
== Finances ==
From the Cal-EASI database, the following information is available about Montalvo's financing.
== News ==
2008-04-24 Sun buys low-power x86 disaster Montalvo
2008-04-03 Sun Microsystems could use Montalvo as a strategic lever against Intel
2008-04-01 Sun close to buying Intel would-be competitor Montalvo
2008-03-31 Montalvo Systems cuts two thirds of staff
2008-03-31 Rumor: Intel competitor Montalvo bracing for cuts
2008-03-20 Montalvo seeking a hoard of cash to avoid shutdown
2008-02-18 VIA Continues Transition From Chipsets To CPU To Profitability. Skeptical on Montalvo X86 Chip Success
2008-02-15 Montalvo, a competitor of Intel and AMD, not yet born and already in trouble
2008-02-14 Secret recipe inside Intel's latest competitor
2008-02-13 Cash-burning Montalvo tapes out Silverthorne rival
2008-02-06 Silent start-up readies to take on Intel in notebooks
2007-06-05 Montalvo CFO leaves and joins Agami Systems
2006-08-25 Is that a VMware CTO and Transmeta CEO at your start-up?
2006-08-19 Former Transmeta CEO goes at Intel with another low-power chip
2005-10-27 Chip start-up Montalvo looks to speed mobile devices
== References == | Wikipedia/Montalvo_Systems |
Planar Systems, Inc. is an American digital display manufacturing corporation with a facility in Hillsboro, Oregon. Founded in 1983 as a spin-off from Tektronix, it was the first U.S. manufacturer of electroluminescent (EL) digital displays. Planar currently makes a variety of other specialty displays, and has been an independent subsidiary of Leyard Optoelectronic Co. since 2015. The headquarters, leadership team and employees still remain in Hillsboro, Oregon.
== History ==
=== 1980s ===
Planar was founded on May 23, 1983 by Jim Hurd, Chris King, John Laney and others as a spin-off from the Solid State Research and Development Group of the Beaverton, Oregon, based Tektronix. In 1986, a division spun off from Planar to work on projection technology and formed InFocus.
=== 1990s ===
In 1991, Planar purchased FinLux, a competitor in Espoo, Finland. This location now serves as the company's European headquarters. Planar's executives took the company public in 1993, listing the stock on the NASDAQ. Planar acquired Tektronix's avionics display business, creating the short-lived Planar Advance, in 1994. Standish Industries, a manufacturer of flat panel LCDs in Lake Mills, Wisconsin, was sold to Planar in 1997. This plant was closed in 2002 as worldwide LCD manufacturing shifted to East Asian countries.
=== 2000s ===
On April 23, 2002, DOME Imaging Systems was purchased by Planar and became the company's medical business unit. Planar acquired Clarity Visual Systems (founded by former InFocus employees) on September 12, 2006, now referred to as the Control Room and Signage business unit. On June 19, 2006, Planar acquired Runco International, a leading brand in the high-end, custom home theater market. On August 6, 2008, Planar sold its medical business unit to NDS Surgical Imaging.
=== 2010s ===
In November 2012, Planar announced the sale of its electroluminescent business to Beneq Oy, a supplier of production and research equipment for thin film coatings. Under the terms of the transaction, consideration consists of a $6.5 million base purchase price, of which $3.9 million was paid in cash at closing and $2.6 million was paid in the form of a promissory note. Planar was purchased by Leyard Optoelectronic Co. of China in 2015 for $157 million. It became a subsidiary after formerly trading on the NASDAQ under the symbol PLNR.
In November 2016, Planar announced that it was to enter a merger agreement with NaturalPoint Inc., which sells infrared point tracking systems for use on CGI movie sets (Optitrack), and home use both for assisted computing (Smartnav) and computer gaming (TrackIR). The merger was finalized in January 2017. NaturalPoint will remain a separate business with its own executive team, customers, and market initiatives.
=== 2020s ===
In 2020, a nearly 32-foot-long, 5-foot-high Planar TVF Series LED video wall was added to Lea County Communication Authority (LCCA)’s Lea County 911 Call Center.
Planar completed the latest of three installations at the University of Oregon. The addition of Planar CarbonLight CLI Flex pliable LED video wall displays, custom designed into two curved LED installations at Matthew Knight Arena, follows the company's deployments at the university's Hatfield-Dowlin Complex in 2013 and Student Recreation Center in 2015.
The company also expanded its presence at Clemson University in Clemson, South Carolina, adding 126 Planar LCD displays and two Planar LED video walls in the Wilbur O. and Ann Powers College of Business. 200 Planar displays also appear in the university's four-story Watt Family Innovation Center following an installation in 2016.
On November 10, 2020, Planar expanded its US government division, enhancing the company's product security program to better adapt its products and processes to customers' security needs.
== Operations ==
Planar currently assembles and services videowalls, projectors, and other displays in Hillsboro. Planar's EL manufacturing operations were consolidated into Planar's Espoo, Finland facility in 2002. Additional large-format displays are assembled and integrated in Albi, France.
== Leyard Merger ==
On November 27, 2015, Planar closed its sale to become a subsidiary of Leyard Optoelectronic Co., a Chinese LED display product corporation. Headquarters operations for Planar remain in Beaverton, OR following the sale.
== Locations ==
In addition to its Oregon, U.S. headquarters, Planar has worldwide reach. Its sales offices are located in Europe, North America, and Asia. It has manufacturing facilities in France, North America, and Finland.
== See also ==
Silicon Forest
List of companies based in Oregon
== References ==
== External links ==
Hoover's Profile of Planar Systems
International Directory of Company Histories, Volume 61 (1990) via Answers.com
Planar Systems topic at New York Times | Wikipedia/Planar_Systems |
The Network Information Service, or NIS (originally called Yellow Pages or YP), is a client–server directory service protocol for distributing system configuration data such as user and host names between computers on a computer network. Sun Microsystems developed NIS; the technology is licensed to virtually all other Unix vendors.
Because British Telecom PLC owned the name "Yellow Pages" as a registered trademark in the United Kingdom for its paper-based, commercial telephone directory, Sun changed the name of its system to NIS, though all the commands and functions still start with "yp".
A NIS/YP system maintains and distributes a central directory of user and group information, hostnames, e-mail aliases and other text-based tables of information in a computer network. For example, in a common UNIX environment, the list of users for identification is placed in /etc/passwd and secret authentication hashes in /etc/shadow. NIS adds another "global" user list which is used for identifying users on any client of the NIS domain.
Administrators have the ability to configure NIS to serve password data to outside processes to authenticate users using various versions of the Unix crypt(3) hash algorithms. However, in such cases, any NIS client can retrieve the entire password database for offline inspection.
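The security concern above can be illustrated with a small sketch. The fragment below parses a hypothetical dump of an NIS "passwd" map of the kind the `ypcat passwd` command returns on a configured client; the account entries are invented, but each line follows the standard colon-separated passwd(5) field layout, which is what lets any client on the domain enumerate accounts and collect password hashes for offline inspection.

```python
# Hypothetical output of "ypcat passwd" on an NIS client (entries invented).
# Each line uses the standard passwd(5) format:
#   name:passwd-hash:uid:gid:gecos:home:shell
nis_passwd_dump = """\
alice:$6$Zf1a$notarealhashjustillustration:1001:100:Alice:/home/alice:/bin/sh
bob:Ep6mckrOLChF.:1002:100:Bob:/home/bob:/bin/bash
"""

def parse_passwd_map(dump: str):
    """Split a passwd-style NIS map into per-account dictionaries."""
    entries = []
    for line in dump.strip().splitlines():
        name, pw_hash, uid, gid, gecos, home, shell = line.split(":")
        entries.append({
            "name": name,
            "hash": pw_hash,   # visible to every client: usable for offline cracking
            "uid": int(uid),
            "gid": int(gid),
            "shell": shell,
        })
    return entries

accounts = parse_passwd_map(nis_passwd_dump)
print([a["name"] for a in accounts])  # -> ['alice', 'bob']
```

Nothing in this sketch is specific to NIS beyond the map name; the point is that the map contents are plain text in a well-known format, with no per-client access control on the hashes.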
== Successor technologies ==
The original NIS design was seen to have inherent limitations, especially in the areas of scalability and security, so other technologies have come to replace it.
Sun introduced NIS+ as part of Solaris 2 in 1992, with the intention for it to eventually supersede NIS. NIS+ features much stronger security and authentication features, as well as a hierarchical design intended to provide greater scalability and flexibility. However, it was also more cumbersome to set up and administer, and was more difficult to integrate into an existing NIS environment than many existing users wished. NIS+ was removed from Solaris 11.
As a result, many users chose to remain with NIS, and over time other modern and secure distributed directory systems, most notably Lightweight Directory Access Protocol (LDAP), came to replace it. For example, slapd (the standalone LDAP daemon) generally runs as a non-root user, and SASL-based encryption of LDAP traffic is natively supported.
On large LANs, DNS servers may provide better nameserver functionality than NIS or LDAP can provide, leaving just site-wide identification information for NIS master and slave systems to serve. However, some functions—such as the distribution of netmask information to clients, as well as the maintenance of e-mail aliases—may still be performed by NIS or LDAP. NIS maintains an NFS database information file as well as so-called maps.
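On Linux and other systems using the Name Service Switch (listed under "See also" below), the mix of local files, DNS, NIS, and LDAP described above is chosen per database in /etc/nsswitch.conf; migrating away from NIS typically amounts to swapping the backend keyword. The following fragment is illustrative, not drawn from any particular distribution's defaults:

```text
# /etc/nsswitch.conf -- lookup source order for each database
passwd:  files nis     # local /etc/passwd first, then the NIS passwd map
group:   files nis
hosts:   files dns     # hostnames resolved via DNS rather than NIS
# After a migration to LDAP, the NIS entries would become, e.g.:
# passwd: files ldap
```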
== See also ==
Dynamic Host Configuration Protocol (DHCP)
Hesiod (name service)
Name Service Switch (NSS)
Network information system, for a broader use of NIS to manage other system and networks
== References ==
== External links ==
Thorsten Kukuk (2003-07-01). "The Linux NIS(YP)/NYS/NIS+ HOWTO". Linux Documentation Project.
Van Emery (2005-04-15). "Distributed Authentication System (DAS) Handbook". Archived from the original on 2006-07-15.
Kristy Westphal (2001-01-22). "NFS and NIS Security". Symantec.
"Red Hat Enterprise Linux 6: 2.2.3. Securing NIS". Red Hat.
Frédéric Raynal (2001-06-29). "Yellow Pages, part 1". ibiblio.
Alexander Bokovoy, Sr. Principal Software Engineer. "RHEL 9 will remove support for NIS" (slide show).
Diodes Incorporated is a global manufacturer and supplier of application specific standard products within the discrete, logic, analog, and mixed-signal semiconductor markets. Diodes serves the consumer electronics, computing, communications, industrial, and automotive markets.
Diodes' products include diodes, rectifiers, transistors, MOSFETs, protection devices, functional specific arrays, single gate logic, amplifiers and comparators, Hall effect and temperature sensors; power management devices, including LED drivers, AC-DC converters and controllers, DC-DC switching and linear voltage regulators, and voltage references along with special function devices, such as USB power switches, load switches, voltage supervisors, and motor controllers. Diodes Incorporated also has timing, connectivity, switching, and signal integrity solutions for high-speed signals. In January 2024 the company announced three dual-channel power-switches.
The company's product focus is on end-user equipment markets such as satellite TV set-top boxes, portable DVD players, datacom devices, ADSL modems, power supplies, medical devices (non-life support devices/systems), PCs and notebooks, flat panel displays, digital cameras, mobile handsets, AC-to-DC and DC-to-DC conversion, Wireless 802.11 LAN access points, brushless DC motor fans, serial connectivity, and automotive applications.
Over the years, Diodes Incorporated grew by acquiring other semiconductor companies. Notable acquisitions include Zetex Semiconductors (2008), Power Analog Microelectronics, Inc. (2012), Pericom Semiconductor (2015), Texas Instruments' Greenock wafer fabrication plant (2019), and Lite-On Semiconductor (2020).
On 3 June 2022, Diodes completed the acquisition of onsemi's South Portland wafer fabrication facility, known as SPFAB, and its operations, including the transfer of all of its employees.
On 26 December 2023, Diodes announced that Gary Yu would become president as of 2 January 2024 and Dr. Keh Shew Lu will remain chairman and CEO until at least 31 May 2027.
== References == | Wikipedia/Diodes_Incorporated |
The Network Computer (or NC) was a diskless desktop computer device made by Oracle Corporation from about 1996 to 2000. The devices were designed and manufactured by an alliance, which included Sun Microsystems (acquired by Oracle in 2010), IBM, and others. The devices were designed with minimum specifications, based on the Network Computer Reference Profile. The brand was also employed as a marketing term to try to popularize this design of computer within enterprise and among consumers.
The NC brand was mainly intended to inspire a range of desktop computers from various suppliers that, by virtue of their diskless design and use of inexpensive components and software, were cheaper and easier to manage than standard fat client desktops. However, due to the commoditization of standard desktop components, and due to the increasing availability and popularity of various software options for using full desktops as diskless nodes, thin clients, and hybrid clients, the Network Computer brand never achieved the popularity hoped for by Oracle and was eventually mothballed.
The term "network computer" is now used for any diskless desktop computer or a thin client.
== History ==
The failure of the NC to make an impact on the scale predicted by Larry Ellison may have been caused by a number of factors. Firstly, prices of PCs quickly fell below $1,000, making it very hard for NCs to compete on price. Secondly, the software available for NCs was neither mature nor open.
Thirdly, the idea could simply have been ahead of its time, as at the NC's launch in 1996, the typical home Internet connection was only a 28.8 kbit/s modem dialup. This was simply insufficient for the delivery of executable content. The World Wide Web itself was not considered mainstream until its breakout year, 1998. Prior to this, very few Internet service providers advertised in mainstream press (at least outside of the US), and knowledge of the Internet was limited. This could have held back uptake of what would be seen as a very niche device with no (then) obvious appeal.
NCs ended up being used as the very 'dumb terminals' they were intended to replace, as the proprietary backend infrastructure was not readily available. 1990s-era NCs are often network-booted into a minimal Unix with X, to serve as X terminals. While NC purists may consider this to be a suboptimal use of NC hardware, the NCs work well as terminals, and are considerably cheaper than purpose-built terminal hardware.
== NC standards and drafts ==
=== Reference Profile ===
The initial Network Computing standard, the Network Computer Reference Profile (NCRef), required that all 'NC' appliances support HTML, Java, HTTP, JPEG, and other key standards.
=== Other standards ===
Because many NCs did not use Intel CPUs or Microsoft software, Microsoft and Intel developed a competing standard called NetPC. Other alternatives to the NCRef were WeBRef (Motorola and HDS Network Systems) and Odin (National Semiconductor). The HDS @workStation was stated to ship by the end of June 1996.
=== NC extensions ===
== NC implementations ==
=== Acorn Network Computer ===
The Acorn Network Computer was Oracle's initial reference implementation of the NC. Its development was subcontracted to British company Acorn Computers, who adapted its own RISC OS to create NCOS. Acorn made use of local partner companies ANT, Icon Technology and Design Edge to fulfil their contract.
=== Macintosh NC ===
In 1997 Apple announced the Mac NC, its attempt to develop the Pippin into a network computer platform. By the end of 1997, Steve Jobs discontinued all Macintosh clone efforts, effectively killing the Pippin, although key components of the Mac NC technology were inherited by the original iMac.
=== NetProducts NetStation ===
The first generation NetStation design and the NetStation trademark was licensed to NChannel, which provided the consumer equipment and Internet service (with associated infrastructure) for the UK market. After a few months, NChannel split into two entities: NetChannel (which provided the Internet service) and NetProducts which provided the consumer hardware.
NetProducts started working with Acorn to develop a next-generation product, NetStation II and started developing an email-only set-top-box (the TVemail). NetProducts went into voluntary liquidation in 1998 before either project was completed.
=== Sun Microsystems JavaStation ===
Sun Microsystems developed the JavaStation, a JavaOS-based NC based on SPARC hardware, initially similar to Sun's range of Unix workstations.
=== IBM Network Station ===
IBM launched its Network Station in September 1996. As with the later reference design, the Network Station used a NetBSD-based NCOS booted over a LAN from an AS/400 or IBM PC server. The Network Station supported local execution of basic applications, such as a web browser and console. In addition, X capability was also implemented to allow both locally and remotely run applications to be used on the same machine. In practice, the lack of real applications meant that this was little more than a hardware X terminal.
The IBM Network Station was originally based on the PowerPC architecture, but the final few models used Intel Pentium processors.
== Contemporary analogy ==
The idea behind the NC can be seen as existing in contemporary times in the system of cloud computing and in particular ChromeOS. In Wired magazine, Daniel Roth claims that the failure of the network computer eventually led to the development of cloud computing. A large contribution to this transition was attributed to Eric Schmidt, once the CTO of Sun Microsystems, a proponent of the network computer, who eventually became the CEO of Google. Google is a large purveyor of cloud technology, "most notably Google Docs and Spreadsheets".
== See also ==
== References ==
== External links ==
FAQ from the network computers Usenet newsgroup
Contemporary press coverage of early NC pre-announcements: https://archive.today/20130119192531/http://news.com.com/Oracle+down+to+brass+tacks+for+NC/2100-1001_3-243680.html | Wikipedia/Network_Computer |
A computer network is a collection of communicating computers and other devices, such as printers and smart phones. In order to communicate, the computers and devices must be connected by wired media, such as copper cables or optical fibers, or by wireless communication. The devices may be connected in a variety of network topologies. In order to communicate over the network, computers use agreed-upon rules, called communication protocols, over whatever medium is used.
The computer network can include personal computers, servers, networking hardware, or other specialized or general-purpose hosts. They are identified by network addresses and may have hostnames. Hostnames serve as memorable labels for the nodes and are rarely changed after initial assignment. Network addresses serve for locating and identifying the nodes by communication protocols such as the Internet Protocol.
Computer networks may be classified by many criteria, including the transmission medium used to carry signals, bandwidth, communications protocols to organize network traffic, the network size, the topology, traffic control mechanisms, and organizational intent.
Computer networks support many applications and services, such as access to the World Wide Web, digital video and audio, shared use of application and storage servers, printers and fax machines, and use of email and instant messaging applications.
== History ==
Computer networking may be considered a branch of computer science, computer engineering, and telecommunications, since it relies on the theoretical and practical application of the related disciplines. Computer networking was influenced by a wide array of technological developments and historical milestones.
In the late 1950s, a network of computers was built for the U.S. military Semi-Automatic Ground Environment (SAGE) radar system using the Bell 101 modem. It was the first commercial modem for computers, released by AT&T Corporation in 1958. The modem allowed digital data to be transmitted over regular unconditioned telephone lines at a speed of 110 bits per second (bit/s).
In 1959, Christopher Strachey filed a patent application for time-sharing in the United Kingdom and John McCarthy initiated the first project to implement time-sharing of user programs at MIT. Strachey passed the concept on to J. C. R. Licklider at the inaugural UNESCO Information Processing Conference in Paris that year. McCarthy was instrumental in the creation of three of the earliest time-sharing systems (the Compatible Time-Sharing System in 1961, the BBN Time-Sharing System in 1962, and the Dartmouth Time-Sharing System in 1963).
In 1959, Anatoly Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organization of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centers. Kitov's proposal was rejected, as later was the 1962 OGAS economy management network project.
In 1960, the commercial airline reservation system semi-automatic business research environment (SABRE) went online with two connected mainframes.
In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1965, Western Electric introduced the first widely used telephone switch that implemented computer control in the switching fabric.
Throughout the 1960s, Paul Baran and Donald Davies independently invented the concept of packet switching for data communication between computers over a network. Baran's work addressed adaptive routing of message blocks across a distributed network, but did not include routers with software switches, nor the idea that users, rather than the network itself, would provide the reliability. Davies' hierarchical network design included high-speed routers, communication protocols and the essence of the end-to-end principle. The NPL network, a local area network at the National Physical Laboratory (United Kingdom), pioneered the implementation of the concept in 1968-69 using 768 kbit/s links. Both Baran's and Davies' inventions were seminal contributions that influenced the development of computer networks.
In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah. Designed principally by Bob Kahn, the network's routing, flow control, software design and network control were developed by the IMP team working for Bolt Beranek & Newman. In the early 1970s, Leonard Kleinrock carried out mathematical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.
In 1972, commercial services were first deployed on experimental public data networks in Europe.
In 1973, the French CYCLADES network, directed by Louis Pouzin was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself.
In 1973, Peter Kirstein put internetworking into practice at University College London (UCL), connecting the ARPANET to British academic networks, the first international heterogeneous computer network.
In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a local area networking system he created with David Boggs. It was inspired by the packet radio ALOHAnet, started by Norman Abramson and Franklin Kuo at the University of Hawaii in the late 1960s. Metcalfe and Boggs, with John Shoch and Edward Taft, also developed the PARC Universal Packet for internetworking.
In 1974, Vint Cerf and Bob Kahn published their seminal 1974 paper on internetworking, A Protocol for Packet Network Intercommunication. Later that year, Cerf, Yogen Dalal, and Carl Sunshine wrote the first Transmission Control Protocol (TCP) specification, RFC 675, coining the term Internet as a shorthand for internetworking.
In July 1976, Metcalfe and Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and in December 1977, together with Butler Lampson and Charles P. Thacker, they received U.S. patent 4063220A for their invention.
Public data networks in Europe, North America and Japan began using X.25 in the late 1970s and interconnected with X.75. This underlying infrastructure was used for expanding TCP/IP networks in the 1980s.
In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices.
In 1977, the first long-distance fiber network was deployed by GTE in Long Beach, California.
In 1979, Robert Metcalfe pursued making Ethernet an open standard.
In 1980, Ethernet was upgraded from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was developed by Ron Crane, Bob Garner, Roy Ogus, and Yogen Dalal.
In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added (as of 2018). The scaling of Ethernet has been a contributing factor to its continued use.
== Use ==
Computer networks enhance how users communicate with each other by using various electronic methods like email, instant messaging, online chat, voice and video calls, and video conferencing. Networks also enable the sharing of computing resources. For example, a user can print a document on a shared printer or use shared storage devices. Additionally, networks allow for the sharing of files and information, giving authorized users access to data stored on other computers. Distributed computing leverages resources from multiple computers across a network to perform tasks collaboratively.
== Network packet ==
Most modern computer networks use protocols based on packet-mode transmission. A network packet is a formatted unit of data carried by a packet-switched network.
Packets consist of two types of data: control information and user data (payload). The control information provides data the network needs to deliver the user data, for example, source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between.
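The header/payload/trailer layout described above can be sketched in Python. This is an illustrative toy format, not any real protocol: the header carries source and destination addresses plus a sequence number, and the trailer carries a CRC-32 error-detection code.

```python
import struct
import zlib

# Toy packet format (illustrative only, not a real protocol):
# header  = source address, destination address, sequence number (4 bytes each)
# trailer = CRC-32 error-detection code over header + payload
HEADER = struct.Struct("!III")

def build_packet(src, dst, seq, payload):
    header = HEADER.pack(src, dst, seq)
    crc = zlib.crc32(header + payload)
    return header + payload + struct.pack("!I", crc)

def parse_packet(packet):
    header, payload, trailer = packet[:12], packet[12:-4], packet[-4:]
    src, dst, seq = HEADER.unpack(header)
    (crc,) = struct.unpack("!I", trailer)
    intact = crc == zlib.crc32(header + payload)  # detect corruption in transit
    return src, dst, seq, payload, intact

pkt = build_packet(0x0A000001, 0x0A000002, 7, b"user data")
```

A receiver that recomputes the CRC over the header and payload and compares it with the trailer can detect whether the packet was corrupted in transit.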
With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link is not overused. Often the route a packet needs to take through a network is not immediately available. In that case, the packet is queued and waits until a link is free.
The physical link technologies of packet networks typically limit the size of packets to a certain maximum transmission unit (MTU). A longer message may be fragmented before it is transferred and once the packets arrive, they are reassembled to construct the original message.
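Fragmentation and reassembly can be sketched minimally: a message is cut into chunks no larger than the MTU, each tagged with its offset so the receiver can rebuild the original even when fragments arrive out of order.

```python
def fragment(message, mtu):
    """Split a message into (offset, chunk) pairs, each chunk at most `mtu` bytes."""
    return [(off, message[off:off + mtu]) for off in range(0, len(message), mtu)]

def reassemble(fragments):
    """Rebuild the original message; fragments may arrive out of order."""
    return b"".join(chunk for _, chunk in sorted(fragments))

message = b"a message longer than a single maximum transmission unit"
fragments = fragment(message, mtu=16)
fragments.reverse()  # simulate out-of-order arrival
```

Real protocols such as IPv4 also carry flags and identification fields to group fragments of different messages; this sketch omits those details.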
== Network topology ==
The physical or geographic locations of network nodes and links generally have relatively little effect on a network, but the topology of interconnections of a network can significantly affect its throughput and reliability. With many technologies, such as bus or star networks, a single failure can cause the network to fail entirely. In general, the more interconnections there are, the more robust the network is; but the more expensive it is to install. Therefore, most network diagrams are arranged by their network topology which is the map of logical interconnections of network hosts.
Common topologies are:
Bus network: all nodes are connected to a common medium and communicate along it. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2. This is still a common topology on the data link layer, although modern physical layer variants use point-to-point links instead, forming a star or a tree.
Star network: all nodes are connected to a special central node. This is the typical layout found in a small switched Ethernet LAN, where each client connects to a central network switch, and logically in a wireless LAN, where each wireless client associates with the central wireless access point.
Ring network: each node is connected to its left and right neighbor nodes, so that all nodes form a closed loop and each node can reach every other node by traversing the ring in either direction. Token ring networks, and the Fiber Distributed Data Interface (FDDI), made use of such a topology.
Mesh network: each node is connected to an arbitrary number of neighbors in such a way that there is at least one traversal from any node to any other.
Fully connected network: each node is connected to every other node in the network.
Tree network: nodes are arranged hierarchically. This is the natural topology for a larger Ethernet network with multiple switches and without redundant meshing.
The physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring, but the physical topology is often a star, because all neighboring connections can be routed via a central physical location. Physical layout is not completely irrelevant, however, as common ducting and equipment locations can represent single points of failure due to issues like fires, power failures and flooding.
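The robustness trade-off described above can be illustrated by modeling topologies as sets of links and checking whether every node remains reachable after a failure. The topology constructors and the breadth-first reachability check below are an illustrative sketch, not part of any standard.

```python
from itertools import combinations

def star(n):
    """Node 0 is the central node; every other node links only to it."""
    return {frozenset((0, i)) for i in range(1, n)}

def ring(n):
    """Each node links to its left and right neighbors, forming a loop."""
    return {frozenset((i, (i + 1) % n)) for i in range(n)}

def full_mesh(n):
    """Every node links to every other node."""
    return {frozenset(pair) for pair in combinations(range(n), 2)}

def still_connected(n, links, failed=frozenset()):
    """Breadth-first search: can every node still be reached from node 0?"""
    adjacency = {i: set() for i in range(n)}
    for link in links - failed:
        a, b = tuple(link)
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, frontier = {0}, [0]
    while frontier:
        for neighbor in adjacency[frontier.pop()] - seen:
            seen.add(neighbor)
            frontier.append(neighbor)
    return len(seen) == n
```

Removing a single link disconnects a star (the affected leaf is cut off), while a ring survives one failure because traffic can travel the other way around, and a full mesh tolerates many failures at the cost of far more links.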
=== Overlay network ===
An overlay network is a virtual network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet.
Overlay networks have been used since the early days of networking, back when computers were connected via telephone lines using modems, even before data networks were developed.
The most striking example of an overlay network is the Internet itself. The Internet itself was initially built as an overlay on the telephone network. Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network.
Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys.
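The key-to-node mapping of a distributed hash table is commonly built on consistent hashing. The sketch below is a minimal, illustrative version (the class and node names are invented for the example): nodes and keys are both hashed onto a ring, and a key is assigned to the first node at or after its position.

```python
import hashlib
from bisect import bisect

def ring_position(value):
    """Hash a string to a point on the ring (illustrative)."""
    return int(hashlib.sha256(value.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hashing sketch: a key is stored on the first
    node whose ring position is at or after the key's own position."""
    def __init__(self, nodes):
        self._ring = sorted((ring_position(n), n) for n in nodes)

    def node_for(self, key):
        positions = [p for p, _ in self._ring]
        index = bisect(positions, ring_position(key)) % len(self._ring)
        return self._ring[index][1]

ring = HashRing(["node-a", "node-b", "node-c"])
```

Because the mapping depends only on hashes, any participant can compute which node owns a key without consulting a central directory, and adding or removing a node relocates only the keys in its neighborhood of the ring.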
Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP multicast have not seen wide acceptance largely because they require modification of all routers in the network. On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination.
For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast, resilient routing and quality of service studies, among others.
== Network links ==
The transmission media (often referred to in the literature as the physical medium) used to link devices to form a computer network include electrical cable, optical fiber, and free space. In the OSI model, the software to handle the media is defined at layers 1 and 2 — the physical layer and the data link layer.
A widely adopted family of technologies that use copper and fiber media in local area networks (LANs) is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Wireless LAN standards use radio waves; others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.
=== Wired ===
The following classes of wired technologies are used in computer networking.
Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed local area network.
Twisted pair cabling is used for wired Ethernet and other standards. It typically consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 Mbit/s to 10 Gbit/s. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted-pair (STP). Each form comes in several category ratings, designed for use in various scenarios.
An optical fiber is a glass fiber. It carries pulses of light that represent data via lasers and optical amplifiers. Some advantages of optical fibers over metal wires are very low transmission loss and immunity to electrical interference. Using dense wave division multiplexing, optical fibers can simultaneously carry multiple streams of data on different wavelengths of light, which greatly increases the rate that data can be sent, to up to trillions of bits per second. Optic fibers can be used for long cable runs carrying very high data rates, and are used for undersea communications cables to interconnect continents. There are two basic types of fiber optics, single-mode optical fiber (SMF) and multi-mode optical fiber (MMF). Single-mode fiber has the advantage of being able to sustain a coherent signal for dozens or even a hundred kilometers. Multimode fiber is cheaper to terminate but is limited to a few hundred or even only a few dozen meters, depending on the data rate and cable grade.
=== Wireless ===
Network connections can be established wirelessly using radio or other electromagnetic means of communication.
Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 40 miles (64 km) apart.
Communications satellites – Satellites also communicate via microwave. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
Cellular networks use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area is served by a low-power transceiver.
Radio and spread spectrum technologies – Wireless LANs use a high-frequency radio technology similar to digital cellular. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.
The Interplanetary Internet extends the Internet to interplanetary dimensions via radio waves and optical means.
IP over Avian Carriers was a humorous April fool's Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.
The last two cases have a large round-trip delay time, which gives slow two-way communication but does not prevent sending large amounts of information (they can have high throughput).
== Network nodes ==
Apart from any physical transmission media, networks are built from additional basic system building blocks, such as network interface controllers, repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any particular piece of equipment will frequently contain multiple building blocks and so may perform multiple functions.
=== Network interfaces ===
A network interface controller (NIC) is computer hardware that connects the computer to the network media and has the ability to process low-level network information. For example, the NIC may have a connector for plugging in a cable, or an aerial for wireless transmission and reception, and the associated circuitry.
In Ethernet networks, each NIC has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
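The six-octet structure described above can be made concrete with a small parser. This is an illustrative sketch: it splits a colon-separated MAC address into the IEEE-assigned prefix and the manufacturer-assigned remainder.

```python
def parse_mac(mac):
    """Split a colon-separated MAC address into the IEEE-assigned OUI
    (three most significant octets) and the manufacturer-assigned
    device identifier (three least significant octets)."""
    octets = bytes(int(part, 16) for part in mac.split(":"))
    if len(octets) != 6:
        raise ValueError("an Ethernet MAC address has exactly six octets")
    return octets[:3].hex(":"), octets[3:].hex(":")
```

For example, in `00:1A:2B:3C:4D:5E` the OUI `00:1a:2b` identifies the manufacturer, which is then responsible for assigning `3c:4d:5e` uniquely among its own products.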
=== Repeaters and hubs ===
A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted-pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.
Repeaters work on the physical layer of the OSI model but still require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters used in a network, e.g., the Ethernet 5-4-3 rule.
An Ethernet repeater with multiple ports is known as an Ethernet hub. In addition to reconditioning and distributing network signals, a repeater hub assists with collision detection and fault isolation for the network. Hubs and repeaters in LANs have been largely obsoleted by modern network switches.
=== Bridges and switches ===
Network bridges and network switches are distinct from a hub in that they only forward frames to the ports involved in the communication whereas a hub forwards to all ports. Bridges only have two ports but a switch can be thought of as a multi-port bridge. Switches normally have numerous ports, facilitating a star topology for devices, and for cascading additional switches.
Bridges and switches operate at the data link layer (layer 2) of the OSI model and bridge traffic between two or more network segments to form a single local network. Both are devices that forward frames of data between ports based on the destination MAC address in each frame.
They learn the association of physical ports to MAC addresses by examining the source addresses of received frames and only forward the frame when necessary. If an unknown destination MAC is targeted, the device broadcasts the request to all ports except the source, and discovers the location from the reply.
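The learn-and-forward behavior described above can be sketched as a toy switch (the class and method names are invented for the example): the forwarding table is populated from source addresses, known destinations get a single output port, and unknown destinations are flooded to every port except the one the frame arrived on.

```python
class LearningSwitch:
    """Toy sketch of layer-2 forwarding: learn which port each source
    MAC address arrived on; flood frames whose destination is unknown."""
    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # MAC address -> port

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port       # learn from the source address
        if dst_mac in self.table:
            return {self.table[dst_mac]}    # forward only where needed
        return self.ports - {in_port}       # unknown destination: flood

switch = LearningSwitch(ports=[1, 2, 3, 4])
```

After one exchange in each direction, traffic between the two hosts no longer reaches any uninvolved port, which is exactly how switching reduces congestion relative to a hub. Real switches additionally age out table entries and handle broadcast addresses, which this sketch omits.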
Bridges and switches divide the network's collision domain but maintain a single broadcast domain. Network segmentation through bridging and switching helps break down a large, congested network into an aggregation of smaller, more efficient networks.
=== Routers ===
A router is an internetworking device that forwards packets between networks by processing the addressing or routing information included in the packet. The routing information is often processed in conjunction with the routing table. A router uses its routing table to determine where to forward packets and does not require broadcasting packets, which is inefficient for very large networks.
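Routing table lookups in IP routers use longest-prefix matching: among all table entries whose prefix contains the destination address, the most specific one wins. The linear scan below is an illustrative sketch (real routers use specialized data structures such as tries), and the route names are invented for the example.

```python
import ipaddress

def longest_prefix_match(routing_table, destination):
    """Return the next hop for the most specific matching prefix.
    `routing_table` maps CIDR prefixes to next-hop names (illustrative)."""
    address = ipaddress.ip_address(destination)
    best = None
    for prefix, next_hop in routing_table.items():
        network = ipaddress.ip_network(prefix)
        if address in network:
            if best is None or network.prefixlen > best[0]:
                best = (network.prefixlen, next_hop)
    return best[1] if best else None

routes = {
    "0.0.0.0/0": "isp-gateway",   # default route: matches everything
    "10.0.0.0/8": "core-router",
    "10.1.2.0/24": "edge-router",
}
```

A destination like 10.1.2.7 matches all three entries, but the /24 entry is the most specific and therefore decides the next hop; addresses outside 10.0.0.0/8 fall through to the default route.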
=== Modems ===
Modems (modulator-demodulator) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Early modems modulated audio signals sent over a standard voice telephone line. Modems are still commonly used for telephone lines, using a digital subscriber line technology and cable television systems using DOCSIS technology.
=== Firewalls ===
A firewall is a network device or software for controlling network security and access rules. Firewalls are inserted in connections between secure internal networks and potentially insecure external networks such as the Internet. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.
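A common firewall design evaluates an ordered rule list and lets the first matching rule decide, rejecting anything left over by default. The sketch below is illustrative only (the rule fields and zone names are invented for the example); rule fields that are absent act as wildcards.

```python
def evaluate(rules, packet):
    """First-match firewall sketch: rules are checked in order and the
    first one whose fields all match the packet decides; traffic that
    matches no rule is rejected by default."""
    for rule in rules:
        if all(rule.get(field, value) == value for field, value in packet.items()):
            return rule["action"]
    return "deny"

rules = [
    {"src": "external", "dst_port": 22, "action": "deny"},   # no outside SSH
    {"dst_port": 443, "action": "allow"},                    # HTTPS from anywhere
    {"src": "internal", "action": "allow"},                  # trust the inside
]
```

Ordering matters in this design: placing the SSH deny before the broad internal allow is what keeps unrecognized external sources out while still permitting recognized internal traffic, matching the reject-by-default posture described above.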
== Communication protocols ==
A communication protocol is a set of rules for exchanging information over a network. Communication protocols have various characteristics. They may be connection-oriented or connectionless, they may use circuit mode or packet switching, and they may use hierarchical addressing or flat addressing.
In a protocol stack, often constructed per the OSI model, communications functions are divided up into protocol layers, where each layer leverages the services of the layer below it until the lowest layer controls the hardware that sends information across the media. The use of protocol layering is ubiquitous across the field of computer networking. An important example of a protocol stack is HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol). This stack is used between the wireless router and the home user's personal computer when the user is surfing the web.
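The layering just described can be sketched as nested encapsulation: each layer wraps the data it receives in its own header, and the receiving stack strips the headers in reverse order. The headers here are plain text labels purely for illustration; real headers are binary structures defined by each protocol.

```python
def encapsulate(payload, layers):
    """Wrap the payload in one header per layer, innermost first; with
    layers = ["TCP", "IP", "802.11"] the result mimics an HTTP message
    carried over TCP over IP over Wi-Fi (headers are just labels here)."""
    frame = payload
    for name in layers:
        frame = f"[{name}]".encode() + frame
    return frame

def decapsulate(frame, layers):
    """Each layer strips its own header, outermost first, until the
    application payload is recovered."""
    for name in reversed(layers):
        header = f"[{name}]".encode()
        if not frame.startswith(header):
            raise ValueError(f"missing {name} header")
        frame = frame[len(header):]
    return frame

stack = ["TCP", "IP", "802.11"]
frame = encapsulate(b"GET / HTTP/1.1", stack)
```

Because each layer only reads and removes its own header, the layers stay independent: the Wi-Fi layer never needs to understand TCP, and TCP never needs to know whether it is running over Wi-Fi or wired Ethernet.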
There are many communication protocols, a few of which are described below.
=== Common protocols ===
==== Internet protocol suite ====
The Internet protocol suite, also called TCP/IP, is the foundation of all modern networking. It offers connectionless and connection-oriented services over an inherently unreliable network traversed by datagram transmission using Internet protocol (IP). At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability. The Internet protocol suite is the defining set of protocols for the Internet.
==== IEEE 802 ====
IEEE 802 is a family of IEEE standards dealing with local area networks and metropolitan area networks. The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. The protocols have a flat addressing scheme. They operate mostly at layers 1 and 2 of the OSI model.
For example, MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol. IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based network access control protocol, which forms the basis for the authentication mechanisms used in VLANs (but it is also found in WLANs) – it is what the home user sees when the user has to enter a "wireless access key".
===== Ethernet =====
Ethernet is a family of technologies used in wired LANs. It is described by a set of standards together called IEEE 802.3 published by the Institute of Electrical and Electronics Engineers.
===== Wireless LAN =====
Wireless LAN based on the IEEE 802.11 standards, also widely known as WLAN or WiFi, is probably the most well-known member of the IEEE 802 protocol family for home users today. IEEE 802.11 shares many properties with wired Ethernet.
==== SONET/SDH ====
Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support circuit-switched digital telephony. However, due to its protocol neutrality and transport-oriented features, SONET/SDH also was the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.
==== Asynchronous Transfer Mode ====
Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet protocol suite or Ethernet that use variable-sized packets or frames. ATM has similarities with both circuit and packet switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.
ATM still plays a role in the last mile, which is the connection between an Internet service provider and the home user.
==== Cellular standards ====
There are a number of different digital cellular standards, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN).
=== Routing ===
Routing is the process of selecting network paths to carry network traffic. Routing is performed for many kinds of networks, including circuit switching networks and packet switched networks.
In packet-switched networks, routing protocols direct packet forwarding through intermediate nodes. Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though, because they lack specialized hardware, they may offer limited performance. The routing process directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths.
Routing can be contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, the structured addressing used by routers outperforms unstructured addressing used by bridging. Structured IP addresses are used on the Internet. Unstructured MAC addresses are used for bridging on Ethernet and similar local area networks.
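The way a single routing-table entry for a structured address prefix covers a whole group of devices can be sketched with a longest-prefix-match lookup. The prefixes and next-hop names below are invented for the illustration; real routers use specialized data structures rather than a linear scan.

```python
import ipaddress

# A toy routing table: each entry maps a structured address prefix to a
# next hop. One entry covers every host within the prefix, which is the
# advantage of structured addressing described above. The prefixes and
# next-hop names are made-up examples.
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "gateway-a"),
    (ipaddress.ip_network("10.1.0.0/16"), "gateway-b"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),
]

def lookup(destination: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    # The longest prefix (largest prefixlen) is the most specific route.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.2.3"))  # most specific match: 10.1.0.0/16 -> gateway-b
print(lookup("10.2.3.4"))  # covered by 10.0.0.0/8 -> gateway-a
print(lookup("8.8.8.8"))   # falls through to the default route
```

The default route `0.0.0.0/0` matches every address, so unclassified traffic always has somewhere to go; more specific prefixes override it.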
== Geographic scale ==
Networks may be characterized by many properties or features, such as physical capacity, organizational purpose, user authorization, access rights, and others. Another distinct classification method is that of the physical extent or geographic scale.
=== Nanoscale network ===
A nanoscale network has key components implemented at the nanoscale, including message carriers, and leverages physical principles that differ from macroscale communication mechanisms. Nanoscale communication extends communication to very small sensors and actuators such as those found in biological systems and also tends to operate in environments that would be too harsh for other communication techniques.
=== Personal area network ===
A personal area network (PAN) is a computer network used for communication among computers and other information technology devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters. A wired PAN is usually constructed with USB and FireWire connections while technologies such as Bluetooth and infrared communication typically form a wireless PAN.
=== Local area network ===
A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Wired LANs are most commonly based on Ethernet technology. Other networking technologies such as ITU-T G.hn also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines.
A LAN can be connected to a wide area network (WAN) using a router. The defining characteristics of a LAN, in contrast to a WAN, include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to and in excess of 100 Gbit/s, standardized by IEEE in 2010.
=== Home area network ===
A home area network (HAN) is a residential LAN used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable Internet access or digital subscriber line (DSL) provider.
=== Storage area network ===
A storage area network (SAN) is a dedicated network that provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the storage appears as locally attached devices to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.
=== Campus area network ===
A campus area network (CAN) is made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, Cat5 cabling, etc.) are almost entirely owned by the campus tenant or owner (an enterprise, university, government, etc.).
For example, a university campus network is likely to link a variety of campus buildings to connect academic colleges or departments, the library, and student residence halls.
=== Backbone network ===
A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or subnetworks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it.
For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. Another example of a backbone network is the Internet backbone, which is a massive, global system of fiber-optic cable and optical networking that carries the bulk of data between wide area networks (WANs), metro, regional, national and transoceanic networks.
=== Metropolitan area network ===
A metropolitan area network (MAN) is a large computer network that interconnects users with computer resources in a geographic region of the size of a metropolitan area.
=== Wide area network ===
A wide area network (WAN) is a computer network that covers a large geographic area such as a city or country, or even spans intercontinental distances. A WAN uses a communications channel that combines many types of media such as telephone lines, cables, and airwaves. A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI model: the physical layer, the data link layer, and the network layer.
=== Enterprise private network ===
An enterprise private network is a network that a single organization builds to interconnect its office locations (e.g., production sites, head offices, remote offices, shops) so they can share computer resources.
=== Virtual private network ===
A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.
A VPN may have best-effort performance or a defined service level agreement (SLA) between the VPN customer and the VPN service provider.
=== Global area network ===
A global area network (GAN) is a network used for supporting mobile users across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.
== Organizational scope ==
Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity.
=== Intranet ===
An intranet is a set of networks that are under the control of a single administrative entity. An intranet typically uses the Internet Protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits the use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information.
=== Extranet ===
An extranet is a network that is under the administrative control of a single organization but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. The network connection to an extranet is often, but not always, implemented via WAN technology.
=== Internet ===
An internetwork is the connection of multiple different types of computer networks to form a single computer network, using higher-layer network protocols and routers to connect them together.
The Internet is the largest example of an internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet protocol suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet utilizes copper communications and an optical networking backbone to enable the World Wide Web (WWW), the Internet of things, video transfer, and a broad range of information services.
Participants on the Internet communicate using several hundred documented, and often standardized, protocols compatible with the Internet protocol suite and the IP addressing system administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.
=== Darknet ===
A darknet is an overlay network, typically running on the Internet, that is only accessible through specialized software. It is an anonymizing network where connections are made only between trusted peers — sometimes called friends (F2F) — using non-standard protocols and ports.
Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference.
== Network service ==
Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate.
The World Wide Web, e-mail, printing, and network file sharing are examples of well-known network services. Network services such as the Domain Name System (DNS) give names for IP addresses (people remember names like nm.lan better than numbers like 210.121.67.18), while the Dynamic Host Configuration Protocol (DHCP) ensures that the equipment on the network has a valid IP address.
Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service.
== Network performance ==
=== Bandwidth ===
Bandwidth in bit/s may refer to consumed bandwidth, corresponding to achieved throughput or goodput, i.e., the average rate of successful data transfer through a communication path. The throughput is affected by processes such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap and bandwidth allocation (using, for example, bandwidth allocation protocol and dynamic bandwidth allocation).
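Bandwidth shaping and throttling are often implemented with a token bucket: tokens accrue at the permitted rate and a packet may be sent only when enough tokens are available. A minimal sketch follows; time is passed in explicitly rather than read from a clock, and the rate and burst numbers are invented for the illustration.

```python
class TokenBucket:
    """Minimal token-bucket shaper: tokens accrue at `rate` bytes per
    second, up to a maximum of `burst`; sending `nbytes` consumes that
    many tokens. Passing the clock in as an argument keeps the sketch
    deterministic."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.burst = burst
        self.tokens = burst   # start with a full bucket
        self.last = 0.0

    def allow(self, nbytes: int, now: float) -> bool:
        # Refill tokens for the time elapsed since the last decision.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

bucket = TokenBucket(rate=1000, burst=1500)   # 1000 B/s, 1500 B burst
print(bucket.allow(1500, now=0.0))  # the burst fits -> True
print(bucket.allow(500, now=0.0))   # bucket now empty -> False
print(bucket.allow(500, now=1.0))   # 1 s later, 1000 tokens refilled -> True
```

The same mechanism, with different parameters per flow, underlies many bandwidth-allocation schemes.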
=== Network delay ===
Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several components, the sum of which is the total delay:
Processing delay – time it takes a router to process the packet header
Queuing delay – time the packet spends in routing queues
Transmission delay – time it takes to push the packet's bits onto the link
Propagation delay – time for a signal to propagate through the media
A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from less than a microsecond to several hundred milliseconds.
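The four delay components above can be added up for a single hop. The link rate, distance, and the assumed processing and queuing times below are made-up example numbers, not measurements.

```python
# Illustrative one-hop delay budget using the four components above.
packet_bits = 1500 * 8          # a 1500-byte packet
link_rate = 100e6               # 100 Mbit/s link
distance_m = 200e3              # 200 km of fiber
signal_speed = 2e8              # roughly 2/3 the speed of light, in m/s

processing_delay = 10e-6        # assumed router header-processing time
queuing_delay = 150e-6          # assumed time waiting in the router queue
transmission_delay = packet_bits / link_rate   # time to push the bits onto the link
propagation_delay = distance_m / signal_speed  # time for the signal to travel

total = processing_delay + queuing_delay + transmission_delay + propagation_delay
print(f"transmission: {transmission_delay * 1e6:.0f} us")  # 120 us
print(f"propagation:  {propagation_delay * 1e6:.0f} us")   # 1000 us
print(f"total:        {total * 1e6:.0f} us")               # 1280 us
```

Note that on this example link the propagation delay dominates; on slower links the transmission delay would dominate instead, and under congestion the queuing term grows.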
=== Performance metrics ===
The parameters that affect performance typically include throughput, jitter, bit error rate, and latency.
In circuit-switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads. Other types of performance measures can include the level of noise and echo.
In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique, and modem enhancements.
There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed.
=== Network congestion ===
Network congestion occurs when a link or node is subjected to a greater data load than it is rated for, resulting in a deterioration of its quality of service. When networks are congested and queues become too full, packets have to be discarded, and participants must rely on retransmission to maintain reliable communications. Typical effects of congestion include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either to only a small increase in the network throughput or to a potential reduction in network throughput.
Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.
Modern networks use congestion control, congestion avoidance and traffic control techniques where endpoints typically slow down or sometimes even stop transmission entirely when the network is congested to try to avoid congestive collapse. Specific techniques include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers.
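The exponential backoff mentioned above can be sketched in a few lines. The slot-count rule below follows the classic binary exponential backoff scheme; the cap of 10 doublings matches original Ethernet, and the seed is fixed only to make the demonstration repeatable.

```python
import random

def backoff_slots(attempt: int, max_exp: int = 10) -> int:
    """Binary exponential backoff: after the n-th consecutive collision,
    wait a random number of slot times drawn uniformly from
    [0, 2**min(n, max_exp) - 1]. Doubling the window on each collision
    spreads retransmissions out as contention grows."""
    return random.randrange(2 ** min(attempt, max_exp))

random.seed(42)  # fixed seed so the demonstration is repeatable
for attempt in range(1, 5):
    upper = 2 ** attempt - 1
    slots = backoff_slots(attempt)
    assert 0 <= slots <= upper
    print(f"collision {attempt}: wait {slots} of 0..{upper} slots")
```

Because the window doubles with each collision, senders back off aggressively exactly when the shared medium is busiest, which is what prevents retransmissions from feeding the congestion they are reacting to.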
Another method to avoid the negative effects of network congestion is implementing quality of service priority schemes allowing selected traffic to bypass congestion. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for critical services. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn home networking standard.
For the Internet, RFC 2914 addresses the subject of congestion control in detail.
=== Network resilience ===
Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation."
== Security ==
Computer networks are also used by security hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network via a denial-of-service attack.
=== Network security ===
Network security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources. Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies, and individuals.
=== Network surveillance ===
Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency.
Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity.
Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent or investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.
However, many civil rights and privacy groups—such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union—have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to lawsuits such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance".
=== End to end encryption ===
End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet service providers or application service providers, from reading or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity.
Examples of end-to-end encryption include HTTPS for web traffic, PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio.
Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee the protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox.
The end-to-end encryption paradigm does not directly address risks at the endpoints of the communication themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the endpoints and the times and quantities of messages that are sent.
=== SSL/TLS ===
The introduction and rapid growth of e-commerce on the World Wide Web in the mid-1990s made it obvious that some form of authentication and encryption was needed. Netscape took the first shot at a new standard. At the time, the dominant web browser was Netscape Navigator. Netscape created a standard called Secure Sockets Layer (SSL). SSL requires a server with a certificate. When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client checks this certificate against the list of trusted root certificates preloaded in web browsers, and if the certificate checks out, the server is authenticated and the client negotiates a symmetric-key cipher for use in the session. The session now runs in an encrypted tunnel between the SSL server and the SSL client.
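SSL's successor, TLS, is exposed by most language standard libraries. As a sketch of the client-side setup described above, Python's `ssl` module builds a context that verifies the server's certificate chain against the platform's preloaded trusted roots; no network connection is made here.

```python
import ssl

# Build a client-side TLS context using the platform's preinstalled
# trusted root certificates, much as a browser does when it checks a
# server certificate.
context = ssl.create_default_context()

# By default, the context both verifies the server's certificate chain
# against the trusted roots and checks that the certificate matches the
# host name being contacted.
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# context.wrap_socket(...) would then run the handshake: certificate
# verification, symmetric-cipher negotiation, and establishment of the
# encrypted session described above. (No connection is made in this sketch.)
```

Only after the handshake succeeds does application data flow through the encrypted tunnel; a certificate that fails verification aborts the connection before any data is exchanged.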
== Views of networks ==
Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and are on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest is less tied to a local area and should be thought of as a set of arbitrarily located users who share a set of servers, and possibly also communicate via peer-to-peer technologies.
Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application-layer gateways) that interconnect via the transmission media. Logical networks, called subnets in the TCP/IP architecture, map onto one or more transmission media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using VLANs.
Users and administrators are aware, to varying extents, of a network's trust and scope characteristics. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees). Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).
Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, that share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).
Over the Internet, there can be business-to-business, business-to-consumer and consumer-to-consumer communications. When money or sensitive information is exchanged, the communications are apt to be protected by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users and administrators, using secure VPN technology.
== See also ==
== References ==
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22.
== Further reading ==
James F. Kurose and Keith W. Ross, Computer Networking: A Top-Down Approach Featuring the Internet, Pearson Education, 2005.
William Stallings, Computer Networking with Internet Protocols and Technology, Pearson Education, 2004.
Dimitri Bertsekas and Robert Gallager, Data Networks, Prentice Hall, 1992.
Solaris network virtualization and resource control is a set of features originally developed by Sun Microsystems as the OpenSolaris Crossbow umbrella project, providing an internal network virtualization and quality of service framework within the Solaris Operating System. It also enables secure and efficient virtual network interfaces and zones, making it easier to manage network resources.
Major features of the Crossbow project include:
Virtual NIC (VNIC) pseudo-network interface technology
Exclusive IP zones
Bandwidth management and flow control on a per interface and per VNIC basis
== Description ==
The Crossbow project software, combined with next-generation network interfaces such as xge and bge, enables network virtualization and resource control on a single system. By combining VNICs with features such as exclusive IP zones or the Sun xVM hypervisor, system administrators can run applications in separate virtual machines to improve performance and provide security.
Resource management and flow control features provide bandwidth management and quality of service for packet flows on separate virtual machines. Administrators can allocate bandwidth and manage data flows not only for the physical network interface but also for any containers configured on the interface. The Crossbow resource control features increase system efficiency and make it possible to limit the amount of bandwidth consumed by a process or virtual machine.
== Features of the Crossbow project ==
This section briefly describes the main features of the Crossbow network virtualization and resource control project. For further details on each feature, see the Oracle Solaris 11 Network Virtualization and Network Resource Management white paper.
=== VNIC ===
A VNIC is a pseudo network interface that is configured on top of a system's physical network adapter, also called a network interface controller (NIC). A physical interface can have more than one VNIC. Each VNIC operates like, and appears to the system as, a physical NIC. Each VNIC is assigned a media access control (MAC) address, which can be configured to a value other than the default MAC address assigned to the physical NIC. The resource control features of Crossbow can be used to allocate separate bandwidths to the individual VNICs. Moreover, a virtual machine, such as an exclusive IP zone or xVM domain, can be configured on top of a VNIC.
=== Virtual switch ===
When the first VNIC is created on a system, a virtual switch is also created above the physical interface. Though not directly accessible to the user, the virtual switch provides connectivity between all VNICs configured on the same physical interface, enabling the virtual network in a box scenario. The virtual switch forwards packets between the system's VNICs. Thus, packets from an internal VNIC source never have to pass to the external network to reach an internal network destination.
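The internal forwarding behavior of the virtual switch can be modeled as a MAC-address table: frames for a known local VNIC are delivered internally, while everything else goes out the physical uplink. The VNIC names and MAC addresses below are invented for the illustration.

```python
# Toy model of the virtual switch described above: it records which VNIC
# owns which MAC address and forwards frames between VNICs on the same
# physical interface without touching the external network.
class VirtualSwitch:
    def __init__(self):
        self.mac_table = {}          # MAC address -> VNIC name

    def attach(self, vnic: str, mac: str):
        self.mac_table[mac] = vnic

    def forward(self, dst_mac: str) -> str:
        # Known destination: deliver internally to the owning VNIC.
        # Unknown destination: send upstream through the physical NIC.
        return self.mac_table.get(dst_mac, "physical-uplink")

switch = VirtualSwitch()
switch.attach("vnic0", "2:8:20:aa:bb:01")
switch.attach("vnic1", "2:8:20:aa:bb:02")

print(switch.forward("2:8:20:aa:bb:02"))   # vnic1 (never leaves the box)
print(switch.forward("0:11:22:33:44:55"))  # physical-uplink
```

This is why packets between two zones on the same machine need never appear on the external network: the lookup resolves internally.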
=== Exclusive IP zones ===
An exclusive IP zone is a separate instance of a full TCP/IP stack, which functions as a non-global zone. Each exclusive IP zone is built upon a physical network interface and has its own IP-related state. IP instances support DHCPv4 and IPv6 address autoconfiguration. An exclusive IP zone can have its own routing table and routing protocols separate from the global zone on a system. Moreover, a system administrator can run the ifconfig command within an exclusive IP instance to set up a logical interface within the exclusive IP zone.
=== Modifications to the TCP/IP MAC layer ===
In Solaris, the MAC layer is part of the larger data link layer of the TCP/IP protocol stack. The Crossbow project modifies this layer with several new features, including the MAC client interface. This virtual entity is a kernel data structure that is not externally visible to the system administrator. However, the MAC client interface along with the VNIC driver provides the VNIC functionality in OpenSolaris. Additionally, Crossbow modifications to the MAC layer enable a system administrator to assign a different MAC address to each VNIC on a system.
=== Resource management and flow control ===
The Crossbow project features provide bandwidth management and flow control on a per-VNIC basis. A system administrator can configure different bandwidth allocations for the various VNICs on a host through the new Crossbow-related commands dladm(1M) and flowadm(1M). Traffic through each VNIC can be classified and separated into individual flows, based on port number, destination IP address, and other parameters. These features can be used to improve system efficiency and enable differentiated services for separate VNICs.
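The classification step can be sketched as matching each packet against an ordered list of flow definitions, each carrying its own bandwidth limit, in the spirit of what flowadm configures. The flow names, addresses, and limits below are invented for the illustration.

```python
# Toy flow classifier: packets are matched to flows by attributes such
# as transport port or destination address, and each flow carries its
# own bandwidth limit in Mbit/s. All names and numbers are examples.
FLOWS = [
    ("http-flow", lambda pkt: pkt["dst_port"] == 80, 100),
    ("db-flow",   lambda pkt: pkt["dst_ip"] == "192.0.2.10", 500),
]

def classify(pkt: dict):
    """Return the (flow name, bandwidth limit) for a packet; traffic
    matching no flow definition falls into an uncapped default class."""
    for name, match, limit in FLOWS:
        if match(pkt):
            return name, limit
    return "default", None

print(classify({"dst_ip": "198.51.100.7", "dst_port": 80}))
print(classify({"dst_ip": "192.0.2.10", "dst_port": 5432}))
print(classify({"dst_ip": "203.0.113.9", "dst_port": 22}))
```

Once a packet is assigned to a flow, a per-flow shaper (such as a token bucket) enforces that flow's bandwidth limit, which is how differentiated services per VNIC are realized.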
=== Observability features ===
Standard Solaris observability tools can be used to monitor the status of exclusive IP instances, VNICs, and virtual machines running on VNICs. For example, familiar tools such as ping and snoop can report status on the operations of a VNIC. Additionally, the netstat(1M) command has been extended for Crossbow to report statistics on packet flows defined with the flowadm command.
== Feature and code availability ==
The exclusive IP zones feature was first introduced in the Solaris 10 8/07 release. The first version of the Crossbow feature set was incorporated in OpenSolaris 2009.06. The full Crossbow feature set became part of Solaris with the 2011 release of Solaris 11.
Oracle discontinued the OpenSolaris download sites after its acquisition of Sun Microsystems, but source code for Crossbow can be downloaded from the sites of the derivatives of illumos (see illumos § Distributions).
== See also ==
Solaris Containers
Network virtualization
== References ==
Belgaied, Kais and Lu, Roamer. “Crossbow Hardware Resources Management and Virtualization”
Droux, Nicolas, "Crossbow Network Virtualization Architecture"
Rosen, Rami, "Virtualization in OpenSolaris"
System Administration Guide: Solaris Containers-Resource Management and Solaris Zones
Rosen, Rami, OpenSolaris lecture (slides in PDF)
Moellenkamp, Joerg Configuration of Crossbow Network Virtualisation
Moellenkamp, Joerg Configuration of Crossbow Bandwidth Limiting and Accounting
== External links ==
"OpenSolaris Project: Crossbow: Network Virtualization and Resource Control". Archived from the original on 2009-10-21. The project page for OpenSolaris Crossbow, which includes technical specifications, documentation and latest news about the project.
dladm man pages. Links to the most current dladm man pages; dladm is one of the main tools used to manage virtual network resources.
IP network multipathing (IPMP) is a facility in Solaris that provides fault tolerance and load spreading for network interface cards (NICs). With IPMP, two or more NICs are dedicated to each network to which the host connects. Each interface can be assigned a static "test" IP address, which is used to assess the operational state of the interface. Each virtual IP address is assigned to an interface, though there may be more interfaces than virtual IP addresses, some of the interfaces being purely for standby purposes. When the failure of an interface is detected, its virtual IP addresses are swapped to an operational interface in the group.
The IPMP load spreading feature increases the machine's bandwidth by spreading the outbound load between all the cards in the same IPMP group.
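The failover behavior described above can be modeled compactly: virtual IP addresses are bound to interfaces in a group, and when an interface fails its addresses move to a surviving member. The interface names and addresses below are invented, and the sketch assumes at least one interface in the group survives.

```python
# Toy model of IPMP failover (illustration only; real failure detection
# is done by probing with the per-interface test addresses).
class IPMPGroup:
    def __init__(self, interfaces):
        self.up = set(interfaces)
        self.bindings = {}                    # virtual IP -> interface

    def assign(self, vip: str, iface: str):
        self.bindings[vip] = iface

    def fail(self, iface: str):
        """Mark an interface as failed and move its virtual IPs to a
        surviving interface in the group (assumed to exist)."""
        self.up.discard(iface)
        target = next(iter(sorted(self.up)))
        for vip, owner in self.bindings.items():
            if owner == iface:
                self.bindings[vip] = target

group = IPMPGroup(["ce0", "ce1"])
group.assign("192.0.2.100", "ce0")
group.assign("192.0.2.101", "ce1")
group.fail("ce0")
print(group.bindings["192.0.2.100"])   # moved to ce1
```

Because clients address the virtual IPs rather than the physical interfaces, the swap is transparent to them apart from a brief interruption.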
in.mpathd is the daemon in the Solaris OS responsible for IPMP functionality.
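The failover behavior described above can be sketched as a small simulation. This is illustrative Python only, not how in.mpathd is actually implemented, and the interface names and addresses are invented: each interface has a test address used solely for probing, and when one fails, its virtual IP addresses move to an operational member of the same group.

```python
# Toy model of IPMP failover: virtual IPs move off a failed interface.
# Interface names and addresses are hypothetical examples.

class Interface:
    def __init__(self, name, test_ip):
        self.name = name
        self.test_ip = test_ip      # static address used only for probing
        self.virtual_ips = []       # data addresses currently hosted here
        self.operational = True

class IPMPGroup:
    def __init__(self, interfaces):
        self.interfaces = interfaces

    def detect_failure(self, name):
        """Mark an interface as failed and move its virtual IPs away."""
        failed = next(i for i in self.interfaces if i.name == name)
        failed.operational = False
        target = next(i for i in self.interfaces if i.operational)
        target.virtual_ips += failed.virtual_ips
        failed.virtual_ips = []
        return target

# Two NICs in one group; ce1 is a pure standby hosting no virtual IPs.
ce0 = Interface("ce0", "192.168.1.11")
ce1 = Interface("ce1", "192.168.1.12")
ce0.virtual_ips.append("192.168.1.10")
group = IPMPGroup([ce0, ce1])

target = group.detect_failure("ce0")
print(target.name, target.virtual_ips)  # ce1 ['192.168.1.10']
```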
== See also ==
Multihoming
Multipath routing
Multipath TCP
Common Address Redundancy Protocol
== External links ==
Enterprise Networking Article, February 2, 2006
Introducing IPMP - Oracle Solaris 11
IPMP section from Sun Solaris 10 System Administration Guide | Wikipedia/Solaris_IP_network_multipathing |
Callan Data Systems, Inc. was an American computer manufacturer founded by David Callan in Westlake Village, California, on January 24, 1980. The company was best known for its Unistar range of Unix workstations, and shut down in 1985.
== Unistar ==
After initial success selling OEMs a Multibus chassis with a self-contained VT100-compatible CRT display terminal, the company designed and built desktop workstations named Unistar using the Sun-1 board, which was based on the Motorola 68000 CPU and ran UNIX licensed from AT&T. The manufacturing consisted of building the chassis, power supplies, motherboard, and a few critical Multibus boards such as the CPU, memory, and floppy and hard drive controllers. Other peripheral boards such as an Ethernet controller were purchased from other OEMs. The software development consisted chiefly of writing device drivers for the integrated system, based on the UNIX kernel, and integrating third-party applications for resale to customers. Investment totaled $10 million, raised from the founders and from venture capital. Employment peaked in 1984 at 80 persons.
Other firms at the time were competing to build the first commercial UNIX workstations based on inexpensive microprocessor-based Multibus single-board CPUs. Among these competitors were Sun Microsystems (whose initial, enormous success was built on its original, similar SUN-based workstation), HP, Apollo, Ithaca InterSystems, and Wicat.
Callan sold about a thousand units in various models, including the Unistar 100, 200, and 300. The 100 and 200 models, first delivered in 1982, used the desktop chassis/CRT combination with Multibus backplane, with a list price of about $12,000. The 300 model of 1985 was a floor-standing chassis using dumb terminals, and sold for about $20,000. CPU speeds were typically 8 MHz, with 256KB to 2MB of main memory, and from 10MB to 43MB of hard disk storage. A 400 model using 360MB Fujitsu hard drives was prototyped. UNIX V7 was originally ported to the Unistars, and later UNIX System V; all the Uniplus ports were provided by UniSoft.
== Decline ==
Although aggressive sales of the Unistar computers won a modest number of industrial and government buyers, with sales peaking at $7 million in 1984, Callan was not selling enough to be profitable. Competitive workstations from Sun and HP running BSD UNIX were gaining market share, and the UNIX System V incompatibilities, though slight, made it even more difficult for Callan to compete. Sales in 1985 shrank to less than half the previous year, and Callan was reorganized in bankruptcy under the control of numerous creditors. After a few futile months of attempting recovery, the committee of creditors voted to liquidate the company assets valued at $1.6 million by public auction in bulk. The Dove family auctioneers, who had famously handled the recent liquidation of the Osborne Computer Corporation, won the company assets for $201 thousand (13 cents per dollar of valuation) in December 1985, and began selling inventory to owners of systems who wanted spare parts or upgrades at full price. After several weeks of this retailing, the Doves held a public auction at the plant site in February 1986, selling the entire remaining inventory to the highest bidders, and reaping many times their original investment. The bankruptcy proceeding eventually paid secured creditors in full. Unsecured creditors were left holding $1.9 million in debt, and in 1988 were paid 1.3 cents for each dollar to finally close the case.
Callan Unistar computers continued to be used during the 1980s. At least one Unistar 300 was still running a critical database application for the U.S. Government into the 1990s.
== See also ==
Workstation
== References ==
== External links ==
Richard J Kinch. An independent systems integrator of Callan computers, who sold Callan spare parts for many years after the demise of the company. | Wikipedia/Callan_Data_Systems |
BEA Systems, Inc. was a company that specialized in enterprise infrastructure software products, which was wholly acquired by Oracle Corporation on April 29, 2008.
== History ==
BEA began as a software company, founded in 1995 and headquartered in San Jose, California. It grew to have 78 offices worldwide at the time of its acquisition by Oracle.
The company's name is an initialism of the first names of the company's three founders: Bill Coleman, Ed Scott, and Alfred Chuang. All were former employees of Sun Microsystems, and launched the business in 1995 by acquiring Information Management and Independence Technologies. These firms were the largest resellers of Tuxedo, a distributed transaction management system sold by Novell. BEA soon acquired the Tuxedo product itself, and went on to acquire other middleware companies and products.
In 1998, BEA acquired the San Francisco start-up WebLogic, which had built the first standards-based Java application server. WebLogic's application server became the impetus for Sun Microsystems' J2EE specification and formed the basis of the WebLogic application server that BEA continued to sell.
BEA sponsored Team Rahal (now Rahal Letterman Lanigan Racing) from 2002 to 2008, a period that included Buddy Rice's 2004 Indianapolis 500 win and Vitor Meira's 2005 Indianapolis 500 runner-up finish.
In 2005, BEA launched a new brand identity with the slogan "Think Liquid". BEA also announced a new product line called AquaLogic, an infrastructure software family for service-oriented architecture (SOA). The same year, it entered telecommunications infrastructure through the acquisition of Incomit, a Swedish telecommunications software provider. In late 2005, the company announced the acquisitions of Compoze Software, a provider of collaboration software; M7, an Eclipse-based tools company; and SolarMetric, maker of the Kodo persistence engine.
The acquisitions continued in 2006 with Plumtree Software, an enterprise portal company; Fuego, a business process management (BPM) software company; and Flashline, a metadata repository company. These acquisitions have since become parts of the AquaLogic SOA product stack.
On October 12, 2007, Oracle announced its intent to buy BEA Systems for $6.7 billion. As a result of the offer, BEA's stock price rose over five dollars at the opening of trading that day. BEA turned the offer down the same day, saying that the company was "worth substantially more". On January 16, 2008, Oracle signed a definitive agreement to buy BEA for $8.5 billion. It is believed that Carl Icahn, one of the company's most prominent shareholders, was the main reason that the deal happened.
On April 29, 2008, Oracle completed its acquisition of BEA.
== Products ==
BEA had three major product lines:
Tuxedo, now Oracle Tuxedo – transaction-oriented middleware platform
BEA WebLogic, now Oracle WebLogic Server – Java EE enterprise infrastructure platform
AquaLogic, now Oracle Service Bus – service-oriented architecture (SOA) platform
BEA started out with the Tuxedo software product, but the products it became best known for in the computer industry were the WebLogic product family, consisting of WebLogic Server, WebLogic Workshop, WebLogic Portal, WebLogic Integration, and JRockit. In 2005, BEA launched a new product family called AquaLogic for service-oriented architecture deployment. BEA also entered the telecommunications field with its WebLogic Communications Platform, which included WebLogic SIP Server and WebLogic Network Gatekeeper, technologies obtained through the acquisition of the Swedish telecommunications software company Incomit. BEA also had a product offering for the RFID market called the BEA WebLogic RFID Product Family.
=== AquaLogic ===
BEA Systems produced the AquaLogic software suite for managing service-oriented architecture (SOA). It included the following products:
BEA AquaLogic BPM suite, a set of business process management (BPM) tools. It combines workflow and process technology with enterprise application integration functionality. The suite consists of tools aimed at line-of-business personnel for creating business process models (AquaLogic BPM Designer), as well as tools for IT personnel to create actual business process applications directly from those models (AquaLogic BPM Studio). The completed business process applications are deployed on a production server (AquaLogic BPM Enterprise Server), from which they integrate with backend applications and generate portal views for human interactions in the process. It also comes with customizable tools for live business activity monitoring (BAM).
BEA AquaLogic User Interaction, a set of tools used to create portals, collaborative communities, composite applications, and other applications that use a service architecture. These technologies work cross-platform. This technology came to BEA Systems from its acquisition of Plumtree Software.
BEA AquaLogic Enterprise Repository, a service-oriented architecture life-cycle governance tool, manages the metadata for any type of software asset, from business processes and web services to patterns, frameworks, applications, and components. It maps the relationships and interdependencies that connect these assets to improve impact analysis, promote and systematize code reuse, and measure the impact on the bottom line.
BEA AquaLogic Service Bus, an enterprise service bus (ESB) with operational service-management that allows the interaction between services, routing relationships, transformations, and policies.
BEA AquaLogic Service Registry, a UDDI v3 registry with an embedded governance framework. It provides a repository where services can be registered and reused for developing or modifying applications.
BEA AquaLogic Data Services Platform (previously known as Liquid Data), providing tools for creating and managing different data services. It uses the XQuery language for data composition and transformation for a variety of data sources, including relational databases and web services.
BEA AquaLogic Enterprise Security, a security infrastructure application for distributed authentication, fine-grained entitlements and other security services. Features include allowing users to define access rules for applications without modifying the software itself, including JSP pages, EJBs, and portlets.
BEA AquaLogic Commerce Services (often shortened to ALCS), an e-commerce offering based on the Elastic Path platform integrated with the WebLogic application server. It was discontinued at version 6.0 in 2009, a year after the acquisition by Oracle.
== See also ==
List of acquisitions by Oracle
== References ==
== External links ==
BEA Systems - World Website | Wikipedia/BEA_Systems |
Middleware in the context of distributed applications is software that provides services beyond those provided by the operating system to enable the various components of a distributed system to communicate and manage data. Middleware supports and simplifies complex distributed applications. It includes web servers, application servers, messaging and similar tools that support application development and delivery. Middleware is especially integral to modern information technology based on XML, SOAP, Web services, and service-oriented architecture.
Middleware often enables interoperability between applications that run on different operating systems, by supplying services so the application can exchange data in a standards-based way. Middleware sits "in the middle" between application software that may be working on different operating systems. It is similar to the middle layer of a three-tier single system architecture, except that it is stretched across multiple systems or applications. Examples include EAI software, telecommunications software, transaction monitors, and messaging-and-queueing software.
The distinction between operating system and middleware functionality is, to some extent, arbitrary. While core kernel functionality can only be provided by the operating system itself, some functionality previously provided by separately sold middleware is now integrated in operating systems. A typical example is the TCP/IP stack for telecommunications, nowadays included in virtually every operating system.
== Definitions ==
Middleware is defined as software that provides a link between separate software applications. It is sometimes referred to as plumbing because it connects two applications and passes data between them. Middleware allows data contained in one database to be accessed through another. This makes it particularly useful for enterprise application integration and data integration tasks.
In more abstract terms, middleware is "The software layer that lies between the operating system and applications on each side of a distributed computing system in a network."
== Origins ==
Middleware gained popularity in the 1980s as a solution to the problem of how to link newer applications to older legacy systems, although the term had been in use since 1968. It also facilitated distributed processing, the connection of multiple applications to create a larger application, usually over a network.
== Use ==
Middleware services provide a more functional set of application programming interfaces to allow an application to:
Locate transparently across the network, thus providing interaction with another service or application
Filter data, for example to make them usable or publishable via an anonymization process for privacy protection
Be independent from network services
Be reliable and always available
Add complementary attributes like semantics
when compared to the operating system and network services.
Middleware offers some unique technological advantages for business and industry. For example, traditional database systems are usually deployed in closed environments where users access the system only via a restricted network or intranet (e.g., an enterprise’s internal network). With the phenomenal growth of the World Wide Web, users can access virtually any database for which they have proper access rights from anywhere in the world. Middleware addresses the problem of varying levels of interoperability among different database structures. Middleware facilitates transparent access to legacy database management systems (DBMSs) or applications via a web server without regard to database-specific characteristics.
Businesses frequently use middleware applications to link information from departmental databases, such as payroll, sales, and accounting, or databases housed in multiple geographic locations. In the highly competitive healthcare community, laboratories make extensive use of middleware applications for data mining, laboratory information system (LIS) backup, and to combine systems during hospital mergers. Middleware helps bridge the gap between separate LISs in a newly formed healthcare network following a hospital buyout.
Middleware can help software developers avoid having to write application programming interfaces (API) for every control program, by serving as an independent programming interface for their applications.
For Future Internet network operation through traffic monitoring in multi-domain scenarios, mediator tools (middleware) are a powerful help, since they allow operators, researchers and service providers to supervise quality of service and analyse eventual failures in telecommunication services. In OTT television platforms, the middleware stack is composed of several components (CSMS, TV statistics and client applications) and is sometimes called the software brains of the platform, as it controls and interconnects all the components of the solution. The Content and Subscriber Management System (CSMS) is the central part of the solution, commonly referred to as an administration portal. Apart from being the main interface for operator personnel to administer the TV service (subscribers, content, packages, etc.), it also controls the majority of TV services and interacts with streaming, CDN and DRM servers to deliver live, VOD and recorded content to the end users. It also integrates with external systems for billing and provisioning, and with EPG and VOD content providers. Client applications authenticate against the CSMS and communicate with it to provide the required TV services to end users on different devices.
Finally, e-commerce uses middleware to assist in handling rapid and secure transactions over many different types of computer environments. In short, middleware has become a critical element across a broad range of industries, thanks to its ability to bring together resources across dissimilar networks or computing platforms.
In 2004 members of the European Broadcasting Union (EBU) carried out a study of Middleware with respect to system integration in broadcast environments. This involved system design engineering experts from 10 major European broadcasters working over a 12-month period to understand the effect of predominantly software-based products to media production and broadcasting system design techniques. The resulting reports Tech 3300 and Tech 3300s were published and are freely available from the EBU web site.
== Types ==
=== Message-oriented middleware ===
Message-oriented middleware (MOM) is middleware where transactions or event notifications are delivered between disparate systems or components by way of messages, often via an enterprise messaging system. With MOM, messages sent to the client are collected and stored until they are acted upon, while the client continues with other processing.
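The store-and-forward behavior of MOM can be illustrated with a minimal in-process sketch. This is hypothetical Python using a standard-library queue in place of a real messaging product: the producer continues immediately after sending, and messages wait in the broker's store until the consumer acts on them.

```python
import queue
import threading

broker = queue.Queue()  # stands in for a MOM broker's message store

def producer():
    # Fire-and-forget: the sender does not wait for the receiver.
    for n in range(3):
        broker.put({"event": "order_created", "id": n})

def consumer(results):
    # Messages are held by the broker until the consumer processes them.
    for _ in range(3):
        msg = broker.get()
        results.append(msg["id"])
        broker.task_done()

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer()
t.join()
print(results)  # [0, 1, 2]
```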
Enterprise messaging
An enterprise messaging system is a type of middleware that facilitates message passing between disparate systems or components in standard formats, often using XML, SOAP or web services. As part of an enterprise messaging system, message broker software may queue, duplicate, translate and deliver messages to disparate systems or components in a messaging system.
Enterprise service bus
Enterprise service bus (ESB) is defined by the Burton Group as "some type of integration middleware product that supports both message-oriented middleware and Web services".
=== Intelligent middleware ===
Intelligent middleware (IMW) provides real-time intelligence and event management through intelligent agents. The IMW manages the real-time processing of high-volume sensor signals and turns these signals into intelligent and actionable business information, which is then delivered in end-user dashboards or pushed to systems within or outside the enterprise. It can support heterogeneous types of hardware and software and provides an API for interfacing with external systems. It typically has a highly scalable, distributed architecture that embeds intelligence throughout the network to transform raw data systematically into actionable and relevant knowledge, and it can be packaged with tools to view and manage operations and build advanced network applications.
=== Content-centric middleware ===
Content-centric middleware offers a simple provider-consumer abstraction through which applications can issue requests for uniquely identified content, without worrying about where or how it is obtained. Juno is one example, which allows applications to generate content requests associated with high-level delivery requirements. The middleware then adapts the underlying delivery to access the content from sources that are best suited to matching the requirements. This is therefore similar to Publish/subscribe middleware, as well as the Content-centric networking paradigm.
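The provider-consumer abstraction can be sketched as follows. This is hypothetical Python loosely modeled on the idea described above, not Juno's actual API, and all names are invented: the application names the content and states a delivery requirement, and the middleware chooses among the available sources.

```python
# Illustrative sketch of content-centric middleware: consumers request
# uniquely identified content plus a delivery requirement; the middleware
# decides where and how to obtain it. All names here are invented.

SOURCES = [
    {"name": "cdn",  "latency_ms": 20,  "content": {"video:42": b"cdn-copy"}},
    {"name": "peer", "latency_ms": 120, "content": {"video:42": b"peer-copy"}},
]

def fetch(content_id, max_latency_ms):
    """Return content from the fastest source meeting the requirement."""
    candidates = [s for s in SOURCES
                  if content_id in s["content"]
                  and s["latency_ms"] <= max_latency_ms]
    if not candidates:
        raise LookupError(content_id)
    best = min(candidates, key=lambda s: s["latency_ms"])
    return best["name"], best["content"][content_id]

print(fetch("video:42", max_latency_ms=50))  # ('cdn', b'cdn-copy')
```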
Remote procedure call
Remote procedure call middleware enables a client to use services running on remote systems. The process can be synchronous or asynchronous.
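A minimal synchronous RPC round trip can be shown with Python's standard xmlrpc modules. This is an illustrative local example (server and client run in one process here; a real deployment would place the server on a remote host):

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Server side: expose a function under a name that clients can call.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy makes the remote call look like a local one.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)   # synchronous: blocks until the reply arrives
server.shutdown()
print(result)  # 5
```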
Object request broker
With object request broker middleware, it is possible for applications to send objects and request services in an object-oriented system.
SQL-oriented data access
SQL-oriented Data Access is middleware between applications and database servers.
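In the simplest case this layer looks like a database driver, sketched here with Python's built-in sqlite3 module standing in for middleware between the application and a database server (illustrative only; the table and data are invented): the application issues generic SQL while the middleware handles the database-specific protocol.

```python
import sqlite3

# The driver layer translates between the application's calls and the
# database engine; the application sees only a generic SQL interface.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payroll (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO payroll VALUES (?, ?)",
                 [("Ada", 120), ("Grace", 130)])
total, = conn.execute("SELECT SUM(salary) FROM payroll").fetchone()
print(total)  # 250
conn.close()
```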
Embedded middleware
Embedded middleware provides communication services and software/firmware integration interface that operates between embedded applications, the embedded operating system, and external applications.
=== Policy Appliances ===
Policy appliance is a generic term referring to any form of middleware that manages policy rules. They can mediate between data owners or producers, data aggregators, and data users. Among heterogeneous institutional systems or networks they may be used to enforce, reconcile, and monitor agreed information management policies and laws across systems (or between jurisdictions) with divergent information policies or needs. Policy appliances can interact with smart data (data that carries with it contextual relevant terms for its own use), intelligent agents (queries that are self-credentialed, authenticating, or contextually adaptive), or context-aware applications to control information flows, protect security and confidentiality, and maintain privacy. Policy appliances support policy-based information management processes by enabling rules-based processing, selective disclosure, and accountability and oversight.
Examples of policy appliance technologies for rules-based processing include analytic filters, contextual search, semantic programs, labeling and wrapper tools, and DRM, among others; policy appliance technologies for selective disclosure include anonymization, content personalization, subscription and publishing tools, among others; and, policy appliance technologies for accountability and oversight include authentication, authorization, immutable and non-repudiable logging, and audit tools, among others.
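A selective-disclosure rule of the kind listed above can be sketched as a tiny anonymization filter. This is hypothetical Python with invented roles and field names: records passing through the appliance have restricted fields removed according to the requester's role, and every access is logged for accountability and oversight.

```python
# Illustrative policy-appliance sketch: rules-based selective disclosure.
# Roles, fields, and policy rules are invented for the example.

POLICY = {
    "analyst": {"drop": ["name", "ssn"]},   # anonymized view
    "auditor": {"drop": []},                # full view, but still logged
}

AUDIT_LOG = []

def disclose(record, role):
    """Apply the role's disclosure rules and log the access."""
    rules = POLICY[role]
    AUDIT_LOG.append((role, sorted(record)))  # accountability/oversight
    return {k: v for k, v in record.items() if k not in rules["drop"]}

record = {"name": "Ada", "ssn": "123-45-6789", "diagnosis": "flu"}
print(disclose(record, "analyst"))  # {'diagnosis': 'flu'}
```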
=== Other ===
Other sources include these additional classifications:
Transaction processing monitors – provides tools and an environment to develop and deploy distributed applications.
Application servers – software installed on a computer to facilitate the serving (running) of other applications.
== Integration Levels ==
=== Data Integration ===
Integration of data resources like files and databases
=== Cloud Integration ===
Integration between various cloud services
=== B2B Integration ===
Integration of data resources and partner interfaces
=== Application Integration ===
Integration of applications managed by a company
== Vendors ==
IBM, Red Hat, Oracle Corporation and Microsoft are some of the vendors that provide middleware software. Vendors such as Axway, SAP, TIBCO, Informatica, Objective Interface Systems, Pervasive, ScaleOut Software and webMethods were specifically founded to provide more niche middleware solutions. Groups such as the Apache Software Foundation, OpenSAF, the ObjectWeb Consortium (now OW2) and OASIS' AMQP encourage the development of open-source middleware. The Microsoft .NET Framework architecture is essentially middleware, with typical middleware functions distributed among its various products and most inter-computer interaction handled by industry standards, open APIs, or RAND software licences. Solace provides middleware in purpose-built hardware for implementations that must operate at large scale. StormMQ provides message-oriented middleware as a service.
== See also ==
Comparison of business integration software
Middleware Analysts
Service-oriented architecture
Enterprise Service Bus
Event-driven SOA
ObjectWeb
== References ==
== External links ==
Internet2 Middleware Initiative Archived 2005-07-23 at the Wayback Machine
SWAMI - Swedish Alliance for Middleware Infrastructure
Open Middleware Infrastructure Institute (OMII-UK)
Middleware Integration Levels
European Broadcasting Union Middleware report.
More detailed supplement to the European Broadcasting Union Middleware report.
ObjectWeb - international community developing open-source middleware | Wikipedia/Middleware_(distributed_applications) |
Hyperion Solutions Corporation was a software company located in Santa Clara, California, which was acquired by Oracle Corporation in 2007. Many of its products were targeted at the business intelligence (BI) and business performance management markets, and as of 2013 were developed and sold as Oracle Hyperion products.
Hyperion Solutions was formed from the merger of Hyperion Software (formerly IMRS) and Arbor Software in 1998.
== History ==
1981 - IMRS founded by Bob Thomson and Marco Arese
1983 - IMRS launches financial and management consolidation software called "Micro Control"
1985 - IMRS hires Jim Perakis as CEO; he remains in this position while revenue grows from $1M to almost $300M
1991 - IMRS becomes a public company and launches a Windows-based successor to 'Micro Control' called 'Hyperion'
1992 - Arbor Software ships first version of Essbase Online Analytical processing OLAP software
1995 - Due to the success of the "Hyperion" product IMRS changes name to "Hyperion Software Corporation" and the name of the product is changed to "Hyperion Enterprise." Arbor becomes a publicly held company
1997 - Arbor acquires Appsource
1998 - Hyperion Software merges with Arbor and the combined company is renamed Hyperion Solutions
1999 - Jeffrey Rodek named Hyperion Chairman and CEO. Hyperion acquires Sapling Corporation (enterprise performance management applications)
2001 - Godfrey Sullivan is named Hyperion President and COO
2003 - Hyperion acquires Brio Technology and The Alcar Group
2004 - Hyperion names Jeffrey Rodek Executive Chairman; Godfrey Sullivan President and CEO
2005 - Hyperion acquires Razza Solutions (Master data management) and appoints Northdoor as a reseller in the UK and Ireland.
2006 - Hyperion acquires UpStream (Financial Data Quality Management)
2006 - Hyperion acquires Beatware (Data visualization for Web and Mobile Devices)
2007 - Hyperion acquires Decisioneering (Crystal Ball software)
Oracle Corporation announced on March 1, 2007 it had agreed to purchase Hyperion Solutions Corporation for $3.3 billion in cash.
The transaction was completed on April 18, 2007 and Hyperion now operates as a division of Oracle.
Oracle extended support for most Hyperion products (v11.1.2.x) to 2018.
Hyperion BI tools were bundled into Oracle Business Intelligence Suite Enterprise Edition.
== Market ==
Vendors in the business intelligence space are often categorized into:
The consolidated big four "megavendors", which include Oracle Hyperion as well as SAP BusinessObjects, IBM Cognos, and Microsoft BI.
The independent "pure-play" vendors, the largest being MicroStrategy, Tableau, QlikView and SAS.
== Products ==
Hyperion software products included:
Essbase
Hyperion Intelligence and SQR Production Reporting (products acquired in 2003 takeover of Brio Technology)
Hyperion Enterprise
Hyperion Planning
Hyperion Strategic Finance
Hyperion Financial Data Management
Hyperion Enterprise Performance Management Architect
Hyperion Financial Close Management
Hyperion Account Reconciliation
Hyperion Disclosure Management
Hyperion Performance Scorecard
Hyperion Business Modelling
Hyperion Financial Management
Hyperion Master Data Management/Oracle Data Relationship Management
Hyperion Financial Reporting
Hyperion Web Analysis
Hyperion SmartView
Hyperion EPM Workspace
Hyperion Profitability and Cost Management
Hyperion System 9 BI+ (a combination of Interactive Reporting, SQR, Web Analysis, Financial Reporting, EPM Workspace and SmartView)
Hyperion Financial Data Quality Management (also referred to as FDMEE, for Enterprise Edition)
Hyperion Tax Provision
Planning Budgeting Cloud Service
Enterprise Performance Reporting Cloud Service
== References ==
== External links ==
Official website
The Hyperion Developer Network
Hyperion Press Kit | Wikipedia/Hyperion_Solutions |
Lustre is a type of parallel distributed file system, generally used for large-scale cluster computing. The name Lustre is a portmanteau derived from Linux and cluster. Lustre file system software is available under the GNU General Public License (version 2 only) and provides high-performance file systems for computer clusters ranging in size from small workgroup clusters to large-scale, multi-site systems. Since June 2005, Lustre has consistently been used by at least half of the top ten, and more than 60 of the top 100, fastest supercomputers in the world, including the world's No. 1 ranked TOP500 supercomputer in November 2022, Frontier, as well as previous top supercomputers such as Fugaku, Titan, and Sequoia.
Lustre file systems are scalable and can be part of multiple computer clusters with tens of thousands of client nodes, hundreds of petabytes (PB) of storage on hundreds of servers, and tens of terabytes per second (TB/s) of aggregate I/O throughput. This makes Lustre file systems a popular choice for businesses with large data centers, including those in industries such as meteorology, simulation, artificial intelligence and machine learning, oil and gas, life science, rich media, and finance. The I/O performance of Lustre has widespread impact on these applications and has attracted broad attention.
== History ==
The Lustre file system architecture was started as a research project in 1999 by Peter J. Braam, who was a staff member of Carnegie Mellon University (CMU) at the time. Braam went on to found his own company Cluster File Systems in 2001, starting from work on the InterMezzo file system in the Coda project at CMU.
Lustre was developed under the Accelerated Strategic Computing Initiative Path Forward project funded by the United States Department of Energy, which included Hewlett-Packard and Intel.
In September 2007, Sun Microsystems acquired the assets of Cluster File Systems Inc. including its "intellectual property".
Sun included Lustre with its high-performance computing hardware offerings, with the intent to bring Lustre technologies to Sun's ZFS file system and the Solaris operating system. In November 2008, Braam left Sun Microsystems, and Eric Barton and Andreas Dilger took control of the project.
In 2010 Oracle Corporation, by way of its acquisition of Sun, began to manage and release Lustre.
In December 2010, Oracle announced that it would cease Lustre 2.x development and place Lustre 1.8 into maintenance-only support, creating uncertainty around the future development of the file system. Following this announcement, several new organizations sprang up to provide support and development in an open community development model, including Whamcloud, Open Scalable File Systems, Inc. (OpenSFS), EUROPEAN Open File Systems (EOFS) and others. By the end of 2010, most Lustre developers had left Oracle. Braam and several associates joined the hardware-oriented Xyratex when it acquired the assets of ClusterStor, while Barton, Dilger, and others formed the software startup Whamcloud, where they continued to work on Lustre.
In August 2011, OpenSFS awarded a contract for Lustre feature development to Whamcloud. This contract covered the completion of features, including improved Single Server Metadata Performance scaling, which allows Lustre to better take advantage of many-core metadata server; online Lustre distributed filesystem checking (LFSCK), which allows verification of the distributed filesystem state between data and metadata servers while the filesystem is mounted and in use; and Distributed Namespace Environment (DNE), formerly Clustered Metadata (CMD), which allows the Lustre metadata to be distributed across multiple servers. Development also continued on ZFS-based back-end object storage at Lawrence Livermore National Laboratory. These features were in the Lustre 2.2 through 2.4 community release roadmap.
In November 2011, a separate contract was awarded to Whamcloud for the maintenance of the Lustre 2.x source code to ensure that the Lustre code would receive sufficient testing and bug fixing while new features were being developed.
In July 2012 Whamcloud was acquired by Intel, after Whamcloud won the FastForward DOE contract to prepare Lustre for use with exascale computing systems in the 2018 timeframe. OpenSFS then transitioned contracts for Lustre development to Intel.
In February 2013, Xyratex Ltd. announced that it had acquired the original Lustre trademark, logo, website and associated intellectual property from Oracle. In June 2013, Intel began expanding Lustre usage beyond traditional HPC, such as within Hadoop. For 2013 as a whole, OpenSFS announced requests for proposals (RFPs) to cover Lustre feature development, parallel file system tools, addressing Lustre technical debt, and parallel file system incubators. OpenSFS also established the Lustre Community Portal, a technical site that provides a collection of information and documentation in one area for reference and guidance to support the Lustre open source community. On April 8, 2014, Ken Claffey announced that Xyratex/Seagate was donating the lustre.org domain back to the user community, and this was completed in March 2015.
In June 2018, the Lustre team and assets were acquired from Intel by DDN. DDN organized the new acquisition as an independent division, reviving the Whamcloud name for the new division.
In November 2019, OpenSFS and EOFS announced at the SC19 Lustre BOF that the Lustre trademark had been transferred to them jointly from Seagate.
== Release history ==
Lustre file system was first installed for production use in March 2003 on the MCR Linux Cluster at the Lawrence Livermore National Laboratory, the third-largest supercomputer in the Top500 list at the time.
Lustre 1.0.0 was released in December 2003, and provided basic Lustre filesystem functionality, including server failover and recovery.
Lustre 1.2.0, released in March 2004, worked on Linux kernel 2.6, and had a "size glimpse" feature to avoid lock revocation on files undergoing write, and client side data write-back cache accounting (grant).
Lustre 1.4.0, released in November 2004, provided protocol compatibility between versions, could use InfiniBand networks, and could exploit extents/mballoc in the ldiskfs on-disk filesystem.
Lustre 1.6.0, released in April 2007, introduced mount configuration ("mountconf"), allowing servers to be configured with "mkfs" and "mount". It also allowed dynamic addition of object storage targets (OSTs), enabled Lustre distributed lock manager (LDLM) scalability on symmetric multiprocessing (SMP) servers, and provided free space management for object allocations.
Lustre 1.8.0, released in May 2009, provided OSS Read Cache, improved recovery in the face of multiple failures, added basic heterogeneous storage management via OST Pools, adaptive network timeouts, and version-based recovery. It was a transition release, being interoperable with both Lustre 1.6 and Lustre 2.0.
Lustre 2.0, released in August 2010, was based on significant internally restructured code to prepare for major architectural advancements. Lustre 2.x clients cannot interoperate with 1.8 or earlier servers. However, Lustre 1.8.6 and later clients can interoperate with Lustre 2.0 and later servers. The Metadata Target (MDT) and OST on-disk format from 1.8 can be upgraded to 2.0 and later without the need to reformat the filesystem.
Lustre 2.1, released in September 2011, was a community-wide initiative in response to Oracle suspending development on Lustre 2.x releases. It added the ability to run servers on Red Hat Linux 6 and increased the maximum ext4-based OST size from 24 TB to 128 TB, as well as a number of performance and stability improvements. Lustre 2.1 servers remained interoperable with 1.8.6 and later clients.
Lustre 2.2, released in March 2012, focused on providing metadata performance improvements and new features. It added parallel directory operations allowing multiple clients to traverse and modify a single large directory concurrently, faster recovery from server failures, increased stripe counts for a single file (across up to 2000 OSTs), and improved single-client directory traversal performance.
Lustre 2.3, released in October 2012, continued to improve the metadata server code to remove internal locking bottlenecks on nodes with many CPU cores (over 16). The object store added a preliminary ability to use ZFS as the backing file system. The Lustre File System ChecK (LFSCK) feature can verify and repair the MDS Object Index (OI) while the file system is in use, after a file-level backup/restore or in case of MDS corruption. The server-side IO statistics were enhanced to allow integration with batch job schedulers such as SLURM to track per-job statistics. Client-side software was updated to work with Linux kernels up to version 3.0.
Lustre 2.4, released in May 2013, added a considerable number of major features, many funded directly through OpenSFS. Distributed Namespace Environment (DNE) allows horizontal metadata capacity and performance scaling for 2.4 clients, by allowing subdirectory trees of a single namespace to be located on separate MDTs. ZFS can now be used as the backing filesystem for both MDT and OST storage. The LFSCK feature added the ability to scan and verify the internal consistency of the MDT FID and LinkEA attributes. The Network Request Scheduler
(NRS) adds policies to optimize client request processing for disk ordering or fairness. Clients can optionally send bulk RPCs up to 4 MB in size. Client-side software was updated to work with Linux kernels up to version 3.6, while servers remained interoperable with 1.8 clients.
Lustre 2.5, released in October 2013, added the highly anticipated Hierarchical Storage Management (HSM) feature. A core requirement in enterprise environments, HSM allows customers to easily implement tiered storage solutions in their operational environment. This release is the current OpenSFS-designated Maintenance Release branch of Lustre. The most recent maintenance version is 2.5.3, released in September 2014.
Lustre 2.6, released in July 2014, was a more modest release feature-wise, adding LFSCK functionality to do local consistency checks on the OST as well as consistency checks between MDT and OST objects. The NRS Token Bucket Filter
(TBF) policy was added. Single-client IO performance was improved over the previous releases. This release also added a preview of DNE striped directories, allowing single large directories to be stored on multiple MDTs to improve performance and scalability.
Lustre 2.7, released in March 2015, added LFSCK functionality to verify DNE consistency of remote and striped directories between multiple MDTs. Dynamic LNet Config adds the ability to configure and modify LNet network interfaces, routes, and routers at runtime. A new evaluation feature was added for UID/GID mapping for clients with different administrative domains, along with improvements to the DNE striped directory functionality.
Lustre 2.8, released in March 2016, finished the DNE striped directory feature, including support for migrating directories between MDTs, and cross-MDT hard link and rename. As well, it included improved support for Security-Enhanced Linux (SELinux) on the client, Kerberos authentication and RPC encryption over the network, and performance improvements for LFSCK.
Lustre 2.9 was released in December 2016
and included a number of features related to security and performance. The Shared Secret Key security flavour uses the same GSSAPI mechanism as Kerberos to provide client and server node authentication, and RPC message integrity and security (encryption). The Nodemap feature allows categorizing client nodes into groups and then mapping the UID/GID for those clients, allowing remotely administered clients to transparently use a shared filesystem without having a single set of UID/GIDs for all client nodes. The subdirectory mount feature allows clients to mount a subset of the filesystem namespace from the MDS. This release also added support for up to 16 MiB RPCs for more efficient I/O submission to disk, and added the ladvise interface to allow clients to provide I/O hints to the servers to prefetch file data into server cache or flush file data from server cache. There was improved support for specifying filesystem-wide default OST pools, and improved inheritance of OST pools in conjunction with other file layout parameters.
Lustre 2.10 was released in July 2017
and has a number of significant improvements. The LNet Multi-Rail (LMR) feature allows bonding multiple network interfaces (InfiniBand, Omni-Path, and/or Ethernet) on a client and server to increase aggregate I/O bandwidth. Individual files can use composite file layouts constructed from multiple components, each covering a region of the file based on file offset and allowing different layout parameters such as stripe count, OST pool/storage type, etc. Progressive File Layout (PFL) is the first feature to use composite layouts, but the implementation is flexible for use with other file layouts such as mirroring and erasure coding. The NRS Token Bucket Filter (TBF) server-side scheduler has implemented new rule types, including RPC-type scheduling and the ability to specify multiple parameters such as JobID and NID for rule matching. Tools for managing ZFS snapshots of Lustre filesystems have been added, to simplify the creation, mounting, and management of MDT and OST ZFS snapshots as separate Lustre mountpoints.
Lustre 2.11 was released in April 2018
and contains two significant new features, and several smaller features. The File Level Redundancy (FLR) feature expands on the 2.10 PFL implementation, adding the ability to specify mirrored file layouts for improved availability in case of storage or server failure and/or improved performance with highly concurrent reads. The Data-on-MDT (DoM) feature allows small (few MiB) files to be stored on the MDT to leverage typical flash-based RAID-10 storage for lower latency and reduced IO contention, instead of the typical HDD RAID-6 storage used on OSTs. As well, the LNet Dynamic Discovery feature allows auto-configuration of LNet Multi-Rail between peers that share an LNet network. The LDLM Lock Ahead feature allows appropriately modified applications and libraries to pre-fetch DLM extent locks from the OSTs for files, if the application knows (or predicts) that this file extent will be modified in the near future, which can reduce lock contention for multiple clients writing to the same file.
Lustre 2.12 was released on December 21, 2018 and focused on improving Lustre usability and stability, with improvements to the performance and functionality of the FLR and DoM features added in Lustre 2.11, as well as smaller changes to NRS TBF, HSM, and JobStats. It added LNet Network Health to allow the LNet Multi-Rail feature from Lustre 2.10 to better handle network faults when a node has multiple network interfaces. The Lazy Size on MDT (LSOM) feature allows storing an estimate of the file size on the MDT for use by policy engines, filesystem scanners, and other management tools, which can make decisions about files more efficiently without fully accurate file sizes or block counts, and without having to query the OSTs for this information. This release also added the ability to manually restripe an existing directory across multiple MDTs, to allow migration of directories with large numbers of files to use the capacity and performance of several MDS nodes. Lustre RPC data checksums were extended with SCSI T10-PI integrated data checksums from the client to the kernel block layer, SCSI host adapter, and T10-enabled hard drives.
Lustre 2.13 was released on December 5, 2019 and added new performance-related features: Persistent Client Cache (PCC), which allows direct use of NVMe and NVRAM storage on the client nodes while keeping the files part of the global filesystem namespace, and OST Overstriping, which allows files to store multiple stripes on a single OST to better utilize fast OSS hardware. As well, the LNet Multi-Rail Network Health functionality was improved to work with LNet RDMA router nodes. The PFL functionality was enhanced with Self-Extending Layouts (SEL) to allow file components to be dynamically sized, to better deal with flash OSTs that may be much smaller than disk OSTs within the same filesystem. The release also included a number of smaller improvements, such as balancing DNE remote directory creation across MDTs, using Lazy-size-on-MDT to reduce the overhead of "lfs find", directories with 10M files per shard for ldiskfs, and bulk RPC sizes up to 64 MB.
Lustre 2.14 was released on February 19, 2021 and includes three main features. Client Data Encryption implements fscrypt to allow file data to be encrypted on the client before network transfer and persistent storage on the OST and MDT. OST Pool Quotas extends the quota framework to allow the assignment and enforcement of quotas on the basis of OST storage pools. DNE Auto Restriping can now adjust how many MDTs a large directory is striped over based on size thresholds defined by the administrator, similar to Progressive File Layouts for directories.
Lustre 2.15 was released on June 16, 2022 and includes three main features. Client Directory Encryption
expands on the fscrypt data encryption in the 2.14 release to also allow file and directory names to be encrypted on the client before network transfer and persistent storage on the MDT. DNE MDT space balancing automatically balances new directory creation across MDTs in the filesystem in round-robin and/or based on available inodes and space, which in turn helps distribute client metadata workload over MDTs more evenly. For applications using the NVIDIA GPU Direct Storage interface (GDS),
the Lustre client can do zero-copy RDMA read and write from the storage server directly into the GPU memory to avoid an extra data copy from CPU memory and extra processing overhead.
User Defined Selection Policy (UDSP) allows setting interface selection policies for nodes with multiple network interfaces.
Lustre 2.16 was released on November 8, 2024 and includes three main features. Large network addressing support allows IPv6 and potentially other large address formats, such as InfiniBand GUIDs, to be used for client and server node addressing, in addition to the standard IPv4 addresses. The Unaligned and Hybrid Direct IO feature improves performance for applications doing large buffered and direct read/write operations by avoiding overhead in the client page cache. The Optimized Directory Traversal (batched statahead) feature improves application workloads that traverse directory hierarchies and access file attributes in a systematic access pattern by prefetching file attributes in parallel from the MDS(es) using bulk RPCs.
== Architecture ==
A Lustre file system has three major functional units:
One or more metadata server (MDS) nodes that have one or more metadata target (MDT) devices per Lustre filesystem that store namespace metadata, such as filenames, directories, access permissions, and file layout. The MDT data is stored in a local disk filesystem. However, unlike block-based distributed filesystems, such as GPFS and PanFS, where the metadata server controls all of the block allocation, the Lustre metadata server is only involved in pathname and permission checks, and is not involved in any file I/O operations, avoiding I/O scalability bottlenecks on the metadata server. The ability to have multiple MDTs in a single filesystem was added in Lustre 2.4, allowing directory subtrees to reside on the secondary MDTs, while 2.7 and later allow large single directories to be distributed across multiple MDTs as well.
One or more object storage server (OSS) nodes that store file data on one or more object storage target (OST) devices. Depending on the server's hardware, an OSS typically serves between two and eight OSTs, with each OST managing a single local disk filesystem. The capacity of a Lustre file system is the sum of the capacities provided by the OSTs.
Client(s) that access and use the data. Lustre presents all clients with a unified namespace for all of the files and data in the filesystem, using standard POSIX semantics, and allows concurrent and coherent read and write access to the files in the filesystem.
The MDT, OST, and client may be on the same node (usually for testing purposes), but in typical production installations these devices are on separate nodes communicating over a network. Each MDT and OST may be part of only a single filesystem, though it is possible to have multiple MDTs or OSTs on a single node that are part of different filesystems. The Lustre Network (LNet) layer can use several types of network interconnects, including native InfiniBand verbs, Omni-Path, RoCE, and iWARP via OFED, TCP/IP on Ethernet, and other proprietary network technologies such as the Cray Gemini interconnect. In Lustre 2.3 and earlier, Myrinet, Quadrics, Cray SeaStar and RapidArray networks were also supported, but these network drivers were deprecated when these networks were no longer commercially available, and support was removed completely in Lustre 2.8. Lustre will take advantage of remote direct memory access (RDMA) transfers, when available, to improve throughput and reduce CPU usage.
The storage used for the MDT and OST backing filesystems is normally provided by hardware RAID devices, though it will work with any block device. Since Lustre 2.4, the MDT and OST can also use ZFS for the backing filesystem in addition to ext4, allowing them to effectively use JBOD storage instead of hardware RAID devices. The Lustre OSS and MDS servers read, write, and modify data in the format imposed by the backing filesystem and return this data to the clients. This allows Lustre to take advantage of improvements and features in the underlying filesystem, such as compression and data checksums in ZFS. Clients do not have any direct access to the underlying storage, which ensures that a malfunctioning or malicious client cannot corrupt the filesystem structure.
An OST is a dedicated filesystem that exports an interface to byte ranges of file objects for read/write operations, with extent locks to protect data consistency. An MDT is a dedicated filesystem that stores inodes, directories, POSIX and extended file attributes, controls file access permissions/ACLs, and tells clients the layout of the object(s) that make up each regular file. MDTs and OSTs currently use either an enhanced version of ext4 called ldiskfs, or ZFS/DMU for back-end data storage to store files/objects using the open source ZFS-on-Linux port.
The client mounts the Lustre filesystem locally with a VFS driver for the Linux kernel that connects the client to the server(s). Upon initial mount, the client is provided a File Identifier (FID) for the root directory of the mountpoint. When the client accesses a file, it performs a filename lookup on the MDS. When the MDS filename lookup is complete and the user and client have permission to access and/or create the file, either the layout of an existing file is returned to the client or a new file is created on behalf of the client, if requested. For read or write operations, the client then interprets the file layout in the logical object volume (LOV) layer, which maps the file logical offset and size to one or more objects. The client then locks the file range being operated on and executes one or more parallel read or write operations directly to the OSS nodes that hold the data objects. With this approach, bottlenecks for client-to-OSS communications are eliminated, so the total bandwidth available for the clients to read and write data scales almost linearly with the number of OSTs in the filesystem.
After the initial lookup of the file layout, the MDS is not normally involved in file IO operations since all block allocation and data IO is managed internally by the OST. Clients do not directly modify the objects or data on the OST filesystems, but instead delegate this task to OSS nodes. This approach ensures scalability for large-scale clusters and supercomputers, as well as improved security and reliability. In contrast, shared block-based filesystems such as GPFS and OCFS allow direct access to the underlying storage by all of the clients in the filesystem, which requires a large back-end SAN attached to all clients, and increases the risk of filesystem corruption from misbehaving/defective clients.
== Implementation ==
In a typical Lustre installation on a Linux client, a Lustre filesystem driver module is loaded into the kernel and the filesystem is mounted like any other local or network filesystem. Client applications see a single, unified filesystem even though it may be composed of tens to thousands of individual servers and MDT/OST filesystems.
On some massively parallel processor (MPP) installations, computational processors can access a Lustre file system by redirecting their I/O requests to a dedicated I/O node configured as a Lustre client. This approach is used in the Blue Gene installation at Lawrence Livermore National Laboratory.
Another approach used in the early years of Lustre was the liblustre library on the Cray XT3, using the Catamount operating system on systems such as Sandia Red Storm, which provided userspace applications with direct filesystem access. Liblustre was a user-level library that allowed computational processors to mount and use the Lustre file system as a client. Using liblustre, the computational processors could access a Lustre file system even if the service node on which the job was launched was not a Linux client. Liblustre allowed data movement directly between application space and the Lustre OSSs without requiring an intervening data copy through the kernel, thus providing access from computational processors to the Lustre file system directly in a constrained operating environment. The liblustre functionality was deleted from Lustre 2.7.0 after having been disabled since Lustre 2.6.0, and was untested since Lustre 2.3.0.
In Linux Kernel version 4.18, the incomplete port of the Lustre client was removed from the kernel staging area in order to speed up development and porting to newer kernels. The out-of-tree Lustre client and server is still available for RHEL, SLES, and Ubuntu distro kernels, as well as vanilla kernels.
== Data objects and file striping ==
In a traditional Unix disk file system, an inode data structure contains basic information about each file, such as where the data contained in the file is stored. The Lustre file system also uses inodes, but inodes on MDTs point to one or more OST objects associated with the file rather than to data blocks. These objects are implemented as files on the OSTs. When a client opens a file, the file open operation transfers a set of object identifiers and their layout from the MDS to the client, so that the client can directly interact with the OSS node(s) that hold the object(s). This allows the client(s) to perform I/O in parallel across all of the OST objects in the file without further communication with the MDS, avoiding contention from centralized block and lock management.
If only one OST object is associated with an MDT inode, that object contains all the data in the Lustre file. When more than one object is associated with a file, data in the file is "striped" in a round-robin manner across the OST objects, similar to RAID 0, in chunks typically 1 MB or larger. Striping a file over multiple OST objects provides significant performance benefits if there is a need for high bandwidth access to a single large file. When striping is used, the maximum file size is not limited by the size of a single target. Capacity and aggregate I/O bandwidth scale with the number of OSTs a file is striped over. Also, since the locking of each object is managed independently for each OST, adding more stripes (one per OST) scales the file I/O locking capacity of the file proportionately. Each file created in the filesystem may specify different layout parameters, such as the stripe count (number of OST objects making up that file), stripe size (unit of data stored on each OST before moving to the next), and OST selection, so that performance and capacity can be tuned optimally for each file. When many application threads are reading or writing to separate files in parallel, it is optimal to have a single stripe per file, since the application is providing its own parallelism. When there are many threads reading or writing a single large file concurrently, then it is optimal to have at least one stripe on each OST to maximize the performance and capacity of that file.
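The round-robin mapping from a logical file offset to a stripe can be sketched in a few lines. This is a simplified illustration of the RAID-0-style layout described above, not Lustre's actual layout code; the function name is hypothetical.

```python
def map_offset_to_stripe(offset, stripe_count, stripe_size):
    """Map a logical file offset to (stripe index, offset within that
    OST object) for a round-robin striped layout."""
    chunk = offset // stripe_size           # which stripe-size chunk of the file
    stripe_index = chunk % stripe_count     # round-robin over the OST objects
    # Each object receives every stripe_count-th chunk, packed contiguously:
    object_offset = (chunk // stripe_count) * stripe_size + offset % stripe_size
    return stripe_index, object_offset

# With 4 stripes of 1 MiB each, offset 5 MiB lands on the second object,
# 1 MiB into that object:
MiB = 1 << 20
print(map_offset_to_stripe(5 * MiB, 4, MiB))  # -> (1, 1048576)
```

Because consecutive chunks go to different objects, a large sequential read or write is spread across all stripe_count OSTs in parallel.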
In the Lustre 2.10 release, the ability to specify composite layouts was added to allow files to have different layout parameters for different regions of the file. The Progressive File Layout (PFL) feature uses composite layouts to improve file IO performance over a wider range of workloads, as well as simplify usage and administration. For example, a small PFL file can have a single stripe on flash for low access overhead, while larger files can have many stripes for high aggregate bandwidth and better OST load balancing. The composite layouts are further enhanced in the 2.11 release with the File Level Redundancy (FLR) feature, which allows a file to have multiple overlapping layouts, providing RAID 0+1 redundancy for these files as well as improved read performance. The Lustre 2.11 release also added the Data-on-MDT (DoM) feature, which allows the first component of a PFL file to be stored directly on the MDT with the inode. This reduces overhead for accessing small files, both in terms of space usage (no OST object is needed) as well as network usage (fewer RPCs needed to access the data). DoM also improves performance for small files if the MDT is SSD-based, while the OSTs are disk-based. In Lustre 2.13 the OST Overstriping feature allows a single component to have multiple stripes on one OST to further improve parallelism of locking, while the Self-Extending Layout feature allows the component size to be dynamic during write so that it can cope with individual (flash) OSTs running out of space before the whole filesystem is out of space.
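The core idea of a PFL-style composite layout, selecting the component whose extent covers a given file offset, can be sketched as follows. The component structure and values here are hypothetical, chosen only to illustrate the "few stripes for small files, many stripes for large files" pattern described above.

```python
# Each component covers a half-open byte extent [start, end) of the file
# and carries its own striping parameters (illustrative values):
components = [
    {"start": 0,       "end": 1 << 20,        "stripe_count": 1},   # first 1 MiB
    {"start": 1 << 20, "end": 1 << 30,        "stripe_count": 4},   # up to 1 GiB
    {"start": 1 << 30, "end": float("inf"),   "stripe_count": 32},  # the rest
]

def component_for_offset(layout, offset):
    """Return the layout component whose extent covers a file offset."""
    for comp in layout:
        if comp["start"] <= offset < comp["end"]:
            return comp
    raise ValueError("offset not covered by any component")

# A 5 MiB offset falls in the middle component, so I/O there uses 4 stripes:
print(component_for_offset(components, 5 << 20)["stripe_count"])  # -> 4
```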
== Metadata objects and DNE remote or striped directories ==
When a client initially mounts a filesystem, it is provided the 128-bit Lustre File Identifier (FID, composed of the 64-bit Sequence number, 32-bit Object ID, and 32-bit Version) of the root directory for the mountpoint. When doing a filename lookup, the client performs a lookup of each pathname component by mapping the parent directory FID Sequence number to a specific MDT via the FID Location Database (FLDB), and then does a lookup on the MDS managing this MDT using the parent FID and filename. The MDS will return the FID for the requested pathname component along with a DLM lock. Once the MDT of the last directory in the path is determined, further directory operations (for non-striped directories) will normally take place on that MDT, avoiding contention between MDTs.
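The component-by-component lookup can be sketched as below. The FLDB is modeled as a list of (sequence range, MDT index) pairs, and the per-component MDS RPC is stood in for by a callback; all ranges, FIDs, and names here are made up for illustration.

```python
# Hypothetical FLDB: FID sequence ranges mapped to MDT indices.
FLDB = [((0x200000000, 0x23FFFFFFF), 0),
        ((0x240000000, 0x27FFFFFFF), 1)]

def mdt_for_seq(seq):
    """Locate the MDT serving a FID sequence number."""
    for (lo, hi), mdt in FLDB:
        if lo <= seq <= hi:
            return mdt
    raise LookupError("sequence not in FLDB")

def lookup_path(root_fid, path, mds_lookup):
    """Resolve a path one component at a time; mds_lookup(mdt, fid, name)
    stands in for the per-component MDS RPC and returns the child FID.
    A FID is modeled as a (sequence, object_id, version) tuple."""
    fid = root_fid
    for name in path.strip("/").split("/"):
        mdt = mdt_for_seq(fid[0])
        fid = mds_lookup(mdt, fid, name)
    return fid

# Toy namespace: "/home" lives on MDT 0, its child "user" on MDT 1.
tree = {((0x200000007, 1, 0), "home"): (0x240000003, 1, 0),
        ((0x240000003, 1, 0), "user"): (0x240000003, 2, 0)}
fid = lookup_path((0x200000007, 1, 0), "/home/user",
                  lambda mdt, f, n: tree[(f, n)])
print(fid)  # -> (9663676419, 2, 0), i.e. (0x240000003, 2, 0)
```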
For DNE striped directories, the per-directory layout stored on the parent directory provides a hash function and a list of MDT directory FIDs across which the directory is distributed. The Logical Metadata Volume (LMV) on the client hashes the filename and maps it to a specific MDT directory shard, which will handle further operations on that file in an identical manner to a non-striped directory. For readdir() operations, the entries from each directory shard are returned to the client sorted in the local MDT directory hash order, and the client performs a merge sort to interleave the filenames in hash order so that a single 64-bit cookie can be used to determine the current offset within the directory.
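The filename-to-shard mapping can be sketched as a hash modulo the shard count. This is illustrative only: the real LMV hash functions (e.g. fnv_1a_64) differ from the CRC32 used here, and the shard list comes from the directory's layout attribute; the FID strings below are made up.

```python
import zlib

def shard_for_name(name, shard_fids):
    """Map a filename to one of a striped directory's MDT shards by
    hashing the name and indexing the shard list."""
    h = zlib.crc32(name.encode())
    return shard_fids[h % len(shard_fids)]

shards = ["[0x200000400:0x1:0x0]", "[0x240000400:0x1:0x0]"]  # made-up FIDs
print(shard_for_name("results.dat", shards))
```

The key property is determinism: every client hashes the same name to the same shard, so create, lookup, and unlink for a given filename all go to one MDT without coordination.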
In Lustre 2.15, the client LMV implements round-robin and space-balanced default directory layouts, so that clients can use a large number of MDTs in a single filesystem more effectively. When a new subdirectory is created near the root of the filesystem (the top three directory levels by default), it is automatically created as a remote directory on one of the available MDTs (selected in sequential order) to balance space usage and load across servers. If the free space on the MDTs becomes imbalanced (more than 5% difference in free space and inodes) then the clients will bias new subdirectory creation toward MDTs with more free space in order to restore balance.
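The selection policy described above can be sketched as: round-robin while free space is balanced, weighted toward freer MDTs once the imbalance exceeds the threshold. This is a hypothetical sketch of the behaviour, not the LMV implementation, and it considers only free space (not inodes).

```python
import itertools
import random

def pick_mdt(mdts, rr_counter, threshold=0.05):
    """Choose an MDT index for a new subdirectory."""
    free = [m["free"] / m["total"] for m in mdts]
    if max(free) - min(free) <= threshold:
        return next(rr_counter) % len(mdts)   # balanced: sequential order
    # Imbalanced: bias the choice toward MDTs with more free space.
    return random.choices(range(len(mdts)), weights=free)[0]

# With one MDT 10% free and another 90% free, most new directories
# should land on the freer MDT until balance is restored:
mdts = [{"free": 100, "total": 1000}, {"free": 900, "total": 1000}]
rr = itertools.count()
picks = [pick_mdt(mdts, rr) for _ in range(1000)]
print(picks.count(1) > picks.count(0))  # -> True
```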
== Locking ==
The Lustre distributed lock manager (LDLM), implemented in the OpenVMS style, protects the integrity of each file's data and metadata. Access and modification of a Lustre file is completely cache coherent among all of the clients. Metadata locks are managed by the MDT that stores the inode for the file, using FID as the resource name. The metadata locks are split into separate bits that protect the lookup of the file (file owner and group, permission and mode, and access control list (ACL)), the state of the inode (directory size, directory contents, link count, timestamps), layout (file striping, since Lustre 2.4), and extended attributes (xattrs, since Lustre 2.5). A client can fetch multiple metadata lock bits for a single inode with a single RPC request, but currently they are only ever granted a read lock for the inode. The MDS manages all modifications to the inode in order to avoid lock resource contention and is currently the only node that gets write locks on inodes.
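The separate metadata lock bits can be pictured as a bitmask that a client requests in a single RPC. The names and values below are illustrative, loosely echoing Lustre's MDS_INODELOCK_* constants rather than reproducing them.

```python
from enum import IntFlag

class MetaLockBits(IntFlag):
    """Illustrative metadata lock bits, one per protected inode aspect."""
    LOOKUP = 1 << 0   # owner/group, permissions and mode, ACL
    UPDATE = 1 << 1   # size, contents, link count, timestamps
    LAYOUT = 1 << 2   # file striping (since Lustre 2.4)
    XATTR  = 1 << 3   # extended attributes (since Lustre 2.5)

# One RPC can request several bits at once:
wanted = MetaLockBits.LOOKUP | MetaLockBits.LAYOUT
print(MetaLockBits.LAYOUT in wanted)  # -> True
```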
File data locks are managed by the OST on which each object of the file is striped, using byte-range extent locks. Clients can be granted overlapping read extent locks for part or all of the file, allowing multiple concurrent readers of the same file, and/or non-overlapping write extent locks for independent regions of the file. This allows many Lustre clients to access a single file concurrently for both read and write, avoiding bottlenecks during file I/O. In practice, because Linux clients manage their data cache in units of pages, the clients will request locks that are always an integer multiple of the page size (4096 bytes on most clients). When a client is requesting an extent lock the OST may grant a lock for a larger extent than originally requested, in order to reduce the number of lock requests that the client makes. The actual size of the granted lock depends on several factors, including the number of currently granted locks on that object, whether there are conflicting write locks for the requested lock extent, and the number of pending lock requests on that object. The granted lock is never smaller than the originally requested extent. OST extent locks use the Lustre FID of the object as the resource name for the lock. Since the number of extent lock servers scales with the number of OSTs in the filesystem, this also scales the aggregate locking performance of the filesystem, and of a single file if it is striped over multiple OSTs.
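Two details from the paragraph above, page-aligned lock extents and the overlap test between extents, can be sketched as follows. This is an illustration of the concepts, not Lustre's LDLM code, and it ignores the server-side lock-growing heuristics.

```python
PAGE_SIZE = 4096  # page size on most clients

def page_aligned_extent(start, end):
    """Round a requested inclusive byte range [start, end] out to page
    boundaries, as a client caching data in pages would before
    requesting an extent lock."""
    aligned_start = (start // PAGE_SIZE) * PAGE_SIZE
    aligned_end = -(-(end + 1) // PAGE_SIZE) * PAGE_SIZE - 1  # ceil to page, inclusive
    return aligned_start, aligned_end

def extents_conflict(a, b):
    """Two extent locks conflict when their inclusive ranges overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

req = page_aligned_extent(100, 5000)
print(req)                                   # -> (0, 8191)
print(extents_conflict(req, (8192, 12287)))  # -> False: adjacent, no overlap
```

Non-overlapping write extents on the same object can thus be granted to different clients concurrently, which is what lets many clients write one file in parallel.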
== Networking ==
The communication between the Lustre clients and servers is implemented using Lustre Networking (LNet), which was originally based on the Sandia Portals network programming application programming interface. Disk storage is connected to the Lustre MDS and OSS server nodes using direct attached storage (SAS, FC, iSCSI) or traditional storage area network (SAN) technologies, which is independent of the client-to-server network.
LNet can use many commonly used network types, such as InfiniBand and TCP (commonly Ethernet) networks, and allows simultaneous availability across multiple network types with routing between them. Remote Direct Memory Access (RDMA) is used for data and metadata transfer between nodes when provided by the underlying networks, such as InfiniBand, RoCE, iWARP, and Omni-Path, as well as proprietary high-speed networks such as Cray Aries and Gemini, and Atos BXI. High availability and recovery features enable transparent recovery in conjunction with failover servers.
Since Lustre 2.10, the LNet Multi-Rail (MR) feature allows link aggregation of two or more network interfaces between a client and server to improve bandwidth. The LNet interface types do not need to be the same network type. In 2.12, Multi-Rail was enhanced to improve fault tolerance if multiple network interfaces are available between peers.
LNet provides end-to-end throughput over Gigabit Ethernet networks in excess of 100 MB/s, throughput up to 11 GB/s using InfiniBand enhanced data rate (EDR) links, and throughput over 11 GB/s across 100 Gigabit Ethernet interfaces.
== High availability ==
Lustre file system high availability features include a robust failover and recovery mechanism, making server failures and reboots transparent. Version interoperability between successive minor versions of the Lustre software enables a server to be upgraded by taking it offline (or failing it over to a standby server), performing the upgrade, and restarting it, while all active jobs continue to run, experiencing a delay while the backup server takes over the storage.
Lustre MDSes are configured as an active/passive pair exporting a single MDT, or one or more active/active MDS pairs with DNE exporting two or more separate MDTs, while OSSes are typically deployed in an active/active configuration exporting separate OSTs to provide redundancy without extra system overhead. In single-MDT filesystems, the standby MDS for one filesystem is the MGS and/or monitoring node, or the active MDS for another file system, so no nodes are idle in the cluster.
== HSM (Hierarchical Storage Management) ==
Lustre provides the capability to have multiple storage tiers within a single filesystem namespace. It allows traditional HSM functionality to copy (archive) files off the primary filesystem to a secondary archive storage tier. The archive tier is typically a tape-based system that is often fronted by a disk cache. Once a file is archived, it can be released from the main filesystem, leaving only a stub that references the archive copy. If a released file is opened, the Coordinator blocks the open, sends a restore request to a copytool, and then completes the open once the copytool has finished restoring the file.
In addition to external storage tiering, it is possible to have multiple storage tiers within a single filesystem namespace. OSTs of different types (e.g. HDD and SSD) can be declared in named storage pools. The OST pools can be selected when specifying file layouts, and different pools can be used within a single PFL file layout. Files can be migrated between storage tiers either manually or under control of the Policy Engine. Since Lustre 2.11, it is also possible to mirror a file to different OST pools with a FLR file layout, for example to pre-stage files into flash for a computing job.
HSM includes some additional Lustre components to manage the interface between the primary filesystem and the archive:
Coordinator: receives archive and restore requests and dispatches them to agent nodes.
Agent: runs a copytool to copy data from primary storage to the archive and vice versa.
Copytool: handles data motion and metadata updates. There are different copytools to interface with different archive systems. A generic POSIX copytool is available for archives that provide a POSIX-like front-end interface. Copytools are also available for the High Performance Storage System (HPSS), Tivoli Storage Manager (TSM), Amazon S3, and Google Drive.
Policy Engine: watches filesystem Changelogs for new files to archive, applies policies to release files based on age or space usage, and communicates with the MDT and Coordinator. The Policy Engine can also trigger actions such as migration between tiers, purge, and removal. The most commonly used policy engine is RobinHood, but other policy engines can also be used.
HSM also defines new states for files including:
Exist: Some copy, possibly incomplete, exists in the HSM.
Archive: A full copy exists on the archive side of the HSM.
Dirty: The primary copy of the file has been modified and differs from the archived copy.
Released: A stub inode exists on an MDT, but the data objects have been removed and the only copy exists in the archive.
Lost: The archive copy of the file has been lost and cannot be restored.
No Release: The file should not be released from the filesystem.
No Archive: The file should not be archived.
== Deployments ==
Lustre is used by many of the TOP500 supercomputers and large multi-cluster sites. Six of the top 10 and more than 60 of the top 100 supercomputers use Lustre file systems. These include the 700 PB, 13 TB/s Orion filesystem for the Frontier supercomputer at Oak Ridge National Laboratory (ORNL),
Fugaku and K Computer at the RIKEN Advanced Institute for Computational Science, Tianhe-1A at the National Supercomputing Center in Tianjin, China, LUMI at CSC, Jaguar and Titan at ORNL, Blue Waters at the University of Illinois, and Sequoia and Blue Gene/L at Lawrence Livermore National Laboratory (LLNL).
There are also large Lustre filesystems at the National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Texas Advanced Computing Center, Brazilian National Laboratory of Scientific Computing, and NASA in North America, in Asia at Tokyo Institute of Technology, in Europe at CEA, and many others.
== Commercial technical support ==
Commercial technical support for Lustre is often bundled along with the computing system or storage hardware sold by the vendor. Some vendors include Hewlett-Packard (as the HP StorageWorks Scalable File Share, circa 2004 through 2008),
ATOS, and Fujitsu. Vendors selling storage hardware with bundled Lustre support include Hitachi Data Systems (2012), DataDirect Networks (DDN), Aeon Computing, and others. It is also possible to get software-only support for Lustre file systems from some vendors, including Whamcloud.
Amazon Web Services offers Amazon FSx for Lustre, a fully managed service for launching and running high-performance file systems in its cloud.
Microsoft Azure offers Azure Managed Lustre (AMLFS), a fully managed, pay-as-you-go file system for high-performance computing (HPC) and AI workloads in its cloud.
== See also ==
List of file systems, the distributed parallel fault-tolerant file system section
== References ==
== External links ==
Official website
=== Documentation ===
Understanding Lustre Internals, Second Edition
Internal workings of Lustre file system and its core subsystems
=== Information wikis ===
Lustre Community wiki
Lustre (DDN) wiki
Lustre (OpenSFS) wiki
=== Community foundations ===
OpenSFS
EOFS – European Open File System
=== Hardware/software vendors ===
DataDirect Networks (DDN)
Hewlett Packard Enterprise / Cray (including former Xyratex employees)
NetApp
Aeon Computing
A standard Sudoku contains 81 cells, in a 9×9 grid, and has 9 boxes, each box being the intersection of the first, middle, or last 3 rows, and the first, middle, or last 3 columns. Each cell may contain a number from one to nine, and each number can only occur once in each row, column, and box. A Sudoku starts with some cells containing numbers (clues), and the goal is to solve the remaining cells. Proper Sudokus have one solution. Players and investigators use a wide range of computer algorithms to solve Sudokus, study their properties, and make new puzzles, including Sudokus with interesting symmetries and other properties.
There are several computer algorithms that will solve 9×9 puzzles (n = 9) in fractions of a second, but combinatorial explosion occurs as n increases, creating limits to the properties of Sudokus that can be constructed, analyzed, and solved as n increases.
== Techniques ==
=== Backtracking ===
Some hobbyists have developed computer programs that will solve Sudoku puzzles using a backtracking algorithm, which is a type of brute force search. Backtracking is a depth-first search (in contrast to a breadth-first search), because it will completely explore one branch to a possible solution before moving to another branch. Although it has been established that approximately 6.67×10^21 final grids exist, a brute force algorithm can be a practical method to solve Sudoku puzzles.
A brute force algorithm visits the empty cells in some order, filling in digits sequentially, or backtracking when the number is found to be not valid. Briefly, a program would solve a puzzle by placing the digit "1" in the first cell and checking if it is allowed to be there. If there are no violations (checking row, column, and box constraints) then the algorithm advances to the next cell and places a "1" in that cell. When checking for violations, if it is discovered that the "1" is not allowed, the value is advanced to "2". If a cell is discovered where none of the 9 digits is allowed, then the algorithm leaves that cell blank and moves back to the previous cell. The value in that cell is then incremented by one. This is repeated until the allowed value in the last (81st) cell is discovered.
The animation shows how a Sudoku is solved with this method. The puzzle's clues (red numbers) remain fixed while the algorithm tests each unsolved cell with a possible solution. Notice that the algorithm may discard all the previously tested values if it finds the existing set does not fulfill the constraints of the Sudoku.
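The cell-by-cell procedure described above can be sketched as a recursive solver (a sketch, not any particular published program; the grid is a 9×9 list of lists with 0 marking empty cells):

```python
def valid(grid, r, c, d):
    """Check whether digit d may be placed at row r, column c."""
    if any(grid[r][j] == d for j in range(9)):       # row constraint
        return False
    if any(grid[i][c] == d for i in range(9)):       # column constraint
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)              # top-left of the box
    return all(grid[br + i][bc + j] != d
               for i in range(3) for j in range(3))  # box constraint

def solve(grid):
    """Fill empty cells (0) in place; return True once a solution is found."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):               # try digits 1..9 in order
                    if valid(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                grid[r][c] = 0                       # no digit fits: backtrack
                return False
    return True                                      # no empty cell left
```

Trying the digits in ascending order, cell by cell, mirrors the "counting upward" behavior described above.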
Advantages of this method are:
A solution is guaranteed (as long as the puzzle is valid).
Solving time is mostly unrelated to degree of difficulty.
The algorithm (and therefore the program code) is simpler than other algorithms, especially compared to strong algorithms that ensure a solution to the most difficult puzzles.
The disadvantage of this method is that the solving time may be slow compared to algorithms modeled after deductive methods. One programmer reported that such an algorithm may typically require as few as 15,000 cycles, or as many as 900,000 cycles to solve a Sudoku, each cycle being the change in position of a "pointer" as it moves through the cells of a Sudoku.
A different approach, which also uses backtracking, draws on the fact that in the solution to a standard Sudoku the distribution of every individual symbol (value) must follow one of only 46656 patterns.
In manual Sudoku solving this technique is referred to as pattern overlay or using templates, and it is confined to filling in the last values only.
A library with all the possible patterns may be loaded or created at program start. Every given symbol is then assigned a filtered set containing only those patterns that are consistent with the given clues.
In the last step, the actual backtracking part, patterns from these sets are combined (overlaid) in a non-conflicting way until the one permissible combination is hit upon.
The implementation is straightforward with bit vectors, because all of the tests require only bitwise logical operations instead of nested iterations across rows and columns.
Significant optimization can be achieved by reducing the sets of patterns even further during filtering: by testing every questionable pattern against all the reduced sets already accepted for the other symbols, the total number of patterns left for backtracking is greatly diminished.
As with all Sudoku brute-force techniques, run time can be vastly reduced by first applying some of the simplest solving practices, which may fill in some "easy" values.
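The 46656 single-symbol patterns can be enumerated directly: each pattern places nine copies of one symbol, one per row, column, and box. A sketch that generates them as 81-bit masks (the encoding is illustrative):

```python
def single_symbol_patterns():
    """Enumerate all placements of one symbol: 9 cells, one per row,
    column, and 3x3 box, each pattern encoded as an 81-bit mask."""
    patterns = []

    def place(row, cols_used, boxes_used, mask):
        if row == 9:
            patterns.append(mask)
            return
        for col in range(9):
            box = (row // 3) * 3 + col // 3
            if not (cols_used >> col) & 1 and not (boxes_used >> box) & 1:
                place(row + 1,
                      cols_used | (1 << col),
                      boxes_used | (1 << box),
                      mask | (1 << (row * 9 + col)))

    place(0, 0, 0, 0)
    return patterns

patterns = single_symbol_patterns()
print(len(patterns))  # 46656
```

Two patterns for different symbols are compatible exactly when `p & q == 0`, which is why bit vectors reduce the overlay tests to single bitwise operations.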
A Sudoku can be constructed to work against backtracking. Assuming the solver works from top to bottom (as in the animation), a puzzle with few clues (17), no clues in the top row, and a solution of "987654321" for the first row works in opposition to the algorithm: the program spends significant time "counting" upward before it arrives at the grid that satisfies the puzzle. In one case, a programmer found that a brute force program required six hours to arrive at the solution for such a Sudoku (albeit using a 2008-era computer). Such a Sudoku can nowadays be solved in less than one second using an exhaustive search routine and faster processors.
=== Stochastic search / optimization methods ===
Sudoku can be solved using stochastic (random-based) algorithms. An example of this method is to:
Randomly assign numbers to the blank cells in the grid.
Calculate the number of errors.
"Shuffle" the inserted numbers until the number of mistakes is reduced to zero.
A solution to the puzzle is then found. Approaches for shuffling the numbers include simulated annealing, genetic algorithm and tabu search. Stochastic-based algorithms are known to be fast, though perhaps not as fast as deductive techniques. Unlike the latter however, optimisation algorithms do not necessarily require problems to be logic-solvable, giving them the potential to solve a wider range of problems. Algorithms designed for graph colouring are also known to perform well with Sudokus. It is also possible to express a Sudoku as an integer linear programming problem. Such approaches get close to a solution quickly, and can then use branching towards the end. The simplex algorithm is able to solve proper Sudokus, indicating if the Sudoku is not valid (no solution). If there is more than one solution (non-proper Sudokus) the simplex algorithm will generally yield a solution with fractional amounts of more than one digit in some squares. However, for proper Sudokus, linear programming presolve techniques alone will deduce the solution without any need for simplex iterations. The logical rules used by presolve techniques for the reduction of LP problems include the set of logical rules used by humans to solve Sudokus.
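The "number of errors" in step 2 can be computed as a count of duplicate digits; a sketch of such a cost function for a completely filled grid (the shifted-rows grid used in the check is just a convenient known solution):

```python
from itertools import product

def cost(grid):
    """Count duplicate digits across all rows, columns, and boxes
    of a completely filled 9x9 grid; a valid solution has cost 0."""
    units = []
    units += [[grid[r][c] for c in range(9)] for r in range(9)]   # rows
    units += [[grid[r][c] for r in range(9)] for c in range(9)]   # columns
    units += [[grid[br + i][bc + j] for i, j in product(range(3), range(3))]
              for br in range(0, 9, 3) for bc in range(0, 9, 3)]  # boxes
    return sum(9 - len(set(u)) for u in units)

# A shifted-rows grid is a valid solution, so its cost is 0.
solved = [[(i * 3 + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]
print(cost(solved))  # 0
```

A simulated-annealing solver would repeatedly swap two non-clue cells and accept a cost-increasing swap with a temperature-dependent probability, stopping when the cost reaches zero.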
=== Constraint programming ===
A Sudoku may also be modelled as a constraint satisfaction problem. In his paper Sudoku as a Constraint Problem, Helmut Simonis describes many reasoning algorithms based on constraints which can be applied to model and solve problems. Some constraint solvers include a method to model and solve Sudokus, and a program may require fewer than 100 lines of code to solve a simple Sudoku. If the code employs a strong reasoning algorithm, incorporating backtracking is only needed for the most difficult Sudokus. An algorithm combining a constraint-model-based algorithm with backtracking would have the advantage of fast solving time – of the order of a few milliseconds – and the ability to solve all sudokus.
=== Exact cover ===
Sudoku puzzles may be described as an exact cover problem, or more precisely, an exact hitting set problem. This allows for an elegant description of the problem and an efficient solution. Modelling Sudoku as an exact cover problem and using an algorithm such as Knuth's Algorithm X and his Dancing Links technique "is the method of choice for rapid finding [measured in microseconds] of all possible solutions to Sudoku puzzles."
An alternative approach is the use of Gaussian elimination in combination with column and row striking.
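Algorithm X can be sketched compactly with dictionaries of sets standing in for the dancing links (a well-known formulation; the small universe below is Knuth's illustrative example, not a Sudoku):

```python
def solve(X, Y, solution=None):
    """Yield exact covers: X maps each constraint to the set of rows
    covering it, Y maps each row to the constraints it covers."""
    if solution is None:
        solution = []
    if not X:
        yield list(solution)
        return
    col = min(X, key=lambda c: len(X[c]))     # column with fewest candidates
    for row in list(X[col]):
        solution.append(row)
        removed = select(X, Y, row)
        yield from solve(X, Y, solution)
        deselect(X, Y, row, removed)
        solution.pop()

def select(X, Y, row):
    removed = []
    for j in Y[row]:
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].remove(i)
        removed.append(X.pop(j))
    return removed

def deselect(X, Y, row, removed):
    for j in reversed(Y[row]):
        X[j] = removed.pop()
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].add(i)

# Knuth's classic example: the unique exact cover is {B, D, F}.
Y = {'A': [1, 4, 7], 'B': [1, 4], 'C': [4, 5, 7],
     'D': [3, 5, 6], 'E': [2, 3, 6, 7], 'F': [2, 7]}
X = {j: {r for r in Y if j in Y[r]} for j in range(1, 8)}
print(sorted(next(solve(X, Y))))  # ['B', 'D', 'F']
```

For Sudoku, each row of Y would be a (row, column, digit) choice and each column of X one of the 324 constraints: cell filled, digit in row, digit in column, digit in box.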
=== Relations and residuals ===
Let Q be the 9×9 Sudoku matrix, N = {1, 2, 3, 4, 5, 6, 7, 8, 9}, and X represent a generic row, column, or block. N supplies symbols for filling Q as well as the index set for the 9 elements of any X. The given elements q in Q represent a univalent relation from Q to N. The solution R is a total relation and hence a function. Sudoku rules require that the restriction of R to X is a bijection, so any partial solution C, restricted to an X, is a partial permutation of N.
Let T = { X : X is a row, column, or block of Q }, so T has 27 elements. An arrangement is either a partial permutation or a permutation on N. Let Z be the set of all arrangements on N. A partial solution C can be reformulated to include the rules as a composition of relations A (one-to-three) and B requiring compatible arrangements:
{\displaystyle Q\xrightarrow {A} T\xrightarrow {B} Z\quad {\text{with}}\quad A;B\subseteq C.}
Solution of the puzzle, suggestions for new q to enter Q, come from the prohibited arrangements {\displaystyle {\bar {C}}}, the complement of C in Q×Z. Useful tools in the calculus of relations are residuals:
{\displaystyle A\backslash C={\overline {A^{T};{\bar {C}}}}}, which maps T to Z, and
{\displaystyle C/B={\overline {{\bar {C}};B^{T}}}}, which maps Q to T.
== See also ==
Sudoku
Mathematics of Sudoku
Combinatorial explosion (with summary of grid count of Sudoku compared to Latin squares)
Glossary of Sudoku
== References ==
== External links ==
http://diuf.unifr.ch/pai/people/juillera/Sudoku/Sudoku.html Sudoku Explainer by Nicolas Juillerat (Popular for rating Sudokus in general) Archived 2013-11-12 at the Wayback Machine
A Pencil-and-Paper Algorithm for Solving Sudoku Puzzles
Interactive evolutionary computation (IEC) or aesthetic selection is a general term for methods of evolutionary computation that use human evaluation. Usually human evaluation is necessary when the form of fitness function is not known (for example, visual appeal or attractiveness; as in Dawkins, 1986) or the result of optimization should fit a particular user preference (for example, taste of coffee or color set of the user interface).
== IEC design issues ==
The number of evaluations that IEC can receive from one human user is limited by user fatigue, which has been reported by many researchers as a major problem. In addition, human evaluations are slow and expensive compared to fitness function computation. Hence, one-user IEC methods should be designed to converge using a small number of evaluations, which necessarily implies very small populations. Several methods have been proposed by researchers to speed up convergence, like interactive constrained evolutionary search (user intervention) or fitting user preferences using a convex function. IEC human–computer interfaces should be carefully designed in order to reduce user fatigue. There is also evidence that the addition of computational agents can successfully counteract user fatigue.
However, IEC implementations that can concurrently accept evaluations from many users overcome the limitations described above. An example of this approach is an interactive media installation by Karl Sims that accepts preferences from many visitors, using floor sensors, to evolve attractive 3D animated forms. Some of these multi-user IEC implementations serve as collaboration tools, for example HBGA.
== IEC types ==
IEC methods include interactive evolution strategy, interactive genetic algorithm, interactive genetic programming, and human-based genetic algorithm.
=== IGA ===
An interactive genetic algorithm (IGA) is defined as a genetic algorithm that uses human evaluation. These algorithms belong to the more general category of interactive evolutionary computation. The main applications of these techniques include domains where it is hard or impossible to design a computational fitness function, for example, evolving images, music, various artistic designs, and forms to fit a user's aesthetic preferences. Interactive computation methods can use different representations, both linear (as in traditional genetic algorithms) and tree-like ones (as in genetic programming).
== See also ==
Evolutionary art
Human-based evolutionary computation
Human-based genetic algorithm
Human–computer interaction
Karl Sims
Electric Sheep
SCM-Synthetic Curriculum Modeling
User review
== References ==
Banzhaf, W. (1997), Interactive Evolution, Entry C2.9, in: Handbook of Evolutionary Computation, Oxford University Press, ISBN 978-0750308953
== External links ==
"EndlessForms.com, Collaborative interactive evolution allowing you to evolve 3D objects and have them 3D printed". Archived from the original on 2018-11-14. Retrieved 2011-06-18.
"Art by Evolution on the Web Interactive Art Generator". Archived from the original on 2018-04-15. Retrieved 2010-04-09.
"Facial composite system using interactive genetic algorithms".
"Galapagos by Karl Sims".
"E-volver".
"SBART, a program to evolve 2D images".
"GenJam (Genetic Jammer)".
"Evolutionary music".
"Darwin poetry". Archived from the original on 2006-04-12.
"Takagi Lab at Kyushu University".
"Interactive one-max problem allows to compare the performance of interactive and human-based genetic algorithms". Archived from the original on 2011-07-09. Retrieved 2006-12-03..
"Webpage that uses interactive evolutionary computation with a generative design algorithm to generate 2d images".
"Picbreeder service, Collaborative interactive evolution allowing branching from other users' creations that produces pictures like faces and spaceships". Archived from the original on 2011-07-25. Retrieved 2007-08-02.
"Peer to Peer IGA Using collaborative IGA sessions for floorplanning and document design".
Selection is a genetic operator in an evolutionary algorithm (EA). An EA is a metaheuristic inspired by biological evolution that aims to solve challenging problems at least approximately. Selection has a dual purpose: on the one hand, it can choose individual genomes from a population for subsequent breeding (e.g., using the crossover operator); on the other hand, selection mechanisms are used to choose the candidate solutions (individuals) for the next generation. The biological model is natural selection.
Retaining the best individual(s) of one generation unchanged in the next generation is called elitism or elitist selection. It is a successful (slight) variant of the general process of constructing a new population.
The basis for selection is the quality of an individual, which is determined by the fitness function. In memetic algorithms, an extension of EA, selection also takes place in the selection of those offspring that are to be improved with the help of a meme (e.g. a heuristic).
A selection procedure for breeding used early on may be implemented as follows:
The fitness values that have been computed (fitness function) are normalized, such that the sum of all resulting fitness values equals 1.
Accumulated normalized fitness values are computed: the accumulated fitness value of an individual is the sum of its own fitness value plus the fitness values of all the previous individuals; the accumulated fitness of the last individual should be 1, otherwise something went wrong in the normalization step.
A random number R between 0 and 1 is chosen.
The selected individual is the first one whose accumulated normalized value is greater than or equal to R.
For many problems the above algorithm might be computationally demanding. A simpler and faster alternative uses the so-called stochastic acceptance.
If this procedure is repeated until there are enough selected individuals, this selection method is called fitness proportionate selection or roulette-wheel selection. If instead of a single pointer spun multiple times, there are multiple, equally spaced pointers on a wheel that is spun once, it is called stochastic universal sampling.
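The four steps above can be sketched as follows (a sketch; fitness values are assumed non-negative with a positive sum, and scaling the random number by the total replaces the explicit normalization step):

```python
import random

def roulette_select(population, fitnesses):
    """Fitness proportionate (roulette-wheel) selection of one individual."""
    total = sum(fitnesses)
    r = random.random() * total           # equivalent to normalizing to 1
    accumulated = 0.0
    for individual, fitness in zip(population, fitnesses):
        accumulated += fitness
        if accumulated >= r:              # first individual reaching R
            return individual
    return population[-1]                 # guard against rounding error

random.seed(1)
picks = [roulette_select("abcd", [1, 2, 3, 94]) for _ in range(1000)]
# "d" holds 94% of the total fitness and dominates the sample
print(picks.count("d"), picks.count("a"))
```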
Repeatedly selecting the best individual of a randomly chosen subset is tournament selection. Taking the best half, third or another proportion of the individuals is truncation selection.
There are other selection algorithms that do not consider all individuals for selection, but only those with a fitness value that is higher than a given (arbitrary) constant. Other algorithms select from a restricted pool where only a certain percentage of the individuals are allowed, based on fitness value.
== Methods of selection ==
The listed methods differ mainly in their selection pressure, which can be set by a strategy parameter in the rank selection described below. The higher the selection pressure, the faster a population converges toward a certain solution, and the search space may not be explored sufficiently. This premature convergence can be counteracted by structuring the population appropriately. There is a close correlation between the population model used and a suitable selection pressure. If the pressure is too low, it must be expected that the population will not converge even after a long computing time. Further selection methods and more detail can be found in the literature.
=== Roulette wheel selection ===
In roulette wheel selection, the probability of choosing an individual for breeding of the next generation is proportional to its fitness: the better the fitness, the higher the chance of that individual being chosen.
Choosing individuals can be depicted as spinning a roulette wheel that has as many pockets as there are individuals in the current generation, with pocket sizes proportional to the individuals' selection probabilities.
The probability of choosing individual i is equal to {\displaystyle p_{i}={\frac {f_{i}}{\Sigma _{j=1}^{N}f_{j}}}}, where f_i is the fitness of i and N is the size of the current generation (note that in this method one individual can be drawn multiple times).
==== Stochastic universal sampling ====
Stochastic universal sampling is a development of roulette wheel selection with minimal spread and no bias.
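The single pointer spun repeatedly is replaced by n equally spaced pointers and one spin; a sketch (the population and fitness values are illustrative):

```python
import random

def sus_select(population, fitnesses, n):
    """Stochastic universal sampling: n equally spaced pointers, one spin."""
    total = sum(fitnesses)
    step = total / n                       # distance between pointers
    start = random.random() * step         # the single spin
    pointers = [start + i * step for i in range(n)]
    chosen, accumulated, i = [], 0.0, 0
    for individual, fitness in zip(population, fitnesses):
        accumulated += fitness
        while i < n and pointers[i] < accumulated:
            chosen.append(individual)
            i += 1
    return chosen

random.seed(2)
chosen = sus_select("abcd", [10, 10, 10, 70], 10)
# "d" holds 70% of the fitness and receives exactly 7 of the 10 slots,
# regardless of the spin: this is the "minimal spread, no bias" property.
print(chosen)
```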
=== Rank selection ===
In rank selection, the probability for selection does not depend directly on the fitness, but on the fitness rank of an individual within the population. The exact fitness values themselves do not have to be available, but only a sorting of the individuals according to quality.
In addition to the adjustable selection pressure, an advantage of rank-based selection can be seen in the fact that it also gives worse individuals a chance to reproduce and thus to improve. This can be particularly helpful in applications with restrictions, since it facilitates the overcoming of a restriction in several intermediate steps, i.e. via a sequence of several individuals rated poorly due to restriction violations.
==== Linear rank selection ====
Linear ranking, which goes back to Baker, is often used. It allows the selection pressure to be set by the parameter sp, which can take values between 1.0 (no selection pressure) and 2.0 (high selection pressure). The probability P for the n rank positions R_i is obtained as follows:
{\displaystyle P(R_{i})={\frac {1}{n}}{\Bigl (}sp-(2sp-2){\frac {i-1}{n-1}}{\Bigr )}\quad \quad 1\leq i\leq n,\quad 1\leq sp\leq 2\quad {\mathsf {with}}\quad P(R_{i})\geq 0,\quad \sum _{i=1}^{n}P(R_{i})=1}
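The formula can be implemented directly (a sketch; rank 1 denotes the best individual, and n must be at least 2):

```python
def linear_rank_probabilities(n, sp):
    """Selection probability for each rank i = 1..n (rank 1 = best),
    with selection pressure sp between 1.0 and 2.0."""
    assert n >= 2 and 1.0 <= sp <= 2.0
    return [(sp - (2 * sp - 2) * (i - 1) / (n - 1)) / n
            for i in range(1, n + 1)]

p = linear_rank_probabilities(5, 1.5)
print(p)  # highest probability for rank 1, decreasing linearly; sums to 1
```

With sp = 1.0 every rank receives the same probability 1/n, i.e. no selection pressure.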
Another definition for the probability P of rank position i is:
{\displaystyle P(i)={\frac {2*(n-i+1)}{n*(n+1)}}}
==== Exponential rank selection ====
Exponential rank selection is defined as follows:
{\displaystyle P(i)={\frac {w^{n-i}}{\sum _{k=1}^{n}{w^{n-k}}}},\quad 0\leq w\leq 1}
=== Steady state selection ===
In every generation, a few chromosomes with high fitness are selected for creating new offspring. Then some chromosomes with low fitness are removed, and the new offspring are placed in their stead. The rest of the population survives into the new generation.
=== Tournament selection ===
Tournament selection repeatedly chooses the best individual from a randomly drawn subset (a tournament) of the population. The winner of each tournament is selected to perform crossover.
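A sketch (the tournament size k = 3 is an illustrative choice, not a fixed part of the method):

```python
import random

def tournament_select(population, fitnesses, k=3):
    """Draw k distinct individuals at random and return the fittest of them."""
    contenders = random.sample(range(len(population)), k)
    best = max(contenders, key=lambda i: fitnesses[i])
    return population[best]

random.seed(3)
pop, fit = ["a", "b", "c", "d"], [1, 2, 3, 4]
winners = [tournament_select(pop, fit, k=3) for _ in range(1000)]
# "a", the least fit of four, can never win a 3-way tournament
print(winners.count("a"))  # 0
```

The tournament size k tunes the selection pressure: larger tournaments make it harder for weak individuals to win.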
=== Truncation selection ===
For truncation selection, individuals are sorted according to their fitness and a portion (typically 10% to 50%) of the top individuals is selected for the next generation.
=== Elitist selection ===
Often to get better results, strategies with partial reproduction are used. One of them is elitism, in which a small portion of the best individuals from the last generation is carried over (without any changes) to the next one.
=== Boltzmann selection ===
In Boltzmann selection, a continuously varying temperature controls the rate of selection according to a preset schedule. The temperature starts out high, which means that the selection pressure is low. The temperature is gradually lowered, which gradually increases the selection pressure, thereby allowing the GA to narrow in more closely to the best part of the search space while maintaining the appropriate degree of diversity.
== See also ==
Natural selection
Sexual selection
== References ==
== External links ==
Introduction to Genetic Algorithms
An outline of implementation of the stochastic-acceptance version
The genetic algorithm is an operational research method that may be used to solve scheduling problems in production planning.
== Importance of production scheduling ==
To be competitive, corporations must minimize inefficiencies and maximize productivity. In manufacturing, productivity is inherently linked to how well the firm can optimize the available resources, reduce waste and increase efficiency. Finding the best way to maximize efficiency in a manufacturing process can be extremely complex. Even on simple projects, there are multiple inputs, multiple steps, many constraints and limited resources. In general a resource constrained scheduling problem consists of:
A set of jobs that must be executed
A finite set of resources that can be used to complete each job
A set of constraints that must be satisfied
Temporal constraints – the time window to complete the task
Procedural constraints – the order each task must be completed
Resource constraints – whether the resource is available
A set of objectives to evaluate the scheduling performance
A typical factory floor setting is a good example of this, where it is necessary to schedule which jobs need to be completed on which machines, by which employees, in what order and at what time.
== Use of algorithms in scheduling ==
In very complex problems such as scheduling, there is no known way to compute an optimal answer directly, so we resort to search, trying to find a "good" answer. Scheduling problems most often use heuristic algorithms to search for the optimal solution. Heuristic search methods suffer as the inputs become more complex and varied. This type of problem is known in computer science as NP-hard, which means that there are no known algorithms for finding an optimal solution in polynomial time.
Genetic algorithms are well suited to solving production scheduling problems because, unlike heuristic methods, they operate on a population of solutions rather than a single solution. In production scheduling, this population of solutions consists of many answers that may have different, sometimes conflicting, objectives. For example, one solution may optimize a production process for minimal time, while another optimizes for a minimal number of defects. Cranking up the speed of production may cause an increase in defects in the final product.
As we increase the number of objectives we are trying to achieve we also increase the number of constraints on the problem and similarly increase the complexity. Genetic algorithms are ideal for these types of problems where the search space is large and the number of feasible solutions is small.
== Application of a genetic algorithm ==
To apply a genetic algorithm to a scheduling problem we must first represent it as a genome. One way to represent a scheduling genome is to define a sequence of tasks and the start times of those tasks relative to one another. Each task and its corresponding start time represents a gene.
A specific sequence of tasks and start times (genes) represents one genome in our population. To make sure that our genome is a feasible solution we must take care that it obeys our precedence constraints. We generate an initial population using random start times within the precedence constraints. With genetic algorithms we then take this initial population and cross it, combining genomes along with a small amount of randomness (mutation). The offspring of this combination is selected based on a fitness function that includes one or many of our constraints, such as minimizing time and minimizing defects. We let this process continue either for a pre-allotted time or until we find a solution that fits our minimum criteria. Overall each successive generation will have a greater average fitness, i.e. taking less time with higher quality than the preceding generations. In scheduling problems, as with other genetic algorithm solutions, we must make sure that we do not select offspring that are infeasible, such as offspring that violate our precedence constraint. We of course may have to add further fitness values such as minimizing costs; however, each constraint added greatly increases the search space and lowers the number of solutions that are good matches.
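As an illustration of this representation, the sketch below uses made-up tasks, durations, and precedence constraints. The genes are proposed start times, and the decoder delays each task until its predecessors have finished, so every genome decodes to a feasible schedule; fitness is the makespan (total completion time). A full GA would add crossover; here a tiny mutation-only loop stands in for the evolutionary search:

```python
import random

# Illustrative jobs: (duration, predecessors that must finish first).
TASKS = {"cut": (3, []), "weld": (4, ["cut"]),
         "inspect": (2, ["weld"]), "paint": (2, ["weld"]),
         "pack": (1, ["paint", "inspect"])}
ORDER = ["cut", "weld", "inspect", "paint", "pack"]  # topological order

def decode(genome):
    """Turn proposed start times (the genes) into a feasible schedule by
    delaying each task until all of its predecessors have finished."""
    start = {}
    for task in ORDER:
        duration, preds = TASKS[task]
        earliest = max((start[p] + TASKS[p][0] for p in preds), default=0)
        start[task] = max(genome[task], earliest)
    return start

def makespan(genome):
    start = decode(genome)
    return max(start[t] + TASKS[t][0] for t in TASKS)

def mutate(genome):
    child = dict(genome)
    t = random.choice(ORDER)
    child[t] = max(0, child[t] + random.randint(-2, 2))
    return child

random.seed(4)
best = {t: random.randint(0, 10) for t in ORDER}
for _ in range(500):                       # tiny mutation-only search loop
    child = mutate(best)
    if makespan(child) <= makespan(best):  # fitness: minimize the makespan
        best = child
print(makespan(best))  # converges toward the critical-path length of 10
```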
== See also ==
Genetic algorithm in economics
Job shop scheduling
Quality control and genetic algorithms
== Bibliography ==
Wall, M., A Genetic Algorithm for Resource-Constrained Scheduling (PDF)
Lim, C.; Sim, E., Production Planning in Manufacturing/Remanufacturing Environment using Genetic Algorithm (PDF)
== External links ==
Demo applet of a genetic algorithm solving TSPs and VRPTW problems | Wikipedia/Genetic_algorithm_scheduling |
Estimation of distribution algorithms (EDAs), sometimes called probabilistic model-building genetic algorithms (PMBGAs), are stochastic optimization methods that guide the search for the optimum by building and sampling explicit probabilistic models of promising candidate solutions. Optimization is viewed as a series of incremental updates of a probabilistic model, starting with the model encoding an uninformative prior over admissible solutions and ending with the model that generates only the global optima.
EDAs belong to the class of evolutionary algorithms. The main difference between EDAs and most conventional evolutionary algorithms is that evolutionary algorithms generate new candidate solutions using an implicit distribution defined by one or more variation operators, whereas EDAs use an explicit probability distribution encoded by a Bayesian network, a multivariate normal distribution, or another model class. Similarly as other evolutionary algorithms, EDAs can be used to solve optimization problems defined over a number of representations from vectors to LISP style S expressions, and the quality of candidate solutions is often evaluated using one or more objective functions.
The general procedure of an EDA is outlined in the following:
t := 0
initialize model M(0) to represent uniform distribution over admissible solutions
while (termination criteria not met) do
P := generate N>0 candidate solutions by sampling M(t)
F := evaluate all candidate solutions in P
M(t + 1) := adjust_model(P, F, M(t))
t := t + 1
Using explicit probabilistic models in optimization allowed EDAs to feasibly solve optimization problems that were notoriously difficult for most conventional evolutionary algorithms and traditional optimization techniques, such as problems with high levels of epistasis. Nonetheless, the advantage of EDAs is also that these algorithms provide an optimization practitioner with a series of probabilistic models that reveal a lot of information about the problem being solved. This information can in turn be used to design problem-specific neighborhood operators for local search, to bias future runs of EDAs on a similar problem, or to create an efficient computational model of the problem.
For example, if the population is represented by bit strings of length 4, the EDA can represent the population of promising solution using a single vector of four probabilities (p1, p2, p3, p4) where each component of p defines the probability of that position being a 1. Using this probability vector it is possible to create an arbitrary number of candidate solutions.
== Estimation of distribution algorithms (EDAs) ==
This section describes the models built by some well known EDAs of different levels of complexity. It is always assumed a population
P
(
t
)
{\displaystyle P(t)}
at the generation
t
{\displaystyle t}
, a selection operator
S
{\displaystyle S}
, a model-building operator
α
{\displaystyle \alpha }
and a sampling operator
β
{\displaystyle \beta }
.
== Univariate factorizations ==
The most simple EDAs assume that decision variables are independent, i.e.
p
(
X
1
,
X
2
)
=
p
(
X
1
)
⋅
p
(
X
2
)
{\displaystyle p(X_{1},X_{2})=p(X_{1})\cdot p(X_{2})}
. Therefore, univariate EDAs rely only on univariate statistics and multivariate distributions must be factorized as the product of
N
{\displaystyle N}
univariate probability distributions,
D
Univariate
:=
p
(
X
1
,
…
,
X
N
)
=
∏
i
=
1
N
p
(
X
i
)
.
{\displaystyle D_{\text{Univariate}}:=p(X_{1},\dots ,X_{N})=\prod _{i=1}^{N}p(X_{i}).}
Such factorizations are used in many different EDAs, next we describe some of them.
=== Univariate marginal distribution algorithm (UMDA) ===
The UMDA is a simple EDA that uses an operator
α
U
M
D
A
{\displaystyle \alpha _{UMDA}}
to estimate marginal probabilities from a selected population
S
(
P
(
t
)
)
{\displaystyle S(P(t))}
. By assuming
S
(
P
(
t
)
)
{\displaystyle S(P(t))}
contain
λ
{\displaystyle \lambda }
elements,
α
U
M
D
A
{\displaystyle \alpha _{UMDA}}
produces probabilities:
p
t
+
1
(
X
i
)
=
1
λ
∑
x
∈
S
(
P
(
t
)
)
x
i
,
∀
i
∈
1
,
2
,
…
,
N
.
{\displaystyle p_{t+1}(X_{i})={\dfrac {1}{\lambda }}\sum _{x\in S(P(t))}x_{i},~\forall i\in 1,2,\dots ,N.}
Every UMDA step can be described as follows
D
(
t
+
1
)
=
α
UMDA
∘
S
∘
β
λ
(
D
(
t
)
)
.
{\displaystyle D(t+1)=\alpha _{\text{UMDA}}\circ S\circ \beta _{\lambda }(D(t)).}
=== Population-based incremental learning (PBIL) ===
The PBIL, represents the population implicitly by its model, from which it samples new solutions and updates the model. At each generation,
μ
{\displaystyle \mu }
individuals are sampled and
λ
≤
μ
{\displaystyle \lambda \leq \mu }
are selected. Such individuals are then used to update the model as follows
p
t
+
1
(
X
i
)
=
(
1
−
γ
)
p
t
(
X
i
)
+
(
γ
/
λ
)
∑
x
∈
S
(
P
(
t
)
)
x
i
,
∀
i
∈
1
,
2
,
…
,
N
,
{\displaystyle p_{t+1}(X_{i})=(1-\gamma )p_{t}(X_{i})+(\gamma /\lambda )\sum _{x\in S(P(t))}x_{i},~\forall i\in 1,2,\dots ,N,}
where
γ
∈
(
0
,
1
]
{\displaystyle \gamma \in (0,1]}
is a parameter defining the learning rate, a small value determines that the previous model
p
t
(
X
i
)
{\displaystyle p_{t}(X_{i})}
should be only slightly modified by the new solutions sampled. PBIL can be described as
D
(
t
+
1
)
=
α
PIBIL
∘
S
∘
β
μ
(
D
(
t
)
)
{\displaystyle D(t+1)=\alpha _{\text{PIBIL}}\circ S\circ \beta _{\mu }(D(t))}
=== Compact genetic algorithm (cGA) ===
The CGA, also relies on the implicit populations defined by univariate distributions. At each generation
t
{\displaystyle t}
, two individuals
x
,
y
{\displaystyle x,y}
are sampled,
P
(
t
)
=
β
2
(
D
(
t
)
)
{\displaystyle P(t)=\beta _{2}(D(t))}
. The population
P
(
t
)
{\displaystyle P(t)}
is then sorted in decreasing order of fitness,
S
Sort
(
f
)
(
P
(
t
)
)
{\displaystyle S_{{\text{Sort}}(f)}(P(t))}
, with
u
{\displaystyle u}
being the best and
v
{\displaystyle v}
being the worst solution. The CGA estimates univariate probabilities as follows
p
t
+
1
(
X
i
)
=
p
t
(
X
i
)
+
γ
(
u
i
−
v
i
)
,
∀
i
∈
1
,
2
,
…
,
N
,
{\displaystyle p_{t+1}(X_{i})=p_{t}(X_{i})+\gamma (u_{i}-v_{i}),\quad \forall i\in 1,2,\dots ,N,}
where,
γ
∈
(
0
,
1
]
{\displaystyle \gamma \in (0,1]}
is a constant defining the learning rate, usually set to
γ
=
1
/
N
{\displaystyle \gamma =1/N}
. The CGA can be defined as
D
(
t
+
1
)
=
α
CGA
∘
S
Sort
(
f
)
∘
β
2
(
D
(
t
)
)
{\displaystyle D(t+1)=\alpha _{\text{CGA}}\circ S_{{\text{Sort}}(f)}\circ \beta _{2}(D(t))}
== Bivariate factorizations ==
Although univariate models can be computed efficiently, in many cases they are not representative enough to provide better performance than GAs. In order to overcome such a drawback, the use of bivariate factorizations was proposed in the EDA community, in which dependencies between pairs of variables could be modeled. A bivariate factorization can be defined as follows, where
π
i
{\displaystyle \pi _{i}}
contains a possible variable dependent to
X
i
{\displaystyle X_{i}}
, i.e.
|
π
i
|
=
1
{\displaystyle |\pi _{i}|=1}
.
D
Bivariate
:=
p
(
X
1
,
…
,
X
N
)
=
∏
i
=
1
N
p
(
X
i
|
π
i
)
.
{\displaystyle D_{\text{Bivariate}}:=p(X_{1},\dots ,X_{N})=\prod _{i=1}^{N}p(X_{i}|\pi _{i}).}
Bivariate and multivariate distributions are usually represented as probabilistic graphical models (graphs), in which edges denote statistical dependencies (or conditional probabilities) and vertices denote variables. To learn the structure of a PGM from data linkage-learning is employed.
=== Mutual information maximizing input clustering (MIMIC) ===
The MIMIC factorizes the joint probability distribution in a chain-like model representing successive dependencies between variables. It finds a permutation of the decision variables,
r
:
i
↦
j
{\displaystyle r:i\mapsto j}
, such that
x
r
(
1
)
x
r
(
2
)
,
…
,
x
r
(
N
)
{\displaystyle x_{r(1)}x_{r(2)},\dots ,x_{r(N)}}
minimizes the Kullback-Leibler divergence in relation to the true probability distribution, i.e.
π
r
(
i
+
1
)
=
{
X
r
(
i
)
}
{\displaystyle \pi _{r(i+1)}=\{X_{r(i)}\}}
. MIMIC models a distribution
p
t
+
1
(
X
1
,
…
,
X
N
)
=
p
t
(
X
r
(
N
)
)
∏
i
=
1
N
−
1
p
t
(
X
r
(
i
)
|
X
r
(
i
+
1
)
)
.
{\displaystyle p_{t+1}(X_{1},\dots ,X_{N})=p_{t}(X_{r(N)})\prod _{i=1}^{N-1}p_{t}(X_{r(i)}|X_{r(i+1)}).}
New solutions are sampled from the leftmost to the rightmost variable, the first is generated independently and the others according to conditional probabilities. Since the estimated distribution must be recomputed each generation, MIMIC uses concrete populations in the following way
P
(
t
+
1
)
=
β
μ
∘
α
MIMIC
∘
S
(
P
(
t
)
)
.
{\displaystyle P(t+1)=\beta _{\mu }\circ \alpha _{\text{MIMIC}}\circ S(P(t)).}
=== Bivariate marginal distribution algorithm (BMDA) ===
The BMDA factorizes the joint probability distribution in bivariate distributions. First, a randomly chosen variable is added as a node in a graph, the most dependent variable to one of those in the graph is chosen among those not yet in the graph, this procedure is repeated until no remaining variable depends on any variable in the graph (verified according to a threshold value).
The resulting model is a forest with multiple trees rooted at nodes
Υ
t
{\displaystyle \Upsilon _{t}}
. Considering
I
t
{\displaystyle I_{t}}
the non-root variables, BMDA estimates a factorized distribution in which the root variables can be sampled independently, whereas all the others must be conditioned to the parent variable
π
i
{\displaystyle \pi _{i}}
.
p
t
+
1
(
X
1
,
…
,
X
N
)
=
∏
X
i
∈
Υ
t
p
t
(
X
i
)
⋅
∏
X
i
∈
I
t
p
t
(
X
i
|
π
i
)
.
{\displaystyle p_{t+1}(X_{1},\dots ,X_{N})=\prod _{X_{i}\in \Upsilon _{t}}p_{t}(X_{i})\cdot \prod _{X_{i}\in I_{t}}p_{t}(X_{i}|\pi _{i}).}
Each step of BMDA is defined as follows
P
(
t
+
1
)
=
β
μ
∘
α
BMDA
∘
S
(
P
(
t
)
)
.
{\displaystyle P(t+1)=\beta _{\mu }\circ \alpha _{\text{BMDA}}\circ S(P(t)).}
== Multivariate factorizations ==
The next stage of EDAs development was the use of multivariate factorizations. In this case, the joint probability distribution is usually factorized in a number of components of limited size
|
π
i
|
≤
K
,
∀
i
∈
1
,
2
,
…
,
N
{\displaystyle |\pi _{i}|\leq K,~\forall i\in 1,2,\dots ,N}
.
p
(
X
1
,
…
,
X
N
)
=
∏
i
=
1
N
p
(
X
i
|
π
i
)
{\displaystyle p(X_{1},\dots ,X_{N})=\prod _{i=1}^{N}p(X_{i}|\pi _{i})}
The learning of PGMs encoding multivariate distributions is a computationally expensive task, therefore, it is usual for EDAs to estimate multivariate statistics from bivariate statistics. Such relaxation allows PGM to be built in polynomial time in
N
{\displaystyle N}
; however, it also limits the generality of such EDAs.
=== Extended compact genetic algorithm (eCGA) ===
The ECGA was one of the first EDA to employ multivariate factorizations, in which high-order dependencies among decision variables can be modeled. Its approach factorizes the joint probability distribution in the product of multivariate marginal distributions. Assume
T
eCGA
=
{
τ
1
,
…
,
τ
Ψ
}
{\displaystyle T_{\text{eCGA}}=\{\tau _{1},\dots ,\tau _{\Psi }\}}
is a set of subsets, in which every
τ
∈
T
eCGA
{\displaystyle \tau \in T_{\text{eCGA}}}
is a linkage set, containing
|
τ
|
≤
K
{\displaystyle |\tau |\leq K}
variables. The factorized joint probability distribution is represented as follows
p
(
X
1
,
…
,
X
N
)
=
∏
τ
∈
T
eCGA
p
(
τ
)
.
{\displaystyle p(X_{1},\dots ,X_{N})=\prod _{\tau \in T_{\text{eCGA}}}p(\tau ).}
The ECGA popularized the term "linkage-learning" as denoting procedures that identify linkage sets. Its linkage-learning procedure relies on two measures: (1) the Model Complexity (MC) and (2) the Compressed Population Complexity (CPC). The MC quantifies the model representation size in terms of number of bits required to store all the marginal probabilities
M
C
=
log
2
(
λ
+
1
)
∑
τ
∈
T
eCGA
(
2
|
τ
|
−
1
)
,
{\displaystyle MC=\log _{2}(\lambda +1)\sum _{\tau \in T_{\text{eCGA}}}(2^{|\tau |-1}),}
The CPC, on the other hand, quantifies the data compression in terms of entropy of the marginal distribution over all partitions, where
λ
{\displaystyle \lambda }
is the selected population size,
|
τ
|
{\displaystyle |\tau |}
is the number of decision variables in the linkage set
τ
{\displaystyle \tau }
and
H
(
τ
)
{\displaystyle H(\tau )}
is the joint entropy of the variables in
τ
{\displaystyle \tau }
C
P
C
=
λ
∑
τ
∈
T
eCGA
H
(
τ
)
.
{\displaystyle CPC=\lambda \sum _{\tau \in T_{\text{eCGA}}}H(\tau ).}
The linkage-learning in ECGA works as follows: (1) Insert each variable in a cluster, (2) compute CCC = MC + CPC of the current linkage sets, (3) verify the increase on CCC provided by joining pairs of clusters, (4) effectively joins those clusters with highest CCC improvement. This procedure is repeated until no CCC improvements are possible and produces a linkage model
T
eCGA
{\displaystyle T_{\text{eCGA}}}
. The ECGA works with concrete populations, therefore, using the factorized distribution modeled by ECGA, it can be described as
P
(
t
+
1
)
=
β
μ
∘
α
eCGA
∘
S
(
P
(
t
)
)
{\displaystyle P(t+1)=\beta _{\mu }\circ \alpha _{\text{eCGA}}\circ S(P(t))}
=== Bayesian optimization algorithm (BOA) ===
The BOA uses Bayesian networks to model and sample promising solutions. Bayesian networks are directed acyclic graphs, with nodes representing variables and edges representing conditional probabilities between pair of variables. The value of a variable
x
i
{\displaystyle x_{i}}
can be conditioned on a maximum of
K
{\displaystyle K}
other variables, defined in
π
i
{\displaystyle \pi _{i}}
. BOA builds a PGM encoding a factorized joint distribution, in which the parameters of the network, i.e. the conditional probabilities, are estimated from the selected population using the maximum likelihood estimator.
p
(
X
1
,
X
2
,
…
,
X
N
)
=
∏
i
=
1
N
p
(
X
i
|
π
i
)
.
{\displaystyle p(X_{1},X_{2},\dots ,X_{N})=\prod _{i=1}^{N}p(X_{i}|\pi _{i}).}
The Bayesian network structure, on the other hand, must be built iteratively (linkage-learning). It starts with a network without edges and, at each step, adds the edge which better improves some scoring metric (e.g. Bayesian information criterion (BIC) or Bayesian-Dirichlet metric with likelihood equivalence (BDe)). The scoring metric evaluates the network structure according to its accuracy in modeling the selected population. From the built network, BOA samples new promising solutions as follows: (1) it computes the ancestral ordering for each variable, each node being preceded by its parents; (2) each variable is sampled conditionally to its parents. Given such scenario, every BOA step can be defined as
P
(
t
+
1
)
=
β
μ
∘
α
BOA
∘
S
(
P
(
t
)
)
{\displaystyle P(t+1)=\beta _{\mu }\circ \alpha _{\text{BOA}}\circ S(P(t))}
=== Linkage-tree Genetic Algorithm (LTGA) ===
The LTGA differs from most EDA in the sense it does not explicitly model a probability distribution but only a linkage model, called linkage-tree. A linkage
T
{\displaystyle T}
is a set of linkage sets with no probability distribution associated, therefore, there is no way to sample new solutions directly from
T
{\displaystyle T}
. The linkage model is a linkage-tree produced stored as a Family of sets (FOS).
T
LT
=
{
{
x
1
}
,
{
x
2
}
,
{
x
3
}
,
{
x
4
}
,
{
x
1
,
x
2
}
,
{
x
3
,
x
4
}
}
.
{\displaystyle T_{\text{LT}}=\{\{x_{1}\},\{x_{2}\},\{x_{3}\},\{x_{4}\},\{x_{1},x_{2}\},\{x_{3},x_{4}\}\}.}
The linkage-tree learning procedure is a hierarchical clustering algorithm, which work as follows. At each step the two closest clusters
i
{\displaystyle i}
and
j
{\displaystyle j}
are merged, this procedure repeats until only one cluster remains, each subtree is stored as a subset
τ
∈
T
LT
{\displaystyle \tau \in T_{\text{LT}}}
.
The LTGA uses
T
LT
{\displaystyle T_{\text{LT}}}
to guide an "optimal mixing" procedure which resembles a recombination operator but only accepts improving moves. We denote it as
R
LTGA
{\displaystyle R_{\text{LTGA}}}
, where the notation
x
[
τ
]
←
y
[
τ
]
{\displaystyle x[\tau ]\gets y[\tau ]}
indicates the transfer of the genetic material indexed by
τ
{\displaystyle \tau }
from
y
{\displaystyle y}
to
x
{\displaystyle x}
.
The LTGA does not implement typical selection operators, instead, selection is performed during recombination. Similar ideas have been usually applied into local-search heuristics and, in this sense, the LTGA can be seen as an hybrid method. In summary, one step of the LTGA is defined as
P
(
t
+
1
)
=
R
LTGA
(
P
(
t
)
)
∘
α
LTGA
(
P
(
t
)
)
{\displaystyle P(t+1)=R_{\text{LTGA}}(P(t))\circ \alpha _{\text{LTGA}}(P(t))}
== Other ==
Probability collectives (PC)
Hill climbing with learning (HCwL)
Estimation of multivariate normal algorithm (EMNA)
Estimation of Bayesian networks algorithm (EBNA)
Stochastic hill climbing with learning by vectors of normal distributions (SHCLVND)
Real-coded PBIL
Selfish Gene Algorithm (SG)
Compact Differential Evolution (cDE) and its variants
Compact Particle Swarm Optimization (cPSO)
Compact Bacterial Foraging Optimization (cBFO)
Probabilistic incremental program evolution (PIPE)
Estimation of Gaussian networks algorithm (EGNA)
Estimation multivariate normal algorithm with thresheld convergence
Dependency Structure Matrix Genetic Algorithm (DSMGA)
== Related ==
CMA-ES
Cross-entropy method
Ant colony optimization algorithms
== References == | Wikipedia/Estimation_of_Distribution_Algorithm |
Neural coding (or neural representation) is a neuroscience field concerned with characterising the hypothetical relationship between stimuli and neuronal responses, and the relationships among the electrical activities of the neurons in an ensemble. Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is believed that neurons can encode both digital and analog information.
== Overview ==
Neurons have an ability uncommon among the cells of the body to propagate signals rapidly over large distances by generating characteristic electrical pulses called action potentials: voltage spikes that can travel down axons. Sensory neurons change their activities by firing sequences of action potentials in various temporal patterns in response to external sensory stimuli, such as light, sound, taste, smell and touch. Information about the stimulus is encoded in this pattern of action potentials and transmitted into and around the brain. Beyond this, specialized neurons, such as those of the retina, can communicate more information through graded potentials. These differ from action potentials because information about the strength of a stimulus directly correlates with the strength of the neurons' output. The signal decays much faster for graded potentials, necessitating short inter-neuron distances and high neuronal density. The advantage of graded potentials is higher information rates capable of encoding more states (i.e. higher fidelity) than spiking neurons.
Although action potentials can vary somewhat in duration, amplitude and shape, they are typically treated as identical stereotyped events in neural coding studies. If the brief duration of an action potential (about 1 ms) is ignored, an action potential sequence, or spike train, can be characterized simply by a series of all-or-none point events in time. The lengths of interspike intervals (ISIs) between two successive spikes in a spike train often vary, apparently randomly. The study of neural coding involves measuring and characterizing how stimulus attributes, such as light or sound intensity, or motor actions, such as the direction of an arm movement, are represented by neuron action potentials or spikes. In order to describe and analyze neuronal firing, statistical methods and methods of probability theory and stochastic point processes have been widely applied.
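The interspike-interval characterization described above can be sketched numerically. This is a minimal illustration with made-up spike times; the values and variable names are illustrative, not taken from any cited experiment:

```python
import numpy as np

# Hypothetical spike times (in ms) for one trial of a recorded neuron.
spike_times = np.array([2.1, 9.8, 14.3, 30.0, 31.5, 52.7])

# Interspike intervals (ISIs): differences between successive spike times.
isis = np.diff(spike_times)

# Simple point-process statistics often used to characterize a spike train:
mean_isi = isis.mean()
cv_isi = isis.std() / mean_isi  # coefficient of variation; near 1 for a Poisson-like train
```

The coefficient of variation of the ISIs is one common way to quantify the "apparently random" variability of interval lengths mentioned above.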
With the development of large-scale neural recording and decoding technologies, researchers have begun to crack the neural code and have already provided the first glimpse into the real-time neural code as memory is formed and recalled in the hippocampus, a brain region known to be central for memory formation. Neuroscientists have initiated several large-scale brain decoding projects.
== Encoding and decoding ==
The link between stimulus and response can be studied from two opposite points of view. Neural encoding refers to the map from stimulus to response. The main focus is to understand how neurons respond to a wide variety of stimuli, and to construct models that attempt to predict responses to other stimuli. Neural decoding refers to the reverse map, from response to stimulus, and the challenge is to reconstruct a stimulus, or certain aspects of that stimulus, from the spike sequences it evokes.
== Hypothesized coding schemes ==
A sequence, or 'train', of spikes may contain information based on different coding schemes. In some neurons the strength with which a postsynaptic partner responds may depend solely on the 'firing rate', the average number of spikes per unit time (a 'rate code'). At the other end, a complex 'temporal code' is based on the precise timing of single spikes. They may be locked to an external stimulus such as in the visual and auditory system or be generated intrinsically by the neural circuitry.
Whether neurons use rate coding or temporal coding is a topic of intense debate within the neuroscience community, even though there is no clear definition of what these terms mean.
=== Rate code ===
The rate coding model of neuronal firing communication states that as the intensity of a stimulus increases, the frequency or rate of action potentials, or "spike firing", increases. Rate coding is sometimes called frequency coding.
Rate coding is a traditional coding scheme, assuming that most, if not all, information about the stimulus is contained in the firing rate of the neuron. Because the sequence of action potentials generated by a given stimulus varies from trial to trial, neuronal responses are typically treated statistically or probabilistically. They may be characterized by firing rates, rather than as specific spike sequences. In most sensory systems, the firing rate increases, generally non-linearly, with increasing stimulus intensity. Under a rate coding assumption, any information possibly encoded in the temporal structure of the spike train is ignored. Consequently, rate coding is inefficient but highly robust with respect to the ISI 'noise'.
During rate coding, precisely calculating the firing rate is very important. In fact, the term "firing rate" has a few different definitions, which refer to different averaging procedures, such as an average over time (rate as a single-neuron spike count) or an average over several repetitions of the experiment (rate of the PSTH).
In rate coding, learning is based on activity-dependent synaptic weight modifications.
Rate coding was originally shown by Edgar Adrian and Yngve Zotterman in 1926. In this simple experiment different weights were hung from a muscle. As the weight of the stimulus increased, the number of spikes recorded from sensory nerves innervating the muscle also increased. From these original experiments, Adrian and Zotterman concluded that action potentials were unitary events, and that the frequency of events, and not individual event magnitude, was the basis for most inter-neuronal communication.
In the following decades, measurement of firing rates became a standard tool for describing the properties of all types of sensory or cortical neurons, partly due to the relative ease of measuring rates experimentally. However, this approach neglects all the information possibly contained in the exact timing of the spikes. During recent years, more and more experimental evidence has suggested that a straightforward firing rate concept based on temporal averaging may be too simplistic to describe brain activity.
==== Spike-count rate (average over time) ====
The spike-count rate, also referred to as temporal average, is obtained by counting the number of spikes that appear during a trial and dividing by the duration of the trial. The length T of the time window is set by the experimenter and depends on the type of neuron recorded from and on the stimulus. In practice, to get sensible averages, several spikes should occur within the time window. Typical values are T = 100 ms or T = 500 ms, but the duration may also be longer or shorter (Chapter 1.5 in the textbook 'Spiking Neuron Models').
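The spike-count rate defined above amounts to a single division. A minimal sketch with invented spike times (the data and window length here are purely illustrative):

```python
# Spike-count rate: number of spikes in a trial divided by the window length T.
# spike_times (in seconds) are hypothetical values for one trial.
spike_times = [0.012, 0.087, 0.130, 0.255, 0.391, 0.402, 0.477]
T = 0.5  # a 500 ms window, one of the typical values mentioned above

rate = len(spike_times) / T  # spikes per second
print(rate)  # prints 14.0
```

Note that this single number discards all information about when within the window the spikes occurred, which is exactly the limitation discussed below.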
The spike-count rate can be determined from a single trial, but at the expense of losing all temporal resolution about variations in neural response during the course of the trial. Temporal averaging can work well in cases where the stimulus is constant or slowly varying and does not require a fast reaction of the organism, which is the situation usually encountered in experimental protocols. Real-world input, however, is hardly stationary, but often changes on a fast time scale. For example, even when viewing a static image, humans perform saccades, rapid changes of the direction of gaze. The image projected onto the retinal photoreceptors therefore changes every few hundred milliseconds (Chapter 1.5 of 'Spiking Neuron Models').
Despite its shortcomings, the concept of a spike-count rate code is widely used not only in experiments, but also in models of neural networks. It has led to the idea that a neuron transforms information about a single input variable (the stimulus strength) into a single continuous output variable (the firing rate).
There is a growing body of evidence that in Purkinje neurons, at least, information is not simply encoded in firing but also in the timing and duration of non-firing, quiescent periods. There is also evidence from retinal cells, that information is encoded not only in the firing rate but also in spike timing. More generally, whenever a rapid response of an organism is required a firing rate defined as a spike-count over a few hundred milliseconds is simply too slow.
==== Time-dependent firing rate (averaging over several trials) ====
The time-dependent firing rate is defined as the average number of spikes (averaged over trials) appearing during a short interval between times t and t+Δt, divided by the duration of the interval. It works for stationary as well as for time-dependent stimuli. To experimentally measure the time-dependent firing rate, the experimenter records from a neuron while stimulating with some input sequence. The same stimulation sequence is repeated several times and the neuronal response is reported in a Peri-Stimulus-Time Histogram (PSTH). The time t is measured with respect to the start of the stimulation sequence. The interval Δt must be large enough (typically in the range of one or a few milliseconds) so that there is a sufficient number of spikes within the interval to obtain a reliable estimate of the average. The number of occurrences of spikes nK(t;t+Δt) summed over all repetitions of the experiment, divided by the number K of repetitions, is a measure of the typical activity of the neuron between time t and t+Δt. A further division by the interval length Δt yields the time-dependent firing rate r(t) of the neuron, which is equivalent to the spike density of the PSTH (Chapter 1.5 of 'Spiking Neuron Models').
For sufficiently small Δt, r(t)Δt is the average number of spikes occurring between times t and t+Δt over multiple trials. If Δt is small, there will never be more than one spike within the interval between t and t+Δt on any given trial. This means that r(t)Δt is also the fraction of trials on which a spike occurred between those times. Equivalently, r(t)Δt is the probability that a spike occurs during this time interval.
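The trial-averaged PSTH estimate of r(t) described above can be sketched as follows. The trials, bin width, and duration are invented values, not recorded data:

```python
import numpy as np

# K repeated trials of hypothetical spike times (in seconds) for the same stimulus.
trials = [
    [0.010, 0.052, 0.101],
    [0.012, 0.048, 0.155],
    [0.009, 0.060, 0.110],
]
K = len(trials)
dt = 0.020  # 20 ms bins
T = 0.200   # total recording duration after stimulus onset
edges = np.arange(0.0, T + dt, dt)

# n_K(t; t+dt): spike counts per bin summed over all repetitions,
# then divided by K and by dt to obtain the rate r(t) per bin.
all_spikes = np.concatenate([np.asarray(tr) for tr in trials])
counts, _ = np.histogram(all_spikes, bins=edges)
rate = counts / (K * dt)  # time-dependent firing rate, in spikes per second
```

Here `rate[i]` approximates r(t) over the i-th bin; with 3 spikes in the first 20 ms bin across 3 trials, the estimate for that bin is 3 / (3 × 0.02 s) = 50 spikes/s.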
As an experimental procedure, the time-dependent firing rate measure is a useful method to evaluate neuronal activity, in particular in the case of time-dependent stimuli. The obvious problem with this approach is that it cannot be the coding scheme used by neurons in the brain. Neurons cannot wait for a stimulus to be presented repeatedly in exactly the same manner before generating a response.
Nevertheless, the experimental time-dependent firing rate measure can make sense, if there are large populations of independent neurons that receive the same stimulus. Instead of recording from a population of N neurons in a single run, it is experimentally easier to record from a single neuron and average over N repeated runs. Thus, the time-dependent firing rate coding relies on the implicit assumption that there are always populations of neurons.
=== Temporal coding ===
When precise spike timing or high-frequency firing-rate fluctuations are found to carry information, the neural code is often identified as a temporal code. A number of studies have found that the temporal resolution of the neural code is on a millisecond time scale, indicating that precise spike timing is a significant element in neural coding. Such codes, that communicate via the time between spikes are also referred to as interpulse interval codes, and have been supported by recent studies.
Neurons exhibit high-frequency fluctuations of firing-rates which could be noise or could carry information. Rate coding models suggest that these irregularities are noise, while temporal coding models suggest that they encode information. If the nervous system only used rate codes to convey information, a more consistent, regular firing rate would have been evolutionarily advantageous, and neurons would have utilized this code over other less robust options. Temporal coding supplies an alternate explanation for the "noise", suggesting that it actually encodes information and affects neural processing. To model this idea, binary symbols can be used to mark the spikes: 1 for a spike, 0 for no spike. Temporal coding allows the sequence 000111000111 to mean something different from 001100110011, even though the mean firing rate is the same for both sequences, at 6 spikes per sequence.
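The two example sequences above can be checked directly. This sketch only illustrates the point that the patterns share a firing rate but differ as code words:

```python
# The two binary spike patterns from the text: same number of spikes
# (same mean rate), but different temporal structure.
a = "000111000111"
b = "001100110011"

# A rate code sees only the spike count, which is identical.
rate_a = a.count("1")
rate_b = b.count("1")

# A temporal code can distinguish them, e.g. by the set of spike positions.
positions_a = [i for i, s in enumerate(a) if s == "1"]
positions_b = [i for i, s in enumerate(b) if s == "1"]
print(rate_a == rate_b)               # prints True
print(positions_a != positions_b)     # prints True: same rate, different code words
```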
Until recently, scientists had put the most emphasis on rate encoding as an explanation for post-synaptic potential patterns. However, functions of the brain are more temporally precise than the use of only rate encoding seems to allow. In other words, essential information could be lost due to the inability of the rate code to capture all the available information of the spike train. In addition, responses are different enough between similar (but not identical) stimuli to suggest that the distinct patterns of spikes contain a higher volume of information than is possible to include in a rate code.
Temporal codes (also called spike codes) employ those features of the spiking activity that cannot be described by the firing rate. For example, time-to-first-spike after the stimulus onset, phase-of-firing with respect to background oscillations, characteristics based on the second and higher statistical moments of the ISI probability distribution, spike randomness, or precisely timed groups of spikes (temporal patterns) are candidates for temporal codes. As there is no absolute time reference in the nervous system, the information is carried either in terms of the relative timing of spikes in a population of neurons (temporal patterns) or with respect to an ongoing brain oscillation (phase of firing). One way in which temporal codes are decoded, in presence of neural oscillations, is that spikes occurring at specific phases of an oscillatory cycle are more effective in depolarizing the post-synaptic neuron.
The temporal structure of a spike train or firing rate evoked by a stimulus is determined both by the dynamics of the stimulus and by the nature of the neural encoding process. Stimuli that change rapidly tend to generate precisely timed spikes (and rapidly changing firing rates in PSTHs) no matter what neural coding strategy is being used. Temporal coding in the narrow sense refers to temporal precision in the response that does not arise solely from the dynamics of the stimulus, but that nevertheless relates to properties of the stimulus. The interplay between stimulus and encoding dynamics makes the identification of a temporal code difficult.
In temporal coding, learning can be explained by activity-dependent synaptic delay modifications. The modifications can themselves depend not only on spike rates (rate coding) but also on spike timing patterns (temporal coding), i.e., can be a special case of spike-timing-dependent plasticity.
The issue of temporal coding is distinct and independent from the issue of independent-spike coding. If each spike is independent of all the other spikes in the train, the temporal character of the neural code is determined by the behavior of time-dependent firing rate r(t). If r(t) varies slowly with time, the code is typically called a rate code, and if it varies rapidly, the code is called temporal.
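This distinction can be made concrete by estimating the time-dependent firing rate r(t) from repeated trials, as in a peri-stimulus time histogram (PSTH). The following sketch (NumPy assumed; the function name and the bin width in the example are illustrative, not from the literature) averages binned spike counts across trials:

```python
import numpy as np

def psth(spike_trains, t_max, bin_width):
    """Estimate the time-dependent firing rate r(t), in spikes/s,
    by averaging binned spike counts over repeated trials."""
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for trial in spike_trains:            # each trial: spike times in seconds
        counts += np.histogram(trial, bins=edges)[0]
    rate = counts / (len(spike_trains) * bin_width)
    return edges[:-1] + bin_width / 2, rate   # bin centers, rate estimate
```

A slowly varying estimate of r(t) at coarse bin widths is consistent with a rate code; reliably repeated fast structure that survives at fine bin widths points toward a temporal code.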
==== Temporal coding in sensory systems ====
For very brief stimuli, a neuron's maximum firing rate may not be fast enough to produce more than a single spike. Because so much information about the brief stimulus is contained in this single spike, the timing of the spike itself would have to convey more information than simply the average frequency of action potentials over a given period of time. This model is especially important for sound localization, which occurs within the brain on the order of milliseconds. The brain must obtain a large quantity of information based on a relatively short neural response. Additionally, if two stimuli evoke low firing rates, on the order of ten spikes per second, that differ only slightly, then a neuron trying to discriminate them may need to wait for a second or more to accumulate enough information. This is not consistent with the many organisms that can discriminate between stimuli on a timescale of milliseconds, suggesting that a rate code is not the only model at work.
To account for the fast encoding of visual stimuli, it has been suggested that neurons of the retina encode visual information in the latency time between stimulus onset and first action potential, also called latency to first spike or time-to-first-spike. This type of temporal coding has been shown also in the auditory and somato-sensory system. The main drawback of such a coding scheme is its sensitivity to intrinsic neuronal fluctuations. In the primary visual cortex of macaques, the timing of the first spike relative to the start of the stimulus was found to provide more information than the interval between spikes. However, the interspike interval could be used to encode additional information, which is especially important when the spike rate reaches its limit, as in high-contrast situations. For this reason, temporal coding may play a part in coding defined edges rather than gradual transitions.
The mammalian gustatory system is useful for studying temporal coding because of its fairly distinct stimuli and the easily discernible responses of the organism. Temporally encoded information may help an organism discriminate between different tastants of the same category (sweet, bitter, sour, salty, umami) that elicit very similar responses in terms of spike count. The temporal component of the pattern elicited by each tastant may be used to determine its identity (e.g., the difference between two bitter tastants, such as quinine and denatonium). In this way, both rate coding and temporal coding may be used in the gustatory system – rate for basic tastant type, temporal for more specific differentiation.
Research on the mammalian gustatory system has shown that there is an abundance of information present in temporal patterns across populations of neurons, and this information is different from that which is determined by rate coding schemes. Groups of neurons may synchronize in response to a stimulus. In studies of the frontal cortex in primates, precise patterns with short time scales, only a few milliseconds in length, were found across small populations of neurons, and these patterns correlated with certain information-processing behaviors. However, little information could be determined from the patterns; one possible theory is that they represented the higher-order processing taking place in the brain.
As with the visual system, in mitral/tufted cells in the olfactory bulb of mice, first-spike latency relative to the start of a sniffing action seemed to encode much of the information about an odor. This strategy of using spike latency allows for rapid identification of and reaction to an odorant. In addition, some mitral/tufted cells have specific firing patterns for given odorants. This type of extra information could help in recognizing a certain odor, but is not completely necessary, as average spike count over the course of the animal's sniffing was also a good identifier. Along the same lines, experiments done with the olfactory system of rabbits showed distinct patterns which correlated with different subsets of odorants, and a similar result was obtained in experiments with the locust olfactory system.
==== Temporal coding applications ====
The specificity of temporal coding requires highly refined technology to measure informative, reliable, experimental data. Advances made in optogenetics allow neurologists to control spikes in individual neurons, offering electrical and spatial single-cell resolution. For example, blue light causes the light-gated ion channel channelrhodopsin to open, depolarizing the cell and producing a spike. When blue light is not sensed by the cell, the channel closes, and the neuron ceases to spike. The pattern of the spikes matches the pattern of the blue light stimuli. By inserting channelrhodopsin gene sequences into mouse DNA, researchers can control spikes and therefore certain behaviors of the mouse (e.g., making the mouse turn left). Researchers, through optogenetics, have the tools to effect different temporal codes in a neuron while maintaining the same mean firing rate, and thereby can test whether or not temporal coding occurs in specific neural circuits.
Optogenetic technology also has the potential to enable the correction of spike abnormalities at the root of several neurological and psychological disorders. If neurons do encode information in individual spike timing patterns, key signals could be missed by attempting to crack the code while looking only at mean firing rates. Understanding any temporally encoded aspects of the neural code and replicating these sequences in neurons could allow for greater control and treatment of neurological disorders such as depression, schizophrenia, and Parkinson's disease. Regulation of spike intervals in single cells more precisely controls brain activity than the addition of pharmacological agents intravenously.
==== Phase-of-firing code ====
Phase-of-firing code is a neural coding scheme that combines the spike count code with a time reference based on oscillations. This type of code takes into account a time label for each spike according to a time reference based on phase of local ongoing oscillations at low or high frequencies.
It has been shown that neurons in some cortical sensory areas encode rich naturalistic stimuli in terms of their spike times relative to the phase of ongoing network oscillatory fluctuations, rather than only in terms of their spike count. The local field potential signals reflect population (network) oscillations. The phase-of-firing code is often categorized as a temporal code although the time label used for spikes (i.e. the network oscillation phase) is a low-resolution (coarse-grained) reference for time. As a result, often only four discrete values for the phase are enough to represent all the information content in this kind of code with respect to the phase of oscillations in low frequencies. Phase-of-firing code is loosely based on the phase precession phenomena observed in place cells of the hippocampus. Another feature of this code is that neurons adhere to a preferred order of spiking between a group of sensory neurons, resulting in firing sequence.
Phase code has been shown in visual cortex to involve also high-frequency oscillations. Within a cycle of gamma oscillation, each neuron has its own preferred relative firing time. As a result, an entire population of neurons generates a firing sequence that has a duration of up to about 15 ms.
=== Population coding ===
Population coding is a method of representing stimuli using the joint activities of a number of neurons. In population coding, each neuron has a distribution of responses over some set of inputs, and the responses of many neurons may be combined to determine some value about the inputs. From the theoretical point of view, population coding is one of a few mathematically well-formulated problems in neuroscience. It grasps the essential features of neural coding and yet is simple enough for theoretical analysis. Experimental studies have revealed that this coding paradigm is widely used in the sensory and motor areas of the brain.
For example, in the visual area medial temporal (MT), neurons are tuned to the direction of object motion. In response to an object moving in a particular direction, many neurons in MT fire with a noise-corrupted and bell-shaped activity pattern across the population. The moving direction of the object is retrieved from the population activity and is thereby immune to the fluctuations in any single neuron's signal. When monkeys are trained to move a joystick towards a lit target, a single neuron fires for multiple target directions, but it fires fastest for one direction and more slowly depending on how close the target is to the neuron's "preferred" direction. If each neuron represents movement in its preferred direction, and the vector sum of all neurons is calculated (each neuron has a firing rate and a preferred direction), the sum points in the direction of motion. In this manner, the population of neurons codes the signal for the motion. This particular population code is referred to as population vector coding.
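The population vector computation described above can be sketched in a few lines: each neuron contributes a vector along its preferred direction, scaled by its firing rate, and the decoded direction is the angle of the sum (the example numbers are illustrative):

```python
import numpy as np

def population_vector(preferred_deg, rates):
    """Decode movement direction (degrees) as the rate-weighted
    vector sum of the neurons' preferred directions."""
    theta = np.deg2rad(np.asarray(preferred_deg))
    x = np.sum(rates * np.cos(theta))
    y = np.sum(rates * np.sin(theta))
    return np.rad2deg(np.arctan2(y, x)) % 360.0
```

With four neurons tuned to the cardinal directions and activity peaking in the 90° neuron, the decoded direction is 90°, even though every active neuron fires for multiple directions.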
Place-time population codes, termed the averaged-localized-synchronized-response (ALSR) code, have been derived for neural representation of auditory acoustic stimuli. This exploits both the place or tuning within the auditory nerve, as well as the phase-locking within each nerve fiber auditory nerve. The first ALSR representation was for steady-state vowels; ALSR representations of pitch and formant frequencies in complex, non-steady state stimuli were later demonstrated for voiced-pitch, and formant representations in consonant-vowel syllables.
The advantage of such representations is that global features such as pitch or formant transition profiles can be represented as global features across the entire nerve simultaneously via both rate and place coding.
Population coding has a number of other advantages as well, including reduction of uncertainty due to neuronal variability and the ability to represent a number of different stimulus attributes simultaneously. Population coding is also much faster than rate coding and can reflect changes in the stimulus conditions nearly instantaneously. Individual neurons in such a population typically have different but overlapping selectivities, so that many neurons, but not necessarily all, respond to a given stimulus.
Typically an encoding function has a peak value such that activity of the neuron is greatest if the perceptual value is close to the peak value, and becomes reduced accordingly for values less close to the peak value. It follows that the actual perceived value can be reconstructed from the overall pattern of activity in the set of neurons. Vector coding is an example of simple averaging. A more sophisticated mathematical technique for performing such a reconstruction is the method of maximum likelihood based on a multivariate distribution of the neuronal responses. These models can assume independence, second-order correlations, or even more detailed dependencies such as higher-order maximum entropy models or copulas.
==== Correlation coding ====
The correlation coding model of neuronal firing claims that correlations between action potentials, or "spikes", within a spike train may carry additional information above and beyond the simple timing of the spikes. Early work suggested that correlation between spike trains can only reduce, and never increase, the total mutual information present in the two spike trains about a stimulus feature. However, this was later demonstrated to be incorrect. Correlation structure can increase information content if noise and signal correlations are of opposite sign. Correlations can also carry information not present in the average firing rates of pairs of neurons. A good example of this exists in the pentobarbital-anesthetized marmoset auditory cortex, in which a pure tone causes an increase in the number of correlated spikes, but not an increase in the mean firing rate, of pairs of neurons.
==== Independent-spike coding ====
The independent-spike coding model of neuronal firing claims that each individual action potential, or "spike", is independent of each other spike within the spike train.
==== Position coding ====
A typical population code involves neurons with a Gaussian tuning curve whose means vary linearly with the stimulus intensity, meaning that the neuron responds most strongly (in terms of spikes per second) to a stimulus near the mean. The actual intensity could be recovered as the stimulus level corresponding to the mean of the neuron with the greatest response. However, the noise inherent in neural responses means that a maximum likelihood estimation function is more accurate.
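The maximum-likelihood readout mentioned above can be sketched for Gaussian tuning curves with independent Poisson spike counts: the decoder scans a grid of candidate stimulus values for the one that maximizes the Poisson log-likelihood. The tuning parameters and grid below are illustrative:

```python
import numpy as np

def tuning(stim, means, r_max=20.0, sigma=1.0):
    """Gaussian tuning curves: expected spike count of each neuron."""
    return r_max * np.exp(-0.5 * ((stim - means) / sigma) ** 2)

def ml_decode(counts, means, grid):
    """Maximum-likelihood stimulus estimate under independent Poisson noise."""
    best_s, best_ll = grid[0], -np.inf
    for s in grid:
        rates = tuning(s, means) + 1e-12          # guard against log(0)
        ll = np.sum(counts * np.log(rates) - rates)
        if ll > best_ll:
            best_s, best_ll = s, ll
    return best_s
```

Unlike simply reporting the preferred value of the single most active neuron, this estimate pools evidence from the whole population and is therefore more robust to the noise in individual responses.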
This type of code is used to encode continuous variables such as joint position, eye position, color, or sound frequency. Any individual neuron is too noisy to faithfully encode the variable using rate coding, but an entire population ensures greater fidelity and precision. For a population of unimodal tuning curves, i.e. with a single peak, the precision typically scales linearly with the number of neurons. Hence, for half the precision, half as many neurons are required. In contrast, when the tuning curves have multiple peaks, as in grid cells that represent space, the precision of the population can scale exponentially with the number of neurons. This greatly reduces the number of neurons required for the same precision.
==== Topology of population dynamics ====
Dimensionality reduction and topological data analysis have revealed that the population code is constrained to low-dimensional manifolds, sometimes also referred to as attractors. Position along the neural manifold corresponds to behavioral variables: head-direction neurons in the anterodorsal thalamic nucleus form a ring structure, grid cells in the entorhinal cortex encode spatial position along the surface of a torus, and motor cortex neurons encode hand movements and preparatory activity. The low-dimensional manifolds are known to change in a state-dependent manner, for example with eye closure in the visual cortex or breathing behavior in the ventral respiratory column.
=== Sparse coding ===
In a sparse code, each item is encoded by the strong activation of a relatively small set of neurons, with a different subset of the available neurons used for each item. In contrast to sensor-sparse coding, sensor-dense coding implies that all information from possible sensor locations is known.
Sparseness may accordingly refer to temporal sparseness ("a relatively small number of time periods are active") or to sparseness in an activated population of neurons. In the latter case, sparseness may be defined for one time period as the number of activated neurons relative to the total number of neurons in the population. This seems to be a hallmark of neural computation: compared with traditional computers, information is massively distributed across neurons. Sparse coding of natural images produces wavelet-like oriented filters that resemble the receptive fields of simple cells in the visual cortex. The capacity of sparse codes may be increased by the simultaneous use of temporal coding, as found in the locust olfactory system.
Given a potentially large set of input patterns, sparse coding algorithms (e.g. sparse autoencoder) attempt to automatically find a small number of representative patterns which, when combined in the right proportions, reproduce the original input patterns. The sparse coding for the input then consists of those representative patterns. For example, the very large set of English sentences can be encoded by a small number of symbols (i.e. letters, numbers, punctuation, and spaces) combined in a particular order for a particular sentence, and so a sparse coding for English would be those symbols.
==== Linear generative model ====
Most models of sparse coding are based on the linear generative model. In this model, the symbols are combined in a linear fashion to approximate the input.
More formally, given a k-dimensional set of real-numbered input vectors
ξ
→
∈
R
k
{\displaystyle {\vec {\xi }}\in \mathbb {R} ^{k}}
, the goal of sparse coding is to determine n k-dimensional basis vectors
b
1
→
,
…
,
b
n
→
∈
R
k
{\displaystyle {\vec {b_{1}}},\ldots ,{\vec {b_{n}}}\in \mathbb {R} ^{k}}
, corresponding to neuronal receptive fields, along with a sparse n-dimensional vector of weights or coefficients
s
→
∈
R
n
{\displaystyle {\vec {s}}\in \mathbb {R} ^{n}}
for each input vector, so that a linear combination of the basis vectors with proportions given by the coefficients results in a close approximation to the input vector:
ξ
→
≈
∑
j
=
1
n
s
j
b
→
j
{\displaystyle {\vec {\xi }}\approx \sum _{j=1}^{n}s_{j}{\vec {b}}_{j}}
.
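Under the linear generative model, the sparse coefficients can be found by minimizing the reconstruction error plus an L1 sparseness penalty. A minimal sketch using ISTA (iterative soft-thresholding), one standard algorithm for this objective, with illustrative parameters:

```python
import numpy as np

def ista(B, xi, lam=0.1, n_iter=200):
    """Solve min_s ||xi - B @ s||^2 / 2 + lam * ||s||_1 by iterative
    soft-thresholding; the columns of B are the basis vectors b_j."""
    L = np.linalg.norm(B, 2) ** 2           # step-size bound (Lipschitz const.)
    s = np.zeros(B.shape[1])
    for _ in range(n_iter):
        g = s - (B.T @ (B @ s - xi)) / L    # gradient step on the data term
        s = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return s
```

The L1 penalty drives most coefficients exactly to zero, so the returned vector activates only a few basis vectors, which is the defining property of a sparse code.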
The codings generated by algorithms implementing a linear generative model can be classified into codings with soft sparseness and those with hard sparseness. These refer to the distribution of basis vector coefficients for typical inputs. A coding with soft sparseness has a smooth Gaussian-like distribution, but peakier than Gaussian, with many zero values, some small absolute values, fewer larger absolute values, and very few very large absolute values. Thus, many of the basis vectors are active. Hard sparseness, on the other hand, indicates that there are many zero values, no or hardly any small absolute values, fewer larger absolute values, and very few very large absolute values, and thus few of the basis vectors are active. This is appealing from a metabolic perspective: less energy is used when fewer neurons are firing.
Another measure of coding is whether it is critically complete or overcomplete. If the number of basis vectors n is equal to the dimensionality k of the input set, the coding is said to be critically complete. In this case, smooth changes in the input vector result in abrupt changes in the coefficients, and the coding is not able to gracefully handle small scalings, small translations, or noise in the inputs. If, however, the number of basis vectors is larger than the dimensionality of the input set, the coding is overcomplete. Overcomplete codings smoothly interpolate between input vectors and are robust under input noise. The human primary visual cortex is estimated to be overcomplete by a factor of 500, so that, for example, a 14 × 14 patch of input (a 196-dimensional space) is coded by roughly 100,000 neurons.
Other models are based on matching pursuit, a sparse approximation algorithm which finds the "best matching" projections of multidimensional data, and dictionary learning, a representation learning method which aims to find a sparse matrix representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves.
==== Biological evidence ====
Sparse coding may be a general strategy of neural systems to augment memory capacity. To adapt to their environments, animals must learn which stimuli are associated with rewards or punishments and distinguish these reinforced stimuli from similar but irrelevant ones. Such tasks require implementing stimulus-specific associative memories in which only a few neurons out of a population respond to any given stimulus and each neuron responds to only a few stimuli out of all possible stimuli.
Theoretical work on sparse distributed memory has suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations. Experimentally, sparse representations of sensory information have been observed in many systems, including vision, audition, touch, and olfaction. However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory has been difficult to obtain.
In the Drosophila olfactory system, sparse odor coding by the Kenyon cells of the mushroom body is thought to generate a large number of precisely addressable locations for the storage of odor-specific memories. Sparseness is controlled by a negative feedback circuit between Kenyon cells and GABAergic anterior paired lateral (APL) neurons. Systematic activation and blockade of each leg of this feedback circuit shows that Kenyon cells activate APL neurons and APL neurons inhibit Kenyon cells. Disrupting the Kenyon cell–APL feedback loop decreases the sparseness of Kenyon cell odor responses, increases inter-odor correlations, and prevents flies from learning to discriminate similar, but not dissimilar, odors. These results suggest that feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor-specificity of memories.
== See also ==
== References ==
== Further reading ==
Földiák P, Endres D, Sparse coding, Scholarpedia, 3(1):2984, 2008.
Dayan P & Abbott LF. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Cambridge, Massachusetts: The MIT Press; 2001. ISBN 0-262-04199-5
Rieke F, Warland D, de Ruyter van Steveninck R, Bialek W. Spikes: Exploring the Neural Code. Cambridge, Massachusetts: The MIT Press; 1999. ISBN 0-262-68108-0
Olshausen, B. A.; Field, D. J. (1996). "Emergence of simple-cell receptive field properties by learning a sparse code for natural images". Nature. 381 (6583): 607–9. Bibcode:1996Natur.381..607O. doi:10.1038/381607a0. PMID 8637596. S2CID 4358477.
Tsien, JZ.; et al. (2014). "On initial Brain Activity Mapping of episodic and semantic memory code in the hippocampus". Neurobiology of Learning and Memory. 105: 200–210. doi:10.1016/j.nlm.2013.06.019. PMC 3769419. PMID 23838072. | Wikipedia/Rate_code |
The Hochschule Bielefeld – University of Applied Sciences and Arts (HSBI) is the largest state university of applied sciences in East Westphalia-Lippe. The main location of this educational institution is in Bielefeld; other locations are in Minden and Gütersloh. The range of courses includes bachelor's and master's degree programs, as well as certificate programs, in six subject areas. At present, 10,535 students are taught by 286 professors and teaching staff, while 629 employees provide administrative support.
== History ==
Bielefeld University of Applied Sciences and Arts was founded on August 1, 1971 as the Bielefeld University of Applied Sciences and was one of the first universities of applied sciences in Germany. It was created by the merger of several educational institutions, including the State Engineering School for Mechanical Engineering in Bielefeld, the Municipal Werkkunstschule Bielefeld, the Landeshauptmann-Salzmann-Schule & Higher Technical School for Social Work, the State Higher School of Economics Bielefeld and the State Engineering School for Civil Engineering Minden. The aim of this integration was to promote practice-oriented academic training in the region of East Westphalia-Lippe.
Prof. Dr. Germanus Wegmann was the first rector of the Bielefeld University of Applied Sciences.
Over the years, the Bielefeld University of Applied Sciences expanded its range of courses and opened additional locations in Minden and Gütersloh.
On April 19, 2023, Bielefeld University of Applied Sciences was renamed “Bielefeld University of Applied Sciences and Arts (HSBI)” to reflect its broad range of subjects and its claim to be a modern university of applied sciences.
Today, Bielefeld University of Applied Sciences and Arts is the largest state university of applied sciences in East Westphalia-Lippe, with over 10,500 students, and offers a wide range of courses in six departments.
== Locations ==
Bielefeld University of Applied Sciences and Arts (HSBI) is located at three sites, each offering different focuses in research, teaching and practice. The main campus is in Bielefeld, while the additional campuses in Minden and Gütersloh offer specialized courses of study. This geographical distribution enables links with the respective regional economic and research structures.
=== Main Campus Bielefeld ===
The university's main campus is located in the Bielefeld-Altstadt district. This location is the center of the university and houses the majority of the departments, including engineering and mathematics, design, economics, and health and social services.
=== Minden Campus ===
The Minden Campus is a specialized location of the Bielefeld University of Applied Sciences and Arts and focuses on the fields of architecture, civil engineering and supply engineering.
=== Gütersloh Campus ===
The Gütersloh Campus specializes in dual study programs and offers a unique model in which students can gain practical experience in partner companies while studying. Degree programs such as production management, digital technologies and applied computer science are based here.
== Faculties ==
Bielefeld University of Applied Sciences and Arts is divided into six faculties, each offering a specific range of practice-oriented courses and research fields. The university's range of courses is supplemented by the Minden and Gütersloh campuses, which function as specialized campuses. The university's faculties are described in detail below:
=== Faculty of Engineering and Mathematics ===
The Faculty of Engineering and Mathematics offers degree programs in mechanical engineering, electrical engineering, mechatronics, industrial engineering, and mathematics. The main research areas are automation, sustainable energy technologies, and digitalization.
=== Faculty of Design and Art ===
In the Faculty of Design and Art, students can specialize in areas such as communication design, digital media, photography, fashion and textile design. This faculty combines artistic approaches with technological innovations such as virtual reality, 3D printing and digital media production.
=== Faculty of Business and Health ===
The Faculty of Business and Health combines business and health-related degree programs, including business administration, business psychology, health management, and social management. Research focuses include digitalization in healthcare, process optimization, and sustainable management.
=== Faculty of Social Sciences ===
The Faculty of Social Sciences focuses on degree programs such as social work, special education, and education. Research focuses include topics such as social integration, inclusion, and child and youth services.
=== Faculty of Architecture and Civil Engineering ===
This faculty specializes in courses of study such as architecture and civil engineering. The practice-oriented teaching is complemented by laboratories and research projects that focus on sustainable construction, energy efficiency and construction techniques.
=== Faculty of Electrical Engineering and Computer Science ===
The Faculty of Electrical Engineering and Computer Science offers degree programs in electrical engineering, applied computer science, and related disciplines. The research and teaching focus is on areas such as artificial intelligence, embedded systems, and automation technology.
== Research Priorities ==
Bielefeld University of Applied Sciences and Arts (HSBI) aligns its research profile with global societal challenges and places particular emphasis on the areas of climate and energy, health, mobility and communication.
Within these subject areas, the HSBI combines its research activities in various institutes and research priorities.
An example is the research focus IFE – Interdisciplinary Research and Application Development in Environmental Informatics. This combines computer science, IT security, physics and measurement technology to contribute to the development of climate-friendly residential buildings. Research focuses on areas such as machine learning, applications of artificial intelligence, photovoltaic yield forecasts, energy and room climate monitoring, and IT security.
Another important research focus is AMMO – Applied Mathematical Modeling and Optimization. This is dedicated to the development of mathematical models and optimization methods for solving complex problems in various fields of application.
In addition, the School of Business addresses topics such as digital transformation, internationalization and sustainability/CSR. These strategic cornerstones complement the university's own research profile and promote the innovative development of the Ostwestfalen-Lippe region.
== Partnerships ==
The university maintains partnerships with over 150 universities worldwide. It also works with around 350 companies, particularly in the OWL region, to ensure practical training and research.
== References ==
== External links ==
Official website (in English) | Wikipedia/Bielefeld_University_of_Applied_Sciences |
Biological neuron models, also known as spiking neuron models, are mathematical descriptions of the conduction of electrical signals in neurons. Neurons (or nerve cells) are electrically excitable cells within the nervous system, able to fire electric signals, called action potentials, across a neural network. These mathematical models describe the role of the biophysical and geometrical characteristics of neurons on the conduction of electrical activity.
Central to these models is the description of how the membrane potential (that is, the difference in electric potential between the interior and the exterior of a biological cell) across the cell membrane changes over time. In an experimental setting, stimulating neurons with an electrical current generates an action potential (or spike) that propagates down the neuron's axon. This axon can branch out and connect to a large number of downstream neurons at sites called synapses. At these synapses, the spike can cause the release of neurotransmitters, which in turn can change the voltage potential of downstream neurons. This change can potentially lead to even more spikes in those downstream neurons, thus passing down the signal. As many as 95% of neurons in the neocortex, the outermost layer of the mammalian brain, are excitatory pyramidal neurons, and each pyramidal neuron receives tens of thousands of inputs from other neurons. Thus, spiking neurons are a major information-processing unit of the nervous system.
One such example of a spiking neuron model may be a highly detailed mathematical model that includes spatial morphology. Another may be a conductance-based neuron model that views neurons as points and describes the membrane voltage dynamics as a function of trans-membrane currents. A mathematically simpler "integrate-and-fire" model significantly simplifies the description of ion channel and membrane potential dynamics (initially studied by Lapicque in 1907).
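To make the integrate-and-fire idea concrete, here is a minimal Euler-integrated leaky integrate-and-fire simulation; all membrane parameters are illustrative textbook-scale values, not fitted to any particular neuron:

```python
import numpy as np

def lif(current, dt=1e-4, tau=0.020, R=1e7,
        v_rest=-0.070, v_th=-0.054, v_reset=-0.070):
    """Leaky integrate-and-fire: Euler-integrate the membrane equation
    tau * dV/dt = -(V - v_rest) + R*I, emitting a spike and resetting
    whenever V crosses the threshold v_th. `current` is in amperes.
    Returns the voltage trace (V) and the spike-time indices."""
    v = np.full(len(current), v_rest)
    spikes = []
    for t in range(1, len(current)):
        v[t] = v[t-1] + (-(v[t-1] - v_rest) + R * current[t-1]) * dt / tau
        if v[t] >= v_th:                 # threshold crossing -> spike
            spikes.append(t)
            v[t] = v_reset               # reset the membrane potential
    return v, spikes
```

With a constant suprathreshold current the model fires regularly; sweeping the input current and plotting the resulting firing rate yields the model's f–I curve.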
== Biological background, classification, and aims of neuron models ==
=== Non-spiking cells, spiking cells, and their measurement ===
Not all the cells of the nervous system produce the type of spike that defines the scope of the spiking neuron models. For example, cochlear hair cells, retinal receptor cells, and retinal bipolar cells do not spike. Furthermore, many cells in the nervous system are not classified as neurons but instead are classified as glia.
Neuronal activity can be measured with different experimental techniques, such as the "Whole cell" measurement technique, which captures the spiking activity of a single neuron and produces full amplitude action potentials.
With extracellular measurement techniques, one or more electrodes are placed in the extracellular space. Spikes, often from several spiking sources, depending on the size of the electrode and its proximity to the sources, can be identified with signal processing techniques. Extracellular measurement has several advantages:
It is easier to obtain experimentally;
It is robust and lasts for a longer time;
It can reflect the dominant effect, especially when conducted in an anatomical region with many similar cells.
Overview of neuron models
Neuron models can be divided into two categories according to the physical units of the interface of the model. Each category could be further divided according to the abstraction/detail level:
Electrical input–output membrane voltage models – These models produce a prediction for membrane output voltage as a function of electrical stimulation given as current or voltage input. The various models in this category differ in the exact functional relationship between the input current and the output voltage and in the level of detail. Some models in this category predict only the moment of occurrence of the output spike (also known as "action potential"); other models are more detailed and account for sub-cellular processes. The models in this category can be either deterministic or probabilistic.
Natural stimulus or pharmacological input neuron models – The models in this category connect the input stimulus, which can be either pharmacological or natural, to the probability of a spike event. The input stage of these models is not electrical but rather has either pharmacological (chemical) concentration units, or physical units that characterize an external stimulus such as light, sound, or other forms of physical pressure. Furthermore, the output stage represents the probability of a spike event and not an electrical voltage.
Although it is not unusual in science and engineering to have several descriptive models for different abstraction/detail levels, the number of different, sometimes contradictory, biological neuron models is exceptionally high. This situation is partly the result of the many different experimental settings and the difficulty of separating the intrinsic properties of a single neuron from measurement effects and interactions of many cells (network effects).
Aims of neuron models
Ultimately, biological neuron models aim to explain the mechanisms underlying the operation of the nervous system. However, several approaches can be distinguished, from more realistic models (e.g., mechanistic models) to more pragmatic models (e.g., phenomenological models). Modeling helps to analyze experimental data and address questions. Models are also important in the context of restoring lost brain functionality through neuroprosthetic devices.
== Electrical input–output membrane voltage models ==
The models in this category describe the relationship between neuronal membrane currents at the input stage and membrane voltage at the output stage. This category includes (generalized) integrate-and-fire models and biophysical models inspired by the work of Hodgkin and Huxley in the early 1950s, using an experimental setup that punctured the cell membrane and allowed the experimenter to force a specific membrane voltage or current.
Most modern electrical neural interfaces apply extracellular electrical stimulation to avoid membrane puncturing, which can lead to cell death and tissue damage. Hence, it is not clear to what extent the electrical neuron models hold for extracellular stimulation.
=== Hodgkin–Huxley ===
The Hodgkin–Huxley model (H&H model) is a model of the relationship between the flow of ionic currents across the neuronal cell membrane and the membrane voltage of the cell. It consists of a set of nonlinear differential equations describing the behavior of ion channels that permeate the cell membrane of the squid giant axon. Hodgkin and Huxley were awarded the 1963 Nobel Prize in Physiology or Medicine for this work.
The model is built on the voltage-current relationship of the membrane: multiple voltage-dependent currents charge the cell membrane of capacitance Cm
{\displaystyle C_{\mathrm {m} }{\frac {dV(t)}{dt}}=-\sum _{i}I_{i}(t,V).}
The above equation is the time derivative of the law of capacitance, Q = CV, where the change in total charge is the sum of the ionic currents. Each current is given by
{\displaystyle I(t,V)=g(t,V)\cdot (V-V_{\mathrm {eq} })}
where g(t,V) is the conductance, or inverse resistance, which can be expanded in terms of its maximal conductance ḡ and the activation and inactivation fractions m and h, respectively, that determine how many ions can flow through available membrane channels. This expansion is given by
{\displaystyle g(t,V)={\bar {g}}\cdot m(t,V)^{p}\cdot h(t,V)^{q}}
and our fractions follow the first-order kinetics
{\displaystyle {\frac {dm(t,V)}{dt}}={\frac {m_{\infty }(V)-m(t,V)}{\tau _{\mathrm {m} }(V)}}=\alpha _{\mathrm {m} }(V)\cdot (1-m)-\beta _{\mathrm {m} }(V)\cdot m}
with similar dynamics for h, where we can use either τ and m∞ or α and β to define our gate fractions.
The Hodgkin–Huxley model may be extended to include additional ionic currents. Typically, these include inward Ca2+ and Na+ input currents, as well as several varieties of K+ outward currents, including a "leak" current.
The result can be on the order of 20 parameters which one must estimate or measure for an accurate model. In a model of a complex system of neurons, numerical integration of the equations is computationally expensive. Careful simplifications of the Hodgkin–Huxley model are therefore needed.
The model can be reduced to two dimensions thanks to dynamic relations that can be established between the gating variables. It is also possible to extend it to take into account the evolution of ion concentrations (considered fixed in the original model).
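As a concrete illustration of the first-order gating kinetics above, the sketch below integrates dm/dt = (m∞ − m)/τ with the forward Euler method at a clamped voltage. The values of m∞ and τ are illustrative placeholders, not fitted Hodgkin–Huxley rate data.

```python
import math

def relax_gate(m0, m_inf, tau, dt=0.01, t_end=5.0):
    """Forward-Euler integration of dm/dt = (m_inf - m) / tau at a
    clamped voltage, where m_inf and tau are held constant."""
    m = m0
    for _ in range(int(t_end / dt)):
        m += dt * (m_inf - m) / tau
    return m

# Exponential relaxation toward the steady-state value m_inf:
m_num = relax_gate(m0=0.0, m_inf=0.8, tau=1.0)
m_exact = 0.8 * (1.0 - math.exp(-5.0))  # closed-form solution at t = 5*tau
```

After five time constants the gate has essentially reached its steady state, matching the closed-form exponential solution.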
=== Perfect Integrate-and-fire ===
One of the earliest models of a neuron is the perfect integrate-and-fire model (also called non-leaky integrate-and-fire), first investigated in 1907 by Louis Lapicque. A neuron is represented by its membrane voltage V which evolves in time during stimulation with an input current I(t) according to
{\displaystyle I(t)=C{\frac {dV(t)}{dt}}}
which is just the time derivative of the law of capacitance, Q = CV. When an input current is applied, the membrane voltage increases with time until it reaches a constant threshold Vth, at which point a delta function spike occurs and the voltage is reset to its resting potential, after which the model continues to run. The firing frequency of the model thus increases linearly without bound as input current increases.
The model can be made more accurate by introducing a refractory period tref that limits the firing frequency of a neuron by preventing it from firing during that period. For constant input I(t)=I the threshold voltage is reached after an integration time tint=CVth/I after starting from zero. After a reset, the refractory period introduces a dead time so that the total time until the next firing is tref+tint. The firing frequency is the inverse of the total inter-spike interval (including dead time). The firing frequency as a function of a constant input current is therefore
{\displaystyle f(I)={\frac {I}{CV_{\mathrm {th} }+t_{\mathrm {ref} }I}}.}
A shortcoming of this model is that it describes neither adaptation nor leakage. If the model receives a below-threshold short current pulse at some time, it will retain that voltage boost forever - until another input later makes it fire. This characteristic is not in line with observed neuronal behavior. The following extensions make the integrate-and-fire model more plausible from a biological point of view.
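The firing-rate formula above can be checked against a direct simulation. The sketch below uses illustrative parameter values (C, Vth and tref are not taken from any particular neuron):

```python
def perfect_if_rate(I, C=1.0, V_th=1.0, t_ref=0.002, dt=1e-4, t_sim=10.0):
    """Simulate a non-leaky integrate-and-fire neuron with an absolute
    refractory period and return the firing rate (spikes per second)."""
    V, spikes, t = 0.0, 0, 0.0
    while t < t_sim:
        V += dt * I / C              # perfect integration: dV/dt = I/C
        if V >= V_th:
            V = 0.0                  # reset after the delta-function spike
            spikes += 1
            t += t_ref               # dead time during the refractory period
        t += dt
    return spikes / t_sim

I = 10.0
f_sim = perfect_if_rate(I)
f_theory = I / (1.0 * 1.0 + 0.002 * I)   # f(I) = I / (C*V_th + t_ref*I)
```

The simulated rate agrees with the closed-form expression; without the refractory dead time, the rate would grow linearly in I without bound, as stated above.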
=== Leaky integrate-and-fire ===
The leaky integrate-and-fire model, which can be traced back to Louis Lapicque, contains a "leak" term in the membrane potential equation that reflects the diffusion of ions through the membrane, unlike the non-leaky integrate-and-fire model. The model equation looks like
{\displaystyle C_{\mathrm {m} }{\frac {dV_{\mathrm {m} }(t)}{dt}}=I(t)-{\frac {V_{\mathrm {m} }(t)}{R_{\mathrm {m} }}}}
where Vm is the voltage across the cell membrane and Rm is the membrane resistance. (The non-leaky integrate-and-fire model is retrieved in the limit Rm to infinity, i.e. if the membrane is a perfect insulator). The model equation is valid for arbitrary time-dependent input until a threshold Vth is reached; thereafter the membrane potential is reset.
For constant input, the minimum input to reach the threshold is Ith = Vth / Rm. Assuming a reset to zero, the firing frequency thus looks like
{\displaystyle f(I)={\begin{cases}0,&I\leq I_{\mathrm {th} }\\\left[t_{\mathrm {ref} }-R_{\mathrm {m} }C_{\mathrm {m} }\log \left(1-{\tfrac {V_{\mathrm {th} }}{IR_{\mathrm {m} }}}\right)\right]^{-1},&I>I_{\mathrm {th} }\end{cases}}}
which converges for large input currents to the previous leak-free model with the refractory period. The model can also be used for inhibitory neurons.
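To make the frequency-current relation concrete, the following sketch compares the closed-form rate with a forward-Euler simulation of the leaky integrate-and-fire equation; all parameter values are illustrative.

```python
import math

R_m, C_m, V_th, t_ref = 1.0, 1.0, 0.8, 0.01   # illustrative parameters

def lif_rate_theory(I):
    """Closed-form firing rate for constant input (reset to zero)."""
    if I <= V_th / R_m:               # below rheobase: silent
        return 0.0
    tau_m = R_m * C_m
    return 1.0 / (t_ref - tau_m * math.log(1.0 - V_th / (I * R_m)))

def lif_rate_sim(I, dt=1e-4, t_sim=20.0):
    """Forward-Euler simulation of C_m dV/dt = I - V/R_m with threshold."""
    V, spikes, t = 0.0, 0, 0.0
    while t < t_sim:
        V += dt * (I - V / R_m) / C_m
        if V >= V_th:
            V = 0.0
            spikes += 1
            t += t_ref
        t += dt
    return spikes / t_sim

f_th, f_sim = lif_rate_theory(2.0), lif_rate_sim(2.0)
```

Below the rheobase current Ith the neuron stays silent; above it, the simulated rate matches the logarithmic formula.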
The most significant disadvantage of this model is that it does not contain neuronal adaptation, so that it cannot describe an experimentally measured spike train in response to constant input current. This disadvantage is removed in generalized integrate-and-fire models that also contain one or several adaptation-variables and are able to predict spike times of cortical neurons under current injection to a high degree of accuracy.
=== Adaptive integrate-and-fire ===
Neuronal adaptation refers to the fact that even in the presence of a constant current injection into the soma, the intervals between output spikes increase. An adaptive integrate-and-fire neuron model combines the leaky integration of voltage V with one or several adaptation variables wk (see Chapter 6.1. in the textbook Neuronal Dynamics)
{\displaystyle \tau _{\mathrm {m} }{\frac {dV_{\mathrm {m} }(t)}{dt}}=RI(t)-[V_{\mathrm {m} }(t)-E_{\mathrm {m} }]-R\sum _{k}w_{k}}
{\displaystyle \tau _{k}{\frac {dw_{k}(t)}{dt}}=-a_{k}[V_{\mathrm {m} }(t)-E_{\mathrm {m} }]-w_{k}+b_{k}\tau _{k}\sum _{f}\delta (t-t^{f})}
where τm is the membrane time constant, wk is the adaptation current with index k, τk is the time constant of adaptation current wk, Em is the resting potential, tf is the firing time of the neuron, and δ denotes the Dirac delta function. Whenever the voltage reaches the firing threshold, the voltage is reset to a value Vr below the firing threshold. The reset value is one of the important parameters of the model. The simplest model of adaptation has only a single adaptation variable w, and the sum over k is removed.
Integrate-and-fire neurons with one or several adaptation variables can account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting, and initial bursting. Moreover, adaptive integrate-and-fire neurons with several adaptation variables are able to predict spike times of cortical neurons under time-dependent current injection into the soma.
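A minimal sketch of a single-adaptation-variable integrate-and-fire neuron follows (parameter values illustrative; the subthreshold coupling a is set to zero here, so only the spike-triggered jump b drives adaptation and the sign convention of the coupling term is moot):

```python
def adaptive_lif_isis(I=2.0, tau_m=0.01, R=1.0, E_m=0.0, V_th=1.0, V_r=0.0,
                      a=0.0, b=0.1, tau_w=0.1, dt=1e-5, t_sim=0.5):
    """Leaky integrate-and-fire with one adaptation current w.
    At each spike, w jumps by b; between spikes it relaxes with tau_w."""
    V, w, t, spike_times = E_m, 0.0, 0.0, []
    while t < t_sim:
        V += dt * (R * I - (V - E_m) - R * w) / tau_m
        w += dt * (a * (V - E_m) - w) / tau_w   # a = 0: purely spike-triggered
        if V >= V_th:
            V = V_r
            w += b                 # spike-triggered adaptation jump
            spike_times.append(t)
        t += dt
    return [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]

isis = adaptive_lif_isis()
# interspike intervals lengthen over time: spike-frequency adaptation
```

Because the adaptation current accumulates with each spike and opposes the drive, the interspike intervals grow toward a steady-state value, reproducing the adaptation described above.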
=== Fractional-order leaky integrate-and-fire ===
Recent advances in computational and theoretical fractional calculus have led to a new form of model called the fractional-order leaky integrate-and-fire. An advantage of this model is that it can capture adaptation effects with a single variable. The model has the following form
{\displaystyle I(t)-{\frac {V_{\mathrm {m} }(t)}{R_{\mathrm {m} }}}=C_{\mathrm {m} }{\frac {d^{\alpha }V_{\mathrm {m} }(t)}{d^{\alpha }t}}}
Once the voltage hits the threshold it is reset. Fractional integration has been used to account for neuronal adaptation in experimental data.
=== 'Exponential integrate-and-fire' and 'adaptive exponential integrate-and-fire' ===
In the exponential integrate-and-fire model, spike generation is exponential, following the equation:
{\displaystyle {\frac {dV}{dt}}-{\frac {R}{\tau _{m}}}I(t)={\frac {1}{\tau _{m}}}\left[E_{m}-V+\Delta _{T}\exp \left({\frac {V-V_{T}}{\Delta _{T}}}\right)\right].}
where V is the membrane potential, VT is the intrinsic membrane potential threshold, τm is the membrane time constant, Em is the resting potential, and ΔT is the sharpness of action potential initiation, usually around 1 mV for cortical pyramidal neurons. Once the membrane potential crosses VT, it diverges to infinity in finite time. In numerical simulations the integration is stopped if the membrane potential hits an arbitrary threshold (much larger than VT) at which the membrane potential is reset to a value Vr. The voltage reset value Vr is one of the important parameters of the model. Importantly, the right-hand side of the above equation contains a nonlinearity that can be directly extracted from experimental data. In this sense the exponential nonlinearity is strongly supported by experimental evidence.
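A sketch of the exponential integrate-and-fire mechanism (illustrative parameters; the numerical cutoff V_cut stands in for the arbitrary threshold much larger than VT at which integration is stopped):

```python
import math

def eif_spike_times(I, dt=1e-6, t_sim=0.05, tau_m=0.01, R=1.0, E_m=0.0,
                    V_T=1.0, Delta_T=0.1, V_r=0.0, V_cut=2.0):
    """Exponential integrate-and-fire: once V passes V_T the exponential
    term takes over and V escapes toward infinity; integration is stopped
    at V_cut and V is reset to V_r."""
    V, t, spikes = E_m, 0.0, []
    while t < t_sim:
        dV = E_m - V + Delta_T * math.exp((V - V_T) / Delta_T) + R * I
        V += dt * dV / tau_m
        if V >= V_cut:
            V = V_r
            spikes.append(t)
        t += dt
    return spikes

# for these parameters the rheobase is roughly R*I > V_T - Delta_T
quiet = eif_spike_times(0.5)    # subthreshold input: no spikes
firing = eif_spike_times(1.5)   # suprathreshold input: repetitive firing
```

For subthreshold input the voltage settles at a stable fixed point; above rheobase the fixed points vanish and the exponential term drives repeated escapes to the cutoff.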
In the adaptive exponential integrate-and-fire neuron the above exponential nonlinearity of the voltage equation is combined with an adaptation variable w
{\displaystyle \tau _{m}{\frac {dV}{dt}}=RI(t)+\left[E_{m}-V+\Delta _{T}\exp \left({\frac {V-V_{T}}{\Delta _{T}}}\right)\right]-Rw}
{\displaystyle \tau {\frac {dw(t)}{dt}}=-a[V_{\mathrm {m} }(t)-E_{\mathrm {m} }]-w+b\tau \delta (t-t^{f})}
where w denotes the adaptation current with time scale τ. Important model parameters are the voltage reset value Vr, the intrinsic threshold VT, the time constants τ and τm, as well as the coupling parameters a and b. The adaptive exponential integrate-and-fire model inherits the experimentally derived voltage nonlinearity of the exponential integrate-and-fire model. But going beyond this model, it can also account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting, and initial bursting. However, since the adaptation is in the form of a current, aberrant hyperpolarization may appear. This problem was solved by expressing it as a conductance.
=== Adaptive Threshold Neuron Model ===
In this model, a time-dependent function θ(t) is added to the fixed threshold vth0 after every spike, causing an adaptation of the threshold. The threshold potential vth gradually returns to its steady-state value depending on the threshold adaptation time constant τθ. This is one of the simpler techniques to achieve spike frequency adaptation. The expression for the adaptive threshold is given by:
{\displaystyle v_{th}(t)=v_{th0}+\sum _{f}\theta (t-t_{f})=v_{th0}+\sum _{f}\theta _{0}\exp \left[-{\frac {(t-t_{f})}{\tau _{\theta }}}\right]}
where θ(t) is defined by:
{\displaystyle \theta (t)=\theta _{0}\exp \left[-{\frac {t}{\tau _{\theta }}}\right]}
When the membrane potential u(t) reaches the threshold, it is reset to vrest:
{\displaystyle u(t)\geq v_{th}(t)\Rightarrow v(t)=v_{\text{rest}}}
A simpler version of this, with a single time constant in the threshold decay combined with an LIF neuron, has been realized to build LSTM-like recurrent spiking neural networks that reach accuracy closer to ANNs on a few spatiotemporal tasks.
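A sketch of the adaptive-threshold mechanism with a leaky integrate-and-fire voltage (illustrative parameters, R = 1): each spike adds θ0 to the threshold, and the added contribution decays with τθ.

```python
import math

def adaptive_threshold_isis(I=1.5, tau_m=0.01, v_th0=1.0, theta_0=0.5,
                            tau_theta=0.05, v_rest=0.0, dt=1e-5, t_sim=0.5):
    """LIF with spike-triggered threshold adaptation: the effective
    threshold is v_th0 plus a sum of decaying exponentials theta(t)."""
    u, theta, t, spikes = v_rest, 0.0, 0.0, []
    while t < t_sim:
        u += dt * (I - u) / tau_m                 # leaky integration (R = 1)
        theta *= math.exp(-dt / tau_theta)        # past contributions decay
        if u >= v_th0 + theta:
            spikes.append(t)
            u = v_rest
            theta += theta_0                      # threshold jumps by theta_0
        t += dt
    return [b - a for a, b in zip(spikes, spikes[1:])]

isis = adaptive_threshold_isis()
# the threshold rises after each spike, so interspike intervals lengthen
```

Raising the threshold instead of injecting an adaptation current produces the same qualitative spike-frequency adaptation as the adaptive integrate-and-fire model above.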
=== Double Exponential Adaptive Threshold (DEXAT) ===
The DEXAT neuron model is a flavor of the adaptive neuron model in which the threshold voltage decays with a double exponential having two time constants. Double exponential decay is governed by a fast initial decay followed by a slower decay over a longer period. Used in SNNs trained through surrogate gradients, this neuron creates an adaptive learning rate yielding higher accuracy and faster convergence, as well as flexible long short-term memory, compared to existing counterparts in the literature. The membrane potential dynamics are described through equations, and the threshold adaptation rule is:
{\displaystyle v_{th}(t)=b_{0}+\beta _{1}b_{1}(t)+\beta _{2}b_{2}(t)}
The dynamics of b1(t) and b2(t) are given by
{\displaystyle b_{1}(t+\delta t)=p_{j1}b_{1}(t)+(1-p_{j1})z(t)\delta (t)},
{\displaystyle b_{2}(t+\delta t)=p_{j2}b_{2}(t)+(1-p_{j2})z(t)\delta (t)},
where pj1 = exp(−δt/τb1) and pj2 = exp(−δt/τb2).
Furthermore, a multi-time-scale adaptive threshold neuron model showing more complex dynamics has also been demonstrated.
== Stochastic models of membrane voltage and spike timing ==
The models in this category are generalized integrate-and-fire models that include a certain level of stochasticity. Cortical neurons in experiments are found to respond reliably to time-dependent input, albeit with a small degree of variations between one trial and the next if the same stimulus is repeated. Stochasticity in neurons has two important sources. First, even in a very controlled experiment where input current is injected directly into the soma, ion channels open and close stochastically and this channel noise leads to a small amount of variability in the exact value of the membrane potential and the exact timing of output spikes. Second, for a neuron embedded in a cortical network, it is hard to control the exact input because most inputs come from unobserved neurons somewhere else in the brain.
Stochasticity has been introduced into spiking neuron models in two fundamentally different forms: either (i) a noisy input current is added to the differential equation of the neuron model; or (ii) the process of spike generation is noisy. In both cases, the mathematical theory can be developed for continuous time, which is then, if desired for the use in computer simulations, transformed into a discrete-time model.
The relation of noise in neuron models to the variability of spike trains and neural codes is discussed in Neural Coding and in Chapter 7 of the textbook Neuronal Dynamics.
=== Noisy input model (diffusive noise) ===
A neuron embedded in a network receives spike input from other neurons. Since the spike arrival times are not controlled by an experimentalist, they can be considered stochastic. Thus a (potentially nonlinear) integrate-and-fire model with nonlinearity f(v) receives two inputs: an input I(t) controlled by the experimentalists and a noisy input current Inoise(t) that describes the uncontrolled background input.
{\displaystyle \tau _{m}{\frac {dV}{dt}}=f(V)+RI(t)+RI^{\text{noise}}(t)}
Stein's model is the special case of a leaky integrate-and-fire neuron and a stationary white noise current Inoise(t) = ξ(t) with mean zero and unit variance. In the subthreshold regime, these assumptions yield the equation of the Ornstein–Uhlenbeck process
{\displaystyle \tau _{m}{\frac {dV}{dt}}=[E_{m}-V]+RI(t)+R\xi (t)}
However, in contrast to the standard Ornstein–Uhlenbeck process, the membrane voltage is reset whenever V hits the firing threshold Vth . Calculating the interval distribution of the Ornstein–Uhlenbeck model for constant input with threshold leads to a first-passage time problem. Stein's neuron model and variants thereof have been used to fit interspike interval distributions of spike trains from real neurons under constant input current.
In the mathematical literature, the above equation of the Ornstein–Uhlenbeck process is written in the form
{\displaystyle dV=[E_{m}-V+RI(t)]{\frac {dt}{\tau _{m}}}+\sigma \,dW}
where σ is the amplitude of the noise input and dW are increments of a Wiener process. For discrete-time implementations with time step Δt the voltage updates are
{\displaystyle \Delta V=[E_{m}-V+RI(t)]{\frac {\Delta t}{\tau _{m}}}+\sigma {\sqrt {\tau _{m}}}y}
where y is drawn from a Gaussian distribution with zero mean and unit variance. The voltage is reset when it hits the firing threshold Vth.
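Discretizing the stochastic differential equation above with the Euler–Maruyama method gives a simple subthreshold simulation (no firing threshold is applied here, and the parameter values are illustrative). For this Ornstein–Uhlenbeck process the stationary mean is Em + RI and the stationary variance is σ²τm/2, which the simulation can be checked against.

```python
import math, random

def ou_trace(I=1.0, R=1.0, E_m=0.0, tau_m=0.01, sigma=0.2,
             dt=1e-4, n_steps=200_000, seed=1):
    """Euler–Maruyama steps of dV = [E_m - V + R*I] dt/tau_m + sigma dW;
    each Wiener increment is sqrt(dt) times a standard Gaussian."""
    rng = random.Random(seed)
    V, trace = E_m, []
    for _ in range(n_steps):
        V += (E_m - V + R * I) * dt / tau_m
        V += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        trace.append(V)
    return trace

trace = ou_trace()[20_000:]              # discard the initial transient
mean = sum(trace) / len(trace)
var = sum((v - mean) ** 2 for v in trace) / len(trace)
# mean ≈ E_m + R*I = 1.0; var ≈ sigma**2 * tau_m / 2 = 2e-4
```

The sample statistics of the trace approach the stationary mean and variance of the Ornstein–Uhlenbeck process, illustrating the diffusion of the membrane potential around the noise-free trajectory.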
The noisy input model can also be used in generalized integrate-and-fire models. For example, the exponential integrate-and-fire model with noisy input reads
{\displaystyle \tau _{m}{\frac {dV}{dt}}=E_{m}-V+\Delta _{T}\exp \left({\frac {V-V_{T}}{\Delta _{T}}}\right)+RI(t)+R\xi (t)}
For constant deterministic input I(t) = I0 it is possible to calculate the mean firing rate as a function of I0. This is important because the frequency-current relation (f-I curve) is often used by experimentalists to characterize a neuron.
The leaky integrate-and-fire with noisy input has been widely used in the analysis of networks of spiking neurons. Noisy input is also called 'diffusive noise' because it leads to a diffusion of the subthreshold membrane potential around the noise-free trajectory (Johannesma). The theory of spiking neurons with noisy input is reviewed in Chapter 8.2 of the textbook Neuronal Dynamics.
=== Noisy output model (escape noise) ===
In deterministic integrate-and-fire models, a spike is generated if the membrane potential V(t) hits the threshold Vth. In noisy output models, the strict threshold is replaced by a noisy one as follows. At each moment in time t, a spike is generated stochastically with instantaneous stochastic intensity or 'escape rate'
{\displaystyle \rho (t)=f(V(t)-V_{th})}
that depends on the momentary difference between the membrane voltage V(t) and the threshold Vth. A common choice for the 'escape rate' f (that is consistent with biological data) is
{\displaystyle f(V-V_{th})={\frac {1}{\tau _{0}}}\exp[\beta (V-V_{th})]}
where τ0 is a time constant that describes how quickly a spike is fired once the membrane potential reaches the threshold and β is a sharpness parameter. For β → ∞ the threshold becomes sharp and spike firing occurs deterministically at the moment when the membrane potential hits the threshold from below. The sharpness value found in experiments is 1/β ≈ 4 mV, which means that neuronal firing becomes non-negligible as soon as the membrane potential is a few mV below the formal firing threshold.
The escape rate process via a soft threshold is reviewed in Chapter 9 of the textbook Neuronal Dynamics.
For models in discrete time, a spike is generated with probability
{\displaystyle P_{F}(t_{n})=F[V(t_{n})-V_{th}]}
that depends on the momentary difference between the membrane voltage V at time tn and the threshold Vth. The function F is often taken as a standard sigmoidal
{\displaystyle F(x)=0.5[1+\tanh(\gamma x)]}
with steepness parameter γ, similar to the update dynamics in artificial neural networks. But the functional form of F can also be derived from the stochastic intensity f in continuous time introduced above as
{\displaystyle F(y_{n})\approx 1-\exp[-f(y_{n})\,\Delta t]}
where yn = V(tn) − Vth is the threshold distance.
Integrate-and-fire models with output noise can be used to predict the peristimulus time histogram (PSTH) of real neurons under arbitrary time-dependent input. For non-adaptive integrate-and-fire neurons, the interval distribution under constant stimulation can be calculated from stationary renewal theory.
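Under the standard assumption that spikes in a small bin of width Δt occur like events of a Poisson process with rate f, the bin-wise spike probability is F = 1 − exp(−f Δt). The sketch below combines this with the exponential escape rate, using an illustrative τ0 and the experimental sharpness 1/β ≈ 4 mV:

```python
import math, random

def escape_rate(v_minus_th, tau_0=0.01, beta=0.25):
    """Exponential escape rate f = exp[beta*(V - V_th)] / tau_0
    (voltages in mV; 1/beta = 4 mV)."""
    return math.exp(beta * v_minus_th) / tau_0

def spike_prob(v_minus_th, dt=0.001):
    """Probability of a spike in a time bin of width dt."""
    return 1.0 - math.exp(-escape_rate(v_minus_th) * dt)

rng = random.Random(0)
p = spike_prob(-2.0)                 # membrane potential 2 mV below threshold
n = 100_000
hits = sum(1 for _ in range(n) if rng.random() < p)
# even below the formal threshold, the firing probability is non-negligible
```

Drawing Bernoulli samples with probability p per bin is exactly the discrete-time escape-noise spike generation described above; the empirical spike frequency converges to p.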
=== Spike response model (SRM) ===
Main article: Spike response model
The spike response model (SRM) is a generalized linear model for the subthreshold membrane voltage combined with a nonlinear output noise process for spike generation. The membrane voltage V(t) at time t is
{\displaystyle V(t)=\sum _{f}\eta (t-t^{f})+\int \limits _{0}^{\infty }\kappa (s)I(t-s)\,ds+V_{\mathrm {rest} }}
where tf is the firing time of spike number f of the neuron, Vrest is the resting voltage in the absence of input, I(t−s) is the input current at time t−s, and κ(s) is a linear filter (also called a kernel) that describes the contribution of an input current pulse at time t−s to the voltage at time t. The contributions to the voltage caused by a spike at time tf are described by the refractory kernel η(t−tf). In particular, η(t−tf) describes the reset after the spike and the time course of the spike-afterpotential following a spike. It therefore expresses the consequences of refractoriness and adaptation. The voltage V(t) can be interpreted as the result of an integration of the differential equation of a leaky integrate-and-fire model coupled to an arbitrary number of spike-triggered adaptation variables.
Spike firing is stochastic and happens with a time-dependent stochastic intensity (instantaneous rate)
{\displaystyle f(V-\vartheta (t))={\frac {1}{\tau _{0}}}\exp[\beta (V-\vartheta (t))]}
with parameters τ0 and β and a dynamic threshold ϑ(t) given by
{\displaystyle \vartheta (t)=\vartheta _{0}+\sum _{f}\theta _{1}(t-t^{f})}
Here ϑ0 is the firing threshold of an inactive neuron and θ1(t−tf) describes the increase of the threshold after a spike at time tf. In case of a fixed threshold, one sets θ1(t−tf) = 0. For β → ∞ the threshold process is deterministic.
The time course of the filters η, κ, and θ1 that characterize the spike response model can be directly extracted from experimental data. With optimized parameters the SRM describes the time course of the subthreshold membrane voltage for time-dependent input with a precision of 2 mV and can predict the timing of most output spikes with a precision of 4 ms. The SRM is closely related to linear-nonlinear-Poisson cascade models (also called Generalized Linear Models). The estimation of parameters of probabilistic neuron models such as the SRM using methods developed for Generalized Linear Models is discussed in Chapter 10 of the textbook Neuronal Dynamics.
The name spike response model arises because, in a network, the input current for neuron i is generated by the spikes of other neurons so that in the case of a network the voltage equation becomes
{\displaystyle V_{i}(t)=\sum _{f}\eta _{i}(t-t_{i}^{f})+\sum _{j=1}^{N}w_{ij}\sum _{f'}\varepsilon _{ij}(t-t_{j}^{f'})+V_{\mathrm {rest} }}
where tjf′ are the firing times of neuron j (i.e., its spike train); ηi(t−tif) describes the time course of the spike and the spike after-potential for neuron i; and wij and εij(t−tjf′) describe the amplitude and time course of an excitatory or inhibitory postsynaptic potential (PSP) caused by the spike tjf′ of the presynaptic neuron j. The time course εij(s) of the PSP results from the convolution of the postsynaptic current I(t) caused by the arrival of a presynaptic spike from neuron j with the membrane filter κ(s).
=== SRM0 ===
The SRM0 is a stochastic neuron model related to time-dependent nonlinear renewal theory and a simplification of the Spike Response Model (SRM). The main difference from the voltage equation of the SRM introduced above is that in the term containing the refractory kernel η(s) there is no summation sign over past spikes: only the most recent spike (denoted as the time t̂) matters. Another difference is that the threshold is constant. The model SRM0 can be formulated in discrete or continuous time. For example, in continuous time, the single-neuron equation is
{\displaystyle V(t)=\eta (t-{\hat {t}})+\int _{0}^{\infty }\kappa (s)I(t-s)\,ds+V_{\mathrm {rest} }}
and the network equations of the SRM0 are
{\displaystyle V_{i}(t\mid {\hat {t}}_{i})=\eta _{i}(t-{\hat {t}}_{i})+\sum _{j}w_{ij}\sum _{f}\varepsilon _{ij}(t-{\hat {t}}_{i},t-t^{f})+V_{\mathrm {rest} }}
where t̂i is the last firing time of neuron i. Note that the time course of the postsynaptic potential εij is also allowed to depend on the time since the last spike of neuron i, to describe a change in membrane conductance during refractoriness. The instantaneous firing rate (stochastic intensity) is
{\displaystyle f(V-\vartheta )={\frac {1}{\tau _{0}}}\exp[\beta (V-V_{th})]}
where Vth is a fixed firing threshold. Thus spike firing of neuron i depends only on its input and the time since neuron i fired its last spike.
With the SRM0, the interspike-interval distribution for constant input can be mathematically linked to the shape of the refractory kernel
{\displaystyle \eta }
. Moreover, the stationary frequency-current relation can be calculated from the escape rate in combination with the refractory kernel
{\displaystyle \eta }
. With an appropriate choice of the kernels, the SRM0 approximates the dynamics of the Hodgkin-Huxley model to a high degree of accuracy. Moreover, the PSTH response to arbitrary time-dependent input can be predicted.
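The SRM0 dynamics with exponential escape noise can be sketched in a few lines of Python. All numerical values below (kernel amplitude, time constants, input strength) are illustrative assumptions rather than values from the literature, and for constant input the convolution with the membrane filter is replaced by its steady-state value I·R:

```python
import math
import random

# Minimal sketch of the SRM0 with exponential escape noise. The refractory
# kernel eta and all parameters are illustrative assumptions.
def eta(s, eta0=-5.0, tau_ref=10.0):           # refractory kernel (mV, ms)
    return eta0 * math.exp(-s / tau_ref)

def simulate_srm0(I=16.0, R=1.0, T=200.0, dt=0.1,
                  V_rest=-65.0, V_th=-50.0, beta=1.0, tau0=1.0, seed=0):
    rng = random.Random(seed)
    t_hat = -1e9                               # time of the most recent spike
    spikes = []
    for k in range(int(T / dt)):
        t = k * dt
        V = eta(t - t_hat) + I * R + V_rest    # SRM0 voltage equation
        rate = (1.0 / tau0) * math.exp(beta * (V - V_th))  # escape rate
        if rng.random() < 1.0 - math.exp(-rate * dt):      # spike in [t, t+dt)?
            spikes.append(t)
            t_hat = t
    return spikes

spikes = simulate_srm0()
```

Because only the most recent spike enters through eta, refractoriness is reset at each firing, which is exactly the SRM0 simplification described above.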
=== Galves–Löcherbach model ===
The Galves–Löcherbach model is a stochastic neuron model closely related to the spike response model SRM0 and the leaky integrate-and-fire model. It is inherently stochastic and, just like the SRM0, it is linked to time-dependent nonlinear renewal theory. Given the model specifications, the probability that a given neuron
{\displaystyle i}
spikes in a period
{\displaystyle t}
may be described by
{\displaystyle \mathop {\mathrm {Prob} } (X_{t}(i)=1\mid {\mathcal {F}}_{t-1})=\varphi _{i}{\Biggl (}\sum _{j\in I}W_{j\rightarrow i}\sum _{s=L_{t}^{i}}^{t-1}g_{j}(t-s)X_{s}(j),~~~t-L_{t}^{i}{\Biggr )},}
where
{\displaystyle W_{j\rightarrow i}}
is a synaptic weight, describing the influence of neuron
{\displaystyle j}
on neuron
{\displaystyle i}
,
{\displaystyle g_{j}}
expresses the leak, and
{\displaystyle L_{t}^{i}}
provides the spiking history of neuron
{\displaystyle i}
before
{\displaystyle t}
, according to
{\displaystyle L_{t}^{i}=\sup\{s<t:X_{s}(i)=1\}.}
Importantly, the spike probability of neuron
{\displaystyle i}
depends only on its spike input (filtered with a kernel
{\displaystyle g_{j}}
and weighted with a factor
{\displaystyle W_{j\to i}}
) and the timing of its most recent output spike (summarized by
{\displaystyle t-L_{t}^{i}}
).
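A discrete-time sketch of this spiking rule can be written directly from the probability above. The sigmoidal choice of the function φ, the exponential leak kernel g, and the weight values are illustrative assumptions:

```python
import math
import random

# Discrete-time sketch of the Galves-Löcherbach model for a tiny network.
# phi (sigmoid with a negative bias) and the leak kernel are assumptions.
def phi(u, bias=-2.0):
    return 1.0 / (1.0 + math.exp(-(u + bias)))   # spike probability

def simulate_gl(W, T=100, leak=0.8, seed=1):
    n = len(W)
    rng = random.Random(seed)
    X = [[0] * n]                # spike history X_t(i), t = 0 .. T-1
    last = [0] * n               # L_t^i: most recent spike time of neuron i
    for t in range(1, T):
        row = []
        for i in range(n):
            # filtered input from all neurons since neuron i's last spike
            u = sum(W[j][i] * leak ** (t - s)
                    for j in range(n)
                    for s in range(last[i], t) if X[s][j])
            row.append(1 if rng.random() < phi(u) else 0)
        X.append(row)
        for i, sp in enumerate(row):
            if sp:
                last[i] = t
    return X

W = [[0.0, 1.5], [1.5, 0.0]]     # two mutually exciting neurons
history = simulate_gl(W)
```

Note how the inner sum runs only from the neuron's own last spike time, implementing the memory reset that distinguishes this model from a plain linear filter.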
== Didactic toy models of membrane voltage ==
The models in this category are highly simplified toy models that qualitatively describe the membrane voltage as a function of input. They are mainly used for didactic reasons in teaching but are not considered valid neuron models for large-scale simulations or data fitting.
=== FitzHugh–Nagumo ===
Sweeping simplifications to Hodgkin–Huxley were introduced by FitzHugh and Nagumo in 1961 and 1962. Seeking to describe "regenerative self-excitation" by a nonlinear positive-feedback membrane voltage and recovery by a linear negative-feedback gate voltage, they developed the model described by
{\displaystyle {\begin{aligned}{\dfrac {dV}{dt}}&=V-V^{3}/3-w+I_{\mathrm {ext} }\\\tau {\dfrac {dw}{dt}}&=V-a-bw\end{aligned}}}
where we again have a membrane-like voltage and input current with a slower general gate voltage w and experimentally-determined parameters a = -0.7, b = 0.8, τ = 1/0.08. Although not derivable from biology, the model allows for a simplified, immediately available dynamic, without being a trivial simplification. The experimental support is weak, but the model is useful as a didactic tool to introduce dynamics of spike generation through phase plane analysis. See Chapter 7 in the textbook Methods of Neuronal Modeling.
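The two FitzHugh–Nagumo equations are easy to integrate numerically, which is one reason the model is popular in teaching. A forward-Euler sketch with the parameter values quoted above (a = −0.7, b = 0.8, τ = 1/0.08 = 12.5); the input amplitude and initial conditions are illustrative assumptions:

```python
# Forward-Euler integration of the FitzHugh-Nagumo equations.
def fitzhugh_nagumo(I_ext=0.5, T=100.0, dt=0.01, a=-0.7, b=0.8, tau=12.5):
    V, w = -1.0, -0.5                # arbitrary initial conditions
    trace = []
    for _ in range(int(T / dt)):
        dV = V - V ** 3 / 3 - w + I_ext      # fast voltage variable
        dw = (V - a - b * w) / tau           # slow recovery variable
        V += dt * dV
        w += dt * dw
        trace.append(V)
    return trace

trace = fitzhugh_nagumo()
```

Plotting `trace` (or the (V, w) pair) exhibits the relaxation oscillations that phase-plane analysis of this model is used to explain.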
=== Morris–Lecar ===
In 1981, Morris and Lecar combined the Hodgkin–Huxley and FitzHugh–Nagumo models into a voltage-gated calcium channel model with a delayed-rectifier potassium channel represented by
{\displaystyle {\begin{aligned}C{\frac {dV}{dt}}&=-I_{\mathrm {ion} }(V,w)+I\\{\frac {dw}{dt}}&=\varphi \cdot {\frac {w_{\infty }-w}{\tau _{w}}}\end{aligned}}}
where
{\displaystyle I_{\mathrm {ion} }(V,w)={\bar {g}}_{\mathrm {Ca} }m_{\infty }\cdot (V-V_{\mathrm {Ca} })+{\bar {g}}_{\mathrm {K} }w\cdot (V-V_{\mathrm {K} })+{\bar {g}}_{\mathrm {L} }\cdot (V-V_{\mathrm {L} })}
. The experimental support of the model is weak, but the model is useful as a didactic tool to introduce dynamics of spike generation through phase plane analysis. See Chapter 7 in the textbook Methods of Neuronal Modeling.
A two-dimensional neuron model very similar to the Morris-Lecar model can be derived step-by-step starting from the Hodgkin-Huxley model. See Chapter 4.2 in the textbook Neuronal Dynamics.
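The Morris–Lecar equations can likewise be integrated with forward Euler. The steady-state functions m∞, w∞ and the time scale τw below use the commonly quoted tanh/cosh forms, and every numerical parameter is an illustrative assumption rather than a fit to data:

```python
import math

# Forward-Euler sketch of the Morris-Lecar model (illustrative parameters).
def morris_lecar(I=90.0, T=500.0, dt=0.05):
    C, gCa, gK, gL = 20.0, 4.4, 8.0, 2.0        # capacitance, conductances
    VCa, VK, VL = 120.0, -84.0, -60.0           # reversal potentials (mV)
    V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
    V, w = -60.0, 0.0
    Vs = []
    for _ in range(int(T / dt)):
        m_inf = 0.5 * (1 + math.tanh((V - V1) / V2))    # fast Ca activation
        w_inf = 0.5 * (1 + math.tanh((V - V3) / V4))    # K activation target
        tau_w = 1.0 / math.cosh((V - V3) / (2 * V4))
        I_ion = gCa * m_inf * (V - VCa) + gK * w * (V - VK) + gL * (V - VL)
        V += dt * (-I_ion + I) / C
        w += dt * phi * (w_inf - w) / tau_w
        Vs.append(V)
    return Vs

Vs = morris_lecar()
```

Because the calcium activation is taken at its instantaneous steady state m∞(V), the model stays two-dimensional, which is what makes phase-plane analysis possible.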
=== Hindmarsh–Rose ===
Building upon the FitzHugh–Nagumo model, Hindmarsh and Rose proposed in 1984 a model of neuronal activity described by three coupled first-order differential equations:
{\displaystyle {\begin{aligned}{\frac {dx}{dt}}&=y+3x^{2}-x^{3}-z+I\\{\frac {dy}{dt}}&=1-5x^{2}-y\\{\frac {dz}{dt}}&=r\cdot (4(x+{\tfrac {8}{5}})-z)\end{aligned}}}
with r² = x² + y² + z², and r ≈ 10⁻², so that the z variable changes only very slowly. This extra mathematical complexity allows a great variety of dynamic behaviors for the membrane potential, described by the x variable of the model, including chaotic dynamics. This makes the Hindmarsh–Rose neuron model very useful: while still simple, it allows a good qualitative description of the many different firing patterns of the action potential observed in experiments, in particular bursting. Nevertheless, it remains a toy model and has not been fitted to experimental data. It is widely used as a reference model for bursting dynamics.
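A short forward-Euler sketch of the three Hindmarsh–Rose equations, with the slow variable controlled by r = 0.01; the input current and initial conditions are illustrative assumptions:

```python
# Forward-Euler integration of the Hindmarsh-Rose equations from the text.
def hindmarsh_rose(I=2.0, T=1000.0, dt=0.01, r=0.01):
    x, y, z = -1.6, -10.0, 2.0       # illustrative initial conditions
    xs = []
    for _ in range(int(T / dt)):
        dx = y + 3 * x ** 2 - x ** 3 - z + I   # membrane potential
        dy = 1 - 5 * x ** 2 - y                # fast recovery variable
        dz = r * (4 * (x + 8.0 / 5.0) - z)     # slow adaptation variable
        x += dt * dx
        y += dt * dy
        z += dt * dz
        xs.append(x)
    return xs

xs = hindmarsh_rose()
```

The slow drift of z alternately enables and suppresses the fast (x, y) spiking subsystem, which is the mechanism behind the bursting patterns mentioned above.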
=== Theta model and quadratic integrate-and-fire ===
The theta model, or Ermentrout–Kopell canonical Type I model, is mathematically equivalent to the quadratic integrate-and-fire model which in turn is an approximation to the exponential integrate-and-fire model and the Hodgkin-Huxley model. It is called a canonical model because it is one of the generic models for constant input close to the bifurcation point, which means close to the transition from silent to repetitive firing.
The standard formulation of the theta model is
{\displaystyle {\frac {d\theta (t)}{dt}}=(I-I_{0})[1+\cos(\theta )]+[1-\cos(\theta )]}
The equation for the quadratic integrate-and-fire model is (see Chapter 5.3 in the textbook Neuronal Dynamics)
{\displaystyle \tau _{\mathrm {m} }{\frac {dV_{\mathrm {m} }(t)}{dt}}=(I-I_{0})R+[V_{\mathrm {m} }(t)-E_{\mathrm {m} }][V_{\mathrm {m} }(t)-V_{\mathrm {T} }]}
The equivalence of the theta model and the quadratic integrate-and-fire model is reviewed, for example, in Chapter 4.1.2.2 of Spiking Neuron Models.
For input
{\displaystyle I(t)}
that changes over time or is far away from the bifurcation point, it is preferable to work with the exponential integrate-and-fire model (if one wants to stay in the class of one-dimensional neuron models), because real neurons exhibit the nonlinearity of the exponential integrate-and-fire model.
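In a simulation, the quadratic integrate-and-fire equation must be supplemented by a numerical reset rule, because the bare equation diverges in finite time and the divergence is read as a spike. In the sketch below the cutoff and reset values, as well as the input amplitude, are illustrative assumptions:

```python
# Quadratic integrate-and-fire with numerical spike detection and reset.
def qif(I=60.0, I0=0.0, T=100.0, dt=0.01, tau_m=10.0, R=1.0,
        E_m=-65.0, V_T=-50.0, V_cut=-30.0, V_reset=-70.0):
    V = E_m
    spikes = []
    for k in range(int(T / dt)):
        dV = ((I - I0) * R + (V - E_m) * (V - V_T)) / tau_m
        V += dt * dV
        if V >= V_cut:            # divergence detected: register spike, reset
            spikes.append(k * dt)
            V = V_reset
    return spikes

spikes = qif()
```

For (I − I0)R below the rheobase value (V_T − E_m)²/4 the voltage settles at a fixed point and no spikes occur; above it, the model fires repetitively, matching the saddle-node (Type I) picture behind the theta model.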
== Sensory input-stimulus encoding neuron models ==
The models in this category were derived following experiments involving natural stimulation such as light, sound, touch, or odor. In these experiments, the spike pattern resulting from each stimulus presentation varies from trial to trial, but the averaged response from several trials often converges to a clear pattern. Consequently, the models in this category generate a probabilistic relationship between the input stimulus and spike occurrences. Importantly, the recorded neurons are often located several processing steps after the sensory neurons, so these models summarize the effects of the sequence of processing steps in a compact form.
=== The non-homogeneous Poisson process model (Siebert) ===
Siebert modeled the neuron spike firing pattern using a non-homogeneous Poisson process model, following experiments involving the auditory system. According to Siebert, the probability of a spiking event at the time interval
{\displaystyle [t,t+\Delta _{t}]}
is proportional to a non-negative function
{\displaystyle g[s(t)]}
, where
{\displaystyle s(t)}
is the raw stimulus:
{\displaystyle P_{\text{spike}}(t\in [t',t'+\Delta _{t}])=\Delta _{t}\cdot g[s(t)]}
Siebert considered several functions as
{\displaystyle g[s(t)]}
, including
{\displaystyle g[s(t)]\propto s^{2}(t)}
for low stimulus intensities.
The main advantage of Siebert's model is its simplicity. The shortcoming of the model is its inability to properly reflect the following phenomena:
The transient enhancement of the neuronal firing activity in response to a step stimulus.
The saturation of the firing rate.
The values of the inter-spike-interval histogram at short intervals (close to zero).
These shortcomings are addressed by the age-dependent point process model and the two-state Markov Model.
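Siebert's non-homogeneous Poisson rule can be sketched by drawing a Bernoulli spike in each small time bin with probability Δt·g[s(t)], using the low-intensity choice g ∝ s² quoted above. The gain constant, bin size, and stimulus are illustrative assumptions:

```python
import math
import random

# Sketch of Siebert's non-homogeneous Poisson spiking model.
def siebert_spikes(stimulus, dt=0.001, gain=50.0, seed=2):
    rng = random.Random(seed)
    spikes = []
    for k, s in enumerate(stimulus):
        rate = gain * s * s                  # g[s(t)] proportional to s^2(t)
        if rng.random() < rate * dt:         # Poisson approximation per bin
            spikes.append(k * dt)
    return spikes

# A 1-second, 5 Hz sinusoidal stimulus sampled at 1 kHz
stim = [math.sin(2 * math.pi * 5 * k / 1000.0) for k in range(1000)]
spikes = siebert_spikes(stim)
```

Because each bin is independent of the spiking history, the model cannot show refractoriness or rate saturation, which is exactly the shortcoming the age-dependent and two-state models address.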
=== Refractoriness and age-dependent point process model ===
Berry and Meister studied neuronal refractoriness using a stochastic model that predicts spikes as a product of two terms: a function f(s(t)) that depends on the time-dependent stimulus s(t) and a recovery function
{\displaystyle w(t-{\hat {t}})}
that depends on the time since the last spike:
{\displaystyle \rho (t)=f(s(t))w(t-{\hat {t}})}
The model is also called an inhomogeneous Markov interval (IMI) process. Similar models have been used for many years in auditory neuroscience. Since the model keeps a memory of the last spike time, it is non-Poissonian and falls into the class of time-dependent renewal models. It is closely related to the model SRM0 with exponential escape rate. Importantly, it is possible to fit parameters of the age-dependent point process model so as to describe not just the PSTH response, but also the interspike-interval statistics.
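The product rule ρ(t) = f(s(t))·w(t − t̂) is straightforward to simulate. In the sketch below, the rectified-linear stimulus drive and the exponential recovery function (which suppresses firing right after a spike) are illustrative assumptions:

```python
import math
import random

# Sketch of the age-dependent point process (IMI) model.
def age_dependent_spikes(stimulus, dt=0.001, tau_rec=0.02, seed=3):
    rng = random.Random(seed)
    t_hat = -1e9                                     # time of the last spike
    spikes = []
    for k, s in enumerate(stimulus):
        t = k * dt
        f = max(0.0, 100.0 * s)                      # stimulus drive f(s(t)), Hz
        w = 1.0 - math.exp(-(t - t_hat) / tau_rec)   # recovery since last spike
        if rng.random() < f * w * dt:                # rho(t) = f * w
            spikes.append(t)
            t_hat = t
    return spikes

stim = [1.0] * 2000                                  # 2 s of constant stimulus
spikes = age_dependent_spikes(stim)
```

Unlike a Poisson model, the resulting interspike-interval histogram vanishes near zero, because w(Δ) → 0 right after each spike.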
=== Linear-nonlinear Poisson cascade model and GLM ===
The linear-nonlinear-Poisson cascade model consists of a linear filtering stage followed by a nonlinear spike-generation step. In the case that output spikes feed back, via a linear filtering process, we arrive at a model that is known in the neurosciences as the Generalized Linear Model (GLM). The GLM is mathematically equivalent to the spike response model (SRM) with escape noise; but whereas in the SRM the internal variables are interpreted as the membrane potential and the firing threshold, in the GLM the internal variables are abstract quantities that summarize the net effect of input (and recent output spikes) before spikes are generated in the final step.
=== The two-state Markov model (Nossenson & Messer) ===
The spiking neuron model by Nossenson & Messer produces the probability of the neuron firing a spike as a function of either an external or pharmacological stimulus. The model consists of a cascade of a receptor layer model and a spiking neuron model, as shown in Fig 4. The connection between the external stimulus to the spiking probability is made in two steps: First, a receptor cell model translates the raw external stimulus to neurotransmitter concentration, and then, a spiking neuron model connects neurotransmitter concentration to the firing rate (spiking probability). Thus, the spiking neuron model by itself depends on neurotransmitter concentration at the input stage.
An important feature of this model is the prediction of the neuron's firing-rate pattern, which captures, using a small number of free parameters, the characteristic edge-emphasized response of neurons to a stimulus pulse, as shown in Fig. 5. The firing rate is identified both as a normalized probability for neural spike firing and as a quantity proportional to the current of neurotransmitters released by the cell. The expression for the firing rate takes the following form:
{\displaystyle R_{\text{fire}}(t)={\frac {P_{\text{spike}}(t;\Delta _{t})}{\Delta _{t}}}=[y(t)+R_{0}]\cdot P_{0}(t)}
where
P0 is the probability of the neuron being "armed" and ready to fire. It is given by the following differential equation:
{\displaystyle {\dot {P}}_{0}=-[y(t)+R_{0}+R_{1}]\cdot P_{0}(t)+R_{1}}
P0 could be generally calculated recursively using the Euler method, but in the case of a pulse of stimulus, it yields a simple closed-form expression.
y(t) is the input of the model and is interpreted as the neurotransmitter concentration in the cell's surroundings (in most cases glutamate). For an external stimulus, it can be estimated through the receptor layer model:
{\displaystyle y(t)\simeq g_{\text{gain}}\cdot \langle s^{2}(t)\rangle ,}
with
{\displaystyle \langle s^{2}(t)\rangle }
being a short temporal average of stimulus power (given in Watt or other energy per time unit).
R0 corresponds to the intrinsic spontaneous firing rate of the neuron.
R1 is the recovery rate of the neuron from the refractory state.
Other predictions by this model include:
1) The averaged evoked response potential (ERP) due to the population of many neurons in unfiltered measurements resembles the firing rate.
2) The voltage variance of activity due to multiple neuron activity resembles the firing rate (also known as Multi-Unit-Activity power or MUA).
3) The inter-spike-interval probability distribution takes the form of a gamma-distribution-like function.
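As the text notes, P0 can be computed recursively with the Euler method. The sketch below does exactly that for a rectangular stimulus pulse and then evaluates R_fire = [y(t) + R0]·P0(t); all rate constants and the pulse amplitude are illustrative assumptions:

```python
# Euler recursion for P0(t) in the two-state model of Nossenson & Messer,
# followed by the firing-rate readout R_fire = [y(t) + R0] * P0(t).
def two_state_rate(y, dt=0.001, R0=1.0, R1=50.0):
    P0 = R1 / (R0 + R1)               # resting steady state (y = 0)
    rates = []
    for yt in y:
        dP0 = -(yt + R0 + R1) * P0 + R1     # the P0 differential equation
        P0 += dt * dP0                      # forward-Euler step
        rates.append((yt + R0) * P0)
    return rates

# Stimulus pulse: y jumps from 0 to 200 between 100 ms and 300 ms
y = [200.0 if 100 <= k < 300 else 0.0 for k in range(500)]
rates = two_state_rate(y)
```

At pulse onset, P0 is still near its resting value, so the rate transiently overshoots before P0 relaxes to its lower pulse steady state: this reproduces the edge-emphasized response described above.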
== Pharmacological input stimulus neuron models ==
The models in this category produce predictions for experiments involving pharmacological stimulation.
=== Synaptic transmission (Koch & Segev) ===
According to the model by Koch and Segev, the response of a neuron to individual neurotransmitters can be modeled as an extension of the classical Hodgkin–Huxley model with both standard and nonstandard kinetic currents. Four neurotransmitters primarily influence the CNS. AMPA/kainate receptors are fast excitatory mediators, while NMDA receptors mediate considerably slower currents. Fast inhibitory currents go through GABAA receptors, while GABAB receptors mediate via secondary G-protein-activated potassium channels. This range of mediation produces the following current dynamics:
{\displaystyle I_{\mathrm {AMPA} }(t,V)={\bar {g}}_{\mathrm {AMPA} }\cdot [O]\cdot (V(t)-E_{\mathrm {AMPA} })}
{\displaystyle I_{\mathrm {NMDA} }(t,V)={\bar {g}}_{\mathrm {NMDA} }\cdot B(V)\cdot [O]\cdot (V(t)-E_{\mathrm {NMDA} })}
{\displaystyle I_{\mathrm {GABA_{A}} }(t,V)={\bar {g}}_{\mathrm {GABA_{A}} }\cdot ([O_{1}]+[O_{2}])\cdot (V(t)-E_{\mathrm {Cl} })}
{\displaystyle I_{\mathrm {GABA_{B}} }(t,V)={\bar {g}}_{\mathrm {GABA_{B}} }\cdot {\tfrac {[G]^{n}}{[G]^{n}+K_{\mathrm {d} }}}\cdot (V(t)-E_{\mathrm {K} })}
where ḡ is the maximal conductance (around 1 S) and E is the equilibrium potential of the given ion or transmitter (AMPA, NMDA, Cl, or K), while [O] describes the fraction of open receptors. For NMDA, there is a significant effect of magnesium block that depends sigmoidally on the concentration of extracellular magnesium through B(V). For GABAB, [G] is the concentration of the G-protein, and Kd describes the dissociation of G in binding to the potassium gates.
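The four current expressions above can be evaluated directly at a given membrane voltage. In the sketch below, the conductances, reversal potentials, open fractions, and the sigmoidal magnesium-block function are all illustrative assumptions, chosen only to show the structure of the equations:

```python
import math

# Sketch of the four synaptic current expressions (illustrative parameters).
def synaptic_currents(V, O=0.5, G=1.0, n=4, Kd=100.0):
    g_ampa, g_nmda, g_gabaa, g_gabab = 1.0, 1.0, 1.0, 1.0   # maximal conductances
    E_ampa, E_nmda, E_cl, E_k = 0.0, 0.0, -70.0, -90.0      # reversal potentials (mV)
    B = 1.0 / (1.0 + 0.28 * math.exp(-0.062 * V))           # assumed sigmoidal Mg block
    return {
        "AMPA":   g_ampa * O * (V - E_ampa),
        "NMDA":   g_nmda * B * O * (V - E_nmda),
        "GABA_A": g_gabaa * (O + O) * (V - E_cl),           # [O1] + [O2]
        "GABA_B": g_gabab * (G ** n / (G ** n + Kd)) * (V - E_k),
    }

I = synaptic_currents(-65.0)
```

At a hyperpolarized voltage the block factor B(V) is small, so the NMDA current is much weaker than the AMPA current at the same open fraction, which is the voltage dependence the model is designed to express.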
The dynamics of this more complicated model have been well-studied experimentally and produce important results in terms of very quick synaptic potentiation and depression, that is, fast, short-term learning.
The stochastic model by Nossenson and Messer translates neurotransmitter concentration at the input stage to the probability of releasing neurotransmitter at the output stage. For a more detailed description of this model, see the Two state Markov model section above.
== HTM neuron model ==
The HTM neuron model was developed by Jeff Hawkins and researchers at Numenta and is based on a theory called Hierarchical Temporal Memory, originally described in the book On Intelligence. It is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the human brain.
== Applications ==
Spiking neuron models are used in a variety of applications that need encoding into or decoding from neuronal spike trains in the context of neuroprostheses and brain-computer interfaces, such as retinal prostheses or artificial limb control and sensation. Applications are not part of this article; for more information on this topic, please refer to the main article.
== Relation between artificial and biological neuron models ==
The most basic model of a neuron consists of an input with some synaptic weight vector and an activation function or transfer function inside the neuron determining output. This is the basic structure used for artificial neurons, which in a neural network often looks like
{\displaystyle y_{i}=\varphi \left(\sum _{j}w_{ij}x_{j}\right)}
where yi is the output of the i th neuron, xj is the jth input neuron signal, wij is the synaptic weight (or strength of connection) between the neurons i and j, and φ is the activation function. While this model has seen success in machine-learning applications, it is a poor model for real (biological) neurons, because it lacks time-dependence in input and output.
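The static artificial neuron of the equation above fits in a few lines: a weighted sum of the inputs passed through an activation function (here a logistic sigmoid, one common choice for φ; the weights and inputs are illustrative):

```python
import math

# The static artificial neuron: y = phi(sum_j w_j * x_j).
def artificial_neuron(weights, inputs):
    net = sum(w * x for w, x in zip(weights, inputs))   # weighted sum
    return 1.0 / (1.0 + math.exp(-net))                 # phi: logistic activation

y = artificial_neuron([0.5, -0.25, 1.0], [1.0, 2.0, 0.5])
```

Note that nothing in this computation depends on time, which is precisely the limitation, relative to biological neurons, discussed next.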
When an input is switched on at a time t and kept constant thereafter, biological neurons emit a spike train. Importantly, this spike train is not regular but exhibits a temporal structure characterized by adaptation, bursting, or initial bursting followed by regular spiking. Generalized integrate-and-fire models such as the Adaptive Exponential Integrate-and-Fire model, the spike response model, or the (linear) adaptive integrate-and-fire model can capture these neuronal firing patterns.
Moreover, neuronal input in the brain is time-dependent. Time-dependent input is transformed by complex linear and nonlinear filters into a spike train in the output. Again, the spike response model or the adaptive integrate-and-fire model enables prediction of the output spike train for arbitrary time-dependent input, whereas an artificial neuron or a simple leaky integrate-and-fire model does not.
If we take the Hodgkin–Huxley model as a starting point, generalized integrate-and-fire models can be derived systematically in a step-by-step simplification procedure. This has been shown explicitly for the exponential integrate-and-fire model and the spike response model.
In the case of modeling a biological neuron, physical analogs are used in place of abstractions such as "weight" and "transfer function". A neuron is filled and surrounded with water-containing ions, which carry electric charge. The neuron is bound by an insulating cell membrane and can maintain a concentration of charged ions on either side that determines a capacitance Cm. The firing of a neuron involves the movement of ions into the cell, that occurs when neurotransmitters cause ion channels on the cell membrane to open. We describe this by a physical time-dependent current I(t). With this comes a change in voltage, or the electrical potential energy difference between the cell and its surroundings, which is observed to sometimes result in a voltage spike called an action potential which travels the length of the cell and triggers the release of further neurotransmitters. The voltage, then, is the quantity of interest and is given by Vm(t).
If the input current is constant, most neurons emit after some time of adaptation or initial bursting a regular spike train. The frequency of regular firing in response to a constant current I is described by the frequency-current relation, which corresponds to the transfer function
φ
{\displaystyle \varphi }
of artificial neural networks. Similarly, for all spiking neuron models, the transfer function
φ
{\displaystyle \varphi }
can be calculated numerically (or analytically).
== Cable theory and compartmental models ==
All of the above deterministic models are point-neuron models because they do not consider the spatial structure of a neuron. However, the dendrite contributes to transforming input into output. Point neuron models are a valid description in three cases. (i) If input current is directly injected into the soma. (ii) If synaptic input arrives predominantly at or close to the soma (closeness is defined by a length scale
{\displaystyle \lambda }
introduced below). (iii) If synapses arrive anywhere on the dendrite, but the dendrite is completely linear. In the last case, the cable acts as a linear filter; these linear filter properties can be included in the formulation of generalized integrate-and-fire models such as the spike response model.
The filter properties can be calculated from a cable equation.
Let us consider a cell membrane in the form of a cylindrical cable. The position on the cable is denoted by x and the voltage across the cell membrane by V. The cable is characterized by a longitudinal resistance
{\displaystyle r_{l}}
per unit length and a membrane resistance
{\displaystyle r_{m}}
. If everything is linear, the voltage changes as a function of time. We introduce a length scale
{\displaystyle \lambda ^{2}={r_{m}}/{r_{l}}}
on the left side and a time constant
{\displaystyle \tau =c_{m}r_{m}}
on the right side. The cable equation can now be written in its perhaps best-known form:
{\displaystyle \lambda ^{2}{\frac {\partial ^{2}V}{\partial x^{2}}}=\tau {\frac {\partial V}{\partial t}}+V}
The above cable equation is valid for a single cylindrical cable.
Linear cable theory describes the dendritic arbor of a neuron as a cylindrical structure undergoing a regular pattern of bifurcation, like branches in a tree. For a single cylinder or an entire tree, the static input conductance at the base (where the tree meets the cell body or any such boundary) is defined as
{\displaystyle G_{in}={\frac {G_{\infty }\tanh(L)+G_{L}}{1+(G_{L}/G_{\infty })\tanh(L)}},}
where L is the electrotonic length of the cylinder, which depends on its length, diameter, and resistance. A simple recursive algorithm scales linearly with the number of branches and can be used to calculate the effective conductance of the tree. This is given by
{\displaystyle \,\!G_{D}=G_{m}A_{D}\tanh(L_{D})/L_{D}}
where AD = πld is the total surface area of the tree of total length l, and LD is its total electrotonic length. For an entire neuron in which the cell body conductance is GS and the membrane conductance per unit area is Gmd = Gm / A, we find the total neuron conductance GN for n dendrite trees by adding up all tree and soma conductances, given by
{\displaystyle G_{N}=G_{S}+\sum _{j=1}^{n}A_{D_{j}}F_{dga_{j}},}
where we can find the general correction factor Fdga experimentally by noting GD = GmdADFdga.
The linear cable model makes several simplifications to give closed analytic results, namely that the dendritic arbor must branch in diminishing pairs in a fixed pattern and that dendrites are linear. A compartmental model allows for any desired tree topology with arbitrary branches and lengths, as well as arbitrary nonlinearities. It is essentially a discretized computational implementation of nonlinear dendrites.
Each piece, or compartment, of a dendrite is modeled by a straight cylinder of arbitrary length l and diameter d which connects with fixed resistance to any number of branching cylinders. We define the conductance ratio of the ith cylinder as Bi = Gi / G∞, where
{\displaystyle G_{\infty }={\tfrac {\pi d^{3/2}}{2{\sqrt {R_{i}R_{m}}}}}}
and Ri is the resistance between the current compartment and the next. We obtain a series of equations for conductance ratios in and out of a compartment by making corrections to the normal dynamic Bout,i = Bin,i+1, as
{\displaystyle B_{\mathrm {out} ,i}={\frac {B_{\mathrm {in} ,i+1}(d_{i+1}/d_{i})^{3/2}}{\sqrt {R_{\mathrm {m} ,i+1}/R_{\mathrm {m} ,i}}}}}
{\displaystyle B_{\mathrm {in} ,i}={\frac {B_{\mathrm {out} ,i}+\tanh X_{i}}{1+B_{\mathrm {out} ,i}\tanh X_{i}}}}
{\displaystyle B_{\mathrm {out,par} }={\frac {B_{\mathrm {in,dau1} }(d_{\mathrm {dau1} }/d_{\mathrm {par} })^{3/2}}{\sqrt {R_{\mathrm {m,dau1} }/R_{\mathrm {m,par} }}}}+{\frac {B_{\mathrm {in,dau2} }(d_{\mathrm {dau2} }/d_{\mathrm {par} })^{3/2}}{\sqrt {R_{\mathrm {m,dau2} }/R_{\mathrm {m,par} }}}}+\ldots }
where the last equation deals with parents and daughters at branches, and
{\displaystyle X_{i}={\tfrac {l_{i}{\sqrt {4R_{i}}}}{\sqrt {d_{i}R_{m}}}}}
. We can iterate these equations through the tree until we reach the point where the dendrites connect to the cell body (soma), where the conductance ratio is Bin,stem. Then our total neuron conductance for static input is given by
{\displaystyle G_{N}={\frac {A_{\mathrm {soma} }}{R_{\mathrm {m,soma} }}}+\sum _{j}B_{\mathrm {in,stem} ,j}G_{\infty ,j}.}
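The iteration through the tree can be sketched directly from the compartment equations: each cylinder maps B_out to B_in through tanh(X), and a parent collects its daughters' B_in values. The toy two-daughter tree and all numbers below are illustrative assumptions:

```python
import math

# Compartmental conductance-ratio recursion (illustrative toy tree).
def b_in(b_out, X):
    # B_in,i = (B_out,i + tanh X_i) / (1 + B_out,i * tanh X_i)
    t = math.tanh(X)
    return (b_out + t) / (1.0 + b_out * t)

def b_out_parent(daughters, d_par, Rm_par):
    # daughters: list of (B_in, diameter, Rm) for each daughter branch
    return sum(bi * (d / d_par) ** 1.5 / math.sqrt(Rm / Rm_par)
               for bi, d, Rm in daughters)

# Two terminal daughters (B_out = 0 at sealed ends) joining one parent
Rm = 10000.0
bd1 = b_in(0.0, X=0.5)
bd2 = b_in(0.0, X=0.8)
b_par_out = b_out_parent([(bd1, 1.0, Rm), (bd2, 1.2, Rm)], d_par=2.0, Rm_par=Rm)
b_stem = b_in(b_par_out, X=0.3)       # conductance ratio where tree meets soma
```

Repeating this from the terminal tips down to the soma yields Bin,stem for each dendritic tree, which then enters the total-conductance formula above.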
Importantly, static input is a very special case. In biology, inputs are time-dependent. Moreover, dendrites are not always linear.
Compartmental models make it possible to include nonlinearities via ion channels positioned at arbitrary locations along the dendrites. For static inputs, it is sometimes possible to reduce the number of compartments (increasing the computational speed) and still retain the salient electrical characteristics.
== Conjectures regarding the role of the neuron in the wider context of the brain principle of operation ==
=== The neurotransmitter-based energy detection scheme ===
The neurotransmitter-based energy detection scheme suggests that the neural tissue chemically executes a Radar-like detection procedure.
As shown in Fig. 6, the key idea of the conjecture is to account for neurotransmitter concentration, neurotransmitter generation, and neurotransmitter removal rates as the important quantities in executing the detection task, while referring to the measured electrical potentials as a side effect that only under certain conditions coincides with the functional purpose of each step. The detection scheme is similar to a radar-like "energy detection" because it includes signal squaring, temporal summation, and a threshold switch mechanism, just like the energy detector, but it also includes a unit that emphasizes stimulus edges and a variable memory length. According to this conjecture, the physiological equivalent of the energy test statistic is neurotransmitter concentration, and the firing rate corresponds to neurotransmitter current. The advantage of this interpretation is that it leads to a unit-consistent explanation which allows for a bridge between electrophysiological measurements, biochemical measurements, and psychophysical results.
The evidence reviewed suggests the following association between functionality and histological classification:
Stimulus squaring is likely to be performed by receptor cells.
Stimulus edge emphasizing and signal transduction is performed by neurons.
Temporal accumulation of neurotransmitters is performed by glial cells. Short-term neurotransmitter accumulation is likely to occur also in some types of neurons.
Logical switching is executed by glial cells, and it results from exceeding a threshold level of neurotransmitter concentration. This threshold crossing is also accompanied by a change in neurotransmitter leak rate.
Physical all-or-none movement switching is due to muscle cells and results from exceeding a certain neurotransmitter concentration threshold in the muscle's surroundings.
Note that although the electrophysiological signals in Fig. 6 are often similar to the functional signal (signal power / neurotransmitter concentration / muscle force), there are some stages in which the electrical observation differs from the functional purpose of the corresponding step. In particular, Nossenson et al. suggested that glia threshold crossing has a completely different functional operation compared to the radiated electrophysiological signal and that the latter might only be a side effect of glia break.
== General comments regarding the modern perspective of scientific and engineering models ==
The models above are still idealizations. Corrections must be made for the increased membrane surface area given by numerous dendritic spines, temperatures significantly hotter than room-temperature experimental data, and nonuniformity in the cell's internal structure. Certain observed effects do not fit into some of these models. For instance, the temperature cycling (with minimal net temperature increase) of the cell membrane during action potential propagation is not compatible with models that rely on modeling the membrane as a resistance that must dissipate energy when current flows through it. The transient thickening of the cell membrane during action potential propagation is also not predicted by these models, nor is the changing capacitance and voltage spike that results from this thickening incorporated into these models. The action of some anesthetics, such as inert gases, is problematic for these models as well. New models, such as the soliton model, attempt to explain these phenomena, but are less developed than older models and have yet to be widely applied.
Modern views regarding the role of the scientific model suggest that "All models are wrong but some are useful" (Box and Draper, 1987, Gribbin, 2009; Paninski et al., 2009).
Recent conjecture suggests that each neuron might function as a collection of independent threshold units. It is suggested that a neuron could be anisotropically activated following the origin of its arriving signals to the membrane, via its dendritic trees. The spike waveform was also proposed to be dependent on the origin of the stimulus.
== External links ==
Neuronal Dynamics: from single neurons to networks and models of cognition (W. Gerstner, W. Kistler, R. Naud, L. Paninski, Cambridge University Press, 2014). In particular, Chapters 6–10, html online version.
Spiking Neuron Models (W. Gerstner and W. Kistler, Cambridge University Press, 2002)
== See also ==
Binding neuron
Bayesian approaches to brain function
Brain-computer interfaces
Free energy principle
Models of neural computation
Neural coding
Neural oscillation
Quantitative models of the action potential
Spiking neural network
== References == | Wikipedia/Integrate-and-fire |
Biological neuron models, also known as spiking neuron models, are mathematical descriptions of the conduction of electrical signals in neurons. Neurons (or nerve cells) are electrically excitable cells within the nervous system, able to fire electric signals, called action potentials, across a neural network. These mathematical models describe the role of the biophysical and geometrical characteristics of neurons on the conduction of electrical activity.
Central to these models is the description of how the membrane potential (that is, the difference in electric potential between the interior and the exterior of a biological cell) across the cell membrane changes over time. In an experimental setting, stimulating neurons with an electrical current generates an action potential (or spike), that propagates down the neuron's axon. This axon can branch out and connect to a large number of downstream neurons at sites called synapses. At these synapses, the spike can cause the release of neurotransmitters, which in turn can change the voltage potential of downstream neurons. This change can potentially lead to even more spikes in those downstream neurons, thus passing down the signal. As many as 95% of neurons in the neocortex, the outermost layer of the mammalian brain, consist of excitatory pyramidal neurons, and each pyramidal neuron receives tens of thousands of inputs from other neurons. Thus, spiking neurons are a major information processing unit of the nervous system.
One such example of a spiking neuron model may be a highly detailed mathematical model that includes spatial morphology. Another may be a conductance-based neuron model that views neurons as points and describes the membrane voltage dynamics as a function of trans-membrane currents. A mathematically simpler "integrate-and-fire" model significantly simplifies the description of ion channel and membrane potential dynamics (initially studied by Lapicque in 1907).
== Biological background, classification, and aims of neuron models ==
Non-spiking cells, spiking cells, and their measurement
Not all the cells of the nervous system produce the type of spike that defines the scope of the spiking neuron models. For example, cochlear hair cells, retinal receptor cells, and retinal bipolar cells do not spike. Furthermore, many cells in the nervous system are not classified as neurons but instead are classified as glia.
Neuronal activity can be measured with different experimental techniques, such as the "Whole cell" measurement technique, which captures the spiking activity of a single neuron and produces full amplitude action potentials.
With extracellular measurement techniques, one or more electrodes are placed in the extracellular space. Spikes, often from several spiking sources, depending on the size of the electrode and its proximity to the sources, can be identified with signal processing techniques. Extracellular measurement has several advantages:
It is easier to obtain experimentally;
It is robust and lasts for a longer time;
It can reflect the dominant effect, especially when conducted in an anatomical region with many similar cells.
Overview of neuron models
Neuron models can be divided into two categories according to the physical units of the interface of the model. Each category could be further divided according to the abstraction/detail level:
Electrical input–output membrane voltage models – These models produce a prediction for membrane output voltage as a function of electrical stimulation given as current or voltage input. The various models in this category differ in the exact functional relationship between the input current and the output voltage and in the level of detail. Some models in this category predict only the moment of occurrence of the output spike (also known as "action potential"); other models are more detailed and account for sub-cellular processes. The models in this category can be either deterministic or probabilistic.
Natural stimulus or pharmacological input neuron models – The models in this category connect the input stimulus, which can be either pharmacological or natural, to the probability of a spike event. The input stage of these models is not electrical but rather has either pharmacological (chemical) concentration units, or physical units that characterize an external stimulus such as light, sound, or other forms of physical pressure. Furthermore, the output stage represents the probability of a spike event and not an electrical voltage.
Although it is not unusual in science and engineering to have several descriptive models for different abstraction/detail levels, the number of different, sometimes contradicting, biological neuron models is exceptionally high. This situation is partly the result of the many different experimental settings, and the difficulty of separating the intrinsic properties of a single neuron from measurement effects and interactions of many cells (network effects).
Aims of neuron models
Ultimately, biological neuron models aim to explain the mechanisms underlying the operation of the nervous system. However, several approaches can be distinguished, from more realistic models (e.g., mechanistic models) to more pragmatic models (e.g., phenomenological models). Modeling helps to analyze experimental data and address questions. Models are also important in the context of restoring lost brain functionality through neuroprosthetic devices.
== Electrical input–output membrane voltage models ==
The models in this category describe the relationship between neuronal membrane currents at the input stage and membrane voltage at the output stage. This category includes (generalized) integrate-and-fire models and biophysical models inspired by the work of Hodgkin–Huxley in the early 1950s using an experimental setup that punctured the cell membrane and allowed to force a specific membrane voltage/current.
Most modern electrical neural interfaces apply extra-cellular electrical stimulation to avoid membrane puncturing, which can lead to cell death and tissue damage. Hence, it is not clear to what extent the electrical neuron models hold for extra-cellular stimulation.
=== Hodgkin–Huxley ===
The Hodgkin–Huxley model (H&H model)
is a model of the relationship between the flow of ionic currents across the neuronal cell membrane and the membrane voltage of the cell. It consists of a set of nonlinear differential equations describing the behavior of ion channels that permeate the cell membrane of the squid giant axon. Hodgkin and Huxley were awarded the 1963 Nobel Prize in Physiology or Medicine for this work.
It is important to note the voltage-current relationship: multiple voltage-dependent currents charge the cell membrane of capacitance Cm
{\displaystyle C_{\mathrm {m} }{\frac {dV(t)}{dt}}=-\sum _{i}I_{i}(t,V).}
The above equation is the time derivative of the law of capacitance, Q = CV where the change of the total charge must be explained as the sum over the currents. Each current is given by
{\displaystyle I(t,V)=g(t,V)\cdot (V-V_{\mathrm {eq} })}
where g(t,V) is the conductance, or inverse resistance, which can be expanded in terms of its maximal conductance ḡ and the activation and inactivation fractions m and h, respectively, that determine how many ions can flow through available membrane channels. This expansion is given by
{\displaystyle g(t,V)={\bar {g}}\cdot m(t,V)^{p}\cdot h(t,V)^{q}}
and our fractions follow the first-order kinetics
{\displaystyle {\frac {dm(t,V)}{dt}}={\frac {m_{\infty }(V)-m(t,V)}{\tau _{\mathrm {m} }(V)}}=\alpha _{\mathrm {m} }(V)\cdot (1-m)-\beta _{\mathrm {m} }(V)\cdot m}
with similar dynamics for h, where we can use either τ and m∞ or α and β to define our gate fractions.
The Hodgkin–Huxley model may be extended to include additional ionic currents. Typically, these include inward Ca2+ and Na+ input currents, as well as several varieties of K+ outward currents, including a "leak" current.
Even at the small end, an accurate model requires on the order of 20 parameters that one must estimate or measure. In a model of a complex system of neurons, numerical integration of the equations is computationally expensive. Careful simplifications of the Hodgkin–Huxley model are therefore needed.
The model can be reduced to two dimensions thanks to the dynamic relations that can be established between the gating variables. It is also possible to extend it to take into account the evolution of the concentrations (considered fixed in the original model).
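As a minimal illustration of the first-order gating kinetics above, the sketch below integrates a single gating variable m at a clamped (constant) voltage with forward Euler and checks its relaxation towards m∞ = α/(α+β) with time constant 1/(α+β). The numeric rate values are arbitrary illustrative choices, not the fitted squid-axon rate functions.

```python
# Hedged sketch: Euler integration of one Hodgkin-Huxley gating variable m
# obeying dm/dt = alpha*(1 - m) - beta*m at clamped voltage, where the rates
# alpha and beta are constants.  The values below are illustrative only.

def simulate_gate(alpha, beta, m0=0.0, dt=0.01, t_max=50.0):
    """Integrate the first-order gating kinetics at clamped voltage."""
    m = m0
    for _ in range(int(t_max / dt)):
        m += dt * (alpha * (1.0 - m) - beta * m)
    return m

alpha, beta = 0.2, 0.05          # ms^-1, illustrative rate constants
m_final = simulate_gate(alpha, beta)
m_inf = alpha / (alpha + beta)   # steady-state activation m_infinity
print(m_final, m_inf)            # m_final relaxes towards m_inf
```

The same relaxation holds for the inactivation variable h; only the rate functions differ.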
=== Perfect Integrate-and-fire ===
One of the earliest models of a neuron is the perfect integrate-and-fire model (also called non-leaky integrate-and-fire), first investigated in 1907 by Louis Lapicque. A neuron is represented by its membrane voltage V, which evolves in time during stimulation with an input current I(t) according to
{\displaystyle I(t)=C{\frac {dV(t)}{dt}}}
which is just the time derivative of the law of capacitance, Q = CV. When an input current is applied, the membrane voltage increases with time until it reaches a constant threshold Vth, at which point a delta function spike occurs and the voltage is reset to its resting potential, after which the model continues to run. The firing frequency of the model thus increases linearly without bound as input current increases.
The model can be made more accurate by introducing a refractory period tref that limits the firing frequency of a neuron by preventing it from firing during that period. For constant input I(t)=I the threshold voltage is reached after an integration time tint=CVthr/I after starting from zero. After a reset, the refractory period introduces a dead time so that the total time until the next firing is tref+tint . The firing frequency is the inverse of the total inter-spike interval (including dead time). The firing frequency as a function of a constant input current, is therefore
{\displaystyle \,\!f(I)={\frac {I}{CV_{\mathrm {th} }+t_{\mathrm {ref} }I}}.}
A shortcoming of this model is that it describes neither adaptation nor leakage. If the model receives a below-threshold short current pulse at some time, it will retain that voltage boost forever - until another input later makes it fire. This characteristic is not in line with observed neuronal behavior. The following extensions make the integrate-and-fire model more plausible from a biological point of view.
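The firing-rate formula above can be checked against a direct simulation. The following sketch integrates C dV/dt = I with forward Euler, resets at threshold, and enforces a refractory dead time; all parameter values are illustrative, not taken from any particular neuron.

```python
# Hedged sketch of the perfect (non-leaky) integrate-and-fire neuron with a
# refractory period.  Parameters are illustrative.

def lif_perfect_rate(I, C=1.0, V_th=1.0, t_ref=2.0, dt=1e-3, t_max=220.0):
    """Count spikes under constant current I and return the firing rate."""
    V, spikes, t = 0.0, 0, 0.0
    while t < t_max:
        V += dt * I / C            # integrate the input current
        if V >= V_th:              # threshold crossing -> spike
            spikes += 1
            V = 0.0                # reset
            t += t_ref             # dead time: no integration while refractory
        t += dt
    return spikes / t_max

I = 0.05
rate = lif_perfect_rate(I)
predicted = I / (1.0 * 1.0 + 2.0 * I)   # f(I) = I / (C*V_th + t_ref*I)
print(rate, predicted)
```

Without the refractory term (t_ref = 0) the rate grows linearly and without bound in I, as stated in the text.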
=== Leaky integrate-and-fire ===
The leaky integrate-and-fire model, which can be traced back to Louis Lapicque, contains a "leak" term in the membrane potential equation that reflects the diffusion of ions through the membrane, unlike the non-leaky integrate-and-fire model. The model equation looks like
{\displaystyle C_{\mathrm {m} }{\frac {dV_{\mathrm {m} }(t)}{dt}}=I(t)-{\frac {V_{\mathrm {m} }(t)}{R_{\mathrm {m} }}}}
where Vm is the voltage across the cell membrane and Rm is the membrane resistance. (The non-leaky integrate-and-fire model is retrieved in the limit Rm to infinity, i.e. if the membrane is a perfect insulator). The model equation is valid for arbitrary time-dependent input until a threshold Vth is reached; thereafter the membrane potential is reset.
For constant input, the minimum input to reach the threshold is Ith = Vth / Rm. Assuming a reset to zero, the firing frequency thus looks like
{\displaystyle f(I)={\begin{cases}0,&I\leq I_{\mathrm {th} }\\\left[t_{\mathrm {ref} }-R_{\mathrm {m} }C_{\mathrm {m} }\log \left(1-{\tfrac {V_{\mathrm {th} }}{IR_{\mathrm {m} }}}\right)\right]^{-1},&I>I_{\mathrm {th} }\end{cases}}}
which converges for large input currents to the previous leak-free model with the refractory period. The model can also be used for inhibitory neurons.
The most significant disadvantage of this model is that it does not contain neuronal adaptation, so that it cannot describe an experimentally measured spike train in response to constant input current. This disadvantage is removed in generalized integrate-and-fire models that also contain one or several adaptation-variables and are able to predict spike times of cortical neurons under current injection to a high degree of accuracy.
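The closed-form rate for the leaky model can likewise be verified by simulation. This sketch uses forward Euler with illustrative parameters and compares the measured rate to the formula quoted above for I > Ith.

```python
import math

# Hedged sketch: leaky integrate-and-fire neuron under constant current,
# compared against the closed-form rate quoted in the text.
# All parameter values are illustrative.

def lif_leaky_rate(I, R=1.0, C=10.0, V_th=1.0, t_ref=2.0, dt=1e-3, t_max=500.0):
    V, spikes, t = 0.0, 0, 0.0
    while t < t_max:
        V += dt * (I - V / R) / C      # leaky integration
        if V >= V_th:
            spikes += 1
            V = 0.0
            t += t_ref                 # refractory dead time
        t += dt
    return spikes / t_max

I, R, C, V_th, t_ref = 2.0, 1.0, 10.0, 1.0, 2.0
rate = lif_leaky_rate(I)
f_theory = 1.0 / (t_ref - R * C * math.log(1.0 - V_th / (I * R)))
print(rate, f_theory)
```

Letting R grow large recovers the non-leaky rate of the previous section, consistent with the limit mentioned in the text.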
=== Adaptive integrate-and-fire ===
Neuronal adaptation refers to the fact that even in the presence of a constant current injection into the soma, the intervals between output spikes increase. An adaptive integrate-and-fire neuron model combines the leaky integration of voltage V with one or several adaptation variables wk (see Chapter 6.1. in the textbook Neuronal Dynamics)
{\displaystyle \tau _{\mathrm {m} }{\frac {dV_{\mathrm {m} }(t)}{dt}}=RI(t)-[V_{\mathrm {m} }(t)-E_{\mathrm {m} }]-R\sum _{k}w_{k}}
{\displaystyle \tau _{k}{\frac {dw_{k}(t)}{dt}}=-a_{k}[V_{\mathrm {m} }(t)-E_{\mathrm {m} }]-w_{k}+b_{k}\tau _{k}\sum _{f}\delta (t-t^{f})}
where {\displaystyle \tau _{m}} is the membrane time constant, wk is the adaptation current with index k, {\displaystyle \tau _{k}} is the time constant of adaptation current wk, Em is the resting potential, tf is the firing time of the neuron, and the Greek delta denotes the Dirac delta function. Whenever the voltage reaches the firing threshold, the voltage is reset to a value Vr below the firing threshold. The reset value is one of the important parameters of the model. The simplest model of adaptation has only a single adaptation variable w, and the sum over k is removed.
Integrate-and-fire neurons with one or several adaptation variables can account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting, and initial bursting. Moreover, adaptive integrate-and-fire neurons with several adaptation variables are able to predict spike times of cortical neurons under time-dependent current injection into the soma.
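The adaptation mechanism can be made concrete with a small simulation. The sketch below uses a single adaptation variable w (the k-sum collapsed to one term, with a = 0 so only the spike-triggered jump b acts); parameters are illustrative. The point to observe is that successive interspike intervals lengthen under constant stimulation.

```python
# Hedged sketch of an adaptive leaky integrate-and-fire neuron with one
# spike-triggered adaptation current w.  Parameters are illustrative; the
# expected behavior is spike-frequency adaptation (growing interspike
# intervals) under constant input.

def adaptive_if_spike_times(I, tau_m=10.0, R=1.0, E_m=0.0, V_th=1.0, V_r=0.0,
                            tau_w=100.0, a=0.0, b=0.3, dt=0.1, t_max=400.0):
    V, w, spikes, t = E_m, 0.0, [], 0.0
    while t < t_max:
        dV = (R * I - (V - E_m) - R * w) / tau_m
        dw = (-a * (V - E_m) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_th:
            spikes.append(t)
            V = V_r
            w += b        # spike-triggered jump of the adaptation current
        t += dt
    return spikes

times = adaptive_if_spike_times(I=2.0)
isis = [t2 - t1 for t1, t2 in zip(times, times[1:])]
print(isis)               # successive intervals lengthen
```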
=== Fractional-order leaky integrate-and-fire ===
Recent advances in computational and theoretical fractional calculus lead to a new form of model called Fractional-order leaky integrate-and-fire. An advantage of this model is that it can capture adaptation effects with a single variable. The model has the following form
{\displaystyle I(t)-{\frac {V_{\mathrm {m} }(t)}{R_{\mathrm {m} }}}=C_{\mathrm {m} }{\frac {d^{\alpha }V_{\mathrm {m} }(t)}{d^{\alpha }t}}}
Once the voltage hits the threshold it is reset. Fractional integration has been used to account for neuronal adaptation in experimental data.
=== 'Exponential integrate-and-fire' and 'adaptive exponential integrate-and-fire' ===
In the exponential integrate-and-fire model, spike generation is exponential, following the equation:
{\displaystyle {\frac {dV}{dt}}-{\frac {R}{\tau _{m}}}I(t)={\frac {1}{\tau _{m}}}\left[E_{m}-V+\Delta _{T}\exp \left({\frac {V-V_{T}}{\Delta _{T}}}\right)\right].}
where {\displaystyle V} is the membrane potential, {\displaystyle V_{T}} is the intrinsic membrane potential threshold, {\displaystyle \tau _{m}} is the membrane time constant, {\displaystyle E_{m}} is the resting potential, and {\displaystyle \Delta _{T}} is the sharpness of action potential initiation, usually around 1 mV for cortical pyramidal neurons. Once the membrane potential crosses {\displaystyle V_{T}}, it diverges to infinity in finite time. In numerical simulation the integration is stopped if the membrane potential hits an arbitrary threshold (much larger than {\displaystyle V_{T}}) at which the membrane potential is reset to a value Vr. The voltage reset value Vr is one of the important parameters of the model. Importantly, the right-hand side of the above equation contains a nonlinearity that can be directly extracted from experimental data. In this sense the exponential nonlinearity is strongly supported by experimental evidence.
In the adaptive exponential integrate-and-fire neuron the above exponential nonlinearity of the voltage equation is combined with an adaptation variable w
{\displaystyle \tau _{m}{\frac {dV}{dt}}=RI(t)+\left[E_{m}-V+\Delta _{T}\exp \left({\frac {V-V_{T}}{\Delta _{T}}}\right)\right]-Rw}
{\displaystyle \tau {\frac {dw(t)}{dt}}=-a[V_{\mathrm {m} }(t)-E_{\mathrm {m} }]-w+b\tau \delta (t-t^{f})}
where w denotes the adaptation current with time scale {\displaystyle \tau }. Important model parameters are the voltage reset value Vr, the intrinsic threshold {\displaystyle V_{T}}, the time constants {\displaystyle \tau } and {\displaystyle \tau _{m}}, as well as the coupling parameters a and b. The adaptive exponential integrate-and-fire model inherits the experimentally derived voltage nonlinearity of the exponential integrate-and-fire model. But going beyond this model, it can also account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting, and initial bursting. However, since the adaptation is in the form of a current, aberrant hyperpolarization may appear. This problem was solved by expressing it as a conductance.
=== Adaptive Threshold Neuron Model ===
In this model, a time-dependent function {\displaystyle \theta (t)} is added to the fixed threshold, {\displaystyle v_{th0}}, after every spike, causing an adaptation of the threshold. The threshold potential, {\displaystyle v_{th}}, gradually returns to its steady-state value depending on the threshold adaptation time constant {\displaystyle \tau _{\theta }}. This is one of the simpler techniques to achieve spike frequency adaptation. The expression for the adaptive threshold is given by:
{\displaystyle v_{th}(t)=v_{th0}+\sum _{f}\theta (t-t_{f})=v_{th0}+\sum _{f}\theta _{0}\exp \left[-{\frac {(t-t_{f})}{\tau _{\theta }}}\right]}
where {\displaystyle \theta (t)} is defined by:
{\displaystyle \theta (t)=\theta _{0}\exp \left[-{\frac {t}{\tau _{\theta }}}\right]}
When the membrane potential, {\displaystyle u(t)}, reaches a threshold, it is reset to {\displaystyle v_{rest}}:
{\displaystyle u(t)\geq v_{th}(t)\Rightarrow v(t)=v_{\text{rest}}}
A simpler version of this, with a single time constant for the threshold decay combined with an LIF neuron, has been used to realize LSTM-like recurrent spiking neural networks that approach the accuracy of ANNs on some spatiotemporal tasks.
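The adaptive-threshold mechanism can be sketched as follows: a leaky integrate-and-fire voltage with a threshold that jumps by θ0 at every spike and decays back towards v_th0 with time constant τθ. All parameter values are illustrative; the effect to observe is again spike-frequency adaptation.

```python
import math

# Hedged sketch of the adaptive threshold neuron model.  theta is the
# time-dependent addition to the fixed threshold v_th0; it jumps by theta_0
# at each spike and decays exponentially with tau_theta.  Parameters are
# illustrative.

def adaptive_threshold_isis(I=2.0, tau_m=10.0, R=1.0, v_rest=0.0,
                            v_th0=1.0, theta_0=0.5, tau_theta=80.0,
                            dt=0.05, t_max=300.0):
    v, theta, spikes, t = v_rest, 0.0, [], 0.0
    while t < t_max:
        v += dt * (v_rest - v + R * I) / tau_m
        theta *= math.exp(-dt / tau_theta)    # threshold relaxes to v_th0
        if v >= v_th0 + theta:
            spikes.append(t)
            v = v_rest
            theta += theta_0                  # threshold adaptation
        t += dt
    return [b - a for a, b in zip(spikes, spikes[1:])]

isis = adaptive_threshold_isis()
print(isis)               # interspike intervals lengthen over time
```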
=== Double Exponential Adaptive Threshold (DEXAT) ===
The DEXAT neuron model is a variant of the adaptive neuron model in which the threshold voltage decays with a double exponential having two time constants: a fast initial decay followed by a slower decay over a longer period. Used in SNNs trained through surrogate gradients, this neuron yields an adaptive learning rate with higher accuracy, faster convergence, and more flexible long short-term memory than existing counterparts in the literature. The membrane potential dynamics are described through equations, and the threshold adaptation rule is:
{\displaystyle v_{th}(t)=b_{0}+\beta _{1}b_{1}(t)+\beta _{2}b_{2}(t)}
The dynamics of {\displaystyle b_{1}(t)} and {\displaystyle b_{2}(t)} are given by
{\displaystyle b_{1}(t+\delta t)=p_{j1}b_{1}(t)+(1-p_{j1})z(t)\delta (t)},
{\displaystyle b_{2}(t+\delta t)=p_{j2}b_{2}(t)+(1-p_{j2})z(t)\delta (t)},
where {\displaystyle p_{j1}=\exp \left[-{\frac {\delta t}{\tau _{b1}}}\right]} and {\displaystyle p_{j2}=\exp \left[-{\frac {\delta t}{\tau _{b2}}}\right]}.
Furthermore, a multi-timescale adaptive threshold neuron model showing more complex dynamics has been demonstrated.
== Stochastic models of membrane voltage and spike timing ==
The models in this category are generalized integrate-and-fire models that include a certain level of stochasticity. Cortical neurons in experiments are found to respond reliably to time-dependent input, albeit with a small degree of variability between one trial and the next if the same stimulus is repeated. Stochasticity in neurons has two important sources. First, even in a very controlled experiment where input current is injected directly into the soma, ion channels open and close stochastically, and this channel noise leads to a small amount of variability in the exact value of the membrane potential and the exact timing of output spikes. Second, for a neuron embedded in a cortical network, it is hard to control the exact input because most inputs come from unobserved neurons somewhere else in the brain.
Stochasticity has been introduced into spiking neuron models in two fundamentally different forms: either (i) a noisy input current is added to the differential equation of the neuron model; or (ii) the process of spike generation is noisy. In both cases, the mathematical theory can be developed for continuous time, which is then, if desired for the use in computer simulations, transformed into a discrete-time model.
The relation of noise in neuron models to the variability of spike trains and neural codes is discussed in Neural Coding and in Chapter 7 of the textbook Neuronal Dynamics.
=== Noisy input model (diffusive noise) ===
A neuron embedded in a network receives spike input from other neurons. Since the spike arrival times are not controlled by an experimentalist, they can be considered stochastic. Thus a (potentially nonlinear) integrate-and-fire model with nonlinearity f(v) receives two inputs: an input {\displaystyle I(t)} controlled by the experimentalists and a noisy input current {\displaystyle I^{\rm {noise}}(t)} that describes the uncontrolled background input.
{\displaystyle \tau _{m}{\frac {dV}{dt}}=f(V)+RI(t)+RI^{\text{noise}}(t)}
Stein's model is the special case of a leaky integrate-and-fire neuron driven by a stationary white-noise current {\displaystyle I^{\rm {noise}}(t)=\xi (t)} with mean zero and unit variance. In the subthreshold regime, these assumptions yield the equation of the Ornstein–Uhlenbeck process
{\displaystyle \tau _{m}{\frac {dV}{dt}}=[E_{m}-V]+RI(t)+R\xi (t)}
However, in contrast to the standard Ornstein–Uhlenbeck process, the membrane voltage is reset whenever V hits the firing threshold Vth . Calculating the interval distribution of the Ornstein–Uhlenbeck model for constant input with threshold leads to a first-passage time problem. Stein's neuron model and variants thereof have been used to fit interspike interval distributions of spike trains from real neurons under constant input current.
In the mathematical literature, the above equation of the Ornstein–Uhlenbeck process is written in the form
{\displaystyle dV=[E_{m}-V+RI(t)]{\frac {dt}{\tau _{m}}}+\sigma \,dW}
where {\displaystyle \sigma } is the amplitude of the noise input and dW are increments of a Wiener process. For a discrete-time implementation with time step Δt, the voltage updates are
{\displaystyle \Delta V=[E_{m}-V+RI(t)]{\frac {\Delta t}{\tau _{m}}}+\sigma {\sqrt {\Delta t}}\,y}
where y is drawn from a Gaussian distribution with zero mean and unit variance. The voltage is reset when it hits the firing threshold Vth.
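The discrete voltage update just described is the Euler–Maruyama scheme for the Ornstein–Uhlenbeck equation, and its subthreshold statistics can be checked numerically: the stationary voltage is Gaussian with mean Em + RI and variance σ²τm/2 (in the units used here). The sketch below uses illustrative parameters and sets the threshold far away so the trajectory stays subthreshold.

```python
import numpy as np

# Hedged sketch: Euler-Maruyama simulation of the leaky integrate-and-fire
# voltage with diffusive (white) noise.  With V_th far above the operating
# range, the process is a pure Ornstein-Uhlenbeck process whose stationary
# mean and variance can be compared to theory.  Parameters are illustrative.

rng = np.random.default_rng(0)

def ou_voltage(I=0.2, E_m=0.0, R=1.0, tau_m=10.0, sigma=0.1,
               V_th=100.0, dt=0.01, n_steps=500_000):
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    V = np.empty(n_steps)
    v = E_m
    for k in range(n_steps):
        v += (E_m - v + R * I) * dt / tau_m + noise[k]
        if v >= V_th:          # never reached with this high threshold
            v = E_m
        V[k] = v
    return V

V = ou_voltage()
print(V.mean(), V.var())   # compare with E_m + R*I = 0.2 and sigma**2*tau_m/2 = 0.05
```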
The noisy input model can also be used in generalized integrate-and-fire models. For example, the exponential integrate-and-fire model with noisy input reads
{\displaystyle \tau _{m}{\frac {dV}{dt}}=E_{m}-V+\Delta _{T}\exp \left({\frac {V-V_{T}}{\Delta _{T}}}\right)+RI(t)+R\xi (t)}
For constant deterministic input {\displaystyle I(t)=I_{0}} it is possible to calculate the mean firing rate as a function of {\displaystyle I_{0}}. This is important because the frequency-current relation (f-I curve) is often used by experimentalists to characterize a neuron.
The leaky integrate-and-fire model with noisy input has been widely used in the analysis of networks of spiking neurons. Noisy input is also called 'diffusive noise' because it leads to a diffusion of the subthreshold membrane potential around the noise-free trajectory (Johannesma). The theory of spiking neurons with noisy input is reviewed in Chapter 8.2 of the textbook Neuronal Dynamics.
=== Noisy output model (escape noise) ===
In deterministic integrate-and-fire models, a spike is generated if the membrane potential V(t) hits the threshold {\displaystyle V_{th}}. In noisy output models, the strict threshold is replaced by a noisy one as follows. At each moment in time t, a spike is generated stochastically with instantaneous stochastic intensity or 'escape rate'
{\displaystyle \rho (t)=f(V(t)-V_{th})}
that depends on the momentary difference between the membrane voltage V(t) and the threshold {\displaystyle V_{th}}. A common choice for the 'escape rate' {\displaystyle f} (that is consistent with biological data) is
{\displaystyle f(V-V_{th})={\frac {1}{\tau _{0}}}\exp[\beta (V-V_{th})]}
where {\displaystyle \tau _{0}} is a time constant that describes how quickly a spike is fired once the membrane potential reaches the threshold and {\displaystyle \beta } is a sharpness parameter. For {\displaystyle \beta \to \infty } the threshold becomes sharp and spike firing occurs deterministically at the moment when the membrane potential hits the threshold from below. The sharpness value found in experiments is {\displaystyle 1/\beta \approx 4\,\mathrm {mV} }, which means that neuronal firing becomes non-negligible as soon as the membrane potential is a few mV below the formal firing threshold.
The escape rate process via a soft threshold is reviewed in Chapter 9 of the textbook Neuronal Dynamics.
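The escape-rate mechanism can be sketched with a short simulation: at a fixed subthreshold voltage, spikes are drawn in each small time step with probability ρΔt, where ρ is the exponential escape rate above, so the empirical spike count should match ρT. Parameter values are illustrative.

```python
import numpy as np

# Hedged sketch of the noisy output (escape rate) mechanism at a clamped
# subthreshold membrane potential.  Spikes are Bernoulli draws with
# probability rho*dt per step, a valid approximation for rho*dt << 1.
# Parameters are illustrative.

rng = np.random.default_rng(1)

def escape_rate(V, V_th=1.0, tau_0=1.0, beta=4.0):
    """Instantaneous stochastic intensity f(V - V_th) from the text."""
    return np.exp(beta * (V - V_th)) / tau_0

V, dt, T = 0.8, 1e-3, 2000.0
rho = escape_rate(V)                       # constant rate at clamped voltage
spikes = rng.random(int(T / dt)) < rho * dt
print(spikes.sum(), rho * T)               # empirical vs expected spike count
```

Note that even with V below threshold (V = 0.8 < V_th = 1.0) the neuron fires at a nonzero rate, which is exactly the soft-threshold behavior described above.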
For models in discrete time, a spike is generated with probability {\displaystyle P_{F}(t_{n})=F[V(t_{n})-V_{th}]}
that depends on the momentary difference between the membrane voltage V at time {\displaystyle t_{n}} and the threshold {\displaystyle V_{th}}. The function F is often taken as a standard sigmoidal
{\displaystyle F(x)=0.5[1+\tanh(\gamma x)]}
with steepness parameter {\displaystyle \gamma }, similar to the update dynamics in artificial neural networks. But the functional form of F can also be derived from the stochastic intensity {\displaystyle f} in continuous time introduced above as {\displaystyle F(y_{n})\approx 1-\exp[-f(y_{n})\Delta t]}
where {\displaystyle y_{n}=V(t_{n})-V_{th}} is the threshold distance.
Integrate-and-fire models with output noise can be used to predict the peristimulus time histogram (PSTH) of real neurons under arbitrary time-dependent input. For non-adaptive integrate-and-fire neurons, the interval distribution under constant stimulation can be calculated from stationary renewal theory.
=== Spike response model (SRM) ===
main article: Spike response model
The spike response model (SRM) is a generalized linear model for the subthreshold membrane voltage combined with a nonlinear output noise process for spike generation. The membrane voltage V(t) at time t is
{\displaystyle V(t)=\sum _{f}\eta (t-t^{f})+\int \limits _{0}^{\infty }\kappa (s)I(t-s)\,ds+V_{\mathrm {rest} }}
where tf is the firing time of spike number f of the neuron, Vrest is the resting voltage in the absence of input, I(t-s) is the input current at time t-s, and {\displaystyle \kappa (s)} is a linear filter (also called kernel) that describes the contribution of an input current pulse at time t-s to the voltage at time t. The contributions to the voltage caused by a spike at time {\displaystyle t^{f}} are described by the refractory kernel {\displaystyle \eta (t-t^{f})}. In particular, {\displaystyle \eta (t-t^{f})} describes the reset after the spike and the time course of the spike-afterpotential following a spike. It therefore expresses the consequences of refractoriness and adaptation. The voltage V(t) can be interpreted as the result of an integration of the differential equation of a leaky integrate-and-fire model coupled to an arbitrary number of spike-triggered adaptation variables.
Spike firing is stochastic and happens with a time-dependent stochastic intensity (instantaneous rate)
{\displaystyle f(V-\vartheta (t))={\frac {1}{\tau _{0}}}\exp[\beta (V-\vartheta (t))]}
with parameters {\displaystyle \tau _{0}} and {\displaystyle \beta } and a dynamic threshold {\displaystyle \vartheta (t)} given by
{\displaystyle \vartheta (t)=\vartheta _{0}+\sum _{f}\theta _{1}(t-t^{f})}
Here {\displaystyle \vartheta _{0}} is the firing threshold of an inactive neuron and {\displaystyle \theta _{1}(t-t^{f})} describes the increase of the threshold after a spike at time {\displaystyle t^{f}}. In case of a fixed threshold, one sets {\displaystyle \theta _{1}(t-t^{f})=0}. For {\displaystyle \beta \to \infty } the threshold process is deterministic.
The time course of the filters {\displaystyle \eta ,\kappa ,\theta _{1}} that characterize the spike response model can be directly extracted from experimental data. With optimized parameters the SRM describes the time course of the subthreshold membrane voltage for time-dependent input with a precision of 2 mV and can predict the timing of most output spikes with a precision of 4 ms. The SRM is closely related to linear-nonlinear-Poisson cascade models (also called Generalized Linear Models). The estimation of parameters of probabilistic neuron models such as the SRM using methods developed for Generalized Linear Models is discussed in Chapter 10 of the textbook Neuronal Dynamics.
The name spike response model arises because, in a network, the input current for neuron i is generated by the spikes of other neurons so that in the case of a network the voltage equation becomes
{\displaystyle V_{i}(t)=\sum _{f}\eta _{i}(t-t_{i}^{f})+\sum _{j=1}^{N}w_{ij}\sum _{f'}\varepsilon _{ij}(t-t_{j}^{f'})+V_{\mathrm {rest} }}
where {\displaystyle t_{j}^{f'}} are the firing times of neuron j (i.e., its spike train); {\displaystyle \eta _{i}(t-t_{i}^{f})} describes the time course of the spike and the spike after-potential for neuron i; and {\displaystyle w_{ij}} and {\displaystyle \varepsilon _{ij}(t-t_{j}^{f'})} describe the amplitude and time course of an excitatory or inhibitory postsynaptic potential (PSP) caused by the spike {\displaystyle t_{j}^{f'}} of the presynaptic neuron j. The time course {\displaystyle \varepsilon _{ij}(s)} of the PSP results from the convolution of the postsynaptic current {\displaystyle I(t)} caused by the arrival of a presynaptic spike from neuron j with the membrane filter {\displaystyle \kappa (s)}.
=== SRM0 ===
The SRM0 is a stochastic neuron model related to time-dependent nonlinear renewal theory and a simplification of the Spike Response Model (SRM). The main difference from the voltage equation of the SRM introduced above is that in the term containing the refractory kernel {\displaystyle \eta (s)} there is no summation sign over past spikes: only the most recent spike (denoted as the time {\displaystyle {\hat {t}}}) matters. Another difference is that the threshold is constant. The model SRM0 can be formulated in discrete or continuous time. For example, in continuous time, the single-neuron equation is
{\displaystyle V(t)=\eta (t-{\hat {t}})+\int _{0}^{\infty }\kappa (s)I(t-s)\,ds+V_{\mathrm {rest} }}
and the network equations of the SRM0 are
{\displaystyle V_{i}(t\mid {\hat {t}}_{i})=\eta _{i}(t-{\hat {t}}_{i})+\sum _{j}w_{ij}\sum _{f}\varepsilon _{ij}(t-{\hat {t}}_{i},t-t^{f})+V_{\mathrm {rest} }}
where {\displaystyle {\hat {t}}_{i}} is the last firing time of neuron i. Note that the time course of the postsynaptic potential {\displaystyle \varepsilon _{ij}} is also allowed to depend on the time since the last spike of neuron i, to describe a change in membrane conductance during refractoriness. The instantaneous firing rate (stochastic intensity) is
{\displaystyle f(V-\vartheta )={\frac {1}{\tau _{0}}}\exp[\beta (V-V_{th})]}
where {\displaystyle V_{th}} is a fixed firing threshold. Thus spike firing of neuron i depends only on its input and the time since neuron i has fired its last spike.
With the SRM0, the interspike-interval distribution for constant input can be mathematically linked to the shape of the refractory kernel {\displaystyle \eta }. Moreover, the stationary frequency-current relation can be calculated from the escape rate in combination with the refractory kernel {\displaystyle \eta }. With an appropriate choice of the kernels, the SRM0 approximates the dynamics of the Hodgkin–Huxley model to a high degree of accuracy. Moreover, the PSTH response to arbitrary time-dependent input can be predicted.
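As a sketch of how the SRM0 with exponential escape rate can be simulated, the following Python snippet discretizes time, filters the input current with an assumed exponential membrane filter κ, adds an exponential refractory kernel η that depends only on the most recent spike, and draws spikes from the escape rate f(V − V_th) = (1/τ₀)exp[β(V − V_th)]. The kernel shapes and all parameter values are illustrative assumptions, not values from the text.

```python
import math
import random

def simulate_srm0(I, dt=1.0, tau_m=10.0, R=1.0, eta0=-5.0, tau_eta=20.0,
                  v_rest=0.0, v_th=1.0, beta=5.0, tau0=10.0, seed=0):
    """SRM0 with exponential escape noise, in discrete time.

    The membrane filter kappa and the refractory kernel eta are taken as
    exponentials; all parameter values are illustrative assumptions.
    Only the most recent spike time t_hat enters the refractory term.
    """
    rng = random.Random(seed)
    v_filt = 0.0          # input current filtered by kappa (leaky integration)
    t_hat = -1e9          # time of the most recent spike
    spikes = []
    for step, i_ext in enumerate(I):
        t = step * dt
        v_filt += dt / tau_m * (-v_filt + R * i_ext)
        eta = eta0 * math.exp(-(t - t_hat) / tau_eta)   # spike after-potential
        v = eta + v_filt + v_rest
        rate = math.exp(beta * (v - v_th)) / tau0       # escape rate f(V - V_th)
        if rng.random() < 1.0 - math.exp(-rate * dt):   # spike with prob ~ rate*dt
            spikes.append(t)
            t_hat = t
    return spikes
```

For a constant suprathreshold input, the negative after-potential suppresses firing for a while after each spike, giving a roughly regular spike train with stochastic jitter.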
=== Galves–Löcherbach model ===
The Galves–Löcherbach model is a stochastic neuron model closely related to the spike response model SRM0 and the leaky integrate-and-fire model. It is inherently stochastic and, just like the SRM0, it is linked to time-dependent nonlinear renewal theory. Given the model specifications, the probability that a given neuron {\displaystyle i} spikes in a time period {\displaystyle t} may be described by
{\displaystyle \mathop {\mathrm {Prob} } (X_{t}(i)=1\mid {\mathcal {F}}_{t-1})=\varphi _{i}{\Biggl (}\sum _{j\in I}W_{j\rightarrow i}\sum _{s=L_{t}^{i}}^{t-1}g_{j}(t-s)X_{s}(j),~~~t-L_{t}^{i}{\Biggr )},}
where {\displaystyle W_{j\rightarrow i}} is a synaptic weight describing the influence of neuron {\displaystyle j} on neuron {\displaystyle i}, {\displaystyle g_{j}} expresses the leak, and {\displaystyle L_{t}^{i}} provides the spiking history of neuron {\displaystyle i} before {\displaystyle t}, according to
{\displaystyle L_{t}^{i}=\sup\{s<t:X_{s}(i)=1\}.}
Importantly, the spike probability of neuron {\displaystyle i} depends only on its spike input (filtered with a kernel {\displaystyle g_{j}} and weighted with a factor {\displaystyle W_{j\to i}}) and the timing of its most recent output spike (summarized by {\displaystyle t-L_{t}^{i}}).
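A minimal discrete-time simulation of the Galves–Löcherbach dynamics can be sketched as follows. The leak kernel g and the probability function φ are illustrative choices (exponential leak, logistic spike probability), not forms prescribed by the model.

```python
import math
import random

def simulate_gl(W, T, phi, g, seed=0):
    """Discrete-time simulation of a Galves-Löcherbach network.

    W[j][i] is the weight from neuron j to neuron i, g is the leak kernel,
    and phi maps (summed filtered input, time since own last spike) to a
    spike probability. All functional forms here are illustrative.
    """
    rng = random.Random(seed)
    n = len(W)
    X = [[1] * n]             # step 0: every neuron has just spiked
    last = [0] * n            # L_t^i, the last spike time of each neuron
    for t in range(1, T):
        row = [0] * n
        for i in range(n):
            # input summed only over times since neuron i's own last spike
            u = sum(W[j][i] * g(t - s) * X[s][j]
                    for j in range(n) for s in range(last[i], t))
            if rng.random() < phi(u, t - last[i]):
                row[i] = 1
        X.append(row)
        for i in range(n):
            if row[i]:
                last[i] = t
    return X

# illustrative choices: exponential leak, logistic spike probability
g = lambda dt: math.exp(-0.5 * dt)
phi = lambda u, age: 1.0 / (1.0 + math.exp(-(u - 1.0)))
X = simulate_gl([[0.0, 2.0], [2.0, 0.0]], T=50, phi=phi, g=g)
```

Note how each spike of neuron i resets its own memory: the inner sum starts at `last[i]`, exactly the renewal property described above.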
== Didactic toy models of membrane voltage ==
The models in this category are highly simplified toy models that qualitatively describe the membrane voltage as a function of input. They are mainly used for didactic reasons in teaching but are not considered valid neuron models for large-scale simulations or data fitting.
=== FitzHugh–Nagumo ===
Sweeping simplifications to Hodgkin–Huxley were introduced by FitzHugh and Nagumo in 1961 and 1962. Seeking to describe "regenerative self-excitation" by a nonlinear positive-feedback membrane voltage and recovery by a linear negative-feedback gate voltage, they developed the model described by
{\displaystyle {\begin{aligned}{\dfrac {dV}{dt}}&=V-V^{3}/3-w+I_{\mathrm {ext} }\\\tau {\dfrac {dw}{dt}}&=V-a-bw\end{aligned}}}
where we again have a membrane-like voltage and input current, with a slower general gate voltage w and experimentally determined parameters a = −0.7, b = 0.8, τ = 1/0.08. Although not derivable from biology, the model allows for simplified, immediately available dynamics without being a trivial simplification. The experimental support is weak, but the model is useful as a didactic tool for introducing the dynamics of spike generation through phase plane analysis. See Chapter 7 in the textbook Methods of Neuronal Modeling.
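The two equations above can be integrated with a simple forward-Euler loop; the sketch below uses the parameter values from the text, while the input current, initial state, and step size are illustrative assumptions.

```python
def fitzhugh_nagumo(I_ext=0.5, v0=-1.0, w0=-0.5, a=-0.7, b=0.8, tau=12.5,
                    dt=0.01, steps=30000):
    """Forward-Euler integration of the FitzHugh-Nagumo model:
    dV/dt = V - V^3/3 - w + I_ext,   tau * dw/dt = V - a - b*w
    (a = -0.7, b = 0.8, tau = 1/0.08 = 12.5 as in the text; I_ext, the
    initial state and the step size are illustrative)."""
    v, w = v0, w0
    trace = []
    for _ in range(steps):
        dv = v - v ** 3 / 3 - w + I_ext             # fast, self-exciting voltage
        dw = (v - a - b * w) / tau                  # slow linear recovery gate
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)
    return trace
```

With this constant drive the fixed point is unstable and the trajectory settles onto a relaxation limit cycle, the phase-plane picture the model is used to teach.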
=== Morris–Lecar ===
In 1981, Morris and Lecar combined the Hodgkin–Huxley and FitzHugh–Nagumo models into a voltage-gated calcium channel model with a delayed-rectifier potassium channel represented by
{\displaystyle {\begin{aligned}C{\frac {dV}{dt}}&=-I_{\mathrm {ion} }(V,w)+I\\{\frac {dw}{dt}}&=\varphi \cdot {\frac {w_{\infty }-w}{\tau _{w}}}\end{aligned}}}
where
{\displaystyle I_{\mathrm {ion} }(V,w)={\bar {g}}_{\mathrm {Ca} }m_{\infty }\cdot (V-V_{\mathrm {Ca} })+{\bar {g}}_{\mathrm {K} }w\cdot (V-V_{\mathrm {K} })+{\bar {g}}_{\mathrm {L} }\cdot (V-V_{\mathrm {L} })}
The experimental support of the model is weak, but the model is useful as a didactic tool for introducing the dynamics of spike generation through phase plane analysis. See Chapter 7 in the textbook Methods of Neuronal Modeling.
A two-dimensional neuron model very similar to the Morris-Lecar model can be derived step-by-step starting from the Hodgkin-Huxley model. See Chapter 4.2 in the textbook Neuronal Dynamics.
=== Hindmarsh–Rose ===
Building upon the FitzHugh–Nagumo model, Hindmarsh and Rose proposed in 1984 a model of neuronal activity described by three coupled first-order differential equations:
{\displaystyle {\begin{aligned}{\frac {dx}{dt}}&=y+3x^{2}-x^{3}-z+I\\{\frac {dy}{dt}}&=1-5x^{2}-y\\{\frac {dz}{dt}}&=r\cdot (4(x+{\tfrac {8}{5}})-z)\end{aligned}}}
with r² = x² + y² + z², and r ≈ 10⁻², so that the z variable changes only very slowly. This extra mathematical complexity allows a great variety of dynamic behaviors for the membrane potential, described by the x variable of the model, including chaotic dynamics. This makes the Hindmarsh–Rose neuron model very useful because, while still simple, it allows a good qualitative description of the many different firing patterns of the action potential observed in experiments, in particular bursting. Nevertheless, it remains a toy model and has not been fitted to experimental data. It is widely used as a reference model for bursting dynamics.
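The bursting behavior can be reproduced by integrating the three equations directly; the sketch below uses forward Euler with an illustrative input current, value of r, step size, and initial state (none of these are prescribed in the text).

```python
def hindmarsh_rose(I=2.0, r=0.01, dt=0.005, steps=200000,
                   x0=-1.6, y0=-10.0, z0=2.0):
    """Forward-Euler integration of the Hindmarsh-Rose equations:
    dx/dt = y + 3x^2 - x^3 - z + I
    dy/dt = 1 - 5x^2 - y
    dz/dt = r * (4*(x + 8/5) - z)
    I, r, the step size and the initial state are illustrative choices;
    the small r makes z a slow variable gating bursts of the fast (x, y) pair.
    """
    x, y, z = x0, y0, z0
    xs = []
    for _ in range(steps):
        dx = y + 3 * x * x - x ** 3 - z + I
        dy = 1 - 5 * x * x - y
        dz = r * (4 * (x + 1.6) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x)
    return xs
```

The membrane-potential variable x alternates between quiescent stretches and groups of rapid spikes as z slowly rises and falls.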
=== Theta model and quadratic integrate-and-fire ===
The theta model, or Ermentrout–Kopell canonical Type I model, is mathematically equivalent to the quadratic integrate-and-fire model which in turn is an approximation to the exponential integrate-and-fire model and the Hodgkin-Huxley model. It is called a canonical model because it is one of the generic models for constant input close to the bifurcation point, which means close to the transition from silent to repetitive firing.
The standard formulation of the theta model is
{\displaystyle {\frac {d\theta (t)}{dt}}=(I-I_{0})[1+\cos(\theta )]+[1-\cos(\theta )]}
The equation for the quadratic integrate-and-fire model is (see Chapter 5.3 in the textbook Neuronal Dynamics)
{\displaystyle \tau _{\mathrm {m} }{\frac {dV_{\mathrm {m} }(t)}{dt}}=(I-I_{0})R+[V_{\mathrm {m} }(t)-E_{\mathrm {m} }][V_{\mathrm {m} }(t)-V_{\mathrm {T} }]}
The equivalence of the theta model and quadratic integrate-and-fire is for example reviewed in Chapter 4.1.2.2 of the textbook Spiking Neuron Models.
For input {\displaystyle I(t)} that changes over time or is far away from the bifurcation point, it is preferable to work with the exponential integrate-and-fire model (if one wants to stay in the class of one-dimensional neuron models), because real neurons exhibit the nonlinearity of the exponential integrate-and-fire model.
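The quadratic integrate-and-fire equation above can be turned into a spiking simulation by adding a numerical peak-and-reset rule, which the differential equation itself does not specify. In the sketch below the peak, reset, and rest voltages are illustrative assumptions.

```python
def quadratic_iaf(I, dt=0.01, tau_m=10.0, R=1.0, I0=0.0,
                  E_m=-65.0, V_T=-50.0, V_reset=-70.0, V_peak=-30.0):
    """Quadratic integrate-and-fire, tau_m dV/dt = (I - I0)R + (V - E_m)(V - V_T),
    integrated with forward Euler plus an explicit reset rule. The reset,
    peak and rest values here are illustrative assumptions, not from the text.
    """
    v = E_m
    spikes = []
    for step, i_ext in enumerate(I):
        v += dt * ((i_ext - I0) * R + (v - E_m) * (v - V_T)) / tau_m
        if v >= V_peak:              # register a spike and reset the voltage
            spikes.append(step * dt)
            v = V_reset
    return spikes
```

For weak input the quadratic term pulls the voltage back toward rest and no spikes occur; once the drive exceeds the saddle-node bifurcation point, the voltage escapes past the threshold region and fires repetitively.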
== Sensory input-stimulus encoding neuron models ==
The models in this category were derived following experiments involving natural stimulation such as light, sound, touch, or odor. In these experiments, the spike pattern resulting from each stimulus presentation varies from trial to trial, but the averaged response from several trials often converges to a clear pattern. Consequently, the models in this category generate a probabilistic relationship between the input stimulus and spike occurrences. Importantly, the recorded neurons are often located several processing steps after the sensory neurons, so these models summarize the effects of the sequence of processing steps in a compact form.
=== The non-homogeneous Poisson process model (Siebert) ===
Siebert modeled the neuron spike firing pattern using a non-homogeneous Poisson process model, following experiments involving the auditory system. According to Siebert, the probability of a spiking event in the time interval {\displaystyle [t,t+\Delta _{t}]} is proportional to a non-negative function {\displaystyle g[s(t)]}, where {\displaystyle s(t)} is the raw stimulus:
{\displaystyle P_{\text{spike}}(t\in [t',t'+\Delta _{t}])=\Delta _{t}\cdot g[s(t)]}
Siebert considered several functions as {\displaystyle g[s(t)]}, including {\displaystyle g[s(t)]\propto s^{2}(t)} for low stimulus intensities.
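Sampling spikes from this non-homogeneous Poisson model is straightforward: at each small time step, a spike occurs with probability Δt·g[s(t)]. In the sketch below the quadratic form g[s] ∝ s² from the text is used, with an illustrative proportionality constant and stimulus.

```python
import random

def poisson_spikes(stim, gain=5.0, dt=0.001, seed=1):
    """Sample spikes from Siebert's non-homogeneous Poisson model with
    g[s(t)] = gain * s(t)^2 (the low-intensity choice mentioned in the text).
    The gain, time step and stimulus values are illustrative assumptions."""
    rng = random.Random(seed)
    spikes = []
    for step, s in enumerate(stim):
        rate = gain * s * s                # g[s(t)] ∝ s²(t)
        if rng.random() < rate * dt:       # P(spike in [t, t+dt]) = dt·g[s(t)]
            spikes.append(step * dt)
    return spikes

# a stimulus step: the expected firing rate jumps ninefold halfway through
stim = [1.0] * 5000 + [3.0] * 5000
spikes = poisson_spikes(stim)
```

Because the process is memoryless, the spike count in the second half simply tracks the higher rate, with no onset transient or refractoriness — exactly the shortcomings listed below.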
The main advantage of Siebert's model is its simplicity. The shortcoming of the model is its inability to properly reflect the following phenomena:
The transient enhancement of the neuronal firing activity in response to a step stimulus.
The saturation of the firing rate.
The values of the inter-spike-interval histogram at short intervals (close to zero).
These shortcomings are addressed by the age-dependent point process model and the two-state Markov Model.
=== Refractoriness and age-dependent point process model ===
Berry and Meister studied neuronal refractoriness using a stochastic model that predicts spikes as a product of two terms: a function f(s(t)) that depends on the time-dependent stimulus s(t), and a recovery function {\displaystyle w(t-{\hat {t}})} that depends on the time since the last spike:
{\displaystyle \rho (t)=f(s(t))w(t-{\hat {t}})}
The model is also called an inhomogeneous Markov interval (IMI) process. Similar models have been used for many years in auditory neuroscience. Since the model keeps memory of the last spike time it is non-Poisson and falls in the class of time-dependent renewal models. It is closely related to the model SRM0 with exponential escape rate. Importantly, it is possible to fit parameters of the age-dependent point process model so as to describe not just the PSTH response, but also the interspike-interval statistics.
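A minimal sampler for this age-dependent point process multiplies a stimulus-driven rate by a recovery factor that restarts at every spike. The linear f and saturating-exponential w below are illustrative assumptions; the model itself leaves their shapes free.

```python
import math
import random

def imi_spikes(stim, tau_rec=0.05, dt=0.001, seed=0):
    """Sample spikes from the age-dependent point process
    rho(t) = f(s(t)) * w(t - t_hat). Here f(s) = 20*s (a rate in Hz) and
    w(a) = 1 - exp(-a / tau_rec) is an assumed saturating recovery function;
    all functional forms and constants are illustrative."""
    rng = random.Random(seed)
    t_hat = -1e9                   # time of the last spike
    spikes = []
    for step, s in enumerate(stim):
        t = step * dt
        w = 1.0 - math.exp(-(t - t_hat) / tau_rec)   # recovery since t_hat
        rho = 20.0 * s * w                           # instantaneous rate
        if rng.random() < rho * dt:
            spikes.append(t)
            t_hat = t              # refractoriness: recovery restarts
    return spikes
```

Unlike the pure Poisson model, very short interspike intervals are suppressed because w is near zero immediately after each spike, which reshapes the interval histogram at short intervals.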
=== Linear-nonlinear Poisson cascade model and GLM ===
The linear-nonlinear-Poisson cascade model is a cascade of a linear filtering process followed by a nonlinear spike generation step. In the case that output spikes feed back, via a linear filtering process, we arrive at a model that is known in the neurosciences as the Generalized Linear Model (GLM). The GLM is mathematically equivalent to the spike response model (SRM) with escape noise; but whereas in the SRM the internal variables are interpreted as the membrane potential and the firing threshold, in the GLM the internal variables are abstract quantities that summarize the net effect of input (and recent output spikes) before spikes are generated in the final step.
=== The two-state Markov model (Nossenson & Messer) ===
The spiking neuron model by Nossenson & Messer produces the probability of the neuron firing a spike as a function of either an external or pharmacological stimulus. The model consists of a cascade of a receptor layer model and a spiking neuron model, as shown in Fig 4. The connection between the external stimulus to the spiking probability is made in two steps: First, a receptor cell model translates the raw external stimulus to neurotransmitter concentration, and then, a spiking neuron model connects neurotransmitter concentration to the firing rate (spiking probability). Thus, the spiking neuron model by itself depends on neurotransmitter concentration at the input stage.
An important feature of this model is the prediction of the neuron's firing rate pattern, which captures, using a low number of free parameters, the characteristic edge-emphasized response of neurons to a stimulus pulse, as shown in Fig. 5. The firing rate is identified both as a normalized probability for neural spike firing and as a quantity proportional to the current of neurotransmitters released by the cell. The expression for the firing rate takes the following form:
{\displaystyle R_{\text{fire}}(t)={\frac {P_{\text{spike}}(t;\Delta _{t})}{\Delta _{t}}}=[y(t)+R_{0}]\cdot P_{0}(t)}
where:
P0 is the probability of the neuron being "armed" and ready to fire. It is given by the following differential equation:
{\displaystyle {\dot {P}}_{0}=-[y(t)+R_{0}+R_{1}]\cdot P_{0}(t)+R_{1}}
P0 can generally be calculated recursively using the Euler method, but in the case of a pulse of stimulus it yields a simple closed-form expression.
y(t) is the input of the model and is interpreted as the neurotransmitter concentration in the cell's surroundings (in most cases glutamate). For an external stimulus it can be estimated through the receptor layer model:
{\displaystyle y(t)\simeq g_{\text{gain}}\cdot \langle s^{2}(t)\rangle ,}
with {\displaystyle \langle s^{2}(t)\rangle } being a short temporal average of stimulus power (given in Watts or other energy per unit time).
R0 corresponds to the intrinsic spontaneous firing rate of the neuron.
R1 is the recovery rate of the neuron from the refractory state.
Other predictions by this model include:
1) The averaged evoked response potential (ERP) due to the population of many neurons in unfiltered measurements resembles the firing rate.
2) The voltage variance of activity due to multiple neuron activity resembles the firing rate (also known as Multi-Unit-Activity power or MUA).
3) The inter-spike-interval probability distribution takes the form of a gamma-distribution-like function.
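The edge-emphasized response of the two-state model can be seen by integrating the equation for P0 with the Euler method, as the text suggests. In the sketch below the rates R0 and R1 and the stimulus amplitude are illustrative parameter choices.

```python
def firing_rate(y, R0=5.0, R1=50.0, dt=0.001):
    """Euler integration of the two-state model's armed-state probability:
    dP0/dt = -[y(t) + R0 + R1] * P0 + R1,  R_fire(t) = [y(t) + R0] * P0(t).
    R0, R1 and the stimulus scale are illustrative parameter choices."""
    p0 = R1 / (R0 + R1)               # stationary armed probability for y = 0
    rates = []
    for y_t in y:
        rates.append((y_t + R0) * p0)
        p0 += dt * (-(y_t + R0 + R1) * p0 + R1)
    return rates

# a neurotransmitter pulse: onset overshoot relaxing to a lower steady level
y = [0.0] * 200 + [200.0] * 800
r = firing_rate(y)
```

At stimulus onset the neuron is still fully "armed" (P0 high), so the rate overshoots; P0 is then depleted faster than the recovery rate R1 can restore it, and the rate relaxes to a lower sustained level, reproducing the edge-emphasized response of Fig. 5.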
== Pharmacological input stimulus neuron models ==
The models in this category produce predictions for experiments involving pharmacological stimulation.
=== Synaptic transmission (Koch & Segev) ===
According to the model by Koch and Segev, the response of a neuron to individual neurotransmitters can be modeled as an extension of the classical Hodgkin–Huxley model with both standard and nonstandard kinetic currents. Four neurotransmitters primarily influence the CNS. AMPA/kainate receptors are fast excitatory mediators, while NMDA receptors mediate considerably slower currents. Fast inhibitory currents go through GABAA receptors, while GABAB receptors are mediated by secondary G-protein-activated potassium channels. This range of mediation produces the following current dynamics:
{\displaystyle I_{\mathrm {AMPA} }(t,V)={\bar {g}}_{\mathrm {AMPA} }\cdot [O]\cdot (V(t)-E_{\mathrm {AMPA} })}
{\displaystyle I_{\mathrm {NMDA} }(t,V)={\bar {g}}_{\mathrm {NMDA} }\cdot B(V)\cdot [O]\cdot (V(t)-E_{\mathrm {NMDA} })}
{\displaystyle I_{\mathrm {GABA_{A}} }(t,V)={\bar {g}}_{\mathrm {GABA_{A}} }\cdot ([O_{1}]+[O_{2}])\cdot (V(t)-E_{\mathrm {Cl} })}
{\displaystyle I_{\mathrm {GABA_{B}} }(t,V)={\bar {g}}_{\mathrm {GABA_{B}} }\cdot {\tfrac {[G]^{n}}{[G]^{n}+K_{\mathrm {d} }}}\cdot (V(t)-E_{\mathrm {K} })}
where ḡ is the maximal conductance (around 1 S) and E is the equilibrium potential of the given ion or transmitter (AMPA, NMDA, Cl, or K), while [O] describes the fraction of open receptors. For NMDA, there is a significant effect of magnesium block that depends sigmoidally on the concentration of intracellular magnesium via B(V). For GABAB, [G] is the concentration of the G-protein, and Kd describes the dissociation of G in binding to the potassium gates.
The dynamics of this more complicated model have been well-studied experimentally and produce important results in terms of very quick synaptic potentiation and depression, that is, fast short-term learning.
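The four current equations can be evaluated directly once the gating quantities are known; the sketch below simply computes them at a given membrane voltage. The conductances, reversal potentials, Hill exponent n, and Kd are illustrative placeholders, and the open fractions, G-protein concentration, and magnesium-block factor B would come from kinetic schemes not modeled here.

```python
def synaptic_currents(V, O_ampa, O_nmda, O1_gaba, O2_gaba, G, B,
                      g_ampa=1.0, g_nmda=1.0, g_gaba_a=1.0, g_gaba_b=1.0,
                      E_ampa=0.0, E_nmda=0.0, E_cl=-70.0, E_k=-90.0,
                      n=4, Kd=100.0):
    """Evaluate the four Koch-Segev synaptic currents at membrane voltage V.

    Parameter values are illustrative placeholders; B is the voltage-dependent
    magnesium-block factor for NMDA, and the open fractions O_* and G-protein
    concentration G are assumed given by external kinetic schemes.
    """
    i_ampa = g_ampa * O_ampa * (V - E_ampa)
    i_nmda = g_nmda * B * O_nmda * (V - E_nmda)
    i_gaba_a = g_gaba_a * (O1_gaba + O2_gaba) * (V - E_cl)
    i_gaba_b = g_gaba_b * (G ** n / (G ** n + Kd)) * (V - E_k)
    return i_ampa, i_nmda, i_gaba_a, i_gaba_b
```

Each current vanishes at its own reversal potential and changes sign across it, which is what makes AMPA/NMDA depolarizing and the GABA currents hyperpolarizing at typical membrane voltages.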
The stochastic model by Nossenson and Messer translates neurotransmitter concentration at the input stage to the probability of releasing neurotransmitter at the output stage. For a more detailed description of this model, see the Two state Markov model section above.
== HTM neuron model ==
The HTM neuron model was developed by Jeff Hawkins and researchers at Numenta and is based on a theory called Hierarchical Temporal Memory, originally described in the book On Intelligence. It is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the human brain.
== Applications ==
Spiking neuron models are used in a variety of applications that need encoding into or decoding from neuronal spike trains in the context of neuroprostheses and brain-computer interfaces, such as retinal prostheses or artificial limb control and sensation. Applications are not part of this article; for more information on this topic please refer to the main article.
== Relation between artificial and biological neuron models ==
The most basic model of a neuron consists of an input with some synaptic weight vector and an activation function or transfer function inside the neuron determining output. This is the basic structure used for artificial neurons, which in a neural network often looks like
{\displaystyle y_{i}=\varphi \left(\sum _{j}w_{ij}x_{j}\right)}
where yi is the output of the ith neuron, xj is the jth input neuron signal, wij is the synaptic weight (or strength of connection) between neurons i and j, and φ is the activation function. While this model has seen success in machine-learning applications, it is a poor model for real (biological) neurons because it lacks time-dependence in input and output.
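The formula above amounts to a single weighted sum passed through a nonlinearity; a minimal sketch makes the contrast with the spiking models explicit (tanh is just one common choice of φ):

```python
import math

def artificial_neuron(x, w, phi=math.tanh):
    """Static artificial neuron y = phi(sum_j w_j x_j). Unlike the spiking
    models above it has no notion of time, so repeating the same input
    always yields exactly the same output."""
    return phi(sum(wj * xj for wj, xj in zip(w, x)))
```

There is no state carried between calls: no membrane voltage, no refractory memory, no spike train, which is precisely the time-dependence the text says is missing.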
When an input is switched on at a time t and kept constant thereafter, biological neurons emit a spike train. Importantly, this spike train is not regular but exhibits a temporal structure characterized by adaptation, bursting, or initial bursting followed by regular spiking. Generalized integrate-and-fire models such as the Adaptive Exponential Integrate-and-Fire model, the spike response model, or the (linear) adaptive integrate-and-fire model can capture these neuronal firing patterns.
Moreover, neuronal input in the brain is time-dependent. Time-dependent input is transformed by complex linear and nonlinear filters into a spike train in the output. Again, the spike response model or the adaptive integrate-and-fire model enables prediction of the spike train in the output for arbitrary time-dependent input, whereas an artificial neuron or a simple leaky integrate-and-fire model does not.
If we take the Hodgkin–Huxley model as a starting point, generalized integrate-and-fire models can be derived systematically in a step-by-step simplification procedure. This has been shown explicitly for the exponential integrate-and-fire model and the spike response model.
In the case of modeling a biological neuron, physical analogs are used in place of abstractions such as "weight" and "transfer function". A neuron is filled and surrounded with water-containing ions, which carry electric charge. The neuron is bound by an insulating cell membrane and can maintain a concentration of charged ions on either side that determines a capacitance Cm. The firing of a neuron involves the movement of ions into the cell, that occurs when neurotransmitters cause ion channels on the cell membrane to open. We describe this by a physical time-dependent current I(t). With this comes a change in voltage, or the electrical potential energy difference between the cell and its surroundings, which is observed to sometimes result in a voltage spike called an action potential which travels the length of the cell and triggers the release of further neurotransmitters. The voltage, then, is the quantity of interest and is given by Vm(t).
If the input current is constant, most neurons emit, after some time of adaptation or initial bursting, a regular spike train. The frequency of regular firing in response to a constant current I is described by the frequency-current relation, which corresponds to the transfer function {\displaystyle \varphi } of artificial neural networks. Similarly, for all spiking neuron models, the transfer function {\displaystyle \varphi } can be calculated numerically (or analytically).
== Cable theory and compartmental models ==
All of the above deterministic models are point-neuron models because they do not consider the spatial structure of a neuron. However, the dendrite contributes to transforming input into output. Point-neuron models are a valid description in three cases: (i) if input current is directly injected into the soma; (ii) if synaptic input arrives predominantly at or close to the soma (closeness is defined by the length scale {\displaystyle \lambda } introduced below); (iii) if synapses arrive anywhere on the dendrite, but the dendrite is completely linear. In the last case, the cable acts as a linear filter; these linear filter properties can be included in the formulation of generalized integrate-and-fire models such as the spike response model.
The filter properties can be calculated from a cable equation.
Let us consider a cell membrane in the form of a cylindrical cable. The position along the cable is denoted by x and the voltage across the cell membrane by V. The cable is characterized by a longitudinal resistance {\displaystyle r_{l}} per unit length and a membrane resistance {\displaystyle r_{m}}. If everything is linear, the voltage changes as a function of time and position along the cable. We introduce a length scale {\displaystyle \lambda ^{2}={r_{m}}/{r_{l}}} and a time constant {\displaystyle \tau =c_{m}r_{m}}. The cable equation can now be written in its perhaps best-known form:
{\displaystyle \tau {\frac {\partial V(t,x)}{\partial t}}=\lambda ^{2}{\frac {\partial ^{2}V(t,x)}{\partial x^{2}}}-V(t,x)}
The above cable equation is valid for a single cylindrical cable.
Linear cable theory describes the dendritic arbor of a neuron as a cylindrical structure undergoing a regular pattern of bifurcation, like branches in a tree. For a single cylinder or an entire tree, the static input conductance at the base (where the tree meets the cell body or any such boundary) is defined as
{\displaystyle G_{in}={\frac {G_{\infty }\tanh(L)+G_{L}}{1+(G_{L}/G_{\infty })\tanh(L)}}},
where L is the electrotonic length of the cylinder, which depends on its length, diameter, and resistance. A simple recursive algorithm scales linearly with the number of branches and can be used to calculate the effective conductance of the tree. This is given by
{\displaystyle G_{D}=G_{m}A_{D}\tanh(L_{D})/L_{D}}
where AD = πld is the total surface area of the tree of total length l, and LD is its total electrotonic length. For an entire neuron in which the cell body conductance is GS and the membrane conductance per unit area is Gmd = Gm / A, we find the total neuron conductance GN for n dendrite trees by adding up all tree and soma conductances, given by
{\displaystyle G_{N}=G_{S}+\sum _{j=1}^{n}A_{D_{j}}F_{dga_{j}},}
where we can find the general correction factor Fdga experimentally by noting GD = GmdADFdga.
The linear cable model makes several simplifications to give closed analytic results, namely that the dendritic arbor must branch in diminishing pairs in a fixed pattern and that dendrites are linear. A compartmental model allows for any desired tree topology with arbitrary branches and lengths, as well as arbitrary nonlinearities. It is essentially a discretized computational implementation of nonlinear dendrites.
Each piece, or compartment, of a dendrite is modeled by a straight cylinder of arbitrary length l and diameter d, which connects with fixed resistance to any number of branching cylinders. We define the conductance ratio of the ith cylinder as Bi = Gi / G∞, where
{\displaystyle G_{\infty }={\tfrac {\pi d^{3/2}}{2{\sqrt {R_{i}R_{m}}}}}}
and Ri is the resistance between the current compartment and the next. We obtain a series of equations for conductance ratios in and out of a compartment by making corrections to the normal dynamic Bout,i = Bin,i+1, as
{\displaystyle B_{\mathrm {out} ,i}={\frac {B_{\mathrm {in} ,i+1}(d_{i+1}/d_{i})^{3/2}}{\sqrt {R_{\mathrm {m} ,i+1}/R_{\mathrm {m} ,i}}}}}
{\displaystyle B_{\mathrm {in} ,i}={\frac {B_{\mathrm {out} ,i}+\tanh X_{i}}{1+B_{\mathrm {out} ,i}\tanh X_{i}}}}
{\displaystyle B_{\mathrm {out,par} }={\frac {B_{\mathrm {in,dau1} }(d_{\mathrm {dau1} }/d_{\mathrm {par} })^{3/2}}{\sqrt {R_{\mathrm {m,dau1} }/R_{\mathrm {m,par} }}}}+{\frac {B_{\mathrm {in,dau2} }(d_{\mathrm {dau2} }/d_{\mathrm {par} })^{3/2}}{\sqrt {R_{\mathrm {m,dau2} }/R_{\mathrm {m,par} }}}}+\ldots }
where the last equation deals with parents and daughters at branches, and {\displaystyle X_{i}={\tfrac {l_{i}{\sqrt {4R_{i}}}}{\sqrt {d_{i}R_{m}}}}}. We can iterate these equations through the tree until we reach the point where the dendrites connect to the cell body (soma), where the conductance ratio is Bin,stem. Then our total neuron conductance for static input is given by
{\displaystyle G_{N}={\frac {A_{\mathrm {soma} }}{R_{\mathrm {m,soma} }}}+\sum _{j}B_{\mathrm {in,stem} ,j}G_{\infty ,j}.}
Importantly, static input is a very special case. In biology, inputs are time-dependent. Moreover, dendrites are not always linear.
Compartmental models make it possible to include nonlinearities via ion channels positioned at arbitrary locations along the dendrites. For static inputs, it is sometimes possible to reduce the number of compartments (increasing the computational speed) and still retain the salient electrical characteristics.
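The inward conductance-ratio recursion above can be checked with a small sketch. For a chain of compartments with equal diameter and membrane resistance (so the diameter correction factor is 1 and B_out,i = B_in,i+1) with a sealed tip, iterating B_in = (B_out + tanh X)/(1 + B_out·tanh X) reproduces the single-cylinder result B_in = tanh(ΣX), because the update is exactly the addition formula for tanh.

```python
import math

def b_in(b_out, X):
    """Conductance ratio at the near end of one compartment of electrotonic
    length X, given the ratio b_out seen at its far end (the tanh recursion
    from the text)."""
    t = math.tanh(X)
    return (b_out + t) / (1.0 + b_out * t)

def chain_input_ratio(Xs, b_tip=0.0):
    """Iterate the recursion from a sealed dendritic tip (b_tip = 0) back
    toward the soma along a chain of compartments with equal diameter and
    membrane resistance, so B_out,i = B_in,i+1 with no correction factor."""
    b = b_tip
    for X in reversed(Xs):
        b = b_in(b, X)
    return b
```

This is the linear-cost tree traversal mentioned earlier: each compartment is visited once, and branch points would simply sum the daughters' contributions before continuing toward the soma.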
== Conjectures regarding the role of the neuron in the wider context of the brain principle of operation ==
=== The neurotransmitter-based energy detection scheme ===
The neurotransmitter-based energy detection scheme suggests that the neural tissue chemically executes a Radar-like detection procedure.
As shown in Fig. 6, the key idea of the conjecture is to account for neurotransmitter concentration, neurotransmitter generation, and neurotransmitter removal rates as the important quantities in executing the detection task, while treating the measured electrical potentials as a side effect that only under certain conditions coincides with the functional purpose of each step. The detection scheme is similar to a radar-like "energy detection" scheme because it includes signal squaring, temporal summation, and a threshold switch mechanism, just like the energy detector, but it also includes a unit that emphasizes stimulus edges and a variable memory length. According to this conjecture, the physiological equivalent of the energy test statistic is neurotransmitter concentration, and the firing rate corresponds to neurotransmitter current. The advantage of this interpretation is that it leads to a unit-consistent explanation that allows bridging between electrophysiological measurements, biochemical measurements, and psychophysical results.
The reviewed evidence suggests the following association between functionality and histological classification:
Stimulus squaring is likely to be performed by receptor cells.
Stimulus edge emphasizing and signal transduction is performed by neurons.
Temporal accumulation of neurotransmitters is performed by glial cells. Short-term neurotransmitter accumulation is likely to occur also in some types of neurons.
Logical switching is executed by glial cells, and it results from exceeding a threshold level of neurotransmitter concentration. This threshold crossing is also accompanied by a change in neurotransmitter leak rate.
Physical all-or-none movement switching is due to muscle cells and results from exceeding a certain neurotransmitter concentration threshold in the muscle's surroundings.
Note that although the electrophysiological signals in Fig. 6 are often similar to the functional signal (signal power / neurotransmitter concentration / muscle force), there are some stages in which the electrical observation differs from the functional purpose of the corresponding step. In particular, Nossenson et al. suggested that glia threshold crossing has a completely different functional operation compared to the radiated electrophysiological signal, and that the latter might only be a side effect of glia break.
== General comments regarding the modern perspective of scientific and engineering models ==
The models above are still idealizations. Corrections must be made for the increased membrane surface area given by numerous dendritic spines, temperatures significantly hotter than room-temperature experimental data, and nonuniformity in the cell's internal structure. Certain observed effects do not fit into some of these models. For instance, the temperature cycling (with minimal net temperature increase) of the cell membrane during action potential propagation is not compatible with models that rely on modeling the membrane as a resistance that must dissipate energy when current flows through it. The transient thickening of the cell membrane during action potential propagation is also not predicted by these models, nor is the changing capacitance and voltage spike that results from this thickening incorporated into these models. The action of some anesthetics such as inert gases is problematic for these models as well. New models, such as the soliton model, attempt to explain these phenomena but are less developed than older models and have yet to be widely applied.
Modern views regarding the role of the scientific model suggest that "All models are wrong but some are useful" (Box and Draper, 1987, Gribbin, 2009; Paninski et al., 2009).
Recent conjecture suggests that each neuron might function as a collection of independent threshold units. It is suggested that a neuron could be anisotropically activated following the origin of its arriving signals to the membrane, via its dendritic trees. The spike waveform was also proposed to be dependent on the origin of the stimulus.
== External links ==
Neuronal Dynamics: from single neurons to networks and models of cognition (W. Gerstner, W. Kistler, R. Naud, L. Paninski, Cambridge University Press, 2014). In particular, Chapters 6–10, html online version.
Spiking Neuron Models (W. Gerstner and W. Kistler, Cambridge University Press, 2002)
== See also ==
Binding neuron
Bayesian approaches to brain function
Brain-computer interfaces
Free energy principle
Models of neural computation
Neural coding
Neural oscillation
Quantitative models of the action potential
Spiking neural network
== References == | Wikipedia/Spiking_neuron_model |
Pulse-coupled networks or pulse-coupled neural networks (PCNNs) are neural models proposed by modeling a cat's visual cortex, and developed for high-performance biomimetic image processing.
In 1989, Eckhorn introduced a neural model to emulate the mechanism of the cat's visual cortex. The Eckhorn model provided a simple and effective tool for studying the visual cortex of small mammals, and was soon recognized as having significant application potential in image processing.
In 1994, Johnson adapted the Eckhorn model to an image processing algorithm, calling this algorithm a pulse-coupled neural network.
The basic property of Eckhorn's linking-field model (LFM) is the coupling term: the primary input is modulated by a biased offset factor driven by the linking input. These drive a threshold variable that decays from an initial high value. When the threshold drops below zero it is reset to a high value and the process starts over. This differs from the standard integrate-and-fire neural model, which accumulates the input until it passes an upper limit and effectively "shorts out" to cause the pulse.
LFM uses this difference to sustain pulse bursts, something the standard model does not do at the single-neuron level. It is worth understanding, however, that a detailed analysis of the standard model must include a shunting term, due to the floating voltage levels in the dendritic compartment(s); in turn, this causes an elegant multiple-modulation effect that enables a true higher-order network (HON).
A PCNN is a two-dimensional neural network. Each neuron in the network corresponds to one pixel in an input image, receiving its corresponding pixel's color information (e.g. intensity) as an external stimulus. Each neuron also connects with its neighboring neurons, receiving local stimuli from them. The external and local stimuli are combined in an internal activation system, which accumulates the stimuli until it exceeds a dynamic threshold, resulting in a pulse output. Through iterative computation, PCNN neurons produce temporal series of pulse outputs. The temporal series of pulse outputs contain information of input images and can be used for various image processing applications, such as image segmentation and feature generation. Compared with conventional image processing means, PCNNs have several significant merits, including robustness against noise, independence of geometric variations in input patterns, capability of bridging minor intensity variations in input patterns, etc.
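The accumulate-until-threshold dynamics described above can be sketched as a simplified PCNN iteration. The parameter names and values below (linking strength `beta`, threshold decay `alpha`, post-pulse threshold jump `V`) are illustrative assumptions rather than a canonical parameterization:

```python
import numpy as np

def neighbor_sum(Y):
    """Sum of each pixel's 8 neighbors (zero padding at the border)."""
    P = np.pad(Y, 1)
    return (P[:-2, :-2] + P[:-2, 1:-1] + P[:-2, 2:] +
            P[1:-1, :-2]               + P[1:-1, 2:] +
            P[2:, :-2]  + P[2:, 1:-1]  + P[2:, 2:])

def pcnn_step(S, Y, theta, beta=0.5, alpha=0.2, V=20.0):
    """One iteration of a simplified PCNN.

    S     : external stimulus (normalized pixel intensities)
    Y     : binary pulse output from the previous iteration
    theta : dynamic threshold from the previous iteration
    """
    L = neighbor_sum(Y)                       # local stimuli from neighbors
    U = S * (1.0 + beta * L)                  # modulatory internal activation
    Y_new = (U > theta).astype(float)         # pulse where activation exceeds threshold
    theta_new = theta * np.exp(-alpha) + V * Y_new   # threshold decays, jumps after a pulse
    return Y_new, theta_new
```

Iterating `pcnn_step` over a normalized image yields one binary pulse image per step; pixels of similar intensity tend to fire in the same iterations, which is what the segmentation and feature-generation applications exploit.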
A simplified PCNN called a spiking cortical model was developed in 2009.
== Applications ==
PCNNs are useful for image processing, as discussed in a book by Thomas Lindblad and Jason M. Kinser.
PCNNs have been used in a variety of image processing applications, including image segmentation, pattern recognition, feature generation, face extraction, motion detection, region growing, image denoising, and image enhancement.
Multidimensional pulse image processing of chemical structure data using PCNN has been discussed by Kinser, et al.
They have also been applied to an all pairs shortest path problem.
== References == | Wikipedia/Pulse-coupled_networks |
In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert.
Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural programming code. Expert systems were among the first truly successful forms of AI software. They were created in the 1970s and then proliferated in the 1980s, being then widely regarded as the future of AI — before the advent of successful artificial neural networks.
An expert system is divided into two subsystems: 1) a knowledge base, which represents facts and rules; and 2) an inference engine, which applies the rules to the known facts to deduce new facts, and can include explaining and debugging abilities.
== History ==
=== Early development ===
Soon after the dawn of modern computers in the late 1940s and early 1950s, researchers started realizing the immense potential these machines had for modern society. One of the first challenges was to make such machines able to “think” like humans – in particular, making these machines able to make important decisions the way humans do. The medical–healthcare field presented the tantalizing challenge of enabling these machines to make medical diagnostic decisions.
Thus, in the late 1950s, right after the information age had fully arrived, researchers started experimenting with the prospect of using computer technology to emulate human decision making. For example, biomedical researchers started creating computer-aided systems for diagnostic applications in medicine and biology. These early diagnostic systems used patients’ symptoms and laboratory test results as inputs to generate a diagnostic outcome.
These systems were often described as the early forms of expert systems. However, researchers realized that there were significant limits when using traditional methods such as flow charts, statistical pattern matching, or probability theory.
=== Formal introduction and later developments ===
This situation gradually led to the development of expert systems, which used knowledge-based approaches. Early expert systems in medicine included MYCIN, Internist-I and, in the mid-1980s, CADUCEUS.
Expert systems were formally introduced around 1965 by the Stanford Heuristic Programming Project led by Edward Feigenbaum, who is sometimes termed the "father of expert systems"; other key early contributors were Bruce Buchanan and Randall Davis. The Stanford researchers tried to identify domains where expertise was highly valued and complex, such as diagnosing infectious diseases (Mycin) and identifying unknown organic molecules (Dendral). The idea that "intelligent systems derive their power from the knowledge they possess rather than from the specific formalisms and inference schemes they use" – as Feigenbaum said – was at the time a significant step forward, since past research had focused on heuristic computational methods, culminating in attempts to develop very general-purpose problem solvers (foremost among them the joint work of Allen Newell and Herbert Simon). Expert systems became some of the first truly successful forms of artificial intelligence (AI) software.
Research on expert systems was also active in Europe. In the US, the focus tended to be on the use of production rule systems, first on systems hard coded on top of Lisp programming environments and then on expert system shells developed by vendors such as Intellicorp. In Europe, research focused more on systems and expert systems shells developed in Prolog. The advantage of Prolog systems was that they employed a form of rule-based programming that was based on formal logic.
One such early expert system shell based on Prolog was APES.
One of the first use cases of Prolog and APES was in the legal area, namely the encoding of a large portion of the British Nationality Act. Lance Elliot wrote: "The British Nationality Act was passed in 1981 and shortly thereafter was used as a means of showcasing the efficacy of using Artificial Intelligence (AI) techniques and technologies, doing so to explore how the at-the-time newly enacted statutory law might be encoded into a computerized logic-based formalization. A now oft-cited research paper entitled “The British Nationality Act as a Logic Program” was published in 1986 and subsequently became a hallmark for subsequent work in AI and the law."
In the 1980s, expert systems proliferated. Universities offered expert system courses and two-thirds of the Fortune 500 companies applied the technology in daily business activities. Interest was international with the Fifth Generation Computer Systems project in Japan and increased research funding in Europe.
In 1981, the first IBM PC, with the PC DOS operating system, was introduced. The imbalance between the high affordability of the relatively powerful chips in the PC, compared to the much more expensive cost of processing power in the mainframes that dominated the corporate IT world at the time, created a new type of architecture for corporate computing, termed the client–server model. Calculations and reasoning could be performed at a fraction of the price of a mainframe using a PC. This model also enabled business units to bypass corporate IT departments and directly build their own applications. As a result, client-server had a tremendous impact on the expert systems market. Expert systems were already outliers in much of the business world, requiring new skills that many IT departments did not have and were not eager to develop. They were a natural fit for new PC-based shells that promised to put application development into the hands of end users and experts. Until then, the main development environment for expert systems had been high end Lisp machines from Xerox, Symbolics, and Texas Instruments. With the rise of the PC and client-server computing, vendors such as Intellicorp and Inference Corporation shifted their priorities to developing PC-based tools. Also, new vendors, often financed by venture capital (such as Aion Corporation, Neuron Data, Exsys, VP-Expert, and many others), started appearing regularly.
The first expert system to be used in a design capacity for a large-scale product was the Synthesis of Integral Design (SID) software program, developed in 1982. Written in Lisp, SID generated 93% of the VAX 9000 CPU logic gates. Input to the software was a set of rules created by several expert logic designers. SID expanded the rules and generated software logic synthesis routines many times the size of the rules themselves. Surprisingly, the combination of these rules resulted in an overall design that exceeded the capabilities of the experts themselves, and in many cases outperformed their human counterparts. While some rules contradicted others, top-level control parameters for speed and area provided the tie-breaker. The program was highly controversial but was used nevertheless due to project budget constraints. It was terminated by logic designers after the VAX 9000 project completion.
Until the mid-1970s, expectations of what expert systems could accomplish in many fields tended to be extremely optimistic. At the start of these early studies, researchers were hoping to develop entirely automatic (i.e., completely computerized) expert systems. People's expectations of what computers could do were frequently too idealistic. This situation changed radically after Richard M. Karp published his breakthrough paper “Reducibility among Combinatorial Problems” in the early 1970s. Thanks to the work of Karp and other scholars, like Hubert L. Dreyfus, it became clear that there are certain limits and possibilities when one designs computer algorithms; these findings describe what computers can do and what they cannot do. Many of the computational problems related to this type of expert system have certain pragmatic limits. These findings laid the groundwork that led to the next developments in the field.
In the 1990s and beyond, the term expert system and the idea of a standalone AI system mostly dropped from the IT lexicon. There are two interpretations of this. One is that "expert systems failed": the IT world moved on because expert systems did not deliver on their overhyped promise. The other is the mirror opposite, that expert systems were simply victims of their success: as IT professionals grasped concepts such as rule engines, such tools migrated from being standalone tools for developing special purpose expert systems, to being one of many standard tools. Other researchers suggest that expert systems caused inter-company power struggles when the IT organization lost its exclusivity in software modifications to users or knowledge engineers.
In the first decade of the 2000s, there was a "resurrection" for the technology, while using the term rule-based systems, with significant success stories and adoption. Many of the leading major business application suite vendors (such as SAP, Siebel, and Oracle) integrated expert system abilities into their suite of products as a way to specify business logic. Rule engines are no longer simply for defining the rules an expert would use but for any type of complex, volatile, and critical business logic; they often go hand in hand with business process automation and integration environments.
=== Current approaches to expert systems ===
The limits of the prior types of expert systems prompted researchers to develop new types of approaches: more efficient, flexible, and powerful methods to simulate the human decision-making process. Some of the approaches that researchers have developed are based on new methods of artificial intelligence (AI), in particular machine learning and data mining approaches with a feedback mechanism. Recurrent neural networks often take advantage of such mechanisms. A related discussion appears in the Disadvantages section below.
Modern systems can incorporate new knowledge more easily, and thus update themselves easily. Such systems can generalize better from existing knowledge and deal with vast amounts of complex data; big data is a related subject here. Sometimes these types of expert systems are called "intelligent systems."
More recently, it can be argued that expert systems have moved into the area of business rules and business rules management systems.
== Software architecture ==
An expert system is an example of a knowledge-based system. Expert systems were the first commercial systems to use a knowledge-based architecture. In general, an expert system includes the following components: a knowledge base, an inference engine, an explanation facility, a knowledge acquisition facility, and a user interface.
The knowledge base represents facts about the world. In early expert systems such as Mycin and Dendral, these facts were represented mainly as flat assertions about variables. In later expert systems developed with commercial shells, the knowledge base took on more structure and used concepts from object-oriented programming. The world was represented as classes, subclasses, and instances and assertions were replaced by values of object instances. The rules worked by querying and asserting values of the objects.
The inference engine is an automated reasoning system that evaluates the current state of the knowledge-base, applies relevant rules, and then asserts new knowledge into the knowledge base. The inference engine may also include abilities for explanation, so that it can explain to a user the chain of reasoning used to arrive at a particular conclusion by tracing back over the firing of rules that resulted in the assertion.
There are mainly two modes for an inference engine: forward chaining and backward chaining. The different approaches are dictated by whether the inference engine is being driven by the antecedent (left hand side) or the consequent (right hand side) of the rule. In forward chaining an antecedent fires and asserts the consequent. For example, consider the following rule:
R1: Man(x) ⟹ Mortal(x)
A simple example of forward chaining would be to assert Man(Socrates) to the system and then trigger the inference engine. It would match R1 and assert Mortal(Socrates) into the knowledge base.
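A minimal forward-chaining loop over ground facts makes this concrete. The rule encoding below (a list of antecedents paired with one consequent) is an illustrative simplification that omits pattern variables:

```python
def forward_chain(facts, rules):
    """Fire every rule whose antecedents all hold, asserting the
    consequent into the fact base, until no new facts are produced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if set(antecedents) <= facts and consequent not in facts:
                facts.add(consequent)   # assert the new fact
                changed = True
    return facts

# R1, instantiated for Socrates: Man(Socrates) => Mortal(Socrates)
rules = [([("Man", "Socrates")], ("Mortal", "Socrates"))]
facts = forward_chain({("Man", "Socrates")}, rules)
# facts now contains ("Mortal", "Socrates")
```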
Backward chaining is a bit less straightforward. In backward chaining the system looks at possible conclusions and works backward to see if they might be true. So if the system was trying to determine if Mortal(Socrates) is true, it would find R1 and query the knowledge base to see if Man(Socrates) is true. One of the early innovations of expert system shells was to integrate inference engines with a user interface. This could be especially powerful with backward chaining. If the system needs to know a particular fact but does not, then it can simply generate an input screen and ask the user if the information is known. So in this example, it could use R1 to ask the user if Socrates was a man and then use that new information accordingly.
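The goal-directed search, including the shell-style prompt for unknown facts, can be sketched as follows. The `ask` callback stands in for the generated input screen and is an illustrative assumption; the sketch also omits cycle detection:

```python
def backward_chain(goal, facts, rules, ask=None):
    """Prove `goal` directly from facts, via a rule, or by asking the user.

    facts : set of ground atoms, e.g. ("Man", "Socrates")
    rules : list of (antecedents, consequent) pairs
    ask   : optional callback simulating the shell's input screen
    """
    if goal in facts:
        return True
    for antecedents, consequent in rules:
        if consequent == goal and all(
                backward_chain(a, facts, rules, ask) for a in antecedents):
            facts.add(goal)
            return True
    if ask is not None and ask(goal):
        facts.add(goal)          # the user supplied the missing fact
        return True
    return False

rules = [([("Man", "Socrates")], ("Mortal", "Socrates"))]
facts = set()
# The "user" confirms Socrates is a man when the system asks:
proved = backward_chain(("Mortal", "Socrates"), facts, rules,
                        ask=lambda g: g == ("Man", "Socrates"))
```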
The use of rules to explicitly represent knowledge also enabled explanation abilities. In the simple example above if the system had used R1 to assert that Socrates was Mortal and a user wished to understand why Socrates was mortal they could query the system and the system would look back at the rules which fired to cause the assertion and present those rules to the user as an explanation. In English, if the user asked "Why is Socrates Mortal?" the system would reply "Because all men are mortal and Socrates is a man". A significant area for research was the generation of explanations from the knowledge base in natural English rather than simply by showing the more formal but less intuitive rules.
As expert systems evolved, many new techniques were incorporated into various types of inference engines. Some of the most important of these were:
Truth maintenance. These systems record the dependencies in a knowledge-base so that when facts are altered, dependent knowledge can be altered accordingly. For example, if the system learns that Socrates is no longer known to be a man it will revoke the assertion that Socrates is mortal.
Hypothetical reasoning. In this, the knowledge base can be divided up into many possible views, a.k.a. worlds. This allows the inference engine to explore multiple possibilities in parallel. For example, the system may want to explore the consequences of both assertions, what will be true if Socrates is a Man and what will be true if he is not?
Uncertainty systems. One of the first extensions of simply using rules to represent knowledge was to associate a probability with each rule: rather than asserting that Socrates is mortal, the system asserts that Socrates may be mortal with some probability value. Simple probabilities were extended in some systems with sophisticated mechanisms for uncertain reasoning, such as fuzzy logic and combinations of probabilities.
Ontology classification. With the addition of object classes to the knowledge base, a new type of reasoning was possible. Along with reasoning simply about object values, the system could also reason about object structures. In this simple example, Man can represent an object class and R1 can be redefined as a rule that defines the class of all men. These types of special purpose inference engines are termed classifiers. Although they were not highly used in expert systems, classifiers are very powerful for unstructured volatile domains, and are a key technology for the Internet and the emerging Semantic Web.
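As a concrete instance of the uncertainty mechanisms above, MYCIN-style certainty factors combine evidence from multiple rules that support the same conclusion. The sketch below covers only the simplified positive-evidence case:

```python
def combine_cf(cf1, cf2):
    """Combine two positive certainty factors (each in [0, 1]) that
    support the same conclusion: evidence accumulates monotonically
    but can never exceed full certainty."""
    return cf1 + cf2 * (1.0 - cf1)

# Two rules independently suggest Mortal(Socrates) with partial certainty:
cf = combine_cf(0.6, 0.5)   # 0.6 + 0.5 * (1 - 0.6) = 0.8
```

The combination is order-independent and saturates below 1.0, which matches the intuition that several weakly supporting rules together give strong, but never absolute, confidence.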
== Advantages ==
The goal of knowledge-based systems is to make the critical information required for the system to work explicit rather than implicit. In a traditional computer program, the logic is embedded in code that can typically only be reviewed by an IT specialist. With an expert system, the goal was to specify the rules in a format that was intuitive and easily understood, reviewed, and even edited by domain experts rather than IT experts. The benefits of this explicit knowledge representation were rapid development and ease of maintenance.
Ease of maintenance is the most obvious benefit. This was achieved in two ways. First, by removing the need to write conventional code, many of the normal problems that can be caused by even small changes to a system could be avoided with expert systems. Essentially, the logical flow of the program (at least at the highest level) was simply a given for the system: simply invoke the inference engine. This was also a reason for the second benefit: rapid prototyping. With an expert system shell it was possible to enter a few rules and have a prototype developed in days rather than the months or years typically associated with complex IT projects.
A claim for expert system shells that was often made was that they removed the need for trained programmers and that experts could develop systems themselves. In reality, this was seldom if ever true. While the rules for an expert system were more comprehensible than typical computer code, they still had a formal syntax where a misplaced comma or other character could cause havoc as with any other computer language. Also, as expert systems moved from prototypes in the lab to deployment in the business world, issues of integration and maintenance became far more critical. Inevitably demands to integrate with, and take advantage of, large legacy databases and systems arose. To accomplish this, integration required the same skills as any other type of system.
Summing up the benefits of using expert systems, the following can be highlighted:
Increased availability and reliability: Expertise can be accessed on any computer hardware and the system always completes responses on time.
Multiple expertise: Several expert systems can be run simultaneously to solve a problem and gain a higher level of expertise than a human expert.
Explanation: Expert systems always describe how the problem was solved.
Fast response: Expert systems are fast and able to solve problems in real time.
Reduced cost: The cost of expertise for each user is significantly reduced.
== Disadvantages ==
The most common disadvantage cited for expert systems in the academic literature is the knowledge acquisition problem. Obtaining the time of domain experts for any software application is always difficult, but for expert systems it was especially difficult because the experts were by definition highly valued and in constant demand by the organization. As a result of this problem, a great deal of research in the later years of expert systems was focused on tools for knowledge acquisition, to help automate the process of designing, debugging, and maintaining rules defined by experts. However, when looking at the life-cycle of expert systems in actual use, other problems – essentially the same problems as those of any other large system – seem at least as critical as knowledge acquisition: integration, access to large databases, and performance.
Performance could be especially problematic because early expert systems were built using tools (such as earlier Lisp versions) that interpreted code expressions without first compiling them. This provided a powerful development environment, but with the drawback that it was virtually impossible to match the efficiency of the fastest compiled languages (such as C). System and database integration were difficult for early expert systems because the tools were mostly in languages and platforms that were neither familiar to nor welcome in most corporate IT environments – programming languages such as Lisp and Prolog, and hardware platforms such as Lisp machines and personal computers. As a result, much effort in the later stages of expert system tool development was focused on integrating with legacy environments such as COBOL and large database systems, and on porting to more standard platforms. These issues were resolved mainly by the client–server paradigm shift, as PCs were gradually accepted in the IT environment as a legitimate platform for serious business system development and as affordable minicomputer servers provided the processing power needed for AI applications.
Another major challenge of expert systems emerges when the size of the knowledge base increases. This causes the processing complexity to increase. For instance, when an expert system with 100 million rules was envisioned as the ultimate expert system, it became obvious that such a system would be too complex and would face too many computational problems. An inference engine would have to be able to process huge numbers of rules to reach a decision.
How to verify that decision rules are consistent with each other is also a challenge when there are too many rules. Usually such a problem leads to a satisfiability (SAT) formulation, which is the well-known NP-complete Boolean satisfiability problem. If we assume only binary variables, say n of them, then the corresponding search space is of size 2^n. Thus, the search space can grow exponentially.
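The exponential blow-up can be illustrated with a brute-force consistency check, which in the worst case must enumerate all 2^n assignments. Encoding each rule as a Boolean predicate over an assignment tuple is an illustrative simplification:

```python
from itertools import product

def satisfiable(rules, n):
    """Brute-force consistency check: is there any assignment of the
    n binary variables that satisfies every rule simultaneously?
    In the worst case, all 2**n assignments must be enumerated."""
    return any(all(rule(bits) for rule in rules)
               for bits in product([False, True], repeat=n))

# "x0 implies x1", together with "x0" and "not x1", is inconsistent:
rules = [lambda b: (not b[0]) or b[1],   # if x0 then x1
         lambda b: b[0],                 # x0 holds
         lambda b: not b[1]]             # x1 does not hold
consistent = satisfiable(rules, 2)       # False: no assignment works
```

For a few dozen variables this enumeration is already infeasible, which is why practical rule verification relies on SAT solvers and heuristics rather than exhaustive search.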
There are also questions on how to prioritize the use of the rules to operate more efficiently, or how to resolve ambiguities (for instance, if there are too many else-if sub-structures within one rule) and so on.
Other problems are related to the overfitting and overgeneralization effects when using known facts and trying to generalize to other cases not described explicitly in the knowledge base. Such problems exist with methods that employ machine learning approaches too.
Another problem related to the knowledge base is how to update its knowledge quickly and effectively. Deciding how to add a new piece of knowledge (i.e., where to place it among many rules) is also challenging. Modern approaches that rely on machine learning methods are easier in this regard.
Because of the above challenges, it became clear that new approaches to AI were required instead of rule-based technologies. These new approaches are based on the use of machine learning techniques, along with the use of feedback mechanisms.
The key challenges that expert systems in medicine face (if one considers computer-aided diagnostic systems as modern expert systems), and perhaps in other application domains as well, include issues related to aspects such as big data, existing regulations, healthcare practice, various algorithmic issues, and system assessment.
Finally, the following disadvantages of using expert systems can be summarized:
Expert systems have superficial knowledge, and a simple task can potentially become computationally expensive.
Expert systems require knowledge engineers to input the data, and data acquisition is very hard.
The expert system may choose the most inappropriate method for solving a particular problem.
Problems of ethics in the use of any form of AI are very relevant at present.
It is a closed world with specific knowledge, in which there is no deep perception of concepts and their interrelationships until an expert provides them.
== Applications ==
Hayes-Roth divides expert systems applications into 10 categories illustrated in the following table. The example applications were not in the original Hayes-Roth table, and some of them arose well afterward. Any application that is not footnoted is described in the Hayes-Roth book. Also, while these categories provide an intuitive framework to describe the space of expert systems applications, they are not rigid categories, and in some cases an application may show traits of more than one category.
Hearsay was an early attempt at solving voice recognition through an expert systems approach. For the most part this category of expert systems was not all that successful. Hearsay and all interpretation systems are essentially pattern recognition systems that look for patterns in noisy data; in the case of Hearsay, this meant recognizing phonemes in an audio stream. Other early examples were analyzing sonar data to detect Russian submarines. These kinds of systems proved much more amenable to a neural network AI solution than a rule-based approach.
CADUCEUS and MYCIN were medical diagnosis systems. The user describes their symptoms to the computer as they would to a doctor and the computer returns a medical diagnosis.
Dendral was a tool to study hypothesis formation in the identification of organic molecules. The general problem it solved—designing a solution given a set of constraints—was one of the most successful areas for early expert systems applied to business domains such as salespeople configuring Digital Equipment Corporation (DEC) VAX computers and mortgage loan application development.
SMH.PAL is an expert system for the assessment of students with multiple disabilities.
GARVAN-ES1 was a medical expert system, developed at the Garvan Institute of Medical Research, that provided automated clinical diagnostic comments on endocrine reports from a pathology laboratory. It was one of the first medical expert systems to go into routine clinical use internationally and the first expert system to be used for diagnosis daily in Australia. The system was written in C and ran on a PDP-11 in 64K of memory. It had 661 rules that were compiled, not interpreted.
Mistral is an expert system to monitor dam safety, developed in the 1990s by Ismes (Italy). It gets data from an automatic monitoring system and performs a diagnosis of the state of the dam. Its first copy, installed in 1992 on the Ridracoli Dam (Italy), is still operational 24/7/365. It has been installed on several dams in Italy and abroad (e.g., Itaipu Dam in Brazil), and on landslide sites under the name of Eydenet, and on monuments under the name of Kaleidos. Mistral is a registered trade mark of CESI.
== See also ==
AI winter
CLIPS
Constraint logic programming
Constraint satisfaction
Knowledge engineering
Learning classifier system
Rule-based machine learning
== References ==
=== Works cited ===
== External links ==
Expert System tutorial on Code Project | Wikipedia/Expert_systems |
Legal informatics is an area within information science.
The American Library Association defines informatics as "the study of the structure and properties of information, as well as the application of technology to the organization, storage, retrieval, and dissemination of information." Legal informatics, therefore, pertains to the application of informatics within the context of the legal environment and as such involves law-related organizations (e.g., law offices, courts, and law schools) and users of information and information technologies within these organizations.
== Policy issues ==
Policy issues in legal informatics arise from the use of informational technologies in the implementation of law, such as the use of subpoenas for information found in emails, search queries, and social networks. Policy approaches to legal informatics issues vary throughout the world. For example, European countries tend to require the destruction or anonymization of data so that it cannot be used for discovery.
== Technology ==
=== Cloud computing ===
The widespread introduction of cloud computing provides several benefits in delivering legal services. Legal service providers can use the Software as a Service model to earn a profit by charging customers a per-use or subscription fee. This model has several benefits over traditional bespoke services.
Software as a service is much more scalable. Traditional bespoke models require an attorney to spend more of a limited resource (their time) on each additional client. Using Software as a Service, a legal service provider can put in effort once to develop the product and then use a much less limited resource (cloud computing power) to provide service to each additional customer.
Software as a service can be used to complement traditional bespoke services by handling routine tasks, leaving an attorney free to concentrate on bespoke work.
Software as a service can be delivered more conveniently because it does not require the legal service provider to be available at the same time as the customer.
Software as a service also complicates the attorney-client relationship in a way that may have implications for attorney–client privilege. The traditional delivery model makes it easy to create delineations of when attorney-client privilege attaches and when it does not. But in more complex models of legal service delivery, other actors or automated processes may moderate the relationship between a client and their attorney, making it difficult to tell which communications should be legally privileged.
=== Artificial intelligence ===
Artificial intelligence is employed in online dispute resolution platforms that use optimization algorithms and blind-bidding. Artificial intelligence is also frequently employed in modeling the legal ontology, "an explicit, formal, and general specification of a conceptualization of properties of and relations between objects in a given domain".
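A blind-bidding round can be sketched as a simple threshold rule. The 30% tolerance and midpoint settlement below are illustrative assumptions; real platforms vary in both the threshold and the settlement formula:

```python
def blind_bidding_round(offer, demand, tolerance=0.30):
    """One round of blind bidding: neither party sees the other's figure.
    If the defendant's offer comes within `tolerance` of the claimant's
    demand, settle at the midpoint; otherwise report no settlement.
    The 30% threshold and midpoint rule are illustrative only."""
    if demand <= offer or (demand - offer) <= tolerance * demand:
        return (offer + demand) / 2.0   # settlement amount
    return None                          # gap too large this round

settlement = blind_bidding_round(8000, 10000)   # gap 2000 <= 3000 -> 9000.0
no_deal = blind_bidding_round(5000, 10000)      # gap 5000 > 3000 -> None
```

Because neither figure is disclosed unless the round settles, parties can bid closer to their true reservation values without weakening their negotiating position in later rounds.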
Artificial intelligence and law (AI and law) is a subfield of artificial intelligence (AI) mainly concerned with applications of AI to legal informatics problems and original research on those problems. It is also concerned to contribute in the other direction: to export tools and techniques developed in the context of legal problems to AI in general. For example, theories of legal decision making, especially models of argumentation, have contributed to knowledge representation and reasoning; models of social organization based on norms have contributed to multi-agent systems; reasoning with legal cases has contributed to case-based reasoning; and the need to store and retrieve large amounts of textual data has resulted in contributions to conceptual information retrieval and intelligent databases.
==== History ====
Although Loevinger, Allen and Mehl anticipated several of the ideas that would become important in AI and Law, the first serious proposal for applying AI techniques to law is usually taken to be Buchanan and Headrick. Early work from this period includes Thorne McCarty's influential TAXMAN project in the US and Ronald Stamper's LEGOL project in the UK. Landmarks in the early 1980s include Carole Hafner's work on conceptual retrieval, Anne Gardner's work on contract law, Edwina Rissland's work on legal hypotheticals and the work at Imperial College London on the representation of legislation by means of executable logic programs.
Early meetings of scholars included a one-off meeting at Swansea, the series of conferences organized by IDG in Florence and the workshops organised by Charles Walter at the University of Houston in 1984 and 1985. In 1987 a biennial conference, the International Conference on AI and Law (ICAIL), was instituted. This conference began to be seen as the main venue for publishing and the developing ideas within AI and Law, and it led to the foundation of the International Association for Artificial Intelligence and Law (IAAIL), to organize and convene subsequent ICAILs. This, in turn, led to the foundation of the Artificial Intelligence and Law Journal, first published in 1992. In Europe, the annual JURIX conferences (organised by the Jurix Foundation for Legal Knowledge Based Systems), began in 1988. Initially intended to bring together the Dutch-speaking (i.e. Dutch and Flemish) researchers, JURIX quickly developed into an international, primarily European, conference and since 2002 has regularly been held outside the Dutch speaking countries. Since 2007 the JURISIN workshops have been held in Japan under the auspices of the Japanese Society for Artificial Intelligence.
The interoperable legal documents standard Akoma Ntoso allows machine-driven processes to operate on the syntactic and semantic components of digital parliamentary, judicial and legislative documents, thus facilitating the development of high-quality information resources and forming a basis for AI tools. Its goal is to substantially enhance the performance, accountability, quality and openness of parliamentary and legislative operations based on best practices and guidance through machine-assisted drafting and machine-assisted (legal) analysis. Embedded in the environment of the semantic web, it forms the basis for a heterogenous yet interoperable ecosystem, with which these tools can operate and communicate, as well as for future applications and use cases based on digital law or rule representation.
In 2019, the city of Hangzhou, China established a pilot program artificial intelligence-based Internet Court to adjudicate disputes related to ecommerce and internet-related intellectual property claims.: 124 Parties appear before the court via videoconference and AI evaluates the evidence presented and applies relevant legal standards.: 124
==== Scope ====
Today, AI and law embrace a wide range of topics, including:
Formal models of legal reasoning
Computational models of argumentation and decision-making
Computational models of evidential reasoning
Legal reasoning in multi-agent systems
Executable models of legislation
Automatic legal text classification and summarization
Automated information extraction from legal databases and texts
Machine learning and data mining for e-discovery and other legal applications
Conceptual or model-based legal information retrieval
Lawbots to automate minor and repetitive legal tasks
Risk assessment, pricing and timeline predictions of litigation using machine learning and artificial intelligence.
==== Formal models of legal reasoning ====
Formal models of legal texts and legal reasoning have been used in AI and Law to clarify issues, to give a more precise understanding and to provide a basis for implementations. A variety of formalisms have been used, including propositional and predicate calculi; deontic, temporal and non-monotonic logics; and state transition diagrams. Prakken and Sartor give a detailed and authoritative review of the use of logic and argumentation in AI and Law, together with a comprehensive set of references.
An important role of formal models is to remove ambiguity. In fact, legislation abounds with ambiguity: Because it is written in natural language there are no brackets and so the scope of connectives such as "and" and "or" can be unclear. "Unless" is also capable of several interpretations, and legal draftsman never write "if and only if", although this is often what they intend by "if". In perhaps the earliest use of logic to model law in AI and Law, Layman Allen advocated the use of propositional logic to resolve such syntactic ambiguities in a series of papers.
In the late 1970s and throughout the 1980s a significant strand of work on AI and Law involved the production of executable models of legislation, originating with Thorne McCarty's TAXMAN and Ronald Stamper's LEGOL. TAXMAN was used to model the majority and minority arguments in a US Tax law case (Eisner v Macomber), and was implemented in the micro-PLANNER programming language. LEGOL was used to provide a formal model of the rules and regulations that govern an organization, and was implemented in a condition-action rule language of the kind used for expert systems.
The TAXMAN and LEGOL languages were executable, rule-based languages, which did not have an explicit logical interpretation. However, the formalisation of a large portion of the British Nationality Act by Sergot et al. showed that the natural language of legal documents bears a close resemblance to the Horn clause subset of first order predicate calculus. Moreover, it identified the need to extend the use of Horn clauses by including negative conditions, to represent rules and exceptions. The resulting extended Horn clauses are executable as logic programs.
Later work on larger applications, such as that on Supplementary Benefits, showed that logic programs need further extensions, to deal with such complications as multiple cross references, counterfactuals, deeming provisions, amendments, and highly technical concepts (such as contribution conditions). The use of hierarchical representations was suggested to address the problem of cross reference; and so-called isomorphic representations were suggested to address the problems of verification and frequent amendment. As the 1990s developed this strand of work became partially absorbed into the development of formalisations of domain conceptualisations, (so-called ontologies), which became popular in AI following the work of Gruber. Early examples in AI and Law include Valente's functional ontology and the frame based ontologies of Visser and van Kralingen. Legal ontologies have since become the subject of regular workshops at AI and Law conferences and there are many examples ranging from generic top-level and core ontologies to very specific models of particular pieces of legislation.
Since law comprises sets of norms, it is unsurprising that deontic logics have been tried as the formal basis for models of legislation. These, however, have not been widely adopted as the basis for expert systems, perhaps because expert systems are supposed to enforce the norms, whereas deontic logic becomes of real interest only when we need to consider violations of the norms. In law directed obligations, whereby an obligation is owed to another named individual are of particular interest, since violations of such obligations are often the basis of legal proceedings. There is also some interesting work combining deontic and action logics to explore normative positions.
In the context of multi-agent systems, norms have been modelled using state transition diagrams. Often, especially in the context of electronic institutions, the norms so described are regimented (i.e., cannot be violated), but in other systems violations are also handled, giving a more faithful reflection of real norms. For a good example of this approach see Modgil et al.
Law often concerns issues about time, both relating to the content, such as time periods and deadlines, and those relating to the law itself, such as commencement. Some attempts have been made to model these temporal logics using both computational formalisms such as the Event Calculus and temporal logics such as defeasible temporal logic.
In any consideration of the use of logic to model law it needs to be borne in mind that law is inherently non-monotonic, as is shown by the rights of appeal enshrined in all legal systems, and the way in which interpretations of the law change over time. Moreover, in the drafting of law exceptions abound, and, in the application of law, precedents are overturned as well as followed. In logic programming approaches, negation as failure is often used to handle non-monotonicity, but specific non-monotonic logics such as defeasible logic have also been used. Following the development of abstract argumentation, however, these concerns are increasingly being addressed through argumentation in monotonic logic rather than through the use of non-monotonic logics.
Two recent prominent accounts of legal reasoning involve reasons, and they are John Horty's, which focuses on common law reasoning and the notion of precedent, and Federico Faroldi's, which focuses on civil law and uses justification logic.
==== Quantitative legal prediction ====
Both academic and proprietary quantitative legal prediction models exist. One of the earliest examples of a working quantitative legal prediction model occurred in the form of the Supreme Court forecasting project. The Supreme Court forecasting model attempted to predict the results of all the cases on the 2002 term of the Supreme Court. The model predicted 75% of cases correctly compared to experts who only predicted 59.1% of cases.
Another example of an academic quantitative legal prediction models is a 2012 model that predicted the result of Federal Securities class action lawsuits.
Some academics and legal technology startups are attempting to create algorithmic models to predict case outcomes. Part of this overall effort involves improved case assessment for litigation funding.
In order to better evaluate the quality of case outcome prediction systems, a proposal has been made to create a standardised dataset that would allow comparisons between systems.
== Legal practice ==
Within the practice issues conceptual area, progress continues to be made on both litigation and transaction focused technologies. In particular, technology including predictive coding has the potential to effect substantial efficiency gains in law practice. Though predictive coding has largely been applied in the litigation space, it is beginning to make inroads in transaction practice, where it is being used to improve document review in mergers and acquisitions. Other advances, including XML coding in transaction contracts, and increasingly advanced document preparation systems demonstrate the importance of legal informatics in the transactional law space.
Current applications of AI in the legal field utilize machines to review documents, particularly when a high level of completeness and confidence in the quality of document analysis is depended upon, such as in instances of litigation and where due diligence play a role. Predictive coding leverages small samples to cross-reference similar items, weed out less relevant documents so attorneys can focus on the truly important key documents, produces statistically validated results, equal to or surpassing the accuracy and, prominently, the rate of human review.
=== Delivery of services ===
Advances in technology and legal informatics have led to new models for the delivery of legal services. Legal services have traditionally been a "bespoke" product created by a professional attorney on an individual basis for each client. However, to work more efficiently, parts of these services will move sequentially from (1) bespoke to (2) standardized, (3) systematized, (4) packaged, and (5) commoditized. Moving from one stage to the next will require embracing different technologies and knowledge systems.
The spread of the Internet and development of legal technology and informatics are extending legal services to individuals and small-medium companies.
=== Corporate legal departments ===
Corporate legal departments may use legal informatics for such purposes as to manage patent portfolios, and for preparation, customization and management of documents.
== See also ==
Computational law
Jurimetrics
Legal Electronic Data Exchange Standard
Legal expert system
Legal Information Retrieval
Lawbot
== References == | Wikipedia/Applications_of_artificial_intelligence_to_legal_informatics |
100% renewable energy is the goal of using renewable resources for all energy. 100% renewable energy for electricity, heating, cooling and transport is motivated by climate change, pollution and other environmental issues, as well as economic and energy security concerns. Shifting the total global primary energy supply to renewable sources requires a transition of the energy system, since most of today's energy is derived from non-renewable fossil fuels.
Research into this topic is fairly new, with few studies published before 2009, but has gained increasing attention in recent years. A cross-sectoral, holistic approach is seen as an important feature of 100% renewable energy systems and is based on the assumption "that the best solutions can be found only if one focuses on the synergies between the sectors" of the energy system such as electricity, heat, transport or industry.
== Feasibility ==
No uniform definition for 100% renewable energy systems has been adopted across the published literature.
Recent studies show that a global transition to 100% renewable energy across all sectors – power, heat, transport and desalination – well before 2050 is feasible. According to a review of the 181 peer-reviewed papers on 100% renewable energy that were published until 2018, "[t]he great majority of all publications highlights the technical feasibility and economic viability of 100% RE systems." A review of 97 papers published since 2004 and focusing on islands concluded that across the studies 100% renewable energy was found to be "technically feasible and economically viable." A 2022 review found that the main conclusion of most of the literature in the field is that 100% renewables is feasible worldwide at low cost.
Existing technologies, including storage, are capable of generating a secure energy supply at every hour throughout the year. The sustainable energy system is more efficient and cost effective than the existing system. The United Nations Intergovernmental Panel on Climate Change (IPCC) stated in their 2011 report that there is little that limits integrating renewable technologies for satisfying the total global energy demand.
Mark Z. Jacobson, professor of civil and environmental engineering at Stanford University and director of its Atmosphere and Energy program, says that producing all new energy with wind power, solar power, and hydropower by 2030 is feasible, and that existing energy supply arrangements could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic". Jacobson says that energy costs today with a wind, solar, and water system should be similar to today's energy costs from other optimally cost-effective strategies. The main obstacle against this scenario is the lack of political will. His conclusions have been disputed by other researchers; Jacobson published a response that disputed their critique point by point and claimed that the authors were motivated by allegiance to energy technologies that his 2015 paper had excluded.
A followup paper was published by Jacobson and others in 2022, in which paths to 100% renewable energy by 2035 and 2050 were developed for 145 countries. The study concluded that a wind-water-solar (WWS) based system "requires less energy, costs less, and creates more jobs than business as usual". The cost reduction was primarily due to the substantial (56.4%) decrease in overall energy demand thanks to the increased efficiency of relying on renewable electricity for all energy needs.
In 2014, renewable sources such as wind, geothermal, solar, biomass, and burnt waste provided 19% of the total energy consumed worldwide, with roughly half of that coming from traditional use of biomass. The largest sector in terms of energy consumption is electricity with a renewable share of 22.8%, most of it coming from hydropower with a share of 16.6%, followed by wind with 3.1%. As of 2018, according to REN21, transformation is picking up speed in the power sector, but urgent action is required in heating, cooling and transport.
There are many places around the world with grids that are run almost exclusively on renewable energy (see below). At the national level, at least 30 nations already have renewable energy contributing more than 20% of the energy supply. Renewable energy use has grown more quickly than even advocates anticipated. As of 2019, however, it needs to grow six times faster to limit global warming to 2 °C (3.6 °F).
=== Energy transition ===
100% renewable energy is an energy system where all energy use is sourced from renewable energy sources.
According to the Intergovernmental Panel on Climate Change there are few fundamental technological limits to integrating a portfolio of renewable energy technologies to meet most of total global energy demand.
100% renewable energy in a country is typically a more challenging goal than carbon neutrality. The latter is a climate mitigation target, politically decided by many countries, and may also be achieved by balancing the total carbon footprint of the country (not only emissions from energy and fuel) with carbon dioxide removal and carbon projects abroad.
While there are still many publications that focus on electricity only, there is a growing number of papers that cover different energy sectors and sector-coupled, integrated energy systems.
Stephen W. Pacala and Robert H. Socolow of Princeton University have developed a series of "climate stabilization wedges" that can allow us to maintain our quality of life while avoiding catastrophic climate change, and "renewable energy sources", in aggregate, constitute the largest number of their "wedges".
Similarly, in the United States, the independent National Research Council has noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role in future electricity generation and thus help confront issues related to climate change, energy security, and the escalation of energy costs ... Renewable energy is an attractive option because renewable resources available in the United States, taken collectively, can supply significantly greater amounts of electricity than the total current or projected domestic demand."
The main barriers to the widespread implementation of large-scale renewable energy and low-carbon energy strategies are political rather than technological. According to the 2013 Post Carbon Pathways report, which reviewed many international studies, the key roadblocks are: climate change denial, the fossil fuels lobby, political inaction, unsustainable energy consumption, outdated energy infrastructure, and financial constraints.
Studies have shown that Southeast Asian countries could achieve almost 100% renewable electricity based on solar, wind, and off-river pumped hydro energy storage at a competitive LCOE of around US$55–115/MWh.
== History ==
Using 100% renewable energy was first suggested in a paper in Science published in 1975 by Danish physicist Bent Sørensen, which was followed by several other proposals. In 1976, energy policy analyst Amory Lovins coined the term "soft energy path" to describe an alternative future where energy efficiency and appropriate renewable energy sources steadily replace a centralized energy system based on fossil and nuclear fuels.
In 1998, the first detailed analyses of scenarios with high shares of renewables were published. These were followed by the first detailed 100% scenarios. In 2006, Czisch published a PhD thesis showing that in a 100% renewable scenario energy supply could match demand in every hour of the year in Europe and North Africa. In the same year, Danish energy professor Henrik Lund published a first paper addressing the optimal combination of renewables, which was followed by several other papers on the transition to 100% renewable energy in Denmark. Since then, Lund has published many further papers on 100% renewable energy. After 2009, publications began to rise steeply, covering 100% scenarios for countries in Europe, America, Australia and other parts of the world.
Even in the early 21st century, it was extraordinary for scientists and decision-makers to consider the concept of 100% renewable electricity. However, renewable energy progress has been so rapid that things have totally changed since then:
Solar photovoltaic modules have dropped about 75 percent in price. Current scientific and technological advances in the laboratory suggest that they may become less expensive than the cost of installation of a photovoltaic system on residential or commercial buildings. On-shore wind power is spreading over all continents and is economically competitive with fossil and nuclear power in several regions. Concentrated solar thermal power (CST) with thermal storage has moved from the demonstration stage of maturity to the limited commercial stage and still has the potential for further cost reductions of about 50 percent.
Renewable energy use has grown much faster than even advocates had anticipated. Wind turbines generate 39 percent of Danish electricity, and Denmark has many biogas digesters and waste-to-energy plants as well. Together, wind and biomass provide 44% of the electricity consumed by the country's six million inhabitants. In 2010, Portugal's 10 million people produced more than half their electricity from indigenous renewable energy resources. Spain's 40 million inhabitants meet one-third of their electrical needs from renewables.
Renewable energy has a history of strong public support. In America, for example, a 2013 Gallup survey showed that two in three Americans want the U.S. to increase domestic energy production using solar power (76%), wind power (71%), and natural gas (65%). Far fewer want more petroleum production (46%) and more nuclear power (37%). Least favored is coal, with about one in three Americans favoring it.
REN21 says renewable energy already plays a significant role and there are many policy targets that aim to increase this:
At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond, and some 120 countries have various policy targets for longer-term shares of renewable energy, including a binding 20% by 2020 target for the European Union. Some countries have much higher long-term policy targets of up to 100% renewables. Outside Europe, a diverse group of 20 or more other countries target renewable energy shares in the 2020–2030 time frame that range from 10% to 50%.
Supporters of 100% renewable energy do not consider nuclear power renewable or sustainable, due to perceived risks of disasters and of high-level waste management, and consider carbon capture and storage to have limited safe storage potential. These constraints have also led to an interest in 100% renewable energy. A well-established body of academic literature has been written over the past decade, evaluating scenarios for 100% renewable energy for various geographical areas. In recent years, more detailed analyses have emerged from government and industry sources. The incentive to use 100% renewable energy is created by global warming and by ecological as well as economic concerns following peak oil.
The first country to propose 100% renewable energy was Iceland, in 1998. Proposals have been made for Japan in 2003, and for Australia in 2011. Albania, Iceland, and Paraguay obtain essentially all of their electricity from renewable sources (Albania and Paraguay 100% from hydroelectricity, Iceland 72% hydro and 28% geothermal). Norway obtains nearly all of its electricity from renewable sources (97 percent from hydropower). Iceland proposed using hydrogen for transportation and its fishing fleet. Australia proposed biofuel for those elements of transportation not easily converted to electricity. The road map for the United States, commitment by Denmark, and Vision 2050 for Europe set a 2050 timeline for converting to 100% renewable energy, later reduced to 2040 in 2011. Zero Carbon Britain 2030 proposes eliminating carbon emissions in Britain by 2030 by transitioning to renewable energy. In 2015, Hawaii enacted a law requiring the Renewable Portfolio Standard (RPS) to be 100 percent by 2045. The RPS is often confused with 100% renewable energy: because off-grid generation counts toward renewable energy but not toward grid electricity, the RPS can exceed 100 percent. For example, if the grid produces 65 GWh from fossil fuels and 35 GWh from renewable energy, and rooftop off-grid solar produces 80 GWh of renewable energy, then total renewable energy is 115 GWh against 100 GWh of grid electricity, and the RPS is 115 percent.
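The RPS arithmetic described above can be sketched as a short calculation. The function name and the GWh figures are illustrative, following the worked example in the text; they do not reflect Hawaii's actual generation data.

```python
def rps_percent(grid_fossil_gwh, grid_renewable_gwh, offgrid_renewable_gwh):
    """Renewable Portfolio Standard under the accounting convention above:
    all renewable generation (including off-grid rooftop solar) divided by
    grid electricity only, so the result can exceed 100 percent."""
    total_renewable = grid_renewable_gwh + offgrid_renewable_gwh
    grid_total = grid_fossil_gwh + grid_renewable_gwh
    return 100 * total_renewable / grid_total

# Worked example from the text: 65 GWh fossil and 35 GWh renewable on the
# grid, plus 80 GWh of off-grid rooftop solar.
print(rps_percent(65, 35, 80))  # 115.0
```

This illustrates why a jurisdiction can report an RPS above 100 percent while its grid still burns fossil fuels.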
Cities like Paris and Strasbourg in France plan to use 100% renewable energy by 2050.
It is estimated that the world will spend an extra $8 trillion over the next 25 years to prolong the use of non-renewable resources, a cost that would be eliminated by transitioning instead to 100% renewable energy. Research published in Energy Policy suggests that converting the entire world to 100% renewable energy by 2050 is both possible and affordable, but requires political support. It would require building many more wind turbines and solar power systems but would not use bioenergy. Other changes involve use of electric cars and the development of enhanced transmission grids and storage. As part of the Paris Agreement, countries periodically update their climate change targets for the future; as of 2018, no G20 country had committed to a 100% renewable target.
Until 2018, there were 181 peer-reviewed papers on 100% renewable energy. In the same year, 100% renewable energy was also mentioned in the Special Report on Global Warming of 1.5 °C as a potential means to "expand the range of 1.5 °C pathways", if the findings can be corroborated.
As of 2021, wind and solar were consistently increasing their share worldwide, but still represented just 5% of global primary energy consumption, albeit far more of useful energy consumption. A report by J.P. Morgan Asset Management (the biggest lender to fossil fuels in the world) analyzed renewable energy forecasts made by eight scientists and research bodies (including Bent Sorensen, Mark Z. Jacobson, Amory Lovins) between 1970 and 2020 and claimed that all of them were unrealistically optimistic as they ignored "energy density, intermittency and the complex realities of incumbent energy systems".
== Places with near 100% renewable electricity ==
The following places meet 90% or more of their average yearly electricity demand with renewable energy (incomplete list):
Albania: Hydroelectric
American Samoa
Tau: ~100% solar power, with battery backup
Australia
Tasmania: Hydropower supplies 100 percent of Tasmania's electricity. (Pending legislation plans for 200% renewable power by 2040, with the surplus to be sent to mainland Australia via submarine power cables)
Austria
Lower Austria: 63% hydroelectricity, 26% wind, 9% biomass, 2% solar
Bhutan: Largely hydroelectricity; exports 70% of its production due to excess energy generated; no fossil fuel power plants.
Canada
British Columbia: 97% hydroelectric
Manitoba: 97% hydroelectricity, 3% wind, <1% petroleum (diesel in four off-grid communities), <1% natural gas
Newfoundland and Labrador: 95% hydroelectricity
Quebec: 99% renewable. Electricity is the main energy used in Quebec (41%), followed by oil (38%) and natural gas (10%)
Yukon: 94% hydroelectricity
Costa Rica: 99% renewable electricity. Hydroelectric (90%), geothermal, wind (and others)
Democratic Republic of the Congo: Almost 100% hydro, but only 9% have access to electricity.
Denmark
Samsø: Net greater than 100% wind power and biomass, connected to mainland for balance and backup power
Ethiopia: Mostly hydroelectricity (>90%). Smaller quantities of wind, solar, and geothermal. As of 2018, 45% of the population had access to electricity, and a target set in 2017 calls for 100% access by 2025.
Germany
Aller-Leine Valley: 63.5% wind, 30% biogas, 10.7% hydro, 3.1% solar
Wildpoldsried, Bavaria: 500% wind, solar, hydro
Greece
Tilos: 100% wind and solar power, with battery backup
Iceland: 72% hydroelectricity, 28% geothermal, wind, and solar power, less than 0.1% combustible fuel (off-grid diesel)
Norway: 96% hydroelectricity, 2% combustible fuel, 2% geothermal, wind, and solar
New Zealand
South Island: 98.2% hydroelectricity and 1.6% wind. Around one-fifth of generation is exported to the North Island.
Tokelau: 93% solar power, with battery backup and 7% coconut biofuel
Paraguay: The electricity sector is 100% hydroelectric; about 90% of production is exported, and the remaining 10% covers domestic demand
Tajikistan: Hydropower supplies nearly 100 percent of Tajikistan's electricity.
United Kingdom
Scotland: 97% of electricity (2020) produced from renewables, mainly wind followed by hydroelectric.
United States
Kodiak Island, Alaska: 80.9% hydroelectricity, 19.8% wind power, 0.3% diesel generator
Palo Alto, California: 50% hydro, rest a combination of solar, wind and biogas
Aspen, Colorado: Hydroelectric, wind and solar and geothermal
Greensburg, Kansas: 100% - wind balanced with grid connection
Georgetown, Texas: 100% - 154 MW solar and wind balanced with grid connection
Burlington, Vermont: 35.3% hydro, 35.3% wood, 27.9% wind, 1.4% solar photovoltaic
Washington
Centralia: 90.6% hydro, 7.9% nuclear
Chelan County: 100% renewable energy made up of 99.98% hydroelectric and 0.02% wind power.
Douglas County: 100% hydro
Pend Oreille County: 97.1% hydro
Seattle: 86% hydroelectricity, 7% wind, 1% biogas
Tacoma: 85% hydro, 6% wind
Uruguay: 94.5% renewable electricity; wind power (and biomass and solar power) is used to stretch hydroelectricity reserves into the dry season
Some other places have high percentages; for example, the electricity sector in Denmark was, as of 2014, 45% wind power, with plans in place to reach 85%. The electricity sectors in Canada and New Zealand have even higher percentages of renewables (mostly hydro), 65% and 75% respectively, and Austria is approaching 70%. As of 2015, the electricity sector in Germany sometimes meets almost 100% of electricity demand with PV and wind power, and renewables' overall share of electricity is over 25%. Albania has 94.8% of installed capacity as hydroelectric and 5.2% as diesel generators, but imports 39% of its electricity. In 2016, Portugal achieved 100% renewable electricity for four days between 7 and 11 May, partly because efficient energy use had reduced electricity demand. France and Sweden have low carbon intensity, since they predominantly use a mixture of nuclear power and hydroelectricity. In 2018 Scotland met 76% of its demand from renewable sources.
Although electricity currently accounts for around a quarter of world energy supply and consumption, primary energy use is expected to decrease alongside renewable energy deployment as electricity use increases, because the transition is likely to be combined with further electrification. For example, electric cars achieve much better energy efficiency than fossil-fuel cars, and renewable heat offers similar gains: Denmark is proposing to move to greater use of heat pumps, which deliver multiple kilowatts of heat per kilowatt of electricity, for heating buildings.
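The heat-pump advantage mentioned above comes down to the coefficient of performance (COP): heat delivered equals electricity input times COP. A minimal sketch, in which the COP of 3 is an illustrative assumption rather than a figure from this article:

```python
def heat_delivered_kwh(electricity_kwh, cop):
    """Heat output of a heat pump: electricity input times coefficient of performance (COP)."""
    return electricity_kwh * cop

# Assuming a COP of 3 (hypothetical), 1 kWh of electricity yields 3 kWh of heat,
# versus about 1 kWh of heat from direct resistive heating.
print(heat_delivered_kwh(1.0, 3.0))  # 3.0
```

This is why electrifying heating with heat pumps can cut primary energy use even as electricity demand rises.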
== 100% clean electricity ==
Other electricity generating sources are considered clean, though not necessarily renewable, as they also do not emit carbon dioxide or other greenhouse gases and air pollutants. The largest of these is nuclear energy, which produces no emissions during operation. Some argue that transitioning to 100% renewable energy would be too slow to limit climate change, and that closing down nuclear power stations is a mistake. Carbon capture and storage projects may still use coal or natural gas but capture carbon dioxide for storage or alternative uses. Pathways to eliminating greenhouse gases may include these alongside renewable energy to save money, or to avoid shutting down existing plants and allow flexibility in designing a carbon-free electric grid.
In 2018, California passed SB 100, which mandates 100% clean, carbon-free electricity by 2045, including a 60% renewable electricity goal by 2030. 2019 legislation in Washington also requires 100% clean electricity by 2045, eliminating coal by 2025. Other states and territories that require 100% carbon-free electricity are Hawaii, Maine, Minnesota, Nevada, New Mexico, New York, Virginia, Puerto Rico, and Washington, DC. According to a study by Global Energy Monitor, China is expected to have 1,200 gigawatts of renewable (wind and solar) capacity by 2025.
== Obstacles ==
According to Mark Z. Jacobson, the most significant barriers to the widespread implementation of large-scale renewable energy and low carbon energy strategies, at the pace required to prevent runaway climate change, are primarily political and not technological. According to the 2013 Post Carbon Pathways report, which reviewed many international studies, the key roadblocks are:
Climate change denial
Efforts to impede renewable energy by the fossil fuel industry
Political paralysis
Unsustainable consumption of energy and resources
Path dependencies and outdated infrastructure
Financial and governance constraints
In 2011, the Intergovernmental Panel on Climate Change, some of the world's leading climate researchers selected by the United Nations, said "as infrastructure and energy systems develop, in spite of the complexities, there are few, if any, fundamental technological limits to integrating a portfolio of renewable energy technologies to meet a majority share of total energy demand in locations where suitable renewable resources exist or can be supplied". IPCC scenarios "generally indicate that growth in renewable energy will be widespread around the world". The IPCC said that if governments were supportive, and the full complement of renewable energy technologies were deployed, renewable energy supply could account for almost 80% of the world's energy use within forty years. Rajendra Pachauri, chairman of the IPCC, said the necessary investment in renewables would cost only about 1% of global GDP annually. This approach could contain greenhouse gas levels to less than 450 parts per million, the safe level beyond which climate change becomes catastrophic and irreversible.
Stephen W. Pacala and Robert H. Socolow have developed a series of "climate stabilization wedges" that can allow societies to maintain their quality of life while avoiding catastrophic climate change, and "renewable energy sources", in aggregate, constitute the largest number of their "wedges".
=== Lack of urgency and coordination ===
Lester R. Brown, founder and president of the Earth Policy Institute, a nonprofit research organization based in Washington, D.C., says a rapid transition to 100% renewable energy is both possible and necessary. Brown compares this to the U.S. entry into World War II and the subsequent rapid mobilization and transformation of American industry and the economy, and proposes that a quick transition to 100% renewable energy, to save our civilization, be pursued with similar urgency.
=== Required minerals ===
According to the World Bank, the "below 2°C" climate scenario requires 3 billion tonnes of metals and minerals by 2050. Supply of mined resources such as zinc, molybdenum, silver, nickel, and copper must increase by up to 500%. A 2018 study analysed the metal requirements to transition the global energy system up to 2060. Currently used battery technologies and known reserves are not compatible with the transition scenario, as cobalt and lithium reserves are insufficient. Batteries containing less or no cobalt are feasible, but lithium is much more difficult to replace while maintaining performance and cost.
=== Institutional inertia ===
A review suggests large institutions are prone to resisting "the challenge of 100% RE scenarios based on the dogma that the world cannot do without fossil fuels and nuclear energy". Institutions that have received extensive criticism include the International Energy Agency and the Intergovernmental Panel on Climate Change, with the latter also being criticized for not including studies on 100% RE systems in their IPCC reports.
=== Manufacturing concentration in China ===
A report found that China is about to produce "almost 95% of the world's polysilicon and the ingots and wafers" of the solar panel supply chain, a level of concentration that in any global supply chain "would represent a considerable vulnerability".
=== Intermittency ===
One of the main obstacles to 100% renewable energy is the intermittency or variability of renewable energy sources – such as times when sufficient amounts of energy can be generated neither via wind nor via solar power ("Dunkelflauten").
Notable options proposed to manage this intermittency by the time the transition to 100% renewable energy is complete include:
certain forms of (flexible) dispatchable generation such as biomass (including forms of pellet fuel, woodchips, algae fuel/bioreactors, and biomass grown on land formerly used for meat-production) or hydroelectricity
diversification of (nonsynchronous) renewables
super grids (due to foreign unique capacities/resources for generation and storage) and strengthening interconnections and larger grids in general (due to differences in weather or daytime)
curtailing excess generation and power-to-X (e.g. producing green hydrogen immediately when there is abundant energy)
Geothermal energy to reduce energy storage needs or potentially as dispatchable energy via in-reservoir storage
oversizing solar and wind capacities
flexible energy demand- and supply-regulating smart grids
Demand response technologies
Vehicle-to-grid uses a vehicle's battery to supply the grid when needed
Smart scheduling
Monitoring and/or controlling energy use that is noncritical during periods of peak power consumption, and returning their function during nonpeak hours, such as for residential devices (like washing machines)
Optimizing the interaction between electricity, heat, transport, and industry
technologies and options for energy storage, for example:
Batteries
Thermal energy storage
Ammonia
Green hydrogen
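How storage interacts with variable supply can be sketched with a toy hourly balance model. All numbers below (demand, the solar profile, battery size) are illustrative assumptions, not data from any study cited here:

```python
def dispatch(demand, variable_supply, battery_capacity, battery_max_power):
    """Greedy hourly balance: charge the battery with surplus variable generation,
    discharge it to cover deficits; return (total unmet demand, curtailed energy)."""
    soc = 0.0  # battery state of charge, in energy units
    unmet = curtailed = 0.0
    for d, s in zip(demand, variable_supply):
        if s >= d:
            surplus = s - d
            charge = min(surplus, battery_max_power, battery_capacity - soc)
            soc += charge
            curtailed += surplus - charge
        else:
            deficit = d - s
            discharge = min(deficit, battery_max_power, soc)
            soc -= discharge
            unmet += deficit - discharge
    return unmet, curtailed

# Illustrative 8-hour day: flat demand of 10 units, solar peaking at midday.
demand = [10.0] * 8
solar = [0.0, 2.0, 8.0, 16.0, 18.0, 12.0, 4.0, 0.0]
print(dispatch(demand, solar, battery_capacity=20.0, battery_max_power=10.0))  # → (20.0, 0.0)
```

In this toy day the battery shifts the midday solar surplus into the evening, but the morning deficits before the battery has charged remain unmet — the kind of shortfall the other options above (dispatchable generation, interconnection, demand response) would have to cover.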
In 2013, Vaclav Smil analyzed proposals to depend on wind- and solar-generated electricity, including those of Jacobson and colleagues, and, writing in an issue of IEEE Spectrum, identified numerous points of concern, such as cost, intermittent power supply, growing NIMBYism, and a lack of infrastructure, saying that "History and a consideration of the technical requirements show that the problem is much greater than these advocates have supposed." Smil and James Hansen are concerned about the variable output of solar and wind power. According to Amory Lovins, the electricity grid alone can compensate for this variability, just as it routinely backs up nonworking coal-fired and nuclear plants with working ones.
In November 2014 the Intergovernmental Panel on Climate Change released its fifth report, saying that in the absence of any one technology (such as bioenergy, carbon dioxide capture and storage, nuclear, wind, and solar), climate change mitigation costs can increase substantially depending on which technology is absent. For example, it may cost 40% more to reduce carbon emissions without carbon dioxide capture (Table 3.2). According to a 2018 study, "in the absence of firm low-carbon [dispatchable] resources, the cost of decarbonizing power generation rises rapidly as the emissions limit approaches zero", and renewable-only generation (with batteries) results in energy prices 42–163% higher in regions with lower VRE availability, and 11–105% higher in regions with higher VRE availability. The study introduced the term "firm low-carbon energy source" (e.g. nuclear, geothermal), intended to operate alongside "fast-burst" sources (e.g. batteries) and "fuel-saving" variable renewables (VRE).
The International Energy Agency says that there has been too much attention on the issue of the variability of renewable electricity production. The issue of intermittent supply applies to popular renewable technologies, mainly wind power and solar photovoltaics, and its significance depends on a range of factors that include the market penetration of the renewables concerned, the balance of plant, the wider connectivity of the system, and demand-side flexibility. Variability is rarely a barrier to increased renewable energy deployment when dispatchable generation such as hydroelectricity or solar thermal storage is also available. But at high levels of market penetration it requires careful analysis and management, and additional costs may be required for back-up or system modification. Renewable electricity supply in the 20-50+% penetration range has already been implemented in several European systems, albeit in the context of an integrated European grid system.
=== Seasonal energy storage ===
Hydropower is currently the only large scale low-carbon seasonal energy storage. In countries with high variation in energy demand by season (for example the UK uses far more gas for heating in the winter than it uses electricity) but lacking hydropower electrical interconnectors to countries with lots of hydropower (e.g. UK - Norway), electricity from hydropower is likely to be insufficient and development of a hydrogen economy would likely be needed: this is being trialled in the UK and 8 TWh of inter-seasonal hydrogen energy storage has been proposed.
In Australia, as well as storing renewable energy as hydrogen, it is also proposed to be exported in the form of ammonia. This project has been cancelled.
=== Cost ===
McKinsey estimates that it will cost 7.5% of global gross domestic product between 2021 and 2050 to achieve net zero (not 100% renewable energy, which would be more expensive). Current spending is just over half of this.
=== Open research questions ===
A review identified major gaps and neglected aspects – open research questions – in the 100% RE literature. These include:
Coupling of energy system models and integrated assessment models
Holistic analysis of material criticality for 100% RE systems, with consideration of recycling
Impact of inter-annual resource variations and respective inter-annual storage demand
District heating and cooling in transition scenarios
Increased geo-spatial resolution and coverage of global 100% RE system analyses
Including off-grid solutions or a transition of off-grid and on-grid solutions in comprehensive energy system transition pathways
Societal risks and issues of the transition, including linking it to energy security and consequences for peace and stability, and maximum area availability in societies
Model intercomparisons of analyses
Various questions for design particulars of intermittency management
Issues of equity, environmental issues, community wellbeing, energy justice, social acceptance, and good governance – research on how to make RE technologies more equitable, accountable, and just, which may help to both contextualize and manage this potential barrier (including policy mechanisms)
== Plans and models ==
== Recent developments ==
The Fourth Revolution: Energy is a German documentary film released in 2010. It shows the vision of a global society, which lives in a world where the energy is produced 100% with renewable energies, showing a complete reconstruction of the economy, to reach this goal. In 2011, Hermann Scheer wrote the book The Energy Imperative: 100 Percent Renewable Now, published by Routledge.
Reinventing Fire is a book by Amory Lovins released in October 2011. Lovins claims that combining reduced energy use with energy efficiency gains would result in a $5 trillion saving and a faster-growing economy. This can all be done with the profitable commercialization of existing energy-saving technologies, through market forces, led by business. Former US president Bill Clinton says the book is a "wise, detailed and comprehensive blueprint". The first paragraph of the preface says:
Imagine fuel without fear. No climate change. No oil spills, dead coal miners, dirty air, devastated lands, lost wildlife. No energy poverty. No oil-fed wars, tyrannies, or terrorists. Nothing to run out. Nothing to cut off. Nothing to worry about. Just energy abundance, benign and affordable, for all, for ever.
The Intergovernmental Panel on Climate Change has said that there are few fundamental technological limits to integrating a portfolio of renewable energy technologies to meet most of total global energy demand. In a 2011 review of 164 recent scenarios of future renewable energy growth, the report noted that the majority expected renewable sources to supply more than 17% of total energy by 2030, and 27% by 2050; the highest forecast projected 43% supplied by renewables by 2030 and 77% by 2050.
In 2011, the International Energy Agency has said that solar energy technologies, in its many forms, can make considerable contributions to solving some of the most urgent problems the world now faces:
The development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating climate change, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared.
In 2011, the refereed journal Energy Policy published two articles by Mark Z. Jacobson, a professor of engineering at Stanford University, and research scientist Mark A. Delucchi, about changing our energy supply mix and "Providing all global energy with wind, water, and solar power". The articles analyze the feasibility of providing worldwide energy for electric power, transportation, and heating/cooling from wind, water, and sunlight (WWS), which are safe, clean options. In Part I, Jacobson and Delucchi discuss WWS energy system characteristics, aspects of energy demand, WWS resource availability, WWS devices needed, and material requirements. They estimate that 3,800,000 5 MW wind turbines, 5,350 100 MW geothermal power plants, and 270 new 1300 MW hydroelectric power plants would be required. In terms of solar power, an additional 49,000 300 MW concentrating solar plants, 40,000 300 MW solar photovoltaic power plants, and 1.7 billion 3 kW rooftop photovoltaic systems would also be needed. Such an extensive WWS infrastructure could decrease world power demand by 30%. In Part II, Jacobson and Delucchi address variability of supply, system economics, and energy policy initiatives associated with a WWS system. The authors advocate producing all new energy with WWS by 2030 and replacing existing energy supply arrangements by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic". Energy costs with a WWS system should be similar to today's energy costs.
In general, Jacobson has said wind, water and solar technologies can provide 100 percent of the world's energy, eliminating all fossil fuels. He advocates a "smart mix" of renewable energy sources to reliably meet electricity demand:
Because the wind blows during stormy conditions when the sun does not shine and the sun often shines on calm days with little wind, combining wind and solar can go a long way toward meeting demand, especially when geothermal provides a steady base and hydroelectric can be called on to fill in the gaps.
A 2012 study by the University of Delaware for a 72 GW system considered 28 billion combinations of renewable energy and storage and found that the most cost-effective mix for the PJM Interconnection would use 17 GW of solar, 68 GW of offshore wind, and 115 GW of onshore wind, although at times as much as three times the demand would be generated. Generation from other sources would be required only 0.1% of the time.
In March 2012, Denmark's parliament agreed on a comprehensive new set of promotional programs for energy efficiency and renewable energy aimed at reaching 100 percent of electricity, heat and fuels from renewables by 2050.
IRENEC is an annual conference on 100% renewable energy started in 2011 by Eurosolar Turkey. The 2013 conference was in Istanbul.
More recently, Jacobson and his colleagues have developed detailed proposals for switching to 100% renewable energy produced by wind, water and sunlight, for New York, California and Washington states, by 2050. As of 2014, a more expansive new plan for the 50 states has been drawn up, which includes an online interactive map showing the renewable resource potential of each of the 50 states. The 50-state plan is part of The Solutions Project, an independent outreach effort led by Jacobson, actor Mark Ruffalo, and film director Josh Fox.
As of 2014, many detailed assessments show that the energy service needs of a world enjoying radically higher levels of wellbeing can be economically met entirely through the diverse currently available technological and organizational innovations around wind, solar, biomass, biofuel, hydro, ocean and geothermal energy. Debate over detailed plans remains, but transformations in global energy services based entirely on renewable energy are in principle technically practicable, economically feasible, socially viable, and so realisable. This prospect underpins the ambitious commitment by Germany, one of the world's most successful industrial economies, to undertake a major energy transition, the Energiewende.
In 2015 a study was published in Energy and Environmental Science that describes a pathway to 100% renewable energy in the United States by 2050 without using biomass. Implementation of this roadmap is regarded as both environmentally and economically feasible, as by 2050 it would save about $600 billion a year in health costs due to reduced air pollution and $3.3 trillion in global warming costs. This translates into yearly cost savings of around $8,300 per head compared to a business-as-usual pathway. According to that study, the barriers that could hamper implementation are neither technical nor economic but social and political, as most people are unaware that the benefits of such a transformation far exceed the costs.
In June 2017, twenty-one researchers published an article in the Proceedings of the National Academy of Sciences of the United States of America rejecting Jacobson's earlier PNAS article, accusing him of modeling errors and of using invalid modeling tools. They further asserted that he made implausible assumptions through his reliance on increasing national energy storage from 43 minutes to 7 weeks, increasing hydrogen production by 100,000%, and increasing hydropower by the equivalent of 600 Hoover Dams. Among the article's authors, David G. Victor called Jacobson's work "dangerous", and Ken Caldeira emphasized that increasing hydropower output by 1,300 gigawatts, a 25% increase, would be the equivalent flow of 100 Mississippi Rivers. Jacobson published a response in the same issue of PNAS and also authored a blog post in which he asserted that the researchers were advocates of the fossil fuel industry. Another study published in 2017 confirmed the earlier results for a 100% renewable power system for North America, without changes in hydropower assumptions, but with more realistic emphasis on a balanced storage portfolio, in particular seasonal storage, and with competitive economics.
=== Grid integration simulation ===
In 2015, Jacobson and Delucchi, together with Mary Cameron and Bethany Frew, used computer simulation (Loadmatch) to examine in more detail how a wind-water-solar (WWS) system can track energy demand from minute to minute. This turned out to be possible in the United States over a 6-year period, including WWS variability due to extreme weather events.
In 2017, the plan was further developed for 139 countries by a team of 27 researchers, and in 2018 Jacobson and Delucchi, with Mary Cameron and Brian Mathiesen, published Loadmatch results for 20 regions into which the 139 countries of the world are divided. According to this research, a WWS system can follow the demand in all regions.
The program Loadmatch receives as input estimated series, per half minute during 2050–2055, of
the energy demand
the intermittent wind and solar energy supply predicted with a 3D global climate / weather model GATOR-GCMOM
the hydropower, geothermal, tidal and wave energy
and specifications of
the capacities and maximum loading / unloading speeds of the different types of storage
losses due to storage, transport, distribution and maintenance
a demand-supply management system (smart grid).
The program has been run for each region 10-20 times with adapted input for the storage capacities, until a solution was found in which the energy demand was followed, per half minute for 5 years, at low cost.
The WWS system is assumed to connect the following in the electric network:
geographically dispersed variable energy sources, concentrated solar power (CSP) and hydro power
storage facilities: pumped hydro, as heat in CSP plants, in batteries, as hydrogen by electrolysis of water, or as compressed air underground.
In 2020, Jacobson clarified in a textbook computer simulation results of a WWS energy system.
To match demand with supply every minute, more solar and wind farms and high-voltage lines must be installed than would be needed to match year-averaged demand and supply. Oversizing (also used in conventional energy systems) ensures that demand can be followed during peak hours, but causes unused supply during off-peak hours. In a WWS system, more energy exchange between areas leads to more transmission loss. The table shows WWS supply, unused supply, losses and end-use, in GW of average power, to reliably supply the world and four major regions with energy by 2050. See textbook Table 8.10; energy in TWh is divided by 26.3 kh (1 kh = 1000 hours) to get average power in GW. The bottom row is the storage capacity of pumped hydro plants (Table 8.7).
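The unit conversion used for the table is plain arithmetic: since 1 TWh = 1000 GWh and 1 kh = 1000 h, dividing energy in TWh by a period in kilohours gives average power in GW directly.

```python
def avg_power_gw(energy_twh, period_kh):
    """Average power in GW: energy in TWh divided by the period in kilohours,
    since 1 TWh / 1 kh = 1000 GWh / 1000 h = 1 GW."""
    return energy_twh / period_kh

# e.g. 26.3 TWh delivered over 26.3 kh (26,300 hours, roughly three years) averages 1 GW:
print(avg_power_gw(26.3, 26.3))  # 1.0
```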
== See also ==
Carbon bubble
Carbon neutrality
Energy transition
Individual and political action on climate change
International Renewable Energy Agency
Nuclear power proposed as renewable energy
Timeline of sustainable energy research 2020–present
== References ==
== Further reading == | Wikipedia/100%_renewable_energy |
Scalable Urban Traffic Control (SURTRAC) is an adaptive traffic control system developed by researchers at the Robotics Institute, Carnegie Mellon University. SURTRAC dynamically optimizes the control of traffic signals to improve traffic flow for both urban grids and corridors; optimization goals include less waiting, reduced traffic congestion, shorter trips, and less pollution. The core control engine combines schedule-driven intersection control with decentralized coordination mechanisms. Since June 2012, a pilot implementation of the SURTRAC system has been deployed on nine intersections in the East Liberty neighborhood of Pittsburgh, Pennsylvania. SURTRAC reduced travel times by more than 25% on average, and wait times were reduced by an average of 40%. A second phase of the pilot program for the Bakery Square district has been running since October 2013. In 2015, Rapid Flow Technologies was formed to commercialize the SURTRAC technology. The lead inventor of this technology, Dr. Xiao-Feng Xie, states that he has no association with and does not provide technical support for this company.
== Design ==
The SURTRAC system design has three characteristics. First, decision-making in SURTRAC proceeds in a decentralized manner. The decentralized control of individual intersections enables greater responsiveness to local real-time traffic conditions. Decentralization facilitates scalability by allowing the incremental addition of controlled intersections over time with little change to the existing adaptive network. It also reduces the possibility of a centralized computational bottleneck and avoids a single point of failure in the system.
A second characteristic of the SURTRAC design is an emphasis on real-time responsiveness to changing traffic conditions. SURTRAC adopts the real-time perspective of prior model-based intersection control methods, which attempt to compute intersection control plans that optimize actual traffic inflows. By reformulating the optimization problem as a single machine scheduling problem, the core optimization algorithm, termed schedule-driven intersection control, is able to compute optimized intersection control plans over an extended horizon on a second-by-second basis.
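The single-machine framing can be illustrated with a deliberately simplified brute-force sketch — not SURTRAC's actual algorithm, which uses an efficient schedule-driven search — where each "job" is a platoon of vehicles with an arrival time and the green time it needs to clear the intersection, and the objective is aggregate delay:

```python
from itertools import permutations

def total_delay(order, platoons):
    """Serve platoons in the given order on a single 'machine' (the intersection).
    Each platoon is (arrival_time, service_time); its delay is start - arrival."""
    t = 0.0      # time at which the intersection becomes free
    delay = 0.0
    for i in order:
        arrival, service = platoons[i]
        start = max(t, arrival)   # a platoon cannot be served before it arrives
        delay += start - arrival
        t = start + service
    return delay

# Hypothetical platoons: (arrival time, green time needed to clear).
platoons = [(0.0, 4.0), (1.0, 2.0), (2.0, 3.0)]
best = min(permutations(range(len(platoons))), key=lambda o: total_delay(o, platoons))
print(best, total_delay(best, platoons))  # → (0, 1, 2) 7.0
```

SURTRAC's contribution is solving a structured version of this problem fast enough to replan every second over an extended horizon; the exhaustive enumeration here is only to make the scheduling objective concrete.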
A third characteristic of the SURTRAC design is the ability to manage urban (grid-like) road networks, where multiple competing dominant flows shift dynamically through the day, and where specific dominant flows cannot be predetermined (as in arterial or major crossroad applications). Urban networks also often have closely spaced intersections, requiring tight coordination of the intersection controllers. The combination of competing dominant flows and densely spaced intersections presents a challenge for all adaptive traffic control systems. SURTRAC determines dominant flows dynamically by continually communicating projected outflows to downstream neighbors. This information gives each intersection controller a more informed basis for locally balancing competing inflows while simultaneously promoting the establishment of larger "green corridors" when traffic flow circumstances warrant.
== Criticism ==
The SURTRAC system employs closed-circuit television (CCTV) cameras to monitor traffic conditions. This use of CCTV networks in public spaces has sparked debate, with some critics arguing that such surveillance can contribute to an erosion of privacy and potentially facilitate more authoritarian forms of governance by reducing the anonymity of individuals in public areas. Moreover, CCTV footage can be processed with technologies like automatic number plate recognition software, enabling the tracking of vehicles based on their license plates. Facial recognition software can also analyze these images to identify individuals by their facial features. However, it is noted that the resolution of the cameras utilized in the SURTRAC system is reportedly not high enough to enable the detection of license plates or the recognition of individual faces.
There has also been discussion regarding the overall efficacy and impact of traffic optimization systems. Critics have suggested that the benefits of such systems have not been conclusively proven through scientific study. Additionally, concerns have been raised that these systems might inherently favor motorized traffic, potentially leading to disadvantages for pedestrians, bicyclists, and public transit users, and could inadvertently encourage increased use of automobiles.
== See also ==
Traffic optimization
Adaptive traffic control
Smart traffic signals
Traffic light control and coordination
Intelligent transportation system
Transportation demand management
Automated planning and scheduling
== Other adaptive traffic control systems ==
Sydney Coordinated Adaptive Traffic System
== References ==
== External links ==
SURTRAC adaptive traffic signal control
Information about core algorithms and further developments | Wikipedia/Scalable_Urban_Traffic_Control |
Advanced driver-assistance systems (ADAS) are technologies that assist drivers with the safe operation of a vehicle. Through a human-machine interface, ADAS increases car and road safety. ADAS uses automated technology, such as sensors and cameras, to detect nearby obstacles or driver errors and respond accordingly. ADAS can enable various levels of autonomous driving.
As most road crashes occur due to human error, ADAS are developed to automate, adapt, and enhance vehicle technology for safety and better driving. ADAS have been shown to reduce road fatalities by minimizing human error. Safety features are designed to avoid crashes and collisions by offering technologies that alert the driver to problems, implementing safeguards, and taking control of the vehicle if necessary. ADAS may provide adaptive cruise control, assist in avoiding collisions, alert drivers to possible obstacles, warn of lane departure, assist in lane centering, incorporate satellite navigation, provide traffic warnings, provide navigational assistance through smartphones, automate lighting, or provide other features. According to the national crash database in the US, forward collision prevention systems have the potential to reduce crashes by 29%. Similarly, lane keeping assistance offers a reduction potential of 19%, while blind zone detection could decrease crash incidents by 9%.
According to a 2021 research report from Canalys, approximately 33 percent of new vehicles sold in the United States, Europe, Japan, and China had ADAS. The firm also predicted that fifty percent of all automobiles on the road by the year 2030 would be ADAS-enabled.
== Terminology ==
Some groups advocate standardization of feature names, such as Forward Collision Warning and Automatic Emergency Braking rather than Forward Collision Alert or Smart City Brake Support.
Such standardization is promoted by AAA, Consumer Reports, J.D. Power, National Safety Council, PAVE, and SAE International.
== Concept, history and development ==
ADAS were first used in the 1970s with the adoption of the anti-lock braking system. Early ADAS include electronic stability control, anti-lock brakes, blind spot information systems, lane departure warning, adaptive cruise control, and traction control. These systems can be affected by mechanical alignment adjustments or damage from a collision. This has led many manufacturers to require automatic resets for these systems after a mechanical alignment is performed.
=== Technical concepts ===
The reliance on data that describes the outside environment of the vehicle, compared to internal data, differentiates ADAS from driver-assistance systems (DAS). ADAS rely on inputs from multiple data sources, including automotive imaging, LiDAR, radar, image processing, computer vision, and in-car networking. Additional inputs are possible from other sources separate from the primary vehicle platform, including other vehicles (vehicle-to-vehicle or V2V communication) and infrastructure (vehicle-to-infrastructure or V2I communication). Modern cars have ADAS integrated into their electronics; manufacturers can add these new features during the design process or after production via over-the-air (OTA) updates.
ADAS are considered real-time systems since they react quickly to multiple inputs and prioritize the incoming information to prevent crashes. The systems use preemptive priority scheduling to organize which task needs to be done first. Incorrect assignment of these priorities can cause more harm than good.
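The ordering aspect of priority scheduling can be illustrated with a toy dispatcher. This is a minimal sketch, not code from any actual ADAS; the event names and priority values are invented for illustration.

```python
import heapq

# Hypothetical priorities: lower number = more urgent.
PRIORITY = {"collision_warning": 0, "lane_departure": 1, "speed_limit_info": 2}

def process_in_priority_order(events):
    """Drain a batch of sensor events, most urgent first."""
    queue = []
    for seq, (kind, payload) in enumerate(events):
        # seq breaks ties so equal-priority events keep arrival order
        heapq.heappush(queue, (PRIORITY[kind], seq, kind, payload))
    handled = []
    while queue:
        _, _, kind, _ = heapq.heappop(queue)
        handled.append(kind)
    return handled

events = [("speed_limit_info", 50), ("collision_warning", "obstacle"),
          ("lane_departure", "left")]
print(process_in_priority_order(events))
```

A real system would preempt a lower-priority task the moment a higher-priority event arrives rather than batch-process a queue; the sketch only shows the ordering aspect.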
=== ADAS levels ===
ADAS are categorized into different levels based on the amount of automation and the scale provided by the Society of Automotive Engineers (SAE). ADAS can be divided into six levels. In level 0, ADAS cannot control the car and can only provide information for the driver to interpret on their own. Some ADAS that are considered level 0 are: parking sensors, surround-view, traffic sign recognition, lane departure warning, night vision, blind spot information system, rear-cross traffic alert, and forward-collision warning. Levels 1 and 2 are very similar in that they both have the driver do most of the decision-making. The difference is that a level 1 system can take control of one function, while a level 2 system can take control of multiple functions to aid the driver. ADAS that are considered level 1 are: adaptive cruise control, emergency brake assist, automatic emergency brake assist, lane-keeping, and lane centering. ADAS that are considered level 2 are: highway assist, autonomous obstacle avoidance, and autonomous parking. From level 3 to 5, the amount of control the vehicle has increases; at level 5 the vehicle is fully autonomous. Some of these systems have not yet been fully embedded in commercial vehicles. For instance, highway chauffeur is a level 3 system, and automated valet parking is a level 4 system, neither of which was in full commercial use as of 2019. The levels can be roughly understood as Level 0 - no automation; Level 1 - hands-on/shared control; Level 2 - hands-off; Level 3 - eyes off; Level 4 - mind off; and Level 5 - steering wheel optional.
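The level assignments above can be captured as a simple lookup table. This sketch merely restates the examples in this section; it is not an authoritative SAE classification.

```python
# Feature-to-level mapping, restating the examples listed above.
SAE_LEVEL = {
    "parking sensors": 0,
    "traffic sign recognition": 0,
    "forward-collision warning": 0,
    "adaptive cruise control": 1,
    "emergency brake assist": 1,
    "lane centering": 1,
    "highway assist": 2,
    "autonomous parking": 2,
    "highway chauffeur": 3,
    "automated valet parking": 4,
}

def driver_makes_most_decisions(feature):
    """At levels 0-2, the driver still does most of the decision-making."""
    return SAE_LEVEL[feature] <= 2
```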
== Feature examples ==
This is not a comprehensive list of all ADAS. Instead, it provides critical examples of ADAS that have progressed and become more commonly available since 2015.
=== Alerts and warnings ===
Blind spot monitor involves cameras that monitor the driver's blind spots and notify the driver if any obstacles come close to the vehicle. Blind spots are defined as the areas behind or at the side of the vehicle that the driver cannot see from the driver's seat. Blind-spot monitoring systems typically work in conjunction with emergency braking systems to act accordingly if any obstacles come into the vehicle's path. A rear cross-traffic alert (RCTA) typically works in conjunction with the blind spot monitoring system, warning the driver of approaching cross traffic when reversing out of a parking spot.
Driver drowsiness detection aims to prevent collisions due to driver fatigue. The vehicle obtains information, such as facial patterns, steering movement, driving habits, turn signal use, and driving velocity, to determine whether the driver's behavior corresponds with drowsy driving. If drowsy driving is suspected, the vehicle will typically sound a loud alert and may vibrate the driver's seat.
Driver monitoring system is designed to monitor the alertness of the driver. These systems use biological and performance measures to assess the driver's alertness and ability to conduct safe driving practices. Currently, these systems use infrared sensors and cameras to monitor the driver's attentiveness through eye-tracking. If the vehicle detects a possible obstacle, it will notify the driver, and if no action is taken, the vehicle may react to the obstacle.
Electric vehicle warning sounds notify pedestrians and cyclists that a hybrid or plug-in electric vehicle is nearby, typically through a noise such as a beep or horn. This technology was developed in response to a U.S. National Highway Traffic Safety Administration ruling requiring that, by September 2019, 50 percent of quiet vehicles have a device that sounds when the vehicle travels at speeds less than 30 km/h (18.6 mph).
Forward collision warning (FCW) monitors the vehicle's speed, the speed of the vehicle in front of it, and the distance between them. FCW systems alert the driver to a possible impending collision if the vehicle gets too close to the one ahead. These systems do not take control of the vehicle; currently, FCW systems only send the driver an alert in the form of an audio alert, visual pop-up display, or other warning signal.
Intelligent speed adaptation or intelligent speed advice (ISA) assists drivers with compliance with the speed limit. These systems take in information about the vehicle's position and notify the driver when they exceed the speed limit. Some ISA systems allow the vehicle to adjust its speed automatically to adhere to the applicable speed limit. Other ISA systems only warn the driver when they are going over the speed limit and leave compliance to the driver.
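The two ISA variants described here, advisory versus active, can be sketched as a single decision function. This is illustrative only; thresholds and behavior vary by manufacturer.

```python
def isa_response(speed_kmh, limit_kmh, advisory_only=True):
    """Return (action, resulting_speed) for a toy ISA system.

    advisory_only=True models systems that only warn the driver;
    False models systems that actively cap speed at the limit.
    """
    if speed_kmh <= limit_kmh:
        return ("ok", speed_kmh)
    if advisory_only:
        return ("warn", speed_kmh)   # enforcement left to the driver
    return ("adjust", limit_kmh)     # actively reduce to the limit
```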
Intersection assistants use two radar sensors in the front bumper and sides of the car to monitor if there are any oncoming cars at intersections, highway exits, or car parks. This system alerts the driver of any upcoming traffic from the vehicle's sides. It can enact the vehicle's emergency braking system to prevent a collision.
Lane departure warning system (LDW) alerts the driver when they partially merge into a lane without using their turn signals. An LDW system uses cameras to monitor lane markings to determine if the driver unintentionally begins to drift. This system does not take control of the vehicle to help sway the car back into the safety zone but instead sends an audio or visual alert to the driver.
Parking sensors can scan the vehicle's surroundings for objects when the driver initiates parking. Audio warnings can notify the driver of the distance between the vehicle and its surrounding objects. Typically, the faster the audio warnings are issued, the closer the vehicle is getting to the object. These sensors may not detect objects closer to the ground, such as parking stops, which is why parking sensors typically work alongside backup cameras to assist the driver when reversing into a parking spot.
Tire pressure monitoring determines when the tire pressure is outside the normal inflation pressure range. The driver can monitor the tire pressure and is notified of a sudden drop through a pictogram display, gauge, or low-pressure warning signal.
Vibrating seat warnings alert the driver of danger. GM's Cadillacs have offered vibrating seat warnings since the 2013 Cadillac ATS. If the driver begins drifting out of the traveling lane of a highway, the seat vibrates in the direction of the drift, warning the driver of danger. The safety alert seat also provides a vibrating pulse on both sides of the seat when a frontal threat is detected.
Wrong-way driving warning issues alerts to drivers when it detects that they are on the wrong side of the road. Vehicles with this system can use sensors and cameras to identify the direction of oncoming traffic flow. In conjunction with lane detection services, this system can also notify drivers when they partially merge into the wrong side of the road.
=== Crash mitigation ===
Pedestrian protection systems are designed to minimize the number of crashes or injuries that occur between a vehicle and a pedestrian. This system uses cameras and sensors to determine when the front of a vehicle strikes a pedestrian. When the collision occurs, the vehicle's bonnet lifts to provide a cushion between the vehicle's hard engine components and the pedestrian. This helps minimize the possibility of a severe head injury when the pedestrian's head comes into contact with the vehicle.
=== Driving task assistance ===
Adaptive cruise control (ACC) can maintain a chosen velocity and distance between a vehicle and the vehicle ahead. ACC can automatically brake or accelerate according to the distance between the vehicle and the vehicle ahead. ACC systems with stop-and-go features can come to a complete stop and accelerate back to the specified speed. This system still requires an alert driver who takes in the surroundings, as it only controls speed and the following distance.
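The brake-or-accelerate behavior of ACC can be caricatured as a toy proportional controller. The gains and gap logic here are invented for illustration and bear no relation to any production controller.

```python
def acc_command(ego_speed, set_speed, gap_m, desired_gap_m,
                k_gap=0.5, k_speed=0.3):
    """Toy ACC law: positive output = accelerate, negative = brake."""
    if gap_m < desired_gap_m:
        # Too close to the lead vehicle: brake in proportion to the shortfall.
        return -k_gap * (desired_gap_m - gap_m)
    # Gap is safe: track the driver's set speed.
    return k_speed * (set_speed - ego_speed)
```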
Anti-lock braking system (ABS) restores traction to a car's tires by regulating the brake pressure when the vehicle begins to skid. Alongside helping drivers in emergencies, such as when a car starts to skid on ice, ABS can also assist drivers who may lose control of their vehicle. After growing in popularity in the 1990s, ABS has become standard in vehicles.
Automatic parking entirely takes over control of parking functions, including steering, braking, and acceleration, to assist drivers in parking. Depending on the relative cars and obstacles, the vehicle positions itself safely into the available parking spot. Currently, the driver must still be aware of the vehicle's surroundings and be willing to take control of it if necessary.
Collision avoidance system (pre-crash system) uses small radar detectors, typically placed near the front of the car, to determine the car's vicinity to nearby obstacles and notify the driver of potential car crash situations. These systems can account for any sudden changes to the car's environment that may cause a collision. Systems can respond to a possible collision situation with multiple actions, such as sounding an alarm, tensing up passengers' seat belts, closing a sunroof, and raising reclined seats.
Crosswind stabilization helps prevent a vehicle from overturning when strong winds hit its side by analyzing the vehicle's yaw rate, steering angle, lateral acceleration, and velocity sensors. This system distributes the wheel load in relation to the velocity and direction of the crosswind.
Cruise control can maintain a specific speed pre-determined by the driver. The car will maintain the speed the driver sets until the driver hits the brake pedal, clutch pedal, or disengages the system. Specific cruise control systems can accelerate or decelerate, but require the driver to click a button and notify the car of the goal speed.
Electronic stability control (ESC) can reduce the speed of the car and activate individual brakes to prevent understeer and oversteer. Understeer occurs when the car's front wheels do not have enough traction to make the car turn and oversteer occurs when the vehicle turns more than intended, causing the vehicle to spin out. In conjunction with other car safety technologies, such as anti-lock braking and traction control, the ESC can safely help drivers maintain control of the car in unforeseen situations.
Emergency driver assistant facilitates emergency counteract measures if the driver falls asleep or does not perform any driving action after a defined length of time. After a specified period, if the driver has not interacted with the accelerator, brake, or steering wheel, the car will send audio, visual, and physical signals to the driver. If the driver does not wake up after these signals, the system will stop, safely position the vehicle away from oncoming traffic, and turn on the hazard warning lights.
Hill descent control helps drivers maintain a safe speed when driving down a hill or other decline. These systems are typically engaged if the vehicle moves faster than 15 to 20 mph when descending. When a change in grade is sensed, hill descent control automates the vehicle's speed so that it descends the grade safely. The system works by pulsing the braking system and controlling each wheel independently to maintain traction during the descent.
Hill-start assist, also known as hill-start control or hill holder, helps prevent a vehicle from rolling backward down a hill when starting again from a stopped position. This feature holds the brake while the driver transitions from the brake pedal to the gas pedal. In manual cars, it holds the brake while the driver operates the brake pedal, the clutch, and the gas pedal.
Lane centering assists the driver in keeping the vehicle centered in a lane. A lane-centering system may autonomously take over the steering when it determines the driver is at risk of drifting out of the lane. This system uses cameras to monitor lane markings to stay within a safe distance of both sides of the lane.
Lane change assistance helps the driver through the safe completion of a lane change by using sensors to scan the vehicle's surroundings and monitor the driver's blind spots. When a driver intends to make a lane change, the vehicle will notify the driver through an audio or visual alert when a vehicle is approaching from behind or is in the vehicle's blind spot. The visual alert may appear in the dashboard, heads-up-display, or the exterior rear-view mirrors. Several kinds of lane change assistance might exist, for instance, UNECE regulation 79 considers:
"ACSF (Automatically commanded steering function) of Category C" (...) a function which is initiated/activated by the driver and which can perform a single lateral manoeuvre (e.g., lane change) when commanded by the driver.
"ACSF of Category D" (...) a function which is initiated/activated by the driver and which can indicate the possibility of a single lateral manoeuvre (e.g. lane change) but performs that function only following a confirmation by the driver.
"ACSF of Category E" (...) a function which is initiated/activated by the driver and which can continuously determine the possibility of a manoeuvre (e.g. lane change) and complete these manoeuvres for extended periods without further driver command/confirmation.
Rain sensors detect water and automatically trigger electrical actions, such as the raising of open windows and the closing of open convertible tops. A rain sensor can also take in the frequency of rain droplets to automatically trigger windshield wipers with an accurate speed for the corresponding rainfall.
Traction control system (TCS) helps prevent traction loss in vehicles and prevent vehicle turnover on sharp curves and turns. By limiting tire slip (which occurs when the force on a tire exceeds the tire's traction), the system limits power delivery and helps the driver accelerate without losing control. These systems use the same wheel-speed sensors as antilock braking systems. TCS deploys individual wheel braking to control a tire that spins faster than the others.
=== Visual and environmental monitoring ===
Automotive head-up display (auto-HUD) safely displays essential system information to a driver at a vantage point that does not require the driver to look down or away from the road. Currently, the majority of the auto-HUD systems on the market display system information on a windshield using LCDs.
Automotive navigation system uses digital mapping tools, such as the global positioning system (GPS) and traffic message channel (TMC), to provide drivers with up-to-date traffic and navigation information. Through an embedded receiver, an automotive navigation system can send and receive data signals transmitted from satellites regarding the current position of the vehicle in relation to its surroundings.
Automotive night vision systems enable the vehicle to detect obstacles, including pedestrians, in a nighttime setting or heavy weather situation when the driver has low visibility. These systems can use various technologies, including infrared sensors, GPS, Lidar, and Radar, to detect pedestrians and non-human obstacles.
Backup camera provides real-time video information regarding the location of the vehicle and its surroundings. This camera aids the driver when backing up by providing a viewpoint that is typically a blind spot in traditional cars. When the driver puts the car in reverse, the camera automatically turns on.
Glare-free high beam uses light-emitting diodes (LEDs) to mask out the areas of the beam occupied by two or more other cars, so that vehicles coming in the opposite direction are not dazzled by the high beams. In 2010, the VW Touareg introduced the first glare-free high beam headlamp system, which used a mechanical shutter to keep light from hitting specific traffic participants.
Omniview technology improves a driver's visibility by offering a 360-degree viewing system. This system can accurately provide 3D peripheral images of the car's surroundings through video displayed to the driver. Currently, commercial systems can only provide 2D images of the driver's surroundings. Omniview technology uses the input of four cameras and a bird's eye technology to provide a composite 3D model of the surroundings.
Traffic sign recognition (TSR) systems can recognize common traffic signs, such as a "stop" sign or a "turn ahead" sign, through image processing techniques. This system takes into account the sign's shape, such as octagons and rectangles, and its color to classify what the sign is communicating to the driver. Since most systems currently use camera-based technology, a wide variety of factors can make the system less accurate. These include poor lighting conditions, extreme weather conditions, and partial obstruction of the sign.
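Once the image-processing stage has extracted shape and color attributes, the classification step can be caricatured as a lookup. This is a deliberately naive sketch; real TSR systems use trained detectors, not hand-written rules like these.

```python
# Hand-written rules standing in for a trained classifier.
SIGN_RULES = {
    ("octagon", "red"): "stop",
    ("triangle", "red"): "yield",
    ("circle", "red"): "speed limit",
    ("rectangle", "blue"): "information",
}

def classify_sign(shape, color):
    """Map extracted (shape, color) attributes to a sign meaning."""
    return SIGN_RULES.get((shape, color), "unknown")
```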
Vehicular communication systems come in three forms: vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-everything (V2X). V2V systems allow vehicles to exchange information with each other about their current position and upcoming hazards. V2I systems occur when the vehicle exchanges information with nearby infrastructure elements, such as street signs. V2X systems occur when the vehicle monitors its environment and takes in information about possible obstacles or pedestrians in its path.
=== Hands-off systems ===
Ford and General Motors provide "hands-off, eyes-on" systems such as Blue Cruise and Super Cruise in North America. These systems allow drivers to take their hands off the steering wheel while the system is engaged. However, drivers must keep their eyes on the road and be ready to take immediate action at all times.
== Adoption ==
In Europe, in Q2 2018, 3% of passenger cars sold had level 2 autonomous driving features. In Q2 2019, 325,000 passenger cars were sold in Europe with level 2 autonomous driving features, or 8% of all new cars sold.
According to a 2021 research report from Canalys, approximately 33 percent of new vehicles sold in the United States, Europe, Japan, and China had ADAS features. The firm also predicted that fifty percent of all automobiles on the road by the year 2030 would be ADAS-enabled.
=== Branding ===
Major car brands with Level 2 features include Audi, BMW, Mercedes-Benz, Tesla, Volvo, Tata, Citroën, Ford, Hyundai, Kia, Mazda, Nissan, Peugeot, Mahindra and Subaru. Full Level 2 features are included with Full Self-Driving from Tesla, Pilot Assist from Volvo, OpenPilot from Comma.ai and ProPILOT Assist from Nissan.
Level 3 features are included in Drive Pilot from Mercedes-Benz.
=== Crash statistics ===
On June 29, 2021, the National Highway Traffic Safety Administration (NHTSA), the branch of the United States Department of Transportation responsible for federal motor vehicle regulations, issued Standing General Order 2021-01 (SGO 2021-01), which required manufacturers of ADAS (Levels 1 or 2) and Automated Driving Systems (ADS) (Levels 3 through 5) to promptly report crashes that occurred when driver-assistance or automation systems were in use. SGO 2021-01 was subsequently amended on August 5, 2021. Under the amended SGO 2021-01, a crash involving ADS or Level 2 ADAS is reportable to the NHTSA if it meets the following criteria:
it happened on a publicly accessible road in the United States
the Levels 3–5 ADS or Level 2 ADAS was engaged at any time within 30 seconds before the start of the crash through the conclusion of the crash
A severe crash is one that results in one or more of the following:
transport to a hospital for medical treatment or a fatality, regardless of whether that person was an occupant of the vehicle equipped with the ADS or L2 ADAS
a vehicle tow-away or an air bag deployment, regardless of whether that is the vehicle equipped with the ADS or L2 ADAS
the involvement of a vulnerable road user (anyone who is not an occupant of a motor vehicle with more than three wheels: typically pedestrians, wheelchair users, motorcyclists, or bicyclists), regardless of that vulnerable road user's influence on the cause of the crash
The incident report to the NHTSA must be made according to the following schedule:
Severe crashes must be reported within one calendar day after the manufacturer receives notice the crash has occurred. In addition, an updated crash incident report must be made within ten calendar days after the manufacturer receives notice the crash has occurred.
Otherwise, non-severe crashes involving ADS (excluding L2 ADAS) must be reported on the fifteenth day of the month following the calendar month in which the manufacturer receives notice the crash has occurred.
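The schedule above can be expressed as a small date calculation. This is an informal reading of the schedule as summarized here, not legal guidance, and the function name is invented for illustration.

```python
from datetime import date, timedelta

def initial_report_deadline(notice_date, severe, is_ads=True):
    """Deadline for the initial SGO 2021-01 incident report.

    Severe crashes: one calendar day after the manufacturer receives
    notice. Non-severe ADS crashes: the 15th of the following month.
    """
    if severe:
        return notice_date + timedelta(days=1)
    if not is_ads:
        raise ValueError("non-severe L2 ADAS crashes are not covered "
                         "by this sketch")
    # 15th day of the month following the month of notice
    next_month = 1 if notice_date.month == 12 else notice_date.month + 1
    year = notice_date.year + (1 if notice_date.month == 12 else 0)
    return date(year, next_month, 15)
```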
SGO 2021-01 is in effect for three years, starting on June 29, 2021. After gathering data for almost a year (July 1, 2021 through May 15, 2022), the NHTSA released the initial set of data in June 2022 and stated it plans to update the data on a monthly basis. The data are subject to several caveats and limitations; for instance, manufacturers are not required to report the number of vehicles that have been built and equipped with ADS/ADAS, the number of vehicles operating with ADS/ADAS, or the total distance traveled with ADS/ADAS active, all of which would help normalize the incident report data.
According to the initial data covering July 2021 to May 15, 2022, ADS (Levels 3–5) from 25 different manufacturers were involved in 130 crashes, led by Waymo LLC (62), Transdev Alternative Services (34), Cruise LLC (23), General Motors (16), and Argo AI (10); because multiple manufacturers can report the same crash, the sum exceeds the total number of reportable incidents. Of the 130 crashes, 108 had no associated injuries reported; there was only one serious injury associated with the remaining crashes. The most commonly reported damage location was the rear of the ADS-equipped vehicle.
Similarly, ADAS (Level 2) from 12 different manufacturers were involved in 367 crashes over the same period; 392 crashes were reported in total, but 25 either occurred before July 2021 or had no associated date. Reported incidents were led by Tesla (273), Honda (90), and Subaru (10). Of the 392 crashes, 98 included injury reporting; of those 98, 46 had no injuries reported, 5 resulted in serious injuries, and 6 resulted in fatalities. The most commonly reported damage location was the front of the ADAS-equipped vehicle.
== Potential issues and concerns ==
=== Need for standardization ===
According to PACTS, the lack of full standardization may make systems difficult for drivers to understand; a driver might assume that one car behaves like another when in fact it does not.
We can't help feeling that this lack of standardisation is one of the more problematic aspects of driver-assistance systems; and it's one that is likely to be felt more keenly as systems become increasingly commonplace in years to come, particularly if traffic laws change to allow 'hands-off' driving in the future.
ADAS can have many limitations; for instance, one pre-collision system's documentation requires 12 pages to explain 23 exceptions in which the system may operate when not needed and 30 exceptions in which it may not operate when a collision is likely.
Names for ADAS features are not standardized. For instance, adaptive cruise control is called Adaptive Cruise Control by Fiat, Ford, GM, VW, Volvo and Peugeot, but Intelligent Cruise Control by Nissan, Active Cruise Control by Citroen and BMW, and DISTRONIC by Mercedes. To help with standardization, SAE International has endorsed a series of recommendations for generic ADAS terminology for car manufacturers, that it created with Consumer Reports, the American Automobile Association, J.D. Power, and the National Safety Council.
Buttons and dashboard symbols change from car to car due to lack of standardization.
ADAS behavior might change from car to car, for instance ACC speed might be temporarily overridden in most cars, while some switch to standby after one minute.
=== Insurance and economic impact ===
The AV industry is growing rapidly, and according to a report by Market Research Future, the market is expected to exceed $65 billion by 2027. AV insurance and rising competition are expected to fuel that growth. Auto insurance for ADAS has directly affected the global economy, and many questions have arisen within the general public. ADAS allow autonomous vehicles to enable self-driving features, but these features carry associated risks. AV companies and manufacturers are advised to carry insurance in the following areas in order to avoid serious litigation. Depending on the level, ranging from 0 to 5, each car manufacturer must find the combination of insurance types that best matches its products. This list is not exhaustive, and more types of insurance and risk may emerge in the years to come.
Technology errors and omissions – This insurance will cover any physical risk if the technology itself has failed. These usually include all of the associated expenses of a car crash.
Auto liability and physical damage – This insurance covers third-party injuries and technology damage.
Cyber liability – This insurance will protect companies from any lawsuits from third parties and penalties from regulators regarding cybersecurity.
Directors and officers – This insurance protects a company's balance sheet and assets by protecting the company from bad management or misappropriation of assets.
With the technology embedded in autonomous vehicles, self-driving cars can transmit data when a crash occurs, which streamlines claims administration. Recording the car's minute-by-minute activity on the road can also deter fraudulent staging of crashes. ADAS are expected to streamline the insurance industry and improve its economic efficiency by countering fraudulent human behavior. In September 2016, the NHTSA published the Federal Automated Vehicles Policy, which describes the U.S. Department of Transportation's policies related to highly automated vehicles (HAV), which range from vehicles with ADAS features to autonomous vehicles.
=== Ethical issues and current solutions ===
In March 2014, the US Department of Transportation's National Highway Traffic Safety Administration (NHTSA) announced that it would require all new vehicles under 10,000 pounds (4,500 kg) to have rear view cameras by May 2018. The rule was required by Congress as part of the Cameron Gulbransen Kids Transportation Safety Act of 2007. The Act is named after two-year-old Cameron Gulbransen, who was killed when his father, unable to see the toddler in the family's driveway, backed his SUV over him.
The advancement of autonomous driving is accompanied by ethical concerns. The earliest moral issues associated with autonomous driving date back to the age of the trolleys. The trolley problem, introduced by English philosopher Philippa Foot in 1967, is one of the best known: a trolley's brakes have failed, and five people stand on the track ahead; the driver may continue straight, killing the five people ahead, or turn onto a side track, killing the one pedestrian standing there. What should the driver do? Before the development of autonomous vehicles, the trolley problem remained an ethical dilemma between utilitarianism and deontological ethics. As ADAS advance, however, it becomes an issue that must be addressed by the programming of self-driving cars. The crashes that autonomous vehicles might face can be very similar to those depicted in the trolley problem. Although ADAS make vehicles generally safer than human-only driving, crashes are unavoidable. This raises questions such as "whose lives should be prioritized in the event of an inevitable crash?" or "what should be the universal principle for these 'crash algorithms'?"
Many researchers have been working on ways to address the ethical concerns associated with ADAS. For instance, the artificial intelligence approach allows computers to learn human ethics by feeding them data regarding human actions. Such a method is useful when the rules cannot be articulated because the computer can learn and identify the ethical elements on its own without precisely programming whether an action is ethical. However, there are limitations to this approach. For example, many human actions are done out of self-preservation instincts, which is realistic but not ethical; feeding such data to the computer cannot guarantee that the computer captures the ideal behavior. Furthermore, the data fed to an artificial intelligence must be carefully selected to avoid producing undesired outcomes.
Another notable method is a three-phase approach proposed by Noah J. Goodall. This approach first necessitates a system established with the agreement of car manufacturers, transportation engineers, lawyers, and ethicists, and should be set transparently. The second phase is letting artificial intelligence learn human ethics while being bound by the system established in phase one. Lastly, the system should provide constant feedback that is understandable by humans.
== Ratings ==
=== Consumer Reports ===
In October 2023, Consumer Reports rated 17 "active driving assistance systems". Their criteria were:
Capabilities and performance
Clear when safe to use
Ease of use
Keeping the driver engaged
Unresponsive driver
Their ratings were:
=== Insurance Institute for Highway Safety ===
In March 2024, the American Insurance Institute for Highway Safety (IIHS) reported its first "partial automation safeguard ratings". Their criteria were:
Adaptive cruise control does not automatically resume after a lengthy stop or if the driver is not looking at the road
Automated lane changes must be initiated or confirmed by the driver
Automation features cannot be used with seat belt unfastened
Automation features cannot be used with automatic emergency braking or lane departure prevention/warning disabled
Fail-safe procedure slows vehicle, notifies manufacturer and keeps automation off limits for remainder of drive
Lane centering does not discourage steering by driver
Monitors both the driver's gaze and hand position
Uses multiple types of rapidly escalating alerts to get driver's attention
The ratings were (no system received a "good" rating):
== Future ==
Intelligent transport systems (ITS) highly resemble ADAS, but experts believe that ITS goes beyond automated traffic to include any enterprise that safely transports humans. ITS integrates transportation technology with a city's infrastructure, leading toward a "smart city". These systems promote active safety by increasing the efficiency of roads; according to a 2008 study, they may add an estimated 22.5% road capacity on average, and ADAS have contributed to this increase in active safety. ITS use a wide range of communication technologies, both wireless and traditional, to enhance productivity.
Driver control assistance systems (DCAS) is the name of a draft ADAS regulation. It would allow hands-free driving, with a possible risk of reduced attentiveness, and would permit systems such as Tesla FSD in Europe. The UNECE driver control assistance systems regulation plans that DCAS shall be designed to ensure that the driver performs the driving task, that the driver's hands remain on the wheel, and that the system monitors the driver's visual engagement.
== See also ==
Mobileye
Applied Intuition
EuroFOT
Road Safety
Integrated Vehicle-Based Safety Systems
Intelligent transportation system
Hands-free driving
Traffic psychology
Automotive electronics
== References ==
== External links ==
Driver Assist Technologies. Insurance Institute for Highway Safety (IIHS). | Wikipedia/Advanced_driver-assistance_systems |
The Materials Project is an open-access database offering material properties to accelerate the development of technology by predicting how new materials, both real and hypothetical, can be used. The project was established in 2011 with an emphasis on battery research, but it includes property calculations for many areas of clean energy systems such as photovoltaics, thermoelectric materials, and catalysts. The database covers some 35,000 known molecules and over 130,000 inorganic compounds.
Dr. Kristin Persson of Lawrence Berkeley National Laboratory founded and leads the initiative, which uses supercomputers at Berkeley, among other institutions, to run calculations using density functional theory (DFT). Commonly computed values include enthalpy of formation, crystal structure, and band gap. The assembled databases of computed structures and properties are freely available to anyone under a CC 4.0 license and were developed with ease of use in mind. The data have been used to predict new materials that should be synthesizable, and to screen existing materials for useful properties.
The project can be traced back to Persson's postdoc research at MIT in 2004, during which she was given access to a supercomputer to do DFT calculations. After joining Berkeley Lab in 2008, Persson received the necessary funding to make the data from her research freely available.
== References == | Wikipedia/Materials_Project |
Pixel art scaling algorithms are graphical filters that attempt to enhance the appearance of hand-drawn 2D pixel art graphics. These algorithms are a form of automatic image enhancement. Pixel art scaling algorithms employ methods significantly different from the common methods of image rescaling, which have the goal of preserving the appearance of images.
As pixel art graphics are commonly used at very low resolutions, they employ careful coloring of individual pixels. This results in graphics that rely on a high amount of stylized visual cues to define complex shapes. Several specialized algorithms have been developed to handle re-scaling of such graphics.
These specialized algorithms can improve the appearance of pixel-art graphics, but in doing so they introduce changes. Such changes may be undesirable, especially if the goal is to faithfully reproduce the original appearance.
Since a typical application of this technology is improving the appearance of fourth-generation and earlier video games on arcade and console emulators, many pixel art scaling algorithms are designed to run in real time for sufficiently small input images at 60 frames per second. This places constraints on the type of programming techniques that can be used for this sort of real-time processing. Many work only on specific scale factors: 2× is the most common, while 3×, 4×, 5×, and 6× also exist but are less used.
== Algorithms ==
=== SAA5050 'Diagonal Smoothing' ===
The Mullard SAA5050 Teletext character generator chip (1980) used a primitive pixel scaling algorithm to generate higher-resolution characters on the screen from a lower-resolution representation from its internal ROM. Internally, each character shape was defined on a 5 × 9 pixel grid, which was then interpolated by smoothing diagonals to give a 10 × 18 pixel character, with a characteristically angular shape, surrounded to the top and the left by two pixels of blank space. The algorithm only works on monochrome source data, and assumes the source pixels will be logically true or false depending on whether they are 'on' or 'off'. Pixels 'outside the grid pattern' are assumed to be off.
The algorithm works as follows:
A B C --\ 1 2
D E F --/ 3 4
1 = B | (A & E & !B & !D)
2 = B | (C & E & !B & !F)
3 = E | (!A & !E & B & D)
4 = E | (!C & !E & B & F)
Note that this algorithm, like the Eagle algorithm below, has a flaw:
If a pattern of 4 pixels in a hollow diamond shape appears, the hollow will be obliterated by the expansion.
The SAA5050's internal character ROM carefully avoids ever using this pattern.
The degenerate case:
*
* *
*
becomes:
**
****
******
******
****
**
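The boolean rules above can be transcribed directly. The following Python sketch (the function name is ours) expands one neighbourhood into the 2×2 output block given by the rules; pixels outside the grid are simply passed in as off, matching the chip's assumption:

```python
def saa5050_quad(a, b, c, d, e, f):
    """Expand one neighbourhood into a 2x2 block per the SAA5050 rules.

    Neighbourhood layout (booleans, True = pixel on):
        a b c
        d e f
    Returns (p1, p2, p3, p4) for the output block:
        p1 p2
        p3 p4
    """
    p1 = b or (a and e and not b and not d)
    p2 = b or (c and e and not b and not f)
    p3 = e or (not a and not e and b and d)
    p4 = e or (not c and not e and b and f)
    return p1, p2, p3, p4
```

For example, an on pixel B over an off row expands to an on top pair and an off bottom pair, while an off B with on pixels diagonally at A and E gets its top-left output switched on, smoothing the diagonal.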
=== EPX/Scale2×/AdvMAME2× ===
Eric's Pixel Expansion (EPX) is an algorithm developed by Eric Johnston at LucasArts around 1992, when porting the SCUMM engine games from the IBM PC (which ran at 320 × 200 resolution with 256 colors) to the early color Macintosh computers, which ran at more or less double that resolution.
The algorithm works as follows, expanding P into 4 new pixels based on P's surroundings:
1=P; 2=P; 3=P; 4=P;
IF C==A => 1=A
IF A==B => 2=B
IF D==C => 3=C
IF B==D => 4=D
IF of A, B, C, D, three or more are identical: 1=2=3=4=P
Later implementations of this same algorithm (as AdvMAME2× and Scale2×, developed around 2001) are slightly more efficient but functionally identical:
1=P; 2=P; 3=P; 4=P;
IF C==A AND C!=D AND A!=B => 1=A
IF A==B AND A!=C AND B!=D => 2=B
IF D==C AND D!=B AND C!=A => 3=C
IF B==D AND B!=A AND D!=C => 4=D
AdvMAME2× is available in DOSBox via the scaler=advmame2x dosbox.conf option.
The AdvMAME4×/Scale4× algorithm is just EPX applied twice to get 4× resolution.
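The Scale2× formulation above translates to a short Python sketch (the function name is ours; we assume a list-of-rows image and clamp out-of-bounds neighbours to the border, which makes edge pixels scale as nearest-neighbour):

```python
def scale2x(src):
    """EPX/Scale2x: expand every pixel P into a 2x2 block.

    Neighbour layout:    A        output:  1 2
                       C P B               3 4
                         D
    """
    h, w = len(src), len(src[0])

    def px(y, x):
        # clamp coordinates to the image border
        return src[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    out = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            p = src[y][x]
            a, b = px(y - 1, x), px(y, x + 1)
            c, d = px(y, x - 1), px(y + 1, x)
            o1 = a if c == a and c != d and a != b else p
            o2 = b if a == b and a != c and b != d else p
            o3 = c if d == c and d != b and c != a else p
            o4 = d if b == d and b != a and d != c else p
            out[2 * y][2 * x], out[2 * y][2 * x + 1] = o1, o2
            out[2 * y + 1][2 * x], out[2 * y + 1][2 * x + 1] = o3, o4
    return out
```

A uniform image is left unchanged (every rule requires an inequality), while diagonal edges are smoothed without ever introducing colors absent from the input.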
==== Scale3×/AdvMAME3× and ScaleFX ====
The AdvMAME3×/Scale3× algorithm (available in DOSBox via the scaler=advmame3x dosbox.conf option) can be thought of as a generalization of EPX to the 3× case. The corner pixels are calculated identically to EPX.
1=E; 2=E; 3=E; 4=E; 5=E; 6=E; 7=E; 8=E; 9=E;
IF D==B AND D!=H AND B!=F => 1=D
IF (D==B AND D!=H AND B!=F AND E!=C) OR (B==F AND B!=D AND F!=H AND E!=A) => 2=B
IF B==F AND B!=D AND F!=H => 3=F
IF (H==D AND H!=F AND D!=B AND E!=A) OR (D==B AND D!=H AND B!=F AND E!=G) => 4=D
5=E
IF (B==F AND B!=D AND F!=H AND E!=I) OR (F==H AND F!=B AND H!=D AND E!=C) => 6=F
IF H==D AND H!=F AND D!=B => 7=D
IF (F==H AND F!=B AND H!=D AND E!=G) OR (H==D AND H!=F AND D!=B AND E!=I) => 8=H
IF F==H AND F!=B AND H!=D => 9=F
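The nine rules above are mechanical to transcribe. A minimal Python sketch (function name ours, borders clamped as an assumption) applies them per pixel:

```python
def scale3x(src):
    """AdvMAME3x/Scale3x: expand each pixel E into a 3x3 block.

    Neighbourhood:  A B C    output:  1 2 3
                    D E F             4 5 6
                    G H I             7 8 9
    """
    ht, w = len(src), len(src[0])

    def px(y, x):
        # clamp coordinates to the image border
        return src[min(max(y, 0), ht - 1)][min(max(x, 0), w - 1)]

    out = [[None] * (3 * w) for _ in range(3 * ht)]
    for y in range(ht):
        for x in range(w):
            a, b, c = px(y - 1, x - 1), px(y - 1, x), px(y - 1, x + 1)
            d, e, f = px(y, x - 1), src[y][x], px(y, x + 1)
            g, h, i = px(y + 1, x - 1), px(y + 1, x), px(y + 1, x + 1)
            o = [[e] * 3 for _ in range(3)]  # default: nearest-neighbour
            if d == b and d != h and b != f:
                o[0][0] = d
            if (d == b and d != h and b != f and e != c) or (b == f and b != d and f != h and e != a):
                o[0][1] = b
            if b == f and b != d and f != h:
                o[0][2] = f
            if (h == d and h != f and d != b and e != a) or (d == b and d != h and b != f and e != g):
                o[1][0] = d
            if (b == f and b != d and f != h and e != i) or (f == h and f != b and h != d and e != c):
                o[1][2] = f
            if h == d and h != f and d != b:
                o[2][0] = d
            if (f == h and f != b and h != d and e != g) or (h == d and h != f and d != b and e != i):
                o[2][1] = h
            if f == h and f != b and h != d:
                o[2][2] = f
            for dy in range(3):
                for dx in range(3):
                    out[3 * y + dy][3 * x + dx] = o[dy][dx]
    return out
```

As with EPX, every rule demands an inequality, so flat regions pass through untouched.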
There is also an improved variant of Scale3× called ScaleFX, developed by Sp00kyFox, and a version combined with Reverse-AA called ScaleFX-Hybrid.
=== Eagle ===
Eagle works as follows: for every input pixel, we generate four output pixels. First, set all four to the color of the input pixel currently being scaled (as in nearest-neighbor scaling). Next, look at the three pixels above, to the left, and diagonally above-left: if all three are the same color as each other, set the top-left pixel of the output square to that color instead of the nearest-neighbor color. Proceed similarly for all four output pixels, then move on to the next input pixel.
Assume an input matrix of 3 × 3 pixels where the centermost pixel is the pixel to be scaled, and an output matrix of 2 × 2 pixels (i.e., the scaled pixel)
First:              Then:
. . . --\ CC        S T U --\ 1 2
. C . --/ CC        V C W --/ 3 4
. . .               X Y Z

IF V==S==T => 1=S
IF T==U==W => 2=U
IF V==X==Y => 3=X
IF W==Z==Y => 4=Z
Thus if we have a single black pixel on a white background it will vanish. This is a bug in the Eagle algorithm but is solved by other algorithms such as EPX, 2xSaI, and HQ2x.
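A compact transcription of the rules (function name ours; border neighbours clamped to the edge as an assumption) reproduces exactly this behaviour:

```python
def eagle2x(src):
    """Eagle: expand each pixel C into a 2x2 block.

    3x3 neighbourhood    output
        S T U             1 2
        V C W             3 4
        X Y Z
    """
    h, w = len(src), len(src[0])

    def px(y, x):
        # clamp coordinates to the image border
        return src[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    out = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            c = src[y][x]
            s, t, u = px(y - 1, x - 1), px(y - 1, x), px(y - 1, x + 1)
            v, ww = px(y, x - 1), px(y, x + 1)
            xx, yy, z = px(y + 1, x - 1), px(y + 1, x), px(y + 1, x + 1)
            out[2 * y][2 * x] = s if v == s == t else c
            out[2 * y][2 * x + 1] = u if t == u == ww else c
            out[2 * y + 1][2 * x] = xx if v == xx == yy else c
            out[2 * y + 1][2 * x + 1] = z if ww == z == yy else c
    return out
```

Running it on a single contrasting pixel surrounded by background shows the flaw: every output pixel of the centre block matches its three identical corner neighbours, so the lone pixel disappears entirely.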
=== 2×SaI ===
2×SaI, short for 2× Scale and Interpolation engine, was inspired by Eagle. It was designed by Derek Liauw Kie Fa, also known as Kreed, primarily for use in console and computer emulators, and it has remained fairly popular in this niche. Many of the most popular emulators, including ZSNES and VisualBoyAdvance, offer this scaling algorithm as a feature. Several slightly different versions of the scaling algorithm are available, and these are often referred to as Super 2×SaI and Super Eagle.
The 2xSaI family works on a 4 × 4 matrix of pixels where the pixel marked A below is scaled:
I E F J
G A B K --\ W X
H C D L --/ Y Z
M N O P
For 16-bit pixels, they use pixel masks which change based on whether the 16-bit pixel format is 565 or 555. The constants colorMask, lowPixelMask, qColorMask, qLowPixelMask, redBlueMask, and greenMask are 16-bit masks. The lower 8 bits are identical in either pixel format.
Two interpolation functions are described:
INTERPOLATE(uint32 A, uint32 B)
-- linear midpoint of A and B
if (A == B) return A;
return (
((A & colorMask) >> 1)
+ ((B & colorMask) >> 1)
+ (A & B & lowPixelMask) );
Q_INTERPOLATE(uint32 A, uint32 B, uint32 C, uint32 D)
-- bilinear interpolation; A, B, C, and D's average
x = ((A & qColorMask) >> 2)
+ ((B & qColorMask) >> 2)
+ ((C & qColorMask) >> 2)
+ ((D & qColorMask) >> 2);
y = (A & qLowPixelMask)
+ (B & qLowPixelMask)
+ (C & qLowPixelMask)
+ (D & qLowPixelMask);
y = (y >> 2) & qLowPixelMask;
return x + y;
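For the 565 pixel format, colorMask clears the low bit of each 5/6/5 channel and lowPixelMask keeps only those low bits. The INTERPOLATE trick then averages all three channels in one pass over the packed word, using the identity (x + y) >> 1 == (x >> 1) + (y >> 1) + (x & y & 1) per channel. A Python sketch (names ours):

```python
# RGB565 layout: RRRRRGGG GGGBBBBB
COLOR_MASK_565 = 0xF7DE      # every bit except the low bit of each channel
LOW_PIXEL_MASK_565 = 0x0821  # the low bits of each channel (bits 11, 5, 0)

def pack565(r, g, b):
    return (r << 11) | (g << 5) | b

def interpolate_565(a, b):
    """Channel-wise floor average of two RGB565 pixels without unpacking.

    Masking out each channel's low bit before shifting keeps channels
    from bleeding into each other; the final term restores the carry
    produced when both low bits are set.
    """
    if a == b:
        return a
    return (((a & COLOR_MASK_565) >> 1)
            + ((b & COLOR_MASK_565) >> 1)
            + (a & b & LOW_PIXEL_MASK_565))
```

Each halved channel stays within its own bit range (a 5-bit or 6-bit sum of two half-values cannot overflow), which is why the result is again a valid 565 pixel.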
The algorithm checks A, B, C, and D for a diagonal match such that A==D and B!=C, or the other way around, or if they are both diagonals or if there is no diagonal match. Within these, it checks for three or four identical pixels. Based on these conditions, the algorithm decides whether to use one of A, B, C, or D, or an interpolation among only these four, for each output pixel. The 2xSaI arbitrary scaler can enlarge any image to any resolution and uses bilinear filtering to interpolate pixels.
Since Kreed released the source code under the GNU General Public License, it is freely available to anyone wishing to utilize it in a project released under that license. Developers wishing to use it in a non-GPL project would be required to rewrite the algorithm without using any of Kreed's existing code.
It is available in DosBox via scaler=2xsai option.
=== hqnx family ===
Maxim Stepin's hq2x, hq3x, and hq4x are for scale factors of 2:1, 3:1, and 4:1 respectively. Each works by comparing the color value of each pixel to those of its eight immediate neighbors, marking the neighbors as close or distant, and using a pre-generated lookup table to find the proper proportion of input pixels' values for each of the 4, 9, or 16 corresponding output pixels. The hq3x family will perfectly smooth any diagonal line whose slope is ±0.5, ±1, or ±2 and which is not anti-aliased in the input; a line with any other slope will alternate between two slopes in the output. It will also smooth very tight curves. Unlike 2xSaI, it anti-aliases the output.
hqnx was initially created for the Super NES emulator ZSNES. The author of bsnes has released a space-efficient implementation of hq2x to the public domain. A port to shaders, which has comparable quality to the early versions of xBR, is available. Before the port, a shader called "scalehq" was often confused with hqx.
=== xBR family ===
There are six filters in this family: xBR, xBRZ, xBR-Hybrid, Super xBR, xBR+3D, and Super xBR+3D.
xBR ("scale by rules"), created by Hyllian, works much the same way as HQx (based on pattern recognition) and would generate the same result as HQx when given the above pattern. However, it goes further than HQx by using a 2-stage set of interpolation rules, which better handle more complex patterns such as anti-aliased lines and curves. Scaled background textures keep the sharp characteristics of the original image, rather than becoming blurred like HQx (often ScaleHQ in practice) tends to do. The newest xBR versions are multi-pass and can preserve small details better. There is also a version of xBR combined with Reverse-AA shader called xBR-Hybrid. xBR+3D is a version with a 3D mask that only filters 2D elements.
xBRZ by Zenju is a modified version of xBR. It is implemented from scratch as a CPU-based filter in C++. It uses the same basic idea as xBR's pattern recognition and interpolation but with a different rule set designed to preserve fine image details as small as a few pixels. This makes it useful for scaling the details in faces, and in particular eyes. xBRZ is optimized for multi-core CPUs and 64-bit architectures and shows 40–60% better performance than HQx even when running on a single CPU core only. It supports scaling images with an alpha channel, and scaling by integer factors from 2× up to 6×.
Super xBR is an algorithm developed by Hyllian in 2015. It uses combinations of known linear filters along with xBR edge-detection rules in a non-linear way. It works in two passes, can only scale an image by a factor of two (or multiples of two, by reapplying it), and also has an anti-ringing filter. Super xBR+3D is a version with a 3D mask that only filters 2D elements.
There is also a Super xBR version rewritten in C/C++.
=== RotSprite ===
RotSprite is a scaling and rotation algorithm for sprites developed by Xenowhirl. It produces far fewer artifacts than nearest-neighbor rotation algorithms, and like EPX, it does not introduce new colors into the image (unlike most interpolation systems).
The algorithm first scales the image to 8 times its original size with a modified Scale2× algorithm which treats similar (rather than identical) pixels as matches. It then (optionally) calculates what rotation offset to use by favoring sampled points that are not boundary pixels. Next, the rotated image is created with a nearest-neighbor scaling and rotation algorithm that simultaneously shrinks the big image back to its original size and rotates the image. Finally, overlooked single-pixel details are (optionally) restored if the corresponding pixel in the source image is different and the destination pixel has three identical neighbors.
==== Fast RotSprite ====
Fast RotSprite is a fast rotation algorithm for pixel art developed by Oleg Mekekechko for the Pixel Studio app. It is based on RotSprite but has better performance with slight quality loss. It can process larger images in real-time. Instead of the 8× upscale, Fast RotSprite uses a single 3× upscale. Then it simply rotates all pixels with rounding coordinates. Finally, it performs 3× downscale without introducing new colors. As all operations on each step are independent, they can be done in parallel to greatly increase performance.
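The middle step, rotating all pixels with rounded coordinates, is plain nearest-neighbour rotation. A minimal sketch (function name ours; we assume a list-of-rows image rotated about its centre, and omit the 3× upscale/downscale steps):

```python
import math

def rotate_nearest(src, angle_deg, fill=0):
    """Nearest-neighbour rotation about the image centre.

    For each destination pixel, rotate its coordinates backwards by the
    angle and round to the nearest source pixel (inverse mapping, so the
    output has no holes). Positive angles rotate the image clockwise in
    screen coordinates (y pointing down).
    """
    h, w = len(src), len(src[0])
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # inverse mapping: destination -> source
            sx = cos_a * (x - cx) + sin_a * (y - cy) + cx
            sy = -sin_a * (x - cx) + cos_a * (y - cy) + cy
            si, sj = round(sy), round(sx)
            if 0 <= si < h and 0 <= sj < w:
                out[y][x] = src[si][sj]
    return out
```

Applied to a 3× upscaled sprite and followed by a colour-preserving 3× downscale, this yields the Fast RotSprite pipeline described above.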
=== Kopf–Lischinski ===
The Kopf–Lischinski algorithm is a novel way to extract resolution-independent vector graphics from pixel art described in the 2011 paper "Depixelizing Pixel Art". A Python implementation is available.
The algorithm has been ported to GPUs and optimized for real-time rendering. The source code is available for this variant.
=== Edge-directed interpolation (EDI) ===
Edge-directed interpolation (EDI) describes upscaling techniques that use statistical sampling to ensure the quality of an image as it is scaled up. There were several earlier methods that involved detecting edges to generate blending weights for linear interpolation or classifying pixels according to their neighbor conditions and using different otherwise isotropic interpolation schemes based on the classification.
Each interpolation approach boils down to weighted averages of neighboring pixels. The goal is to find the optimal weights. Bilinear interpolation sets all the weights to be equal. Higher-order interpolation methods such as bicubic or sinc interpolation consider a larger number of neighbors than just the adjacent ones.
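As a concrete illustration of interpolation as a weighted average of neighbors, the sketch below (function name ours) computes one bilinear sample: the four weights are the areas of the opposite sub-rectangles, all equal to 1/4 at the exact midpoint and varying with the fractional position elsewhere:

```python
def bilinear_sample(src, y, x):
    """Sample a list-of-rows image at fractional coordinates (y, x)
    as a weighted average of the four surrounding pixels."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(src) - 1)
    x1 = min(x0 + 1, len(src[0]) - 1)
    fy, fx = y - y0, x - x0  # fractional offsets in [0, 1)
    return (src[y0][x0] * (1 - fy) * (1 - fx)
            + src[y0][x1] * (1 - fy) * fx
            + src[y1][x0] * fy * (1 - fx)
            + src[y1][x1] * fy * fx)
```

Edge-directed methods replace these fixed, position-only weights with weights adapted to the local image content.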
==== NEDI ====
NEDI (New Edge-Directed Interpolation) computes local covariances in the original image and uses them to adapt the interpolation at high resolution. It is the prototype filter of this family.
==== EDIUpsizer ====
EDIUpsizer is a resampling filter that resizes an image by a factor of two both horizontally and vertically using NEDI (new edge-directed interpolation). EDIUpsizer also uses a few modifications to basic NEDI to prevent a lot of the artifacts that NEDI creates in detailed areas. These include condition number testing and adaptive window size, as well as capping constraints. All modifications and constraints to NEDI are optional (can be turned on and off) and are user-configurable. This filter is rather slow.
==== FastEDIUpsizer ====
FastEDIUpsizer is a slimmed-down version of EDIUpsizer that is slightly more tuned for speed. It uses a constant 8 × 8 window size, only performs NEDI on the luma plane, and only uses either bicubic or bilinear interpolation as the fallback interpolation method.
==== eedi3 ====
Another edge-directed interpolation filter. Works by minimizing a cost function involving every pixel in a scan line. It is slow.
==== EEDI2 ====
EEDI2 resizes an image by 2× in the vertical direction by copying the existing image to 2⋅y(n) and interpolating the missing field. It is intended for edge-directed interpolation for deinterlacing (i.e. not made for resizing a normal image, but can do that as well). EEDI2 can be used with both TDeint and TIVTC, see the discussion link for more info on how to do this.
==== SuperRes ====
The SuperRes shaders use a different scaling method which can be used in combination with NEDI (or any other scaling algorithm). The method is explained in detail by its creator Shiandow in a Doom9 forum post in 2014. This method often gives better results than just using NEDI, and rival those of NNEDI3. These are now also available as an MPDN renderscript.
==== NNEDI ====
NNEDI is a family of intra-field deinterlacers that can also be used for enlarging images by powers of two. When being used as a deinterlacer, it takes in a frame, throws away one field, and then interpolates the missing pixels using only the information from the kept field. There are so far three major generations of NNEDI.
NNEDI, the original version, works with YUY2 and YV12 input. NNEDI2 added RGB24 support and a special function nnedi2_rpow2 for upscaling. NNEDI3 extends NNEDI2 with a predictor neural network. Both the size of the network and the neighborhood it examines can be tuned for a speed/quality tradeoff: differences between neuron counts are usually small for a specific resize factor, but the performance gap between neuron counts grows as the image size is quadrupled. If you are only planning on doubling the resolution, you won't see massive differences between 16 and 256 neurons. There is still a noticeable difference between the highest and lowest options, but not orders of magnitude.
== References ==
== See also ==
libretro - implements many aforementioned algorithms as shaders
pixelscalers - C++ implementations of ScaleNx, hqNx, and superXBR algorithms in a stand-alone tool
ScaleNx in Python - pure Python module implementation of Scale2x, Scale3x, Scale2xSFX and Scale3xSFX, FIR-optimized. Main application for single and batch PNG and PNM rescaling also available. | Wikipedia/Pixel-art_scaling_algorithms |
Automated journalism, also known as algorithmic journalism or robot journalism, is a term that attempts to describe modern technological processes that have infiltrated the journalistic profession, such as news articles and videos generated by computer programs. There are four main fields of application for automated journalism, namely automated content production, data mining, news dissemination, and content optimization. Through artificial intelligence (AI) software, stories are produced automatically by computers rather than human reporters. These programs interpret, organize, and present data in human-readable ways. Typically, the process involves an algorithm that scans large amounts of provided data, selects from an assortment of pre-programmed article structures, orders key points, and inserts details such as names, places, amounts, rankings, statistics, and other figures. The output can also be customized to fit a certain voice, tone, or style.
Data science and AI companies such as Automated Insights, Narrative Science, United Robots and Monok develop and provide these algorithms to news outlets. As of 2016, only a few media organizations have used automated journalism. Early adopters include news providers such as the Associated Press, Forbes, ProPublica, and the Los Angeles Times.
Early implementations were mainly used for stories based on statistics and numerical figures. Common topics include sports recaps, weather, financial reports, real estate analysis, and earnings reviews. StatSheet, an online platform covering college basketball, runs entirely on an automated program. The Associated Press began using automation to cover 10,000 minor league baseball games annually, using a program from Automated Insights and statistics from MLB Advanced Media. Outside of sports, the Associated Press also uses automation to produce stories on corporate earnings. In 2006, Thomson Reuters announced their switch to automation to generate financial news stories on its online news platform. More famously, an algorithm called Quakebot published a story about a 2014 California earthquake on The Los Angeles Times website within three minutes after the shaking had stopped.
Automated journalism is sometimes seen as an opportunity to free journalists from routine reporting, providing them with more time for complex tasks. It also allows efficiency and cost-cutting, alleviating some financial burden that many news organizations face. However, automated journalism is also perceived as a threat to the authorship and quality of news and a threat to the livelihoods of human journalists.
== Benefits ==
=== Speed ===
Robot reporters are built to produce large quantities of information at quicker speeds. The Associated Press announced that their use of automation has increased the volume of earnings reports from customers by more than ten times. With software from Automated Insights and data from other companies, they can produce 150 to 300-word articles in the same time it takes journalists to crunch numbers and prepare information. By automating routine stories and tasks, journalists are promised more time for complex jobs such as investigative reporting and in-depth analysis of events.
Francesco Marconi of the Associated Press stated that, through automation, the news agency freed up 20 percent of reporters’ time to focus on higher-impact projects.
=== Cost ===
Automated journalism is cheaper because more content can be produced within less time. It also lowers labour costs for news organizations. Reduced human input means less expenses on wages or salaries, paid leaves, vacations, and employment insurance. Automation serves as a cost-cutting tool for news outlets struggling with tight budgets but still wish to maintain the scope and quality of their coverage.
== Criticisms ==
=== Authorship ===
In an automated story, there is often confusion about who should be credited as the author. Several participants of a study on algorithmic authorship attributed the credit to the programmer; others perceived the news organization as the author, emphasizing the collaborative nature of the work. There is also no way for the reader to verify whether an article was written by a robot or human, which raises issues of transparency although such issues also arise with respect to authorship attribution between human authors too.
=== Credibility and quality ===
Concerns about the perceived credibility of automated news are similar to concerns about the perceived credibility of news in general. Critics doubt if algorithms are "fair and accurate, free from subjectivity, error, or attempted influence." Again, these issues of fairness, accuracy, subjectivity, error, and attempts at influence or propaganda have also been present in articles written by humans for thousands of years. A common criticism is that machines do not replace human capabilities such as creativity, humour, and critical thinking. However, as the technology evolves, the aim is to mimic human characteristics. When the UK's Guardian newspaper used an AI to write an entire article in September 2020, commentators pointed out that the AI still relied on human editorial content. Austin Tanney, the head of AI at Kainos, said: "The Guardian got three or four different articles and spliced them together. They also gave it the opening paragraph. It doesn’t belittle what it is. It was written by AI, but there was human editorial on that."
The largest single study of readers' evaluations of news articles produced with and without the help of automation exposed 3,135 online news consumers to 24 articles. It found articles that had been automated were significantly less comprehensible, in part because they were considered to contain too many numbers. However, the automated articles were evaluated equally on other criteria including tone, narrative flow, and narrative structure.
Beyond human evaluation, there are now numerous algorithmic methods to identify machine written articles although some articles may still contain errors that are obvious for a human to identify, they can at times score better with these automatic identifiers than human-written articles.
=== Employment ===
Among the concerns about automation is the loss of employment for journalists as publishers switch to using AIs. The use of automation has become a near necessity in newsrooms nowadays, in order to keep up with the ever-increasing demand for news stories, which in turn has affected the very nature of the journalistic profession. In 2014, an annual census from The American Society of News Editors announced that the newspaper industry lost 3,800 full-time, professional editors. Falling by more than 10% within a year, this is the biggest drop since the industry cut over 10,000 jobs in 2007 and 2008.
=== Dependence on platform and technology companies ===
There has been a significant amount of recent scholarship on the relationship between platform companies, such as Google and Facebook, and the news industry with researchers examining the impact of these platforms on the distribution and monetization of news content, as well as the implications for journalism and democracy. Some scholars have extended this line of thinking to automated journalism and the use of AI in the news. A 2022 paper by the Oxford University academic Felix Simon, for example, argues that the concentration of AI tools and infrastructure in the hands of a few major technology companies, such as Google, Microsoft, and Amazon Web Services, is a significant issue for the news industry, as it risks shifting more control to these companies and increasing the industry's dependence on them. Simon argues that this could lead to vendor lock-in, where news organizations become structurally dependent on AI provided by these companies and are unable to switch to another vendor without incurring significant costs. The companies also possess artefactual and contractual control over their AI infrastructure and services, which could expose news organizations to the risk of unforeseen changes or the stopping of their AI solutions entirely. Additionally, the author argues the reliance on these companies for AI can make it more difficult for news organizations to understand the decisions or predictions made by the systems and can limit their ability to protect sources or proprietary business information.
== Opinions on automated journalism ==
A 2017 Nieman Reports article by Nicola Bruno discusses whether or not machines will replace journalists and addresses concerns around the concept of automated journalism practices. Ultimately, Bruno came to the conclusion that AI would assist journalists, not replace them. "No automated software or amateur reporter will ever replace a good journalist", she said.
In 2020, however, Microsoft did just that, replacing 27 journalists with AI. One staff member was quoted by The Guardian as saying: “I spend all my time reading about how automation and AI is going to take all our jobs, and here I am – AI has taken my job.” The journalist went on to say that replacing humans with software was risky, as existing staff were careful to stick to “very strict editorial guidelines” which ensured that users were not presented with violent or inappropriate content when opening their browser, for example.
== List of implementations ==
In May 2020, Microsoft announced that a number of its MSN contract journalists would be replaced by robot journalism.
On 8 September 2020, The Guardian published an article entirely written by the neural network GPT-3, although the published fragments were manually picked by a human editor.
Since 2014, Associated Press has been publishing quarterly financial stories with help from Automated Insights.
Reuters along with their Tracer tool employs AI in news reporting.
Agentic Tribune produces all of its news articles automatically using AI.
== References == | Wikipedia/Computer_generated_journalism |
An integrated development environment (IDE) is a software application that provides comprehensive facilities for software development. An IDE normally consists of at least a source-code editor, build automation tools, and a debugger. Some IDEs, such as IntelliJ IDEA, Eclipse and Lazarus contain the necessary compiler, interpreter or both; others, such as SharpDevelop and NetBeans, do not.
The boundary between an IDE and other parts of the broader software development environment is not well-defined; sometimes a version control system or various tools to simplify the construction of a graphical user interface (GUI) are integrated. Many modern IDEs also have a class browser, an object browser, and a class hierarchy diagram for use in object-oriented software development.
== Overview ==
Integrated development environments are designed to maximize programmer productivity by providing tight-knit components with similar user interfaces. IDEs present a single program in which all development is done. This program typically provides many features for authoring, modifying, compiling, deploying and debugging software. This contrasts with software development using unrelated tools, such as vi, GDB, GNU Compiler Collection, or make.
One aim of the IDE is to reduce the configuration necessary to piece together multiple development utilities. Instead, it provides the same set of capabilities as one cohesive unit. Reducing setup time can increase developer productivity, especially in cases where learning to use the IDE is faster than manually integrating and learning all of the individual tools. Tighter integration of all development tasks has the potential to improve overall productivity beyond just helping with setup tasks. For example, code can be continuously parsed while it is being edited, providing instant feedback when syntax errors are introduced, thus allowing developers to debug code much faster and more easily with an IDE.
Some IDEs are dedicated to a specific programming language, allowing a feature set that most closely matches the programming paradigms of the language. However, there are many multiple-language IDEs.
While most modern IDEs are graphical, text-based IDEs such as Turbo Pascal were in popular use before the availability of windowing systems like Microsoft Windows and the X Window System (X11). They commonly use function keys or hotkeys to execute frequently used commands or macros.
== History ==
IDEs initially became possible when developing via a console or terminal. Early systems could not support one, since programs were submitted to a compiler or assembler via punched cards, paper tape, etc. Dartmouth BASIC was the first language to be created with an IDE (and was also the first to be designed for use while sitting in front of a console or terminal). Its IDE (part of the Dartmouth Time-Sharing System) was command-based, and therefore did not look much like the menu-driven, graphical IDEs popular after the advent of the graphical user interface. However it integrated editing, file management, compilation, debugging and execution in a manner consistent with a modern IDE.
Maestro I is a product from Softlab Munich and was the world's first integrated development environment for software. Maestro I was installed for 22,000 programmers worldwide. Until 1989, 6,000 installations existed in the Federal Republic of Germany. Maestro was arguably the world leader in this field during the 1970s and 1980s. Today, one of the last Maestro I systems can be found in the Museum of Information Technology at Arlington in Texas.
One of the first IDEs with a plug-in concept was Softbench. In 1995 Computerwoche commented that the use of an IDE was not well received by developers since it would fence in their creativity.
As of August 2023, the most commonly searched for IDEs on Google Search were Visual Studio, Visual Studio Code, and Eclipse.
== Topics ==
=== Syntax highlighting ===
The IDE editor usually provides syntax highlighting: it can show structures, language keywords, and syntax errors in visually distinct colors and font effects.
=== Code completion ===
Code completion is an important IDE feature, intended to speed up programming. Modern IDEs even have intelligent code completion.
==== Intelligent code completion ====
=== Refactoring ===
Advanced IDEs provide support for automated refactoring.
=== Version control ===
An IDE is expected to provide integrated version control, in order to interact with source repositories.
=== Debugging ===
IDEs are also used for debugging, using an integrated debugger, with support for setting breakpoints in the editor, visual rendering of steps, etc.
=== Code search ===
IDEs may provide support for code search. Code search has two different meanings. First, it means searching for class and function declarations, usages, variable and field read/write, etc. IDEs can use different kinds of user interface for code search, for example form-based widgets and natural-language based interfaces.
Second, it means searching for a concrete implementation of some specified functionality.
=== Visual programming ===
Visual programming is a usage scenario in which an IDE is generally required. Visual Basic allows users to create new applications by moving programming building blocks or code nodes to create flowcharts or structure diagrams that are then compiled or interpreted. These flowcharts often are based on the Unified Modeling Language.
This interface has been popularized with the Lego Mindstorms system and is being actively pursued by a number of companies wishing to capitalize on the power of custom browsers like those found at Mozilla. KTechlab supports flowcode and is a popular open-source IDE and simulator for developing software for microcontrollers. Visual programming is also responsible for the power of distributed programming (cf. LabVIEW and EICASLAB software). An early visual programming system, Max, was modeled after an analog synthesizer design and has been used to develop real-time music performance software since the 1980s. Another early example was Prograph, a dataflow-based system originally developed for the Macintosh. The graphical programming environment "Grape" is used to program qfix robot kits.
This approach is also used in specialist software such as Openlab, where the end-users want the flexibility of a full programming language, without the traditional learning curve associated with one.
=== Language support ===
Some IDEs support multiple languages, such as GNU Emacs, IntelliJ IDEA, Eclipse, MyEclipse, NetBeans, MonoDevelop, JDoodle or PlayCode.
Support for alternative languages is often provided by plugins, allowing them to be installed on the same IDE at the same time. For example, Flycheck is a modern on-the-fly syntax checking extension for GNU Emacs 24 with support for 39 languages. Another example is JDoodle, an online cloud-based IDE that supports 88 languages.[1] Eclipse and NetBeans have plugins for C/C++, Ada, GNAT (for example AdaGIDE), Perl, Python, Ruby, and PHP, which are selected automatically based on file extension, environment, or project settings.
=== Implementation ===
IDEs can be implemented in various languages, for example:
GNU Emacs using Emacs Lisp and C;
IntelliJ IDEA, Eclipse and NetBeans, using Java;
MonoDevelop and Rider using C#.
=== Attitudes across different computing platforms ===
Unix programmers can combine command-line POSIX tools into a complete development environment, capable of developing large programs such as the Linux kernel and its environment. In this sense, the entire Unix system functions as an IDE. The free software GNU toolchain (including GNU Compiler Collection (GCC), GNU Debugger (GDB), and GNU make) is available on many platforms, including Windows. The pervasive Unix philosophy of "everything is a text stream" enables developers who favor command-line oriented tools to use editors with support for many of the standard Unix and GNU build tools, building an IDE with programs like Emacs or Vim. Data Display Debugger is intended to be an advanced graphical front-end for many text-based debugger standard tools. Some programmers prefer managing makefiles and their derivatives to the similar code building tools included in a full IDE. For example, most contributors to the PostgreSQL database use make and GDB directly to develop new features. Even when building PostgreSQL for Microsoft Windows using Visual C++, Perl scripts are used as a replacement for make rather than relying on any IDE features. Some Linux IDEs such as Geany attempt to provide a graphical front end to traditional build operations.
On the various Microsoft Windows platforms, command-line tools for development are seldom used. Accordingly, there are many commercial and non-commercial products. However, each has a different design commonly creating incompatibilities. Most major compiler vendors for Windows still provide free copies of their command-line tools, including Microsoft (Visual C++, Platform SDK, .NET Framework SDK, nmake utility).
IDEs have always been popular on the Apple Macintosh's classic Mac OS and macOS, dating back to Macintosh Programmer's Workshop, Turbo Pascal, THINK Pascal and THINK C environments of the mid-1980s. Currently macOS programmers can choose between native IDEs like Xcode and open-source tools such as Eclipse and Netbeans. ActiveState Komodo is a proprietary multilanguage IDE supported on macOS.
== Online ==
An online integrated development environment, also known as a web IDE or cloud IDE, is a browser based IDE that allows for software development or web development. An online IDE can be accessed from a web browser, allowing for a portable work environment. An online IDE does not usually contain all of the same features as a traditional or desktop IDE although all of the basic IDE features, such as syntax highlighting, are typically present.
A mobile-based IDE is a software application that provides a comprehensive suite of tools for software development on mobile platforms. Unlike traditional desktop IDEs, mobile-based IDEs are designed to run on smartphones and tablets, allowing developers to write, debug, and deploy code directly from their mobile devices.
== See also ==
== References == | Wikipedia/Integrated_Development_Environment |
In biology, a sequence motif is a nucleotide or amino-acid sequence pattern that is widespread and usually assumed to be related to biological function of the macromolecule. For example, an N-glycosylation site motif can be defined as Asn, followed by anything but Pro, followed by either Ser or Thr, followed by anything but Pro residue.
== Overview ==
When a sequence motif appears in the exon of a gene, it may encode the "structural motif" of a protein; that is a stereotypical element of the overall structure of the protein. Nevertheless, motifs need not be associated with a distinctive secondary structure. "Noncoding" sequences are not translated into proteins, and nucleic acids with such motifs need not deviate from the typical shape (e.g. the "B-form" DNA double helix).
Outside of gene exons, there exist regulatory sequence motifs and motifs within the "junk", such as satellite DNA. Some of these are believed to affect the shape of nucleic acids (see for example RNA self-splicing), but this is only sometimes the case. For example, many DNA binding proteins that have affinity for specific DNA binding sites bind DNA in only its double-helical form. They are able to recognize motifs through contact with the double helix's major or minor groove.
Short coding motifs, which appear to lack secondary structure, include those that label proteins for delivery to particular parts of a cell, or mark them for phosphorylation.
Within a sequence or database of sequences, researchers search and find motifs using computer-based techniques of sequence analysis, such as BLAST. Such techniques belong to the discipline of bioinformatics. See also consensus sequence.
== Motif Representation ==
Consider the N-glycosylation site motif mentioned above:
Asn, followed by anything but Pro, followed by either Ser or Thr, followed by anything but Pro
This pattern may be written as N{P}[ST]{P} where N = Asn, P = Pro, S = Ser, T = Thr; {X} means any amino acid except X; and [XY] means either X or Y.
The notation [XY] does not give any indication of the probability of X or Y occurring in the pattern. Observed probabilities can be graphically represented using sequence logos. Sometimes patterns are defined in terms of a probabilistic model such as a hidden Markov model.
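A pattern in this notation maps directly onto a standard regular expression. As an illustrative sketch (the helper function name and the example protein string are hypothetical, not part of PROSITE), the N-glycosylation pattern N{P}[ST]{P} can be matched in Python as follows:

```python
import re

# N{P}[ST]{P}: Asn, then anything but Pro, then Ser or Thr, then anything but Pro.
# In regex syntax, the exclusion {X} becomes [^X]; the alternative [XY] is unchanged.
N_GLYC = re.compile(r"N[^P][ST][^P]")

def find_n_glycosylation_sites(protein: str) -> list[int]:
    """Return 0-based start positions of candidate sites. Overlapping matches
    are found with a zero-width lookahead so nearby sites are not missed."""
    return [m.start() for m in re.finditer(r"(?=N[^P][ST][^P])", protein)]

print(bool(N_GLYC.search("MNKTA")))                 # True: N-K-T-A matches
print(find_n_glycosylation_sites("MNKTANPSWNVSA"))  # [1, 9]
```

Note that the site starting at position 5 of the second string (N-P-S-W) is correctly rejected, because the residue after Asn is Pro.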
=== Motifs and consensus sequences ===
The notation [XYZ] means X or Y or Z, but does not indicate the likelihood of any particular match. For this reason, two or more patterns are often associated with a single motif: the defining pattern, and various typical patterns.
For example, the defining sequence for the IQ motif may be taken to be:
[FILV]Qxxx[RK]Gxxx[RK]xx[FILVWY]
where x signifies any amino acid, and the square brackets indicate an alternative (see below for further details about notation).
Usually, however, the first letter is I, and both [RK] choices resolve to R. Since the last choice is so wide, the pattern IQxxxRGxxxR is sometimes equated with the IQ motif itself, but a more accurate description would be a consensus sequence for the IQ motif.
=== Pattern description notations ===
Several notations for describing motifs are in use but most of them are variants of standard notations for regular expressions and use these conventions:
there is an alphabet of single characters, each denoting a specific amino acid or a set of amino acids;
a string of characters drawn from the alphabet denotes a sequence of the corresponding amino acids;
any string of characters drawn from the alphabet enclosed in square brackets matches any one of the corresponding amino acids; e.g. [abc] matches any of the amino acids represented by a or b or c.
The fundamental idea behind all these notations is the matching principle, which assigns a meaning to a sequence of elements of the pattern notation:
a sequence of elements of the pattern notation matches a sequence of amino acids if and only if the latter sequence can be partitioned into subsequences in such a way that each pattern element matches the corresponding subsequence in turn.
Thus the pattern [AB] [CDE] F matches the six amino acid sequences corresponding to ACF, ADF, AEF, BCF, BDF, and BEF.
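The matching principle for this simple bracket form can be checked mechanically; a minimal sketch (the function name and input encoding are illustrative) that expands such a pattern into every sequence it matches:

```python
from itertools import product

def expand(pattern: list[str]) -> list[str]:
    """Expand a pattern given as a list of elements, where each element is a
    string of alternative amino acids, into all sequences it matches."""
    return ["".join(p) for p in product(*pattern)]

# [AB] [CDE] F expands to the six sequences named in the text.
print(expand(["AB", "CDE", "F"]))
# ['ACF', 'ADF', 'AEF', 'BCF', 'BDF', 'BEF']
```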
Different pattern description notations have other ways of forming pattern elements. One of these notations is the PROSITE notation, described in the following subsection.
==== PROSITE pattern notation ====
The PROSITE notation uses the IUPAC one-letter codes and conforms to the above description with the exception that a concatenation symbol, '-', is used between pattern elements, but it is often dropped between letters of the pattern alphabet.
PROSITE allows the following pattern elements in addition to those described previously:
The lower case letter 'x' can be used as a pattern element to denote any amino acid.
A string of characters drawn from the alphabet and enclosed in braces (curly brackets) denotes any amino acid except for those in the string. For example, {ST} denotes any amino acid other than S or T.
If a pattern is restricted to the N-terminal of a sequence, the pattern is prefixed with '<'.
If a pattern is restricted to the C-terminal of a sequence, the pattern is suffixed with '>'.
The character '>' can also occur inside a terminating square bracket pattern, so that S[T>] matches both "ST" and "S>".
If e is a pattern element, and m and n are two decimal integers with m <= n, then:
e(m) is equivalent to the repetition of e exactly m times;
e(m,n) is equivalent to the repetition of e exactly k times for any integer k satisfying: m <= k <= n.
Some examples:
x(3) is equivalent to x-x-x.
x(2,4) matches any sequence that matches x-x or x-x-x or x-x-x-x.
The signature of the C2H2-type zinc finger domain is:
C-x(2,4)-C-x(3)-[LIVMFYWC]-x(8)-H-x(3,5)-H
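The PROSITE elements described above translate mechanically into regular-expression syntax. The converter below is an illustrative sketch covering only the elements listed in this section (letters, x, square brackets, braces, repetitions, and '-' concatenation), not a complete PROSITE parser; in particular the '<' and '>' anchors are omitted:

```python
import re

def prosite_to_regex(pattern: str) -> str:
    """Convert a subset of PROSITE pattern notation to a Python regex.
    x -> '.', {XY} -> '[^XY]', [XY] stays a character class,
    e(m) -> e{m}, e(m,n) -> e{m,n}; '-' separates pattern elements."""
    out = []
    for element in pattern.split("-"):
        # Split off an optional repetition suffix (m) or (m,n).
        m = re.fullmatch(r"(.+?)\((\d+)(?:,(\d+))?\)", element)
        rep = ""
        if m:
            element, lo, hi = m.group(1), m.group(2), m.group(3)
            rep = f"{{{lo}}}" if hi is None else f"{{{lo},{hi}}}"
        if element == "x":
            out.append("." + rep)
        elif element.startswith("{"):
            out.append("[^" + element[1:-1] + "]" + rep)
        else:  # plain letters and [..] classes pass through unchanged
            out.append(element + rep)
    return "".join(out)

# The C2H2 zinc finger signature from the text:
sig = "C-x(2,4)-C-x(3)-[LIVMFYWC]-x(8)-H-x(3,5)-H"
print(prosite_to_regex(sig))
# C.{2,4}C.{3}[LIVMFYWC].{8}H.{3,5}H
```

The same converter handles the earlier N-glycosylation pattern: `prosite_to_regex("N-{P}-[ST]-{P}")` yields `N[^P][ST][^P]`.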
==== Matrices ====
A matrix of numbers containing scores for each residue or nucleotide at each position of a fixed-length motif. There are two types of weight matrices.
A position frequency matrix (PFM) records the position-dependent frequency of each residue or nucleotide. PFMs can be experimentally determined from SELEX experiments or computationally discovered by tools such as MEME using hidden Markov models.
A position weight matrix (PWM) contains log odds weights for computing a match score. A cutoff is needed to specify whether an input sequence matches the motif or not. PWMs are calculated from PFMs. PWMs are also known as PSSMs.
An example of a PFM from the TRANSFAC database for the transcription factor AP-1:
The first column specifies the position; the next four columns give the number of occurrences of A, C, G, and T, respectively, at that position; and the last column gives the IUPAC notation for that position.
Note that the sums of occurrences for A, C, G, and T for each row should be equal because the PFM is derived from aggregating several consensus sequences.
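The relationship between a PFM and a PWM can be sketched as follows. The counts, the uniform background probability, and the pseudocount (added so that a zero count does not produce a logarithm of zero) are all illustrative assumptions; real tools differ in these details:

```python
import math

# Hypothetical position frequency matrix for a length-3 DNA motif:
# one dict of counts per position, over the alphabet A, C, G, T.
pfm = [
    {"A": 8, "C": 0, "G": 1, "T": 1},
    {"A": 0, "C": 9, "G": 1, "T": 0},
    {"A": 1, "C": 0, "G": 8, "T": 1},
]

def pfm_to_pwm(pfm, background=0.25, pseudocount=1.0):
    """Convert counts to log-odds weights: log2(p(base at position)/background)."""
    pwm = []
    for counts in pfm:
        total = sum(counts.values()) + 4 * pseudocount
        pwm.append({b: math.log2((counts[b] + pseudocount) / total / background)
                    for b in "ACGT"})
    return pwm

def score(pwm, site):
    """Score a candidate site by summing the per-position weights."""
    return sum(col[base] for col, base in zip(pwm, site))

pwm = pfm_to_pwm(pfm)
# The consensus-like site scores higher than a site the PFM never observed:
print(score(pwm, "ACG") > score(pwm, "TTT"))  # True
```

A cutoff on this score, as noted above, decides whether a given window of an input sequence is reported as a match.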
== Motif Discovery ==
=== Overview ===
The sequence motif discovery process has been well-developed since the 1990s. In particular, most of the existing motif discovery research focuses on DNA motifs. With the advances in high-throughput sequencing, such motif discovery problems are challenged by both the sequence pattern degeneracy issues and the data-intensive computational scalability issues.
=== Process of discovery ===
Motif discovery happens in three major phases. In the pre-processing stage, sequences are prepared through assembly and cleaning steps: assembly involves selecting sequences that contain the desired motif in large quantities and removing unwanted sequences using clustering, while cleaning ensures the removal of any confounding elements. In the discovery stage, sequences are represented using consensus strings or position-specific weight matrices (PWMs); an objective function is then chosen and a suitable search algorithm is applied to uncover the motifs. Finally, the post-processing stage evaluates the discovered motifs.
==== De novo motif discovery ====
There are software programs which, given multiple input sequences, attempt to identify one or more candidate motifs. One example is the Multiple EM for Motif Elicitation (MEME) algorithm, which generates statistical information for each candidate. There are more than 100 publications detailing motif discovery algorithms; Weirauch et al. evaluated many related algorithms in a 2013 benchmark. The planted motif search is another motif discovery method that is based on combinatorial approach.
==== Phylogenetic motif discovery ====
Motifs have also been discovered by taking a phylogenetic approach and studying similar genes in different species. For example, by aligning the amino acid sequences specified by the GCM (glial cells missing) gene in man, mouse and D. melanogaster, Akiyama and others discovered a pattern which they called the GCM motif in 1996. It spans about 150 amino acid residues, and begins as follows:
WDIND*.*P..*...D.F.*W***.**.IYS**...A.*H*S*WAMRNTNNHN
Here each . signifies a single amino acid or a gap, and each * indicates one member of a closely related family of amino acids. The authors were able to show that the motif has DNA binding activity.
A similar approach is commonly used by modern protein domain databases such as Pfam: human curators select a pool of sequences known to be related and use computer programs to align them and produce the motif profile (Pfam uses HMMs), which can then be used to identify other related proteins. A phylogenetic approach can also be used to enhance the de novo MEME algorithm, with PhyloGibbs being an example.
==== De novo motif pair discovery ====
In 2017, MotifHyades was developed as a motif discovery tool that can be directly applied to paired sequences.
==== De novo motif recognition from protein ====
In 2018, a Markov random field approach was proposed to infer DNA motifs from the DNA-binding domains of proteins.
=== Motif discovery algorithms ===
Motif discovery algorithms use diverse strategies to uncover patterns in DNA sequences. By integrating enumerative, probabilistic, and nature-inspired approaches, they adapt to different problem settings, and combining multiple methods has proved effective in improving identification accuracy.
Enumerative Approach:
In the enumerative approach, algorithms systematically generate and evaluate candidate motifs. Simple word enumeration techniques, such as YMF and DREME, scan the sequence exhaustively in search of short motifs. Clustering-based methods such as CisFinder employ nucleotide substitution matrices for motif clustering, effectively mitigating redundancy. Tree-based methods like Weeder and FMotif exploit tree structures, while graph-theoretic methods (e.g., WINNOWER) employ graph representations.
Probabilistic Approach:
The probabilistic approach relies on probability models to discern motifs within sequences. MEME, a deterministic example, employs expectation-maximization to optimize position weight matrices (PWMs) and uncover conserved regions in unaligned DNA sequences. Stochastic methods such as Gibbs sampling instead initialize motif discovery with random motif position assignments and iteratively refine the predictions. This probabilistic framework captures the uncertainty inherent in motif discovery.
Advanced Approach:
More advanced motif discovery methods build on Bayesian modeling. LOGOS and BaMM, for example, combine Bayesian approaches and Markov models for motif identification. Bayesian clustering methods strengthen the probabilistic foundation, providing a holistic framework for pattern recognition in DNA sequences.
Nature-Inspired and Heuristic Algorithms:
A further category of algorithms draws inspiration from biology. Genetic algorithms (GA), exemplified by FMGA and MDGA, navigate the motif search using genetic operators and specialized strategies. Swarm-intelligence methods, including particle swarm optimization (PSO), artificial bee colony (ABC), and cuckoo search (CS) algorithms, featured in GAEM, GARP, and MACS, explore the search space using mechanisms such as pheromone-based exploration. Mirroring nature's adaptability and cooperative dynamics, these algorithms, and hybrid approaches that combine them, have proved adaptable tools for motif identification.
== Motif Cases ==
=== Three-dimensional chain codes ===
The E. coli lactose operon repressor LacI (PDB: 1lcc chain A) and E. coli catabolite gene activator (PDB: 3gap chain A) both have a helix-turn-helix motif, but their amino acid sequences do not show much similarity, as shown in the table below. In 1997, Matsuda et al. devised a code they called the "three-dimensional chain code" for representing the protein structure as a string of letters. This encoding scheme reveals the similarity between the proteins much more clearly than the amino acid sequence (example from article): The code encodes the torsion angles between alpha-carbons of the protein backbone. "W" always corresponds to an alpha helix.
== See also ==
Biomolecular structure
Mammalian Motif Finder
MochiView
Multiple EM for Motif Elicitation
Nucleic acid sequence
Protein primary structure
Protein I-sites
Sequence logo
Sequence mining
Structural motif
Short linear motif
Conserved sequence
Protein domain
== References ==
=== Primary sources ===
== Further reading ==
=== Primary sources === | Wikipedia/DNA_motif |
Command and control (abbr. C2) is a "set of organizational and technical attributes and processes ... [that] employs human, physical, and information resources to solve problems and accomplish missions" to achieve the goals of an organization or enterprise, according to a 2015 definition by military scientists Marius Vassiliou, David S. Alberts, and Jonathan R. Agre. The term often refers to a military system.
Versions of the United States Army Field Manual 3-0 circulated circa 1999 define C2 in a military organization as the exercise of authority and direction by a properly designated commanding officer over assigned and attached forces in the accomplishment of a mission.
A 1988 NATO definition is that command and control is the exercise of authority and direction by a properly designated individual over assigned resources in the accomplishment of a common goal. An Australian Defence Force definition, similar to that of NATO, emphasises that C2 is the system empowering designated personnel to exercise lawful authority and direction over assigned forces for the accomplishment of missions and tasks. The Australian doctrine goes on to state: "The use of agreed terminology and definitions is fundamental to any C2 system and the development of joint doctrine and procedures. The definitions in the following paragraphs have some agreement internationally, although not every potential ally will use the terms with exactly the same meaning."
== Overview ==
=== US perspective ===
The US Department of Defense Dictionary of Military and Associated Terms defines command and control as: "The exercise of authority and direction by a properly designated commander over assigned and attached forces in the accomplishment of the mission. Also called C2. Source: JP 1".
The edition of the Dictionary "As Amended Through April 2010" elaborates, "Command and control functions are performed through an arrangement of personnel, equipment, communications, facilities, and procedures employed by a commander in planning, directing, coordinating, and controlling forces and operations in the accomplishment of the mission." However, this sentence is missing from the "command and control" entry for the edition "As Amended Through 15 August 2014."
Commanding officers are assisted in executing these tasks by specialized staff officers and enlisted personnel. These military staff are a group of officers and enlisted personnel that provides a bi-directional flow of information between a commanding officer and subordinate military units.
The purpose of a military staff is mainly that of providing accurate, timely information which by category represents information on which command decisions are based. The key application is that of decisions that effectively manage unit resources. While information flow toward the commander is a priority, information that is useful or contingent in nature is communicated to lower staffs and units.
=== Computer security industry ===
This term is also in common use within the computer security industry and in the context of cyberwarfare. Here the term refers to the influence an attacker has over a compromised computer system that they control. For example, a valid usage of the term is to say that attackers use "command and control infrastructure" to issue "command and control instructions" to their victims. Advanced analysis of command and control methodologies can be used to identify attackers, associate attacks, and disrupt ongoing malicious activity.
== Derivative terms ==
There is a plethora of derivative terms that emphasize various aspects, uses, and sub-domains of C2, accompanied by numerous associated abbreviations. For example, command and control is often abbreviated as C2 and sometimes as C&C.
"Command and control" have been coupled with:
Collaboration
Communication / communications
Computers / computing
Electronic warfare
Interoperability
Reconnaissance
Surveillance
Target acquisition
and others.
Some of the more common variations include:
AC2 - Aviation command & control
C2I – Command, control & intelligence
C2I – command, control & information (a less common usage)
R2C2I - rapid advanced manufacturing, command, control & intelligence [developed by SICDRONE]
C2IS – command and control information systems
C2ISR – C2I plus surveillance and reconnaissance
C2ISTAR – C2 plus ISTAR (intelligence, surveillance, target acquisition, and reconnaissance)
C3 – command, control & communication (human activity focus)
C3 – command, control & communications (technology focus)
C3 – consultation, command, and control [NATO]
C3I – 4 possibilities; the most common is command, control, communications and intelligence
C3ISTAR – C3 plus ISTAR
C3ISREW – C2ISR plus communications plus electronic warfare (technology focus)
C3MS - cyber command and control mission system
C3/SA - C3 plus situational awareness
C4, C4I, C4ISR, C4ISTAR, C4ISREW, C4ISTAREW – plus computers (technology focus) or computing (human activity focus)
C4I2 – command, control, communications, computers, intelligence, and interoperability
C5I – command, control, communications, computers, collaboration and intelligence
C5I – command, control, communications, computers, cyber and intelligence (US Army)
C6ISR – command, control, communications, computers, cyber-defense and combat systems and intelligence, surveillance, and reconnaissance
MDC2 - multi-domain command and control
NC2 − nuclear command and control
NC3 − nuclear command and control and communications
and others.
Command: The exercise of authority based upon certain knowledge to attain an objective.
Control: The process of verifying and correcting activity such that the objective or goal of command is accomplished.
Communication: Ability to exercise the necessary liaison to exercise effective command between tactical or strategic units to command.
Computers: The computer systems and compatibility of computer systems. Also includes data processing.
Intelligence: Includes collection as well as analysis and distribution of information.
== Command and control centers ==
A command and control center is typically a secure room or building in a government, military or prison facility that operates as the agency's dispatch center, surveillance monitoring center, coordination office and alarm monitoring center all in one. Command and control centers are operated by a government or municipal agency.
Various branches of the US military such as the US Coast Guard and Navy have command and control centers. They are also common in many large correctional facilities.
A command and control center that is used by a military unit in a deployed location is usually called a "command post". A warship has a combat information center for tactical control of the ship's resources, but commanding a fleet or joint operation requires additional space for commanders and staff plus C4I facilities provided on a flagship (e.g., aircraft carriers), sometimes a command ship or upgraded logistics ship such as USS Coronado.
== Command and control warfare ==
Command and control warfare encompasses all the military tactics that use communications technology. It can be abbreviated as C2W. An older name for these tactics is "signals warfare", derived from the name given to communications by the military. Newer names include information operations and information warfare.
The following techniques are combined:
Cyber operations
Electronic warfare (EW)
Military deception
Operations security (OPSEC)
Psychological operations (PSYOP)
Psychological warfare
These techniques are used together with the physical destruction of enemy communications facilities. The objective is to deny information to the enemy and so disrupt its command and control capabilities. At the same time precautions are taken to protect friendly command and control capabilities against retaliation.
In addition to targeting the enemy's command and control, information warfare can be directed to the enemy's politicians and other civilian communications.
== See also ==
US and other NATO specific:
Other
Military Institute of Telecommunications and Information Technologies
== References ==
=== Citations ===
=== Sources ===
== External links ==
Command and control definitions and procedures, UK College of Policing
International Command and Control Institute
Understanding Command and Control by D. S. Alberts and R. E. Hayes (2006) | Wikipedia/Command_and_control |
Deep learning is a subset of machine learning that focuses on utilizing multilayered neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and is centered around stacking artificial neurons into layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be either supervised, semi-supervised or unsupervised.
Some common deep learning network architectures include fully connected networks, deep belief networks, recurrent neural networks, convolutional neural networks, generative adversarial networks, transformers, and neural radiance fields. These architectures have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.
Early forms of neural networks were inspired by information processing and distributed communication nodes in biological systems, particularly the human brain. However, current neural networks do not intend to model the brain function of organisms, and are generally seen as low-quality models for that purpose.
== Overview ==
Most modern deep learning models are based on multi-layered neural networks such as convolutional neural networks and transformers, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines.
Fundamentally, deep learning refers to a class of machine learning algorithms in which a hierarchy of layers is used to transform input data into a progressively more abstract and composite representation. For example, in an image recognition model, the raw input may be an image (represented as a tensor of pixels). The first representational layer may attempt to identify basic shapes such as lines and circles, the second layer may compose and encode arrangements of edges, the third layer may encode a nose and eyes, and the fourth layer may recognize that the image contains a face.
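The layer-by-layer transformation described above can be sketched as a stack of matrix multiplications and nonlinearities. The following toy example (with random, untrained placeholder weights and an invented 8×8 "image") only illustrates how each layer re-represents its input in a smaller, more abstract space; real feature detectors are of course learned, not random.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Toy "image": a flattened 8x8 grid of pixel intensities.
image = rng.random(64)

# Each layer re-represents its input at a higher level of abstraction.
# These weights are random placeholders; in practice they are learned.
W1 = rng.standard_normal((32, 64)) * 0.1   # e.g. edge-like detectors
W2 = rng.standard_normal((16, 32)) * 0.1   # e.g. arrangements of edges
W3 = rng.standard_normal((4, 16)) * 0.1    # e.g. parts -> class scores

h1 = relu(W1 @ image)   # first representational layer
h2 = relu(W2 @ h1)      # second layer composes first-layer features
logits = W3 @ h2        # final layer scores candidate classes

print(h1.shape, h2.shape, logits.shape)
```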
Importantly, a deep learning process can learn which features to optimally place at which level on its own. Prior to deep learning, machine learning techniques often involved hand-crafted feature engineering to transform the data into a more suitable representation for a classification algorithm to operate on. In the deep learning approach, features are not hand-crafted and the model discovers useful feature representations from the data automatically. This does not eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.
The word "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For a feedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited. No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than two. CAP of depth two has been shown to be a universal approximator in the sense that it can emulate any function. Beyond that, more layers do not add to the function approximator ability of the network. Deep models (CAP > two) are able to extract better features than shallow models and hence, extra layers help in learning the features effectively.
Deep learning architectures can be constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features improve performance.
Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data is more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are deep belief networks.
The term Deep Learning was introduced to the machine learning community by Rina Dechter in 1986, and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons, although the history of the term's appearance is more complicated.
== Interpretations ==
Deep neural networks are generally interpreted in terms of the universal approximation theorem or probabilistic inference.
The classic universal approximation theorem concerns the capacity of feedforward neural networks with a single hidden layer of finite size to approximate continuous functions. In 1989, the first proof was published by George Cybenko for sigmoid activation functions and was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik. Recent work also showed that universal approximation also holds for non-bounded activation functions such as Kunihiko Fukushima's rectified linear unit.
The universal approximation theorem for deep neural networks concerns the capacity of networks with bounded width but the depth is allowed to grow. Lu et al. proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; if the width is smaller or equal to the input dimension, then a deep neural network is not a universal approximator.
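A minimal illustration of ReLU-network expressivity (not the Lu et al. construction itself): a network with just two ReLU hidden units and unit output weights represents the absolute-value function exactly, since |x| = max(0, x) + max(0, −x).

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# A width-2 ReLU "network" that represents |x| exactly:
# |x| = relu(x) + relu(-x)
def abs_net(x):
    hidden = np.stack([relu(x), relu(-x)])  # two hidden units
    return hidden.sum(axis=0)               # output weights are (1, 1)

xs = np.linspace(-3.0, 3.0, 101)
max_err = np.max(np.abs(abs_net(xs) - np.abs(xs)))
print(max_err)  # exactly zero: the representation is not approximate
```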
The probabilistic interpretation derives from the field of machine learning. It features inference, as well as the optimization concepts of training and testing, related to fitting and generalization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative distribution function. The probabilistic interpretation led to the introduction of dropout as regularizer in neural networks. The probabilistic interpretation was introduced by researchers including Hopfield, Widrow and Narendra and popularized in surveys such as the one by Bishop.
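Dropout, mentioned above as the regularizer motivated by this interpretation, is simple to state concretely. The sketch below implements the common "inverted" variant (a standard formulation, not tied to any particular paper's code): units are zeroed with probability p during training and survivors are rescaled so the expected activation is unchanged, while at test time the layer is the identity.

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, p_drop, train=True):
    """Inverted dropout: zero units with probability p_drop, rescale
    survivors so the expected activation is unchanged; identity at
    test time."""
    if not train:
        return activations
    keep = rng.random(activations.shape) >= p_drop
    return activations * keep / (1.0 - p_drop)

h = np.ones(10000)
h_train = dropout(h, p_drop=0.5)
# Roughly half the units are zeroed and survivors are scaled by 2,
# so the mean activation stays near 1.0 in expectation.
print(h_train.mean())
```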
== History ==
=== Before 1980 ===
There are two types of artificial neural network (ANN): the feedforward neural network (FNN) or multilayer perceptron (MLP), and the recurrent neural network (RNN). RNNs have cycles in their connectivity structure, while FNNs do not. In the 1920s, Wilhelm Lenz and Ernst Ising created the Ising model, which is essentially a non-learning RNN architecture consisting of neuron-like threshold elements. In 1972, Shun'ichi Amari made this architecture adaptive. His learning RNN was republished by John Hopfield in 1982. Other early recurrent neural networks were published by Kaoru Nakano in 1971. Already in 1948, Alan Turing produced work on "Intelligent Machinery" that was not published in his lifetime, containing "ideas related to artificial evolution and learning RNNs".
Frank Rosenblatt (1958) proposed the perceptron, an MLP with 3 layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer. He later published a 1962 book that also introduced variants and computer experiments, including a version with four-layer perceptrons "with adaptive preterminal networks" in which the last two layers have learned weights (here he credits H. D. Block and B. W. Knight). The book cites an earlier network by R. D. Joseph (1960) "functionally equivalent to a variation of" this four-layer system (the book mentions Joseph over 30 times). Joseph's network could therefore be considered an early adaptive multilayer perceptron with learning hidden units, but its learning algorithm was not functional, and it fell into oblivion.
The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in 1965. They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates".
The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. Subsequent developments in hardware and hyperparameter tuning have made end-to-end stochastic gradient descent the currently dominant training technique.
In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning.
Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers began with the Neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation.
Backpropagation is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673 to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. The modern form of backpropagation was first published in Seppo Linnainmaa's master thesis (1970). G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work.
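The chain-rule nature of backpropagation can be made concrete with a hand-worked toy network. The sketch below (an illustrative example, not any historical implementation) differentiates a two-weight network by hand and confirms the result against a finite-difference approximation.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny 2-layer network: y = w2 * tanh(w1 * x); loss = (y - t)^2.
# Backpropagation is just the chain rule applied layer by layer.
x, t = 0.5, 1.0
w1, w2 = rng.standard_normal(2)

def loss(w1, w2):
    return (w2 * np.tanh(w1 * x) - t) ** 2

# Forward pass, keeping intermediates for the backward pass.
h = np.tanh(w1 * x)
y = w2 * h
# Backward pass (chain rule, applied output-to-input):
dL_dy = 2.0 * (y - t)
dL_dw2 = dL_dy * h
dL_dh = dL_dy * w2
dL_dw1 = dL_dh * (1.0 - h ** 2) * x   # tanh'(z) = 1 - tanh(z)^2

# Check against a numerical (finite-difference) gradient.
eps = 1e-6
num_dw1 = (loss(w1 + eps, w2) - loss(w1 - eps, w2)) / (2 * eps)
num_dw2 = (loss(w1, w2 + eps) - loss(w1, w2 - eps)) / (2 * eps)
print(abs(dL_dw1 - num_dw1), abs(dL_dw2 - num_dw2))
```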
=== 1980s-2000s ===
The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.
In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images.
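The convolutions and weight sharing used in these early CNNs can be shown in a few lines. The sketch below is a deliberately naive "valid"-mode 2-D convolution (strictly, cross-correlation, as in most CNN libraries), applied with an invented 1×2 vertical-edge kernel; it is not code from LeNet or any historical system.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D cross-correlation: one small kernel slides
    over the whole image, so its weights are shared across every
    spatial position."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector responds where intensity changes
# left-to-right.
image = np.zeros((5, 5))
image[:, 3:] = 1.0                      # dark left half, bright right
edge_kernel = np.array([[-1.0, 1.0]])   # 1x2 kernel
response = conv2d(image, edge_kernel)
print(response)
```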
Recurrent neural networks (RNN) were further developed in the 1980s. Recurrence is used for sequence processing, and when a recurrent network is unrolled, it mathematically resembles a deep feedforward layer. Consequently, they have similar properties and issues, and their developments had mutual influences. In RNN, two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study problems in cognitive psychology.
In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths. To overcome this problem, in 1991, Jürgen Schmidhuber proposed a hierarchy of RNNs pre-trained one level at a time by self-supervised learning where each RNN tries to predict its own next input, which is the next unexpected input of the RNN below. This "neural history compressor" uses predictive coding to learn internal representations at multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can be collapsed into a single RNN, by distilling a higher level chunker network into a lower level automatizer network. In 1993, a neural history compressor solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time. The "P" in ChatGPT refers to such pre-training.
Sepp Hochreiter's diploma thesis (1991) implemented the neural history compressor, and identified and analyzed the vanishing gradient problem. Hochreiter proposed recurrent residual connections to solve the vanishing gradient problem. This led to the long short-term memory (LSTM), published in 1995. LSTM can learn "very deep learning" tasks with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. That LSTM was not yet the modern architecture, which required a "forget gate", introduced in 1999, which became the standard RNN architecture.
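The gating structure described above can be sketched as a single cell-update step. The code below is a simplified textbook-style LSTM cell with input, forget, and output gates and no peephole connections or biases per gate beyond a shared vector; the weight packing and initialization are illustrative choices, not the original 1995/1999 formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of an LSTM cell. W, U, b pack all four gate
    transforms (4 * hidden rows)."""
    z = W @ x + U @ h_prev + b
    n = h_prev.size
    i = sigmoid(z[0 * n:1 * n])      # input gate
    f = sigmoid(z[1 * n:2 * n])      # forget gate (standard since 1999)
    o = sigmoid(z[2 * n:3 * n])      # output gate
    g = np.tanh(z[3 * n:4 * n])      # candidate cell update
    c = f * c_prev + i * g           # additive cell state lets
    h = o * np.tanh(c)               # gradients survive many steps
    return h, c

n_in, n_hid = 3, 4
W = rng.standard_normal((4 * n_hid, n_in)) * 0.1
U = rng.standard_normal((4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(5):                   # run the cell over a short sequence
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U, b)
print(h.shape, c.shape)
```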
In 1991, Jürgen Schmidhuber also published adversarial neural networks that contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. This was called "artificial curiosity". In 2014, this principle was used in generative adversarial networks (GANs).
During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, and others, including the Boltzmann machine, restricted Boltzmann machine, Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models, but were more computationally expensive than backpropagation. The Boltzmann machine learning algorithm, published in 1985, was briefly popular before being eclipsed by the backpropagation algorithm in 1986. A 1988 network became state of the art in protein structure prediction, an early application of deep learning to bioinformatics.
Both shallow and deep learning (e.g., recurrent nets) of ANNs for speech recognition have been explored for many years. These methods never outperformed non-uniform internal-handcrafting Gaussian mixture model/Hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively. Key difficulties have been analyzed, including gradient diminishing and weak temporal correlation structure in neural predictive models. Additional difficulties were the lack of training data and limited computing power.
Most speech recognition researchers moved away from neural nets to pursue generative modeling. An exception was at SRI International in the late 1990s. Funded by the US government's NSA and DARPA, SRI researched in speech and speaker recognition. The speaker recognition team led by Larry Heck reported significant success with deep neural networks in speech processing in the 1998 NIST Speaker Recognition benchmark. It was deployed in the Nuance Verifier, representing the first major industrial application of deep learning.
The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of deep autoencoder on the "raw" spectrogram or linear filter-bank features in the late 1990s, showing its superiority over the Mel-Cepstral features that contain stages of fixed transformation from spectrograms. The raw features of speech, waveforms, later produced excellent larger-scale results.
=== 2000s ===
Neural networks entered a lull, and simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) became the preferred choices in the 1990s and 2000s, because of artificial neural networks' computational cost and a lack of understanding of how the brain wires its biological networks.
In 2003, LSTM became competitive with traditional speech recognizers on certain tasks. In 2006, Alex Graves, Santiago Fernández, Faustino Gomez, and Schmidhuber combined it with connectionist temporal classification (CTC) in stacks of LSTMs. In 2009, it became the first RNN to win a pattern recognition contest, in connected handwriting recognition.
In 2006, publications by Geoff Hinton, Ruslan Salakhutdinov, Osindero and Teh introduced deep belief networks, developed for generative modeling. They are trained by training one restricted Boltzmann machine, then freezing it and training another one on top of the first one, and so on, then optionally fine-tuning the stack with supervised backpropagation. They could model high-dimensional probability distributions, such as the distribution of MNIST images, but convergence was slow.
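The building block of this greedy stacking is the restricted Boltzmann machine trained by contrastive divergence. The sketch below shows one CD-1 update in a heavily simplified form: binary units, no bias terms, and a single repeated training pattern, so it illustrates the weight-update rule rather than Hinton et al.'s full procedure.

```python
import numpy as np

rng = np.random.default_rng(11)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, lr=0.1):
    """One contrastive-divergence (CD-1) update for a bias-free binary
    RBM: sample hidden units from the data, reconstruct the visible
    layer, recompute hidden probabilities, and nudge the weights
    toward the data statistics."""
    ph0 = sigmoid(v0 @ W)                       # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0    # sample hidden units
    pv1 = sigmoid(h0 @ W.T)                     # reconstruction
    ph1 = sigmoid(pv1 @ W)
    return W + lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))

n_vis, n_hid = 6, 3
W = rng.standard_normal((n_vis, n_hid)) * 0.01
pattern = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    W = cd1_step(pattern, W)
print(W.shape)
```

In a deep belief network, a trained RBM of this kind would be frozen and a second RBM trained on its hidden activations, repeating the process layer by layer.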
The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun. Industrial applications of deep learning to large-scale speech recognition started around 2010.
The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and the possibility that, given more capable hardware and large-scale data sets, deep neural nets might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than the then-state-of-the-art Gaussian mixture model (GMM)/Hidden Markov Model (HMM) and also than more-advanced generative model-based systems. The nature of the recognition errors produced by the two types of systems was characteristically different, offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems. Analysis around 2009–2010, contrasting the GMM (and other generative speech models) vs. DNN models, stimulated early industrial investment in deep learning for speech recognition. That analysis was done with comparable performance (less than 1.5% in error rate) between discriminative DNNs and generative models.
In 2010, researchers extended deep learning from TIMIT to large vocabulary speech recognition, by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees.
=== Deep learning revolution ===
The deep learning revolution started around CNN- and GPU-based computer vision.
Although CNNs trained by backpropagation had been around for decades and GPU implementations of NNs for years, including CNNs, faster implementations of CNNs on GPUs were needed to progress on computer vision. Later, as deep learning became widespread, specialized hardware and algorithm optimizations were developed specifically for deep learning.
A key advance for the deep learning revolution was hardware, especially GPUs. Some early work dated back to 2004. In 2009, Raina, Madhavan, and Andrew Ng reported a deep belief network with 100 million parameters trained on 30 Nvidia GeForce GTX 280 GPUs, an early demonstration of GPU-based deep learning. They reported up to 70 times faster training.
In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPU improved performance significantly.
In 2012, Andrew Ng and Jeff Dean created an FNN that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken from YouTube videos.
In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inceptionv3.
The success in image classification was then extended to the more challenging task of generating descriptions (captions) for images, often as a combination of CNNs and LSTMs.
In 2014, the state of the art was training "very deep neural networks" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the Highway Network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net.
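The core idea of the residual network is that each block computes y = x + F(x), so the identity is trivially representable. The toy sketch below (illustrative dimensions and initialization, not the ResNet paper's architecture) shows that when the residual branch's final weights are zero, the block passes its input through unchanged, which is why adding blocks need not degrade a deep stack.

```python
import numpy as np

rng = np.random.default_rng(3)

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    """y = x + F(x): the identity shortcut lets very deep stacks
    default to the identity, easing the degradation problem."""
    return x + W2 @ relu(W1 @ x)

n = 16
x = rng.standard_normal(n)
# If the residual branch's final weights are zero, the block is
# exactly the identity, so a deeper network starts out no worse
# than a shallower one.
W1 = rng.standard_normal((n, n)) * 0.1
W2 = np.zeros((n, n))
y = residual_block(x, W1, W2)
print(np.allclose(y, x))
```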
Around the same time, deep learning started impacting the field of art. Early examples included Google DeepDream (2015), and neural style transfer (2015), both of which were based on pretrained image classification neural networks, such as VGG-19.
The generative adversarial network (GAN), introduced by Ian Goodfellow et al. in 2014 and based on Jürgen Schmidhuber's principle of artificial curiosity, became state of the art in generative modeling during the 2014–2018 period. Excellent image quality was achieved by Nvidia's StyleGAN (2018), based on the Progressive GAN by Tero Karras et al., in which the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GANs reached popular success and provoked discussions concerning deepfakes. Diffusion models (2015) have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022).
In 2015, Google's speech recognition improved by 49% through an LSTM-based model, which the company made available through Google Voice Search on smartphones.
Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision and automatic speech recognition (ASR). Results on commonly used evaluation sets such as TIMIT (ASR) and MNIST (image classification), as well as a range of large-vocabulary speech recognition tasks, have steadily improved. Convolutional neural networks were superseded for ASR by LSTM, but remain more successful in computer vision.
Yoshua Bengio, Geoffrey Hinton and Yann LeCun were awarded the 2018 Turing Award for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing".
== Neural networks ==
Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.
An ANN is based on a collection of connected units called artificial neurons, (analogous to biological neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that it sends downstream.
Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times.
The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.
Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.
As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces or playing Go).
=== Deep neural networks ===
A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers. There are different types of neural networks but they always consist of the same components: neurons, synapses, weights, biases, and functions. These components as a whole function in a way that mimics functions of the human brain, and can be trained like any other ML algorithm.
For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer, and complex DNN have many layers, hence the name "deep" networks.
DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition of primitives. The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network. For instance, it was proved that sparse multivariate polynomials are exponentially easier to approximate with DNNs than with shallow networks.
Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets.
DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. The weights and inputs are multiplied and return an output between 0 and 1. If the network did not accurately recognize a particular pattern, an algorithm would adjust the weights. That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data.
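The loop of "multiply weights by inputs, squash the result into (0, 1), and adjust the weights when the output is wrong" can be sketched for a single neuron. The inputs, target, learning rate, and step count below are invented for illustration; real networks repeat the same idea across millions of weights.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Start with random weights, as the text describes.
x = np.array([0.2, 0.9])
w = rng.standard_normal(2)
target = 1.0                      # the pattern's desired label

for step in range(500):
    y = sigmoid(w @ x)            # output squashed into (0, 1)
    err = y - target
    grad = err * y * (1 - y) * x  # how each weight influenced the error
    w -= 0.5 * grad               # adjust weights to reduce the error

final = sigmoid(w @ x)
print(final)                      # close to the target of 1.0
```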
Recurrent neural networks, in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use.
Convolutional neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR).
==== Challenges ====
As with ANNs, many issues can arise with naively trained DNNs. Two common issues are overfitting and computation time.
DNNs are prone to overfitting because of the added layers of abstraction, which allow them to model rare dependencies in the training data. Regularization methods such as Ivakhnenko's unit pruning, weight decay (ℓ2-regularization), or sparsity (ℓ1-regularization) can be applied during training to combat overfitting. Alternatively, dropout regularization randomly omits units from the hidden layers during training. This helps to exclude rare dependencies. Another interesting recent development is research into models of just enough complexity through an estimation of the intrinsic complexity of the task being modelled. This approach has been successfully applied for multivariate time series prediction tasks such as traffic prediction. Finally, data can be augmented via methods such as cropping and rotating such that smaller training sets can be increased in size to reduce the chances of overfitting.
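Weight decay, the ℓ2 method mentioned above, amounts to one extra term in each gradient step: the penalty (λ/2)·‖w‖² contributes λ·w to the gradient, shrinking every weight toward zero. A minimal sketch (with invented learning rate and λ, shown with a zero data gradient so the decay effect is visible in isolation):

```python
import numpy as np

# Gradient step with L2 weight decay: the penalty (lam/2)*||w||^2
# adds lam*w to the gradient, shrinking weights toward zero.
def sgd_step(w, grad, lr=0.1, lam=0.01):
    return w - lr * (grad + lam * w)

w = np.array([2.0, -3.0])
# With a zero data gradient, weight decay alone shrinks w
# geometrically: each step multiplies it by (1 - lr*lam) = 0.999.
for _ in range(100):
    w = sgd_step(w, grad=np.zeros_like(w))
print(w)
```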
DNNs must consider many training parameters, such as the size (number of layers and number of units per layer), the learning rate, and initial weights. Sweeping through the parameter space for optimal parameters may not be feasible due to the cost in time and computational resources. Various tricks, such as batching (computing the gradient on several training examples at once rather than individual examples) speed up computation. Large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because of the suitability of such processing architectures for the matrix and vector computations.
Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. It doesn't require learning rates or randomized initial weights. The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved.
== Hardware ==
Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method for training large-scale commercial cloud AI. OpenAI estimated the hardware computation used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of computation required, with a doubling-time trendline of 3.4 months.
Special electronic circuits called deep learning processors were designed to speed up deep learning algorithms. Deep learning processors include neural processing units (NPUs) in Huawei cellphones and cloud computing servers such as tensor processing units (TPU) in the Google Cloud Platform. Cerebras Systems has also built a dedicated system to handle large deep learning models, the CS-2, based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2).
Atomically thin semiconductors are considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage.
In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs).
In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing. The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds. Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.
== Applications ==
=== Automatic speech recognition ===
Large-scale automatic speech recognition was the first and most convincing large-scale success of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates is competitive with traditional speech recognizers on certain tasks.
The initial success in speech recognition was based on small-scale recognition tasks based on TIMIT. The data set contains 630 speakers from eight major dialects of American English, where each speaker reads 10 sentences. Its small size lets many configurations be tried. More importantly, the TIMIT task concerns phone-sequence recognition, which, unlike word-sequence recognition, allows weak phone bigram language models. This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. Error rates on this task, including these early results and measured as percent phone error rate (PER), have been tracked since 1991.
The debut of DNNs for speaker recognition in the late 1990s, for speech recognition around 2009–2011, and of LSTM around 2003–2007 accelerated progress in eight major areas:
Scale-up/out and accelerated DNN training and decoding
Sequence discriminative training
Feature processing by deep models with solid understanding of the underlying mechanisms
Adaptation of DNNs and related deep models
Multi-task and transfer learning by DNNs and related deep models
CNNs and how to design them to best exploit domain knowledge of speech
RNN and its rich LSTM variants
Other types of deep models including tensor-based models and integrated deep generative/discriminative models.
All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products) are based on deep learning.
=== Image recognition ===
A common evaluation set for image classification is the MNIST database data set. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available.
Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011 with recognition of traffic signs, and in 2014 with recognition of human faces.
Deep learning-trained vehicles now interpret 360° camera views. Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes.
=== Visual art processing ===
Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of
identifying the style period of a given painting
Neural Style Transfer – capturing the style of a given artwork and applying it in a visually pleasing manner to an arbitrary photograph or video
generating striking imagery based on random visual input fields.
=== Natural language processing ===
Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling.
Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as a probabilistic context-free grammar (PCFG) implemented by an RNN. Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing. Deep neural architectures provide the best results for constituency parsing, sentiment analysis, information retrieval, spoken language understanding, machine translation, contextual entity linking, writing style recognition, named-entity recognition (token classification), text classification, and others.
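The idea of words as points in a vector space can be illustrated with a toy example; the 4-dimensional vectors below are made up for demonstration, whereas real word2vec embeddings are learned from large corpora and typically have hundreds of dimensions.

```python
import numpy as np

# Made-up embeddings: each word is a point in a small vector space.
embeddings = {
    "king":  np.array([0.8, 0.6, 0.1, 0.0]),
    "queen": np.array([0.7, 0.6, 0.1, 0.4]),
    "apple": np.array([0.0, 0.1, 0.9, 0.2]),
}

def cosine_similarity(u, v):
    """Standard measure of closeness between embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Semantically related words sit closer together than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```

In a learned embedding this geometric closeness emerges from co-occurrence statistics rather than being assigned by hand.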
Recent developments generalize word embedding to sentence embedding.
Google Translate (GT) uses a large end-to-end long short-term memory (LSTM) network. Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system "learns from millions of examples". It translates "whole sentences at a time, rather than pieces". Google Translate supports over one hundred languages. The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations". GT uses English as an intermediate between most language pairs.
=== Drug discovery and toxicology ===
A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects. Research has explored use of deep learning to predict the biomolecular targets, off-targets, and toxic effects of environmental chemicals in nutrients, household products and drugs.
AtomNet is a deep learning system for structure-based rational drug design. AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis.
In 2017 graph neural networks were used for the first time to predict various properties of molecules in a large toxicology data set. In 2019, generative neural networks were used to produce molecules that were validated experimentally all the way into mice.
=== Customer relationship management ===
Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.
=== Recommendation systems ===
Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations. Multi-view deep learning has been applied for learning user preferences from multiple domains. The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.
=== Bioinformatics ===
An autoencoder ANN was used in bioinformatics, to predict gene ontology annotations and gene-function relationships.
In medical informatics, deep learning was used to predict sleep quality based on data from wearables and predictions of health complications from electronic health record data.
Deep neural networks have shown unparalleled performance in predicting protein structure from the sequence of amino acids that make it up. In 2020, AlphaFold, a deep-learning based system, achieved a level of accuracy significantly higher than all previous computational methods.
=== Deep Neural Network Estimations ===
Deep neural networks can be used to estimate the entropy of a stochastic process, in an approach called the Neural Joint Entropy Estimator (NJEE). Such an estimation provides insights into the effects of input random variables on an output random variable. Practically, the DNN is trained as a classifier that maps an input vector or matrix X to an output probability distribution over the possible classes of random variable Y, given input X. For example, in image classification tasks, the NJEE maps a vector of pixels' color values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by a softmax layer with a number of nodes equal to the alphabet size of Y. NJEE uses continuously differentiable activation functions, so that the conditions for the universal approximation theorem hold. This method has been shown to provide a strongly consistent estimator and to outperform other methods in the case of large alphabet sizes.
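The core idea, reading an entropy estimate off a classifier's output probabilities, can be sketched as follows. This is an illustrative simplification, not the NJEE architecture itself: instead of a trained network, a perfectly calibrated model is used directly, and the entropy estimate is the average negative log-probability (cross-entropy) assigned to observed symbols.

```python
import numpy as np

rng = np.random.default_rng(0)

# Y is a discrete variable over an alphabet of size 4 with a known
# distribution (made up for illustration).
alphabet_size = 4
true_dist = np.array([0.4, 0.3, 0.2, 0.1])
samples = rng.choice(alphabet_size, size=100_000, p=true_dist)

# A perfectly calibrated softmax layer would output true_dist for
# every input; we use it in place of a trained network here.
model_probs = true_dist

estimate = -np.mean(np.log2(model_probs[samples]))   # empirical cross-entropy
exact = -np.sum(true_dist * np.log2(true_dist))      # true entropy in bits

print(round(estimate, 3), round(exact, 3))
```

With a well-calibrated model, the cross-entropy on samples converges to the true entropy, which is the consistency property the NJEE result formalizes for classifier-based estimators.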
=== Medical image analysis ===
Deep learning has been shown to produce competitive results in medical applications such as cancer cell classification, lesion detection, organ segmentation and image enhancement. Modern deep learning tools demonstrate high accuracy in detecting various diseases and can help specialists improve diagnostic efficiency.
=== Mobile advertising ===
Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.
=== Image restoration ===
Deep learning has been successfully applied to inverse problems such as denoising, super-resolution, inpainting, and film colorization. These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration" which trains on an image dataset, and Deep Image Prior, which trains on the image that needs restoration.
=== Financial fraud detection ===
Deep learning is being successfully applied to financial fraud detection, tax evasion detection, and anti-money laundering.
=== Materials science ===
In November 2023, researchers at Google DeepMind and Lawrence Berkeley National Laboratory announced that they had developed an AI system known as GNoME. This system has contributed to materials science by discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganic crystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a noteworthy success rate of 71%. The data of newly discovered materials is publicly available through the Materials Project database, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in material science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds.
=== Military ===
The United States Department of Defense applied deep learning to train robots in new tasks through observation.
=== Partial differential equations ===
Physics-informed neural networks have been used to solve partial differential equations in both forward and inverse problems in a data-driven manner. One example is reconstructing fluid flow governed by the Navier–Stokes equations. Using physics-informed neural networks does not require the often expensive mesh generation that conventional CFD methods rely on.
=== Deep backward stochastic differential equation method ===
The deep backward stochastic differential equation method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs). This method is particularly useful for solving high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. Specifically, traditional methods like finite difference methods or Monte Carlo simulations often struggle with the curse of dimensionality, where computational cost increases exponentially with the number of dimensions. Deep BSDE methods, however, employ deep neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden.
In addition, the integration of Physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws directly into the neural network architecture. This ensures that the solutions not only fit the data but also adhere to the governing stochastic differential equations. PINNs leverage the power of deep learning while respecting the constraints imposed by the physical models, resulting in more accurate and reliable solutions for financial mathematics problems.
=== Image reconstruction ===
Image reconstruction is the reconstruction of the underlying images from image-related measurements. Several works have shown the superior performance of deep learning methods compared to analytical methods for various applications, e.g., spectral imaging and ultrasound imaging.
=== Weather prediction ===
Traditional weather prediction systems solve a very complex system of partial differential equations. GraphCast is a deep learning based model, trained on a long history of weather data to predict how weather patterns change over time. It is able to predict weather conditions for up to 10 days globally, at a very detailed level, and in under a minute, with precision similar to state of the art systems.
=== Epigenetic clock ===
An epigenetic clock is a biochemical test that can be used to measure age. Galkin et al. used deep neural networks to train an epigenetic aging clock of unprecedented accuracy using >6,000 blood samples. The clock uses information from 1000 CpG sites and predicts people with certain conditions (IBD, frontotemporal dementia, ovarian cancer, obesity) to be older than healthy controls. The aging clock was planned to be released for public use in 2021 by an Insilico Medicine spinoff company, Deep Longevity.
== Relation to human cognitive and brain development ==
Deep learning is closely related to a class of theories of brain development (specifically, neocortical development) proposed by cognitive neuroscientists in the early 1990s. These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave of nerve growth factor) support self-organization somewhat analogous to that of the neural networks utilized in deep learning models. Like the neocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment), and then passes its output (and possibly the original input), to other layers. This process yields a self-organizing stack of transducers, well-tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic-factors ... different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature".
A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of the backpropagation algorithm have been proposed in order to increase its processing realism. Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchical generative models and deep belief networks, may be closer to biological reality. In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex.
Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons and neural populations. Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system both at the single-unit and at the population levels.
== Commercial activity ==
Facebook's AI lab performs tasks such as automatically tagging uploaded pictures with the names of the people in them.
Google's DeepMind Technologies developed a system capable of learning how to play Atari video games using only pixels as data input. In 2015 they demonstrated their AlphaGo system, which learned the game of Go well enough to beat a professional Go player. Google Translate uses a neural network to translate between more than 100 languages.
In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories.
In 2008, researchers at The University of Texas at Austin (UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor. A new algorithm building on TAMER, called Deep TAMER, was later introduced in 2018 during a collaboration between the U.S. Army Research Laboratory (ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation. Using Deep TAMER, a robot learned a task with a human trainer, watching video streams or observing a human perform a task in person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as "good job" and "bad job".
== Criticism and comment ==
Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science.
=== Theory ===
A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence, is less clear. (e.g., Does it converge? If so, how fast? What is it approximating?) Deep learning methods are often looked at as a black box, with most confirmations done empirically rather than theoretically.
In further reference to the idea that artistic sensitivity might be inherent in relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20–30 layer) neural networks, attempting to discern within essentially random data the images on which they were trained, demonstrated a striking visual appeal: the original research notice received well over 1,000 comments, and was for a time the subject of the most frequently accessed article on The Guardian's website.
=== Errors ===
Some deep learning architectures display problematic behaviors, such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images (2014) and misclassifying minuscule perturbations of correctly classified images (2013). Goertzel hypothesized that these behaviors are due to limitations in their internal representations and that these limitations would inhibit integration into heterogeneous multi-component artificial general intelligence (AGI) architectures. These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar decompositions of observed entities and events. Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules and is a basic goal of both human language acquisition and artificial intelligence (AI).
=== Cyber threat ===
As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception. By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image so that the ANN finds a match even though, to a human, the image looks nothing like the search target. Such manipulation is termed an "adversarial attack".
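The mechanism can be sketched with the fast gradient sign method on a toy model; the weights and input below are made up, and a single logistic unit stands in for the deep networks such attacks actually target.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "network": a logistic classifier with fixed, made-up weights.
w = np.array([2.0, -1.5, 0.5])
x = np.array([0.2, 0.4, 0.1])      # input the model scores below 0.5

def predict(x):
    return sigmoid(np.dot(w, x))   # P(class 1 | x)

# Fast gradient sign method: nudge every input component a small,
# fixed amount in the direction that most increases the class-1 score.
epsilon = 0.3
gradient_wrt_x = w * predict(x) * (1 - predict(x))  # d sigmoid(w.x)/dx
x_adv = x + epsilon * np.sign(gradient_wrt_x)

print(predict(x))      # below 0.5: classified as class 0
print(predict(x_adv))  # above 0.5: prediction flipped by small changes
```

Each component of the adversarial input differs from the original by at most 0.3, which is the analogue of an image perturbation too small for a human to notice.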
In 2016 researchers used one ANN to doctor images in trial and error fashion, identify another's focal points, and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images then photographed successfully tricked an image classification system. One defense is reverse image search, in which a possible fake image is submitted to a site such as TinEye that can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken.
Another group showed that certain psychedelic spectacles could fool a facial recognition system into thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017 researchers added stickers to stop signs and caused an ANN to misclassify them.
ANNs can however be further trained to detect attempts at deception, potentially leading attackers and defenders into an arms race similar to the kind that already defines the malware defense industry. ANNs have been trained to defeat ANN-based anti-malware software by repeatedly attacking a defense with malware that was continually altered by a genetic algorithm until it tricked the anti-malware while retaining its ability to damage the target.
In 2016, another group demonstrated that certain sounds could make the Google Now voice command system open a particular web address, and hypothesized that this could "serve as a stepping stone for further attacks (e.g., opening a web page hosting drive-by malware)".
In "data poisoning", false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery.
=== Data collection ethics ===
The deep learning systems that are trained using supervised learning often rely on data that is created or annotated by humans, or both. It has been argued that not only low-paid clickwork (such as on Amazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of human microwork that are often not recognized as such. The philosopher Rainer Mühlhoff distinguishes five types of "machinic capture" of human microwork to generate training data: (1) gamification (the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g. CAPTCHAs for image recognition or click-tracking on Google search results pages), (3) exploitation of social motivations (e.g. tagging faces on Facebook to obtain labeled facial images), (4) information mining (e.g. by leveraging quantified-self devices such as activity trackers) and (5) clickwork.
== See also ==
Applications of artificial intelligence
Comparison of deep learning software
Compressed sensing
Differentiable programming
Echo state network
List of artificial intelligence projects
Liquid state machine
List of datasets for machine-learning research
Reservoir computing
Scale space and deep learning
Sparse coding
Stochastic parrot
Topological deep learning
== References ==
== Further reading == | Wikipedia/Applications_of_deep_learning |
The Dow Jones Industrial Average (DJIA), Dow Jones, or simply the Dow (), is a stock market index of 30 prominent companies listed on stock exchanges in the United States.
The DJIA is one of the oldest and most commonly followed equity indices. It is price-weighted, unlike other common indexes such as the Nasdaq Composite or S&P 500, which use market capitalization. The DJIA also contains fewer stocks, which could exhibit higher risk; however, it could be less volatile when the market is rapidly rising or falling due to its components being well-established large-cap companies.
The value of the index can also be calculated as the sum of the stock prices of the companies included in the index, divided by a factor, which is approximately 0.163 as of November 2024. The factor is changed whenever a constituent company undergoes a stock split so that the value of the index is unaffected by the stock split.
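The price-weighted calculation and the split adjustment can be illustrated with a simplified example; the three share prices below are hypothetical, and the divisor of roughly 0.163 is the approximate value cited above.

```python
# Simplified illustration of a price-weighted index with a divisor.
# The share prices are made up; the ~0.163 divisor is the approximate
# DJIA value as of November 2024 cited in the text.
prices = [150.0, 420.0, 95.0]
divisor = 0.163

index_value = sum(prices) / divisor
print(round(index_value, 1))

# Suppose the second stock does a 2-for-1 split, halving its price.
# The divisor is recomputed so the index value is unchanged by the split.
prices_after_split = [150.0, 210.0, 95.0]
new_divisor = sum(prices_after_split) / index_value

assert abs(sum(prices_after_split) / new_divisor - index_value) < 1e-9
print(round(new_divisor, 4))   # smaller divisor after the split
```

A real index committee performs the same kind of adjustment whenever a constituent splits, so that only genuine price movements, not corporate actions, move the index.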
First calculated on May 26, 1896, the index is the second-oldest among U.S. market indices, after the Dow Jones Transportation Average. It was created by Charles Dow, co-founder of both The Wall Street Journal and Dow Jones & Company, and named after him and his business associate, statistician Edward Jones.
The index is maintained by S&P Dow Jones Indices, an entity majority-owned by S&P Global. Its components are selected by a committee. The ten components with the largest dividend yields are commonly referred to as the Dogs of the Dow. As with all stock prices, the prices of the constituent stocks and consequently the value of the index itself are affected by the performance of the respective companies as well as macroeconomic factors.
== Components ==
As of May 29, 2025, the Dow Jones Industrial Average consists of the following companies, with a weighting as shown:
== Former components ==
As of November 8, 2024, the components of the DJIA have changed 59 times since its beginning on May 26, 1896. General Electric had the longest presence on the index, beginning in the original index in 1896 and ending in 2018, but was dropped and re-added twice between 1898 and 1907. Changes to the index since 1991 are as follows:
On May 6, 1991, Caterpillar Inc., J.P. Morgan & Co., and The Walt Disney Company replaced American Can, Navistar, and U.S. Steel.
On March 17, 1997, Travelers Inc., Hewlett-Packard, Johnson & Johnson, and Walmart replaced Westinghouse Electric, Texaco, Bethlehem Steel, and F. W. Woolworth Company.
On November 1, 1999, Microsoft, Intel, SBC Communications, and Home Depot replaced Goodyear Tire, Sears Roebuck, Union Carbide, and Chevron Corporation. Intel and Microsoft became the first and second companies traded on the Nasdaq to be part of the Dow.
On April 8, 2004, American International Group, Pfizer, and Verizon Communications replaced AT&T Corporation, Kodak, and International Paper.
On February 19, 2008, Chevron Corporation and Bank of America replaced Altria Group and Honeywell. Chevron was previously a Dow component from July 18, 1930, to November 1, 1999. During Chevron's absence, its split-adjusted price per share went from $44 to $85, while the price of petroleum rose from $24 to $100 per barrel.
On September 22, 2008, Kraft Foods Inc. replaced American International Group (AIG) in the index.
On June 8, 2009, The Travelers Companies and Cisco Systems replaced Motors Liquidation Company (formerly General Motors) and Citigroup. Cisco became the third company traded on the NASDAQ to be part of the Dow.
On September 24, 2012, UnitedHealth Group replaced Kraft Foods Inc. following Kraft's split into Mondelez International and Kraft Foods.
On September 23, 2013, Goldman Sachs, Nike, Inc., and Visa Inc. replaced Alcoa, Bank of America, and Hewlett-Packard. Visa replaced Hewlett-Packard because of the split into HP Inc. and Hewlett Packard Enterprise.
On March 19, 2015, Apple Inc. replaced AT&T, which had been a component of the DJIA since November 1916. Apple became the fourth company traded on the NASDAQ to be part of the Dow.
On September 1, 2017, DowDuPont replaced DuPont. DowDuPont was formed by the merger of Dow Chemical Company with DuPont.
On June 26, 2018, Walgreens Boots Alliance replaced General Electric, which had been a component of the DJIA since November 1907, after being part of the inaugural index in May 1896 and much of the 1896 to 1907 period.
On April 2, 2019, Dow Inc. replaced DowDuPont. Dow, Inc. is a spin-off of DowDuPont, itself a merger of Dow Chemical Company and DuPont.
On April 6, 2020, Raytheon Technologies replaced United Technologies. Raytheon is the name of the combination of United Technologies and the Raytheon Company, which merged as of April 3, 2020. The newly combined conglomerate does not include previous subsidiaries Carrier Global or Otis Worldwide.
On August 31, 2020, Amgen, Honeywell, and Salesforce.com replaced ExxonMobil, Pfizer, and Raytheon Technologies.
On February 26, 2024, Amazon replaced Walgreens Boots Alliance.
On November 8, 2024, Nvidia replaced Intel, and Sherwin-Williams replaced Dow Inc.
== Investment methods ==
Investing in the DJIA is possible via index funds as well as via derivatives such as option contracts and futures contracts.
=== Mutual and exchange-traded funds ===
Index funds, including mutual funds and exchange-traded funds (ETFs), can replicate, before fees and expenses, the performance of the index by holding the same stocks as the index in the same proportions. An ETF that replicates the performance of the index is issued by State Street Corporation (NYSE Arca: DIA).
ProShares offers leveraged ETFs that attempt to produce three times the daily result of either investing in (NYSE Arca: UDOW) or shorting (NYSE Arca: SDOW) the Dow Jones Industrial Average.
=== Futures contracts ===
In the derivatives market, the CME Group, through its subsidiaries the Chicago Mercantile Exchange (CME) and the Chicago Board of Trade (CBOT), issues futures contracts, including the E-mini Dow ($5) futures (YM), which track the average. Trading is typically carried out in an open outcry auction, or over an electronic network such as CME's Globex platform.
=== Options contracts ===
The Chicago Board Options Exchange (CBOE) issues option contracts on the Dow through the root symbol DJX. Options on various Dow-underlying ETFs are also available for trading.
== Annual returns ==
The following table shows the annual development of the Dow Jones Index, which was calculated back to 1896.
== History ==
=== Precursor ===
In 1884, Charles Dow composed his first stock average, which contained nine railroads and two industrial companies that appeared in the Customer's Afternoon Letter, a daily two-page financial news bulletin which was the precursor to The Wall Street Journal. On January 2, 1886, the number of stocks represented in what is now the Dow Jones Transportation Average dropped from 14 to 12, as the Central Pacific Railroad and Central Railroad of New Jersey were removed. Though comprising the same number of stocks, this index contained only one of the original twelve industrials that would eventually form Dow's most famous index.
=== Initial components ===
Dow calculated his first average purely of industrial stocks on May 26, 1896, creating what is now known as the Dow Jones Industrial Average. None of the original 12 industrials still remain part of the index.
American Cotton Oil Company, a predecessor company to Hellmann's and Best Foods, now part of Unilever.
American Sugar Refining Company, became Domino Sugar in 1900, now Domino Foods, Inc.
American Tobacco Company, broken up in a 1911 antitrust action.
Chicago Gas Company, bought by Peoples Gas Light in 1897, was an operating subsidiary of the now-defunct Integrys Energy Group until 2014.
Distilling & Cattle Feeding Company, now Millennium Chemicals, formerly a division of LyondellBasell.
General Electric, still in operation, removed from the Dow Jones Industrial Average in 2018.
Laclede Gas Company, still in operation as Spire Inc, removed from the Dow Jones Industrial Average in 1899.
National Lead Company, now NL Industries, removed from the Dow Jones Industrial Average in 1916.
North American Company, an electric utility holding company, broken up by the U.S. Securities and Exchange Commission (SEC) in 1946.
Tennessee Coal, Iron and Railroad Company in Birmingham, Alabama, bought by U.S. Steel in 1907; U.S. Steel was removed from the Dow Jones Industrial Average in 1991.
United States Leather Company, dissolved in 1952.
United States Rubber Company, changed its name to Uniroyal in 1961, merged with private Goodrich Corporation in 1986, tire business bought by Michelin in 1990. The remainder of Goodrich remained independent until it was acquired by United Technologies in 2012 and became a part of UTC Aerospace Systems, now Collins Aerospace, a Raytheon Technologies subsidiary.
=== Early years ===
When it was first published in the mid-1880s, the index stood at a level of 62.76. It reached a peak of 78.38 during the summer of 1890, but fell to its all-time low of 28.48 in the summer of 1896 during the Panic of 1896. Many of the biggest percentage price moves in the Dow occurred early in its history, as the nascent industrial economy matured. In the 1900s, the Dow halted its momentum as it worked its way through two financial crises: the Panic of 1901 and the Panic of 1907. The index broke 100 for the first time in 1906, though the negativity surrounding the 1906 San Francisco earthquake did little to improve the economic climate, and the Dow remained stuck in a range between 53 and 103 until late 1914.
At the start of the 1910s, the Panic of 1910–1911 stifled economic growth. On July 30, 1914, as the average stood at a level of 71.42, a decision was made to close the New York Stock Exchange, and suspend trading for a span of four and a half months. Some historians believe the exchange was closed because of a concern that markets would plunge as a result of panic over the onset of World War I. An alternative explanation is that the United States Secretary of the Treasury, William Gibbs McAdoo, closed the exchange to conserve the U.S. gold stock in order to launch the Federal Reserve System later that year, with enough gold to keep the United States on par with the gold standard. When the markets reopened on December 12, 1914, the index closed at 74.56, a gain of 4.4%. This is frequently reported as a large drop, due to using a later redefinition. Reports from the time say that the day was positive. Following World War I, the United States experienced another economic downturn, the Post–World War I recession. The Dow's performance remained unchanged from the closing value of the previous decade, adding only 8.26%, from 99.05 at the beginning of 1910, to a level of 107.23 at the end of 1919.
The Dow experienced a long bull run from 1920 to late 1929 when it rose from 73 to 381 points. In 1928, the components of the Dow were increased to 30 stocks near the economic height of that decade, which was nicknamed the Roaring Twenties. This period downplayed the influence of the Depression of 1920–1921 and certain international conflicts such as the Polish–Soviet War, the Irish Civil War, the Turkish War of Independence and the initial phase of the Chinese Civil War. After a peak of 381.17 on September 3, 1929, the bottom of the 1929 crash came just 2 months later on November 13, 1929, at 195.35 intraday, closing slightly higher at 198.69. The Wall Street Crash of 1929 and the ensuing Great Depression over the next several years saw the Dow continue to fall until July 8, 1932, when it closed at 41.22, roughly two-thirds of its mid-1880s starting point and almost 90% below its peak. Overall for the 1920s decade, the Dow still ended with a healthy 131.7% gain, from 107.23 to 248.48 at the end of 1929. In inflation-adjusted numbers, the high of 381.17 on September 3, 1929, was not surpassed until 1954.
Marked by global instability and the Great Depression, the 1930s contended with several consequential European and Asian outbreaks of war, leading to the catastrophic World War II in 1939. Other conflicts during the decade which affected the stock market included the 1936–1939 Spanish Civil War, the 1935–1936 Second Italo-Abyssinian War, the Soviet-Japanese Border War of 1939, and the Second Sino-Japanese War of 1937. The United States experienced the Recession of 1937–1938, which temporarily brought economic recovery to a halt. The largest one-day percentage gain in the index happened in the depths of the 1930s bear market on March 15, 1933, when the Dow gained 15.34% to close at 62.10. However, as a whole throughout the Great Depression, the Dow posted some of its worst performances, for a negative return during most of the 1930s for new and old stock market investors. For the decade, the Dow Jones average was down from 248.48 at the beginning of 1930, to a stable level of 150.24 at the end of 1939, a loss of about 40%.
=== 1940s ===
Post-war reconstruction during the 1940s, along with renewed optimism of peace and prosperity, brought about a 33% surge in the Dow from 150.24 to 200.13. The strength in the Dow occurred despite the Recession of 1949 and various global conflicts.
=== 1950s ===
During the 1950s, the Korean War and the Cold War did not stop the Dow's climb higher. A nearly 240% increase in the average from 200.13 to 679.36 ensued over the course of that decade.
=== 1960s ===
The Dow began to stall during the 1960s as the markets trudged through the Kennedy Slide of 1962, but still managed an 18% gain from 679.36 to 800.36.
=== 1970s ===
The 1970s marked a time of economic uncertainty and troubled relations between the U.S. and certain Middle-Eastern countries. The 1970s energy crisis was a prelude to a disastrous economic climate along with stagflation, the combination of high unemployment and high inflation. However, on November 14, 1972, the average closed at 1,003.16, above the 1,000 mark for the first time, during a brief relief rally in the midst of a lengthy bear market. Between January 1973 and December 1974, the average lost 48% of its value in what became known as the 1973–1974 stock market crash, closing at 577.60 on December 6, 1974. The nadir came after prices dropped more than 45% over two years since the NYSE's high point of 1,003.16 on November 14, 1972. In 1976, the index reached 1,000 several times and it closed the year at 1,004.75. Although the Vietnam War ended in 1975, new tensions arose with Iran surrounding the Iranian Revolution in 1979. Performance-wise for the 1970s, the index remained virtually flat, rising 4.8% from 800.36 to 838.74.
=== 1980s ===
The 1980s began with the early 1980s recession. In early 1981, the index broke above 1,000 several times, but then retreated. After closing above 2,000 in January 1987, the largest one-day percentage drop occurred on Black Monday, October 19, 1987, when the average fell 22.61%. There were no clear reasons given to explain the crash.
On October 13, 1989, the Friday the 13th mini-crash, which initiated the collapse of the junk bond market, resulted in a loss of almost 7% of the index in a single day.
During the 1980s, the Dow increased 228% from 838.74 to 2,753.20; despite the market crashes, Silver Thursday, an early 1980s recession, the 1980s oil glut, the Japanese asset price bubble, and other political distractions. The index had only two negative years in the 1980s: in 1981 and 1984.
=== 1990s ===
The 1990s brought on rapid advances in technology along with the introduction of the dot-com era. The markets contended with the 1990 oil price shock compounded with the effects of the early 1990s recession and a brief European situation surrounding Black Wednesday. Certain influential foreign conflicts such as the 1991 Soviet coup d'état attempt which took place as part of the initial stages of the Dissolution of the Soviet Union and the Revolutions of 1989; the First Chechen War and the Second Chechen War, the Gulf War, and the Yugoslav Wars failed to dampen economic enthusiasm surrounding the ongoing Information Age and the "irrational exuberance" (a phrase coined by Alan Greenspan) of the dot-com bubble. Between late 1992 and early 1993, the Dow staggered through the 3,000 level making only modest gains as the biotechnology sector suffered through the downfall of the Biotech Bubble; as many biotech companies saw their share prices rapidly rise to record levels and then subsequently fall to new all-time lows.
The Dow soared from 2,753 to 8,000 between January 1990 and July 1997. In October 1997, the events surrounding the 1997 Asian financial crisis plunged the Dow into a 554-point loss to a close of 7,161.15; a retrenchment of 7.18% in what became known as the October 27, 1997 mini-crash.
However, the Dow continued climbing past 9,000 despite negativity surrounding the 1998 Russian financial crisis along with the subsequent fallout from the 1998 collapse of Long-Term Capital Management due to bad bets placed on the movement of the Russian ruble.
On March 29, 1999, the average closed at 10,006.78, its first close above 10,000. This prompted a celebration on the New York Stock Exchange trading floor, complete with party hats. Total gains for the decade exceeded 315%; from 2,753.20 to 11,497.12, which equates to 12.3% annually.
The Dow averaged a 5.3% return compounded annually for the 20th century, a record Warren Buffett called "a wonderful century"; he calculated that to achieve that return again, the index would need to close at about 2,000,000 by December 2099.
=== 2000s ===
On September 17, 2001, the first day of trading after the September 11 attacks on the United States, the Dow fell 7.1%. However, the Dow began an upward trend shortly after the attacks, and regained all lost ground to close above 10,000 for the year. In 2002, the Dow dropped to a four-year low of 7,286 on September 24, 2002, due to the stock market downturn of 2002 and lingering effects of the dot-com bubble. Overall, while the NASDAQ index fell roughly 75% and the S&P 500 index fell roughly 50% between 2000 and 2002, the Dow only fell 27% during the same period. In 2003, the Dow held steady within the 7,000 to 9,000-point level and recovered to the 10,000 mark by year end.
The Dow continued climbing and reached a record high of 14,198.10 on October 11, 2007, a mark which was not matched until March 2013. It then dropped over the next year due to the 2008 financial crisis.
On September 15, 2008, a wider financial crisis became evident after the bankruptcy of Lehman Brothers along with the economic effect of record high oil prices which had reached almost $150 per barrel two months earlier. The Dow lost more than 500 points for the day, returning to its mid-July lows below 11,000. A series of bailout packages, including the Emergency Economic Stabilization Act of 2008, proposed and implemented by the Federal Reserve and United States Department of the Treasury did not prevent further losses. After nearly six months of extreme volatility during which the Dow experienced its largest one-day point loss, largest daily point gain, and largest intraday range (of more than 1,000 points) at the time, the index closed at a new 12-year low of 6,547.05 on March 9, 2009, its lowest close since April 1997. The Dow had lost 20% of its value in only six weeks.
Towards the latter half of 2009, the average rallied towards the 10,000 level amid optimism that the Great Recession, the United States housing bubble and the 2008 financial crisis were easing and possibly coming to an end. For the decade, the Dow saw a rather substantial pullback for a negative return from 11,497.12 to 10,428.05, a loss of 9.3%.
=== 2010s ===
During the first half of the 2010s decade, aided by the Federal Reserve's loose monetary policy including quantitative easing, the Dow made a notable rally attempt. This was despite significant volatility due to growing global concerns such as the European debt crisis, the Dubai World 2009 debt standstill, and the 2011 United States debt-ceiling crisis.
On May 6, 2010, the Dow lost 9.2% intra-day and regained nearly all of it within a single hour. This event, which became known as the 2010 Flash Crash, sparked new regulations to prevent future incidents.
Six years after its previous high in 2007, the Dow finally closed at a new record high on March 5, 2013. It continued rising for the next several years past 17,000 points until a brief 2015–2016 stock market selloff in the second half of 2015. It then picked up again in early 2016 and climbed past 25,000 points on January 4, 2018.
On November 9, 2016, the day after Donald Trump's victory over Hillary Clinton in the U.S. presidential election, the index soared, coming within roughly 25 points of its all-time intraday high to that point.
Volatility returned in 2018 when the Dow fell nearly 20%. By early January 2019, the index had quickly rallied more than 10% from its Christmas Eve low.
Overall in the 2010s decade, the Dow increased from 10,428.05 to 28,538.44 for a substantial gain of 174%.
=== 2020s ===
Despite the emerging COVID-19 pandemic, the Dow continued its bull run from the previous decade before peaking at 29,551.42 on February 12, 2020 (29,568.57 intraday on the same day). The index slowly retreated for the remainder of the week and into the next week, before coronavirus fears and an oil price war between Saudi Arabia and Russia sent the index into a tailspin, recording several days of losses (and gains) of at least 1,000 points, a typical symptom of a bear market as previously seen in October 2008 during the 2008 financial crisis. Volatility rose high enough to trigger multiple 15-minute trading halts. In the first quarter of 2020, the DJIA fell 23%, its worst quarter since 1987. The market recovered in the third quarter, returning to 28,837.52 on October 12, 2020, and peaked momentarily at a new all-time high of 29,675.25 on November 9, 2020, at 14:00 ET, following that day's announcement of the success of the Pfizer–BioNTech COVID-19 vaccine in Phase III clinical trials. On November 24, following news that the presidential transition of Joe Biden was approved, the Dow gained more than 500 points and closed above 30,000 for the first time, at 30,046.24. It ended the year on December 31, 2020, at a record 30,606.48. On January 22, 2024, the Dow Jones crossed 38,000 points for the first time; a month later it surpassed 39,000; and in May, it surpassed 40,000 points.
== Computation ==
The DJIA is computed as the sum of the prices of all thirty component stocks divided by a divisor, the Dow Divisor. The divisor is adjusted in case of stock splits, spinoffs or similar structural changes, to ensure that such events do not in themselves alter the numerical value of the DJIA. Initially, the divisor was simply the number of component companies, making the DJIA a simple arithmetic average. The present divisor, after many adjustments, is less than one, making the index larger than the sum of the prices of the components. That is:
DJIA = ∑p / d
where p are the prices of the component stocks and d is the Dow Divisor.
Events such as stock splits or changes in the list of the companies composing the index alter the sum of the component prices. In these cases, in order to avoid discontinuity in the index, the Dow Divisor is updated so that the quotations right before and after the event coincide:
DJIA = ∑p_old / d_old = ∑p_new / d_new.
Since November 8, 2024, the Dow Divisor is 0.16268413125742 and every $1 change in price in a particular stock within the average equates to a 6.146881 (or 1 ÷ 0.16268413125742) point movement.
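As a sketch of the computation and the divisor update described above (the component prices here are hypothetical; only the divisor value is taken from the text):

```python
# Sketch of a price-weighted index and a divisor adjustment after a stock
# split. Component prices are made up; the divisor is the value quoted
# above (as of November 8, 2024).

def index_value(prices, divisor):
    """Price-weighted average: sum of component prices over the divisor."""
    return sum(prices) / divisor

prices = [150.0, 300.0, 50.0]        # hypothetical component prices
divisor = 0.16268413125742

before = index_value(prices, divisor)

# Each $1 move in any single component shifts the index by 1/d points.
points_per_dollar = 1 / divisor      # about 6.147

# Suppose the $300 stock does a 2-for-1 split. To keep the index continuous,
# the divisor is rescaled so the quotation is unchanged:
#   d_new = d_old * sum(p_new) / sum(p_old)
old_sum = sum(prices)
prices[1] /= 2                       # the $300 stock now trades at $150
divisor_new = divisor * sum(prices) / old_sum

after = index_value(prices, divisor_new)
assert abs(before - after) < 1e-9    # the split does not move the index
```

Note that the divisor only ever shrinks when component prices drop in such events, which is how it has fallen below one over time.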
== Assessment ==
=== Quality as a proxy of the stock market ===
Despite its unusual weighting by price rather than market capitalization, the Dow Jones Industrial Average is highly correlated with other proxies of the US equities market, particularly the S&P 500 Index. Between January 1980 and November 2023, the DJIA returned an annualized 8.90%, with the S&P 500 returning a nearly identical 8.91%.
=== Issues with market representation ===
With the inclusion of only 30 stocks, critics such as Ric Edelman argue that the DJIA is an inaccurate representation of overall market performance compared to more comprehensive indices such as the S&P 500 Index or the Russell 3000 Index. Additionally, the DJIA is criticized for being a price-weighted index, which gives higher-priced stocks more influence over the average than their lower-priced counterparts, but takes no account of the relative industry size or market capitalization of the components. For example, a $1 increase in a lower-priced stock can be negated by a $1 decrease in a much higher-priced stock, even though the lower-priced stock experienced a larger percentage change. In addition, a $1 move in the smallest component of the DJIA has the same effect as a $1 move in the largest component of the average. For example, during September–October 2008, former component AIG's reverse split-adjusted stock price collapsed from $22.76 on September 8 to $1.35 on October 27; contributing to a roughly 3,000-point drop in the index.
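The price-weighting effect described above can be checked with a small sketch (all prices and the divisor are hypothetical):

```python
# In a price-weighted index, a $1 move in any component adds the same
# number of index points, regardless of the stock's price.
divisor = 0.1627                     # hypothetical divisor
low_price, high_price = 25.0, 500.0  # hypothetical component prices

points_from_low = 1.0 / divisor      # index points from a $1 rise in the $25 stock
points_from_high = 1.0 / divisor     # identical points from a $1 rise in the $500 stock
assert points_from_low == points_from_high

# Yet the percentage moves in the underlying stocks differ by a factor of 20:
pct_move_low = 1.0 / low_price * 100    # a 4.0% move
pct_move_high = 1.0 / high_price * 100  # a 0.2% move
assert abs(pct_move_low - 20 * pct_move_high) < 1e-9
```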
As of June 2021, Goldman Sachs and UnitedHealth Group are among the highest-priced stocks in the average and therefore have the greatest influence on it. Alternately, Cisco Systems and Coca-Cola are among the lowest-priced stocks in the average and have the least sway in the price movement. Critics of the DJIA and most securities professionals recommend the market-capitalization weighted S&P 500 Index or the Wilshire 5000, the latter of which includes most publicly listed U.S. stocks, as better indicators of the U.S. stock market.
=== Correlation among components ===
A study between the correlation of components of the Dow Jones Industrial Average compared with the movement of the index finds that the correlation is higher when the stocks are declining. The correlation is lowest in a time when the average is flat or rises a modest amount.
== See also ==
Closing milestones of the Dow Jones Industrial Average
List of largest daily changes in the Dow Jones Industrial Average
William Peter Hamilton
S&P 500
== References ==
== Further reading ==
Stillman, Richard (1986). Dow Jones Industrial Average: History and Role in an Investment Strategy. Homewood, Ill.: Dow Jones-Irwin. ISBN 9780870945861. OCLC 424238820.
== External links ==
Official website
Business data for Dow Jones Industrial Average: | Wikipedia/Dow_Jones_Industrial_Average |
Big Ten Network (BTN) is an American sports network based in Chicago, Illinois. The channel is dedicated to coverage of collegiate sports sanctioned by the Big Ten Conference, including live and recorded event telecasts, news, analysis programs, and other content focusing on the conference's member schools. It is a joint venture between Fox Sports and the Big Ten, with Fox Corporation as 61% stakeholder and operating partner, and the Big Ten Conference owning a 39% stake. It is headquartered in the former Montgomery Ward & Co. Catalog House building at 600 West Chicago Avenue in Chicago.
Big Ten Network is carried by most major television providers and, as of 2022, had an estimated 50 million U.S. subscribers. By June 2023, this number had dropped to 48.7 million households.
Big Ten Network was the second U.S. sports network devoted to a single college sports conference, having been preceded one year earlier by the MountainWest Sports Network. BTN was later followed by rival cable channels from the Pac-12, SEC and ACC with a similar array of programming.
== History ==
The network's foundation traces back to 2004, following negotiations between the Big Ten and ESPN on an extension of the conference's broadcast contract with the network. With three years remaining in the existing deal, the conference sought a significant increase in rights fees. ESPN, however, balked, causing Big Ten commissioner Jim Delany to begin exploring the creation of his own network.
The launch of the Big Ten Network was announced on June 21, 2006, as a 20-year joint project between the Big Ten Conference and Fox Entertainment Group. At launch, the conference owned 51% of the network, while Fox owned a minority interest and handled its operations. The network was positioned to be the first ever cable channel dedicated to a single collegiate conference. The network also made a commitment to "event equality", stating it would produce and distribute an equal number of men's and women's events across all platforms within three years of its launch. The deal was meant to replace the Big Ten's television contract with ESPN's ESPN Plus regional television package. ESPN Plus games were typically only seen on one broadcast television station in a team's local market (for example, the Illinois Fighting Illini aired its games on Champaign, Illinois CBS affiliate WCIA (channel 3)).
Big Ten Network was launched at 8:00 p.m. Eastern Time on August 30, 2007, with Big Ten Tonight as its inaugural program. The network aired its first live telecasts two days later on September 1, which included a football game between Appalachian State and Michigan – the game gained national attention for Appalachian State's upset victory, the first win by a Division I FCS team over a ranked Division I FBS team since Division I was split into two subdivisions by the NCAA in 1978. On September 2, the network aired its first women's sports event (a soccer match between Syracuse University and Michigan State) and its first men's non-revenue sports event (a soccer match between UCLA and Indiana).
The new network suffered from limited carriage on its launch, as it was only carried by two major television providers. By the following year, the network had reached its goal to attain carriage on the "extended basic" tiers of cable providers in all Big Ten markets. While no specifics were revealed, Fox increased its stake in the Big Ten Network to 51% in June 2010, acquiring majority control, using a provision in its contract with the conference. To coincide with the 2011 college football season, the network unveiled a new logo that made "BTN" the primary name of the channel, and introduced a new TV Everywhere service known as "BTN2Go," which offers live streaming of BTN telecasts and other programming through a web browser or mobile app. The service was initially available to subscribers of Time Warner Cable, Charter Communications, DirecTV and Dish Network.
BTN and Dish Network were involved in a dispute leading up to the expiration of the satellite provider's contract with the network in August 2012, a day before that year's college football season began. The network was temporarily blacked out for eight days beginning on September 14, giving way to a new agreement that restored BTN on Dish Network on September 22.
In July 2017, as part of a new six-year agreement that made Fox the primary television rightsholder of regular season Big Ten football games, Fox's contract to run BTN was extended through 2032. Concurrent with these agreements, BTN became the de jure owner of the Big Ten's media rights, with all future media rights deals officially being sublicenses of these rights to other broadcasters. This aspect of the agreement became relevant upon the conference's next round of media rights in 2023, when it was discovered that the conference could not sublicense the Big Ten football championship without paying compensation to Fox.
On December 14, 2017, 21st Century Fox announced it would sell a majority of its assets to The Walt Disney Company, owners of ESPN, SEC Network and the then-upcoming ACC Network, in a transaction valued at over $52 billion. 21st Century Fox's stake in the Big Ten Network was not included in the deal and was spun off to the significantly downsized Fox Corporation, along with the Fox network, Fox News, Fox Business, FS1, and FS2. The deal was approved by Disney and Fox shareholders on July 27, 2018, and was completed on March 20, 2019.
The network introduced a new logo on October 23, 2020, coinciding with the start of the delayed 2020 football season. The new logo returns to using "Big Ten Network" as the primary name of the channel, and incorporates the conference's "B1G" wordmark. In 2021, the Big Ten sold part of its stake to Fox. As a result of the sale, the Big Ten's ownership stake decreased to 39% while Fox's increased to 61%.
== Programming ==
=== Original programs ===
Big Ten Tonight – a weekly half-hour show airing on Sundays that is similar to ESPN's SportsCenter; it offers highlights and discussion of Big Ten sporting events. The program is currently anchored by Dave Revsine, Rick Pizzo, Mike Hall and Lisa Cornwell. Other reporters and analysts appear depending on the sport being discussed.
Big Ten Football Saturday – a program airing Saturdays (with pre-game, halftime and post-game editions) during the college football season, which features discussions and highlights of the day's games. It is hosted by Dave Revsine, with analysis provided by Gerry DiNardo (nicknamed by the hosts as "Coach") and Howard Griffith.
Big Ten Tailgate – originally titled Friday Night Tailgate, it is a Friday night program that takes a lighthearted and irreverent look at campus life surrounding the weekend of a Big Ten football game. Its host was Mike Hall, with correspondents Charissa Thompson and Chicago-area improv actors Jordan Klepper, Steve Waltien, and Tim Baltz. Originally 90 minutes, the show was cut to 60 minutes in 2010 and renamed Big Ten Tailgate.
Big Ten Tip-Off Show – a pre-game show airing during the regular season from November to March discussing the day's basketball games; it is hosted by Dave Revsine, with analysis provided by Gene Keady, Jimmy Jackson, Tim Doyle and Kendall Gill.
Coaches Q&A – a program featuring excerpts from the week's press conferences around the conference.
The Big Ten's Greatest Games – a showcase of classic football and basketball games, with some non-essential game action edited out to fit time constraints.
The Big Ten Women's Show – an hour-long Monday night program covering women's sports throughout the conference.
The Big Ten Quad – a weekly sports discussion show with Big Ten legends.
Big Ten Cookout – a half-hour live cooking/tailgate show on Saturday mornings, taking place at a different university campus within the conference each week; it is hosted by Melanie Collins, alongside chefs Julius Russell and former Hell's Kitchen season five contestant Ben Walanka.
The Big Ten's Best – a weekly countdown show with lists of the top 10 Big Ten teams or players in a certain category, such as "best running backs of the 1990s" or "best quarterbacks of the 1980s"; hosted by Charissa Thompson.
Various coach's shows
University Showcase – a program block of non-sports campus produced programs; each school has equal time.
Student U – Game broadcasts produced by university broadcast departments involving students controlling production and play-by-play which are usually seen only on closed-circuit campus cable networks.
Big Ten Frozen Fridays – a hockey pregame show on Friday nights, airing before most Big Ten hockey game telecasts, featuring game previews and highlights from around the Big Ten Conference.
Big Ten Football: Breakdown – a weekly series airing on Tuesdays in which Big Ten coaches and players review the previous week's game footage, with network analysts providing a look at the nuances of the game and what affected the teams' success.
Big Ten Football: Sites & Sounds – a Wednesday night program that includes segments from press conferences, media interviews and the games, as well as other behind-the-scenes footage, hosted from the network's Chicago studios.
Big Ten Football: Behind the Schemes – airing Thursday nights, it is a breakdown featuring the network's resident head coach analysts, analyzing footage of the previous week's games and putting together game plans for games being held that week.
Big Ten Football… & Beyond – a Friday night program previewing the weekend's upcoming games with reports from each Big Ten stadium and a look at key national matchups that could impact the conference postseason.
Big Ten Film Vault – a program, hosted by Dan Dierdorf, showcasing a vintage Big Ten film from the 1940s to the 1970s.
Big Ten Icons – a series highlighting a Big Ten athlete from a wide range of sports and history. Notable subjects include Jesse Owens, Jack Nicklaus and Steve Alford.
The Journey: Big Ten Basketball – a Sunday night documentary-style series following multiple teams each week throughout the conference's 10-week basketball season.
Big Ten Treasure Hunter – a program starring memorabilia collector John Arcand, in which he travels around Big Ten territory and negotiates with fans to buy Big Ten memorabilia.
==== Former ====
Big Ten Hoops: On Campus – an hour-long Friday night program (hosted by Mike Hall, Jim Jackson, Tiffany Simons and Natalie Kane) featuring visits to different campuses each week to showcase the loyalty and tradition behind Big Ten basketball and its fans.
This Week in Big Ten Basketball – a Sunday night program providing comprehensive breakdowns of the week's college basketball action involving Big Ten teams; it was hosted by Dave Revsine, Jim Jackson and Dan Dakich.
=== Sports coverage ===
==== Football ====
Big Ten Network holds national broadcast rights to all of the conference's home football games and televises approximately 35-40 football games each season. Each team is guaranteed to appear a minimum of two times annually on the network, one of which must be a conference game.
==== Basketball ====
The network holds national television rights to all men's basketball conference home games; all non-conference and exhibition games are either televised or streamed on bigtennetwork.com. Each of the conference's men's basketball teams appear on the network approximately 10-20 times a season; it carries approximately 60–65 in-conference match-ups, as well as select tournament contests.
Big Ten Network also televises approximately 50-60 regular season women's basketball games annually, along with approximately nine Big Ten Basketball Tournament games. Each Big Ten team appears on the network approximately 8 to 10 times during the season. The network streams dozens of games live on its website, giving Big Ten women's basketball the most exposure of any conference in the country. The network maintains a set on-site during the Big Ten men's and women's basketball tournaments in Indianapolis, Indiana with anchors providing coverage and analysis of each day's game action during the event.
==== Other sports ====
Big Ten Network televises approximately 25 of the conference's baseball games each spring, with each team making approximately 5 to 8 appearances annually. In 2009, the network televised the entirety of the Big Ten baseball tournament.
In the 2013–14 season, Big Ten Network expanded its coverage of college ice hockey due to the Big Ten Conference beginning to officially sponsor the sport, broadcasting 27 games as well as the Big Ten tournament, and adding associated studio programs. The Big Ten Network televises more than 170 NCAA-sponsored Olympic events in both men's and women's sports such as hockey, soccer, volleyball, track and field, swimming and diving.
==== Esports ====
In April 2016, it was announced that BTN and Riot Games would organize a collegiate League of Legends event, the BTN Invitational, between teams representing Michigan State and Ohio State. The event was held at PAX East in Boston, alongside the semi-finals and finals of Riot's own college championship. Michael Sherman, head of Riot's collegiate competitions, stated that "there was actually a student group at Penn State that was looking to run a Big Ten tournament, and the Big Ten Network got word of it and through that we actually connected to each other and saw that we had a lot of interest in sort of building an event together."
In January 2017, BTN and Riot announced that it would hold a season of conference competition between teams representing 12 Big Ten schools, culminating with a championship whose winner would receive an invite to Riot's college championship. The competition was primarily streamed online, but later rounds were televised on BTN. In January 2018, Riot and BTN announced an extension of the partnership through 2019, complete with scholarship funds for teams ($35,000/team yearly) and the addition of Penn State and Nebraska, bringing all full conference members to the partnership. ESL became a partner with BTN's competition for 2019.
==== Tournament and championship events ====
The Big Ten Network televises 21 Big Ten Championships and Tournaments, including baseball, men's and women's basketball, men's and women's cross country, field hockey, men's and women's golf, men's and women's gymnastics, rowing, men's and women's soccer, men's and women's swimming and diving, men's and women's tennis, men's and women's indoor and outdoor track and field, and wrestling.
In February 2017, the NCAA announced that Big Ten Network had acquired rights to the Women's Frozen Four—the NCAA national championship of Women's ice hockey, beginning in 2017 under a four-year deal. BTN broadcast the finals in 2017, and began airing the semi-finals beginning 2018. ESPN (who televises all other NCAA national championships outside of men's basketball) took over the rights in 2021.
== On-air staff ==
=== Current on-air staff ===
==== Football ====
==== Basketball ====
==== Baseball ====
==== Ice hockey ====
Ben Clymer - ice hockey analyst
Paul Caponigri - ice hockey analyst
Billy Jaffe - ice hockey analyst
Fred Pletsch - ice hockey analyst and play-by-play announcer
Aaron Ward - ice hockey analyst
==== Volleyball ====
Grace Loberg - volleyball analyst
Telly Hughes - volleyball studio reporter
Emily Ehman - volleyball color analyst
==== Wrestling ====
Shane Nebl Sparks - wrestling announcer
Jim Gibbons - wrestling announcer
Tim Johnson - wrestling announcer
=== Former on-air staff ===
Thom Brennaman - lead play-by-play announcer (later lead television voice for the Cincinnati Reds on Fox Sports Ohio, announcer for MLB on Fox, NFL on Fox and NFL Europe on Fox and FX)
Matt Devlin - play-by-play announcer (now television play-by-play announcer for the Toronto Raptors)
Cal Eldred - baseball analyst (now pitching coach for the Kansas City Royals)
Rebecca Haarlow - sideline reporter for football and baseball (now with MSG Network and NBA on TNT)
Ben Holden - play-by-play announcer (now college hockey announcer for Fox Sports Detroit, Comcast Local and CBS Sports Network)
Gus Johnson - play-by-play announcer (now NFL and Pac-12 announcer for Fox Sports)
Wayne Larrivee - play-by-play announcer (longtime Big Ten play-by-play announcer; now does play-by-play for the Packers Radio Network)
Charissa Thompson - sideline reporter (now with Fox Sports and later co-host of entertainment news magazine Extra)
Stephanie White - women's basketball analyst (now women's basketball coach for the Connecticut Sun in the WNBA and women's college basketball analyst for ESPN)
Rod Woodson - (now an analyst for the NFL on Westwood One)
Eric Collins - play-by-play announcer (now television play-by-play announcer for the Charlotte Hornets on Fox Sports Southeast and Fox Sports South, occasional announcer for Fox Major League Baseball and Fox College Hoops)
Josh Lewin - play-by-play announcer (longtime play-by-play announcer for Fox Major League Baseball, now college basketball and football play-by-play announcer for the UCLA Bruins)
== Other services ==
=== High definition and 4K ===
Big Ten Network launched in both standard definition and a 720p high definition simulcast. All of its original programs and studio shows are broadcast in HD, as well as nearly all of its sports telecasts and some of its university-produced coaches and campus shows. The channel has produced all of its football games in HD since 2009.
In September 2017, BTN revealed plans to televise selected games from the 2018 Big Ten men's basketball tournament in 4K. Every tournament from then on has included 4K coverage.
=== Streaming platforms ===
BTN2Go was Big Ten Network's TV Everywhere service, which offered online streaming of BTN programming to subscribers of qualifying television providers. Beginning in the 2017–18 season, BTN2Go content became available within Fox Sports' main TV Everywhere app, Fox Sports Go.
In July 2019, due to the Fox Sports Go platform being divested with the Fox Sports Networks as part of the acquisition of 21st Century Fox by Disney, BTN content moved from Fox Sports Go to the main Fox Sports website and apps. The BTN2Go app was transitioned to an app for Big Ten Plus (also stylized B1G+)—a subscription over-the-top streaming service for non-televised Big Ten events.
=== Football overflow feeds ===
On many Saturdays during the football season, the Big Ten Network produces multiple games that air at the same time. The network designates one game as its national game, which is shown on the main channel on satellite providers. The remaining games air on the main channel in the local markets and on the extra overflow channels in the remaining markets. Most cable systems inside the Big Ten's eight states offer these Big Ten Network overflow or "out-of-market" feeds to provide additional football games. All of the additional overflow feeds for the network's various football telecasts are available nationally on DirecTV and Dish Network; and regionally on AT&T U-verse, many Comcast systems, and several other cable providers. Some providers only carry the overflow feeds in standard definition, and providers outside of the U.S. provide them in out-of-market subscription packages. Since 2019, all Big Ten Network football games are also available via the Fox Sports app, regardless of geography and wireline restrictions.
== Carriage ==
Carriage negotiations with several major cable providers were stalled for several months due to their interest in placing the channel on a sports tier, with the providers only wanting to charge customers who wanted to subscribe to it; Big Ten Network, however, wanted providers to carry it on their extended basic tiers so that subscribers would not have to pay an extra fee to receive the network. Comcast, the largest cable provider in the U.S., reached a deal to carry the network on June 19, 2008, and began adding the channel to its systems on August 15, 2008; other major providers in states with universities in the Big Ten Conference (including Charter Communications and Time Warner Cable) would soon follow suit. Additionally, the Big Ten Network is an associate member of the Caribbean Cable Cooperative.
=== Carriage agreements ===
DirecTV and AT&T U-verse were the only major television providers to carry the channel at launch; however, 250 smaller cable systems (including those that are members of the National Cable Television Cooperative) also carried BTN at launch. Dish Network added the channel one week later in early September 2007.
During the late summer and early fall of 2008, several larger cable companies within states where a Big Ten university was located reached agreements to carry Big Ten Network, expanding its carriage to every major cable provider in those areas. On August 23, 2008, Mediacom (which serves most of Iowa, including Iowa City, home of Big Ten member the University of Iowa) was reported by Cedar Rapids newspaper The Gazette to have reached an agreement in principle to carry the network, according to sources close to the negotiations; the deal was announced on August 28.
On August 25, Time Warner Cable and the Big Ten Network announced in a joint statement that the two parties had reached a carriage deal. Time Warner Cable carries the channel on its expanded basic service in the eight states where Big Ten universities are located. These deals were later followed by carriage agreements with Charter Communications on August 26 and Cox Communications on August 28. Also on August 26, 2008, The Indianapolis Star reported that Bright House Networks was "very close to a deal" to carry the channel. On September 30, Broadstripe added the channel to its systems in Michigan.
On June 23, 2009, Cablevision added the channel in standard and high definition to its Optimum systems. The following month on August 25, the network reached a carriage agreement with Atlantic Broadband, which added the network's standard and high definition feed on September 1, 2009, to its systems in central and northern Pennsylvania. On December 28, 2009, Charter Communications reached an agreement to provide the network to its systems in St. Louis and Southern Illinois on the provider's expanded basic-digital tier.
On July 24, 2017, the Big Ten Network announced they would be available on Hulu Live TV and YouTube TV.
On April 11, 2018, Comcast's Xfinity dropped Big Ten Network in a number of "out-of-market" states that fall outside of the conference's direct geographical footprint, with other selected markets dropping the network on May 10, 2018. This notably included the 23,000 Comcast customers in New York, even though the recent addition of Rutgers University in New Jersey had been used to market the conference and BTN in neighboring New York City. On August 24, 2018, Comcast reached an agreement to renew its carriage of BTN, and stated that the channel would be reinstated on its sports and entertainment tier outside of the Big Ten's footprint.
In August 2024, issues between BTN and Comcast would emerge again with the Big Ten's expansion to the west coast. With California, Oregon, and Washington now part of the conference's in-market footprint, Fox sought higher carriage fees and basic tier carriage for the channel. However, Comcast wished to continue offering the channel on its "More Sports and Entertainment" tier. As a result, BTN began to black out event telecasts involving the Oregon Ducks, UCLA Bruins, USC Trojans, and Washington Huskies for Xfinity subscribers in the regions. The blackout ended on October 10, 2024, when Fox reached an agreement for in-market carriage of BTN on Comcast's basic tier in western markets.
=== Canadian carriage ===
In September 2008, the Canadian Radio-television and Telecommunications Commission approved a request by Shaw Communications to allow carriage of BTN in Canada on its specialty television services. While CTVglobemedia filed a concern that it would create undue competition (which is prohibited between foreign and domestic services) with its mainstream sports channel TSN, the CRTC determined that Big Ten Network's specific scope in coverage did not create undue competition with domestic mainstream sports services such as TSN. The network became available to Shaw Cable customers on December 3, 2008. The channel became available on Rogers Cable systems in Ontario and New Brunswick on October 22, 2009.
As of 2020, the channel is carried by Cogeco, Eastlink, Rogers Cable, Shaw Cable, Shaw Direct and VMedia.
== References ==
== External links ==
Official website – Big Ten Network
Official website – Big Ten Conference | Wikipedia/Big_Ten_Network |
This article lists notable industrial disasters, which are disasters caused by industrial companies, either by accident, negligence or incompetence. They are a form of industrial accident where great damage, injury or loss of life are caused.
Other disasters can also be considered industrial disasters, if their causes are rooted in the products or processes of industry. For example, the Great Chicago Fire of 1871 was made more severe due to the heavy concentration of lumber industry facilities, wood houses, and fuel and other chemicals in a small area.
The Convention on the Transboundary Effects of Industrial Accidents is designed to protect people and the environment from industrial accidents. The Convention aims to prevent accidents from occurring, to reduce their frequency and severity, and to mitigate their effects. The Convention addresses primarily industrial accidents in one country that affect the population and the environment of another country.
== Defense industry ==
October 12, 1654: Delft Gunpowder Explosion, Delft, The Netherlands. A gunpowder depot in the city center exploded, killing more than 100 people and destroying a large part of the center of Delft.
July 14, 1847: Faversham Guncotton Explosion. Faversham, United Kingdom. 18 killed during manufacture of guncotton.
September 17, 1862: Allegheny Arsenal explosion, Lawrenceville, Pennsylvania. Three individual explosions killed a total of 78 workers. The largest civilian disaster during the American Civil War.
June 17, 1864: 1864 Washington Arsenal explosion in Washington, D.C. After flares exploded, some of them entered a nearby building which blew up when a barrel of gunpowder exploded, killing 21 women and injuring many others.
May 25, 1865: Mobile magazine explosion, Mobile, Alabama. A warehouse containing 200 tons of powder and shells exploded, killing 300 and causing over $720,000 in property damage.
August 11, 1871: Stowmarket Guncotton Explosion. Stowmarket, United Kingdom. During manufacture of guncotton, two explosions killed 28 and injured 70.
April 2, 1916: Faversham Munitions Explosion. Faversham, United Kingdom. 200 tons of TNT caught fire, killing 115 people.
December 6, 1917: The Halifax Explosion. Halifax, Canada. A ship loaded with about 9,000 tons of high explosives destined for France caught fire as a result of a collision in Halifax harbour, and exploded. The explosion killed about 2,000 and injured about 9,000.
July 1, 1918: National Shell Filling Factory explosion, Chilwell, United Kingdom. 134 workers were killed and 250 injured when eight tons of TNT detonated at a munitions factory in the village of Chilwell, now a suburb of Nottingham.
October 4, 1918: T. A. Gillespie Company Shell Loading Plant explosion. An ammunition plant in Sayreville, New Jersey, exploded, killing approximately 100 people, destroying 300 buildings and causing $18 million in damages.
March 1, 1924: 1924 Nixon Nitration Works disaster. A plant for processing ammonium nitrate in Edison, New Jersey, exploded, killing 24 people, injuring 100 and destroying several buildings.
July 10, 1926: Picatinny Arsenal explosion, New Jersey. 600,000 lbs. of explosives detonated as a result of a lightning strike. 187 of the 200 buildings in the arsenal were destroyed and debris was found as far as 20 miles away. Damage totaled close to one billion in 2022 dollars.
April 14, 1944: Bombay docks explosion. The British freighter SS Fort Stikine, carrying 1,400 tons of explosives and 240 tons of weapons (torpedoes and mines), caught fire due to improper storage, resulting in two massive explosions that killed some 800–1,300 people. The explosion also led to fires in many parts of the city, and the docks needed months of repair work to function again.
July 17, 1944: Port Chicago Disaster. A munitions explosion that killed 320 people occurred at the Port Chicago Naval Magazine in Port Chicago, California.
November 27, 1944: RAF Fauld Explosion. An explosion of between 3,500 and 4,000 tonnes of ordnance in an underground munitions store killed 70 people.
August 9, 1965: Searcy missile silo fire, Arkansas. 53 contract workers were killed during a fire at a Titan missile silo. The cause of the fire was determined to be a welding rod damaging a hydraulic hose carrying Aerozine 50 fuel. This allowed the hypergolic fuel vapors to spread throughout the silo, which were then ignited by an open flame.
April 13, 1976: Lapua Cartridge Factory explosion. An explosion in a munitions factory in Lapua, Finland, kills 40 workers.
May 5, 1983: "6 Martie" Ammunition Factory in Zărnești, Romania. An explosion in the production facilities inside the factory completely destroyed two buildings, killing 37 people and injuring more than 300.
April 10, 1988: Ojhri Camp, Rawalpindi, Pakistan. A military storage center exploded, killing more than 90 people.
July 11, 2011: Evangelos Florakis Naval Base explosion, Cyprus. The disaster occurred when 98 containers of gunpowder exploded; 13 people were killed, among them the captain of the base, three commanders, twin brothers who were serving there as marines, and six firefighters. 62 people were injured and the explosion knocked out the island's power station for days.
== Energy industry ==
March 1928: The St. Francis Dam in the U.S. state of California failed due to poor engineering and a lack of understanding of the soil conditions. At least 431 people died in the subsequent flood, in what is considered to have been one of the worst American civil engineering disasters of the 20th century and the third-greatest loss of life in California history.
October 1957: The Windscale fire, the worst nuclear accident in the United Kingdom's history, released substantial amounts of radioactive contamination into the surrounding area at Windscale, Cumberland (now Sellafield, Cumbria). The incident led to an estimated 100 to 240 cancer deaths.
May 1962: The Centralia mine fire in the U.S. state of Pennsylvania began due to a fire on the surface accidentally igniting the mine's shallow coal vein, forcing the gradual evacuation of the Centralia borough. The fire continues to burn underneath the abandoned settlement.
October 1963: The Vajont Dam overflow, caused by a massive landslide, leading to the complete destruction of several villages and towns, and 1,917 deaths in northern Italy. The accident was anticipated by numerous warnings and signs of dangers disregarded by the electrical company and government.
March 4, 1965: The Natchitoches explosion. A 32-inch gas transmission pipeline north of Natchitoches, Louisiana, belonging to the Tennessee Gas Pipeline, exploded and burned as a result of stress corrosion cracking, killing 17 people. At least 9 others were injured, and 7 homes 450 feet from the rupture were destroyed. The same pipeline also had an explosion on May 9, 1955, just 930 feet (280 m) from the 1965 failure.
March 1967: The Torrey Canyon supertanker was shipwrecked off the west coast of Cornwall, England, causing an environmental disaster. This was the first major oil spill at sea.
August 1975: The Banqiao Dam failed in the Henan Province of China due to extraordinarily heavy precipitation from the remnants of Typhoon Nina and poor construction quality of the dam, which was built during the Great Leap Forward. The flood immediately killed over 100,000 people, and another 150,000 died of subsequent epidemic diseases and famine, bringing the total death toll to around 250,000 and making it the worst technical disaster ever.
March 16, 1978: The Amoco Cadiz, a VLCC owned by the company Amoco, ran aground and sank near the northwest coast of France, spilling 68,684,000 US gallons of crude oil (1,635,000 barrels), at the time the largest oil spill from an oil tanker in history.
January 8, 1979: The Whiddy Island disaster, also known as the Betelgeuse incident, occurred around 1:00 am, when the oil tanker Betelgeuse exploded in Bantry Bay, at the offshore jetty for the oil terminal at Whiddy Island, Ireland. The explosion and resulting fire claimed the lives of 50 people (42 French nationals, seven Irish nationals, and one British national).
March 28, 1979: Three Mile Island accident. Partial nuclear meltdown near Harrisburg, Pennsylvania. Mechanical failures in the non-nuclear secondary system, followed by a stuck-open pilot-operated relief valve in the primary system, allowed large amounts of reactor coolant to escape. Plant operators initially failed to recognize the loss of coolant, resulting in a partial meltdown. The reactor was brought under control but not before up to 481 PBq (13 million curies) of radioactive gases were released into the atmosphere.
June 3, 1979: Ixtoc oil spill. The Ixtoc I exploratory oil well suffered a blowout resulting in the third-largest oil spill and the second-largest accidental spill in history.
March 1980: The Alexander L. Kielland, a Norwegian semi-submersible drilling rig, capsized while working in the Ekofisk oil field, killing 123 people.
November 20, 1980: A Texaco oil rig drilled into a salt mine transforming Lake Peigneur, a freshwater lake before the accident, into a saltwater lake.
February 15, 1982: Newfoundland, Canada. The mobile offshore oil rig Ocean Ranger was struck by a rogue wave off the coast of Newfoundland, Canada and sank with the loss of all 84 crew.
December 19, 1982: The Tacoa disaster, an immense boilover from a fuel oil tank within the premises of a thermal power plant. It caused about 150 fatalities, including firefighters, media workers and bystanders.
January 7, 1983: An explosion in Newark, New Jersey, was felt as far as 100–130 miles from the epicenter but claimed only one life and injured 22–24 people.
July 23, 1984: Romeoville, Illinois, Union Oil refinery explosion killed 19 people.
November 19, 1984: San Juanico Disaster. A series of boiling liquid expanding vapor explosions (BLEVEs) at a liquefied petroleum gas tank farm killed more than 500 and injured thousands in San Juan Ixhuatepec, Mexico.
April 26, 1986: Chernobyl disaster. At the Chernobyl nuclear power plant in Pripyat, Soviet Union, (modern-day Ukraine) a test on reactor number four went out of control, resulting in a nuclear meltdown. The ensuing steam explosion and radiation killed up to 50 people with estimates that there may be between 4,000 and several hundred thousand additional cancer deaths over time, although this has not yet been observed and was estimated based on the contested linear no-threshold model. Nuclear fallout could be detected as far away as Canada. The Chernobyl Exclusion Zone, covering portions of Belarus and Ukraine surrounding Pripyat, remains contaminated and mostly uninhabited. Pripyat itself was totally evacuated and remains as a ghost town, although teeming with wildlife.
May 5, 1988: Norco, Louisiana, Shell Oil refinery explosion. Hydrocarbon gas escaped from a corroded pipe in a catalytic cracker and was ignited. Louisiana State Police evacuated 2,800 residents from nearby neighborhoods. Seven workers were killed and 42 injured. The total cost arising from the Norco blast is estimated at US$706 million.
July 6, 1988: Piper Alpha disaster. An explosion and resulting fire on a North Sea oil production platform killed 167 men. The total insured loss was about US$3.4 billion. To date it is rated as the world's worst offshore oil disaster in terms both of lives lost and impact to industry.
March 24, 1989: Exxon Valdez oil spill. The Exxon Valdez, an oil tanker bound for Long Beach, California, hit Prince William Sound's Bligh Reef, dumping an estimated minimum 10.8 million US gallons (40.9 million litres, or 250,000 barrels) of crude oil into the sea. It is considered to be one of the most devastating human-caused environmental disasters ever to occur. 100,000 to as many as 250,000 seabirds died, as well as at least 2,800 sea otters, approximately 12 river otters, 300 harbor seals, 247 bald eagles, and 22 orcas, and billions of salmon and herring eggs were destroyed. Overall reductions in population have been seen in various ocean animals, including stunted growth in pink salmon populations. Sea otters and ducks also showed higher death rates in following years, partially because they ingested prey from contaminated soil and also from ingestion of oil residues on their hair/feathers due to grooming.
July 5, 1990: 1990 ARCO explosion. An explosion at a petrochemical plant in Channelview, Texas, killed 17 people and injured five others.
April 22, 1992: 1992 Guadalajara explosions. A leak of gasoline into the sewer system caused 12 explosions in downtown Guadalajara, Mexico, between 10:05 and 11:16 a.m., killing 206–252 people and injuring 1,800. Eight kilometers of streets were destroyed or seriously damaged.
March 23, 2005: Texas City refinery explosion. An explosion occurred at a BP refinery in Texas City, Texas. It is the third largest refinery in the United States and one of the largest in the world, processing 433,000 barrels of crude oil per day and accounting for three percent of that nation's gasoline supply. Over 100 were injured, and 15 were confirmed dead, including employees of Jacobs, Fluor and BP. BP has since accepted that its employees contributed to the accident. Several level indicators failed, leading to overfilling of a knockout drum, and light hydrocarbons concentrated at ground level throughout the area. A nearby running diesel truck set off the explosion.
July 27, 2005: Mumbai High fire. A major fire struck ONGC's Mumbai High North offshore complex, located approximately 100 km off Mumbai, Maharashtra, India, when a support vessel collided with the production platform. The fire caused 22 fatalities and extensive material damage.
December 11, 2005: Hertfordshire Oil Storage Terminal fire. A series of explosions at the Buncefield oil storage depot, described as the largest peacetime explosion in Europe, devastated the terminal and many surrounding properties. There were no fatalities. Total damages have been forecast as £750 million.
December 19, 2007: T2 Laboratories explosion and fire. Runaway reactor for production of gasoline additives explodes at Jacksonville, Florida, killing four.
December 22, 2008: Kingston Fossil Plant coal fly ash slurry spill. 1.1 billion gallons of coal ash were released when a dike ruptured at an ash storage pond at the Tennessee Valley Authority's Kingston Fossil Plant in Roane County, Tennessee.
August 17, 2009: Sayano–Shushenskaya power station accident. Seventy-five people were killed at a hydroelectric power station when a turbine failed. The failed turbine had been vibrating for a considerable time. Emergency doors to stop the incoming water took a long time to close, while a self-closing lock would have stopped the water in minutes.
February 7, 2010: 2010 Connecticut power plant explosion. A large explosion occurred at a Kleen Energy Systems 620-megawatt, Siemens combined cycle gas- and oil- fired power plant in Middletown, Connecticut, United States. Preliminary reports attributed the cause of the explosion to a test of the plant's energy systems. The plant was still under construction and scheduled to start supplying energy in June 2010. The number of injuries was eventually established to be 27. Five people died in the explosion.
April 20, 2010: Deepwater Horizon oil spill in the Gulf of Mexico. Eleven oil platform workers died in an explosion and fire that resulted in a massive oil spill in the Gulf of Mexico, considered the largest offshore spill in US history.
March 11, 2011: As a result of the 2011 Tōhoku earthquake and tsunami:
Fukushima Daiichi nuclear accident in Japan. Regarded as the largest nuclear disaster since the Chernobyl disaster, there were no direct deaths but a few of the plant's workers were severely injured or killed by the disaster conditions resulting from the earthquake.
Fujinuma Dam failure, Fukushima Prefecture, Japan. The dam failed 20 to 25 minutes after the earthquake as the nearly full reservoir overtopped the dam's crest. Eight people were killed.
Ichihara gas tank fire, Chiba Prefecture, Japan. A fire in natural gas containers at the Ichihara oil refinery. Six people were injured, and storage tanks were destroyed.
February 24, 2012: Köprü Dam in Adana Province, Turkey. A hydroelectric dam whose diversion tunnel seal was breached. 97 million cubic meters of water flooded the area downstream of the dam. The accident and flood killed 10 workers.
October 29, 2012: Hurricane Sandy caused a Consolidated Edison power plant to explode, causing a blackout in most of midtown Manhattan. The blue light emitted from the arc made places as far as Brooklyn glow. No person was killed or injured.
July 6, 2013: Lac-Mégantic, Quebec Canada. Lac-Mégantic derailment. Forty-seven people were killed when there was a derailment of an oil shipment train. The oil shipment caught fire and exploded, destroying more than thirty buildings. It was the fourth-deadliest rail accident in Canadian history.
July 23, 2018: Laos dam collapse. Part of a hydroelectric dam system under construction collapsed in Champasak Province, Laos. The collapse led to widespread destruction and homelessness. 40 people were confirmed dead, at least 98 more were missing, and 6,600 others were displaced.
June 21, 2019: Philadelphia Refinery Explosion. An explosion at Philadelphia Energy Solutions' refinery destroyed the alkylation unit, where crude oil is converted to high octane gas, and led to the planned closure of the financially troubled plant. While the explosion and fire only led to a few minor injuries, it was catastrophic for the business.
== Food industry ==
17 October 1814: The London Beer Flood was an accident at Meux & Co's Horse Shoe Brewery, London, on 17 October 1814. It took place when one of the 22-foot-tall (6.7 m) wooden vats of fermenting porter burst. The pressure of the escaping liquid dislodged the valve of another vessel and destroyed several large barrels: between 128,000 and 323,000 imperial gallons (580,000–1,470,000 L; 154,000–388,000 US gal) of beer were released in total.
18 June 1875: The Dublin whiskey fire took place on 18 June 1875 in the Liberties area of Dublin. It lasted a single night but killed 13 people, and resulted in €6 million worth of damage in whiskey alone (adjusted for inflation). People drank from the 6 inches (150 mm) deep river of whiskey that is said to have flowed as far as the Coombe. None of the fatalities were caused by smoke inhalation, burns, or any other form of direct contact with the fire; all were attributed to alcohol poisoning.
May 2, 1878: Great Mill Disaster. Six flour mills in Minneapolis were destroyed by a flour dust explosion and subsequent fire originating in the Washburn A Mill, killing 18. (A dust explosion is the rapid combustion of fine combustible particles suspended in the air within an enclosed space.) The mill was rebuilt with updated technology, and the explosion led to new safety standards in the milling industry.
January 15, 1919: Great Molasses Flood. A large molasses tank in Boston, Massachusetts burst and a wave of molasses rushed through the streets at an estimated 35 mph (56 km/h), killing 21 and injuring 150. The event has entered local folklore, and residents claim that on a hot summer day, the area still smells of molasses.
August 9, 1919: The Port Colborne explosion at Port Colborne, Ontario was a dust explosion in the Dominion grain elevator. The blast killed 10 and seriously injured 16 more.
February 6, 1979: The Roland Mill, located in Bremen, West Germany, was destroyed by a flour dust explosion, killing 14 and injuring 17.
September 3, 1991: Hamlet chicken processing plant fire in Hamlet, North Carolina, where locked doors trapped workers in a burning processing plant, causing 25 deaths.
September 3, 1998: Grain elevator explosion in Haysville, Kansas. A series of dust explosions in a large grain storage facility resulted in the deaths of seven people.
May 9, 2000: The Wild Turkey Distillery fire – On May 9, 2000, a fire destroyed a seven-story aging warehouse at the distillery in Anderson County, Kentucky, which contained more than 17,000 wooden barrels of whiskey. Burning whiskey flowed from the warehouse, setting the woods on fire. Firefighters saved Lawrenceburg's water treatment plant from destruction; however, an estimated 20% of the whiskey flowed into the Kentucky River. The contamination required the temporary shutdown of the water treatment plant, and officials ordered water usage restrictions. Businesses and schools were closed because of the water shortage. The alcohol spill also depleted the oxygen in the river, killing an estimated 228,000 fish along a 66-mile stretch. The EPA and the Coast Guard's Gulf Strike Team aerated the river using equipment mounted on barges. The company paid $256,000 to the Kentucky Department of Fish and Wildlife in an effort to restore the fish population in the river.
February 7, 2008: The 2008 Georgia sugar refinery explosion in Port Wentworth, Georgia, United States. Fourteen people were killed and 42 injured when a dust explosion occurred at a sugar refinery owned by Imperial Sugar.
March 12, 2008: Morin-Heights, Quebec, Canada. A roof collapse in the Gourmet du Village bakery warehouse killed three workers.
June 9, 2009: The 2009 ConAgra Foods plant explosion, when a natural gas explosion at the ConAgra Foods Slim Jim production facility in Garner, North Carolina, United States killed four people and triggered an ammonia leak.
January 2013: 2013 Brunost blaze, 27 tonnes of goat cheese caught fire when the truck carrying it crashed in a tunnel in Tysfjord Municipality, Norway.
September 2013: The Honolulu molasses spill – In September 2013, 1,400 tons of molasses spilled into Honolulu Harbor. The spill was discovered on 9 September 2013. It was caused by a faulty pipe, for which the shipping company Matson Navigation Co. took responsibility. Molasses is an unregulated product, and neither Matson nor government officials had a contingency plan to respond to a molasses spill. Natural currents and weather were expected to eventually dilute and flush the molasses out of the harbor and a nearby lagoon.
23 April 2017: The Pepsi fruit juice flood was a flood of 176,000 barrels (28 million litres; 7.4 million US gallons) of fruit and vegetable juices into the streets of Lebedyan, Russia, and the Don River, caused by the collapse of a PepsiCo warehouse.
January 28, 2021: The 2021 Georgia poultry plant accident in Gainesville, Georgia, United States. Six people were killed by asphyxiation when a liquid nitrogen leak occurred at a poultry processing plant owned by Foundation Food Group.
== Manufacturing industry ==
January 10, 1860: Pemberton Mill was a large factory in Lawrence, Massachusetts that collapsed without warning. An estimated 145 workers were killed and 166 injured.
March 20, 1905: Grover Shoe Factory disaster. A boiler explosion, building collapse and fire killed 58 people and injured 150 in Brockton, Massachusetts.
October 6, 1907: Standard Steel Car Company was a large pressed steel car company in Butler, Pennsylvania. A ladle containing 9,000 lbs. of molten steel exploded in the plant, killing 4 workers instantly, fatally wounding 20 others, and seriously injuring 10 more.
March 25, 1911: Triangle Shirtwaist Factory fire in New York City. This was a major industrial disaster in the US, causing the death of more than 100 garment workers who either died in the fire or jumped to their deaths. The fire led to legislation requiring improved factory safety standards and helped spur the growth of the International Ladies' Garment Workers' Union, which fought for better working conditions for sweatshop workers in that industry.
February 20, 1947: O'Connor Plating Works disaster. A chemical explosion killed seventeen people in Los Angeles.
May 27, 1983: Benton fireworks disaster. An explosion at an illegal fireworks operation on a farm near Benton, Tennessee killed eleven, injured one, and inflicted damage within a radius of several miles.
November 23, 1984: MESIT factory collapse. Part of a factory in Uherské Hradiště, Czechoslovakia collapsed, killing 18 workers and injuring 43. The accident was kept secret by the communist regime; nevertheless, the news broke through the Iron Curtain and reached the Western media.
December 3, 1984: The Bhopal disaster in India is one of the largest industrial disasters on record. A runaway reaction in a tank containing poisonous methyl isocyanate caused the pressure relief system to vent large amounts to the atmosphere at a Union Carbide India Limited plant. Estimates of the death toll range from 3,700 to 16,000. The disaster has caused severe health problems for the region's human and animal populations to the present day.
June 25, 1985: The Aerlex Fireworks plant explosion in Hallett, Oklahoma killed 21 people after a chain-reaction occurred.
May 4, 1988: PEPCON disaster, Henderson, Nevada. A massive fire and explosions at a chemical plant killed two people and injured over 300.
May 10, 1993: Kader Toy Factory fire. A fire started in a poorly built factory in Thailand. Exit doors were locked and the stairwell collapsed. 188 workers were killed, mostly young women.
May 13, 2000: Enschede fireworks disaster. A fire and explosion at a fireworks depot in Enschede, Netherlands resulted in 24 deaths and another 947 were injured. About 1,500 homes were damaged or destroyed. The damage was estimated to be over US$300 million in insured losses.
January 29, 2003: West Pharmaceutical Services explosion. The West Pharmaceutical Services syringe manufacturing facility was subject to a dust explosion which killed six people.
November 3, 2004: Seest fireworks disaster. N. P. Johnsens Fyrværkerifabrik fireworks factory exploded in Seest, a suburb of Kolding, Denmark. One firefighter died; seven from the rescue team as well as 17 locals were injured. In total 2,107 buildings were damaged by the explosion, with the cost of the damage estimated at €100 million.
December 6, 2006: Falk Corporation Explosion. A gas leak triggered a large explosion and ensuing fire at a gear manufacturing facility in Milwaukee, Wisconsin. Three were killed and 47 injured, with several of the buildings at the facility being leveled.
April 18, 2007: Qinghe Special Steel Corporation disaster. A ladle holding molten steel separated from the overhead iron rail, fell, tipped, and killed 32 workers, injuring another 6.
February 1, 2008: Istanbul fireworks explosion. An unlicensed fireworks factory exploded accidentally, leaving by some reports at least 22 people dead and at least 100 injured.
September 11, 2012: Karachi, Pakistan, 289 people died in a fire at the Ali Enterprises garment factory, which made ready-to-wear clothing for Western export.
November 24, 2012: Tazreen Fashions fire. A seven-story factory fire outside Dhaka, the capital of Bangladesh, killed at least 112 people, 12 of them by jumping out of windows to escape the blaze.
April 24, 2013: 2013 Savar building collapse. An eight-story factory building collapsed on the outskirts of Dhaka, the capital of Bangladesh, and killed 1129 people. The building contained five garment factories that were manufacturing clothing for the western market.
October 26, 2017: Tangerang fireworks disaster. At around 08:30 PM local time, a fireworks factory at Kosambi, Tangerang, exploded, shattering windows as far as 4 kilometres away and igniting a massive fire inside the factory. A second explosion occurred 3 hours later. There were 103 workers inside the factory at the time of the explosion; 49 were killed and 46 were injured.
== Mining industry ==
December 12, 1866: Oaks Colliery Explosion in Barnsley, West Riding of Yorkshire, United Kingdom. Caused by the explosion of firedamp. It was the worst mining accident in England, with a death toll of 361.
September 6, 1869: Avondale Mine Disaster, Plymouth Township, Pennsylvania. A massive fire at the Avondale Colliery caused the death of 110 workers. It was the greatest mine disaster to that point in American history.
February 16, 1883: Diamond Mine Disaster in Diamond, Illinois, United States. 74 people died, including 6 children.
June 28, 1896: Twin Shaft disaster, Pittston, Pennsylvania. A massive cave-in killed 58 coal miners at the Newton Colliery.
March 10, 1906: Courrières mine disaster, Courrières, France. 1,099 people died, including children, in the worst mine accident in Europe.
December 6, 1907: Monongah mining disaster, Monongah, West Virginia. 362 people officially died. The worst industrial accident in American history.
October 14, 1913: Senghenydd Colliery Disaster, Senghenydd, Wales. The worst mining accident in the United Kingdom. 439 workers died.
June 19, 1914: Hillcrest mine disaster, Hillcrest, Alberta, Canada. 189 workers died due to an explosion within the mine or from exposure to toxic fumes as a result of the same.
December 15, 1914: The Mitsubishi Hōjō mine disaster, Kyushu, Japan. A gas explosion at the Hōjō (Hojyo) coal mine killed 687. It was the worst mining accident in Japan.
September 10, 1918: Protection Island mining disaster. A frayed hoisting cable caused an elevator car carrying miners to plunge 300 feet, killing 16 miners on Protection Island near Nanaimo, British Columbia, Canada.
April 27, 1922: Lupeni mine disaster. A methane explosion occurred at the Aurelia Mine in Lupeni, Romania, killing 82 miners, and leaving 62 widows and 124 orphans.
September 22, 1934: Gresford Disaster. An explosion and underground fire killed 261 men at Gresford Colliery, near Wrexham, UK.
1940s–1966: Wittenoom Mine Disaster. Asbestos mining in the Pilbara, Western Australia, exposed workers and residents to deadly fibers, leading to widespread illness and contamination. More than 2,000 deaths have been attributed to asbestos-related disease. It is the worst industrial accident in Australian history.
April 26, 1942: Benxihu Colliery disaster, Benxi, Liaoning Province, in the Imperial Japanese puppet state of Manchukuo. 1,549 workers died, making this the worst coal mine accident ever in the world.
August 8, 1956: Marcinelle mining disaster. An underground fire killed 262 workers, most of whom were Italian immigrants, in the Belgian town of Marcinelle.
October 23, 1958: Springhill mining disaster, Springhill, Nova Scotia, Canada. A "bump," or underground earthquake caused by a collapse, killed 75 miners. The other 99 miners were rescued by a recovery effort. Previous disasters had occurred at the same mine in 1891 and 1956.
January 22, 1959: Knox Mine Disaster, Jenkins Township, Pennsylvania. Illegally undermining the Susquehanna River resulted in a coal mine flood that killed 12.
January 21, 1960: Coalbrook mining disaster at the Clydesdale Colliery near Sasolburg, Orange Free State, South Africa. 435 miners died. It was the worst mining accident in South Africa.
May 9, 1960: Laobaidong mining disaster. A methane gas explosion in the Laobaidong coal mine at Datong in the Shanxi province of China killed 684.
November 9, 1963: Mitsui Miike Coal Mine disaster. An explosion caused by the ignition of coal dust at the Miike coal mine in Kyushu, Japan. 458 people were killed by the explosion or by carbon monoxide poisoning. 839 others were injured.
May 28, 1965: Dhanbad coal mine disaster, Jharkhand, India. Over 300 miners killed.
May 1, 1966: Vratsa dam failure, Zgorigrad, People's Republic of Bulgaria. A copper tailings dam failed and flooded the city of Vratsa and the nearby village of Zgorigrad. Between 107 and 480 people were killed.
October 21, 1966: Aberfan disaster, Aberfan, Wales. A catastrophic collapse of a colliery spoil-tip killed 116 children and 28 adults.
October 30, 1971: Certej dam disaster, Certeju de Sus, Socialist Republic of Romania. A tailings dam failed due to overfilling. The flood destroyed six apartment buildings, a dormitory building and seven individual houses. 89 people were killed.
June 6, 1972: Wankie coal mine disaster, Rhodesia (present-day Zimbabwe). 426 people were killed, making it the country's worst-ever mining disaster.
November 29, 1980: Livezeni coal mine disaster, Petroșani, Socialist Republic of Romania. An explosion in the Livezeni Coal Mine killed 53 (including 15 military) and injured 27. It was the fourth-worst mining disaster in Romania.
July 19, 1985: Val di Stava dam collapse, Stava, near Tesero, Italy. Two tailings dams, used for sedimenting the mud from the nearby Prestavel mine, failed. This resulted in one of Italy's worst disasters, killing 268 people, destroying 63 buildings and demolishing eight bridges.
May 9, 1992: Westray mine disaster, Plymouth, Nova Scotia, Canada. A methane explosion killed all 26 miners. Canada's deadliest mining disaster since 1958.
May 9, 1993: Nambija mine disaster, Nambija, Ecuador. Approximately 300 people were killed in a landslide.
January 30, 2000: Baia Mare cyanide spill, Baia Mare, Romania. The accident, called the worst environmental disaster in Europe since Chernobyl, was a release of 100,000 tons of cyanide-contaminated water into the rivers Someş, Tisza and Danube by the Aurul mining company due to a reservoir breach. Although no human fatalities were reported, the leak killed up to 80 percent of aquatic life in some of the affected rivers.
April 5, 2010: Upper Big Branch Mine disaster, West Virginia, United States. An explosion occurred in Massey Energy's Upper Big Branch coal mine. Twenty-nine out of 31 miners at the site were killed.
November 19, 2010: Pike River Mine disaster, New Zealand. At 3:45 pm, the coal mine exploded. Twenty-nine men underground died immediately, or shortly afterwards, from the blast or from the toxic atmosphere. Two men in the stone drift, some distance from the mine workings, managed to escape. (Extract from Royal Commission of Inquiry Report on Pike River.)
May 13, 2014: Soma mine disaster, Manisa Province, Turkey. An explosion occurred two kilometers below the surface, starting a fire, which caused the mine's elevator to stop working. This trapped several hundred miners, many of whom died of carbon monoxide poisoning. 787 workers were present during the disaster, and 301 of them died during the disaster.
November 5, 2015: Mariana dam disaster, Minas Gerais, Brazil. An iron ore tailings dam suffered a catastrophic failure. The resultant flooding destroyed the village of Bento Rodrigues and killed 19 people.
January 25, 2019: Brumadinho dam disaster, Minas Gerais, Brazil. An iron ore tailings dam suffered a catastrophic failure. At least 259 people died.
June 27, 2019: Kolwezi copper and cobalt mine collapse, Lualaba province, Democratic Republic of the Congo. The mine was being worked by illegal artisanal miners, 43 of whom were killed.
September 11, 2020: Kamituga gold mine landslides, South Kivu province, Democratic Republic of the Congo. More than 50 people died when three artisanal gold mining wells collapsed in landslides.
November 2021: the Listvyazhnaya mine disaster took place in Listvyazhnaya, Russia. 40 men died in the accident.
== Other industrial disasters ==
March 11, 1864: The Great Sheffield Flood. The Dale Dyke Dam, at Bradfield, South Yorkshire, collapsed when its reservoir was being filled for the first time. At least 240 people died, and 5000 properties were flooded. Historian Peter Machan said: "In terms of Victorian England it was the greatest disaster in terms of loss of life, apart from maritime disasters".
January 20, 1909: Chicago Crib Disaster. During the construction of a water intake tunnel for the city of Chicago, a fire broke out on a temporary water crib used to access an intermediate point along the tunnel. The fire began in the dynamite magazine and burned the wooden dormitory that housed the tunnel workers. 46 workers survived the fire by jumping into the lake and climbing onto ice floes or the spoil heap near the crib. 29 men were burned beyond recognition, and approximately 60 men died. Most of the remainder drowned or froze to death in the lake and were not recovered.
September 21, 1921: Oppau explosion, Germany. Occurred when a tower silo storing 4,500 tonnes of a mixture of ammonium sulfate and ammonium nitrate fertilizer exploded at a BASF plant in Oppau, now part of Ludwigshafen, Germany, killing 500–600 people and injuring about 2,000 more.
1927–1932: Hawks Nest Tunnel disaster, near Gauley Bridge, West Virginia, United States. Over several years, as many as 1000 out of 3000 workers died from silicosis.
1932–1968: The Minamata disaster was caused by the dumping of mercury compounds in Minamata Bay, Japan. The Chisso Corporation, a fertilizer and later petrochemical company, was found responsible for polluting the bay for 37 years. It is estimated that over 3,000 people suffered various deformities, severe mercury poisoning symptoms or death from what became known as Minamata disease.
April 16, 1947: Texas City disaster, Texas. At 9:15 am an explosion occurred aboard a docked ship named the Grandcamp. The explosion, and subsequent fires and explosions, is referred to as the worst industrial disaster in America. At least 578 people lost their lives and another 3,500 were injured as the blast shattered windows from as far away as 25 mi (40 km). Large steel pieces were thrown more than a mile from the dock. The origin of the explosion was fire in the cargo on board the ship. Detonation of 3,200 tons of ammonium nitrate fertilizer aboard the Grandcamp led to further explosions and fires. The fertilizer shipment was to aid the struggling farmers of Europe recovering from World War II.
July 28, 1948: A chemical tank wagon explosion within the BASF's Ludwigshafen, Germany site caused 207 fatalities. 3,818 were injured, and 3,122 buildings were significantly affected.
January 9, 1959: Vega de Tera disaster, Spain. In the midst of heavy rains, the failure of the small Vega de Tera dam at about 1:00 a.m. killed 144 of the 532 inhabitants of downriver Ribadelago (Zamora, Spain) minutes later. The dam was new (1956) but poorly built, as was common in that period, when the Francoist regime prioritized economic development over construction quality. The town was partially destroyed and never recovered; the survivors were later moved out of the floodable area to a newly built town nearby (Ribadelago Nuevo, "New Ribadelago").
February 3, 1971: The Thiokol-Woodbine Explosion at a Thiokol chemical plant in Georgia (United States) killed 29 people and seriously injured 50.
June 1, 1974: Flixborough disaster, England. An explosion at a chemical plant near the village of Flixborough killed 28 people and seriously injured another 36.
1972–1976: Dioxin was unknowingly released on the unpaved roads of Times Beach, Missouri, as part of a dust-abatement program, leading to the evacuation and disincorporation of the 2,000-resident town beginning in 1983. It was the largest civilian exposure to dioxin in United States history.
July 10, 1976: Seveso disaster, in Seveso, Italy, in a small chemical manufacturing plant of ICMESA. Due to the release of dioxins into the atmosphere and throughout a large section of the Lombard Plain, 3,000 pets and farm animals died and, later, 70,000 animals were slaughtered to prevent dioxins from entering the food chain. In addition, 193 people in the affected areas suffered from chloracne and other symptoms. The disaster led to the Seveso Directive, which was issued by the European Community and imposed much harsher industrial regulations.
April 27, 1978: Willow Island disaster. A cooling tower for a power plant under construction in Willow Island, West Virginia collapsed, killing 51 construction workers. The cause was attributed to placing loads on recently poured concrete before it had cured sufficiently to withstand the loads. It is thought to be the largest construction accident in United States history.
October 12, 1978: Spyros disaster. The Greek tanker Spyros exploded at Jurong Shipyard in Singapore on October 12, 1978. It killed 76 people, and remains the worst accident, in terms of lives lost, in Singapore's post-war history. It is also Singapore's worst industrial accident.
February 24, 1984: Cubatão pipeline fire, Brazil. At around 23:30 on the night of February 24, a gasoline pipeline exploded in the favela of Vila São José in Cubatão, killing at least 508 people, most of them children. The tragedy turned the world's attention to Cubatão and laid bare another problem, industrial pollution, which since the 1970s had given the city the nickname "Death Valley".
November 1, 1986: The Sandoz disaster in Schweizerhalle, Switzerland released tons of toxic agrochemicals into the Rhine River.
June 28, 1988: Auburn, Indiana. Improper mixing of chemicals at Bastian Plating Company killed four workers in the worst confined-space industrial accident in U.S. history; a fifth victim died two days later.
October 23, 1989: Phillips Disaster. An explosion and fire killed 23 and injured 314 in Pasadena, Texas and registered 3.5 on the Richter magnitude scale.
July 5, 1990: An explosion and fire occurred at the Arco Chemical Company complex in Channelview, Texas. 17 people were killed. Five were permanent employees and the remaining 12 were contract labor employees. An area approximately the size of a city block was completely destroyed; no one in the area survived the explosion.
May 1, 1991: Sterlington, Louisiana. An explosion at the IMC-operated Angus Chemical nitro-paraffin plant in Sterlington, Louisiana, killed eight workers and injured 120 other people. There was severe damage to the surrounding community. The blasts were heard more than eight miles away.
May 7, 1991: Sungai Buloh fireworks disaster. At around 3:45 PM MYT, the Bright Sparklers Fireworks factory near Sungai Buloh, Selangor, Malaysia, caught fire and violently exploded after experiments with explosive chemicals in the factory's canteen. The disaster claimed 26 lives and injured over 100. Dubbed the Hiroshima of Sungai Buloh, the explosion was strong enough to destroy over 200 residential properties in the vicinity of the factory.
August 21, 2000: Pingxiang steel plant explosion. An oxygen generator exploded in a steel plant in Pingxiang, Jiangxi, China. At least 19 steel workers were killed.
September 21, 2001: Toulouse, France. An explosion at the AZF fertilizer factory killed 29, injured 2,500, and caused extensive structural damage to nearby neighbourhoods.
October 19, 2009: Ottawa, Canada. A boiler explosion at the Cliff Central Heating and Cooling Plant killed one person, and three others suffered injuries.
October 4, 2010: Alumina plant accident. Ajka, Kolontár, Devecser and several other settlements, Hungary. The dam of Magyar Aluminium's red mud reservoir broke and the escaping highly toxic and alkaline (~pH 13) sludge flooded several settlements. There were nine victims, including a young girl, and hundreds of injuries (mostly chemical burns).
January 20, 2012: Burns Lake, British Columbia, Canada. At a wood mill two workers were killed and 20 others injured in a fire and explosion. A combustible dust environment led to the explosion and fire.
November 8, 2012: Sherbrooke, Quebec, Canada. Two people died and 19 were injured in an industrial processing plant belonging to Neptune Technologies & Bioressources, a manufacturer of health care products.
April 17, 2013: Fertilizer plant explosion in West, Texas. An explosion occurred at the West Fertilizer Company storage and distribution facility in West, Texas, 18 miles (29 km) north of Waco, while emergency services personnel were responding to a fire at the facility. Fifteen people were killed, more than 160 were injured, and more than 150 buildings damaged or destroyed.
June 20, 2013: Coteau-du-Lac, Quebec, Canada. Two women were killed in a fireworks warehouse explosion.
July 31 – August 1, 2014: 2014 Kaohsiung gas explosions. From the underground-installed gas pipelines of a petrochemical factory, a large-scale leakage (which had been occurring for more than three hours) led to a series of gas explosions in the streets of Kaohsiung, Taiwan at the midnight between the two days. Thirty-two people were killed and 321 others were injured.
August 12, 2015: Binhai, Tianjin, China. Two explosions within 30 seconds of each other occurred at a container storage station at the Port of Tianjin in the Binhai New Area of Tianjin, China. 173 people died as a result.
August 23, 2016: Chittagong, Bangladesh. A gas leak occurred at the Chittagong Urea Fertiliser Limited (CUFL) plant, located near the shore of the Karnaphuli River in the port city of Chittagong. No deaths were reported, but 25 people fell ill from inhaling toxic ammonia. The investigation team found that the tank had been maintained by unskilled workers rather than skilled engineers, which resulted in the leak.
September 10, 2016: Gazipur, Bangladesh. A boiler explosion at a packaging factory in the town of Tongi, Gazipur, killed 23 workers. The explosion was so powerful that it caused part of the four-story building to collapse, and it triggered a fire which spread to surrounding areas.
May 9, 2018: Patel Milmet Dam failure. An embankment dam in Nakuru County, Kenya, burst during heavy rains, killing at least 48 people.
May 7, 2020: Visakhapatnam gas leak. A gas leak at the LG Polymers chemical plant in Gopala samudram, Vizag, spread over a radius of about 3 km, affecting nearby areas and villages. Eleven people were killed and more than 1,000 injured.
3 June 2020: 2020 Dahej chemical plant explosion. Five deaths and more than fifty people injured.
August 4, 2020: 2020 Beirut explosions. A massive explosion of a large cache of ammonium nitrate at the Port of Beirut flattened much of the port and damaged buildings throughout the city. More than 200 people were killed and over 7000 injured.
4 November 2020: Ahmedabad chemical factory blast resulted in twelve deaths and injuries to nine people.
6 January 2022: Surat gas leak. At least six people died and 22 became sick following a gas leak from a tanker in an industrial area in India.
4 June 2022: 2022 Sitakunda fire. A fire and subsequent explosions at a container storage facility in Bangladesh's Chittagong District killed at least 33 people and injured more than 450 others.
27 June 2022: 2022 Aqaba toxic gas leak, at least 10 dead and more than 251 injured by ruptured tank containing 25 tons of chlorine in Port of Aqaba, Jordan.
26 April 2025: 2025 Port of Shahid Rajaee explosion
29 April 2025: 2025 Isfahan explosion
== See also ==
Lists of disasters
List of environmental disasters
List of civilian nuclear accidents
List of accidents and disasters by death toll
List of disasters in Great Britain and Ireland
Environmental racism
== References == | Wikipedia/Industrial_disaster |
Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalise to unseen data, and thus perform tasks without explicit instructions. Within machine learning, advances in the subfield of deep learning have allowed neural networks, a class of statistical algorithms, to surpass many previous machine learning approaches in performance.
ML finds application in many fields, including natural language processing, computer vision, speech recognition, email filtering, agriculture, and medicine. The application of ML to business problems is known as predictive analytics.
Statistics and mathematical optimisation (mathematical programming) methods comprise the foundations of machine learning. Data mining is a related field of study, focusing on exploratory data analysis (EDA) via unsupervised learning.
From a theoretical viewpoint, probably approximately correct learning provides a framework for describing machine learning.
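One representative result in this framework, stated here for the realisable case with a finite hypothesis class H (bounds for other settings differ), is the sample-complexity guarantee for a consistent learner: if the number of labelled examples m satisfies the bound below, then with probability at least 1 − δ the output hypothesis h has true error at most ε.

```latex
m \;\ge\; \frac{1}{\varepsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)
\quad\Longrightarrow\quad
\Pr\bigl[\operatorname{err}(h) \le \varepsilon\bigr] \;\ge\; 1 - \delta
```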
== History ==
The term machine learning was coined in 1959 by Arthur Samuel, an IBM employee and pioneer in the field of computer gaming and artificial intelligence. The synonym self-teaching computers was also used in this time period.
Although the earliest machine learning model was introduced in the 1950s, when Arthur Samuel invented a program that calculated the winning chance in checkers for each side, the history of machine learning traces back to decades of effort to study human cognitive processes. In 1949, Canadian psychologist Donald Hebb published the book The Organization of Behavior, in which he introduced a theoretical neural structure formed by certain interactions among nerve cells. Hebb's model of neurons interacting with one another laid the groundwork for how AIs and machine learning algorithms operate using nodes, or artificial neurons, which computers use to communicate data. Other researchers who studied human cognitive systems also contributed to modern machine learning technologies, including logician Walter Pitts and Warren McCulloch, who proposed early mathematical models of neural networks to devise algorithms that mirror human thought processes.
By the early 1960s, an experimental "learning machine" with punched tape memory, called Cybertron, had been developed by Raytheon Company to analyse sonar signals, electrocardiograms, and speech patterns using rudimentary reinforcement learning. It was repetitively "trained" by a human operator/teacher to recognise patterns and equipped with a "goof" button to cause it to reevaluate incorrect decisions. A representative book on research into machine learning during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification. Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973. In 1981 a report was given on using teaching strategies so that an artificial neural network learns to recognise 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.
Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?".
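Mitchell's operational definition can be made concrete with a toy sketch (hypothetical data and model, not drawn from any source): the task T is classifying a reading into one of two classes, the experience E is a set of labelled samples, and the performance measure P is accuracy on held-out data, which tends to improve as E grows.

```python
import random

random.seed(0)

# Task T: classify a reading as class "A" (centred near 0.0) or
# class "B" (centred near 1.0). Experience E: labelled samples.
# Performance P: accuracy on a held-out test set.

def sample(n):
    """Draw n labelled examples, half from each class (toy data)."""
    data = [(random.gauss(0.0, 0.5), "A") for _ in range(n // 2)]
    data += [(random.gauss(1.0, 0.5), "B") for _ in range(n - n // 2)]
    return data

def train(examples):
    """Learn one centroid per class from the experience E."""
    sums, counts = {}, {}
    for x, y in examples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def accuracy(model, test_set):
    """Measure P: fraction of examples whose nearest centroid is correct."""
    correct = sum(
        1 for x, y in test_set
        if min(model, key=lambda c: abs(x - model[c])) == y
    )
    return correct / len(test_set)

test_set = sample(1000)
for n in (4, 40, 400):
    model = train(sample(n))
    print(n, round(accuracy(model, test_set), 3))
```

Running the loop typically shows accuracy rising as the training set grows from 4 to 400 examples: performance at T, as measured by P, improves with experience E.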
Modern-day machine learning has two objectives. One is to classify data based on models which have been developed; the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. A machine learning algorithm for stock trading may inform the trader of future potential predictions.
== Relationships to other fields ==
=== Artificial intelligence ===
As a scientific endeavour, machine learning grew out of the quest for artificial intelligence (AI). In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostly perceptrons and other models that were later found to be reinventions of the generalised linear models of statistics. Probabilistic reasoning was also employed, especially in automated medical diagnosis.
However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation. By 1980, expert systems had come to dominate AI, and statistics was out of favour. Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming (ILP), but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval. Neural networks research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines including John Hopfield, David Rumelhart, and Geoffrey Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation.
Machine learning (ML), reorganised and recognised as its own field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics, fuzzy logic, and probability theory.
=== Data mining ===
Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.
Machine learning also has intimate ties to optimisation: Many learning problems are formulated as minimisation of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the preassigned labels of a set of examples).
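The formulation above can be sketched concretely: a minimal, illustrative example of learning as loss minimisation, fitting a single weight by gradient descent on the mean squared error over a training set. The data, learning rate, and step count below are invented for illustration.

```python
# Learning as loss minimisation: fit y_hat = w*x to (x, y) pairs by
# gradient descent on the mean squared error (the loss function).

def mse(w, data):
    """Mean squared error of the model y_hat = w*x over (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def fit(data, lr=0.01, steps=500):
    w = 0.0
    n = len(data)
    for _ in range(steps):
        # Gradient of the MSE with respect to w: (2/n) * sum x*(w*x - y)
        grad = 2.0 / n * sum(x * (w * x - y) for x, y in data)
        w -= lr * grad
    return w

training_set = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # roughly y = 2x
w = fit(training_set)
```

The loss expresses the discrepancy between predictions and the targets; training drives it down until the weight settles near the least-squares optimum.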
=== Generalization ===
Characterizing the generalisation of various learning algorithms is an active topic of current research, especially for deep learning algorithms.
=== Statistics ===
Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalisable predictive patterns. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder for the overall field.
Conventional statistical analyses require the a priori selection of a model most suitable for the study data set. In addition, only significant or theoretically relevant variables based on previous experience are included for analysis. In contrast, machine learning is not built on a pre-structured model; rather, the data shape the model by detecting underlying patterns. The more variables (input) used to train the model, the more accurate the ultimate model will be.
Leo Breiman distinguished two statistical modelling paradigms: the data model and the algorithmic model, where "algorithmic model" refers, more or less, to machine learning algorithms like random forest.
Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning.
=== Statistical physics ===
Analytical and computational techniques derived from deep-rooted physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyse the weight space of deep neural networks. Statistical physics is thus finding applications in the area of medical diagnostics.
== Theory ==
A core objective of a learner is to generalise from its experience. Generalisation in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases.
The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory via the probably approximately correct learning model. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalisation error.
For the best performance in the context of generalisation, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has underfitted the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalisation will be poorer.
In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results: Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time.
== Approaches ==
Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system:
Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.
Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).
Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback that is analogous to rewards, which it tries to maximise.
Although each algorithm has advantages and limitations, no single algorithm works for all problems.
=== Supervised learning ===
Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs. The data, known as training data, consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by an array or vector, sometimes called a feature vector, and the training data is represented by a matrix. Through iterative optimisation of an objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs. An optimal function allows the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.
Types of supervised-learning algorithms include active learning, classification and regression. Classification algorithms are used when the outputs are restricted to a limited set of values, while regression algorithms are used when the outputs can take any numerical value within a range. For example, in a classification algorithm that filters emails, the input is an incoming email, and the output is the folder in which to file the email. In contrast, regression is used for tasks such as predicting a person's height based on factors like age and genetics or forecasting future temperatures based on historical data.
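The classification/regression distinction can be illustrated with one of the simplest supervised learners, nearest neighbour. The toy "spam" and height data below are invented; the point is only that classification returns a label from a finite set, while regression returns a continuous value.

```python
# Nearest-neighbour prediction from labelled training examples.

def nearest_neighbour(train, x):
    """Return the label of the training example whose input is closest to x."""
    return min(train, key=lambda ex: abs(ex[0] - x))[1]

# Classification: outputs restricted to a finite label set.
emails = [(0.1, "ham"), (0.2, "ham"), (0.8, "spam"), (0.9, "spam")]
label = nearest_neighbour(emails, 0.85)

# Regression: outputs take continuous values (here, a height in cm
# predicted from age by averaging the two nearest training examples).
heights = [(5, 110.0), (10, 140.0), (15, 170.0)]
def knn_regress(train, x, k=2):
    nearest = sorted(train, key=lambda ex: abs(ex[0] - x))[:k]
    return sum(y for _, y in nearest) / k
pred = knn_regress(heights, 12)
```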
Similarity learning is an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification.
=== Unsupervised learning ===
Unsupervised learning algorithms find structures in data that has not been labelled, classified or categorised. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. Central applications of unsupervised machine learning include clustering, dimensionality reduction, and density estimation.
Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters. Other methods are based on estimated density and graph connectivity.
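As a minimal sketch of the assignment-and-update idea behind clustering, the following implements k-means on one-dimensional points; the data and choice of k are invented for illustration.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy 1-D k-means: alternate assignment and centroid update."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Update step: move each centroid to its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]  # two well-separated groups
centroids = kmeans(data, k=2)
```

Points within each resulting cluster are close to their centroid (internal compactness), while the two centroids end up far apart (separation).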
A special type of unsupervised learning, called self-supervised learning, involves training a model by generating the supervisory signal from the data itself.
=== Semi-supervised learning ===
Semi-supervised learning falls between unsupervised learning (without any labelled training data) and supervised learning (with completely labelled training data). Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabelled data, when used in conjunction with a small amount of labelled data, can produce a considerable improvement in learning accuracy.
In weakly supervised learning, the training labels are noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets.
=== Reinforcement learning ===
Reinforcement learning is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximise some notion of cumulative reward. Due to its generality, the field is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimisation, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In reinforcement learning, the environment is typically represented as a Markov decision process (MDP). Many reinforcement learning algorithms use dynamic programming techniques. Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP and are used when exact models are infeasible. Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent.
=== Dimensionality reduction ===
Dimensionality reduction is a process of reducing the number of random variables under consideration by obtaining a set of principal variables. In other words, it is a process of reducing the dimension of the feature set, also called the "number of features". Most of the dimensionality reduction techniques can be considered as either feature elimination or extraction. One of the popular methods of dimensionality reduction is principal component analysis (PCA). PCA involves changing higher-dimensional data (e.g., 3D) to a smaller space (e.g., 2D).
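The core step of PCA can be sketched as follows: find the direction of greatest variance (the first principal component) of 2-D data by power iteration on the covariance matrix. The data points are invented; a real implementation would use an eigendecomposition library routine.

```python
def first_principal_component(data, iters=100):
    """First principal component of 2-D points via power iteration."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    centred = [(x - mx, y - my) for x, y in data]
    # Entries of the 2x2 covariance matrix.
    cxx = sum(x * x for x, y in centred) / n
    cxy = sum(x * y for x, y in centred) / n
    cyy = sum(y * y for x, y in centred) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        # Repeatedly apply the covariance matrix and renormalise.
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

# Points lying near the line y = x: the component should be ~(0.71, 0.71).
pts = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.1), (3.0, 2.9)]
v = first_principal_component(pts)
```

Projecting the 2-D points onto this single direction reduces them to 1-D while preserving most of the variance.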
The manifold hypothesis proposes that high-dimensional data sets lie along low-dimensional manifolds, and many dimensionality reduction techniques make this assumption, leading to the area of manifold learning and manifold regularisation.
=== Other types ===
Other approaches have been developed which do not fit neatly into this three-fold categorisation, and sometimes more than one is used by the same machine learning system. Examples include topic modelling and meta-learning.
==== Self-learning ====
Self-learning, as a machine learning paradigm, was introduced in 1982 along with a neural network capable of self-learning, named crossbar adaptive array (CAA). It gives a solution to the problem of learning without any external reward, by introducing emotion as an internal reward. Emotion is used as a state evaluation of a self-learning agent. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence situations. The system is driven by the interaction between cognition and emotion.
The self-learning algorithm updates a memory matrix W = ||w(a,s)|| such that in each iteration it executes the following machine learning routine:
in situation s perform action a
receive a consequence situation s'
compute emotion of being in the consequence situation v(s')
update crossbar memory w'(a,s) = w(a,s) + v(s')
It is a system with only one input, the situation s, and only one output, the action (or behaviour) a. There is neither a separate reinforcement input nor an advice input from the environment. The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioural environment, where it behaves, and the other is the genetic environment, from which it initially, and only once, receives initial emotions about situations to be encountered in the behavioural environment. After receiving the genome (species) vector from the genetic environment, the CAA learns a goal-seeking behaviour in an environment that contains both desirable and undesirable situations.
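The routine above can be rendered as a toy sketch. The environment, emotion values, and matrix sizes below are invented; only the crossbar update w(a,s) += v(s') follows the description.

```python
# Toy CAA sketch: a crossbar memory W over (action, situation) pairs,
# updated by the emotion v(s') of the consequence situation.

n_actions, n_situations = 2, 3
W = [[0.0] * n_situations for _ in range(n_actions)]

# Genome vector: innate emotional value v(s) of each situation
# (received once from the "genetic environment").
v = [0.0, -1.0, 1.0]  # situation 2 is desirable, situation 1 undesirable

def transition(s, a):
    """Toy behavioural environment: action 1 leads to the good situation."""
    return 2 if a == 1 else 1

def caa_step(s):
    # In situation s, perform the action with the highest crossbar value.
    a = max(range(n_actions), key=lambda act: W[act][s])
    s_next = transition(s, a)   # receive a consequence situation s'
    W[a][s] += v[s_next]        # update crossbar memory with emotion v(s')
    return s_next

for _ in range(10):             # ten trials, always starting from situation 0
    caa_step(0)
```

After a few trials the agent has learned to prefer, in situation 0, the action that leads to the desirable situation.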
==== Feature learning ====
Several learning algorithms aim at discovering better representations of the inputs provided during training. Classic examples include principal component analysis and cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manual feature engineering, and allows a machine to both learn the features and use them to perform a specific task.
Feature learning can be either supervised or unsupervised. In supervised feature learning, features are learned using labelled input data. Examples include artificial neural networks, multilayer perceptrons, and supervised dictionary learning. In unsupervised feature learning, features are learned with unlabelled input data. Examples include dictionary learning, independent component analysis, autoencoders, matrix factorisation and various forms of clustering.
Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse, meaning that the mathematical model has many zeros. Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into higher-dimensional vectors. Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.
Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensory data have not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.
==== Sparse dictionary learning ====
Sparse dictionary learning is a feature learning method where a training example is represented as a linear combination of basis functions and assumed to be a sparse matrix. The method is strongly NP-hard and difficult to solve approximately. A popular heuristic method for sparse dictionary learning is the k-SVD algorithm. Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen training example belongs. For a dictionary where each class has already been built, a new training example is associated with the class that is best sparsely represented by the corresponding dictionary. Sparse dictionary learning has also been applied in image de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.
==== Anomaly detection ====
In data mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data. Typically, the anomalous items represent an issue such as bank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to as outliers, novelties, noise, deviations and exceptions.
In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts of inactivity. This pattern does not adhere to the common statistical definition of an outlier as a rare object. Many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns.
Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection techniques detect anomalies in an unlabelled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit the least to the remainder of the data set. Supervised anomaly detection techniques require a data set that has been labelled as "normal" and "abnormal" and involves training a classifier (the key difference from many other statistical classification problems is the inherently unbalanced nature of outlier detection). Semi-supervised anomaly detection techniques construct a model representing normal behaviour from a given normal training data set and then test the likelihood of a test instance to be generated by the model.
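A minimal unsupervised anomaly-detection sketch: flag observations whose z-score (distance from the mean in standard deviations) exceeds a threshold, under the assumption that most of the data is normal. The transaction amounts and threshold are invented; with this few points a loose threshold is needed.

```python
import statistics

def outliers(values, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / sd > threshold]

amounts = [10.2, 9.8, 10.5, 9.9, 10.1, 10.0, 250.0]  # one suspicious value
flagged = outliers(amounts)
```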
==== Robot learning ====
Robot learning is inspired by a multitude of machine learning methods, starting from supervised learning, reinforcement learning, and finally meta-learning (e.g. MAML).
==== Association rules ====
Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness".
Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilisation of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems.
Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets. For example, the rule
{onions, potatoes} ⇒ {burger}
found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements. In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.
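The two standard "interestingness" measures for such a rule, support and confidence, can be computed directly from transaction data. The transactions below are invented for illustration.

```python
# Support and confidence of the rule {onions, potatoes} => {burger}.

transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes"},
    {"milk", "bread"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent, consequent):
    """Of the transactions containing the antecedent, the fraction that
    also contain the consequent."""
    return support(antecedent | consequent) / support(antecedent)

sup = support({"onions", "potatoes", "burger"})        # 2 of 4 transactions
conf = confidence({"onions", "potatoes"}, {"burger"})  # 2 of 3 such baskets
```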
Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically a genetic algorithm, with a learning component, performing either supervised learning, reinforcement learning, or unsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions.
Inductive logic programming (ILP) is an approach to rule learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such as functional programs.
Inductive logic programming is particularly useful in bioinformatics and natural language processing. Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting. Shapiro built their first implementation (Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples. The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set.
== Models ==
A machine learning model is a type of mathematical model that, once "trained" on a given dataset, can be used to make predictions or classifications on new data. During training, a learning algorithm iteratively adjusts the model's internal parameters to minimise errors in its predictions. By extension, the term "model" can refer to several levels of specificity, from a general class of models and their associated learning algorithms to a fully trained model with all its internal parameters tuned.
Various types of models have been used and researched for machine learning systems; picking the best model for a task is called model selection.
=== Artificial neural networks ===
Artificial neural networks (ANNs), or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules.
An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.
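The signal flow just described can be sketched in a few lines: each artificial neuron outputs a non-linear function (here, the logistic sigmoid) of the weighted sum of its inputs, and layers feed forward into one another. The weights and biases below are invented.

```python
import math

def sigmoid(z):
    """Logistic activation: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """One artificial neuron: non-linear function of the weighted input sum."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def layer(inputs, weight_matrix, biases):
    """A layer applies several neurons to the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# A 2-input network: one hidden layer of 2 neurons, then 1 output neuron.
x = [1.0, 0.0]
hidden = layer(x, [[2.0, -1.0], [-1.5, 1.0]], [0.0, 0.5])
output = neuron(hidden, [1.0, -1.0], 0.0)
```

Training would adjust the weights and biases (e.g., by backpropagation); here only the forward pass is shown.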
The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.
Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.
=== Decision trees ===
Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modelling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels, and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision-making.
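The simplest classification tree is a depth-1 "decision stump": one branch on a feature threshold, with class labels at the leaves. The fruit data below are invented; the sketch learns the threshold that minimises training errors.

```python
def best_stump(examples):
    """Pick the threshold on a 1-D feature that minimises training errors,
    using the rule: predict "plum" below the threshold, "apple" at or above."""
    best = None
    for t in sorted(x for x, _ in examples):
        errs = sum(1 for x, y in examples
                   if ("plum" if x < t else "apple") != y)
        if best is None or errs < best[1]:
            best = (t, errs)
    return best[0]

train = [(80, "plum"), (100, "plum"), (150, "apple"), (200, "apple")]
threshold = best_stump(train)

def predict(weight_g):
    # The branch tests the feature; the leaves hold class labels.
    return "plum" if weight_g < threshold else "apple"
```

Full decision-tree learners apply this threshold search recursively, splitting each branch again until the leaves are sufficiently pure.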
=== Random forest regression ===
Random forest regression (RFR) falls under the umbrella of decision tree-based models. RFR is an ensemble learning method that builds multiple decision trees and averages their predictions to improve accuracy and avoid overfitting. To build the trees, RFR uses bootstrapped sampling: each decision tree is trained on a random subset of the training data. This randomness reduces bias in the model's predictions and improves accuracy. RFR generates independent decision trees and can handle single-output as well as multi-output regression tasks, which makes it applicable in a wide range of settings.
=== Support-vector machines ===
Support-vector machines (SVMs), also known as support-vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other. An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
=== Regression analysis ===
Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularisation methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to higher-dimensional space.
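For simple linear regression, the ordinary least squares criterion has a closed form: the slope and intercept that minimise the sum of squared residuals are computed directly from the data's means and (co)variances. The data points below are invented.

```python
def ols(points):
    """Closed-form ordinary least squares fit of y = slope*x + intercept."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # slope = covariance(x, y) / variance(x); intercept makes the line
    # pass through the point of means (mx, my).
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x, _ in points))
    intercept = my - slope * mx
    return slope, intercept

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.0), (4.0, 9.1)]  # roughly y = 2x + 1
slope, intercept = ols(data)
```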
Multivariate linear regression extends the concept of linear regression to handle multiple dependent variables simultaneously. This approach estimates the relationships between a set of input variables and several output variables by fitting a multidimensional linear model. It is particularly useful in scenarios where outputs are interdependent or share underlying patterns, such as predicting multiple economic indicators or reconstructing images, which are inherently multi-dimensional.
=== Bayesian networks ===
A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalisations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
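Inference in the disease/symptom example amounts to applying Bayes' rule along the network's disease → symptom edge. The probabilities below are invented for illustration.

```python
# Two-node Bayesian network: disease D -> symptom S.
p_disease = 0.01                  # prior P(D)
p_symptom_given_disease = 0.9     # P(S | D)
p_symptom_given_healthy = 0.05    # P(S | not D)

# Marginalise over D to get P(S), then apply Bayes' rule for P(D | S).
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
```

With these numbers, observing the symptom raises the probability of disease from 1% to about 15%: rarer diseases stay unlikely even given suggestive evidence.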
=== Gaussian processes ===
A Gaussian process is a stochastic process in which every finite collection of the random variables in the process has a multivariate normal distribution, and it relies on a pre-defined covariance function, or kernel, that models how pairs of points relate to each other depending on their locations.
Given a set of observed points, or input–output examples, the distribution of the (unobserved) output of a new point as a function of its input data can be directly computed by looking at the observed points and the covariances between those points and the new, unobserved point.
Gaussian processes are popular surrogate models in Bayesian optimisation used to do hyperparameter optimisation.
=== Genetic algorithms ===
A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.
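A minimal genetic-algorithm sketch: evolve bit strings toward a fitness function (here, the number of 1-bits, the toy "OneMax" problem) using selection, one-point crossover, and point mutation. All parameters are invented.

```python
import random

def evolve(n_bits=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    fitness = lambda g: sum(g)  # OneMax: count the 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]       # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_bits)        # point mutation: flip one bit
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Selection preserves good genotypes while crossover and mutation generate new ones, so the best individual's fitness climbs toward the all-ones optimum over the generations.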
=== Belief functions ===
The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory, is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories. These theoretical frameworks can be thought of as a kind of learner and have some analogous properties of how evidence is combined (e.g., Dempster's rule of combination), much as a pmf-based Bayesian approach would combine probabilities. However, there are many caveats to these belief functions when compared to Bayesian approaches in order to incorporate ignorance and uncertainty quantification. Belief function approaches implemented within the machine learning domain typically leverage a fusion of various ensemble methods to better handle the learner's decision boundary, low samples, and ambiguous class issues that standard machine learning approaches tend to have difficulty resolving. However, the computational complexity of these algorithms is dependent on the number of propositions (classes), and can lead to much higher computation times compared to other machine learning approaches.
=== Rule-based models ===
Rule-based machine learning (RBML) is a branch of machine learning that automatically discovers and learns 'rules' from data. It provides interpretable models, making it useful for decision-making in fields like healthcare, fraud detection, and cybersecurity. Key RBML techniques include learning classifier systems, association rule learning, artificial immune systems, and other similar models. These methods extract patterns from data and evolve rules over time.
=== Training models ===
Typically, machine learning models require a high quantity of reliable data to perform accurate predictions. When training a machine learning model, machine learning engineers need to target and collect a large and representative sample of data. Data from the training set can be as varied as a corpus of text, a collection of images, sensor data, and data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased or non-evaluated data can result in skewed or undesired predictions. Biased models may result in detrimental outcomes, thereby furthering the negative impacts on society or objectives. Algorithmic bias is a potential result of data not being fully prepared for training. Machine learning ethics is becoming a field of study and notably, becoming integrated within machine learning engineering teams.
==== Federated learning ====
Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralises the training process, allowing users' privacy to be maintained by not requiring them to send their data to a centralised server. This also increases efficiency by distributing the training process across many devices. For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.
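The server-side aggregation step can be sketched as a data-size-weighted average of client parameters (a FedAvg-style update; representing each client's model as a flat parameter list is a simplification):

```python
def federated_average(client_weights, client_sizes):
    """Server-side aggregation: average the clients' parameter vectors,
    weighted by how many local examples each client trained on.
    Only parameters are transmitted; the raw user data never leaves
    the device."""
    total = sum(client_sizes)
    merged = [0.0] * len(client_weights[0])
    for weights, size in zip(client_weights, client_sizes):
        share = size / total
        for i, w in enumerate(weights):
            merged[i] += share * w
    return merged
```

In a full round, the server would broadcast the merged parameters back to the clients, each of which performs further local training before the next aggregation.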
== Applications ==
There are many applications for machine learning, including:
In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million. Shortly after the prize was awarded, Netflix realised that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and changed its recommendation engine accordingly. In 2010, The Wall Street Journal wrote about the firm Rebellion Research and its use of machine learning to predict the financial crisis. In 2012, Sun Microsystems co-founder Vinod Khosla predicted that 80% of medical doctors' jobs would be lost in the following two decades to automated machine learning medical diagnostic software. In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognised influences among artists. In 2019, Springer Nature published the first research book created using machine learning. In 2020, machine learning technology was used to help make diagnoses and aid researchers in developing a cure for COVID-19. Machine learning has also been applied to predict the pro-environmental behaviour of travellers, and to optimise a smartphone's performance and thermal behaviour based on the user's interaction with the phone. When applied correctly, machine learning algorithms (MLAs) can utilise a wide range of company characteristics to predict stock returns without overfitting. By employing effective feature engineering and combining forecasts, MLAs can generate results that far surpass those obtained from basic linear techniques like ordinary least squares (OLS).
Recent advancements in machine learning have extended into the field of quantum chemistry, where novel algorithms now enable the prediction of solvent effects on chemical reactions, thereby offering new tools for chemists to tailor experimental conditions for optimal outcomes.
Machine learning is becoming a useful tool to investigate and predict evacuation decision-making in large-scale and small-scale disasters. Different solutions have been tested to predict if and when householders decide to evacuate during wildfires and hurricanes. Other applications have focused on pre-evacuation decisions in building fires.
Machine learning is also emerging as a promising tool in geotechnical engineering, where it is used to support tasks such as ground classification, hazard prediction, and site characterization. Recent research emphasizes a move toward data-centric methods in this field, where machine learning is not a replacement for engineering judgment, but a way to enhance it using site-specific data and patterns.
== Limitations ==
Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results. Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.
The "black box" problem poses yet another significant challenge. "Black box" refers to a situation where the algorithm, or the process by which it produces an output, is entirely opaque, meaning that even the coders of the algorithm cannot audit the pattern that the machine extracted from the data. The UK House of Lords Select Committee claimed that any "intelligence system" that could have a "substantial impact on an individual's life" would not be considered acceptable unless it provided "a full and satisfactory explanation for the decisions" it makes.
In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision. Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested. Microsoft's Bing Chat chatbot has been reported to produce hostile and offensive responses against its users.
Machine learning has been used as a strategy to update the evidence in systematic reviews and to cope with the growing reviewer burden caused by the growth of biomedical literature. While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the research findings themselves.
=== Explainability ===
Explainable AI (XAI), or Interpretable AI, or Explainable Machine Learning (XML), is artificial intelligence (AI) in which humans can understand the decisions or predictions made by the AI. It contrasts with the "black box" concept in machine learning where even its designers cannot explain why an AI arrived at a specific decision. By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively. XAI may be an implementation of the social right to explanation.
=== Overfitting ===
Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data but penalising the theory in accordance with how complex the theory is.
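The reward-fit-but-penalise-complexity idea can be written as a penalized objective; the parameter-count penalty and its weight below are illustrative stand-ins for a real complexity measure such as those used in AIC or MDL:

```python
def penalized_score(train_mse, num_parameters, complexity_weight=0.5):
    """Lower is better: training fit plus a penalty that grows with model
    size (an AIC/MDL-flavoured trade-off; the weight is a free choice)."""
    return train_mse + complexity_weight * num_parameters

def select_model(candidates, complexity_weight=0.5):
    """Pick the (name, train_mse, num_parameters) candidate with the best
    penalized score rather than the best raw training fit."""
    return min(candidates,
               key=lambda c: penalized_score(c[1], c[2], complexity_weight))
```

Under such a criterion, a gerrymandered model that fits the training data perfectly but uses many parameters can score worse than a simpler model with a slightly larger training error.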
=== Other limitations and vulnerabilities ===
Learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers often do not primarily make judgements from the spatial relationship between components of the picture, and they learn relationships between pixels that humans are oblivious to, but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in "adversarial" images that the system misclassifies.
Adversarial vulnerabilities can also result in nonlinear systems, or from non-pattern perturbations. For some systems, it is possible to change the output by only changing a single adversarially chosen pixel. Machine learning models are often vulnerable to manipulation or evasion via adversarial machine learning.
Researchers have demonstrated how backdoors can be placed undetectably into classification machine learning models (e.g., models classifying posts into categories "spam" and well-visible "not spam") that are often developed or trained by third parties. Such parties can change the classification of any input, including in cases for which a form of data/software transparency is provided, possibly including white-box access.
== Model assessments ==
The classification performance of machine learning models can be validated by accuracy estimation techniques such as the holdout method, which splits the data into a training and a test set (conventionally a 2/3 training and 1/3 test designation) and evaluates the performance of the trained model on the test set. In comparison, the K-fold cross-validation method randomly partitions the data into K subsets, and then K experiments are performed, each using 1 subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods, the bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.
In addition to overall accuracy, investigators frequently report sensitivity and specificity meaning true positive rate (TPR) and true negative rate (TNR) respectively. Similarly, investigators sometimes report the false positive rate (FPR) as well as the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. Receiver operating characteristic (ROC) along with the accompanying Area Under the ROC Curve (AUC) offer additional tools for classification model assessment. Higher AUC is associated with a better performing model.
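The rates and the AUC described above can be computed directly from labels, predictions, and scores; the following is a plain illustration rather than any specific library's implementation (the AUC here uses the rank-statistic formulation, which is equivalent to the area under the ROC curve):

```python
def rates(y_true, y_pred):
    """TPR/TNR/FPR/FNR from binary labels and binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {"TPR": tp / (tp + fn), "TNR": tn / (tn + fp),
            "FPR": fp / (fp + tn), "FNR": fn / (fn + tp)}

def auc(y_true, scores):
    """Area under the ROC curve, computed as the probability that a random
    positive is scored above a random negative (ties count one half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Note that, unlike accuracy, these rates expose the numerators and denominators separately, and the AUC summarises performance across all classification thresholds.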
== Ethics ==
=== Bias ===
Different machine learning approaches can suffer from different data biases. A machine learning system trained specifically on current customers may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on human-made data, machine learning is likely to pick up the constitutional and unconscious biases already present in society.
Systems that are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitising cultural prejudices. For example, in 1988, the UK's Commission for Racial Equality found that St. George's Medical School had been using a computer program trained from data of previous admissions staff and that this program had denied nearly 60 candidates who were found to either be women or have non-European sounding names. Using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants. Another example includes predictive policing company Geolitica's predictive algorithm that resulted in "disproportionately high levels of over-policing in low-income and minority communities" after being trained with historical crime data.
While responsible collection of data and documentation of algorithmic rules used by a system is considered a critical part of machine learning, some researchers blame lack of participation and representation of minority population in the field of AI for machine learning's vulnerability to biases. In fact, according to research carried out by the Computing Research Association (CRA) in 2021, "female faculty merely make up 16.1%" of all faculty members who focus on AI among several universities around the world. Furthermore, among the group of "new U.S. resident AI PhD graduates," 45% identified as white, 22.4% as Asian, 3.2% as Hispanic, and 2.4% as African American, which further demonstrates a lack of diversity in the field of AI.
Language models learned from data have been shown to contain human-like biases. Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases. In 2016, Microsoft tested Tay, a chatbot that learned from Twitter, and it quickly picked up racist and sexist language.
In an experiment carried out by ProPublica, an investigative journalism organisation, a machine learning algorithm used to predict recidivism rates among prisoners falsely flagged "black defendants high risk twice as often as white defendants". In 2015, Google Photos once tagged a couple of black people as gorillas, which caused controversy. The gorilla label was subsequently removed, and as of 2023, the system still could not recognise gorillas. Similar issues with recognising non-white people have been found in many other systems.
Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains. Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who said that "[t]here's nothing artificial about AI. It's inspired by people, it's created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."
=== Financial incentives ===
There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States where there is a long-standing ethical dilemma of improving health care, but also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. There is potential for machine learning in health care to provide professionals an additional tool to diagnose, medicate, and plan recovery paths for patients, but this requires these biases to be mitigated.
== Hardware ==
Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of nonlinear hidden units. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI. OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.
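The reported doubling time is consistent with the 300,000-fold figure; assuming AlexNet in mid-2012 and AlphaZero in late 2017 (approximate dates chosen here for illustration), the implied doubling time works out to a few months:

```python
import math

# A 300,000-fold growth corresponds to log2(300000) doublings of compute.
doublings = math.log2(300_000)       # roughly 18.2 doublings
months = (2017.75 - 2012.5) * 12     # assumed mid-2012 to late-2017 span
doubling_time = months / doublings   # in months, close to the reported 3.4
```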
=== Tensor Processing Units (TPUs) ===
Tensor Processing Units (TPUs) are specialised hardware accelerators developed by Google specifically for machine learning workloads. Unlike general-purpose GPUs and FPGAs, TPUs are optimised for tensor computations, making them particularly efficient for deep learning tasks such as training and inference. They are widely used in Google Cloud AI services and large-scale machine learning models like Google's DeepMind AlphaFold and large language models. TPUs leverage matrix multiplication units and high-bandwidth memory to accelerate computations while maintaining energy efficiency. Since their introduction in 2016, TPUs have become a key component of AI infrastructure, especially in cloud-based environments.
=== Neuromorphic computing ===
Neuromorphic computing refers to a class of computing systems designed to emulate the structure and functionality of biological neural networks. These systems may be implemented through software-based simulations on conventional hardware or through specialised hardware architectures.
==== Physical neural networks ====
A physical neural network is a specific type of neuromorphic hardware that relies on electrically adjustable materials, such as memristors, to emulate the function of neural synapses. The term "physical neural network" highlights the use of physical hardware for computation, as opposed to software-based implementations. It broadly refers to artificial neural networks that use materials with adjustable resistance to replicate neural synapses.
=== Embedded machine learning ===
Embedded machine learning is a sub-field of machine learning where models are deployed on embedded systems with limited computing resources, such as wearable computers, edge devices and microcontrollers. Running models directly on these devices eliminates the need to transfer and store data on cloud servers for further processing, thereby reducing the risk of data breaches, privacy leaks and theft of intellectual property, personal data and business secrets. Embedded machine learning can be achieved through various techniques, such as hardware acceleration, approximate computing, and model optimisation. Common optimisation techniques include pruning, quantisation, knowledge distillation, low-rank factorisation, network architecture search, and parameter sharing.
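As an illustration of one of the optimisation techniques listed, post-training quantisation can map 32-bit floats to 8-bit integers with a scale and zero point; this affine scheme is a common textbook formulation, not any specific framework's API:

```python
def quantize_int8(weights):
    """Affine (scale + zero-point) post-training quantisation to signed 8-bit."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0  # guard against all-equal weights
    zero_point = -128 - round(lo / scale)
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover an approximation of the original float weights."""
    return [(qi - zero_point) * scale for qi in q]
```

This shrinks storage roughly fourfold relative to 32-bit floats at the cost of a bounded rounding error (at most half a quantisation step per weight), which is often acceptable on microcontroller-class devices.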
== Software ==
Software suites containing a variety of machine learning algorithms include the following:
=== Free and open-source software ===
=== Proprietary software with free and open-source editions ===
KNIME
RapidMiner
=== Proprietary software ===
== Journals ==
Journal of Machine Learning Research
Machine Learning
Nature Machine Intelligence
Neural Computation
IEEE Transactions on Pattern Analysis and Machine Intelligence
== Conferences ==
AAAI Conference on Artificial Intelligence
Association for Computational Linguistics (ACL)
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)
International Conference on Computational Intelligence Methods for Bioinformatics and Biostatistics (CIBB)
International Conference on Machine Learning (ICML)
International Conference on Learning Representations (ICLR)
International Conference on Intelligent Robots and Systems (IROS)
Conference on Knowledge Discovery and Data Mining (KDD)
Conference on Neural Information Processing Systems (NeurIPS)
== See also ==
Automated machine learning – Process of automating the application of machine learning
Big data – Extremely large or complex datasets
Deep learning – Branch of ML concerned with artificial neural networks
Differentiable programming – Programming paradigm
List of datasets for machine-learning research
M-theory (learning framework)
Machine unlearning
Solomonoff's theory of inductive inference – A mathematical theory
== References ==
== Sources ==
Domingos, Pedro (22 September 2015). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books. ISBN 978-0465065707.
Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann. ISBN 978-1-55860-467-4. Archived from the original on 26 July 2020. Retrieved 18 November 2019.
Poole, David; Mackworth, Alan; Goebel, Randy (1998). Computational Intelligence: A Logical Approach. New York: Oxford University Press. ISBN 978-0-19-510270-3. Archived from the original on 26 July 2020. Retrieved 22 August 2020.
Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2.
== Further reading ==
== External links ==
International Machine Learning Society
mloss is an academic database of open-source machine learning software. | Wikipedia/Applications_of_machine_learning |
Artificial intelligence (AI) has many applications in warfare, including in communications, intelligence, and munitions control.
== Uses ==
AI can enhance command and control, communications, sensors, integration and interoperability. AI technologies enable the coordination of sensors and effectors, threat detection and identification, the marking of enemy positions, target acquisition, and the coordination and deconfliction of distributed Joint Fires between networked combat vehicles, both human-operated and autonomous.
AI has been used in military operations in Iraq, Syria, Ukraine and Israel.
=== Autonomous armament ===
Military drones capable of autonomous action are in wide use.
=== Command and control ===
In 2024, a Chinese laboratory at the Joint Operations College of the National Defense University in Shijiazhuang created an AI military commander for use in large-scale war simulations in the role of the commander-in-chief.
In 2024, the Ukrainian Army developed autonomous kamikaze drones in order to render Russian interference during flight ineffective.
=== Military intelligence ===
In 2023, the United States Department of Defense tested generative AI based on large language models to digitize and integrate data across the military.
In the Gaza war, Israel used two AI systems to generate targets to strike: Habsora (translated: "the gospel") was used to compile a list of buildings to target, while "Lavender" produced a list of people; it generated 37,000 names. The list of buildings to target included the private homes of Gazans suspected of affiliation with Hamas operatives. The combination of AI targeting technology with a policy shift away from avoiding civilian targets resulted in unprecedented numbers of civilian deaths. IDF officials say the program addresses the air force's previous problem of running out of targets; using Habsora, officials say, the homes of suspected and junior Hamas members significantly expand the "AI target bank". An internal source described the process as a "mass assassination factory".
In 2024, the U.S. military trained artificial intelligence to identify airstrike targets during its operations in Iraq and Syria.
== Global trends ==
Various countries are researching and deploying AI military applications, in what has been termed the "artificial intelligence arms race". Ongoing research is focused on intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles.
Worldwide annual military spending on robotics rose from US$5.1 billion in 2010 to US$7.5 billion in 2015.
In November 2023, US Vice President Kamala Harris disclosed a declaration signed by 31 nations to set guardrails for the military use of AI. The commitments include using legal reviews to ensure the compliance of military AI with international laws, and being cautious and transparent in the development of this technology.
Many AI researchers try to avoid military applications, with guardrails to prevent military applications integrated into most mainstream large language models.
== In popular culture ==
Military artificial intelligence systems have appeared in many works of fiction, often as antagonists.
=== Film ===
The Terminator franchise
The Matrix franchise
=== Literature ===
Legends of Dune trilogy by Brian Herbert
== References == | Wikipedia/Military_applications_of_artificial_intelligence |
Narrative Science was a natural language generation company based in Chicago, Illinois, that specialized in data storytelling. As of December 17, 2021, Narrative Science was acquired by Salesforce and has been integrated into Salesforce's Tableau Software.
== History ==
Narrative Science was founded in 2010 in Evanston, Illinois, after a student project in the Intelligent Information Lab at Northwestern University jump-started the NLG technology. The first prototype of the company's technology went by the project name StatsMonkey and was developed in the laboratory by Kris Hammond, Larry Birnbaum, Nick Allen and John Templon. StatsMonkey was created to write stories based on data automatically, beginning with baseball stories. These baseball recaps drew on game data such as players, win probability and game score. Narrative Science licensed StatsMonkey and the related intellectual property from Northwestern and began commercial operations in early 2010. The company later changed direction: it no longer focused on the journalistic capabilities of its technology but on how the same technology could be used in the business world. This led to the development of a natural language generation platform called Quill, which analyzes structured data and automatically generates intelligent narratives for business users who are not data fluent. Narrative Science had several investors, including SAP Ventures and In-Q-Tel, the investment arm of the Central Intelligence Agency. In 2014, the Chicago company raised another $10 million in equity financing, led by customer USAA, for a total of $32 million raised since the company's inception. In 2020, Narrative Science launched Data Storytelling for Good, its non-profit branch, which provides its products for free to organizations doing good in their communities.
On November 15, 2021, Narrative Science announced an agreement to be acquired by Salesforce. The deal closed on December 17, 2021, and Narrative Science was folded into Tableau. In the announcement of the close, Salesforce indicated that Narrative Science's products would no longer be sold on a stand-alone basis.
== Recognition ==
In 2017, Fortune listed Narrative Science as one of the 50 companies leading the artificial intelligence revolution. In 2015, CNBC named Narrative Science to their Disruptor 50 list.
Gartner named Narrative Science as one of the “Cool Vendors in Smart Machines” in 2014.
In 2013, the company was named to the Red Herring Top 100 for North America, which highlights promising startups in Asia, Europe, and the Americas.
Narrative Science won a 2013 Edison Award for Innovative Services in Collaboration and Knowledge Management.
In 2018, Narrative Science was part of the World Economic Forum's Technology Pioneers.
In 2018, Narrative Science was named Most Innovative Company by Crain's Chicago Business.
== Competitors ==
According to Gartner's 2019 "Market Guide for NLG", the main NLG companies are (in alphabetical order): Arria NLG, Automated Insights, AX Semantics, Narrative Science, vPhrase and Yseop. Other similar companies in the area of natural language generation include Smartologic, Retresco, United Robots and Linguastat.
== Criticism ==
The company received some early criticism from journalists speculating that Narrative Science was attempting to eliminate the jobs of writers, particularly in sports and finance. Critics also argue that biases and assumptions in original data sets can lead to reinforced bias in the stories generated by natural language processors, such as Narrative Science. A CBS article compared artificially generated journalism in the financial sector to the property market bubble, as it leads to "everyone making investments in the same way for the same reasons". The article claimed that computer-generated narratives have the "potential to amplify biases and assumptions, but at far greater speed and on a far wider scale than anything written by humans."
An article from the Columbia Journalism School also criticized the limitations of "robo-journalism" software, as "it can't assess the damage on the ground, can't interview experts, and can't discern the relative newsworthiness of various aspects of the story" and therefore lacks a necessary human element.
== See also ==
Narrative Inquiry
Natural Language Processing
== References == | Wikipedia/Narrative_Science |
A 1.58-bit Large Language Model (1.58-bit LLM, also ternary LLM) is a version of a transformer large language model with weights using only three values: -1, 0, and +1. This restriction theoretically allows the model to replace costly multiplications with additions and reduce the storage memory. Since the end-task performance and perplexity of the 1.58-bit LLMs, at least for smaller model sizes (up to 3-4B parameters), are close to their "full precision" (16-bit FP16 or BF16) counterparts, this design allows reaching the same artificial intelligence goals with much lower hardware requirements, latency, and training effort.
The name comes from the fact that a single trit, a ternary equivalent of a bit that can take the values {-1, 0, 1}, carries log₂ 3 ≈ 1.58 bits of information. 1.58-bit LLMs are also called 1-bit LLMs (although true 1-bit models also exist).
== BitNet ==
In 2024, Ma et al., researchers at Microsoft, declared that their 1.58-bit model BitNet b1.58 is comparable in performance to the 16-bit Llama 2 and opens the era of 1-bit LLMs. BitNet's creators did not use post-training quantization of weights but instead relied on a new BitLinear transform that replaced the nn.Linear layer of the traditional transformer design.
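The absmean quantisation described in the BitNet b1.58 paper scales weights by their mean absolute value and rounds them into {-1, 0, +1}; the sketch below follows that published formulation (function and variable names are ours) and shows why the resulting dot product needs no per-weight multiplications:

```python
def absmean_ternary(weights, eps=1e-8):
    """Absmean weight quantisation: scale by the mean absolute value of the
    weights, then round and clip each entry into {-1, 0, +1}."""
    gamma = sum(abs(w) for w in weights) / len(weights)
    scale = gamma + eps
    ternary = [max(-1, min(1, round(w / scale))) for w in weights]
    return ternary, scale

def ternary_dot(ternary, scale, x):
    """With ternary weights the dot product needs only additions and
    subtractions; a single multiply by `scale` happens once at the end."""
    acc = 0.0
    for t, xi in zip(ternary, x):
        if t == 1:
            acc += xi
        elif t == -1:
            acc -= xi
    return acc * scale
```

Zero-valued weights drop out of the accumulation entirely, which is the source of both the memory savings and the reduced arithmetic cost claimed for 1.58-bit models.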
In 2025, Microsoft researchers released an open-weights, open-inference-code model, BitNet b1.58 2B4T, demonstrating performance competitive with full-precision models at 2B parameters and 4T training tokens.
== Critique ==
Some researchers point out that the scaling laws of large language models favor low-bit weights only in the case of undertrained models. As the number of training tokens increases, the deficiencies of low-bit quantization surface.
== References ==
== Sources ==
Ma, Shuming; Wang, Hongyu; Ma, Lingxiao; Wang, Lei; Wang, Wenhui; Huang, Shaohan; Dong, Li; Wang, Ruiping; Xue, Jilong; Wei, Furu (2024-02-27). "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits". arXiv:2402.17764 [cs.CL].
Ma, Shuming; Wang, Hongyu; Huang, Shaohan; Zhang, Xingxing; Hu, Ying; Song, Ting; Xia, Yan; Wei, Furu (2025). "BitNet b1.58 2B4T Technical Report". arXiv:2504.12285 [cs.CL].
Friha, Othmane; Amine Ferrag, Mohamed; Kantarci, Burak; Cakmak, Burak; Ozgun, Arda; Ghoualmi-Zine, Nassira (2024). "LLM-Based Edge Intelligence: A Comprehensive Survey on Architectures, Applications, Security and Trustworthiness". IEEE Open Journal of the Communications Society. 5: 5799–5856. doi:10.1109/OJCOMS.2024.3456549. ISSN 2644-125X.
Hutson, Matthew (2024-05-30). "1-bit LLMs Could Solve AI's Energy Demands". IEEE Spectrum. Retrieved 2025-04-22.
Huyen, Chip (2024-12-04). AI Engineering. "O'Reilly Media, Inc.". ISBN 978-1-0981-6627-4. Retrieved 2025-04-22.
Kumar, Tanishq; Ankner, Zachary; Spector, Benjamin F.; Bordelon, Blake; Muennighoff, Niklas; Paul, Mansheej; Pehlevan, Cengiz; Ré, Christopher; Raghunathan, Aditi (2024). "Scaling Laws for Precision". arXiv:2411.04330 [cs.LG].
Morales, Jowi (2025-04-17). "Microsoft researchers build 1-bit AI LLM with 2B parameters". Tom's Hardware. Retrieved 2025-04-21.
Ouyang, Xu; Ge, Tao; Hartvigsen, Thomas; Zhang, Zhisong; Mi, Haitao; Yu, Dong (2024). "Low-Bit Quantization Favors Undertrained LLMS: Scaling Laws for Quantized LLMS with 100T Training Tokens". arXiv:2411.17691 [cs.LG].
Wang, Hongyu; Ma, Shuming; Dong, Li; Huang, Shaohan; Wang, Huaijie; Ma, Lingxiao; Yang, Fan; Wang, Ruiping; Wu, Yi; Wei, Furu (2023). "BitNet: Scaling 1-bit Transformers for Large Language Models". arXiv:2310.11453 [cs.CL]. | Wikipedia/1.58-bit_large_language_model |
Language model benchmarks are standardized tests designed to evaluate the performance of language models on various natural language processing tasks. These tests are intended for comparing different models' capabilities in areas such as language understanding, generation, and reasoning.
Benchmarks generally consist of a dataset and corresponding evaluation metrics. The dataset provides text samples and annotations, while the metrics measure a model's performance on tasks like question answering, text classification, and machine translation. These benchmarks are developed and maintained by academic institutions, research organizations, and industry players to track progress in the field.
== Overview ==
=== Types ===
Benchmarks may be described by the following adjectives, not mutually exclusive:
Classical: These tasks are studied in natural language processing, even before the advent of deep learning. Examples include the Penn Treebank for testing syntactic and semantic parsing, as well as bilingual translation benchmarked by BLEU scores.
Question answering: These tasks have a text question and a text answer, often multiple-choice. They can be open-book or closed-book. Open-book QA resembles reading comprehension questions, with relevant passages included as annotation in the question, in which the answer appears. Closed-book QA includes no relevant passages. Closed-book QA is also called open-domain question-answering. Before the era of large language models, open-book QA was more common, and understood as testing information retrieval methods. Closed-book QA became common since GPT-2 as a method to measure knowledge stored within model parameters.
Omnibus: An omnibus benchmark combines many benchmarks, often previously published. It is intended as an all-in-one benchmarking solution.
Reasoning: These tasks are usually in the question-answering format, but are intended to be more difficult than standard question answering.
Multimodal: These tasks require processing not only text, but also other modalities, such as images and sound. Examples include OCR and transcription.
Agency: These tasks are for a language-model–based software agent that operates a computer for a user, such as editing images, browsing the web, etc.
Adversarial: A benchmark is "adversarial" if the items in the benchmark are picked specifically so that certain models do badly on them. Adversarial benchmarks are often constructed after SOTA models have saturated a benchmark, to renew the benchmark. A benchmark is "adversarial" only at a certain moment in time, since what is adversarial may cease to be adversarial as newer SOTA models appear.
The boundary between a benchmark and a dataset is not sharp. Generally, a dataset contains three "splits": training, test, validation. Both the test and validation splits are essentially benchmarks. In general, a benchmark is distinguished from a test/validation dataset in that a benchmark is typically intended to be used to measure the performance of many different models that are not trained specifically for doing well on the benchmark, while a test/validation set is intended to be used to measure the performance of models trained specifically on the corresponding training set. In other words, a benchmark may be thought of as a test/validation set without a corresponding training set.
Conversely, certain benchmarks may be used as training sets, such as the English Gigaword or the One Billion Word Benchmark, which in modern terms is simply the negative log-likelihood loss on a pretraining set of 1 billion words. Indeed, the distinction between benchmarks and datasets in language modeling became sharper after the rise of the pretraining paradigm.
=== Lifecycle ===
Generally, the life cycle of a benchmark consists of the following steps:
Inception: A benchmark is published. It can be simply given as a demonstration of the power of a new model (implicitly) that others then picked up as a benchmark, or as a benchmark that others are encouraged to use (explicitly).
Growth: More papers and models use the benchmark, and the performance on the benchmark grows.
Maturity, degeneration or deprecation: A benchmark may be saturated, after which researchers move on to other benchmarks. Progress on the benchmark may also be neglected as the field moves to focus on other benchmarks.
Renewal: A saturated benchmark can be upgraded to make it no longer saturated, allowing further progress.
=== Construction ===
Like datasets, benchmarks are typically constructed by several methods, individually or in combination:
Web scraping: Ready-made question-answer pairs may be scraped online, such as from websites that teach mathematics and programming.
Conversion: Items may be constructed programmatically from scraped web content, such as by blanking out named entities from sentences, and asking the model to fill in the blank. This was used for making the CNN/Daily Mail Reading Comprehension Task.
Crowd sourcing: Items may be constructed by paying people to write them, such as on Amazon Mechanical Turk. This was used for making the MCTest.
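The conversion method above can be sketched as follows. This is an illustrative example only: the `make_cloze` function, the `@placeholder` token, and the sample sentence are assumptions for demonstration, not the actual CNN/Daily Mail pipeline (which anonymized entities across whole articles).

```python
import re

def make_cloze(sentence, entities):
    """Create cloze-style items by blanking out one named entity at a time.

    `entities` is assumed to be a pre-computed list of entity strings
    (e.g. from a named-entity recognizer); each yields one question whose
    answer is the masked entity.
    """
    items = []
    for ent in entities:
        # Replace only whole-word occurrences of the entity with a placeholder.
        question = re.sub(r'\b' + re.escape(ent) + r'\b', '@placeholder', sentence)
        if question != sentence:
            items.append({"question": question, "answer": ent})
    return items

items = make_cloze("Ada Lovelace worked with Charles Babbage.",
                   ["Ada Lovelace", "Charles Babbage"])
# One cloze question per entity; the model must recover the masked name.
```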
=== Evaluation ===
Generally, benchmarks are fully automated. This limits the questions that can be asked. For example, with mathematical questions, "proving a claim" would be difficult to automatically check, while "calculate an answer with a unique integer answer" would be automatically checkable. With programming tasks, the answer can generally be checked by running unit tests, with an upper limit on runtime.
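Execution-based checking with a runtime limit, as described above, can be sketched as follows; `check_solution` and its file-based harness are illustrative assumptions, not any particular benchmark's actual harness.

```python
import os
import subprocess
import sys
import tempfile

def check_solution(candidate_code, test_code, timeout_s=5.0):
    """Run unit tests against candidate code in a subprocess, with an
    upper limit on runtime. Returns True iff all tests pass in time."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n" + test_code)
        path = f.name
    try:
        # A nonzero exit code signals a failed assertion or a crash.
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout_s)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)

ok = check_solution("def add(a, b):\n    return a + b",
                    "assert add(2, 3) == 5")
```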
The benchmark scores are of the following kinds:
For multiple choice or cloze questions, common scores are accuracy (frequency of correct answer), precision, recall, F1 score, etc.
pass@n: The model is given n attempts to solve each problem. If any attempt is correct, the model earns a point. The pass@n score is the model's average score over all problems.
k@n: The model makes n attempts to solve each problem, but only k attempts out of them are selected for submission. If any submission is correct, the model earns a point. The k@n score is the model's average score over all problems.
cons@n: The model is given n attempts to solve each problem. If the most common answer is correct, the model earns a point. The cons@n score is the model's average score over all problems. Here "cons" stands for "consensus" or "majority voting".
The pass@n score can be estimated more accurately by making N > n attempts and using the unbiased estimator 1 − C(N−c, n)/C(N, n), where C denotes the binomial coefficient and c is the number of correct attempts.
For less well-formed tasks, where the output can be any sentence, the following scores are commonly used: BLEU, ROUGE, METEOR, NIST, word error rate, LEPOR, CIDEr, SPICE, etc.
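The unbiased pass@n estimator above can be computed as follows (the function name is illustrative; the formula is the one stated above, from the HumanEval evaluation procedure):

```python
from math import comb

def pass_at_n(N, c, n):
    """Unbiased estimate of pass@n from N total attempts,
    of which c were correct."""
    if N - c < n:
        # Every possible subset of n attempts contains a correct one.
        return 1.0
    # 1 - C(N-c, n) / C(N, n): probability a random n-subset
    # of the N attempts contains at least one correct attempt.
    return 1.0 - comb(N - c, n) / comb(N, n)

# With 4 attempts of which 2 were correct, pass@1 is 0.5.
```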
=== Issues ===
error: Some benchmark answers may be wrong.
ambiguity: Some benchmark questions may be ambiguously worded.
subjective: Some benchmark questions may not have an objective answer at all. This problem generally prevents creative writing benchmarks. Similarly, this prevents benchmarking writing proofs in natural language, though benchmarking proofs in a formal language is possible.
open-ended: Some benchmark questions may not have a single answer of a fixed size. This problem generally prevents programming benchmarks from using more natural tasks such as "write a program for X", and instead uses tasks such as "write a function that implements specification X".
inter-annotator agreement: Some benchmark questions may not be fully objective, such that even people would not agree 100% on what the answer should be. This is common in natural language processing tasks, such as syntactic annotation.
shortcut: Some benchmark questions may be easily solved by an "unintended" shortcut. For example, in the SNLI benchmark, having a negative word like "not" in the second sentence is a strong signal for the "Contradiction" category, regardless of what the sentences actually say.
contamination/leakage: Some benchmark questions may have answers already present in the training set. Also called "training on the test set". Some benchmarks (such as Big-Bench) may use a "canary string", so that documents containing the canary string can be voluntarily removed from the training set.
saturation: As time goes on, many models reach the highest performance level practically possible, and so the benchmark can no longer differentiate these models. For example, GLUE had been saturated, necessitating SuperGLUE.
Goodhart's law: If new models are designed or selected to score highly on a benchmark, the benchmark may cease to be a good indicator for model quality.
cherry picking: New model publications may only point to benchmark scores on which the new model performed well, avoiding benchmark scores that it did badly on.
== List of benchmarks ==
=== General language modeling ===
Essentially any dataset can be used as a benchmark for statistical language modeling, with perplexity (or near-equivalently, negative log-likelihood and bits per character, as in Shannon's original estimate of the entropy of the English language) used as the benchmark score. For example, the original GPT-2 announcement included the model's scores on WikiText-2, enwik8, text8, and WikiText-103 (all standard language datasets made from the English Wikipedia).
However, there had been datasets more commonly used, or specifically designed, for use as a benchmark.
One Billion Word Benchmark: The negative log likelihood loss on a dataset of 1 billion words.
Penn Treebank: The error or negative log likelihood loss for part-of-speech tags on a dataset of text.
Paloma (Perplexity Analysis for Language Model Assessment): A collection of English and code texts, divided into 546 domains. Used to measure the perplexity of a model on specific domains.
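The perplexity and bits-per-character scores used by these benchmarks can be derived from per-token log-likelihoods; a minimal sketch (function names are illustrative):

```python
import math

def perplexity(token_log_probs):
    """Perplexity: exp of the average negative log-likelihood,
    given natural-log probabilities of each token."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

def bits_per_character(token_log_probs, num_chars):
    """Near-equivalent score: total negative log-likelihood
    converted to bits, divided by the number of characters."""
    return -sum(token_log_probs) / math.log(2) / num_chars

# A model assigning probability 1/4 to each of 4 tokens has perplexity 4.
```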
=== General language understanding ===
See for a review of over 100 such benchmarks.
WSC (Winograd schema challenge): 273 sentences with ambiguous pronouns. The task is to determine what the pronoun refers to.
WinoGrande: A larger version of WSC with 44,000 items. Designed to be adversarial to 2019 SOTA, since the original had been saturated. This dataset consists of fill-in-the-blank style sentences, as opposed to the pronoun format of previous datasets.
CoLA (Corpus of Linguistic Acceptability): 10,657 English sentences from published linguistics literature that were manually labeled either as grammatical or ungrammatical.
SNLI (Stanford Natural Language Inference): 570K human-written English sentence pairs manually labeled for balanced classification with 3 labels: "entailment", "contradiction", and "neutral".
WMT 2014 (Workshop on Statistical Machine Translation): a collection of 4 machine translation benchmarks at the Ninth Workshop on Statistical Machine Translation. The Attention Is All You Need paper used it as a benchmark.
MultiNLI (Multi-Genre Natural Language Inference): Similar to SNLI, with 433K English sentence pairs from ten distinct genres of written and spoken English.
CNN/Daily Mail Reading Comprehension Task: Articles from CNN (380K training, 3.9K development, 3.2K test) and Daily Mail (879K training, 64.8K development, 53.2K test) were scraped. The bullet point summaries accompanying the news articles were used. One entity in a bullet point was replaced with a placeholder, creating a cloze-style question. The goal is to identify the masked entity from the article.
SWAG (Situations With Adversarial Generations): 113K descriptions of activities or events, each with 4 candidate endings; the model must choose the most plausible ending. Adversarial against a few shallow language models (MLP, bag of words, one-layer CNN, etc).
HellaSwag (Harder Endings, Longer contexts, and Low-shot Activities for SWAG): A harder version of SWAG. Contains 10K items.
RACE (ReAding Comprehension Examinations): 100,000 reading comprehension problems in 28,000 passages, collected from English exams for Chinese middle and high school students aged 12 to 18.
LAMBADA: 10,000 narrative passages from books, each with a missing last word that humans can guess if given the full passage but not from the last sentence alone.
=== General language generation ===
NaturalInstructions: 61 distinct tasks with human-authored instructions, and 193k task instances (input-output pairs). The instructions are obtained from crowdsourcing instructions used to create existing NLP datasets and mapped to a unified schema.
Super-NaturalInstructions: 1,616 diverse NLP tasks and their expert-written instructions, and 5M task instances.
IFEval (Instruction-Following Eval): 541 instructions to be followed, each containing at least one verifiable constraint, such as "mention the keyword of AI at least 3 times".
Chatbot Arena: Human users vote between two outputs from two language models. An Elo rating for each language model is computed based on these human votes.
MT-Bench (multi-turn benchmark): An automated version of Chatbot Arena where LLMs replace humans in generating votes.
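A rating update of the kind described for Chatbot Arena can be sketched as a standard Elo update after one vote. This is a simplified illustration: the published Chatbot Arena leaderboard fits ratings from all votes jointly (e.g. via a Bradley–Terry model) rather than updating online, and the function name and K-factor are assumptions.

```python
def elo_update(r_a, r_b, winner, k=32):
    """One Elo update after a human vote between models A and B.
    `winner` is 'a' or 'b'."""
    # Expected score of A under the logistic Elo model.
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    score_a = 1.0 if winner == 'a' else 0.0
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new

# Two equally rated models: the winner gains k/2 = 16 points.
a, b = elo_update(1000, 1000, 'a')
```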
=== Open-book question-answering ===
MCTest (Machine Comprehension Test): 500 fictional stories, each with 4 multiple-choice questions (with at least 2 requiring multi-sentence understanding), designed to be understandable by a 7-year-old. The vocabulary was limited to approximately 8,000 words probably known by a 7-year-old. The stories were written by workers on Amazon Mechanical Turk.
SQuAD (Stanford Question Answering Dataset): 100,000+ questions posed by crowd workers on 500+ Wikipedia articles. The task is, given a passage from Wikipedia and a question, to find a span of the passage that answers the question.
SQuAD 2.0: 50,000 unanswerable questions that look similar to SQuAD questions. Every such unanswerable question must be answered with an empty string. Written by crowd workers.
ARC (AI2 Reasoning Challenge): Multiple choice questions, with a Challenge Set (2590 questions) and an Easy Set (5197 questions). Designed specifically to be adversarial against models that had saturated SNLI and SQuAD.
CoQA (Conversational QA): 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains.
WebQuestions: 6,642 question-answer pairs designed to be answerable with knowledge present in the 2013 version of Freebase.
Natural Questions: 323,045 items, each containing a question that had been searched on Google, a Wikipedia page relevant for answering the question, a long answer (typically a paragraph), and a short answer (one or more entities) if present on the page, or "null" if no long/short answer is present.
TriviaQA: 650K question-answer-evidence triples. Includes 95K question-answer pairs scraped from 14 trivia and quiz-league websites, and (on average 6) evidence documents for each pair, gathered by searching with Bing and Wikipedia.
OpenBookQA: 5960 multiple choice questions, each coming with an elementary level science fact (the "open book"). There are 1329 such facts in total.
SearchQA: 140,461 question-answer pairs from the J! Archive, with each pair augmented with (on average 50) snippets and urls obtained by searching the question on Google.
HotpotQA: 113K multi-hop questions that require reading multiple Wikipedia-based passages to answer. They were produced by showing crowd workers multiple supporting context documents and asking them to produce questions that require reasoning about all of the documents.
StrategyQA: 2,780 questions annotated with relevant passages from Wikipedia, such that the questions require multi-hop reasoning over the passages to answer. For example, "Did Aristotle use a laptop?" is annotated with passages from the Wikipedia pages for "laptop" and "Aristotle".
DROP (Discrete Reasoning Over the content of Paragraphs): 96,567 questions along with Wikipedia passages, especially from narratives rich in numerical information (like sports summaries and history), often involving multi-step numerical reasoning over several text spans. Adversarial against 2019 SOTA.
GRS-QA: Graph Reasoning-Structured Question Answering Dataset. A dataset designed to evaluate question answering models on graph-based reasoning tasks.
ChartQA: 32,719 questions about 20,882 charts crawled from four diverse online sources (Statista, Pew Research Center, Our World In Data, OECD). Of these, 9,608 were human-written (in ChartQA-H), and 23,111 were machine-generated (in ChartQA-M). The answers are either verbatim texts from the chart or integers calculated based on the chart's data.
DocVQA: multimodal, 50,000 questions on 12,767 document images, sectioned from 6,071 distinct documents. The documents were sourced from 5 industries (tobacco, food, drug, fossil fuel, chemical) of the UCSF Industry Documents Library, mostly from the 1940-2010 period. Documents with structured elements like tables, forms, lists, and figures were prioritized. The answers are verbatim extracts from the document text.
=== Closed-book question-answering ===
C-Eval (Chinese Eval): 13,948 multiple-choice questions in 52 subjects at 4 levels of difficulty. In Chinese.
TruthfulQA: 817 questions in health, law, finance and politics with common misconceptions. Adversarial against GPT-3 and T5.
PIQA (Physical Interaction QA): 17951 two-choice questions. Each question gives a goal (like separating egg yolk from egg white with a water bottle), and 2 choices for accomplishing it.
MedQA: 61097 questions from professional medical board exams, in English, Simplified Chinese, Traditional Chinese.
ScienceQA: 21,208 multiple-choice questions in natural science, social science, and linguistics, with difficulty levels from grade 1 to grade 12, sourced from elementary and high school science curricula. Some questions require reading a diagram. Most questions are annotated with textual lectures and explanations.
SimpleQA: 4,326 short questions that are answerable with knowledge as of 2023. Each answer is graded as either "correct", "incorrect", or "not attempted". Adversarial against GPT-4 specifically.
RealWorldQA: 765 multimodal multiple-choice questions. Each containing an image and a question. Designed to test spatial understanding. Images are drawn from various real-world scenarios, including those captured from vehicles.
OpenEQA (Open Embodied QA): over 1,600 questions about videos, scans of real-world environments, and simulations.
=== Omnibus ===
Some benchmarks are "omnibus", meaning they are made by combining several previous benchmarks.
GLUE (General Language Understanding Evaluation): collection of 9 benchmarks designed for testing general language understanding. The tasks are in the format of sentence- or sentence-pair. There are over 1M items.
SuperGLUE: An update to GLUE. Designed to be still challenging to the SOTA models of the time (2019) since the original had been saturated. Includes 8 additional tasks (e.g. logical reasoning, commonsense inference, coreference resolution).
Big-Bench (Beyond the Imitation Game): A benchmark collection of 204 tasks. A particular subset of 23 tasks is called BBH (Big-Bench Hard). An adversarial variant of BBH is called BBEH (Big-Bench Extra Hard), made by replacing each of the 23 tasks from BBH with a similar but adversarial variant.
MMLU (Measuring Massive Multitask Language Understanding): 16,000 multiple-choice questions spanning 57 academic subjects including mathematics, philosophy, law, and medicine. Upgraded to MMLU-Pro, which increases the number of choices from 4 to 10, eliminates the trivial and noisy questions of MMLU, and adds harder problems.
MMMLU (Multilingual MMLU): The test set of MMLU, translated into 14 languages by professional human translators.
CMMLU (Chinese MMLU): 1,528 multiple-choice questions across 67 subjects, 16 of which are "China-specific", like Classical Chinese. Some data collected from non-publicly available materials, mock exam questions, and questions from quiz shows to avoid contamination. More than 80% of the data was crawled from PDFs after OCR.
MMMU (Massive Multi-discipline Multimodal Understanding): A vision-language version of MMLU. 11550 questions collected from college exams, quizzes, and textbooks, covering 30 subjects. The questions require image-understanding to solve. Includes multiple-choice questions and open-ended QA (which are scored by regex extraction). Human expert baseline is 89%.
MMMU-Pro: 1,730 multiple-choice multimodal questions in the same format as MMMU, designed to be adversarial against text-only models. Some problems in MMMU turned out to be answerable without looking at the images, necessitating MMMU-Pro. Each question has 10 choices and is presented in both text-image format and screenshot/photo format.
MMT-Bench: A comprehensive benchmark designed to assess LVLMs across massive multimodal tasks requiring expert knowledge and deliberate visual recognition, localization, reasoning, and planning. Comprises 31,325 meticulously curated multi-choice visual questions from various multimodal scenarios such as vehicle driving and embodied navigation, covering 32 core meta-tasks and 162 subtasks in multimodal understanding.
=== Agency ===
GAIA: 450 questions with unambiguous answers that require information that can be obtained by browsing the Internet, requiring different levels of tooling and autonomy to solve. Divided into 3 difficulty levels.
WebArena: 241 mock-up websites based on real-world websites (Reddit, GitLab, Magento's admin portal, etc), and 812 tasks to be performed on the websites. The tasks include information-seeking, site navigation, and content and configuration operation.
Mind2Web: 2,350 tasks collected from 137 websites, and crowdsourced action sequences. The task is to reproduce the action sequence.
OSWorld: 369 multimodal computer-using tasks, involving multiple real web and desktop apps and OS file I/O. In both Windows and Ubuntu. Each task includes an initial state setup configuration, and is tested by an execution-based evaluation script.
Windows Agent Arena: 154 multimodal tasks with the same format as OSWorld. Only in Windows.
WebVoyager: 643 multimodal tasks based on 15 popular websites. Evaluation is by screenshotting the action sequence and asking a vision language model to judge.
BFCL (Berkeley Function-Calling Leaderboard): The task is to write API calls according to a specification. Released in 3 versions, with 1760, 2251, and 1000 items respectively. Some calls are evaluated by parsing into an AST and comparing against the reference answer, while others are evaluated by calling and comparing the response against the reference response. Includes Python, Java, JavaScript, SQL, and REST API.
TAU-bench (Tool-Agent-User benchmark, also written as τ-bench): Two environments (retail, airline booking) that test for an agent to fulfill user instructions, interactively over multiple turns of dialogue. The user is simulated by a language model.
terminal-bench: A collection of complex tasks in the Linux terminal.
=== Context length ===
Some benchmarks were designed specifically to test for processing continuous text that is very long.
Needle in a haystack tests (NIH): This is not a specific benchmark, but a method for benchmarking context lengths. In this method, a long context window is filled with text, such as Paul Graham's essays, and a random statement is inserted. The task is to answer a question about the inserted statement.
Long Range Arena: 6 synthetic tasks that required 1K to 16K tokens of context length to solve.
NoLiMa: Long-Context Evaluation Beyond Literal Matching. The benchmark assesses long-context models beyond simple keyword matching. Specifically, the words in the question have minimal or no direct lexical overlap with the words in the "needle" sentence. The "haystacks" are 10 open-licensed books.
L-Eval: 2,000+ human-labeled query-response pairs over 508 long documents in 20 tasks, including diverse task types, domains, and input length (3K—200K tokens).
InfiniteBench: 3946 items in 12 tasks from 5 domains (retrieval, code, math, novels, and dialogue) with context lengths exceeding 100K tokens.
ZeroSCROLLS: 4,378 items in 10 tasks. Includes 6 tasks from SCROLLS and introduces 4 new datasets. Named "zero" because it was designed for zero-shot learning during the early days of the pretraining paradigm, when zero-shot capability was uncommon.
LongBench: 4,750 tasks on 21 datasets across 6 task categories in both English and Chinese, with an average length of 6,711 words (English) and 13,386 characters (Chinese). Updated with LongBench v2 that contained 503 more tasks, that require a context length ranging from 8K to 2M words, with the majority under 128K.
RULER: 13 tasks in 4 categories (retrieval, multi-hop, aggregation, question answering). Each task is specified by a program which can generate arbitrarily long instances of each task on demand.
LOFT (Long-Context Frontiers): 6 long-context task categories (text retrieval, visual retrieval, audio retrieval, retrieval-augmented generation, SQL-like dataset query, many-shot in-context learning) in 35 datasets and 4 modalities. Up to 1 million tokens.
MTOB (Machine Translation from One Book): translate sentences between English and Kalamang after reading a grammar book of Kalamang (~570 pages), a bilingual word list (2,531 entries, with Part-of-Speech tags) and a small parallel corpus of sentence pairs (~400 train sentences, 100 test sentences, filtered to exclude examples from the book), both published on Dictionaria.
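The needle-in-a-haystack method described above can be sketched as follows; the function name, the filler and needle texts, and the depth parameter are all illustrative assumptions.

```python
def build_nih_item(filler_paragraphs, needle, depth_fraction, context_len):
    """Construct a needle-in-a-haystack item: fill a context window with
    filler text up to roughly `context_len` characters, then insert the
    `needle` statement at a chosen relative depth."""
    haystack = []
    length = 0
    for para in filler_paragraphs:
        if length + len(para) > context_len:
            break
        haystack.append(para)
        length += len(para)
    # Insert the needle at the requested relative depth in the context.
    pos = int(depth_fraction * len(haystack))
    haystack.insert(pos, needle)
    return "\n\n".join(haystack)

ctx = build_nih_item(["Filler paragraph."] * 100,
                     "The secret number is 42.", 0.5, 1000)
# The benchmark question would then ask about the inserted statement.
```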
=== Reasoning ===
==== Mathematics ====
Alg514: 514 algebra word problems and associated equation systems gathered from Algebra.com.
Math23K: 23,164 elementary school Chinese mathematical word problems, collected from various online educational websites.
AQuA-RAT (Algebra Question Answering with Rationales): Also known as just "AQuA". 100,000 algebraic word problems with 5 choices per problem, and an annotation for the correct choice with natural language rationales. 34,202 "seed problems" were collected from many sources, such as GMAT and GRE, which were then expanded to the full dataset with Amazon Turk.
GSM8K (Grade School Math): 8.5K linguistically diverse elementary school math word problems that require 2 to 8 basic arithmetic operations to solve. Contains errors that have since been corrected in GSM8K-Platinum.
GSM1K: 1,205 items with the same format and difficulty as GSM8K. Held more securely to avoid the data contamination concerns affecting the original GSM8K.
MATH: 12,500 competition-level math problems divided into difficulty levels 1 to 5 (following the Art of Problem Solving convention), with AIME problems being level 5. There are 1,324 level-5 items. An adversarial version, MATH-P, is obtained by modifying a few characters in the original questions.
MathQA: 37,200 word problems in English. Each problem came from AQuA-RAT, and annotated with an "operation program" which exactly specifies the mathematical operations required to solve the problem, written in a domain-specific language with 58 operators. Has a variant, MathQA-Python, consisting of 23,914 problems, produced by taking the solutions to a subset of the MathQA dataset, and rewriting into Python.
MathEval: An omnibus benchmark that contains 20 other benchmarks, such as GSM8K, MATH, and the math subsection of MMLU. Over 20,000 math problems. Difficulty ranges from elementary school to high school competition.
TheoremQA: 800 questions that test for the use of 350 theorems from math, physics, electric engineering, computer science, and finance.
ProofNet: 371 theorems in undergraduate-level mathematics, each consisting of a formal statement in Lean, a natural language statement, and a natural language proof. There are two tasks: given an informal (formal) statement, produce a corresponding formal (informal) statement; given an informal theorem statement, its informal proof, and its formal statement, produce a formal proof. Originally was in Lean 3, but the original authors deprecated it in favor of the Lean 4 version.
miniF2F (mini formal-to-formal): 488 Olympiad-level mathematics problems from AIME, AMC, and IMO, stated in formal languages (Metamath, Lean, Isabelle (partially) and HOL Light (partially)). The task is to formally prove the formal statement, which can be verified automatically.
U-MATH: 1100 math problems sourced from real-world university curricula, balanced across six subjects with 20% of problems including visual elements.
MathBench: 3709 questions in English and Chinese, divided into 5 difficulty levels (basic arithmetic, primary school, middle school, high school, college). Divided into 2,209 questions of MathBench-T (theoretical) and 1,500 questions of MathBench-A (applied).
PutnamBench: 1,709 formalized versions of Putnam competition questions from 1962 to 2023. The task is to compute the numerical answer (if there is one) and to provide a formal proof. The formalizations are in Lean 4, Isabelle, and Coq.
Omni-MATH: 4428 competition-level math problems with human annotation.
FrontierMath: Several hundred questions from areas of modern math that are difficult for professional mathematicians to solve. Many questions have integer answers, so that answers can be verified automatically. Held-out to prevent contamination.
MathArena: Instead of a purpose-built benchmark, the MathArena benchmark simply takes the latest math competitions (AIME and HMMT) as soon as possible and uses those to benchmark LLMs, to prevent contamination.
==== Programming ====
APPS: 10,000 problems from Codewars, AtCoder, Kattis, and Codeforces.
MBPP (Mostly Basic Programming Problems): 974 short Python functions designed to be solved by entry-level programmers. Each comes with a text description and unit tests. They were written by an internal pool of crowdworkers who have basic knowledge of Python.
DS-1000: 1000 data science problems obtained by reformulating 451 unique StackOverflow problems, requiring the use of 7 Python libraries, such as NumPy and Pandas. The responses are scored by running test cases and comparing outputs, and by checking for the presence/absence of specific APIs or keywords.
HumanEval: 164 problems where the solution is always a python function, often just a few lines long.
CodeElo: 387 contest problems from Codeforces during 2024, annotated with metadata such as contest divisions, problem difficulty ratings, and problem algorithm tags. Benchmarking is run by directly submitting to Codeforces, resulting in an Elo rating. Limited to 8 submissions per problem.
Aider Polyglot: 225 of the hardest coding exercises from Exercism, in languages of C++, Go, Java, JavaScript, Python and Rust.
BigCodeBench: 1,140 tasks that require multiple function calls. The benchmark involves 139 libraries and 7 domains. The BigCodeBench-Hard subset contains just 148 of these tasks.
SWE-bench: 2,294 software engineering problems drawn from real GitHub issues and corresponding pull requests across 12 popular Python repositories. Given a codebase and an issue, the task is to edit the codebase to solve the issue. There are 2 subsets: Lite (300 problems that are faster to run), Verified (human-validated subset of 500 problems reviewed by software engineers).
Multi-SWE-bench: 1,632 problems across 7 languages: Java, TypeScript, JavaScript, Go, Rust, C, and C++. Similar to SWE-bench.
SWE-bench Multimodal: a variant of SWE-bench, with 619 task instances from 17 popular JavaScript repositories, each featuring images that are required for solving the task.
SWE-Lancer: 1,488 freelance software engineering tasks from Upwork. Includes implementation tasks (from $50 bug fixes to $32,000 feature implementations) and managerial tasks, where the model must choose between technical implementation proposals.
KernelBench: 250 PyTorch machine learning tasks, for which a CUDA kernel must be written.
Cybench (cybersecurity bench): 40 professional-level Capture the Flag (CTF) tasks from 4 competitions. Tasks are broken down into subtasks for more fine-grained scoring. At least one professional-level human team at each competition was able to solve each of the tasks. The time it took the fastest team to solve each task ranged from 2 minutes to 25 hours.
HCAST (Human-Calibrated Autonomy Software Tasks): 189 tasks in machine learning, cybersecurity, software engineering, and general reasoning. Each task has a "baseline", the measured average time required for a human skilled in the task domains, working under identical conditions as AI agents. The baseline ranges from 1 minute to 8+ hours.
PaperBench: 8,316 individually gradable tasks that would be necessary for replicating 20 Spotlight and Oral papers from ICML 2024 from scratch. The human baseline of ML PhDs (best of 3 attempts) at 48 hours of effort is 41.4%.
==== General ====
GPQA (Google-Proof Q&A): 448 multiple-choice questions written by domain experts in biology, physics, and chemistry, designed to be PhD-level. The "Diamond" subset contains the 198 hardest questions in it. OpenAI found that human experts achieve an average score of 69.7% on the Diamond subset.
SuperGPQA: 26,529 multiple-choice questions collected by domain experts in 285 graduate-level disciplines. The questions were collected by individuals with or pursuing a PhD and then refined and inspected with the help of large language models.
MathVista: 6,141 questions involving quantitative reasoning that requires reading a picture to solve.
AGIEval: questions from 20 official, public, and high-standard admission and qualification exams, such as SAT, Gaokao, law school admission tests, math competitions, lawyer qualification tests, and national civil service exams.
OlympicArena: 11,163 problems from 62 distinct Olympic competitions.
OlympiadBench: 8,476 math and physics problems in English and Chinese, sourced from International Olympiads, Chinese Olympiads, and Gaokao.
ARC-AGI (Abstraction and Reasoning Corpus for Artificial General Intelligence): Given three pairs of before-and-after diagrams of applying a rule, apply the same rule to the fourth before-diagram. It is similar to a Raven's Progressive Matrices test.
LiveBench: A series of benchmarks released monthly, including high school math competition questions, competitive coding questions, logic puzzles, and other tasks.
Humanity's Last Exam: 3,000 multimodal questions across over a hundred academic subjects, with a held-out private dataset left unreleased to prevent contamination. 10% of the questions require both image and text comprehension and the rest are fully text-based. 80% of the questions are scored by exact string matching, and the rest are multiple-choice.
SimpleBench: A multiple-choice text benchmark with over 200 questions covering spatio-temporal reasoning, social intelligence, and linguistic adversarial robustness (or trick questions). It is designed to test "everyday human reasoning".
== See also ==
List of large language models
List of datasets for machine-learning research
== External links ==
Epoch AI - AI Benchmarking Hub
== References == | Wikipedia/Language_model_benchmark |
The transformer is a deep learning architecture based on the multi-head attention mechanism, in which text is converted to numerical representations called tokens, and each token is converted into a vector via lookup from a word embedding table. At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for key tokens to be amplified and less important tokens to be diminished.
Transformers have the advantage of having no recurrent units, therefore requiring less training time than earlier recurrent neural architectures (RNNs) such as long short-term memory (LSTM). Later variations have been widely adopted for training large language models (LLM) on large (language) datasets.
The modern version of the transformer was proposed in the 2017 paper "Attention Is All You Need" by researchers at Google. Transformers were first developed as an improvement over previous architectures for machine translation, but have found many applications since. They are used in large-scale natural language processing, computer vision (vision transformers), reinforcement learning, audio, multimodal learning, robotics, and even playing chess. It has also led to the development of pre-trained systems, such as generative pre-trained transformers (GPTs) and BERT (bidirectional encoder representations from transformers).
== History ==
=== Predecessors ===
For many years, sequence modelling and generation was done by using plain recurrent neural networks (RNNs). A well-cited early example was the Elman network (1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice the vanishing-gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens.
A key breakthrough was LSTM (1995), an RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of an attention mechanism which used neurons that multiply the outputs of other neurons, so-called multiplicative units. Neural networks using multiplicative units were later called sigma-pi networks or higher-order networks. LSTM became the standard architecture for long sequence modelling until the 2017 publication of Transformers.
However, LSTM still used sequential processing, like most other RNNs. Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence.
Modern Transformers overcome this problem, but unlike RNNs, they require computation time that is quadratic in the size of the context window. The linearly scaling fast weight controller (1992) learns to compute a weight matrix for further processing depending on the input. One of its two networks has "fast weights" or "dynamic links" (1981). A slow neural network learns by gradient descent to generate keys and values for computing the weight changes of the fast neural network which computes answers to queries. This was later shown to be equivalent to the unnormalized linear Transformer.
=== Attention with seq2seq ===
The idea of encoder-decoder sequence transduction had been developed in the early 2010s; commonly cited as the originators that produced seq2seq are two concurrently published papers from 2014.
A 380M-parameter model for machine translation uses two long short-term memories (LSTM). Its architecture consists of two parts. The encoder is an LSTM that takes in a sequence of tokens and turns it into a vector. The decoder is another LSTM that converts the vector into a sequence of tokens. Similarly, another 130M-parameter model used gated recurrent units (GRU) instead of LSTM. Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq.
These early seq2seq models had no attention mechanism, and the state vector is accessible only after the last word of the source text was processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved. This is because the input is processed sequentially by one recurrent network into a fixed-size output vector, which is then processed by another recurrent network into an output. If the input is long, then the output vector would not be able to contain all relevant information, degrading the output. As evidence, reversing the input sentence improved seq2seq translation.
The RNNsearch model introduced an attention mechanism to seq2seq for machine translation to solve the bottleneck problem (of the fixed-size output vector), allowing the model to process long-distance dependencies more easily. The name is because it "emulates searching through a source sentence during decoding a translation".
The relative performance of global (that of RNNsearch) and local (sliding window) attention model architectures for machine translation was later compared, finding that mixed attention had higher quality than global attention, while local attention reduced translation time.
In 2016, Google Translate was revamped to Google Neural Machine Translation, which replaced the previous model based on statistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM. It took nine months to develop, and it outperformed the statistical approach, which took ten years to develop.
=== Parallelizing attention ===
Seq2seq models with attention (including self-attention) still suffered from the same issue with recurrent networks, which is that they are hard to parallelize, which prevented them from being accelerated on GPUs. In 2016, decomposable attention applied a self-attention mechanism to feedforward networks, which are easy to parallelize, and achieved SOTA result in textual entailment with an order of magnitude fewer parameters than LSTMs. One of its authors, Jakob Uszkoreit, suspected that attention without recurrence would be sufficient for language translation, thus the title "attention is all you need". That hypothesis was against conventional wisdom at the time, and even his father Hans Uszkoreit, a well-known computational linguist, was skeptical. In the same year, self-attention (called intra-attention or intra-sentence attention) was proposed for LSTMs.
In 2017, the original (100M-sized) encoder-decoder transformer model was proposed in the "Attention is all you need" paper. At the time, the focus of the research was on improving seq2seq for machine translation, by removing its recurrence to process all tokens in parallel, but preserving its dot-product attention mechanism to keep its text processing performance. This led to the introduction of a multi-head attention model that was easier to parallelize due to the use of independent heads and the lack of recurrence. Its parallelizability was an important factor to its widespread use in large neural networks.
=== AI boom era ===
Already in spring 2017, even before the "Attention is all you need" preprint was published, one of the co-authors applied the "decoder-only" variation of the architecture to generate fictitious Wikipedia articles. Transformer architecture is now used alongside many generative models that contribute to the ongoing AI boom.
In language modelling, ELMo (2018) was a bi-directional LSTM that produces contextualized word embeddings, improving upon the line of research from bag of words and word2vec. It was followed by BERT (2018), an encoder-only Transformer model. In 2019 October, Google started using BERT to process search queries. In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model.
Starting in 2018, the OpenAI GPT series of decoder-only Transformers became state of the art in natural language generation. In 2022, a chatbot based on GPT-3, ChatGPT, became unexpectedly popular, triggering a boom around large language models.
Since 2020, Transformers have been applied in modalities beyond text, including the vision transformer, speech recognition, robotics, and multimodal learning. The vision transformer, in turn, stimulated new developments in convolutional neural networks. Image and video generators like DALL-E (2021), Stable Diffusion 3 (2024), and Sora (2024), use Transformers to analyse input data (like text prompts) by breaking it down into "tokens" and then calculating the relevance between each token using self-attention, which helps the model understand the context and relationships within the data.
== Training ==
=== Methods for stabilizing training ===
The plain transformer architecture had difficulty converging. In the original paper the authors recommended using learning rate warmup. That is, the learning rate should linearly scale up from 0 to its maximal value for the first part of the training (usually recommended to be 2% of the total number of training steps), before decaying again.
A 2020 paper found that using layer normalization before (instead of after) multiheaded attention and feedforward layers stabilizes training, not requiring learning rate warmup.
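The warmup-then-decay schedule can be sketched as follows. This is a minimal sketch of the schedule from the original paper (linear warmup followed by inverse-square-root decay); the specific `d_model` and `warmup` values are just the paper's defaults.

```python
# Learning-rate schedule from the original Transformer paper:
# linear warmup for the first `warmup` steps, then inverse-square-root decay.
def transformer_lr(step, d_model=512, warmup=4000):
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

peak = transformer_lr(4000)                    # maximum is reached at step == warmup
early, late = transformer_lr(100), transformer_lr(100000)
```

Both an early step (still warming up) and a late step (already decaying) yield a smaller learning rate than the peak at `step == warmup`.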
=== Pretrain-finetune ===
Transformers typically are first pretrained by self-supervised learning on a large generic dataset, followed by supervised fine-tuning on a small task-specific dataset. The pretrain dataset is typically an unlabeled large corpus, such as The Pile. Tasks for pretraining and fine-tuning commonly include:
language modeling
next-sentence prediction
question answering
reading comprehension
sentiment analysis
paraphrasing
The T5 transformer report documents a large number of natural language pretraining tasks. Some examples are:
restoring or repairing incomplete or corrupted text. For example, the input, "Thank you ~~ me to your party ~~ week", might generate the output, "Thank you for inviting me to your party last week".
translation between natural languages (machine translation)
judging the pragmatic acceptability of natural language. For example, the following sentence might be judged "not acceptable", because even though it is syntactically well-formed, it is improbable in ordinary human usage: The course is jumping well.
Note that while each of these tasks is trivial or obvious for human native speakers of the language (or languages), they have typically proved challenging for previous generations of machine learning architecture.
=== Tasks ===
In general, there are 3 classes of language modelling tasks: "masked", "autoregressive", and "prefixLM". These classes are independent of a specific modeling architecture such as Transformer, but they are often discussed in the context of Transformer.
In a masked task, one or more of the tokens is masked out, and the model would produce a probability distribution predicting what the masked-out tokens are based on the context. The loss function for the task is typically sum of log-perplexities for the masked-out tokens:
{\displaystyle {\text{Loss}}=-\sum _{t\in {\text{masked tokens}}}\ln({\text{probability of }}t{\text{ conditional on its context}})}
and the model is trained to minimize this loss function. The BERT series of models are trained for masked token prediction and another task.
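The masked loss above can be computed directly once the model has assigned a probability to the true identity of each masked token. The probabilities below are hypothetical model outputs, used only for illustration.

```python
import math

# Masked-token loss: sum of negative log-probabilities the model assigns
# to the true token at each masked position (hypothetical values).
probs_of_true_token = {"paris": 0.9, "capital": 0.6}  # two masked positions
loss = -sum(math.log(p) for p in probs_of_true_token.values())
```

The loss is zero only if the model assigns probability 1 to every masked token, and grows as the model becomes less certain.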
In an autoregressive task, the entire sequence is masked at first, and the model produces a probability distribution for the first token. Then the first token is revealed and the model predicts the second token, and so on. The loss function for the task is still typically the same. The GPT series of models are trained by autoregressive tasks.
In a prefixLM task, the sequence is divided into two parts. The first part is presented as context, and the model predicts the first token of the second part. Then that would be revealed, and the model predicts the second token, and so on. The loss function for the task is still typically the same. The T5 series of models are trained by prefixLM tasks.
Note that "masked" as in "masked language modelling" is not "masked" as in "masked attention", and "prefixLM" (prefix language modeling) is not "prefixLM" (prefix language model).
== Architecture ==
All transformers have the same primary components:
Tokenizers, which convert text into tokens.
Embedding layer, which converts tokens and positions of the tokens into vector representations.
Transformer layers, which carry out repeated transformations on the vector representations, extracting more and more linguistic information. These consist of alternating attention and feedforward layers. There are two major types of transformer layers: encoder layers and decoder layers, with further variants.
Un-embedding layer, which converts the final vector representations back to a probability distribution over the tokens.
The following description follows exactly the Transformer as described in the original paper. There are variants, described in the following section.
By convention, we write all vectors as row vectors. This, for example, means that pushing a vector through a linear layer means multiplying it by a weight matrix on the right, as {\displaystyle xW}.
=== Tokenization ===
As the Transformer architecture natively processes numerical data, not text, there must be a translation between text and tokens. A token is an integer that represents a character, or a short segment of characters. On the input side, the input text is parsed into a token sequence. Similarly, on the output side, the output tokens are parsed back to text. The module doing the conversion between texts and token sequences is a tokenizer.
The set of all tokens is the vocabulary of the tokenizer, and its size is the vocabulary size {\displaystyle n_{\text{vocabulary}}}. When faced with tokens outside the vocabulary, typically a special token is used, written as "[UNK]" for "unknown".
Some commonly used tokenizers are byte pair encoding, WordPiece, and SentencePiece.
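The vocabulary lookup with an "[UNK]" fallback can be sketched as below. The vocabulary here is hypothetical; real tokenizers (byte pair encoding, WordPiece, SentencePiece) learn subword vocabularies from data rather than mapping whole words.

```python
# Minimal tokenizer sketch: a lookup table from strings to integer ids,
# with the special "[UNK]" id for out-of-vocabulary segments.
vocab = {"[UNK]": 0, "the": 1, "cat": 2, "sat": 3}

def tokenize(words):
    return [vocab.get(w, vocab["[UNK]"]) for w in words]

ids = tokenize(["the", "cat", "flew"])  # "flew" is out of vocabulary
```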
=== Embedding ===
Each token is converted into an embedding vector via a lookup table. Equivalently stated, it multiplies a one-hot representation of the token by an embedding matrix {\displaystyle M}. For example, if the input token is {\displaystyle 3}, then the one-hot representation is {\displaystyle [0,0,0,1,0,0,\dots ]}, and its embedding vector is
{\displaystyle \mathrm {Embed} (3)=[0,0,0,1,0,0,\dots ]M}
The token embedding vectors are added to their respective positional encoding vectors (see below), producing the sequence of input vectors.
The number of dimensions in an embedding vector is called hidden size or embedding size and written as {\displaystyle d_{\text{emb}}}. This size is written as {\displaystyle d_{\text{model}}} in the original Transformer paper.
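The equivalence between table lookup and one-hot multiplication can be checked numerically. The matrix sizes below are toy values chosen for illustration.

```python
import numpy as np

# Embedding as a lookup table: row t of the embedding matrix M is the
# embedding of token t, equivalently a one-hot vector times M.
n_vocab, d_emb = 6, 4                      # toy sizes (hypothetical)
rng = np.random.default_rng(0)
M = rng.normal(size=(n_vocab, d_emb))      # embedding matrix

token = 3
one_hot = np.eye(n_vocab)[token]
lookup = M[token]                          # table lookup
matmul = one_hot @ M                       # one-hot multiplication
```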
=== Un-embedding ===
An un-embedding layer is almost the reverse of an embedding layer. Whereas an embedding layer converts a token into a vector, an un-embedding layer converts a vector into a probability distribution over tokens.
The un-embedding layer is a linear-softmax layer:
{\displaystyle \mathrm {UnEmbed} (x)=\mathrm {softmax} (xW+b)}
The matrix has shape {\displaystyle (d_{\text{emb}},n_{\text{vocabulary}})}. The embedding matrix {\displaystyle M} and the un-embedding matrix {\displaystyle W} are sometimes required to be transposes of each other, a practice called weight tying.
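A minimal sketch of the un-embedding layer, with weight tying: the un-embedding matrix is taken to be the transpose of a (randomly initialized, hypothetical) embedding matrix, and the linear output is pushed through a softmax to get a distribution over the vocabulary.

```python
import numpy as np

# Un-embedding: linear layer followed by softmax, producing a probability
# distribution over the vocabulary. With weight tying, W = M^T.
n_vocab, d_emb = 6, 4
rng = np.random.default_rng(0)
M = rng.normal(size=(n_vocab, d_emb))   # embedding matrix
W = M.T                                 # weight tying
b = np.zeros(n_vocab)

def softmax(z):
    e = np.exp(z - z.max())             # subtract max for numerical stability
    return e / e.sum()

x = rng.normal(size=d_emb)              # final hidden vector for one position
p = softmax(x @ W + b)                  # distribution over tokens
```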
=== Positional encoding ===
A positional encoding is a fixed-size vector representation of the relative positions of tokens within a sequence: it provides the transformer model with information about where the words are in the input sequence. This shall induce a bias towards the order of the input sequence, so that, for example, the input sequence "man bites dog" is processed differently from "dog bites man".
The positional encoding is defined as a function of type {\displaystyle f:\mathbb {R} \to \mathbb {R} ^{d};d\in \mathbb {Z} ,d>0}, where {\displaystyle d} is a positive even integer. The full positional encoding defined in the original paper is:
{\displaystyle (f(t)_{2k},f(t)_{2k+1})=(\sin(\theta ),\cos(\theta ))\quad \forall k\in \{0,1,\ldots ,d/2-1\}}
where {\displaystyle \theta ={\frac {t}{r^{k}}},r=N^{2/d}}.
Here, {\displaystyle N} is a free parameter that should be significantly larger than the biggest {\displaystyle k} that would be input into the positional encoding function. The original paper uses {\displaystyle N=10000}.
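The sinusoidal encoding above can be written out directly. This is a sketch that follows the formula term by term; the dimension `d = 8` is a toy value.

```python
import numpy as np

# Sinusoidal positional encoding from the original paper:
# f(t)[2k] = sin(t / r**k), f(t)[2k+1] = cos(t / r**k), with r = N**(2/d).
def positional_encoding(t, d=8, N=10000):
    r = N ** (2 / d)
    k = np.arange(d // 2)
    theta = t / r ** k
    pe = np.empty(d)
    pe[0::2] = np.sin(theta)   # even entries
    pe[1::2] = np.cos(theta)   # odd entries
    return pe

pe0 = positional_encoding(0)   # at t = 0, all sines are 0 and all cosines are 1
```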
The function is in a simpler form when written as a complex function of type {\displaystyle f:\mathbb {R} \to \mathbb {C} ^{d/2}}
{\displaystyle f(t)=\left(e^{it/r^{k}}\right)_{k=0,1,\ldots ,{\frac {d}{2}}-1}}
where {\displaystyle r=N^{2/d}}.
The main reason for using this positional encoding function is that using it, shifts are linear transformations:
{\displaystyle f(t+\Delta t)=\mathrm {diag} (f(\Delta t))f(t)}
where {\displaystyle \Delta t\in \mathbb {R} } is the distance one wishes to shift. This allows the transformer to take any encoded position, and find the encoding of the position n-steps-ahead or n-steps-behind, by a matrix multiplication.
By taking a linear sum, any convolution can also be implemented as linear transformations:
{\displaystyle \sum _{j}c_{j}f(t+\Delta t_{j})=\left(\sum _{j}c_{j}\,\mathrm {diag} (f(\Delta t_{j}))\right)f(t)}
for any constants {\displaystyle c_{j}}. This allows the transformer to take any encoded position and find a linear sum of the encoded locations of its neighbors. This sum of encoded positions, when fed into the attention mechanism, would create attention weights on its neighbors, much like what happens in a convolutional neural network language model. In the author's words, "we hypothesized it would allow the model to easily learn to attend by relative position."
In typical implementations, all operations are done over the real numbers, not the complex numbers, but since complex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference.
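The shift property can be verified numerically in the complex form, where multiplication by a diagonal matrix is just an elementwise product. The values of `t` and `dt` below are arbitrary.

```python
import numpy as np

# Numerical check of the shift property f(t + dt) = diag(f(dt)) f(t),
# using the complex form f(t)_k = exp(i * t / r**k) with r = N**(2/d).
d, N = 8, 10000
r = N ** (2 / d)
k = np.arange(d // 2)

def f(t):
    return np.exp(1j * t / r ** k)

t, dt = 7.0, 3.5
shifted = f(t + dt)
via_diag = f(dt) * f(t)   # elementwise product == multiplication by diag(f(dt))
```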
=== Encoder-decoder (overview) ===
Like earlier seq2seq models, the original transformer model used an encoder-decoder architecture. The encoder consists of encoding layers that process all the input tokens together one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output and the decoder's output tokens so far.
The purpose of each encoder layer is to create contextualized representations of the tokens, where each representation corresponds to a token that "mixes" information from other input tokens via self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of encoder (contextualized input token representations), and (2) self-attention for "mixing" information among the input tokens to the decoder (i.e. the tokens generated so far during inference time).
Both the encoder and decoder layers have a feed-forward neural network for additional processing of their outputs and contain residual connections and layer normalization steps. These feed-forward layers contain most of the parameters in a Transformer model.
=== Feedforward network ===
The feedforward network (FFN) modules in a Transformer are 2-layered multilayer perceptrons:
{\displaystyle \mathrm {FFN} (x)=\phi (xW^{(1)}+b^{(1)})W^{(2)}+b^{(2)}}
where {\displaystyle W^{(1)}} and {\displaystyle W^{(2)}} are weight matrices and {\displaystyle b^{(1)}} and {\displaystyle b^{(2)}} are bias vectors, and {\displaystyle \phi } is its activation function. The original Transformer used ReLU activation.
The number of neurons in the middle layer is called intermediate size (GPT), filter size (BERT), or feedforward size (BERT). It is typically larger than the embedding size. For example, in both the GPT-2 series and the BERT series, the intermediate size of a model is 4 times its embedding size: {\displaystyle d_{\text{ffn}}=4d_{\text{emb}}}.
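The FFN block can be sketched directly from the formula, with ReLU as {\displaystyle \phi } and the conventional 4x intermediate size. The weight initialization below is arbitrary.

```python
import numpy as np

# Transformer feedforward block: FFN(x) = ReLU(x W1 + b1) W2 + b2,
# with intermediate size 4x the embedding size (GPT-2/BERT convention).
d_emb = 8
d_ffn = 4 * d_emb
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(d_emb, d_ffn)), np.zeros(d_ffn)
W2, b2 = rng.normal(size=(d_ffn, d_emb)), np.zeros(d_emb)

def ffn(x):
    return np.maximum(x @ W1 + b1, 0) @ W2 + b2   # ReLU activation

y = ffn(rng.normal(size=d_emb))                   # maps d_emb -> d_emb
```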
=== Scaled dot-product attention ===
==== Attention head ====
The attention mechanism used in the Transformer architecture is the scaled dot-product attention unit. For each unit, the transformer model learns three weight matrices: the query weights {\displaystyle W^{Q}}, the key weights {\displaystyle W^{K}}, and the value weights {\displaystyle W^{V}}.
The module takes three sequences: a query sequence, a key sequence, and a value sequence. The query sequence is a sequence of length {\displaystyle \ell _{\text{seq, query}}}, and each entry is a vector of dimension {\displaystyle d_{\text{emb, query}}}. Similarly for the key and value sequences.
For each vector {\displaystyle x_{i,{\text{query}}}} in the query sequence, it is multiplied by a matrix {\displaystyle W^{Q}} to produce a query vector {\displaystyle q_{i}=x_{i,{\text{query}}}W^{Q}}. The matrix of all query vectors is the query matrix:
{\displaystyle Q=X_{\text{query}}W^{Q}}
Similarly, we construct the key matrix {\displaystyle K=X_{\text{key}}W^{K}} and the value matrix {\displaystyle V=X_{\text{value}}W^{V}}.
It is usually the case that all {\displaystyle W^{Q},W^{K},W^{V}} are square matrices, meaning {\displaystyle d_{\text{emb, query}}=d_{\text{query}}}, etc.
Attention weights are calculated using the query and key vectors: the attention weight {\displaystyle a_{ij}} from token {\displaystyle i} to token {\displaystyle j} is the dot product between {\displaystyle q_{i}} and {\displaystyle k_{j}}. The attention weights are divided by the square root of the dimension of the key vectors, {\displaystyle {\sqrt {d_{k}}}}, which stabilizes gradients during training, and passed through a softmax which normalizes the weights. The fact that {\displaystyle W^{Q}} and {\displaystyle W^{K}} are different matrices allows attention to be non-symmetric: if token {\displaystyle i} attends to token {\displaystyle j} (i.e. {\displaystyle q_{i}\cdot k_{j}} is large), this does not necessarily mean that token {\displaystyle j} will attend to token {\displaystyle i} (i.e. {\displaystyle q_{j}\cdot k_{i}} could be small). The output of the attention unit for token {\displaystyle i} is the weighted sum of the value vectors of all tokens, weighted by {\displaystyle a_{ij}}, the attention from token {\displaystyle i} to each token.
The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training because optimized matrix-multiplication routines compute it quickly. The matrices {\displaystyle Q}, {\displaystyle K} and {\displaystyle V} are defined as the matrices where the {\displaystyle i}th rows are vectors {\displaystyle q_{i}}, {\displaystyle k_{i}}, and {\displaystyle v_{i}} respectively. Then we can represent the attention as
{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}}
where the softmax is applied over each of the rows of the matrix.
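A direct sketch of the formula above, with a row-wise softmax. The sequence lengths and dimensions below are toy values; note that the query and key dimensions match while the value dimension may differ, as the text requires.

```python
import numpy as np

# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V,
# with the softmax applied over each row of the score matrix.
def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights, weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 4))   # 5 query tokens, d_query = d_key = 4
K = rng.normal(size=(7, 4))   # 7 key tokens
V = rng.normal(size=(7, 3))   # d_value = 3
A, out = attention(Q, K, V)   # A: (5, 7) weights, out: (5, 3)
```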
The number of dimensions in a query vector is the query size {\displaystyle d_{\text{query}}}, and similarly for the key size {\displaystyle d_{\text{key}}} and value size {\displaystyle d_{\text{value}}}. The output dimension of an attention head is its head dimension {\displaystyle d_{\text{head}}}. The attention mechanism requires the following three equalities to hold:
{\displaystyle \ell _{\text{seq, key}}=\ell _{\text{seq, value}},\;d_{\text{query}}=d_{\text{key}},\;d_{\text{value}}=d_{\text{head}}}
but is otherwise unconstrained.
If the attention head is used in a self-attention fashion, then {\displaystyle X_{\text{query}}=X_{\text{key}}=X_{\text{value}}}. If the attention head is used in a cross-attention fashion, then usually {\displaystyle X_{\text{query}}\neq X_{\text{key}}=X_{\text{value}}}. It is theoretically possible for all three to be different, but that is rarely the case in practice.
==== Multiheaded attention ====
One set of {\displaystyle \left(W^{Q},W^{K},W^{V}\right)} matrices is called an attention head, and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, multiple attention heads allow the model to do this for different definitions of "relevance". Specifically, the query and key projection matrices, {\displaystyle W^{Q}} and {\displaystyle W^{K}}, which are involved in the attention score computation, define the "relevance". Meanwhile, the value projection matrix {\displaystyle W^{V}}, in combination with the corresponding part of the output projection matrix {\displaystyle W^{O}}, determines how the attended tokens influence what information is passed to subsequent layers and ultimately the output logits. In addition, the scope of attention, or the range of token relationships captured by each attention head, can expand as tokens pass through successive layers. This allows the model to capture more complex and long-range dependencies in deeper layers. Many transformer attention heads encode relevance relations that are meaningful to humans. For example, some attention heads can attend mostly to the next word, while others mainly attend from verbs to their direct objects. The computations for each attention head can be performed in parallel, which allows for fast processing. The outputs for the attention layer are concatenated to pass into the feed-forward neural network layers.
Concretely, let the multiple attention heads be indexed by {\displaystyle i}, then we have
{\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}({\text{Attention}}(XW_{i}^{Q},XW_{i}^{K},XW_{i}^{V}))W^{O}}
where the matrix {\displaystyle X} is the concatenation of word embeddings, and the matrices {\displaystyle W_{i}^{Q},W_{i}^{K},W_{i}^{V}} are "projection matrices" owned by individual attention head {\displaystyle i}, and {\displaystyle W^{O}} is a final projection matrix owned by the whole multi-headed attention head.
It is theoretically possible for each attention head to have a different head dimension {\displaystyle d_{\text{head}}}, but that is rarely the case in practice.
As an example, in the smallest GPT-2 model, there are only self-attention mechanisms. It has the following dimensions: {\displaystyle d_{\text{emb}}=768,n_{\text{head}}=12,d_{\text{head}}=64}. Since {\displaystyle 12\times 64=768}, its output projection matrix {\displaystyle W^{O}\in \mathbb {R} ^{(12\times 64)\times 768}} is a square matrix.
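The multi-head computation and its shapes can be sketched with the smallest-GPT-2 dimensions. The random weights below are placeholders; the point is the shape bookkeeping: each head runs independently, the head outputs are concatenated, and the result is projected by the square matrix {\displaystyle W^{O}}.

```python
import numpy as np

# Multi-head self-attention shapes, smallest GPT-2 configuration:
# d_emb = 768, 12 heads of head dimension 64, so W^O is 768 x 768.
d_emb, n_head, d_head = 768, 12, 64
rng = np.random.default_rng(0)
X = rng.normal(size=(10, d_emb)) * 0.02            # 10 tokens
Wq = rng.normal(size=(n_head, d_emb, d_head)) * 0.02
Wk = rng.normal(size=(n_head, d_emb, d_head)) * 0.02
Wv = rng.normal(size=(n_head, d_emb, d_head)) * 0.02
Wo = rng.normal(size=(n_head * d_head, d_emb)) * 0.02

def softmax_rows(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

heads = []
for i in range(n_head):                            # heads run independently
    Q, K, V = X @ Wq[i], X @ Wk[i], X @ Wv[i]
    heads.append(softmax_rows(Q @ K.T / np.sqrt(d_head)) @ V)
out = np.concatenate(heads, axis=-1) @ Wo          # concatenate, then project
```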
==== Masked attention ====
The Transformer architecture is constructed to calculate output tokens iteratively. Assuming {\displaystyle t=0} refers to the calculation of the first output token {\displaystyle i=0}, for step {\displaystyle t>0}, the output token {\displaystyle i=0} shall remain constant. This ensures properties of the model similar to autoregressive models. Therefore, at every time step {\displaystyle t}, the calculation for all outputs {\displaystyle i} should not have access to tokens at position {\displaystyle j} for {\displaystyle j\geq i} (as is naturally the case for time step {\displaystyle t=i}, when tokens {\displaystyle j>t} are not yet calculated). This behavior may be accomplished before the softmax stage by adding a mask matrix {\displaystyle M} that is {\displaystyle -\infty } at entries where the attention link must be cut, and {\displaystyle 0} at other places:
{\displaystyle {\begin{aligned}{\text{MaskedAttention}}(Q,K,V)={\text{softmax}}\left(M+{\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}}
The following matrix is commonly used in decoder self-attention modules, called "causal masking":
{\displaystyle M_{\text{causal}}={\begin{bmatrix}0&-\infty &-\infty &\dots &-\infty \\0&0&-\infty &\dots &-\infty \\0&0&0&\dots &-\infty \\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\dots &0\end{bmatrix}}}
In words, each token can pay attention to itself and to every token before it, but not to any token after it. A non-masked attention module can be thought of as a masked attention module where the mask has all entries zero. As an example of an uncommon use of a mask matrix, XLNet considers all masks of the form {\displaystyle PM_{\text{causal}}P^{-1}}, where {\displaystyle P} is a random permutation matrix.
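As a concrete illustration, the causal mask and the masked attention formula can be sketched in NumPy (function names are illustrative, not from any library):

```python
import numpy as np

def causal_mask(n):
    """M[i, j] = 0 where token i may attend to token j (j <= i), -inf otherwise."""
    m = np.zeros((n, n))
    m[np.triu_indices(n, k=1)] = -np.inf
    return m

def masked_attention(Q, K, V):
    """softmax(M + Q K^T / sqrt(d_k)) V with a causal mask M."""
    d_k = Q.shape[-1]
    scores = causal_mask(Q.shape[0]) + Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V
```

Because row 0 of the mask blocks every later position, the first output row is exactly the first value vector.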
=== Encoder ===
An encoder consists of an embedding layer, followed by multiple encoder layers.
Each encoder layer consists of two major components: a self-attention mechanism and a feed-forward layer. It takes as input a sequence of input vectors, applies the self-attention mechanism to produce an intermediate sequence of vectors, then applies the feed-forward layer to each vector individually. Schematically, we have:
{\displaystyle {\begin{aligned}{\text{given input vectors }}&h_{0},h_{1},\dots \\{\text{combine them into a matrix }}H&={\begin{bmatrix}h_{0}\\h_{1}\\\vdots \end{bmatrix}}\\{\text{EncoderLayer}}(H)&={\begin{bmatrix}{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{0})\\{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{1})\\\vdots \end{bmatrix}}\\\end{aligned}}}
where {\displaystyle {\text{FFN}}} stands for "feed-forward network". We can more succinctly write it as {\displaystyle {\text{EncoderLayer}}(H)={\text{FFN}}({\text{MultiheadedAttention}}(H,H,H))} with the implicit convention that the {\displaystyle {\text{FFN}}} is applied to each row of the matrix individually.
The encoder layers are stacked. The first encoder layer takes the sequence of input vectors from the embedding layer, producing a sequence of vectors. This sequence of vectors is processed by the second encoder, and so on. The output from the final encoder layer is then used by the decoder.
As the encoder processes the entire input all at once, every token can attend to every other token (all-to-all attention), so there is no need for causal masking.
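A minimal single-head sketch of one encoder layer, with identity Q/K/V projections assumed for brevity (a real layer uses learned projections and multiple heads); note that the feed-forward network is applied to each row individually:

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(H):
    """Unmasked (all-to-all) single-head self-attention with Q = K = V = H,
    for illustration only."""
    d = H.shape[-1]
    return softmax(H @ H.T / np.sqrt(d)) @ H

def ffn(H, W1, b1, W2, b2):
    """Position-wise feed-forward network, applied to each row individually."""
    return np.maximum(H @ W1 + b1, 0) @ W2 + b2

def encoder_layer(H, W1, b1, W2, b2):
    # EncoderLayer(H) = FFN(MultiheadedAttention(H, H, H)), FFN row-wise
    return ffn(self_attention(H), W1, b1, W2, b2)
```

Because the FFN acts row-wise, permuting the rows of its input simply permutes the rows of its output.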
=== Decoder ===
A decoder consists of an embedding layer, followed by multiple decoder layers, followed by an un-embedding layer.
Each decoder layer consists of three major components: a causally masked self-attention mechanism, a cross-attention mechanism, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but with an additional attention mechanism inserted that draws relevant information from the encodings generated by the encoders. This mechanism is also called encoder-decoder attention.
Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow. This allows for autoregressive text generation. For decoding, all-to-all attention is inappropriate, because a token cannot attend to tokens not yet generated. Thus, the self-attention module in the decoder is causally masked.
In contrast, the cross-attention mechanism attends to the output vectors of the encoder, which is computed before the decoder starts decoding. Consequently, there is no need for masking in the cross-attention mechanism.
Schematically, we have:
{\displaystyle {\begin{aligned}H'&={\text{MaskedMultiheadedAttention}}(H,H,H)\\{\text{DecoderLayer}}(H)&={\text{FFN}}({\text{MultiheadedAttention}}(H',H^{E},H^{E}))\end{aligned}}}
where {\displaystyle H^{E}} is the matrix with rows being the output vectors from the encoder.
The last decoder layer is followed by a final un-embedding layer to produce the output probabilities over the vocabulary. One of the tokens is then sampled according to these probabilities, and the decoder can be run again to produce the next token, and so on, autoregressively generating the output text.
=== Adapted architectures ===
Many large language models, since they do not need to predict a whole new sequence from an input sequence, only use the encoder or decoder of the original transformer architecture. Early GPT models are decoder-only models trained to predict the next token in a sequence. BERT, another language model, only makes use of an encoder, and is trained to predict a randomly masked token in a sequence.
== Full transformer architecture ==
=== Sublayers ===
Each encoder layer contains 2 sublayers: the self-attention and the feedforward network. Each decoder layer contains 3 sublayers: the causally masked self-attention, the cross-attention, and the feedforward network.
The final points of detail are the residual connections and layer normalization (LayerNorm, or LN), which, while conceptually unnecessary, are needed in practice for numerical stability and convergence.
The residual connection, which is introduced to avoid vanishing gradient issues and stabilize the training process, can be expressed as follows: y = F(x) + x. The expression indicates that an output y is the sum of the transformation of input x (F(x)) and the input itself (x). Adding the input x can preserve the input information and avoid issues when the gradient of F(x) is close to zero.
Similarly to how the feedforward network modules are applied individually to each vector, the LayerNorm is also applied individually to each vector.
There are two common conventions in use: the post-LN and the pre-LN convention. In the post-LN convention, the output of each sublayer is
{\displaystyle \mathrm {LayerNorm} (x+\mathrm {Sublayer} (x))}
where {\displaystyle \mathrm {Sublayer} (x)} is the function implemented by the sublayer itself.
In the pre-LN convention, the output of each sublayer is
{\displaystyle x+\mathrm {Sublayer} (\mathrm {LayerNorm} (x))}
The original 2017 Transformer used the post-LN convention. It was difficult to train and required careful hyperparameter tuning and a "warm-up" in learning rate, where it starts small and gradually increases. The pre-LN convention, proposed several times in 2018, was found to be easier to train, requiring no warm-up, leading to faster convergence.
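The two conventions can be sketched as follows (learnable LayerNorm gain and bias parameters are omitted for brevity):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each vector (row) to zero mean and unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def post_ln(x, sublayer):
    # original 2017 convention: LayerNorm(x + Sublayer(x))
    return layer_norm(x + sublayer(x))

def pre_ln(x, sublayer):
    # pre-LN convention: x + Sublayer(LayerNorm(x))
    return x + sublayer(layer_norm(x))
```

Note that post-LN output is always normalized, while pre-LN keeps an unnormalized residual stream running through the whole network.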
=== Pseudocode ===
The following is the pseudocode for a standard pre-LN encoder-decoder Transformer.
input: Encoder input t_e
Decoder input t_d
output: Array of probability distributions, with shape (decoder vocabulary size x length(decoder output sequence))
/* encoder */
z_e ← encoder.tokenizer(t_e)
for each t in 1:length(z_e) do
z_e[t] ← encoder.embedding(z_e[t]) + encoder.positional_embedding(t)
for each l in 1:length(encoder.layers) do
layer ← encoder.layers[l]
/* first sublayer */
z_e_copy ← copy(z_e)
for each t in 1:length(z_e) do
z_e[t] ← layer.layer_norm(z_e[t])
z_e ← layer.multiheaded_attention(z_e, z_e, z_e)
for each t in 1:length(z_e) do
z_e[t] ← z_e[t] + z_e_copy[t]
/* second sublayer */
z_e_copy ← copy(z_e)
for each t in 1:length(z_e) do
z_e[t] ← layer.layer_norm(z_e[t])
z_e ← layer.feedforward(z_e)
for each t in 1:length(z_e) do
z_e[t] ← z_e[t] + z_e_copy[t]
for each t in 1:length(z_e) do
z_e[t] ← encoder.final_layer_norm(z_e[t])
/* decoder */
z_d ← decoder.tokenizer(t_d)
for each t in 1:length(z_d) do
z_d[t] ← decoder.embedding(z_d[t]) + decoder.positional_embedding(t)
for each l in 1:length(decoder.layers) do
layer ← decoder.layers[l]
/* first sublayer */
z_d_copy ← copy(z_d)
for each t in 1:length(z_d) do
z_d[t] ← layer.layer_norm(z_d[t])
z_d ← layer.masked_multiheaded_attention(z_d, z_d, z_d)
for each t in 1:length(z_d) do
z_d[t] ← z_d[t] + z_d_copy[t]
/* second sublayer */
z_d_copy ← copy(z_d)
for each t in 1:length(z_d) do
z_d[t] ← layer.layer_norm(z_d[t])
z_d ← layer.multiheaded_attention(z_d, z_e, z_e)
for each t in 1:length(z_d) do
z_d[t] ← z_d[t] + z_d_copy[t]
/* third sublayer */
z_d_copy ← copy(z_d)
for each t in 1:length(z_d) do
z_d[t] ← layer.layer_norm(z_d[t])
z_d ← layer.feedforward(z_d)
for each t in 1:length(z_d) do
z_d[t] ← z_d[t] + z_d_copy[t]
z_d ← decoder.final_layer_norm(z_d)
output_distributions ← []
for each t in 1:length(z_d) do
output_distributions.append(decoder.unembed(z_d[t]))
return output_distributions
=== Terminology ===
The Transformer architecture, being modular, allows variations. Several common variations are described here.
An "encoder-only" Transformer applies the encoder to map an input text into a sequence of vectors that represent the input text. This is usually used for text embedding and representation learning for downstream applications. BERT is encoder-only. They are less often used currently, as they were found to be not significantly better than training an encoder-decoder Transformer, then taking just the encoder.
A "decoder-only" Transformer is not literally decoder-only, since without an encoder, the cross-attention mechanism has nothing to attend to. Thus, the decoder layers in a decoder-only Transformer are composed of just two sublayers: the causally masked self-attention and the feedforward network. This is usually used for text generation and instruction following. The models in the GPT series and Chinchilla series are decoder-only.
An "encoder-decoder" Transformer is generally the same as the original Transformer, with 2 sublayers per encoder layer and 3 sublayers per decoder layer, etc. They might have minor architectural improvements, such as alternative activation functions, changing the location of normalization, etc. This is also usually used for text generation and instruction following. The models in the T5 series are encoder-decoder.
A "prefixLM" (prefix language model) is a decoder-only architecture, but with prefix masking, which is different from causal masking. Specifically, it has a mask of the form
{\displaystyle M_{\text{prefixLM}}={\begin{bmatrix}\mathbf {0} &-\infty \\\mathbf {0} &M_{\text{causal}}\end{bmatrix}}}
where the first columns correspond to the "prefix", and the subsequent columns correspond to the autoregressively generated text based on the prefix. They resemble encoder-decoder models, but have less "sparsity". Such models are rarely used, though they are cited as theoretical possibilities and in benchmarked comparisons.
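A sketch of how such a prefixLM mask can be built (the function name and the `n_prefix`/`n_total` split are illustrative assumptions): the prefix columns get full attention, and the remainder is causal.

```python
import numpy as np

def prefix_lm_mask(n_prefix, n_total):
    """0 = attend, -inf = blocked. Prefix tokens are attended all-to-all;
    the generated suffix is causally masked."""
    m = np.full((n_total, n_total), -np.inf)
    m[np.tril_indices(n_total)] = 0.0  # causal attention among all tokens
    m[:, :n_prefix] = 0.0              # every token sees the whole prefix
    return m
```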
There are also mixed seq2seq models. For example, in 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model, on the argument that an RNN-decoder runs much faster than Transformer-decoder when run autoregressively.
== Subsequent work ==
=== Alternative activation functions ===
The original transformer uses the ReLU activation function. Other activation functions have since been developed: the Llama series and PaLM use SwiGLU, while GPT-1 and BERT used GELU.
Alternative activation functions are often used in combination with Gated Linear Units in the feedforward module.
=== Alternative normalizations ===
The normalization used in the Transformer can be different from LayerNorm. One example is RMSNorm, which is used in the Llama series. Other examples include CapsuleNorm, ScaleNorm, and FixNorm.
=== Alternative positional encodings ===
Transformers may use other positional encoding methods than sinusoidal.
The original Transformer paper reported using a learned positional encoding, but found it not superior to the sinusoidal one. Later work found that causal masking itself provides enough signal to a Transformer decoder that it can learn to implicitly perform absolute positional encoding without a positional encoding module.
==== RoPE ====
RoPE (rotary positional embedding) is best explained by considering a list of 2-dimensional vectors {\displaystyle [(x_{1}^{(1)},x_{1}^{(2)}),(x_{2}^{(1)},x_{2}^{(2)}),(x_{3}^{(1)},x_{3}^{(2)}),...]}. Now pick some angle {\displaystyle \theta }. Then RoPE encoding is
{\displaystyle {\text{RoPE}}{\big (}x_{m}^{(1)},x_{m}^{(2)},m{\big )}={\begin{pmatrix}\cos m\theta &-\sin m\theta \\\sin m\theta &\cos m\theta \end{pmatrix}}{\begin{pmatrix}x_{m}^{(1)}\\x_{m}^{(2)}\\\end{pmatrix}}={\begin{pmatrix}x_{m}^{(1)}\cos m\theta -x_{m}^{(2)}\sin m\theta \\x_{m}^{(2)}\cos m\theta +x_{m}^{(1)}\sin m\theta \\\end{pmatrix}}}
Equivalently, if we write the 2-dimensional vectors as complex numbers {\displaystyle z_{m}:=x_{m}^{(1)}+ix_{m}^{(2)}}, then RoPE encoding is just multiplication by a complex phase:
{\displaystyle {\text{RoPE}}{\big (}z_{m},m{\big )}=e^{im\theta }z_{m}}
For a list of {\displaystyle 2n}-dimensional vectors, a RoPE encoder is defined by a sequence of angles {\displaystyle \theta ^{(1)},...,\theta ^{(n)}}. Then the RoPE encoding is applied to each pair of coordinates.
The benefit of RoPE is that the dot-product between two vectors depends only on their relative location:
{\displaystyle {\text{RoPE}}{\big (}x,m{\big )}^{T}{\text{RoPE}}{\big (}y,n{\big )}={\text{RoPE}}{\big (}x,m+k{\big )}^{T}{\text{RoPE}}{\big (}y,n+k{\big )}}
for any integer {\displaystyle k}.
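A sketch of RoPE applied pairwise, with the angle schedule `theta` treated as an assumed input; the dot product of two RoPE-encoded vectors is invariant under shifting both positions by the same offset:

```python
import numpy as np

def rope(x, m, theta):
    """Rotate each consecutive pair of coordinates of x by angle m * theta[k]."""
    out = x.astype(float).copy()
    for k in range(len(x) // 2):
        a = m * theta[k]
        x1, x2 = x[2 * k], x[2 * k + 1]
        out[2 * k] = x1 * np.cos(a) - x2 * np.sin(a)
        out[2 * k + 1] = x2 * np.cos(a) + x1 * np.sin(a)
    return out
```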
==== ALiBi ====
ALiBi (Attention with Linear Biases) is not a replacement for the positional encoder on the original transformer. Instead, it is an additional positional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism is
{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+sB\right)V\end{aligned}}}
Here, {\displaystyle s} is a real number ("scalar"), and {\displaystyle B} is the linear bias matrix defined by
{\displaystyle B={\begin{pmatrix}0&1&2&3&\cdots \\-1&0&1&2&\cdots \\-2&-1&0&1&\cdots \\-3&-2&-1&0&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \\\end{pmatrix}}}
in other words, {\displaystyle B_{i,j}=j-i}. The idea is that the linear bias matrix is a softened mask. Just as {\displaystyle 0} represents full attention paid and {\displaystyle -\infty } represents no attention paid, the linear bias matrix increases attention paid in one direction and decreases attention paid in the other direction.
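Constructing the linear bias matrix is straightforward (illustrative sketch):

```python
import numpy as np

def alibi_bias(n):
    """B[i, j] = j - i: a 'softened mask' added to the attention scores."""
    idx = np.arange(n)
    return idx[None, :] - idx[:, None]
```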
ALiBi allows pretraining on short context windows, then fine-tuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the "bottom" of the entire network (which is where the sinusoidal encoder on the original transformer, as well as RoPE and many others, are located).
==== Relative Position Encodings ====
Relative Position Encodings are similar to ALiBi, but more generic:
{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+B\right)V\end{aligned}}}
where {\displaystyle B} is a Toeplitz matrix, that is, {\displaystyle B_{i,j}=B_{i',j'}} whenever {\displaystyle i-j=i'-j'}. This is contrasted with the original sinusoidal positional encoding, which is an "absolute positional encoding".
=== Efficient implementation ===
The transformer model has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch. Transformers is a library produced by Hugging Face that supplies transformer-based architectures and pretrained models.
==== KV caching ====
When an autoregressive transformer is used for inference, such as generating text, the query vector is different at each step, but the already-computed key and value vectors are always the same. The KV caching method saves the computed key and value vectors at each attention block, so that they are not recomputed at each new token. PagedAttention applies memory paging to KV caching.
If a transformer is used with a baked-in prompt, such as ["You are a customer support agent..."], then the key and value vectors can be computed for the prompt, and saved on disk. The saving in compute is significant when the model is used for many short interactions, such as in online chatbots.
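A schematic of the bookkeeping involved (illustrative; real implementations store tensors per layer and per head rather than Python lists):

```python
class KVCache:
    """Per-attention-block cache: keys/values of already-processed tokens
    are appended once and reused at every later decoding step."""
    def __init__(self):
        self.keys = []
        self.values = []

    def step(self, new_key, new_value):
        # only the newest token's key/value are computed at this step
        self.keys.append(new_key)
        self.values.append(new_value)
        return self.keys, self.values  # full K, V for the attention call
```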
==== FlashAttention ====
FlashAttention is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It is a communication-avoiding algorithm that performs matrix multiplications in blocks, such that each block fits within the cache of a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow). See the page on softmax for details.
An improved version, FlashAttention-2, was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s on A100 GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention.
Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and multi-query attention (MQA) and grouped-query attention (GQA).
Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware like H100 GPUs and new data types like FP8.
==== Multi-Query Attention ====
Multi-Query Attention changes the multiheaded attention mechanism. Whereas normally,
{\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW_{i}^{K},XW_{i}^{V})\right)W^{O}}
with Multi-Query Attention, there is just one {\displaystyle W^{K},W^{V}}, thus:
{\displaystyle {\text{MultiQueryAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW^{K},XW^{V})\right)W^{O}}
This has a neutral effect on model quality and training speed, but increases inference speed.
More generally, grouped-query attention (GQA) partitions attention heads into groups, each of which shares the key-value pair. MQA is GQA with one group, while standard multiheaded attention is GQA with the maximal number of groups.
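The spectrum from MHA to MQA can be sketched through parameter shapes (illustrative names and shapes): the number of key/value groups controls how many K/V projections exist, and hence the size of the KV cache.

```python
import numpy as np

def attention_params(n_heads, n_kv_groups, d_model, d_head, rng):
    """Per-head query projections; key/value projections shared per group.
    n_kv_groups == 1 gives MQA; n_kv_groups == n_heads gives standard MHA."""
    assert n_heads % n_kv_groups == 0
    Wq = rng.standard_normal((n_heads, d_model, d_head))
    Wk = rng.standard_normal((n_kv_groups, d_model, d_head))
    Wv = rng.standard_normal((n_kv_groups, d_model, d_head))
    return Wq, Wk, Wv

def kv_cache_size(n_kv_groups, d_head, seq_len):
    # entries stored per sequence: keys + values for each group
    return 2 * n_kv_groups * d_head * seq_len
```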
Multihead Latent Attention (MLA) is a low-rank approximation to standard MHA. Specifically, each hidden vector, before entering the attention mechanism, is first projected to two low-dimensional spaces ("latent space"), one for query and one for key-value (KV vector). This design minimizes the KV cache, as only the low-dimensional KV vector needs to be cached.
==== Speculative decoding ====
Speculative decoding is a method to accelerate token decoding. Similarly to speculative execution in CPUs, future tokens are computed quickly, then verified. If the quickly computed tokens are incorrect, they are discarded and computed slowly.
The key factor in speculative decoding is that a Transformer decoder can verify faster than it can decode, in the following sense.
Suppose we have two transformer models like GPT-3 and GPT-3-small, both with a context window size of 512. To generate an entire context window autoregressively with greedy decoding with GPT-3, it must be run 512 times, each time generating a token {\displaystyle x_{1},x_{2},...,x_{512}}, taking time {\displaystyle 512T_{\text{GPT-3}}}. However, if we had some educated guess for the values of these tokens, we could verify all of them in parallel, in one run of the model, by checking that each {\displaystyle x_{t}} is indeed the token with the largest log-likelihood in the {\displaystyle t}-th output.
In speculative decoding, a smaller model or some other simple heuristic is used to generate a few speculative tokens that are subsequently verified by the larger model. For example, suppose we use GPT-3-small to generate four speculative tokens: {\displaystyle {\tilde {x}}_{1},{\tilde {x}}_{2},{\tilde {x}}_{3},{\tilde {x}}_{4}}. This only takes {\displaystyle 4T_{\text{GPT-3-small}}}. These tokens are then run through the larger GPT-3 in one go. Suppose that {\displaystyle {\tilde {x}}_{1}} and {\displaystyle {\tilde {x}}_{2}} are verified by GPT-3 as what it would have picked; then those are kept, but {\displaystyle {\tilde {x}}_{3}} is not, so {\displaystyle {\tilde {x}}_{3},{\tilde {x}}_{4}} are discarded, and GPT-3 is run on those. This would take {\displaystyle 4T_{\text{GPT-3-small}}+3T_{\text{GPT-3}}}, which might be shorter than {\displaystyle 4T_{\text{GPT-3}}}.
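A sketch of greedy speculative decoding, with two assumed callables standing in for the large (target) and small (draft) models; the verification loop here is sequential for clarity, whereas a real implementation checks all draft tokens in one forward pass:

```python
def speculative_decode_step(target_next_token, draft_next_token, prefix, n_draft):
    """Draft model proposes n_draft tokens; target model keeps the longest
    agreeing prefix, then contributes one token of its own."""
    # draft phase: cheap model proposes tokens autoregressively
    proposed = []
    seq = list(prefix)
    for _ in range(n_draft):
        tok = draft_next_token(seq)
        proposed.append(tok)
        seq.append(tok)
    # verify phase: conceptually one parallel pass of the big model
    accepted = []
    seq = list(prefix)
    for tok in proposed:
        if target_next_token(seq) == tok:
            accepted.append(tok)
            seq.append(tok)
        else:
            break
    # the target model always yields the next token after the accepted ones
    accepted.append(target_next_token(seq))
    return accepted
```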
For non-greedy decoding, similar ideas apply, except the speculative tokens are accepted or rejected stochastically, in a way that guarantees the final output distribution is the same as if speculative decoding was not used.
In Multi-Token Prediction, a single forward pass creates a final embedding vector, which then is un-embedded into a token probability. However, that vector can then be further processed by another Transformer block to predict the next token, and so on for arbitrarily many steps into the future. This trades off accuracy for speed, since each new token costs just one more Transformer block, rather than the entire stack.
=== Sub-quadratic transformers ===
Training transformer-based architectures can be expensive, especially for long inputs. Many methods have been developed to attempt to address the issue. In the image domain, Swin Transformer is an efficient architecture that performs attention inside shifting windows. In the audio domain, SepTr decouples the attention in time and frequency domains. Long Range Arena (2020) is a standard benchmark for comparing the behavior of transformer architectures over long inputs.
==== Alternative attention graphs ====
The standard attention graph is either all-to-all or causal, both of which scale as {\displaystyle O(N^{2})}, where {\displaystyle N} is the number of tokens in a sequence.
Reformer (2020) reduces the computational load from {\displaystyle O(N^{2})} to {\displaystyle O(N\ln N)} by using locality-sensitive hashing and reversible layers.
Sparse attention uses attention graphs that grow more slowly than {\displaystyle O(N^{2})}. For example, BigBird (2020) uses random small-world networks which grow as {\displaystyle O(N)}.
Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value.
==== Random Feature Attention ====
Random Feature Attention (2021) uses Fourier random features:
{\displaystyle \varphi (x)={\frac {1}{\sqrt {D}}}[\cos \langle w_{1},x\rangle ,\sin \langle w_{1},x\rangle ,\cdots \cos \langle w_{D},x\rangle ,\sin \langle w_{D},x\rangle ]^{T}}
where {\displaystyle w_{1},...,w_{D}} are independent samples from the normal distribution {\displaystyle N(0,\sigma ^{2}I)}. This choice of parameters satisfies
{\displaystyle \mathbb {E} [\langle \varphi (x),\varphi (y)\rangle ]=e^{-{\frac {\|x-y\|^{2}}{2\sigma ^{2}}}}}
, or
{\displaystyle e^{\langle x,y\rangle /\sigma ^{2}}=\mathbb {E} [\langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle ]\approx \langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle }
Consequently, the one-headed attention, with one query, can be written as
{\displaystyle {\text{Attention}}(q,K,V)={\text{softmax}}\left({\frac {qK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx {\frac {\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})v_{i}^{T}}{\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})}}}
where {\displaystyle \sigma =d_{K}^{1/4}}. Similarly for multiple queries, and for multiheaded attention.
This approximation can be computed in linear time, as we can compute the matrix {\displaystyle \varphi (k_{i})v_{i}^{T}} first, then multiply it with the query. In essence, we have managed to obtain a more precise version of
{\displaystyle {\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx Q(K^{T}V/{\sqrt {d_{k}}})}
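A sketch of the random feature map φ at σ = 1: averaging ⟨φ(x), φ(y)⟩ over many random samples of w approximates the Gaussian kernel.

```python
import numpy as np

def random_features(x, W):
    """phi(x) = (1/sqrt(D)) [cos<w_i, x>, sin<w_i, x>]_i  (Fourier random features)."""
    D = W.shape[0]
    proj = W @ x
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(D)
```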
Performer (2022) uses the same Random Feature Attention, but {\displaystyle w_{1},...,w_{D}} are first independently sampled from the normal distribution {\displaystyle N(0,\sigma ^{2}I)}, then Gram-Schmidt processed.
=== Multimodality ===
Transformers can also be used/adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality.
Multimodal models can either be trained from scratch, or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning. LLaVA is a vision-language model composed of a language model (Vicuna-13B) and a vision model (ViT-L/14), connected by a linear layer; only the linear layer is finetuned.
Vision transformers adapt the transformer to computer vision by breaking down input images as a series of patches, turning them into vectors, and treating them like tokens in a standard transformer.
Conformer and later Whisper follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors and treated like tokens in a standard transformer.
Perceivers are a variant of Transformers designed for multimodality.
For image generation, notable architectures are DALL-E 1 (2021), Parti (2022), Phenaki (2023), and Muse (2023). Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image. Parti is an encoder-decoder Transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image. Muse is an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens. During generation, all input tokens are masked, and the highest-confidence predictions are included for the next iteration, until all tokens are predicted. Phenaki is a text-to-video model. It is a bidirectional masked transformer conditioned on pre-computed text tokens. The generated tokens are then decoded to a video.
== Applications ==
The transformer has had great success in natural language processing (NLP). Many large language models such as GPT-2, GPT-3, GPT-4, Gemini, AlbertAGPT, Claude, BERT, Grok, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of NLP-related subtasks and their related real-world applications, including:
machine translation
time series prediction
document summarization
document generation
named entity recognition (NER)
writing computer code based on requirements expressed in natural language.
speech-to-text
Beyond traditional NLP, the transformer architecture has had success in other applications, such as:
biological sequence analysis
video understanding
protein folding (such as AlphaFold)
evaluating chess board positions. Using static evaluation alone (that is, with no Minimax search), a transformer achieved an Elo of 2895, putting it at grandmaster level.
== See also ==
seq2seq – Family of machine learning approaches
Perceiver – Variant of Transformer designed for multimodal data
Vision transformer – Machine learning model for vision processing
Large language model – Type of machine learning model
BERT (language model) – Series of language models developed by Google AI
Generative pre-trained transformer – Type of large language model
T5 (language model) – Series of large language models developed by Google AI
The IBM alignment models are a sequence of increasingly complex models used in statistical machine translation to train a translation model and an alignment model, starting with lexical translation probabilities and moving to reordering and word duplication. They underpinned the majority of statistical machine translation systems for almost twenty years starting in the early 1990s, until neural machine translation began to dominate. These models offer a principled probabilistic formulation and (mostly) tractable inference.
The IBM alignment models were published in parts in 1988 and 1990, and the entire series was published in 1993. Every author of the 1993 paper subsequently went to the hedge fund Renaissance Technologies.
The original work on statistical machine translation at IBM proposed five models, and a model 6 was proposed later. The sequence of the six models can be summarized as:
Model 1: lexical translation
Model 2: additional absolute alignment model
Model 3: extra fertility model
Model 4: added relative alignment model
Model 5: fixed deficiency problem.
Model 6: Model 4 combined with a HMM alignment model in a log linear way
== Mathematical setup ==
The IBM alignment models treat translation as a conditional probability model. For each source-language ("foreign") sentence {\displaystyle f}, we generate both a target-language ("English") sentence {\displaystyle e} and an alignment {\displaystyle a}. The problem then is to find a good statistical model for {\displaystyle p(e,a|f)}, the probability that we would generate English sentence {\displaystyle e} and alignment {\displaystyle a} given a foreign sentence {\displaystyle f}.
The meaning of an alignment grows increasingly complicated as the model version number grows. See Model 1 for the simplest and most understandable version.
== Model 1 ==
=== Word alignment ===
Given any foreign-English sentence pair {\displaystyle (e,f)}, an alignment for the sentence pair is a function of type {\displaystyle \{1,...,l_{e}\}\to \{0,1,...,l_{f}\}}. That is, we assume that the English word at location {\displaystyle i} is "explained" by the foreign word at location {\displaystyle a(i)}. For example, consider the following pair of sentences:

It will surely rain tomorrow -- 明日 は きっと 雨 だ

We can align some English words to corresponding Japanese words, but not every word:

it -> ?
will -> ?
surely -> きっと
rain -> 雨
tomorrow -> 明日

This in general happens due to the different grammar and conventions of speech in different languages. English sentences require a subject, and when there is no subject available, a dummy pronoun it is used. Japanese verbs do not have different forms for future and present tense; the future tense is implied by the noun 明日 (tomorrow). Conversely, the topic marker は and the grammar word だ (roughly "to be") do not correspond to any word in the English sentence.
So, we can write the alignment as 1 -> 0; 2 -> 0; 3 -> 3; 4 -> 4; 5 -> 1, where 0 means that there is no corresponding alignment.
Thus, we see that the alignment function is in general a function of type $\{1,\dots,l_e\}\to\{0,1,\dots,l_f\}$.
Future models will allow one English word to be aligned with multiple foreign words.
=== Statistical model ===
Given the above definition of alignment, we can define the statistical model used by Model 1:
Start with a "dictionary". Its entries are of form
t
(
e
i
|
f
j
)
{\displaystyle t(e_{i}|f_{j})}
, which can be interpreted as saying "the foreign word
f
j
{\displaystyle f_{j}}
is translated to the English word
e
i
{\displaystyle e_{i}}
with probability
t
(
e
i
|
f
j
)
{\displaystyle t(e_{i}|f_{j})}
".
After being given a foreign sentence $f$ with length $l_f$, we first generate an English sentence length $l_e$ uniformly in a range $\mathrm{Uniform}[1,2,\dots,N]$. In particular, it does not depend on $f$ or $l_f$.
Then, we generate an alignment uniformly in the set of all possible alignment functions $\{1,\dots,l_e\}\to\{0,1,\dots,l_f\}$.
Finally, for each English word $e_1,e_2,\dots,e_{l_e}$, generate each one independently of every other English word. For the word $e_i$, generate it according to $t(e_i|f_{a(i)})$.
Together, we have the probability

$$p(e,a|f)=\frac{1/N}{(1+l_f)^{l_e}}\prod_{i=1}^{l_e}t(e_i|f_{a(i)})$$
IBM Model 1 uses very simplistic assumptions on the statistical model in order to allow the following algorithm to have a closed-form solution.
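As a concrete sketch, the generative probability above can be computed directly. The translation table, sentences, and alignment below are hypothetical toy values, not learned from data:

```python
def model1_prob(e, f, a, t, N=20):
    """p(e, a | f) under IBM Model 1.

    e: list of English words; f: list of foreign words with f[0] = "NULL";
    a: alignment, a[i] is the foreign position explaining English word i;
    t: dictionary of translation probabilities t(e_word | f_word);
    N: maximum English sentence length (uniform length prior 1/N).
    """
    l_e, l_f = len(e), len(f) - 1  # l_f excludes the NULL token
    # Uniform length prior times uniform alignment prior (1 + l_f)^{-l_e}
    p = (1.0 / N) / (1 + l_f) ** l_e
    for i, e_word in enumerate(e):
        p *= t.get((e_word, f[a[i]]), 0.0)  # unseen pairs get probability 0
    return p

# Hypothetical toy values.
t = {("surely", "きっと"): 0.8, ("rain", "雨"): 0.9}
f = ["NULL", "きっと", "雨"]
e = ["surely", "rain"]
a = [1, 2]  # "surely" -> きっと, "rain" -> 雨
p = model1_prob(e, f, a, t)
```

Note that the alignment prior uses $1+l_f$ rather than $l_f$ because position 0 (the NULL token) is a valid alignment target.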
=== Learning from a corpus ===
If a dictionary is not provided at the start, but we have a corpus of English-foreign language pairs $\{(e^{(k)},f^{(k)})\}_k$ (without alignment information), then the model can be cast into the following form:

fixed parameters: the foreign sentences $\{f^{(k)}\}_k$.
learnable parameters: the entries of the dictionary $t(e_i|f_j)$.
observable variables: the English sentences $\{e^{(k)}\}_k$.
latent variables: the alignments $\{a^{(k)}\}_k$.
In this form, this is exactly the kind of problem solved by the expectation–maximization (EM) algorithm. Due to the simplistic assumptions, the algorithm has a closed-form, efficiently computable solution, which is the solution to the following equations:
$$\begin{cases}\max_{t'}\sum_{k}\sum_{i}\sum_{a^{(k)}}t(a^{(k)}|e^{(k)},f^{(k)})\ln t'(e_{i}^{(k)}|f_{a^{(k)}(i)}^{(k)})\\ \sum_{x}t'(e_{x}|f_{y})=1\quad\forall y\end{cases}$$
This can be solved by Lagrange multipliers and then simplified. For a detailed derivation of the algorithm, see the references.
In short, the EM algorithm goes as follows:

INPUT. A corpus of English-foreign sentence pairs $\{(e^{(k)},f^{(k)})\}_k$.

INITIALIZE. A matrix of translation probabilities $t(e_x|f_y)$. This could be either uniform or random. It is only required that every entry is positive and that, for each $y$, the probabilities sum to one: $\sum_x t(e_x|f_y)=1$.
LOOP. Until $t(e_x|f_y)$ converges:

$$t(e_x|f_y)\leftarrow\frac{t(e_x|f_y)}{\lambda_y}\sum_{k,i,j}\frac{\delta(e_x,e_i^{(k)})\,\delta(f_y,f_j^{(k)})}{\sum_{j'}t(e_i^{(k)}|f_{j'}^{(k)})}$$
where each $\lambda_y$ is a normalization constant that makes sure $\sum_x t(e_x|f_y)=1$.

RETURN. $t(e_x|f_y)$.

In the above formula, $\delta$ is the Kronecker delta function -- it equals 1 if the two entries are equal, and 0 otherwise. The index notation is as follows:
$k$ ranges over English-foreign sentence pairs in the corpus;
$i$ ranges over words in English sentences;
$j$ ranges over words in foreign-language sentences;
$x$ ranges over the entire vocabulary of English words in the corpus;
$y$ ranges over the entire vocabulary of foreign words in the corpus.
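The loop above can be sketched in Python. This is a minimal illustration of the closed-form EM updates on a hypothetical toy corpus; a real implementation would add convergence checks and smoothing:

```python
from collections import defaultdict

def train_model1(corpus, iterations=10):
    """EM training of IBM Model 1 translation probabilities t(e|f).

    corpus: list of (english_sentence, foreign_sentence) word-list pairs.
    A NULL token is prepended to every foreign sentence, so position 0
    is available for English words with no foreign counterpart.
    """
    corpus = [(e, ["NULL"] + f) for e, f in corpus]
    e_vocab = {w for e, _ in corpus for w in e}
    # INITIALIZE: uniform t(e|f); every entry positive, columns sum to one.
    t = defaultdict(lambda: 1.0 / len(e_vocab))
    for _ in range(iterations):
        count = defaultdict(float)  # expected counts for (e, f) word pairs
        total = defaultdict(float)  # expected counts for each foreign word
        for e_sent, f_sent in corpus:
            for e_w in e_sent:
                # E-step: posterior probability that each f_w generated e_w
                z = sum(t[(e_w, f_w)] for f_w in f_sent)
                for f_w in f_sent:
                    c = t[(e_w, f_w)] / z
                    count[(e_w, f_w)] += c
                    total[f_w] += c
        # M-step: renormalize (the lambda_y constants in the update rule)
        for e_w, f_w in count:
            t[(e_w, f_w)] = count[(e_w, f_w)] / total[f_w]
    return t

# Hypothetical toy corpus (word lists, not real training data).
toy_corpus = [(["the", "house"], ["das", "haus"]),
              (["the", "book"], ["das", "buch"]),
              (["a", "book"], ["ein", "buch"])]
t_learned = train_model1(toy_corpus, iterations=20)
```

Even on this tiny corpus, the co-occurrence statistics pull probability mass toward the correct word pairs (e.g. the/das and book/buch), which is the essential behavior the closed-form update guarantees.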
=== Limitations ===
There are several limitations to IBM Model 1.
No fluency: Given any sentence pair $(e,f)$, any permutation of the English sentence is equally likely: $p(e|f)=p(e'|f)$ for any permutation $e'$ of the English sentence $e$.
No length preference: The probability of each length of translation is equal: $\sum_{e\text{ has length }l}p(e|f)=\frac{1}{N}$ for any $l\in\{1,2,\dots,N\}$.
Does not explicitly model fertility: some foreign words tend to produce a fixed number of English words. For example, for German-to-English translation, ja is usually omitted, and zum is usually translated to one of to the, for the, to a, for a.
== Model 2 ==
Model 2 allows alignment to be conditional on sentence lengths. That is, we have a probability distribution $p_a(j|i,l_e,l_f)$, meaning "the probability that English word position $i$ is aligned to foreign word position $j$, when the English sentence is of length $l_e$ and the foreign sentence is of length $l_f$".
The rest of Model 1 is unchanged. With that, we have

$$p(e,a|f)=\frac{1}{N}\prod_{i=1}^{l_e}t(e_i|f_{a(i)})\,p_a(a(i)|i,l_e,l_f)$$
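A minimal sketch of this quantity, using hypothetical toy tables for the translation and alignment distributions (in practice both are learned by EM):

```python
def model2_prob(e, f, a, t, p_a, N=20):
    """p(e, a | f) under IBM Model 2; f[0] is the NULL token.

    Compared with Model 1, the uniform alignment factor (1 + l_f)^{-l_e}
    is replaced by the learned position distribution p_a(j | i, l_e, l_f).
    """
    l_e, l_f = len(e), len(f) - 1
    p = 1.0 / N  # uniform length prior, as in Model 1
    for i, e_word in enumerate(e, start=1):
        j = a[i - 1]
        p *= t[(e_word, f[j])] * p_a[(j, i, l_e, l_f)]
    return p

# Hypothetical one-word example.
t = {("rain", "雨"): 0.9}
p_a = {(1, 1, 1, 1): 0.7}  # foreign position 1 explains English position 1
p = model2_prob(["rain"], ["NULL", "雨"], [1], t, p_a, N=10)
```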
The EM updates can still be computed in closed form, giving the following algorithm:
$$t(e_x|f_y)\leftarrow\frac{1}{\lambda_y}\sum_{k,i,j}\frac{t(e_i^{(k)}|f_j^{(k)})\,p_a(j|i,l_e,l_f)\,\delta(e_x,e_i^{(k)})\,\delta(f_y,f_j^{(k)})}{\sum_{j'}t(e_i^{(k)}|f_{j'}^{(k)})\,p_a(j'|i,l_e,l_f)}$$
$$p_a(j|i,l_e,l_f)\leftarrow\frac{1}{\lambda_{i,l_e,l_f}}\sum_{k}\frac{t(e_i^{(k)}|f_j^{(k)})\,p_a(j|i,l_e,l_f)\,\delta(l_e,l_e^{(k)})\,\delta(l_f,l_f^{(k)})}{\sum_{j'}t(e_i^{(k)}|f_{j'}^{(k)})\,p_a(j'|i,l_e,l_f)}$$
where $\lambda$ are still normalization factors. See section 4.4.1 of the cited reference for a derivation and an algorithm.
== Model 3 ==
The fertility problem is addressed in IBM Model 3. The fertility is modeled using a probability distribution defined as:

$$n(\phi\mid f)$$
For each foreign word $f$, such a distribution indicates how many output words $\phi$ it usually translates to. This model deals with dropping input words because it allows $\phi=0$. But there is still an issue when adding words. For example, the English word do is often inserted when negating. To handle this, a special NULL token is introduced, whose fertility can also be modeled using a conditional distribution defined as:

$$n(\varnothing\mid \mathrm{NULL})$$
The number of inserted words depends on sentence length. This is why the NULL token insertion is modeled as an additional step: the fertility step. It increases the IBM Model 3 translation process to four steps.
The last step is called distortion instead of alignment because it is possible to produce the same translation with the same alignment in different ways. For example, in the above example, we have another way to get the same alignment:
ja NULL nie pôjde tak do do domu
I do not go the to house
I do not go to the house
IBM Model 3 can be mathematically expressed as:
$$P(S\mid E,A)=\prod_{i=1}^{I}\Phi_i!\,n(\Phi_i\mid e_i)\cdot\prod_{j=1}^{J}t(f_j\mid e_{a_j})\cdot\prod_{j:a(j)\neq 0}^{J}d(j\mid a_j,I,J)\,\binom{J-\Phi_0}{\Phi_0}p_0^{\Phi_0}p_1^{J}$$
where $\Phi_i$ represents the fertility of $e_i$, each source word $s$ is assigned a fertility distribution $n$, and $I$ and $J$ refer to the absolute lengths of the target and source sentences, respectively.
See section 4.4.2 of the cited reference for a derivation and an algorithm.
== Model 4 ==
In IBM Model 4, each word is dependent on the previously aligned word and on the word classes of the surrounding words. Some words tend to get reordered during translation more than others (e.g. adjective–noun inversion when translating Polish to English, where the adjective is moved in front of the noun). The word classes introduced in Model 4 address this by conditioning the distortion probability distributions on those classes. The result of such a distribution is a lexicalized model. Such a distribution can be defined as follows:
For the initial word in the cept:
$$d_1(j-\odot_{[i-1]}\mid A(f_{[i-1]}),B(e_j))$$
For additional words:
$$d_1(j-\pi_{i,k-1}\mid B(e_j))$$
where the functions $A(f)$ and $B(e)$ map words to their word classes, and $e_j$ and $f_{[i-1]}$ are distortion probability distributions of the words. The cept is formed by aligning each input word $f_i$ to at least one output word.
Both Model 3 and Model 4 ignore whether an input position was already chosen, and they reserve probability mass for input positions outside the sentence boundaries. This is the reason the probabilities of all correct alignments do not sum to unity in these two models (they are deficient models).
== Model 5 ==
IBM Model 5 reformulates IBM Model 4 by enhancing the alignment model with more training parameters in order to overcome the model deficiency. During translation in Model 3 and Model 4 there are no heuristics that would prohibit the placement of an output word in a position already taken. In Model 5 it is important to place words only in free positions. It is done by tracking the number of free positions and allowing placement only in such positions. The distortion model is similar to that of IBM Model 4, but it is based on free positions. If $v_j$ denotes the number of free positions in the output, the IBM Model 5 distortion probabilities would be defined as:
For the initial word in the cept:
$$d_1(v_j\mid B(e_j),v_{\odot i-1},v_{max})$$
For additional words:
$$d_1(v_j-v_{\pi_{i,k-1}}\mid B(e_j),v_{max'})$$
== Model 6 ==
The alignment models that use first-order dependencies like the HMM or IBM Models 4 and 5 produce better results than the other alignment methods. The main idea of HMM is to predict the distance between subsequent source language positions. On the other hand, IBM Model 4 tries to predict the distance between subsequent target language positions. Since better alignment quality was expected when using both types of dependencies, HMM and Model 4 were combined in a log-linear manner in Model 6 as follows:
$$p_6(f,a\mid e)=\frac{p_4(f,a\mid e)^{\alpha}\cdot p_{HMM}(f,a\mid e)}{\sum_{a',f'}p_4(f',a'\mid e)^{\alpha}\cdot p_{HMM}(f',a'\mid e)}$$
where the interpolation parameter $\alpha$ is used to weight Model 4 relative to the hidden Markov model. A log-linear combination of several models $p_k(f,a\mid e)$ with $k=1,2,\dotsc,K$ can be defined as:
$$p_6(f,a\mid e)=\frac{\prod_{k=1}^{K}p_k(f,a\mid e)^{\alpha_k}}{\sum_{a',f'}\prod_{k=1}^{K}p_k(f',a'\mid e)^{\alpha_k}}$$
The log-linear combination is used instead of a linear combination because the $Pr(f,a\mid e)$ values typically differ by orders of magnitude between the HMM and IBM Model 4.
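The normalized log-linear combination above can be sketched in a few lines; the hypothesis names and probability values below are hypothetical:

```python
from math import prod

def log_linear_combine(scores, alphas):
    """Combine per-model probabilities p_k(f, a | e) log-linearly.

    scores: dict mapping each candidate hypothesis (f, a) to a list of
    per-model probabilities p_1, ..., p_K; alphas: the exponents alpha_k.
    Returns the normalized distribution over the candidates in `scores`.
    """
    unnorm = {h: prod(p ** alpha for p, alpha in zip(ps, alphas))
              for h, ps in scores.items()}
    z = sum(unnorm.values())  # the sum over (f', a') in the denominator
    return {h: v / z for h, v in unnorm.items()}

# Two hypothetical hypotheses scored by Model 4 and the HMM model.
scores = {"hyp1": [0.5, 0.2], "hyp2": [0.1, 0.4]}
combined = log_linear_combine(scores, alphas=[1.0, 1.0])
```

Because the combination multiplies powers of probabilities, a model whose scores are orders of magnitude smaller does not simply get drowned out, which is the motivation stated above.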
== References ==
== Further reading ==
Knight, Kevin (1997-12-15). "Automating Knowledge Acquisition for Machine Translation". AI Magazine. 18 (4): 81. doi:10.1609/aimag.v18i4.1323. ISSN 2371-9621.
Knight, Kevin. "A statistical MT tutorial workbook." Prepared for the 1999 JHU Summer Workshop. 1999. | Wikipedia/IBM_alignment_models |
PaLM (Pathways Language Model) is a 540 billion-parameter dense decoder-only transformer-based large language model (LLM) developed by Google AI. Researchers also trained smaller versions of PaLM (with 8 and 62 billion parameters) to test the effects of model scale.
== Model ==
PaLM is capable of a wide range of tasks, including commonsense reasoning, arithmetic reasoning, joke explanation, code generation, and translation. When combined with chain-of-thought prompting, PaLM achieved significantly better performance on datasets requiring reasoning of multiple steps, such as word problems and logic-based questions.
The model was first announced in April 2022 and remained private until March 2023, when Google launched an API for PaLM and several other technologies. The API was initially available to a limited number of developers who joined a waitlist before it was released to the public.
Google and DeepMind developed a version of PaLM 540B (the parameter count, 540 billion), called Med-PaLM, that is fine-tuned on medical data and outperforms previous models on medical question answering benchmarks. Med-PaLM was the first to obtain a passing score on U.S. medical licensing questions, and in addition to answering both multiple choice and open-ended questions accurately, it also provides reasoning and is able to evaluate its own responses.
Google also extended PaLM using a vision transformer to create PaLM-E, a state-of-the-art vision-language model that can be used for robotic manipulation. The model can perform tasks in robotics competitively without the need for retraining or fine-tuning.
In May 2023, Google announced PaLM 2 at the annual Google I/O keynote. PaLM 2 is reported to be a 340 billion-parameter model trained on 3.6 trillion tokens.
In June 2023, Google announced AudioPaLM for speech-to-speech translation, which uses the PaLM-2 architecture and initialization.
== Training ==
PaLM is pre-trained on a high-quality corpus of 780 billion tokens that comprise various natural language tasks and use cases. This dataset includes filtered webpages, books, Wikipedia articles, news articles, source code obtained from open source repositories on GitHub, and social media conversations. It is based on the dataset used to train Google's LaMDA model. The social media conversation portion of the dataset makes up 50% of the corpus, which aids the model in its conversational capabilities.
PaLM 540B was trained over two TPU v4 Pods with 3,072 TPU v4 chips in each Pod attached to 768 hosts, connected using a combination of model and data parallelism, which was the largest TPU configuration. This allowed for efficient training at scale, using 6,144 chips, and marked a record for the highest training efficiency achieved for LLMs at this scale: a hardware FLOPs utilization of 57.8%.
== See also ==
LaMDA, PaLM's predecessor
Gemini, PaLM's successor
Chinchilla
== References == | Wikipedia/Pathways_Language_Model |
Small language models (SLMs) are artificial intelligence language models designed for human natural language processing including language and text generation. Unlike large language models (LLMs), small language models are much smaller in scale and scope.
Typically, an LLM's number of training parameters is in the hundreds of billions, with some models even exceeding a trillion parameters. The size of any LLM is vast because it contains a large amount of information, which allows it to generate better content. However, this requires enormous computational power, making it impossible for an individual to train a large language model using just a single computer and GPU.
Small language models, on the other hand, use far fewer parameters, typically ranging from a few million to a few billion. This makes them more feasible to train and host in resource-constrained environments such as a single computer or even a mobile device.
== See also ==
Edge computing
== References == | Wikipedia/Small_language_model |
A generative pre-trained transformer (GPT) is a type of large language model (LLM) and a prominent framework for generative artificial intelligence. It is an artificial neural network that is used in natural language processing by machines. It is based on the transformer deep learning architecture, pre-trained on large data sets of unlabeled text, and able to generate novel human-like content. As of 2023, most LLMs had these characteristics and are sometimes referred to broadly as GPTs.
The first GPT was introduced in 2018 by OpenAI. OpenAI has released significant GPT foundation models that have been sequentially numbered, to comprise its "GPT-n" series. Each of these was significantly more capable than the previous, due to increased size (number of trainable parameters) and training. The most recent of these, GPT-4o, was released in May 2024. Such models have been the basis for their more task-specific GPT systems, including models fine-tuned for instruction following—which in turn power the ChatGPT chatbot service.
The term "GPT" is also used in the names and descriptions of such models developed by others. For example, other GPT foundation models include a series of models created by EleutherAI, and seven models created by Cerebras in 2023. Companies in different industries have developed task-specific GPTs in their respective fields, such as Salesforce's "EinsteinGPT" (for CRM) and Bloomberg's "BloombergGPT" (for finance).
== History ==
=== Initial developments ===
Generative pretraining (GP) was a long-established concept in machine learning applications. It was originally used as a form of semi-supervised learning, as the model is trained first on an unlabeled dataset (pretraining step) by learning to generate datapoints in the dataset, and then it is trained to classify a labeled dataset.
There were three main types of early GP. The hidden Markov models learn a generative model of sequences for downstream applications. For example, in speech recognition, a trained HMM infers the most likely hidden sequence for a speech signal, and the hidden sequence is taken as the phonemes of the speech signal. These were developed in the 1970s and became widely applied in speech recognition in the 1980s.
The compressors learn to compress data such as images and textual sequences, and the compressed data serves as a good representation for downstream applications such as facial recognition. The autoencoders similarly learn a latent representation of data for later downstream applications such as speech recognition. The connection between autoencoders and algorithmic compressors was noted in 1993.
During the 2010s, the problem of machine translation was solved by recurrent neural networks, with attention mechanism added. This was optimized into the transformer architecture, published by Google researchers in Attention Is All You Need (2017). That development led to the emergence of large language models such as BERT (2018) which was a pre-trained transformer (PT) but not designed to be generative (BERT was an "encoder-only" model). Also in 2018, OpenAI published Improving Language Understanding by Generative Pre-Training, which introduced GPT-1, the first in its GPT series.
Previously in 2017, some of the authors who would later work on GPT-1 worked on generative pre-training of language with LSTM, which resulted in a model that could represent text with vectors that could easily be fine-tuned for downstream applications.
Prior to transformer-based architectures, the best-performing neural NLP (natural language processing) models commonly employed supervised learning from large amounts of manually-labeled data. The reliance on supervised learning limited their use on datasets that were not well-annotated, and also made it prohibitively expensive and time-consuming to train extremely large language models.
The semi-supervised approach OpenAI employed to make a large-scale generative system—and which it was the first to apply with a transformer model—involved two stages: an unsupervised generative "pretraining" stage to set initial parameters using a language modeling objective, and a supervised discriminative "fine-tuning" stage to adapt these parameters to a target task.
=== Later developments ===
Regarding more recent GPT foundation models, OpenAI published its first versions of GPT-3 in July 2020. There were three models, with 1B, 6.7B, 175B parameters, respectively named babbage, curie, and davinci (giving initials B, C, and D).
In July 2021, OpenAI published Codex, a task-specific GPT model targeted for programming applications. This was developed by fine-tuning a 12B parameter version of GPT-3 (different from previous GPT-3 models) using code from GitHub.
In March 2022, OpenAI published two versions of GPT-3 that were fine-tuned for instruction-following (instruction-tuned), named davinci-instruct-beta (175B) and text-davinci-001, and then started beta testing code-davinci-002. text-davinci-002 was instruction-tuned from code-davinci-002. Both text-davinci-003 and ChatGPT were released in November 2022, with both building upon text-davinci-002 via reinforcement learning from human feedback (RLHF). text-davinci-003 is trained for following instructions (like its predecessors), whereas ChatGPT is further trained for conversational interaction with a human user.
OpenAI's most recent GPT foundation model, GPT-4, was released on March 14, 2023. It can be accessed directly by users via a premium version of ChatGPT, and is available to developers for incorporation into other products and services via OpenAI's API. Other producers of GPT foundation models include EleutherAI (with a series of models starting in March 2021) and Cerebras (with seven models released in March 2023).
== Foundation models ==
A foundation model is an AI model trained on broad data at scale such that it can be adapted to a wide range of downstream tasks.
Thus far, the most notable GPT foundation models have been from OpenAI's GPT-n series. The most recent of these is GPT-4, for which OpenAI declined to publish the size or training details (citing "the competitive landscape and the safety implications of large-scale models").
Other such models include Google's PaLM, a broad foundation model that has been compared to GPT-3 and has been made available to developers via an API, and Together's GPT-JT, which has been reported as the closest-performing open-source alternative to GPT-3 (and is derived from earlier open-source GPTs). Meta AI (formerly Facebook) also has a generative transformer-based foundational large language model, known as LLaMA.
Foundational GPTs can also employ modalities other than text, for input and/or output. GPT-4 is a multi-modal LLM that is capable of processing text and image input (though its output is limited to text). Regarding multimodal output, some generative transformer-based models are used for text-to-image technologies such as diffusion and parallel decoding. Such kinds of models can serve as visual foundation models (VFMs) for developing downstream systems that can work with images.
== Task-specific models ==
A foundational GPT model can be further adapted to produce more targeted systems directed to specific tasks and/or subject-matter domains. Methods for such adaptation can include additional fine-tuning (beyond that done for the foundation model) as well as certain forms of prompt engineering.
An important example of this is fine-tuning models to follow instructions, which is of course a fairly broad task but more targeted than a foundation model. In January 2022, OpenAI introduced "InstructGPT"—a series of models which were fine-tuned to follow instructions using a combination of supervised training and reinforcement learning from human feedback (RLHF) on base GPT-3 language models. Advantages this had over the bare foundational models included higher accuracy, less negative/toxic sentiment, and generally better alignment with user needs. Hence, OpenAI began using this as the basis for its API service offerings. Other instruction-tuned models have been released by others, including a fully open version.
Another (related) kind of task-specific models are chatbots, which engage in human-like conversation. In November 2022, OpenAI launched ChatGPT—an online chat interface powered by an instruction-tuned language model trained in a similar fashion to InstructGPT. They trained this model using RLHF, with human AI trainers providing conversations in which they played both the user and the AI, and mixed this new dialogue dataset with the InstructGPT dataset for a conversational format suitable for a chatbot. Other major chatbots currently include Microsoft's Bing Chat, which uses OpenAI's GPT-4 (as part of a broader close collaboration between OpenAI and Microsoft), and Google's competing chatbot Gemini (initially based on their LaMDA family of conversation-trained language models, with plans to switch to PaLM).
Yet another kind of task that a GPT can be used for is the meta-task of generating its own instructions, like developing a series of prompts for 'itself' to be able to effectuate a more general goal given by a human user. This is known as an AI agent, and more specifically a recursive one because it uses results from its previous self-instructions to help it form its subsequent prompts; the first major example of this was Auto-GPT (which uses OpenAI's GPT models), and others have since been developed as well.
=== Multimodality ===
Generative transformer-based systems can also be targeted for tasks involving modalities beyond text. For example, Microsoft's "Visual ChatGPT" combines ChatGPT with visual foundation models (VFMs) to enable input or output comprising images as well as text. Also, advances in text-to-speech technology offer tools for audio content creation when used in conjunction with foundational GPT language models.
=== Domain-specificity ===
GPT systems can be directed toward particular fields or domains. Some reported examples of such models and apps are as follows:
EinsteinGPT – for sales and marketing domains, to aid with customer relationship management (uses GPT-3.5)
BloombergGPT – for the financial domain, to aid with financial news and information (uses "freely available" AI methods, combined with their proprietary data)
Khanmigo – described as a GPT version for tutoring, in the education domain, it aids students using Khan Academy by guiding them through their studies without directly providing answers (powered by GPT-4)
SlackGPT – for the Slack instant-messaging service, to aid with navigating and summarizing discussions on it (uses OpenAI's API)
BioGPT – for the biomedical domain, to aid with biomedical literature text generation and mining (uses GPT-2)
Sometimes domain-specificity is accomplished via software plug-ins or add-ons. For example, several different companies have developed particular plugins that interact directly with OpenAI's ChatGPT interface, and Google Workspace has available add-ons such as "GPT for Sheets and Docs"—which is reported to aid use of spreadsheet functionality in Google Sheets.
== Brand issues ==
OpenAI, which created the first generative pre-trained transformer (GPT) in 2018, asserted in 2023 that "GPT" should be regarded as a brand of OpenAI. In April 2023, OpenAI revised the brand guidelines in its terms of service to indicate that other businesses using its API to run their artificial intelligence (AI) services would no longer be able to include "GPT" in such names or branding. In May 2023, OpenAI engaged a brand management service to notify its API customers of this policy, although these notifications stopped short of making overt legal claims (such as allegations of trademark infringement or demands to cease and desist). As of November 2023, OpenAI still prohibits its API licensees from naming their own products with "GPT", but it has begun enabling its ChatGPT Plus subscribers to make "custom versions of ChatGPT" that are being called GPTs on the OpenAI site. OpenAI's terms of service says that its subscribers may use "GPT" in the names of these, although it's "discouraged".
Relatedly, OpenAI has applied to the United States Patent and Trademark Office (USPTO) to seek domestic trademark registration for the term "GPT" in the field of AI. OpenAI sought to expedite handling of its application, but the USPTO declined that request in April 2023. In May 2023, the USPTO responded to the application with a determination that "GPT" was both descriptive and generic. As of November 2023, OpenAI continues to pursue its argument through the available processes. Regardless, failure to obtain a registered U.S. trademark does not preclude some level of common-law trademark rights in the U.S., and/or trademark rights in other countries.
For any given type or scope of trademark protection in the U.S., OpenAI would need to establish that the term is actually "distinctive" to their specific offerings in addition to being a broader technical term for the kind of technology. Some media reports suggested that OpenAI may be able to obtain trademark registration based indirectly on the fame of its GPT-based chatbot product, ChatGPT, for which OpenAI has separately sought protection (and which it has sought to enforce more strongly). Other reports have indicated that registration for the bare term "GPT" seems unlikely to be granted, as it is used frequently as a common term to refer simply to AI systems that involve generative pre-trained transformers. In any event, to whatever extent exclusive rights in the term may occur in the U.S., others would need to avoid using it for similar products or services in ways likely to cause confusion. If such rights ever became broad enough to implicate other well-established uses in the field, the trademark doctrine of descriptive fair use could still continue non-brand-related usage.
== Selected bibliography ==
This section lists the main official publications from OpenAI and Microsoft on their GPT models.
GPT-1: report, GitHub release.
GPT-2: blog announcement, report on its decision of "staged release", GitHub release.
GPT-3: report. No GitHub or any other form of code release thenceforth.
WebGPT: blog announcement, report.
InstructGPT: blog announcement, report.
ChatGPT: blog announcement (no report).
GPT-4: blog announcement, reports, model card.
GPT-4o: blog announcement.
GPT-4.5: blog announcement.
GPT-4.1: blog announcement.
== See also ==
Cyc
Gemini
== References == | Wikipedia/Generative_pretrained_transformer |
In computing and telecommunications, a control character or non-printing character (NPC) is a code point in a character set that does not represent a written character or symbol. They are used as in-band signaling to cause effects other than the addition of a symbol to the text. All other characters are mainly graphic characters, also known as printing characters (or printable characters), except perhaps for "space" characters. In the ASCII standard there are 33 control characters, such as code 7, BEL, which rings a terminal bell.
== History ==
Procedural signs in Morse code are a form of control character.
An early form of control characters was introduced in the 1870 Baudot code: NUL and DEL.
The 1901 Murray code added the carriage return (CR) and line feed (LF), and other versions of the Baudot code included other control characters.
The bell character (BEL), which rang a bell to alert operators, was also an early teletype control character.
Some control characters have also been called "format effectors".
== In ASCII ==
There were quite a few control characters defined (33 in ASCII, and the ECMA-48 standard adds 32 more). This was because early terminals had very primitive mechanical or electrical controls that made any kind of state-remembering API quite expensive to implement, thus a different code for each and every function looked like a requirement. It quickly became possible and inexpensive to interpret sequences of codes to perform a function, and device makers found a way to send hundreds of device instructions. Specifically, they used ASCII code 27 (escape), followed by a series of characters called a "control sequence" or "escape sequence". The mechanism was invented by Bob Bemer, the father of ASCII. For example, the sequence of code 27, followed by the printable characters "[2;10H", would cause a Digital Equipment Corporation VT100 terminal to move its cursor to the 10th cell of the 2nd line of the screen. Several standards exist for these sequences, notably ANSI X3.64, but the number of non-standard variations is large.
All entries in the ASCII table below code 32 (technically the C0 control code set) are of this kind, including CR and LF used to separate lines of text. Code 127 (DEL) is also a control character. Extended ASCII sets defined by ISO 8859 added the codes 128 through 159 as control characters. This was primarily done so that if the high bit was stripped, it would not change a printing character to a C0 control code. This second set is called the C1 set.
These 65 control codes were carried over to Unicode. Unicode added more characters that could be considered controls, but it makes a distinction between these "Formatting characters" (such as the zero-width non-joiner) and the 65 control characters.
The Extended Binary Coded Decimal Interchange Code (EBCDIC) character set contains 65 control codes, including all of the ASCII control codes plus additional codes which are mostly used to control IBM peripherals.
The control characters in ASCII still in common use include:
0x00 (null, NUL, \0, ^@), originally intended to be an ignored character, but now used by many programming languages including C to mark the end of a string.
0x04 (end of transmission, EOT, N/A, ^D), used as an End-of-File character on some terminals and for terminal input on Unix-like systems.
0x07 (bell, BEL, \a, ^G), which may cause the device to emit a warning such as a bell or beep sound or the screen flashing.
0x08 (backspace, BS, \b, ^H), may overprint the previous character.
0x09 (horizontal tab, HT, \t, ^I), moves the printing position right to the next tab stop.
0x0A (line feed, LF, \n, ^J), moves the print head down one line, or to the left edge and down. Used as the end of line marker in Unix-like systems.
0x0B (vertical tab, VT, \v, ^K), vertical tabulation.
0x0C (form feed, FF, \f, ^L), to cause a printer to eject paper to the top of the next page, or a video terminal to clear the screen.
0x0D (carriage return, CR, \r, ^M), moves the printing position to the start of the line, allowing overprinting. Used as the end of line marker in Classic Mac OS, OS-9, FLEX (and variants). A CR+LF pair is used by CP/M-80 and its derivatives including DOS and Windows, and by Application Layer protocols such as FTP, SMTP, and HTTP.
0x1A (Control-Z, SUB, ^Z). Acts as an end-of-file for the Windows text-mode file i/o.
0x1B (escape, ESC, \e (GCC only), ^[). Introduces an escape sequence.
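The C-style escape sequences listed above resolve directly to the code points given for each entry, which can be checked in any language with such escapes; a quick Python sketch (note that Python, unlike GCC, has no \e escape for ESC):

```python
# Each C-style escape resolves to the ASCII control code listed above.
controls = {
    "\0": 0x00,    # NUL
    "\a": 0x07,    # BEL
    "\b": 0x08,    # BS
    "\t": 0x09,    # HT
    "\n": 0x0A,    # LF
    "\v": 0x0B,    # VT
    "\f": 0x0C,    # FF
    "\r": 0x0D,    # CR
    "\x1a": 0x1A,  # SUB (Ctrl-Z), no short escape in Python
    "\x1b": 0x1B,  # ESC, no short escape in Python
}
for ch, code in controls.items():
    assert ord(ch) == code
```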
Control characters may be described as doing something when the user inputs them, such as code 3 (End-of-Text character, ETX, ^C) to interrupt the running process, or code 4 (End-of-Transmission character, EOT, ^D), used to end text input on Unix or to exit a Unix shell. These uses usually have little to do with their use when they are in text being output.
== In Unicode ==
In Unicode, "Control-characters" are U+0000–U+001F (C0 controls), U+007F (delete), and U+0080–U+009F (C1 controls). Their General Category is "Cc". Formatting codes are distinct, in General Category "Cf". The Cc control characters have no Name in Unicode, but are given labels such as "<control-001A>" instead.
== Display ==
There are a number of techniques to display non-printing characters, which may be illustrated with the bell character in ASCII encoding:
Code point: decimal 7, hexadecimal 0x07
An abbreviation, often three capital letters: BEL
A special character condensing the abbreviation: Unicode U+2407 (␇), "symbol for bell"
An ISO 2047 graphical representation: Unicode U+237E (⍾), "graphic for bell"
Caret notation in ASCII, where code point 00xxxxx is represented as a caret followed by the capital letter at code point 10xxxxx: ^G
An escape sequence, as in C/C++ character string codes: \a, \007, \x07, etc.
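The four techniques above can be reproduced programmatically for the bell character; the sketch below relies on the fact that the Unicode "Control Pictures" block places a visible symbol for each C0 control at U+2400 plus the control's code:

```python
BEL = 0x07

# 1. Code point, in decimal and hexadecimal.
assert BEL == 7 and hex(BEL) == "0x7"

# 2. Caret notation: a caret plus the character 64 code points higher.
caret = "^" + chr(BEL + 0x40)
assert caret == "^G"

# 3. Unicode "Control Pictures": U+2400 + code, so U+2407 is "symbol for bell".
assert chr(0x2400 + BEL) == "\u2407"

# 4. Escape sequence, as in C-style string literals.
assert "\a" == chr(BEL)
```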
== How control characters map to keyboards ==
ASCII-based keyboards have a key labelled "Control", "Ctrl", or (rarely) "Cntl" which is used much like a shift key, being pressed in combination with another letter or symbol key. In one implementation, the control key generates the code 64 places below the code for the (generally) uppercase letter it is pressed in combination with (i.e., subtract 0x40 from the ASCII code value of the (generally) uppercase letter). The other implementation is to take the ASCII code produced by the key and bitwise AND it with 0x1F, forcing bits 5 to 7 to zero. For example, pressing "control" and the letter "g" (which is 0110 0111 in binary) produces the code 7 (BEL, 0000 0111 in binary). The NUL character (code 0) is represented by Ctrl-@, "@" being the code immediately before "A" in the ASCII character set. For convenience, some terminals accept Ctrl-Space as an alias for Ctrl-@. In either case, this produces one of the 32 ASCII control codes between 0 and 31. Neither approach works to produce the DEL character because of its special location in the table and its value (code 127); Ctrl-? is sometimes used for this character.
When the control key is held down, letter keys produce the same control characters regardless of the state of the shift or caps lock keys. In other words, it does not matter whether the key would have produced an upper-case or a lower-case letter. The interpretation of the control key with the space, graphics character, and digit keys (ASCII codes 32 to 63) varies between systems. Some will produce the same character code as if the control key were not held down. Other systems translate these keys into control characters when the control key is held down. The interpretation of the control key with non-ASCII ("foreign") keys also varies between systems.
Control characters are often rendered into a printable form known as caret notation by printing a caret (^) and then the ASCII character that has a value of the control character plus 64. Control characters generated using letter keys are thus displayed with the upper-case form of the letter. For example, ^G represents code 7, which is generated by pressing the G key when the control key is held down.
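Both the bitwise-AND implementation of the control key and the caret display convention described above can be sketched in a few lines of Python; the function names here are illustrative, not part of any standard API:

```python
def ctrl_key(ch: str) -> int:
    """Code produced by Ctrl plus a key: mask off bits 5 to 7."""
    return ord(ch) & 0x1F

def caret_notation(code: int) -> str:
    """Printable form of a control code: caret + (code + 64); DEL is ^?."""
    return "^?" if code == 0x7F else "^" + chr(code + 0x40)

# Shift state does not matter: "g" and "G" give the same control code.
assert ctrl_key("g") == ctrl_key("G") == 7   # BEL
assert ctrl_key("@") == 0                    # NUL, Ctrl-@
assert caret_notation(7) == "^G"
assert caret_notation(0x7F) == "^?"          # DEL is the special case
```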
Keyboards also typically have a few single keys which produce control character codes. For example, the key labelled "Backspace" typically produces code 8, "Tab" code 9, "Enter" or "Return" code 13 (though some keyboards might produce code 10 for "Enter").
Many keyboards include keys that do not correspond to any ASCII printable or control character, for example cursor control arrows and word processing functions. The associated keypresses are communicated to computer programs by one of four methods: appropriating otherwise unused control characters; using some encoding other than ASCII; using multi-character control sequences; or using an additional mechanism outside of generating characters. "Dumb" computer terminals typically use control sequences. Keyboards attached to stand-alone personal computers made in the 1980s typically use one (or both) of the first two methods. Modern computer keyboards generate scancodes that identify the specific physical keys that are pressed; computer software then determines how to handle the keys that are pressed, including any of the four methods described above.
== The design purpose ==
The control characters were designed to fall into a few groups: printing and display control, data structuring, transmission control, and miscellaneous.
=== Printing and display control ===
Printing control characters were first used to control the physical mechanism of printers, the earliest output device. An early example of this idea was the use of Figures (FIGS) and Letters (LTRS) in Baudot code to shift between two code pages. A later, but still early, example was the out-of-band ASA carriage control characters. Later, control characters were integrated into the stream of data to be printed.
The carriage return character (CR), when sent to such a device, causes it to put the character at the edge of the paper at which writing begins (it may, or may not, also move the printing position to the next line).
The line feed character (LF/NL) causes the device to put the printing position on the next line. It may (or may not), depending on the device and its configuration, also move the printing position to the start of the next line (which would be the leftmost position for left-to-right scripts, such as the alphabets used for Western languages, and the rightmost position for right-to-left scripts such as the Hebrew and Arabic alphabets).
The vertical and horizontal tab characters (VT and HT/TAB) cause the output device to move the printing position to the next tab stop in the direction of reading.
The form feed character (FF/NP) starts a new sheet of paper, and may or may not move to the start of the first line.
The backspace character (BS) moves the printing position one character space backwards. On printers, including hard-copy terminals, this is most often used so the printer can overprint characters to make other, not normally available, characters. On video terminals and other electronic output devices, there are often software (or hardware) configuration choices that allow a destructive backspace (e.g., a BS, SP, BS sequence), which erases, or a non-destructive one, which does not.
The shift in and shift out characters (SI and SO) selected alternate character sets, fonts, underlining, or other printing modes. Escape sequences were often used to do the same thing.
With the advent of computer terminals that did not physically print on paper and so offered more flexibility regarding screen placement, erasure, and so forth, printing control codes were adapted. Form feeds, for example, usually cleared the screen, there being no new paper page to move to. More complex escape sequences were developed to take advantage of the flexibility of the new terminals, and indeed of newer printers. The concept of a control character had always been somewhat limiting, and was extremely so when used with new, much more flexible, hardware. Control sequences (sometimes implemented as escape sequences) could match the new flexibility and power and became the standard method. However, there were, and remain, a large variety of standard sequences to choose from.
=== Data structuring ===
The separators (File, Group, Record, and Unit: FS, GS, RS and US) were made to structure data, usually on a tape, in order to simulate punched cards.
End of medium (EM) warns that the tape (or other recording medium) is ending.
While many systems use CR/LF and TAB for structuring data, it is possible to encounter the separator control characters in data that needs to be structured. The separator control characters are not overloaded; there is no general use of them except to separate data into structured groupings. Their numeric values are contiguous with the space character, which can be considered a member of the group, as a word separator.
For example, the RS separator is used by RFC 7464 (JSON Text Sequences) to encode a sequence of JSON elements. Each sequence item starts with an RS character and ends with a line feed. This makes it possible to serialize open-ended JSON sequences. It is one of the JSON streaming protocols.
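A minimal sketch of this framing, using Python's standard json module (the encode/decode function names are illustrative, and a full RFC 7464 parser would also handle truncated or malformed elements):

```python
import json

RS, LF = "\x1e", "\n"  # record separator and line feed

def encode_seq(items):
    """Frame each JSON value as RS + text + LF, per RFC 7464."""
    return "".join(RS + json.dumps(item) + LF for item in items)

def decode_seq(data):
    """Split on RS and parse each non-empty chunk back to a value."""
    return [json.loads(chunk) for chunk in data.split(RS) if chunk.strip()]

stream = encode_seq([{"a": 1}, [2, 3], "four"])
assert decode_seq(stream) == [{"a": 1}, [2, 3], "four"]
```

Because RS can never appear inside serialized JSON text, a reader can resynchronize at the next RS after a damaged element, which is the point of the format.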
=== Transmission control ===
The transmission control characters were intended to structure a data stream, and to manage re-transmission or graceful failure, as needed, in the face of transmission errors.
The start of heading (SOH) character was to mark a non-data section of a data stream—the part of a stream containing addresses and other housekeeping data. The start of text character (STX) marked the end of the header, and the start of the textual part of a stream. The end of text character (ETX) marked the end of the data of a message. A widely used convention is to make the two characters preceding ETX a checksum or CRC for error-detection purposes. The end of transmission block character (ETB) was used to indicate the end of a block of data, where data was divided into such blocks for transmission purposes.
The escape character (ESC) was intended to "quote" the next character; if that character was another control character, it would be printed instead of performing the control function. It is almost never used for this purpose today. Various printable characters are used as visible "escape characters", depending on context.
The substitute character (SUB) was intended to request a translation of the next character from a printable character to another value, usually by setting bit 5 to zero. This is handy because some media (such as sheets of paper produced by typewriters) can transmit only printable characters. However, on MS-DOS systems with files opened in text mode, "end of text" or "end of file" is marked by this Ctrl-Z character, instead of the Ctrl-C or Ctrl-D, which are common on other operating systems.
The cancel character (CAN) signaled that the previous element should be discarded. The negative acknowledge character (NAK) is a definite flag for, usually, noting that reception was a problem, and, often, that the current element should be sent again. The acknowledge character (ACK) is normally used as a flag to indicate no problem detected with current element.
When a transmission medium is half duplex (that is, it can transmit in only one direction at a time), there is usually a master station that can transmit at any time, and one or more slave stations that transmit when they have permission. The enquire character (ENQ) is generally used by a master station to ask a slave station to send its next message. A slave station indicates that it has completed its transmission by sending the end of transmission character (EOT).
The device control codes (DC1 to DC4) were originally generic, to be implemented as necessary by each device. However, a universal need in data transmission is to request the sender to stop transmitting when a receiver is temporarily unable to accept any more data. Digital Equipment Corporation invented a convention which used 19 (the device control 3 character (DC3), also known as control-S, or XOFF) to "S"top transmission, and 17 (the device control 1 character (DC1), a.k.a. control-Q, or XON) to start transmission. It has become so widely used that most don't realize it is not part of official ASCII. This technique, however implemented, avoids additional wires in the data cable devoted only to transmission management, which saves money. A sensible protocol for the use of such transmission flow control signals must be used, to avoid potential deadlock conditions, however.
The data link escape character (DLE) was intended to be a signal to the other end of a data link that the following character is a control character such as STX or ETX. For example, a packet may be structured as <DLE> <STX> <PAYLOAD> <DLE> <ETX>.
=== Miscellaneous codes ===
Code 7 (BEL) is intended to cause an audible signal in the receiving terminal.
Many of the ASCII control characters were designed for devices of the time that are not often seen today. For example, code 22, "synchronous idle" (SYN), was originally sent by synchronous modems (which have to send data constantly) when there was no actual data to send. (Modern systems typically use a start bit to announce the beginning of a transmitted word; this is a feature of asynchronous communication. Synchronous communication links were more often seen with mainframes, where they were typically run over corporate leased lines to connect a mainframe to another mainframe or perhaps a minicomputer.)
Code 0 (ASCII code name NUL) is a special case. In paper tape, it is the case when there are no holes. It is convenient to treat this as a fill character with no meaning otherwise. Since the position of a NUL character has no holes punched, it can be replaced with any other character at a later time, so it was typically used to reserve space, either for correcting errors or for inserting information that would be available at a later time or in another place. In computing, it is often used for padding in fixed length records; to mark the end of a string; and formerly to give printing devices enough time to execute a control function.
Code 127 (DEL, a.k.a. "rubout") is likewise a special case. Its 7-bit code is all-bits-on in binary, which essentially erased a character cell on a paper tape when overpunched. Paper tape was a common storage medium when ASCII was developed, with a computing history dating back to WWII code breaking equipment at Biuro Szyfrów. Paper tape became obsolete in the 1970s, so this aspect of ASCII rarely saw any use after that. Some systems (such as the original Apple computers) converted it to a backspace. But because its code is in the range occupied by other printable characters, and because it had no official assigned glyph, many computer equipment vendors used it as an additional printable character (often an all-black box character useful for erasing text by overprinting with ink).
Non-erasable programmable ROMs are typically implemented as arrays of fusible elements, each representing a bit, which can only be switched one way, usually from one to zero. In such PROMs, the DEL and NUL characters can be used in the same way that they were used on punched tape: one to reserve meaningless fill bytes that can be written later, and the other to convert written bytes to meaningless fill bytes. For PROMs that switch one to zero, the roles of NUL and DEL are reversed; also, DEL will only work with 7-bit characters, which are rarely used today; for 8-bit content, the character code 255, commonly defined as a nonbreaking space character, can be used instead of DEL.
Many file systems do not allow control characters in filenames, as they may have reserved functions.
== See also ==
Arrow keys § HJKL keys, HJKL as arrow keys, used on ADM-3A terminal
C0 and C1 control codes
Escape sequence
In-band signaling
Whitespace character
== Notes and references ==
== External links ==
ISO IR 1 C0 Set of ISO 646 (PDF) | Wikipedia/Control_character |
A neural network is a group of interconnected units called neurons that send signals to one another. Neurons can be either biological cells or signal pathways. While individual neurons are simple, many of them together in a network can perform complex tasks. There are two main types of neural networks.
In neuroscience, a biological neural network is a physical structure found in brains and complex nervous systems – a population of nerve cells connected by synapses.
In machine learning, an artificial neural network is a mathematical model used to approximate nonlinear functions. Artificial neural networks are used to solve artificial intelligence problems.
== In biology ==
In the context of biology, a neural network is a population of biological neurons chemically connected to each other by synapses. A given neuron can be connected to hundreds of thousands of synapses.
Each neuron sends and receives electrochemical signals called action potentials to its connected neighbors. A neuron can serve an excitatory role, amplifying and propagating signals it receives, or an inhibitory role, suppressing signals instead.
Populations of interconnected neurons that are smaller than neural networks are called neural circuits. Very large interconnected networks are called large scale brain networks, and many of these together form brains and nervous systems.
Signals generated by neural networks in the brain eventually travel through the nervous system and across neuromuscular junctions to muscle cells, where they cause contraction and thereby motion.
== In machine learning ==
In machine learning, a neural network is an artificial mathematical model used to approximate nonlinear functions. While early artificial neural networks were physical machines, today they are almost always implemented in software.
Neurons in an artificial neural network are usually arranged into layers, with information passing from the first layer (the input layer) through one or more intermediate layers (the hidden layers) to the final layer (the output layer).
The "signal" input to each neuron is a number, specifically a linear combination of the outputs of the connected neurons in the previous layer. The signal each neuron outputs is calculated from this number, according to its activation function. The behavior of the network depends on the strengths (or weights) of the connections between neurons. A network is trained by modifying these weights through empirical risk minimization or backpropagation in order to fit some preexisting dataset.
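The signal computation described above can be illustrated with a minimal forward pass. This is a sketch, not any particular library's API: the layer sizes and random weights are arbitrary, and a sigmoid is used as the activation function.

```python
import math, random

random.seed(0)

def layer(inputs, weights, biases):
    """One dense layer: each neuron takes a linear combination of the
    previous layer's outputs, then applies a sigmoid activation."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# A tiny 2-3-1 network: input layer, one hidden layer, output layer.
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b_hidden = [0.0, 0.0, 0.0]
w_out = [[random.uniform(-1, 1) for _ in range(3)]]
b_out = [0.0]

hidden = layer([0.5, -0.2], w_hidden, b_hidden)
output = layer(hidden, w_out, b_out)
assert len(output) == 1 and 0.0 < output[0] < 1.0  # sigmoid output is in (0, 1)
```

Training would then adjust w_hidden and w_out (the connection weights) so that the output matches a dataset, which is what backpropagation automates.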
The term deep neural network refers to neural networks that have more than three layers, typically including at least two hidden layers in addition to the input and output layers.
Neural networks are used to solve problems in artificial intelligence, and have thereby found applications in many disciplines, including predictive modeling, adaptive control, facial recognition, handwriting recognition, general game playing, and generative AI.
== History ==
The theoretical base for contemporary neural networks was independently proposed by Alexander Bain in 1873 and William James in 1890. Both posited that human thought emerged from interactions among large numbers of neurons inside the brain. In 1949, Donald Hebb described Hebbian learning, the idea that neural networks can change and learn over time by strengthening a synapse every time a signal travels along it.
Artificial neural networks were originally used to model biological neural networks starting in the 1930s under the approach of connectionism. However, starting with the invention of the perceptron, a simple artificial neural network, by Warren McCulloch and Walter Pitts in 1943, followed by the implementation of one in hardware by Frank Rosenblatt in 1957, artificial neural networks became increasingly used for machine learning applications instead, and increasingly different from their biological counterparts.
== See also ==
Emergence
Biological cybernetics
Biologically-inspired computing
== References == | Wikipedia/neural_network |
Large-scale brain networks (also known as intrinsic brain networks) are collections of widespread brain regions showing functional connectivity by statistical analysis of the fMRI BOLD signal or other recording methods such as EEG, PET and MEG. An emerging paradigm in neuroscience is that cognitive tasks are performed not by individual brain regions working in isolation but by networks consisting of several discrete brain regions that are said to be "functionally connected". Functional connectivity networks may be found using algorithms such as cluster analysis, spatial independent component analysis (ICA), seed based, and others. Synchronized brain regions may also be identified using long-range synchronization of the EEG, MEG, or other dynamic brain signals.
The set of identified brain areas that are linked together in a large-scale network varies with cognitive function. When the cognitive state is not explicit (i.e., the subject is at "rest"), the large-scale brain network is a resting state network (RSN). As a physical system with graph-like properties, a large-scale brain network has both nodes and edges and cannot be identified simply by the co-activation of brain areas. In recent decades, the analysis of brain networks was made feasible by advances in imaging techniques as well as new tools from graph theory and dynamical systems.
The Organization for Human Brain Mapping has created the Workgroup for HArmonized Taxonomy of NETworks (WHATNET) group to work towards a consensus regarding network nomenclature. WHATNET conducted a survey in 2021 which showed a large degree of agreement about the name and topography of three networks: the "somato network", the "default network" and the "visual network", while other networks had less agreement. Several issues make the work of creating a common atlas for networks difficult: some of these issues are the variability of spatial and time scales, variability across individuals, and the dynamic nature of some networks.
Some large-scale brain networks are identified by their function and provide a coherent framework for understanding cognition by offering a neural model of how different cognitive functions emerge when different sets of brain regions join together as self-organized coalitions. The number and composition of the coalitions will vary with the algorithm and parameters used to identify them. In one model, there is only the default mode network and the task-positive network, but most current analyses show several networks, from a small handful to 17. The most common and stable networks are enumerated below. The regions participating in a functional network may be dynamically reconfigured.
Disruptions in activity in various networks have been implicated in neuropsychiatric disorders such as depression, Alzheimer's, autism spectrum disorder, schizophrenia, ADHD and bipolar disorder.
== Commonly identified networks ==
Because brain networks can be identified at various different resolutions and with various different neurobiological properties, there is currently no universal atlas of brain networks that fits all circumstances. Uddin, Yeo, and Spreng proposed in 2019 that the following six networks should be defined as core networks based on converging evidences from multiple studies to facilitate communication between researchers.
=== Default mode (medial frontoparietal) ===
The default mode network is active when an individual is awake and at rest. It preferentially activates when individuals focus on internally-oriented tasks such as daydreaming, envisioning the future, retrieving memories, and theory of mind. It is negatively correlated with brain systems that focus on external visual signals. It is the most widely researched network.
=== Salience (midcingulo-insular) ===
The salience network consists of several structures, including the anterior (bilateral) insula, dorsal anterior cingulate cortex, and three subcortical structures which are the ventral striatum, substantia nigra/ventral tegmental region. It plays the key role of monitoring the salience of external inputs and internal brain events. Specifically, it aids in directing attention by identifying important biological and cognitive events.
This network includes the ventral attention network, which primarily includes the temporoparietal junction and the ventral frontal cortex of the right hemisphere. These areas respond when behaviorally relevant stimuli occur unexpectedly. The ventral attention network is inhibited during focused attention in which top-down processing is being used, such as when visually searching for something. This response may prevent goal-driven attention from being distracted by non-relevant stimuli. It becomes active again when the target or relevant information about the target is found.
=== Attention (dorsal frontoparietal) ===
This network is involved in the voluntary, top-down deployment of attention. Within the dorsal attention network, the intraparietal sulcus and frontal eye fields influence the visual areas of the brain. These influencing factors allow for the orientation of attention.
=== Control (lateral frontoparietal) ===
This network initiates and modulates cognitive control and comprises 18 sub-regions of the brain. There is a strong correlation between fluid intelligence and the involvement of the fronto-parietal network with other networks.
Versions of this network have also been called the central executive (or executive control) network and the cognitive control network.
=== Sensorimotor or somatomotor (pericentral) ===
This network processes somatosensory information and coordinates motion. The auditory cortex may be included.
=== Visual (occipital) ===
This network handles visual information processing.
== Other networks ==
Different methods and data have identified several other brain networks, many of which greatly overlap or are subsets of more well-characterized core networks.
Limbic
Auditory
Right/left executive
Cerebellar
Spatial attention
Language
Lateral visual
Temporal
Visual perception/imagery
== See also ==
Complex network
Neural network (biology)
== References == | Wikipedia/Large_scale_brain_network |
In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.
Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one.
In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.
As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology.
Some authors use the term nonlinear science for the study of nonlinear systems. This term is disputed by others:
Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals.
== Definition ==
In mathematics, a linear map (or linear function) f(x) is one which satisfies both of the following properties:
Additivity or superposition principle: {\displaystyle \textstyle f(x+y)=f(x)+f(y);}
Homogeneity: {\displaystyle \textstyle f(\alpha x)=\alpha f(x).}
Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous. The conditions of additivity and homogeneity are often combined in the superposition principle
{\displaystyle f(\alpha x+\beta y)=\alpha f(x)+\beta f(y)}
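These two properties are easy to test numerically. The sketch below checks the combined superposition principle for an arbitrarily chosen linear map and a nonlinear one (both example functions are my own, not from the source):

```python
# Numerically check the superposition principle f(a*x + b*y) == a*f(x) + b*f(y)
# for a linear map and for a nonlinear one.

def satisfies_superposition(f, x, y, a, b, tol=1e-9):
    return abs(f(a * x + b * y) - (a * f(x) + b * f(y))) < tol

linear = lambda x: 3.0 * x        # linear: f(x) = 3x
nonlinear = lambda x: x * x       # nonlinear: f(x) = x^2

print(satisfies_superposition(linear, 1.0, 2.0, 0.5, -1.5))     # True
print(satisfies_superposition(nonlinear, 1.0, 2.0, 0.5, -1.5))  # False
```

A single counterexample pair (x, y, α, β) is enough to certify nonlinearity, while linearity of course cannot be proven by finitely many samples; the check only illustrates the definition.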
An equation written as f(x) = C is called linear if f(x) is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if C = 0 and f(x) is a homogeneous function.
The definition f(x) = C is very general in that x can be any sensible mathematical object (number, vector, function, etc.), and the function f(x) can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If f(x) contains differentiation with respect to x, the result will be a differential equation.
== Nonlinear systems of equations ==
A nonlinear system of equations consists of a set of equations in several variables such that at least one of them is not a linear equation.
For a single equation of the form f(x) = 0, many methods have been designed; see Root-finding algorithm. In the case where f is a polynomial, one has a polynomial equation such as
{\displaystyle x^{2}+x-1=0.}
The general root-finding algorithms apply to polynomial roots, but, in general, they do not find all the roots, and when they fail to find a root, this does not imply that there are no roots. Specific methods for polynomials allow finding all roots or the real roots; see real-root isolation.
Solving systems of polynomial equations, that is, finding the common zeros of a set of several polynomials in several variables, is a difficult problem for which elaborate algorithms have been designed, such as Gröbner basis algorithms.
For the general case of system of equations formed by equating to zero several differentiable functions, the main method is Newton's method and its variants. Generally they may provide a solution, but do not provide any information on the number of solutions.
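As a sketch of the approach, here is Newton's method applied to the polynomial equation x² + x − 1 = 0 from above. The starting point is an arbitrary choice, and convergence is only local, which reflects the caveat that these methods need not find every root:

```python
# Newton's method for f(x) = 0, illustrated on the polynomial x^2 + x - 1 = 0.
# The iteration x_{n+1} = x_n - f(x_n)/f'(x_n) converges quadratically near a
# simple root, but gives no information about other roots.

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x * x + x - 1.0
fprime = lambda x: 2.0 * x + 1.0

root = newton(f, fprime, 1.0)
print(root)  # ~0.6180339887, the positive root (sqrt(5) - 1) / 2
```

Starting from x₀ = −1 instead would converge to the other (negative) root, illustrating the dependence on the initial guess.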
== Nonlinear recurrence relations ==
A nonlinear recurrence relation defines successive terms of a sequence as a nonlinear function of preceding terms. Examples of nonlinear recurrence relations are the logistic map and the relations that define the various Hofstadter sequences. Nonlinear discrete models that represent a wide class of nonlinear recurrence relationships include the NARMAX (Nonlinear Autoregressive Moving Average with eXogenous inputs) model and the related nonlinear system identification and analysis procedures. These approaches can be used to study a wide class of complex nonlinear behaviors in the time, frequency, and spatio-temporal domains.
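The logistic map mentioned above can be iterated in a few lines; the parameter and initial values below are my own choices for illustration:

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n), a standard example of a
# nonlinear recurrence relation.  For r = 2 the iterates settle to the
# attracting fixed point 1 - 1/r = 0.5; for r = 4 the map is chaotic.

def logistic_orbit(r, x0, n):
    x = x0
    orbit = [x]
    for _ in range(n):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

stable = logistic_orbit(2.0, 0.2, 50)
print(stable[-1])  # -> converges to 0.5
```

Rerunning with r = 4.0 and two nearby initial conditions (say 0.2 and 0.2000001) shows the sensitive dependence on initial conditions characteristic of chaos.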
== Nonlinear differential equations ==
A system of differential equations is said to be nonlinear if it is not a system of linear equations. Problems involving nonlinear differential equations are extremely diverse, and methods of solution or analysis are problem dependent. Examples of nonlinear differential equations are the Navier–Stokes equations in fluid dynamics and the Lotka–Volterra equations in biology.
One of the greatest difficulties of nonlinear problems is that it is not generally possible to combine known solutions into new solutions. In linear problems, for example, a family of linearly independent solutions can be used to construct general solutions through the superposition principle. A good example of this is one-dimensional heat transport with Dirichlet boundary conditions, the solution of which can be written as a time-dependent linear combination of sinusoids of differing frequencies; this makes solutions very flexible. It is often possible to find several very specific solutions to nonlinear equations, however the lack of a superposition principle prevents the construction of new solutions.
=== Ordinary differential equations ===
First order ordinary differential equations are often exactly solvable by separation of variables, especially for autonomous equations. For example, the nonlinear equation
{\displaystyle {\frac {du}{dx}}=-u^{2}}
has u = 1/(x + C) as a general solution (and also the special solution u = 0, corresponding to the limit of the general solution when C tends to infinity). The equation is nonlinear because it may be written as
{\displaystyle {\frac {du}{dx}}+u^{2}=0}
and the left-hand side of the equation is not a linear function of u and its derivatives. Note that if the u² term were replaced with u, the problem would be linear (the exponential decay problem).
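As a quick sanity check on the general solution above, one can verify numerically that u(x) = 1/(x + C) satisfies du/dx = −u²; C = 1 and the sample points are arbitrary choices:

```python
# Check numerically that u(x) = 1/(x + C) solves du/dx = -u^2, using a
# central finite difference to approximate the derivative.

C = 1.0
u = lambda x: 1.0 / (x + C)

def residual(x, h=1e-6):
    du_dx = (u(x + h) - u(x - h)) / (2.0 * h)   # central difference
    return du_dx + u(x) ** 2                     # should be ~0

print(max(abs(residual(x)) for x in [0.0, 0.5, 1.0, 2.0]))  # near zero
```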
Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yield closed-form solutions, though implicit solutions and solutions involving nonelementary integrals are encountered.
Common methods for the qualitative analysis of nonlinear ordinary differential equations include:
Examination of any conserved quantities, especially in Hamiltonian systems
Examination of dissipative quantities (see Lyapunov function) analogous to conserved quantities
Linearization via Taylor expansion
Change of variables into something easier to study
Bifurcation theory
Perturbation methods (can be applied to algebraic equations too)
Existence of finite-duration solutions, which can occur under specific conditions for some nonlinear ordinary differential equations.
=== Partial differential equations ===
The most common basic approach to studying nonlinear partial differential equations is to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly linear). Sometimes, the equation may be transformed into one or more ordinary differential equations, as seen in separation of variables, which is always useful whether or not the resulting ordinary differential equation(s) is solvable.
Another common (though less mathematical) tactic, often exploited in fluid and heat mechanics, is to use scale analysis to simplify a general, natural equation in a certain specific boundary value problem. For example, the (very) nonlinear Navier-Stokes equations can be simplified into one linear partial differential equation in the case of transient, laminar, one dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one dimensional and also yields the simplified equation.
Other methods include examining the characteristics and using the methods outlined above for ordinary differential equations.
=== Pendula ===
A classic, extensively studied nonlinear problem is the dynamics of a frictionless pendulum under the influence of gravity. Using Lagrangian mechanics, it may be shown that the motion of a pendulum can be described by the dimensionless nonlinear equation
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\sin(\theta )=0}
where gravity points "downwards" and θ is the angle the pendulum forms with its rest position, as shown in the figure at right. One approach to "solving" this equation is to use dθ/dt as an integrating factor, which would eventually yield
{\displaystyle \int {\frac {d\theta }{\sqrt {C_{0}+2\cos(\theta )}}}=t+C_{1}}
which is an implicit solution involving an elliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in the nonelementary integral (nonelementary unless C₀ = 2).
Another way to approach the problem is to linearize any nonlinearity (the sine function term in this case) at the various points of interest through Taylor expansions. For example, the linearization at θ = 0, called the small angle approximation, is
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\theta =0}
since sin(θ) ≈ θ for θ ≈ 0. This is a simple harmonic oscillator corresponding to oscillations of the pendulum near the bottom of its path. Another linearization would be at θ = π, corresponding to the pendulum being straight up:
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\pi -\theta =0}
since sin(θ) ≈ π − θ for θ ≈ π. The solution to this problem involves hyperbolic sinusoids; note that unlike the small angle approximation, this approximation is unstable, meaning that |θ| will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state.
One more interesting linearization is possible around θ = π/2, around which sin(θ) ≈ 1:
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+1=0.}
This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations, as seen in the figure at right. Other techniques may be used to find (exact) phase portraits and approximate periods.
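The quality of the small angle approximation can be checked numerically. The sketch below (my own construction, with an arbitrary initial angle of 0.1 rad and a hand-rolled RK4 step) integrates both the full equation and its linearization over roughly one period:

```python
# Compare the full pendulum equation theta'' = -sin(theta) with its
# small-angle linearization theta'' = -theta, integrating both with a
# simple RK4 step.  For a small initial angle the two stay close.

import math

def rk4_step(f, y, t, h):
    k1 = f(t, y)
    k2 = f(t + h/2, [y[i] + h/2 * k1[i] for i in range(2)])
    k3 = f(t + h/2, [y[i] + h/2 * k2[i] for i in range(2)])
    k4 = f(t + h, [y[i] + h * k3[i] for i in range(2)])
    return [y[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

full = lambda t, y: [y[1], -math.sin(y[0])]   # nonlinear pendulum
lin  = lambda t, y: [y[1], -y[0]]             # small-angle approximation

y_full, y_lin, h = [0.1, 0.0], [0.1, 0.0], 0.01
for step in range(int(2 * math.pi / h)):      # roughly one period
    t = step * h
    y_full = rk4_step(full, y_full, t, h)
    y_lin = rk4_step(lin, y_lin, t, h)

print(abs(y_full[0] - y_lin[0]))  # small for theta_0 = 0.1 rad
```

Repeating the run with an initial angle near π/2 makes the discrepancy grow quickly, since the sine term is then far from its tangent line at zero.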
== Types of nonlinear dynamic behaviors ==
Amplitude death – any oscillations present in the system cease due to some kind of interaction with other system or feedback by the same system
Chaos – values of a system cannot be predicted indefinitely far into the future, and fluctuations are aperiodic
Multistability – the presence of two or more stable states
Solitons – self-reinforcing solitary waves
Limit cycles – asymptotic periodic orbits to which destabilized fixed points are attracted.
Self-oscillations – feedback oscillations taking place in open dissipative physical systems.
== Examples of nonlinear equations ==
== See also ==
== References ==
== Further reading ==
== External links ==
Command and Control Research Program (CCRP)
New England Complex Systems Institute: Concepts in Complex Systems
Nonlinear Dynamics I: Chaos at MIT's OpenCourseWare
Nonlinear Model Library – (in MATLAB) a Database of Physical Systems
The Center for Nonlinear Studies at Los Alamos National Laboratory
Iterative Learning Control (ILC) is an open-loop control approach of tracking control for systems that work in a repetitive mode. Examples of systems that operate in a repetitive manner include robot arm manipulators, chemical batch processes and reliability testing rigs. In each of these tasks the system is required to perform the same action over and over again with high precision. This action is represented by the objective of accurately tracking a chosen reference signal
r(t) on a finite time interval.
Repetition allows the system to sequentially improve tracking accuracy, in effect learning the required input needed to track the reference as closely as possible. The learning process uses information from previous repetitions to improve the control signal, ultimately enabling a suitable control action to be found iteratively. The internal model principle yields conditions under which perfect tracking can be achieved but the design of the control algorithm still leaves many decisions to be made to suit the application. A typical, simple control law is of the form:
{\displaystyle u_{p+1}=u_{p}+K*e_{p}}
where u_p is the input to the system during the pth repetition, e_p is the tracking error during the pth repetition and K is a design parameter representing operations on e_p. Achieving perfect tracking through iteration is represented by the mathematical requirement of convergence of the input signals as p becomes large, whilst the rate of this convergence represents the desirable practical need for the learning process to be rapid. There is also the need to ensure good algorithm performance even in the presence of uncertainty about the details of process dynamics. The operation K is crucial to achieving design objectives (i.e. trading off fast convergence and robust performance) and ranges from simple scalar gains to sophisticated optimization computations.
In many cases a low-pass filter is added to the input to improve performance. The control law then takes the form
{\displaystyle u_{p+1}=Q(u_{p}+K*e_{p})}
where Q is a low-pass filtering matrix. This removes high-frequency disturbances which may otherwise be amplified during the learning process.
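The basic update law can be sketched on a toy example (entirely my own: a first-order discrete plant, a scalar gain K, and no Q filter). Each trial replays the same finite horizon, and the input is corrected by the one-step-ahead error, since the plant responds to u[k] at time k+1:

```python
# Minimal ILC sketch: the plant y[k+1] = 0.3*y[k] + u[k] is driven to track
# a reference over a finite horizon.  Each trial applies
# u_{p+1}[k] = u_p[k] + K * e_p[k+1], using the error one step ahead.

import math

N, K = 20, 0.9
r = [math.sin(0.3 * k) for k in range(N + 1)]   # reference to track
u = [0.0] * N                                   # initial input: all zeros

def run_trial(u):
    """Simulate one repetition; return the output trajectory y[0..N]."""
    y = [0.0] * (N + 1)
    for k in range(N):
        y[k + 1] = 0.3 * y[k] + u[k]
    return y

errors = []
for trial in range(25):
    y = run_trial(u)
    e = [r[k] - y[k] for k in range(N + 1)]
    errors.append(max(abs(v) for v in e[1:]))
    u = [u[k] + K * e[k + 1] for k in range(N)]  # learning update

print(errors[0], errors[-1])  # the tracking error shrinks across repetitions
```

For this plant the trial-to-trial error map is a contraction (the diagonal factor is 1 − K·b = 0.1), so the error decays geometrically; a poorly chosen K can instead make the iteration diverge, which is the robustness issue discussed above.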
== References ==
S. Arimoto; S. Kawamura; F. Miyazaki (1984). "Bettering operation of robots by learning". Journal of Robotic Systems. 1 (2): 123–140. doi:10.1002/rob.4620010203.
Moore, K.L. (1993). Iterative Learning Control for Deterministic Systems. London: Springer-Verlag. ISBN 0-387-19707-9.
Jian Xin Xu; Ying Tan. (2003). Linear and Nonlinear Iterative Learning Control. Springer-Verlag. p. 177. ISBN 3-540-40173-3.
Bristow, D. A.; Tharayil, M.; Alleyne, A. G. (2006). "A survey of iterative learning control: A learning-based method for high-performance tracking control". IEEE Control Systems Magazine. Vol. 26. pp. 96–114.
Owens D.H.; Feng K. (20 July 2003). "Parameter optimization in iterative learning control". International Journal of Control. 76 (11): 1059–1069. doi:10.1080/0020717031000121410. S2CID 120288506.
Owens D.H.; Hätönen J. (2005). "Iterative learning control — An optimization paradigm". Annual Reviews in Control. 29 (1): 57–70. doi:10.1016/j.arcontrol.2005.01.003.
Daley S.; Owens D.H. (2008). "Iterative Learning Control – Monotonicity and Optimization" (PDF). International Journal of Applied Mathematics and Computer Science. 18 (3): 179–293. doi:10.2478/v10006-008-0026-7.
Wang Y.; Gao F.; Doyle III, F.J. (2009). "Survey on iterative learning control, repetitive control, and run-to-run control". Journal of Process Control. 19 (10): 1589–1600. doi:10.1016/j.jprocont.2009.09.006.
In control theory, multiple model control is an approach to ensure stability in cases of large model uncertainty or changing plant dynamics. It uses a number of models, which are distributed to give a suitable cover of the region of uncertainty, and adapts control based on the responses of the plant and the models. A model is chosen at every instant, depending on which is closest to the plant according to some metric, and this is used to determine the appropriate control input. The method offers satisfactory performance when no restrictions are put on the number of available models.
== Approaches ==
There are a number of multiple model methods, including:
“Switching”: the control input to the plant is based on the fixed model chosen at that instant. It is discontinuous, fast, but coarse. However, it does have the advantage of verifiable stability bounds.
“Switching and tuning”: an adaptive model is initialized from the location of the chosen fixed model, and the parameters of the best model determine the control to be used. It is continuous, slow, but accurate.
"Blending": the control input is chosen based on a weighted combination of a number of suitable models.
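The "switching" idea can be sketched in a few lines (a toy of my own, with hypothetical numbers): several fixed candidate models of an unknown plant gain are kept, the one whose prediction best matches the observed output is selected at each step, and its inverse is used to compute the control input:

```python
# Multiple model "switching" sketch: pick, at each step, the fixed model
# whose prediction is closest to the plant's observed response, and base
# the control input on that model.

true_gain = 2.7                                  # unknown to the controller
candidate_gains = [0.5, 1.0, 2.0, 3.0, 5.0]      # fixed model set

reference = 1.0
u, best = 0.5, None
for step in range(10):
    y = true_gain * u                             # plant response
    # metric: squared prediction error of each model on the last input
    errs = [(g * u - y) ** 2 for g in candidate_gains]
    best = candidate_gains[errs.index(min(errs))]
    u = reference / best                          # control via the chosen model
print(best, true_gain * u)  # -> model 3.0 selected, output ~0.9
```

The residual offset (0.9 instead of 1.0) reflects the coarseness of a fixed model set; "switching and tuning" would now adapt the selected model's parameters to close that gap.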
== Applications ==
The multiple model method can be used for:
controlling an unknown plant - the parameter estimates and the identification errors can be used collectively to determine the control input to the overall system,
applying multiple observers - significantly improving transients and reducing observer overshoot.
== See also ==
State observer
Adaptive control
== References ==
=== General references ===
Dual control theory is a branch of control theory that deals with the control of systems whose characteristics are initially unknown. It is called dual because in controlling such a system the controller's objectives are twofold:
(1) Action: To control the system as well as possible based on current system knowledge
(2) Investigation: To experiment with the system so as to learn about its behavior and control it better in the future.
These two objectives may be partly in conflict.
In the context of reinforcement learning, this is known as the exploration-exploitation trade-off (e.g. Multi-armed bandit#Empirical motivation).
Dual control theory was developed by Alexander Aronovich Fel'dbaum in 1960. He showed that in principle the optimal solution can be found by dynamic programming, but this is often impractical; as a result a number of methods for designing sub-optimal dual controllers have been devised.
== Example ==
To use an analogy: if you are driving a new car you want to get to your destination cheaply and smoothly, but you also want to see how well the car accelerates, brakes and steers so as to get a better feel for how to drive it, so you will do some test manoeuvres for this purpose. Similarly a dual controller will inject a so-called probing (or exploration) signal into the system that may detract from short-term performance but will improve control in the future.
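The two objectives can also be seen in a deliberately minimal numerical sketch (entirely my own construction): a plant y = b·u with unknown gain b cannot be controlled well until a probing input has been applied to identify b:

```python
# Toy illustration of the dual effect: the plant is y = b*u with unknown
# gain b.  A pure "action" controller with u = 0 never learns b; injecting
# a small probing input reveals it, after which the reference can be tracked.

b_true = 1.8                       # unknown gain
reference = 4.0

# Investigation: apply a probing input and estimate b from the response.
u_probe = 0.1
y_probe = b_true * u_probe
b_est = y_probe / u_probe          # estimate from the probing experiment

# Action: use the estimate to compute the control input.
u = reference / b_est
print(b_est, b_true * u)  # -> ~1.8 and ~4.0
```

In a noise-free scalar example the trade-off is invisible (one tiny probe suffices); with noise and dynamics, the controller must balance the cost of probing against the value of the information it yields, which is exactly the dual control problem.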
== References ==
Feldbaum, A.A. (April 1961) [September 1960 (in Russian, pp. 1240–1249)]. "Dual control theory, Part I". Automation and Remote Control. 21 (9): 874–880.
Feldbaum, A.A. (May 1961) [November 1960 (in Russian, pp. 1453–1464)]. "Dual control theory, Part II". Automation and Remote Control. 21 (11): 1033–1039.
Wittenmark, B. (June 1995). "Adaptive Dual Control Methods: An Overview". Lund University: 67–72. CiteSeerX 10.1.1.25.7446.
SNNS (Stuttgart Neural Network Simulator) is a neural network simulator originally developed at the University of Stuttgart. While it was originally built for X11 under Unix, there are Windows ports. Its successor JavaNNS never reached the same popularity.
== Features ==
SNNS is written around a simulation kernel to which user written activation functions, learning procedures and output functions can be added. It has support for arbitrary network topologies and the standard release contains support for a number of standard neural network architectures and training algorithms.
== Status ==
There is currently no ongoing active development of SNNS. In July 2008 the license was changed to the GNU LGPL.
== See also ==
Artificial neural network
Neural network software
== External links ==
SNNS homepage
Patches with bugfixes and a Python interface to the SNNS kernel
The Predictive Model Markup Language (PMML) is an XML-based predictive model interchange format conceived by Robert Lee Grossman, then the director of the National Center for Data Mining at the University of Illinois at Chicago. PMML provides a way for analytic applications to describe and exchange predictive models produced by data mining and machine learning algorithms. It supports common models such as logistic regression and other feedforward neural networks. Version 0.9 was published in 1998. Subsequent versions have been developed by the Data Mining Group.
Since PMML is an XML-based standard, the specification comes in the form of an XML schema. PMML itself is a mature standard with over 30 organizations having announced products supporting PMML.
== PMML components ==
A PMML file can be described by the following components:
Header: contains general information about the PMML document, such as copyright information for the model, its description, and information about the application used to generate the model such as name and version. It also contains an attribute for a timestamp which can be used to specify the date of model creation.
Data Dictionary: contains definitions for all the possible fields used by the model. It is here that a field is defined as continuous, categorical, or ordinal (attribute optype). Depending on this definition, the appropriate value ranges are then defined as well as the data type (such as, string or double).
Data Transformations: transformations allow for the mapping of user data into a more desirable form to be used by the mining model. PMML defines several kinds of simple data transformations.
Normalization: map values to numbers, the input can be continuous or discrete.
Discretization: map continuous values to discrete values.
Value mapping: map discrete values to discrete values.
Functions (custom and built-in): derive a value by applying a function to one or more parameters.
Aggregation: used to summarize or collect groups of values.
Model: contains the definition of the data mining model. For example, a multi-layered feedforward neural network is represented in PMML by a "NeuralNetwork" element which contains attributes such as:
Model Name (attribute modelName)
Function Name (attribute functionName)
Algorithm Name (attribute algorithmName)
Activation Function (attribute activationFunction)
Number of Layers (attribute numberOfLayers)
This information is then followed by three kinds of neural layers which specify the architecture of the neural network model being represented in the PMML document. These attributes are NeuralInputs, NeuralLayer, and NeuralOutputs. Besides neural networks, PMML allows for the representation of many other types of models including support vector machines, association rules, Naive Bayes classifier, clustering models, text models, decision trees, and different regression models.
Mining Schema: a list of all fields used in the model. This can be a subset of the fields as defined in the data dictionary. It contains specific information about each field, such as:
Name (attribute name): must refer to a field in the data dictionary
Usage type (attribute usageType): defines the way a field is to be used in the model. Typical values are: active, predicted, and supplementary. Predicted fields are those whose values are predicted by the model.
Outlier Treatment (attribute outliers): defines the outlier treatment to be used. In PMML, outliers can be treated as missing values, as extreme values (based on the definition of high and low values for a particular field), or as is.
Missing Value Replacement Policy (attribute missingValueReplacement): if this attribute is specified then a missing value is automatically replaced by the given values.
Missing Value Treatment (attribute missingValueTreatment): indicates how the missing value replacement was derived (e.g. as value, mean or median).
Targets: allows for post-processing of the predicted value in the format of scaling if the output of the model is continuous. Targets can also be used for classification tasks. In this case, the attribute priorProbability specifies a default probability for the corresponding target category. It is used if the prediction logic itself did not produce a result. This can happen, e.g., if an input value is missing and there is no other method for treating missing values.
Output: this element can be used to name all the desired output fields expected from the model. These are features of the predicted field and so are typically the predicted value itself, the probability, cluster affinity (for clustering models), standard error, etc. The latest release of PMML, PMML 4.1, extended Output to allow for generic post-processing of model outputs. In PMML 4.1, all the built-in and custom functions that were originally available only for pre-processing became available for post-processing too.
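The component layout above can be sketched with the standard library's ElementTree. The element and attribute names follow the components described (Header, DataDictionary, MiningSchema); the field names, model type, and values are invented for illustration, and the result shows only the skeleton, not a schema-complete, scorable model:

```python
# Build a minimal PMML-style skeleton: Header, DataDictionary, and a model
# element carrying a MiningSchema, mirroring the components described above.

import xml.etree.ElementTree as ET

pmml = ET.Element("PMML", version="4.4")
header = ET.SubElement(pmml, "Header", copyright="example")
ET.SubElement(header, "Application", name="demo", version="1.0")

dd = ET.SubElement(pmml, "DataDictionary", numberOfFields="2")
ET.SubElement(dd, "DataField", name="x", optype="continuous", dataType="double")
ET.SubElement(dd, "DataField", name="y", optype="continuous", dataType="double")

model = ET.SubElement(pmml, "RegressionModel",
                      modelName="demo", functionName="regression")
schema = ET.SubElement(model, "MiningSchema")
ET.SubElement(schema, "MiningField", name="x", usageType="active")
ET.SubElement(schema, "MiningField", name="y", usageType="predicted")

xml_text = ET.tostring(pmml, encoding="unicode")
print(xml_text.startswith("<PMML"))  # True
```

A real PMML consumer would additionally require the model's content (e.g. a RegressionTable) and validate the document against the official XML schema.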
== PMML 4.0, 4.1, 4.2 and 4.3 ==
PMML 4.0 was released on June 16, 2009.
Examples of new features included:
Improved Pre-Processing Capabilities: Additions to built-in functions include a range of Boolean operations and an If-Then-Else function.
Time Series Models: New exponential Smoothing models; also place holders for ARIMA, Seasonal Trend Decomposition, and Spectral density estimation, which are to be supported in the near future.
Model Explanation: Saving of evaluation and model performance measures to the PMML file itself.
Multiple Models: Capabilities for model composition, ensembles, and segmentation (e.g., combining of regression and decision trees).
Extensions of Existing Elements: Addition of multi-class classification for Support Vector Machines, improved representation for Association Rules, and the addition of Cox Regression Models.
PMML 4.1 was released on December 31, 2011.
New features included:
New model elements for representing Scorecards, k-Nearest Neighbors (KNN) and Baseline Models.
Simplification of multiple models. In PMML 4.1, the same element is used to represent model segmentation, ensemble, and chaining.
Overall definition of field scope and field names.
A new attribute that identifies for each model element if the model is ready or not for production deployment.
Enhanced post-processing capabilities (via the Output element).
PMML 4.2 was released on February 28, 2014.
New features include:
Transformations: New elements for implementing text mining
New built-in functions for implementing regular expressions: matches, concat, and replace
Simplified outputs for post-processing
Enhancements to Scorecard and Naive Bayes model elements
PMML 4.3 was released on August 23, 2016.
New features include:
New Model Types:
Gaussian Process
Bayesian Network
New built-in functions
Usage clarifications
Documentation improvements
Version 4.4 was released in November 2019.
== Release history ==
== Data Mining Group ==
The Data Mining Group is a consortium managed by the Center for Computational Science Research, Inc., a nonprofit founded in 2008. The Data Mining Group also developed a standard called Portable Format for Analytics, or PFA, which is complementary to PMML.
== See also ==
Open Neural Network Exchange
== References ==
== External links ==
Data Pre-processing in PMML and ADAPA - A Primer
Video of Alex Guazzelli's PMML presentation for the ACM Data Mining Group (hosted by LinkedIn)
PMML 3.2 Specification
PMML 4.0 Specification
PMML 4.1 Specification
PMML 4.2.1 Specification
PMML 4.4 Specification
Representing predictive solutions in PMML: Move from raw data to predictions - Article published on the IBM developerWorks website.
Predictive analytics in healthcare: The importance of open standards - Article published on the IBM developerWorks website.
Neural Networks is a monthly peer-reviewed scientific journal and an official journal of the International Neural Network Society, European Neural Network Society, and Japanese Neural Network Society.
== History ==
The journal was established in 1988 and is published by Elsevier. It covers all aspects of research on artificial neural networks. The founding editor-in-chief was Stephen Grossberg (Boston University).
The current editors-in-chief are DeLiang Wang (Ohio State University) and Taro Toyoizumi (RIKEN Center for Brain Science).
== Abstracting and indexing ==
The journal is abstracted and indexed in Scopus and the Science Citation Index Expanded. According to the Journal Citation Reports, the journal has a 2022 impact factor of 7.8.
== References ==
== External links ==
Official website
Artificial intelligence (AI) is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.
High-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."
Various subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include learning, reasoning, knowledge representation, planning, natural language processing, perception, and support for robotics. To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields. Some companies, such as OpenAI, Google DeepMind and Meta, aim to create artificial general intelligence (AGI)—AI that can complete virtually any cognitive task at least as well as a human.
Artificial intelligence was founded as an academic discipline in 1956, and the field went through multiple cycles of optimism throughout its history, followed by periods of disappointment and loss of funding, known as AI winters. Funding and interest vastly increased after 2012 when graphics processing units started being used to accelerate neural networks, and deep learning outperformed previous AI techniques. This growth accelerated further after 2017 with the transformer architecture. In the 2020s, the period of rapid progress marked by advanced generative AI became known as the AI boom. Generative AI and its ability to create and modify content exposed several unintended consequences and harms in the present and raised ethical concerns about AI's long-term effects and potential existential risks, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.
== Goals ==
The general problem of simulating (or creating) intelligence has been broken into subproblems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research.
=== Reasoning and problem-solving ===
Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics.
Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": They become exponentially slower as the problems grow. Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments. Accurate and efficient reasoning is an unsolved problem.
=== Knowledge representation ===
Knowledge representation and knowledge engineering allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery (mining "interesting" and actionable inferences from large databases), and other areas.
A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge. Knowledge bases need to represent things such as objects, properties, categories, and relations between objects; situations, events, states, and time; causes and effects; knowledge about knowledge (what we know about what other people know); default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing); and many other aspects and domains of knowledge.
Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous); and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally). There is also the difficulty of knowledge acquisition, the problem of obtaining knowledge for AI applications.
=== Planning and decision-making ===
An "agent" is anything that perceives and takes actions in the world. A rational agent has goals or preferences and takes actions to make them happen. In automated planning, the agent has a specific goal. In automated decision-making, the agent has preferences—there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision-making agent assigns a number to each situation (called the "utility") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility": the utility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility.
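The expected-utility calculation described above can be sketched in a few lines of Python; the actions, outcome probabilities, and utilities below are invented purely for illustration:

```python
# Hypothetical sketch: choosing the action with maximum expected utility.
# The actions, probabilities, and utilities are invented for illustration.

def expected_utility(action, outcomes):
    """Sum the utility of each possible outcome, weighted by its probability."""
    return sum(p * u for p, u in outcomes[action])

# action -> list of (probability, utility) pairs for its possible outcomes
outcomes = {
    "take_umbrella": [(0.3, 60), (0.7, 80)],   # rain / no rain
    "leave_umbrella": [(0.3, 0), (0.7, 100)],
}

# The rational agent picks the action with the highest expected utility.
best = max(outcomes, key=lambda a: expected_utility(a, outcomes))
print(best)  # take_umbrella: 0.3*60 + 0.7*80 = 74 beats 0.3*0 + 0.7*100 = 70
```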
In classical planning, the agent knows exactly what the effect of any action will be. In most real-world problems, however, the agent may not be certain about the situation they are in (it is "unknown" or "unobservable") and it may not know for certain what will happen after each possible action (it is not "deterministic"). It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked.
In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned (e.g., with inverse reinforcement learning), or the agent can seek information to improve its preferences. Information value theory can be used to weigh the value of exploratory or experimental actions. The space of possible future actions and situations is typically intractably large, so the agents must take actions and evaluate situations while being uncertain of what the outcome will be.
A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way and a reward function that supplies the utility of each state and the cost of each action. A policy associates a decision with each possible state. The policy could be calculated (e.g., by iteration), be heuristic, or it can be learned.
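As a hedged illustration of calculating a policy by iteration, the sketch below runs value iteration on a toy two-state Markov decision process; the transition probabilities, rewards, and discount factor are all invented:

```python
# Toy value iteration on a hypothetical two-state MDP.
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "go":   [(1.0, "s0", 0.0)]},
}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in transitions}
for _ in range(100):  # repeat Bellman backups until the values settle
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outs)
                for outs in transitions[s].values())
         for s in transitions}

# The policy associates each state with the best one-step-lookahead action.
policy = {s: max(transitions[s],
                 key=lambda a: sum(p * (r + gamma * V[s2])
                                   for p, s2, r in transitions[s][a]))
          for s in transitions}
print(policy)  # {'s0': 'go', 's1': 'stay'}
```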
Game theory describes the rational behavior of multiple interacting agents and is used in AI programs that make decisions that involve other agents.
=== Learning ===
Machine learning is the study of programs that can improve their performance on a given task automatically. It has been a part of AI from the beginning.
There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance. Supervised learning requires labeling the training data with the expected answers, and comes in two main varieties: classification (where the program must learn to predict what category the input belongs in) and regression (where the program must deduce a numeric function based on numeric input).
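Supervised regression, in which the program deduces a numeric function from labeled examples, can be illustrated with a minimal least-squares fit; the data points below are fabricated from the line y = 2x + 1:

```python
# Minimal supervised regression sketch: fit y = a*x + b to labeled examples
# with the closed-form least-squares solution. Toy data, invented here.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]      # labels generated from y = 2x + 1

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
b = my - a * mx
print(a, b)  # 2.0 1.0 — the program recovers the numeric function
```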
In reinforcement learning, the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as "good". Transfer learning is when the knowledge gained from one problem is applied to a new problem. Deep learning is a type of machine learning that runs inputs through biologically inspired artificial neural networks for all of these types of learning.
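Reinforcement learning can be sketched with tabular Q-learning on a hypothetical one-dimensional corridor in which only the rightmost state yields a reward; the environment and all parameters are invented for illustration:

```python
import random
random.seed(0)

# Hypothetical corridor of states 0..4; reward 1 only for reaching state 4.
# Tabular Q-learning sketch; all parameter values are illustrative.
n_states, actions = 5, ["left", "right"]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(s, a):
    s2 = max(0, s - 1) if a == "left" else min(n_states - 1, s + 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for _ in range(500):               # training episodes
    s = 0
    while s != n_states - 1:
        a = (random.choice(actions) if random.random() < epsilon
             else max(actions, key=lambda a: Q[(s, a)]))
        s2, r = step(s, a)
        # the reward for a good move is propagated back through the table
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

print(max(actions, key=lambda a: Q[(0, a)]))  # learned best action at the start
```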
Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.
=== Natural language processing ===
Natural language processing (NLP) allows programs to read, write and communicate in human languages such as English. Specific problems include speech recognition, speech synthesis, machine translation, information extraction, information retrieval and question answering.
Early work, based on Noam Chomsky's generative grammar and semantic networks, had difficulty with word-sense disambiguation unless restricted to small domains called "micro-worlds" (due to the common sense knowledge problem). Margaret Masterman believed that it was meaning and not grammar that was the key to understanding languages, and that thesauri and not dictionaries should be the basis of computational language structure.
Modern deep learning techniques for NLP include word embedding (representing words, typically as vectors encoding their meaning), transformers (a deep learning architecture using an attention mechanism), and others. In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text, and by 2023, these models were able to get human-level scores on the bar exam, SAT test, GRE test, and many other real-world applications.
=== Perception ===
Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Computer vision is the ability to analyze visual input.
The field includes speech recognition, image classification, facial recognition, object recognition, object tracking, and robotic perception.
=== Social intelligence ===
Affective computing is a field that comprises systems that recognize, interpret, process, or simulate human feeling, emotion, and mood. For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; this makes them appear more sensitive to the emotional dynamics of human interaction, or otherwise facilitates human–computer interaction.
However, this tends to give naïve users an unrealistic conception of the intelligence of existing computer agents. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the affect displayed by a videotaped subject.
=== General intelligence ===
A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence.
== Techniques ==
AI research uses a wide variety of techniques to accomplish the goals above.
=== Search and optimization ===
AI can solve many problems by intelligently searching through many possible solutions. There are two very different kinds of search used in AI: state space search and local search.
==== State space search ====
State space search searches through a tree of possible states to try to find a goal state. For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.
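An uninformed state space search can be sketched with breadth-first search over a small invented graph of states, returning the path from the start state to the goal:

```python
from collections import deque

# State-space search sketch: breadth-first search over an invented graph.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["G"], "E": []}

def bfs(start, goal):
    frontier = deque([[start]])          # paths waiting to be extended
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                  # first path found is shortest
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                          # goal unreachable

print(bfs("A", "G"))  # ['A', 'B', 'D', 'G']
```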
Simple exhaustive searches are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. "Heuristics" or "rules of thumb" can help prioritize choices that are more likely to reach a goal.
Adversarial search is used for game-playing programs, such as chess or Go. It searches through a tree of possible moves and countermoves, looking for a winning position.
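The core of adversarial search is the minimax rule; a minimal sketch over a tiny invented game tree, where leaves hold payoffs for the maximizing player:

```python
# Minimal minimax sketch over a hypothetical game tree.
# Leaves are numeric payoffs for MAX; internal nodes are lists of children.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):   # leaf: return its payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny invented tree: MAX chooses a branch, then MIN replies.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree))  # 3 — MAX picks the branch whose worst reply is best
```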
==== Local search ====
Local search uses mathematical optimization to find a solution to a problem. It begins with some form of guess and refines it incrementally.
Gradient descent is a type of local search that optimizes a set of numerical parameters by incrementally adjusting them to minimize a loss function. Variants of gradient descent are commonly used to train neural networks, through the backpropagation algorithm.
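Gradient descent can be illustrated on a one-parameter quadratic loss; the loss function, starting point, and learning rate below are chosen only for illustration:

```python
# Gradient descent sketch: minimize the toy loss f(w) = (w - 3)^2,
# whose gradient is f'(w) = 2 * (w - 3).
w = 0.0             # initial guess
lr = 0.1            # learning rate (step size)
for _ in range(200):
    grad = 2 * (w - 3)
    w -= lr * grad  # step against the gradient
print(round(w, 4))  # approaches the minimum at w = 3
```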
Another type of local search is evolutionary computation, which aims to iteratively improve a set of candidate solutions by "mutating" and "recombining" them, selecting only the fittest to survive each generation.
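A minimal genetic-algorithm sketch (the task, encoding, and all parameters are invented here) evolves bit strings toward the all-ones string by mutating and recombining candidates and keeping only the fittest each generation:

```python
import random
random.seed(1)

# Illustrative evolutionary computation: maximize the number of 1s in a
# 20-bit string via selection, crossover ("recombining"), and mutation.
def fitness(bits):
    return sum(bits)                       # count of 1 bits

def mutate(bits, rate=0.05):
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    cut = random.randrange(1, len(a))      # one-point recombination
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(100):                       # generations
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                   # only the fittest survive
    pop = survivors + [mutate(crossover(random.choice(survivors),
                                        random.choice(survivors)))
                       for _ in range(20)]

print(fitness(max(pop, key=fitness)))      # best individual's score
```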
Distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).
=== Logic ===
Formal logic is used for reasoning and knowledge representation.
Formal logic comes in two main forms: propositional logic (which operates on statements that are true or false and uses logical connectives such as "and", "or", "not" and "implies") and predicate logic (which also operates on objects, predicates and relations and uses quantifiers such as "Every X is a Y" and "There are some Xs that are Ys").
Deductive reasoning in logic is the process of proving a new statement (conclusion) from other statements that are given and assumed to be true (the premises). Proofs can be structured as proof trees, in which nodes are labelled by sentences, and children nodes are connected to parent nodes by inference rules.
Given a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whose leaf nodes are labelled by premises or axioms. In the case of Horn clauses, problem-solving search can be performed by reasoning forwards from the premises or backwards from the problem. In the more general case of the clausal form of first-order logic, resolution is a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved.
Inference in both Horn clause logic and first-order logic is undecidable, and therefore intractable. However, backward reasoning with Horn clauses, which underpins computation in the logic programming language Prolog, is Turing complete. Moreover, its efficiency is competitive with computation in other symbolic programming languages.
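Backward reasoning with Horn clauses can be sketched in a few lines; the propositional rules and facts below are invented for illustration and omit the variables and unification a real Prolog system would handle:

```python
# Hedged sketch of backward chaining over propositional Horn clauses.
# Each rule is (head, body): the head holds if every body goal holds.
rules = [
    ("mortal", ["human"]),        # Prolog: mortal :- human.
    ("human", ["greek"]),         # Prolog: human :- greek.
]
facts = {"greek"}                 # Prolog: greek.

def prove(goal):
    """True if goal is a known fact or some rule's body is fully provable."""
    if goal in facts:
        return True
    return any(head == goal and all(prove(g) for g in body)
               for head, body in rules)

print(prove("mortal"))  # True: mortal <- human <- greek
```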
Fuzzy logic assigns a "degree of truth" between 0 and 1. It can therefore handle propositions that are vague and partially true.
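Degrees of truth combine with fuzzy connectives; the sketch below uses the common min/max/complement operators (one standard choice among several) on invented truth degrees:

```python
# Fuzzy-logic sketch: truth degrees in [0, 1] combined with the
# common min/max/complement operators (other operator families exist).
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

warm, humid = 0.7, 0.4             # illustrative degrees of truth
print(f_and(warm, humid))          # 0.4 — "warm and humid"
print(f_or(warm, f_not(humid)))    # 0.7 — "warm or not humid"
```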
Non-monotonic logics, including logic programming with negation as failure, are designed to handle default reasoning. Other specialized versions of logic have been developed to describe many complex domains.
=== Probabilistic methods for uncertain reasoning ===
Many problems in AI (including reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes, dynamic decision networks, game theory and mechanism design.
Bayesian networks are a tool that can be used for reasoning (using the Bayesian inference algorithm), learning (using the expectation–maximization algorithm), planning (using decision networks) and perception (using dynamic Bayesian networks).
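In the simplest case of a two-node network, Bayesian inference reduces to Bayes' rule; the probabilities below are invented for illustration:

```python
# Hedged sketch: exact inference in a minimal two-node Bayesian network,
# Rain -> WetGrass, with invented conditional probabilities.
p_rain = 0.2
p_wet_given = {True: 0.9, False: 0.1}   # P(wet | rain), P(wet | no rain)

# P(rain | wet) by Bayes' rule: P(wet | rain) * P(rain) / P(wet)
p_wet = (p_wet_given[True] * p_rain
         + p_wet_given[False] * (1 - p_rain))
p_rain_given_wet = p_wet_given[True] * p_rain / p_wet
print(round(p_rain_given_wet, 3))  # 0.692 — observing wet grass raises belief in rain
```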
Probabilistic algorithms can also be used for filtering, prediction, smoothing, and finding explanations for streams of data, thus helping perception systems analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).
=== Classifiers and statistical learning methods ===
The simplest AI applications can be divided into two types: classifiers (e.g., "if shiny then diamond"), on one hand, and controllers (e.g., "if diamond then pick up"), on the other hand. Classifiers are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning. Each pattern (also called an "observation") is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.
There are many kinds of classifiers in use. The decision tree is the simplest and most widely used symbolic machine learning algorithm. K-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, and Kernel methods such as the support vector machine (SVM) displaced k-nearest neighbor in the 1990s.
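The k-nearest neighbor idea fits in a few lines; the points and labels below are invented, and the distance function is squared Euclidean:

```python
from collections import Counter

# Minimal k-nearest-neighbor classifier (toy data invented for illustration).
def knn_classify(point, data, k=3):
    """Label a point by majority vote among its k closest training examples."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    nearest = sorted(data, key=lambda ex: dist(point, ex[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

data = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
        ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]
print(knn_classify((2, 2), data))  # A — its closest neighbors are all A
```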
The naive Bayes classifier is reportedly the "most widely used learner" at Google, due in part to its scalability.
Neural networks are also used as classifiers.
=== Artificial neural networks ===
An artificial neural network is based on a collection of nodes, also known as artificial neurons, which loosely model the neurons in a biological brain. It is trained to recognise patterns; once trained, it can recognise those patterns in fresh data. There is an input layer, at least one hidden layer of nodes, and an output layer. Each node applies a function to its weighted inputs, and if the result crosses the node's threshold, the signal is transmitted to the next layer. A network is typically called a deep neural network if it has at least two hidden layers.
Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm. Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. In theory, a neural network can learn any function.
In feedforward neural networks the signal passes in only one direction. Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events. Long short-term memory is the most successful network architecture for recurrent networks. Perceptrons use only a single layer of neurons; deep learning uses multiple layers. Convolutional neural networks strengthen the connection between neurons that are "close" to each other—this is especially important in image processing, where a local set of neurons must identify an "edge" before the network can identify an object.
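A single perceptron, the one-layer case mentioned above, can be trained on the logical AND function with the classic perceptron learning rule; this is a minimal illustrative sketch, not the backpropagation used for deep networks:

```python
# A single perceptron learning logical AND (illustrative sketch only).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1      # weights, bias, learning rate

def predict(x):
    # fire (output 1) only if the weighted sum crosses the threshold 0
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                  # epochs over the training set
    for x, target in data:
        error = target - predict(x)  # perceptron learning rule
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```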
=== Deep learning ===
Deep learning uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces.
Deep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, image classification, and others. The reason that deep learning performs so well in so many applications is not known as of 2021. The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s) but because of two factors: the incredible increase in computer power (including the hundred-fold increase in speed by switching to GPUs) and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet.
=== GPT ===
Generative pre-trained transformers (GPT) are large language models (LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pre-trained on a large corpus of text that can be from the Internet. The pretraining consists of predicting the next token (a token being usually a word, subword, or punctuation). Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique called reinforcement learning from human feedback (RLHF). Current GPT models are prone to generating falsehoods called "hallucinations". These can be reduced with RLHF and quality data, but the problem has been getting worse for reasoning systems. Such systems are used in chatbots, which allow people to ask a question or request a task in simple text.
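The generate-by-repeatedly-predicting-the-next-token loop can be illustrated with a toy bigram "language model"; the vocabulary and probabilities are invented, and real GPT models use transformer networks over subword tokens rather than a lookup table:

```python
# Illustrative only: next-token generation with a toy bigram model.
# Real GPT models learn these probabilities with transformer networks.
model = {                       # invented next-token probabilities
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
}

def generate(token, length):
    out = [token]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break
        # greedy decoding: always take the most probable next token
        out.append(max(candidates, key=candidates.get))
    return " ".join(out)

print(generate("the", 3))  # "the cat sat down"
```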
Current models and services include Gemini (formerly Bard), ChatGPT, Grok, Claude, Copilot, and LLaMA. Multimodal GPT models can process different types of data (modalities) such as images, videos, sound, and text.
=== Hardware and software ===
In the late 2010s, graphics processing units (GPUs), increasingly designed with AI-specific enhancements and used with specialized software such as TensorFlow, replaced previously dominant central processing units (CPUs) as the means of training large-scale (commercial and academic) machine learning models. Specialized programming languages such as Prolog were used in early AI research, but general-purpose programming languages like Python have become predominant.
The transistor density in integrated circuits has been observed to roughly double every 18 months—a trend known as Moore's law, named after the Intel co-founder Gordon Moore, who first identified it. Improvements in GPUs have been even faster, a trend sometimes called Huang's law, named after Nvidia co-founder and CEO Jensen Huang.
== Applications ==
AI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), targeted online advertising (AdSense, Facebook), recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's FaceID, Microsoft's DeepFace and Google's FaceNet) and image labeling (used by Facebook, Apple's Photos and TikTok). The deployment of AI may be overseen by a chief automation officer (CAO).
=== Health and medicine ===
The application of AI in medicine and medical research has the potential to improve patient care and quality of life. Through the lens of the Hippocratic Oath, medical professionals are ethically compelled to use AI, if applications can more accurately diagnose and treat patients.
For medical research, AI is an important tool for processing and integrating big data. This is particularly important for organoid and tissue engineering development which use microscopy imaging as a key technique in fabrication. It has been suggested that AI can overcome discrepancies in funding allocated to different fields of research. New AI tools can deepen the understanding of biomedically relevant pathways. For example, AlphaFold 2 (2021) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein. In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria. In 2024, researchers used machine learning to accelerate the search for Parkinson's disease drug treatments. Their aim was to identify compounds that block the clumping, or aggregation, of alpha-synuclein (the protein that characterises Parkinson's disease). They were able to speed up the initial screening process ten-fold and reduce the cost by a thousand-fold.
=== Games ===
Game playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997. In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. Then, in 2017, it defeated Ke Jie, who was the best Go player in the world. Other programs handle imperfect-information games, such as the poker-playing program Pluribus. DeepMind developed increasingly general reinforcement learning models, such as MuZero, which could be trained to play chess, Go, or Atari games. In 2019, DeepMind's AlphaStar achieved grandmaster level in StarCraft II, a particularly challenging real-time strategy game that involves incomplete knowledge of what happens on the map. In 2021, an AI agent competed in a PlayStation Gran Turismo competition, winning against four of the world's best Gran Turismo drivers using deep reinforcement learning. In 2024, Google DeepMind introduced SIMA, a type of AI capable of autonomously playing nine previously unseen open-world video games by observing screen output, as well as executing short, specific tasks in response to natural language instructions.
=== Mathematics ===
Large language models, such as GPT-4, Gemini, Claude, LLaMa or Mistral, are increasingly used in mathematics. These probabilistic models are versatile, but can also produce wrong answers in the form of hallucinations. They sometimes need a large database of mathematical problems to learn from, but also methods such as supervised fine-tuning or trained classifiers with human-annotated data to improve answers for new problems and learn from corrections. A February 2024 study showed that the performance of some language models for reasoning capabilities in solving math problems not included in their training data was low, even for problems with only minor deviations from trained data. One technique to improve their performance involves training the models to produce correct reasoning steps, rather than just the correct result. The Alibaba Group developed a version of its Qwen models called Qwen2-Math, that achieved state-of-the-art performance on several mathematical benchmarks, including 84% accuracy on the MATH dataset of competition mathematics problems. In January 2025, Microsoft proposed the technique rStar-Math that leverages Monte Carlo tree search and step-by-step reasoning, enabling a relatively small language model like Qwen-7B to solve 53% of the AIME 2024 and 90% of the MATH benchmark problems.
Alternatively, dedicated models for mathematical problem solving with higher precision for the outcome including proof of theorems have been developed such as AlphaTensor, AlphaGeometry and AlphaProof all from Google DeepMind, Llemma from EleutherAI or Julius.
When natural language is used to describe mathematical problems, converters can transform such prompts into a formal language such as Lean to define mathematical tasks.
Some models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics.
Topological deep learning integrates various topological approaches.
=== Finance ===
Finance is one of the fastest growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated "robot advisers" have been in use for some years.
According to Nicolas Firzli, director of the World Pensions & Investments Forum, it may be too early to see the emergence of highly innovative AI-informed financial products and services. He argues that "the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I'm not sure it will unleash a new wave of [e.g., sophisticated] pension innovation."
=== Military ===
Various countries are deploying AI military applications. The main applications enhance command and control, communications, sensors, integration and interoperability. Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles. AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, and coordination and deconfliction of distributed Joint Fires between networked combat vehicles, both human-operated and autonomous.
AI has been used in military operations in Iraq, Syria, Israel and Ukraine.
=== Generative AI ===
=== Agents ===
Artificial intelligence (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks.
=== Sexuality ===
Applications of AI in this domain include AI-enabled menstruation and fertility trackers that analyze user data to offer prediction, AI-integrated sex toys (e.g., teledildonics), AI-generated sexual education content, and AI agents that simulate sexual and romantic partners (e.g., Replika). AI is also used for the production of non-consensual deepfake pornography, raising significant ethical and legal concerns.
AI technologies have also been used to attempt to identify online gender-based violence and online sexual grooming of minors.
=== Other industry-specific tasks ===
There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated "AI" in some offerings or processes. A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.
AI applications for evacuation and disaster management are growing. AI has been used to investigate if and how people evacuated in large-scale and small-scale evacuations, using historical data from GPS, videos or social media. Further, AI can provide real-time information on evacuation conditions.
In agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.
Artificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights." For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.
During the 2024 Indian elections, US$50 million was spent on authorized AI-generated content, notably by creating deepfakes of allied (including sometimes deceased) politicians to better engage with voters, and by translating speeches to various local languages.
== Ethics ==
AI has potential benefits and potential risks. AI may be able to advance science and find solutions for serious problems: Demis Hassabis of DeepMind hopes to "solve intelligence, and then use that to solve everything else". However, as the use of AI has become widespread, several unintended consequences and risks have been identified. In-production systems sometimes fail to factor ethics and bias into their AI training processes, especially when the AI algorithms are inherently unexplainable in deep learning.
=== Risks and harm ===
==== Privacy and copyright ====
Machine learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns about privacy, surveillance and copyright.
AI-powered devices and services, such as virtual assistants and IoT products, continuously collect personal information, raising concerns about intrusive data gathering and unauthorized access by third parties. The loss of privacy is further exacerbated by AI's ability to process and combine vast amounts of data, potentially leading to a surveillance society where individual activities are constantly monitored and analyzed without adequate safeguards or transparency.
Sensitive user data collected may include online activity records, geolocation data, video, or audio. For example, in order to build speech recognition algorithms, Amazon has recorded millions of private conversations and allowed temporary workers to listen to and transcribe some of them. Opinions about this widespread surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy.
AI developers argue that this is the only way to deliver valuable applications and have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification and differential privacy. Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness. Brian Christian wrote that experts have pivoted "from the question of 'what they know' to the question of 'what they're doing with it'."
Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under the rationale of "fair use". Experts disagree about how well and under what circumstances this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work". Website owners who do not wish to have their content scraped can indicate it in a "robots.txt" file. In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI. Another discussed approach is to envision a separate sui generis system of protection for creations generated by AI to ensure fair attribution and compensation for human authors.
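The "robots.txt" opt-out works through the long-established Robots Exclusion Protocol. The following sketch, using Python's standard-library parser, shows how a well-behaved crawler checks a site's rules before fetching (GPTBot and CCBot are real crawler user agents, but the file contents and URLs below are illustrative):

```python
from urllib import robotparser

# A hypothetical robots.txt in which a site owner opts out of AI-training
# crawlers while still allowing all other crawlers.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler consults the parsed rules before each fetch:
print(rp.can_fetch("GPTBot", "https://example.com/article"))     # False
print(rp.can_fetch("Googlebot", "https://example.com/article"))  # True
```

Note that the protocol is purely advisory: it expresses the owner's wishes but does not technically prevent a non-compliant scraper from fetching the pages anyway.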
==== Dominance by tech giants ====
The commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft. Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.
==== Power needs and environmental impacts ====
In January 2024, the International Energy Agency (IEA) released Electricity 2024, Analysis and Forecast to 2026, forecasting electric power use. This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, adding electric power usage roughly equal to the electricity used by the whole of Japan.
Prodigious power consumption by AI is responsible for the growth of fossil fuel use, and might delay closures of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search uses about 10 times as much electrical energy as a Google search. The large firms are in haste to find power sources – from nuclear energy to geothermal to fusion. The tech firms argue that – in the long view – AI will eventually be kinder to the environment, but they need the energy now. AI makes the power grid more efficient and "intelligent", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms.
A 2024 Goldman Sachs Research Paper, AI Data Centers and the Coming US Power Demand Surge, found "US power demand (is) likely to experience growth not seen in a generation...." and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means. Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.
In 2024, the Wall Street Journal reported that big AI companies have begun negotiations with US nuclear power providers to provide electricity to the data centers. In March 2024 Amazon purchased a Pennsylvania nuclear-powered data center for $650 million (US). Nvidia CEO Jen-Hsun Huang said nuclear power is a good option for the data centers.
In September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant to provide Microsoft with all of the electric power produced by the plant for 20 years. Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes which will include extensive safety scrutiny from the US Nuclear Regulatory Commission. If approved (this will be the first ever US re-commissioning of a nuclear plant), the plant will produce over 835 megawatts of power – enough for 800,000 homes. The cost for re-opening and upgrading is estimated at $1.6 billion (US) and is dependent on tax breaks for nuclear power contained in the 2022 US Inflation Reduction Act. The US government and the state of Michigan are investing almost $2 billion (US) to reopen the Palisades Nuclear reactor on Lake Michigan. Closed since 2022, the plant is planned to be reopened in October 2025. The Three Mile Island facility will be renamed the Crane Clean Energy Center after Chris Crane, a nuclear proponent and former CEO of Exelon who was responsible for Exelon's spinoff of Constellation.
Due to power supply shortages, Taiwan suspended in 2024 the approval of data centers north of Taoyuan with a capacity of more than 5 MW, having granted its last such approval in September 2023. Taiwan aims to phase out nuclear power by 2025. Singapore, on the other hand, banned the opening of new data centers in 2019 due to electric power constraints, but lifted the ban in 2022.
Although most nuclear plants in Japan have been shut down since the 2011 Fukushima nuclear accident, according to an October 2024 Bloomberg article in Japanese, the cloud gaming services company Ubitus, in which Nvidia has a stake, is looking for land in Japan near a nuclear power plant for a new data center for generative AI. Ubitus CEO Wesley Kuo said nuclear power plants are the most efficient, cheap and stable source of power for AI.
On 1 November 2024, the Federal Energy Regulatory Commission (FERC) rejected an application submitted by Talen Energy for approval to supply some electricity from the nuclear power station Susquehanna to Amazon's data center.
According to Commission Chairman Willie L. Phillips, the arrangement would burden the electricity grid and raise significant cost-shifting concerns for households and other business sectors.
In 2025, a report prepared by the International Energy Agency estimated the greenhouse gas emissions from the energy consumption of AI at 180 million tonnes. By 2035, these emissions could rise to 300–500 million tonnes, depending on what measures are taken. This is below 1.5% of energy sector emissions. The emissions reduction potential of AI was estimated at 5% of energy sector emissions, but rebound effects (for example, if people shift from public transport to autonomous cars) could reduce it.
==== Misinformation ====
YouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it. Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation. This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government. The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took some steps to mitigate the problem.
In 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda. One such potential malicious use is deepfakes for computational propaganda. AI pioneer Geoffrey Hinton expressed concern about AI enabling "authoritarian leaders to manipulate their electorates" on a large scale, among other risks.
AI researchers at Microsoft, OpenAI, universities and other organisations have suggested using "personhood credentials" as a way to overcome online deception enabled by AI models.
==== Algorithmic bias and fairness ====
Machine learning applications will be biased if they learn from biased data. The developers may not be aware that the bias exists. Bias can be introduced by the way training data is selected and by the way a model is deployed. If a biased algorithm is used to make decisions that can seriously harm people (as it can in medicine, finance, recruitment, housing or policing) then the algorithm may cause discrimination. The field of fairness studies how to prevent harms from algorithmic biases.
On June 28, 2015, Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people, a problem called "sample size disparity". Google "fixed" this problem by preventing the system from labelling anything as a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon.
COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was equal, at exactly 61%, the kinds of errors differed—the system consistently overestimated the chance that a black person would re-offend and underestimated the chance that a white person would re-offend. In 2017, several researchers showed that it was mathematically impossible for COMPAS to satisfy all possible measures of fairness simultaneously when the base rates of re-offense were different for whites and blacks in the data.
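The impossibility result can be seen with small illustrative numbers (hypothetical, not the actual COMPAS data): if a risk flag has the same precision in two groups, but the groups have different base rates of re-offense, the false-positive rates necessarily come apart.

```python
def rates(n, base_rate, flagged, ppv):
    # Confusion matrix for a group of n people in which `flagged` people
    # receive a "high risk" label, and that label has precision `ppv`.
    reoffend = n * base_rate
    tp = flagged * ppv        # flagged and did re-offend
    fp = flagged * (1 - ppv)  # flagged but did not re-offend
    fn = reoffend - tp        # re-offended but not flagged
    tn = n - reoffend - fp    # correctly left unflagged
    fpr = fp / (fp + tn)      # non-re-offenders wrongly flagged
    fnr = fn / (fn + tp)      # re-offenders wrongly cleared
    return round(fpr, 3), round(fnr, 3)

# Both groups get a flag with identical precision (a form of calibration),
# but their base rates of re-offense differ, so the false-positive rates
# cannot also be equal.
print(rates(n=1000, base_rate=0.5, flagged=500, ppv=0.6))  # (0.4, 0.4)
print(rates(n=1000, base_rate=0.3, flagged=300, ppv=0.6))  # (0.171, 0.4)
```

In this toy example the flag is right 60% of the time in both groups, yet members of the higher-base-rate group who will not re-offend are more than twice as likely to be wrongly flagged, which is the structure of the trade-off the 2017 researchers proved to be unavoidable.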
A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender"). The feature will correlate with other features (like "address", "shopping history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender". Moritz Hardt said "the most robust fact in this research area is that fairness through blindness doesn't work."
Criticism of COMPAS highlighted that machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future. If an application then uses these predictions as recommendations, some of these "recommendations" will likely be racist. Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will be better than the past. It is descriptive rather than prescriptive.
Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women.
There are various conflicting definitions and mathematical models of fairness. These notions depend on ethical assumptions, and are influenced by beliefs about society. One broad category is distributive fairness, which focuses on the outcomes, often identifying groups and seeking to compensate for statistical disparities. Representational fairness tries to ensure that AI systems do not reinforce negative stereotypes or render certain groups invisible. Procedural fairness focuses on the decision process rather than the outcome. The most relevant notions of fairness may depend on the context, notably the type of AI application and the stakeholders. The subjectivity in the notions of bias and fairness makes it difficult for companies to operationalize them. Having access to sensitive attributes such as race or gender is also considered by many AI ethicists to be necessary in order to compensate for biases, but it may conflict with anti-discrimination laws.
At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), the Association for Computing Machinery, in Seoul, South Korea, presented and published findings that recommend that until AI and robotics systems are demonstrated to be free of bias mistakes, they are unsafe, and the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.
==== Lack of transparency ====
Many AI systems are so complex that their designers cannot explain how they reach their decisions. This is particularly true of deep neural networks, in which there are a large number of non-linear relationships between inputs and outputs; however, some popular explainability techniques exist.
It is impossible to be certain that a program is operating correctly if no one knows how exactly it works. There have been many cases where a machine learning program passed rigorous tests, but nevertheless learned something different than what the programmers intended. For example, a system that could identify skin diseases better than medical professionals was found to actually have a strong tendency to classify images with a ruler as "cancerous", because pictures of malignancies typically include a ruler to show the scale. Another machine learning system designed to help effectively allocate medical resources was found to classify patients with asthma as being at "low risk" of dying from pneumonia. Having asthma is actually a severe risk factor, but since the patients having asthma would usually get much more medical care, they were relatively unlikely to die according to the training data. The correlation between asthma and low risk of dying from pneumonia was real, but misleading.
People who have been harmed by an algorithm's decision have a right to an explanation. Doctors, for example, are expected to clearly and completely explain to their colleagues the reasoning behind any decision they make. Early drafts of the European Union's General Data Protection Regulation in 2016 included an explicit statement that this right exists. Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used.
DARPA established the XAI ("Explainable Artificial Intelligence") program in 2014 to try to solve these problems.
Several approaches aim to address the transparency problem. SHAP makes it possible to visualise the contribution of each feature to the output. LIME can locally approximate a model's outputs with a simpler, interpretable model. Multitask learning provides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned. Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network for computer vision have learned, and produce output that can suggest what the network is learning. For generative pre-trained transformers, Anthropic developed a technique based on dictionary learning that associates patterns of neuron activations with human-understandable concepts.
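The general idea behind model-agnostic attribution methods such as SHAP and LIME can be sketched with a simpler relative, permutation importance (the model and data below are toy examples, assumed for illustration): shuffle one feature's values and measure how much the model's accuracy drops, which suggests how strongly the model relies on that feature.

```python
import random

def model(x):
    # Toy "black box": predicts 1 when feature 0 exceeds feature 1,
    # ignoring feature 2 entirely.
    return 1 if x[0] > x[1] else 0

def permutation_importance(predict, X, y, feature, trials=20, seed=0):
    # Mean drop in accuracy when one feature's column is shuffled.
    rng = random.Random(seed)
    baseline = sum(predict(x) == t for x, t in zip(X, y)) / len(y)
    drops = []
    for _ in range(trials):
        col = [x[feature] for x in X]
        rng.shuffle(col)
        X_perm = [list(x) for x in X]
        for row, v in zip(X_perm, col):
            row[feature] = v
        acc = sum(predict(x) == t for x, t in zip(X_perm, y)) / len(y)
        drops.append(baseline - acc)
    return sum(drops) / trials

data_rng = random.Random(1)
X = [[data_rng.random() for _ in range(3)] for _ in range(200)]
y = [1 if x[0] > x[1] else 0 for x in X]  # labels ignore feature 2

# Features 0 and 1 show large accuracy drops; feature 2 shows none.
for f in range(3):
    print(f, round(permutation_importance(model, X, y, f), 3))
```

Unlike SHAP or LIME, this technique attributes importance globally rather than per-prediction, but it conveys the same underlying strategy: probe an opaque model from the outside by perturbing its inputs.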
==== Bad actors and weaponized AI ====
Artificial intelligence provides a number of tools that are useful to bad actors, such as authoritarian governments, terrorists, criminals or rogue states.
A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision. Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. Even when used in conventional warfare, they currently cannot reliably choose targets and could potentially kill an innocent person. In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons; however, the United States and others disagreed. By 2015, over fifty countries were reported to be researching battlefield robots.
AI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating on this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. All these technologies have been available since 2020 or earlier—AI facial recognition systems are already being used for mass surveillance in China.
There are many other ways in which AI is expected to help bad actors, some of which cannot be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.
==== Technological unemployment ====
Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.
In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classified only 9% of U.S. jobs as "high risk". The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies. In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.
Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously". Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.
From the early days of the development of artificial intelligence, there have been arguments, for example, those put forward by Joseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement.
==== Existential risk ====
It has been argued that AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, "spell the end of the human race". This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character. These sci-fi scenarios are misleading in several ways.
First, AI does not require human-like sentience to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager). Stuart Russell gives the example of a household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead." In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is "fundamentally on our side".
Second, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are built on language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.
The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI. Personalities such as Stephen Hawking, Bill Gates, and Elon Musk, as well as AI pioneers such as Yoshua Bengio, Stuart Russell, Demis Hassabis, and Sam Altman, have expressed concerns about existential risk from AI.
In May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to "freely speak out about the risks of AI" without "considering how this impacts Google". He notably mentioned risks of an AI takeover, and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI.
In 2023, many leading AI experts endorsed the joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".
Some other researchers were more optimistic. AI pioneer Jürgen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making "human lives longer and healthier and easier." While the tools that are now being used to improve lives can also be used by bad actors, "they can also be used against the bad actors." Andrew Ng also argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests." Yann LeCun "scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction." In the early 2010s, experts argued that the risks are too distant in the future to warrant research or that humans will be valuable from the perspective of a superintelligent machine. However, after 2016, the study of current and future risks and possible solutions became a serious area of research.
=== Ethical machines and alignment ===
Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk.
Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas.
The field of machine ethics is also called computational morality, and was founded at an AAAI symposium in 2005.
Other approaches include Wendell Wallach's "artificial moral agents" and Stuart J. Russell's three principles for developing provably beneficial machines.
=== Open source ===
Active organizations in the AI open-source community include Hugging Face, Google, EleutherAI and Meta. Various AI models, such as Llama 2, Mistral or Stable Diffusion, have been made open-weight, meaning that their architecture and trained parameters (the "weights") are publicly available. Open-weight models can be freely fine-tuned, which allows companies to specialize them with their own data and for their own use-case. Open-weight models are useful for research and innovation but can also be misused. Since they can be fine-tuned, any built-in security measure, such as objecting to harmful requests, can be trained away until it becomes ineffective. Some researchers warn that future AI models may develop dangerous capabilities (such as the potential to drastically facilitate bioterrorism) and that once released on the Internet, they cannot be deleted everywhere if needed. They recommend pre-release audits and cost-benefit analyses.
=== Frameworks ===
Artificial Intelligence projects can be guided by ethical considerations during the design, development, and implementation of an AI system. An AI framework such as the Care and Act Framework, developed by the Alan Turing Institute and based on the SUM values, outlines four main ethical dimensions, defined as follows:
Respect the dignity of individual people
Connect with other people sincerely, openly, and inclusively
Care for the wellbeing of everyone
Protect social values, justice, and the public interest
Other developments in ethical frameworks include those decided upon during the Asilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others; however, these principles are not without criticism, especially with regard to the people chosen to contribute to these frameworks.
Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.
In 2024, the UK AI Safety Institute released 'Inspect', a testing toolset for AI safety evaluations available under an MIT open-source licence; it is freely available on GitHub and can be extended with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities.
=== Regulation ===
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. According to the AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone. Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI. Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia. The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, government officials and academics. In 2024, the Council of Europe created the first international legally binding treaty on AI, called the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law". It was adopted by the European Union, the United States, the United Kingdom, and other signatories.
In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks". A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".
In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks. 28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence. In May 2024 at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI.
== History ==
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable form of mathematical reasoning. This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an "electronic brain". They developed several areas of research that would become part of AI, such as McCulloch and Pitts's design for "artificial neurons" in 1943, and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that "machine intelligence" was plausible.
The field of AI research was founded at a workshop at Dartmouth College in 1956. The attendees became the leaders of AI research in the 1960s. They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English. Artificial intelligence laboratories were set up at a number of British and U.S. universities in the latter 1950s and early 1960s.
Researchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine with general intelligence and considered this the goal of their field. In 1965 Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do". In 1967 Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved". They had, however, underestimated the difficulty of the problem. In 1974, both the U.S. and British governments cut off funding for exploratory research in response to the criticism of Sir James Lighthill and ongoing pressure from the U.S. Congress to fund more productive projects. Minsky and Papert's book Perceptrons was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether. The "AI winter", a period when obtaining funding for AI projects was difficult, followed.
In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.
Up to this point, most of AI's funding had gone to projects that used high-level symbols to represent mental objects like plans, goals, beliefs, and known facts. In the 1980s, some researchers began to doubt that this approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition, and began to look into "sub-symbolic" approaches. Rodney Brooks rejected "representation" in general and focussed directly on engineering machines that move and survive. Judea Pearl, Lotfi Zadeh, and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic. But the most important development was the revival of "connectionism", including neural network research, by Geoffrey Hinton and others. In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks.
AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics). By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence" (a tendency known as the AI effect).
However, several academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.
Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field; for many specific tasks, other methods were abandoned.
Deep learning's success was based on both hardware improvements (faster computers, graphics processing units, cloud computing) and access to large amounts of data (including curated datasets, such as ImageNet). Deep learning's success led to an enormous increase in interest and funding in AI. The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019.
In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The alignment problem became a serious field of academic study.
In the late 2010s and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2015, AlphaGo, developed by DeepMind, beat the world champion Go player. The program was taught only the game's rules and developed a strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text. ChatGPT, launched on November 30, 2022, became the fastest-growing consumer software application in history, gaining over 100 million users in two months. It marked what is widely regarded as AI's breakout year, bringing it into the public consciousness. These programs, and others, inspired an aggressive AI boom, where large companies began investing billions of dollars in AI research. According to AI Impacts, about $50 billion annually was invested in "AI" around 2022 in the U.S. alone and about 20% of the new U.S. Computer Science PhD graduates have specialized in "AI". About 800,000 "AI"-related U.S. job openings existed in 2022. According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies.
== Philosophy ==
Philosophical debates have historically sought to determine the nature of intelligence and how to make intelligent machines. Another major focus has been whether machines can be conscious, and the associated ethical implications. Many other topics in philosophy are relevant to AI, such as epistemology and free will. Rapid advancements have intensified public discussions on the philosophy and ethics of AI.
=== Defining artificial intelligence ===
Alan Turing wrote in 1950, "I propose to consider the question 'can machines think'?" He advised changing the question from whether a machine "thinks", to "whether or not it is possible for machinery to show intelligent behaviour". He devised the Turing test, which measures the ability of a machine to simulate human conversation. Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we cannot determine these things about other people but "it is usual to have a polite convention that everyone thinks."
Russell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure. However, they are critical that the test requires the machine to imitate humans. "Aeronautical engineering texts", they wrote, "do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'" AI founder John McCarthy agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence".
McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world". Another AI founder, Marvin Minsky, similarly describes it as "the ability to solve hard problems". The leading AI textbook defines it as the study of agents that perceive their environment and take actions that maximize their chances of achieving defined goals. These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the "intelligence" of the machine; no other philosophical discussion is required, and may not even be possible.
Another definition has been adopted by Google, a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence.
Some authors have suggested that in practice the definition of AI is vague and contested: there is disagreement as to whether classical algorithms should be categorised as AI, and many companies during the early 2020s AI boom used the term as a marketing buzzword, often even if they did "not actually use AI in a material way".
=== Evaluating approaches to AI ===
No established unifying theory or paradigm has guided AI research for most of its history. The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions (symbolic versus sub-symbolic, neat versus scruffy, soft versus hard, narrow versus general, discussed below) may have to be revisited by future generations of AI researchers.
==== Symbolic AI and its limits ====
Symbolic AI (or "GOFAI") simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. Such programs were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."
However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low level "instinctive" tasks were extremely difficult. Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge. Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him.
The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence, in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.
==== Neat vs. scruffy ====
"Neats" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s, but eventually was seen as irrelevant. Modern AI has elements of both.
==== Soft vs. hard computing ====
Finding a provably correct or optimal solution is intractable for many important problems. Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks.
==== Narrow vs. general AI ====
AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals. General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively.
=== Machine consciousness, sentience, and mind ===
There is no settled consensus in philosophy of mind on whether a machine can have a mind, consciousness and mental states in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on." However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.
==== Consciousness ====
David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness. The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human information processing is easy to explain, human subjective experience is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like.
==== Computationalism and functionalism ====
Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam.
Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." Searle challenges this claim with his Chinese room argument, which attempts to show that even a computer capable of perfectly simulating human behavior would not have a mind.
==== AI welfare and rights ====
It is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree. But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals. Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights. Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society.
In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities. Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part in society on their own.
Progress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited.
== Future ==
=== Superintelligence and the singularity ===
A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an "intelligence explosion" and Vernor Vinge called a "singularity".
However, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do.
=== Transhumanism ===
Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger.
Edward Fredkin argues that "artificial intelligence is the next step in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his 1998 book Darwin Among the Machines: The Evolution of Global Intelligence.
== In fiction ==
Thought-capable artificial beings have appeared as storytelling devices since antiquity, and have been a persistent theme in science fiction.
A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture.
Isaac Asimov introduced the Three Laws of Robotics in many stories, most notably with the "Multivac" super-intelligent computer. Asimov's laws are often brought up during lay discussions of machine ethics; while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.
Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.
== See also ==
Artificial consciousness – Field in cognitive science
Artificial intelligence and elections – Use and impact of AI on political elections
Artificial intelligence content detection – Software to detect AI-generated content
Behavior selection algorithm – Algorithm that selects actions for intelligent agents
Business process automation – Automation of business processes
Case-based reasoning – Process of solving new problems based on the solutions of similar past problems
Computational intelligence – Ability of a computer to learn a specific task from data or experimental observation
Digital immortality – Hypothetical concept of storing a personality in digital form
Emergent algorithm – Algorithm exhibiting emergent behavior
Female gendering of AI technologies – Gender biases in digital technology
Glossary of artificial intelligence – List of definitions of terms and concepts commonly used in the study of artificial intelligence
Intelligence amplification – Use of information technology to augment human intelligence
Intelligent agent – Software agent which acts autonomously
Intelligent automation – Software process that combines robotic process automation and artificial intelligence
Mind uploading – Hypothetical process of digitally emulating a brain
Organoid intelligence – Use of brain cells and brain organoids for intelligent computing
Robotic process automation – Form of business process automation technology
The Last Day – 1967 Welsh science fiction novel
Wetware computer – Computer composed of organic material
DARWIN EU – A European Union initiative coordinated by the European Medicines Agency (EMA) to generate and utilize real-world evidence (RWE) to support the evaluation and supervision of medicines across the EU.
== Explanatory notes ==
== References ==
=== AI textbooks ===
The two most widely used textbooks in 2023 (see the Open Syllabus):
Russell, Stuart J.; Norvig, Peter (2021). Artificial Intelligence: A Modern Approach (4th ed.). Hoboken: Pearson. ISBN 978-0-1346-1099-3. LCCN 20190474.
Rich, Elaine; Knight, Kevin; Nair, Shivashankar B (2010). Artificial Intelligence (3rd ed.). New Delhi: Tata McGraw Hill India. ISBN 978-0-0700-8770-5.
The four most widely used AI textbooks in 2008:
Other textbooks:
Ertel, Wolfgang (2017). Introduction to Artificial Intelligence (2nd ed.). Springer. ISBN 978-3-3195-8486-7.
Ciaramella, Alberto; Ciaramella, Marco (2024). Introduction to Artificial Intelligence: from data analysis to generative AI (1st ed.). Intellisemantic Editions. ISBN 978-8-8947-8760-3.
=== History of AI ===
=== Other sources ===
== Further reading ==
== External links ==
"Artificial Intelligence". Internet Encyclopedia of Philosophy. | Wikipedia/artificial_intelligence |
Value of information (VOI or VoI) is the amount a decision maker would be willing to pay for information prior to making a decision.
== Similar terms ==
VoI is sometimes distinguished into value of perfect information, also called value of clairvoyance (VoC), and value of imperfect information. They are closely related to the widely known expected value of perfect information (EVPI) and expected value of sample information (EVSI). Note that VoI is not necessarily equal to "value of decision situation with perfect information" − "value of current decision situation" as commonly understood.
== Definitions ==
=== Simple ===
A simple example best illustrates the concept. Consider a decision situation with one decision, for example choosing a 'Vacation Activity', and one uncertainty, for example what the 'Weather Condition' will be. The 'Weather Condition', however, will only be known after the 'Vacation Activity' has been decided and begun.
The Value of perfect information on Weather Condition captures the value of being able to know Weather Condition even before making the Vacation Activity decision. It is quantified as the highest price the decision-maker is willing to pay for being able to know Weather Condition before making the Vacation Activity decision.
The Value of imperfect information on Weather Condition, however, captures the value of being able to know the outcome of another related uncertainty, e.g., Weather Forecast, instead of Weather Condition itself before making Vacation Activity decision. It is quantified as the highest price the decision-maker is willing to pay for being able to know Weather Forecast before making Vacation Activity decision. Note that it is essentially the value of perfect information on Weather Forecast.
=== Formal ===
The above definition illustrates that the value of imperfect information of any uncertainty can always be framed as the value of perfect information, i.e., VoC, of another uncertainty, hence only the term VoC will be used onwards.
==== Standard ====
Consider a general decision situation having n decisions (d1, d2, d3, ..., dn) and m uncertainties (u1, u2, u3, ..., um). The rationality assumption in standard individual decision-making states that what has been decided or learned is not forgotten, i.e., the decision-maker has perfect recall. This assumption translates into the existence of a linear ordering of these decisions and uncertainties such that:
di is made prior to making dj if and only if di comes before dj in the ordering
di is made prior to knowing uj if and only if di comes before uj in the ordering
di is made after knowing uj if and only if di comes after uj in the ordering
Consider cases where the decision-maker is enabled to know the outcome of some additional uncertainties earlier in his/her decision situation, i.e., some ui are moved to appear earlier in the ordering. In such cases, VoC is quantified as the highest price which the decision-maker is willing to pay for all those moves.
==== Generalized ====
This standard framework is further generalized in the team decision analysis framework, where there is typically incomplete sharing of information among team members in the same decision situation. In such cases, what has been decided or learned might not be known in later decisions belonging to different team members, i.e., there might not exist a linear ordering of decisions and uncertainties satisfying the perfect recall assumption. VoC thus captures the value of being able to know "not only additional uncertainties but also additional decisions already made by other team members" before making some other decisions in the team decision situation.
== Characteristics ==
There are four characteristics of VoI that always hold for any decision situation:
The value of information can never be less than zero since the decision-maker can always ignore the additional information and make a decision as if such information is not available.
No other information gathering/sharing activities can be more valuable than that quantified by value of clairvoyance.
Observing multiple new evidences yields the same gain in maximum expected utility regardless of the order of observation.
The VoI of observing two new evidence variables is not additive. Instead it is equivalent to observing one, incorporating it into our current evidence, then observing the other.
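The order-independence property above can be checked directly with Bayes' rule. The following sketch (not from the source; all probabilities are illustrative assumptions) updates a binary hypothesis on two conditionally independent pieces of evidence in both orders and confirms the posteriors agree:

```python
# Sketch: Bayes updates on a binary hypothesis H, showing that the
# posterior (and hence the gain in expected utility) does not depend
# on the order in which independent evidence arrives.
# All numbers are illustrative assumptions.

def update(prior, likelihood_h, likelihood_not_h):
    """One Bayes update: returns P(H | evidence)."""
    joint_h = prior * likelihood_h
    joint_not_h = (1 - prior) * likelihood_not_h
    return joint_h / (joint_h + joint_not_h)

prior = 0.3
# Two conditionally independent pieces of evidence: (P(e|H), P(e|not H)).
e1 = (0.9, 0.2)
e2 = (0.6, 0.5)

post_12 = update(update(prior, *e1), *e2)   # observe e1 then e2
post_21 = update(update(prior, *e2), *e1)   # observe e2 then e1

assert abs(post_12 - post_21) < 1e-12
```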
== Computation ==
VoC is derived strictly following its definition as the monetary amount that is big enough to just offset the additional benefit of getting more information. In other words, VoC is calculated iteratively until
"value of decision situation with perfect information while paying VoC" = "value of current decision situation".
A special case is when the decision-maker is risk neutral where VoC can be simply computed as
VoC = "value of decision situation with perfect information" - "value of current decision situation".
This special case is how expected value of perfect information and expected value of sample information are calculated where risk neutrality is implicitly assumed. For cases where the decision-maker is risk averse or risk seeking, this simple calculation does not necessarily yield the correct result, and iterative calculation is the only way to ensure correctness.
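As a hedged illustration of the risk-neutral special case, the following sketch computes VoC for the Vacation Activity / Weather Condition example from earlier; the payoffs and probabilities are invented purely for illustration:

```python
# Risk-neutral VoC (EVPI-style) for an assumed Vacation Activity /
# Weather Condition decision. Probabilities and payoffs are illustrative.

p = {"sun": 0.6, "rain": 0.4}          # P(Weather Condition)
payoff = {                              # utility of (activity, weather)
    "beach":  {"sun": 100, "rain": 10},
    "museum": {"sun": 40,  "rain": 60},
}

# Value of the current decision situation: choose before seeing the weather.
value_without = max(
    sum(p[w] * payoff[a][w] for w in p) for a in payoff
)

# Value with perfect information: choose after the weather is revealed.
value_with = sum(p[w] * max(payoff[a][w] for a in payoff) for w in p)

voc = value_with - value_without        # here: 84 - 64 = 20
```

For a risk-averse or risk-seeking decision-maker, this subtraction would be replaced by the iterative offsetting calculation described above.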
Decision trees and influence diagrams are most commonly used in representing and solving decision situations as well as associated VoC calculation. The influence diagram, in particular, is structured to accommodate team decision situations where incomplete sharing of information among team members can be represented and solved very efficiently. While decision trees are not designed to accommodate team decision situations, they can do so by augmenting them with information sets widely used in game trees.
== Examples ==
VoC is often illustrated using the example of paying for a consultant in a business transaction, who may either be perfect (expected value of perfect information) or imperfect (expected value of imperfect information).
In a typical consultant situation, the consultant would be paid up to cost c for their information, based on the expected cost E without the consultant and the revised cost F with the consultant's information. In a perfect information scenario, E can be defined as the sum product of the probability of a good outcome g times its cost k, plus the probability of a bad outcome (1-g) times its cost k'>k:
E = gk + (1-g)k',
which is revised to reflect expected cost F of perfect information including consulting cost c. The perfect information case assumes the bad outcome does not occur due to the perfect information consultant.
F = g(k+c)
We then solve for values of c for which F<E to determine when to pay the consultant.
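The break-even consulting fee can be computed directly from these two expressions: F < E reduces to c < (1 − g)k′/g. The numbers below (g, k, k′) are assumed purely for illustration:

```python
# Perfect-information consultant: when is the fee c worth paying?
# g, k, kp are assumed illustrative numbers (kp plays the role of k' > k).

g, k, kp = 0.8, 100.0, 300.0

E = g * k + (1 - g) * kp            # expected cost without the consultant

def F(c):
    """Expected cost with a perfect consultant paid fee c."""
    return g * (k + c)

# Solving F(c) < E for c gives the break-even fee c* = (1 - g) * kp / g.
c_star = (1 - g) * kp / g

assert abs(F(c_star) - E) < 1e-9    # at c*, the decision-maker is indifferent
assert F(0.9 * c_star) < E          # below c*, hiring is worthwhile
```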
In the case of a recursive decision tree, we often have an additional cost m that results from correcting the error, and the process restarts such that the expected cost will appear on both the left and right sides of our equations. This is typical of hiring-rehiring decisions or value chain decisions for which assembly line components must be replaced if erroneously ordered or installed:
E = gk + (1-g)(k'+m+E)
F = g(k+c)
If the consultant is imperfect with frequency f, then the consultant cost is solved with the probability of error included:
F = g(k+c)(1-f) + g(k+c+F)f + (1-g)(1-f)(k+c+F) + (1-g)f(k'+c+m+F)
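Because E and F appear on both sides of their equations, a simple fixed-point iteration recovers them; convergence is guaranteed here because the coefficient multiplying the recursive term is less than one. All parameter values below are assumed for illustration:

```python
# Fixed-point solution of the recursive expected-cost equations above.
# g, k, kp (= k'), m, c, f are illustrative assumptions; f is the
# consultant's error frequency, m the cost of correcting an error.

g, k, kp, m, c, f = 0.8, 100.0, 300.0, 50.0, 20.0, 0.1

def solve(update, x0=0.0, iters=10_000):
    """Iterate x <- update(x); converges when the recursion contracts."""
    x = x0
    for _ in range(iters):
        x = update(x)
    return x

# E = gk + (1-g)(k' + m + E)
E = solve(lambda E: g * k + (1 - g) * (kp + m + E))

# F = g(k+c)(1-f) + g(k+c+F)f + (1-g)(1-f)(k+c+F) + (1-g)f(k'+c+m+F)
F = solve(lambda F: g * (k + c) * (1 - f)
                    + g * (k + c + F) * f
                    + (1 - g) * (1 - f) * (k + c + F)
                    + (1 - g) * f * (kp + c + m + F))
```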
VoI is also used in inspection and maintenance planning for engineered structures. In the context of integrity management, researchers have analyzed to what extent the value of the information collected during the service life of a structure (for example, from inspections) is affected not only by random measurement errors but also by biases (systematic errors), taking the dependency between successive data collections into account.
== See also ==
Decision analysis
Decision tree
Expected value of perfect information (EVPI)
Expected value of including uncertainty (EVIU)
Expected value of sample information
Value of structural health information
Influence diagram
Value of control
Information theory
== Bibliography == | Wikipedia/Information_value_theory |
In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). These methods involve using linear classifiers to solve nonlinear problems. The general task of pattern analysis is to find and study general types of relations (for example clusters, rankings, principal components, correlations, classifications) in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature vector representations via a user-specified feature map; in contrast, kernel methods require only a user-specified kernel, i.e., a similarity function over all pairs of data points computed using inner products. Although the feature map may be infinite-dimensional, by the representer theorem the computation requires only a finite-dimensional matrix built from the kernel evaluated on the data. Kernel machines are slow to compute for datasets larger than a couple of thousand examples without parallel processing.
Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the "kernel trick". Kernel functions have been introduced for sequence data, graphs, text, images, as well as vectors.
Algorithms capable of operating with kernels include the kernel perceptron, support-vector machines (SVM), Gaussian processes, principal components analysis (PCA), canonical correlation analysis, ridge regression, spectral clustering, linear adaptive filters and many others.
Most kernel algorithms are based on convex optimization or eigenproblems and are statistically well-founded. Typically, their statistical properties are analyzed using statistical learning theory (for example, using Rademacher complexity).
== Motivation and informal explanation ==
Kernel methods can be thought of as instance-based learners: rather than learning some fixed set of parameters corresponding to the features of their inputs, they instead "remember" the
i
{\displaystyle i}
-th training example
(
x
i
,
y
i
)
{\displaystyle (\mathbf {x} _{i},y_{i})}
and learn for it a corresponding weight
w
i
{\displaystyle w_{i}}
. Prediction for unlabeled inputs, i.e., those not in the training set, is treated by the application of a similarity function
k
{\displaystyle k}
, called a kernel, between the unlabeled input
x
′
{\displaystyle \mathbf {x'} }
and each of the training inputs
x
i
{\displaystyle \mathbf {x} _{i}}
. For instance, a kernelized binary classifier typically computes a weighted sum of similarities
y
^
=
sgn
∑
i
=
1
n
w
i
y
i
k
(
x
i
,
x
′
)
,
{\displaystyle {\hat {y}}=\operatorname {sgn} \sum _{i=1}^{n}w_{i}y_{i}k(\mathbf {x} _{i},\mathbf {x'} ),}
where
{\displaystyle {\hat {y}}\in \{-1,+1\}} is the kernelized binary classifier's predicted label for the unlabeled input {\displaystyle \mathbf {x'} } whose hidden true label {\displaystyle y} is of interest;
{\displaystyle k\colon {\mathcal {X}}\times {\mathcal {X}}\to \mathbb {R} } is the kernel function that measures similarity between any pair of inputs {\displaystyle \mathbf {x} ,\mathbf {x'} \in {\mathcal {X}}};
the sum ranges over the {\displaystyle n} labeled examples {\displaystyle \{(\mathbf {x} _{i},y_{i})\}_{i=1}^{n}} in the classifier's training set, with {\displaystyle y_{i}\in \{-1,+1\}};
the {\displaystyle w_{i}\in \mathbb {R} } are the weights for the training examples, as determined by the learning algorithm;
the sign function {\displaystyle \operatorname {sgn} } determines whether the predicted classification {\displaystyle {\hat {y}}} comes out positive or negative.
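The decision rule above can be sketched in a few lines of NumPy. The RBF kernel, the toy one-dimensional training set, and the uniform weights below are illustrative assumptions; in practice the weights {\displaystyle w_{i}} are determined by a learning algorithm such as the SVM or the kernel perceptron.

```python
import numpy as np

def rbf_kernel(x, xp, gamma=1.0):
    # k(x, x') = exp(-gamma * ||x - x'||^2), a common Mercer kernel
    return np.exp(-gamma * np.sum((x - xp) ** 2))

def predict(x_new, X_train, y_train, weights, kernel=rbf_kernel):
    # y_hat = sgn( sum_i w_i * y_i * k(x_i, x') )
    s = sum(w * y * kernel(x_i, x_new)
            for w, y, x_i in zip(weights, y_train, X_train))
    return 1 if s >= 0 else -1

# Toy training set: two classes on the real line
X_train = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y_train = np.array([-1, -1, 1, 1])
weights = np.ones(4)  # uniform weights, for illustration only

y_hat = predict(np.array([1.5]), X_train, y_train, weights)  # predicts +1
```

Points near the positive training examples receive larger similarity contributions from them, so the weighted sum comes out positive.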
Kernel classifiers were described as early as the 1960s, with the invention of the kernel perceptron. They rose to great prominence with the popularity of the support-vector machine (SVM) in the 1990s, when the SVM was found to be competitive with neural networks on tasks such as handwriting recognition.
== Mathematics: the kernel trick ==
The kernel trick avoids the explicit mapping that is needed to get linear learning algorithms to learn a nonlinear function or decision boundary. For all {\displaystyle \mathbf {x} } and {\displaystyle \mathbf {x'} } in the input space {\displaystyle {\mathcal {X}}}, certain functions {\displaystyle k(\mathbf {x} ,\mathbf {x'} )} can be expressed as an inner product in another space {\displaystyle {\mathcal {V}}}. The function {\displaystyle k\colon {\mathcal {X}}\times {\mathcal {X}}\to \mathbb {R} } is often referred to as a kernel or a kernel function. The word "kernel" is used in mathematics to denote a weighting function for a weighted sum or integral.
Certain problems in machine learning have more structure than an arbitrary weighting function {\displaystyle k}. The computation is made much simpler if the kernel can be written in the form of a "feature map" {\displaystyle \varphi \colon {\mathcal {X}}\to {\mathcal {V}}} which satisfies
{\displaystyle k(\mathbf {x} ,\mathbf {x'} )=\langle \varphi (\mathbf {x} ),\varphi (\mathbf {x'} )\rangle _{\mathcal {V}}.}
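As a concrete check of this identity (a sketch; the homogeneous quadratic kernel and its feature map are a standard textbook example, not taken from the text above), the kernel k(x, x') = (x · x')² on R² corresponds to the explicit feature map φ(x) = (x₁², √2·x₁x₂, x₂²) into V = R³:

```python
import numpy as np

def k(x, xp):
    # homogeneous polynomial kernel of degree 2: k(x, x') = (x . x')^2
    return np.dot(x, xp) ** 2

def phi(x):
    # explicit feature map into V = R^3 such that k(x, x') = <phi(x), phi(x')>
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
xp = np.array([3.0, -1.0])

lhs = k(x, xp)                 # kernel evaluated directly in the input space
rhs = np.dot(phi(x), phi(xp))  # inner product computed in the feature space
```

Both sides agree, even though `k` never constructs the 3-dimensional feature vectors; that is exactly the computational shortcut the kernel trick exploits.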
The key restriction is that {\displaystyle \langle \cdot ,\cdot \rangle _{\mathcal {V}}} must be a proper inner product. On the other hand, an explicit representation for {\displaystyle \varphi } is not necessary, as long as {\displaystyle {\mathcal {V}}} is an inner product space. The alternative follows from Mercer's theorem: an implicitly defined function {\displaystyle \varphi } exists whenever the space {\displaystyle {\mathcal {X}}} can be equipped with a suitable measure ensuring the function {\displaystyle k} satisfies Mercer's condition.
Mercer's theorem is similar to a generalization of the result from linear algebra that associates an inner product to any positive-definite matrix. In fact, Mercer's condition can be reduced to this simpler case. If we choose as our measure the counting measure {\displaystyle \mu (T)=|T|} for all {\displaystyle T\subset X}, which counts the number of points inside the set {\displaystyle T}, then the integral in Mercer's theorem reduces to a summation
{\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}k(\mathbf {x} _{i},\mathbf {x} _{j})c_{i}c_{j}\geq 0.}
If this summation holds for all finite sequences of points {\displaystyle (\mathbf {x} _{1},\dotsc ,\mathbf {x} _{n})} in {\displaystyle {\mathcal {X}}} and all choices of {\displaystyle n} real-valued coefficients {\displaystyle (c_{1},\dots ,c_{n})} (cf. positive definite kernel), then the function {\displaystyle k} satisfies Mercer's condition.
Some algorithms that depend on arbitrary relationships in the native space {\displaystyle {\mathcal {X}}} would, in fact, have a linear interpretation in a different setting: the range space of {\displaystyle \varphi }. The linear interpretation gives us insight about the algorithm. Furthermore, there is often no need to compute {\displaystyle \varphi } directly during computation, as is the case with support-vector machines. Some cite this running time shortcut as the primary benefit. Researchers also use it to justify the meanings and properties of existing algorithms.
Theoretically, a Gram matrix {\displaystyle \mathbf {K} \in \mathbb {R} ^{n\times n}} with respect to {\displaystyle \{\mathbf {x} _{1},\dotsc ,\mathbf {x} _{n}\}} (sometimes also called a "kernel matrix"), where {\displaystyle K_{ij}=k(\mathbf {x} _{i},\mathbf {x} _{j})}, must be positive semi-definite (PSD). Empirically, for machine learning heuristics, choices of a function {\displaystyle k} that do not satisfy Mercer's condition may still perform reasonably if {\displaystyle k} at least approximates the intuitive idea of similarity. Regardless of whether {\displaystyle k} is a Mercer kernel, {\displaystyle k} may still be referred to as a "kernel".
If the kernel function {\displaystyle k} is also a covariance function as used in Gaussian processes, then the Gram matrix {\displaystyle \mathbf {K} } can also be called a covariance matrix.
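The PSD property of the Gram matrix can be verified numerically. This sketch (data and kernel parameter chosen for illustration) builds the Gram matrix of the Gaussian (RBF) kernel, a Mercer kernel, and checks that its eigenvalues are nonnegative:

```python
import numpy as np

def rbf(x, xp, gamma=0.5):
    # Gaussian (RBF) kernel, a standard Mercer kernel
    return np.exp(-gamma * np.sum((x - xp) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))  # 20 random points in R^3

# Gram ("kernel") matrix K_ij = k(x_i, x_j)
K = np.array([[rbf(xi, xj) for xj in X] for xi in X])

# Positive semi-definiteness: all eigenvalues >= 0 (up to rounding error)
eigvals = np.linalg.eigvalsh(K)
is_psd = bool(eigvals.min() >= -1e-10)
```

The matrix is also symmetric with unit diagonal, since k(x, x) = 1 for the RBF kernel.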
== Applications ==
Application areas of kernel methods are diverse and include geostatistics, kriging, inverse distance weighting, 3D reconstruction, bioinformatics, cheminformatics, information extraction and handwriting recognition.
== Popular kernels ==
Fisher kernel
Graph kernels
Kernel smoother
Polynomial kernel
Radial basis function kernel (RBF)
String kernels
Neural tangent kernel
Neural network Gaussian process (NNGP) kernel
== See also ==
Kernel methods for vector output
Kernel density estimation
Representer theorem
Similarity learning
Cover's theorem
== References ==
== Further reading ==
Shawe-Taylor, J.; Cristianini, N. (2004). Kernel Methods for Pattern Analysis. Cambridge University Press. ISBN 9780511809682.
Liu, W.; Principe, J.; Haykin, S. (2010). Kernel Adaptive Filtering: A Comprehensive Introduction. Wiley. ISBN 9781118211212.
Schölkopf, B.; Smola, A. J.; Bach, F. (2018). Learning with Kernels : Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press. ISBN 978-0-262-53657-8.
== External links ==
Kernel-Machines Org—community website
onlineprediction.net Kernel Methods Article | Wikipedia/Kernel_methods |
A convolutional neural network (CNN) is a type of feedforward neural network that learns features via filter (or kernel) optimization. This type of deep learning network has been applied to process and make predictions from many different types of data including text, images and audio. Convolution-based networks are the de facto standard in deep learning-based approaches to computer vision and image processing, and have only recently been replaced, in some cases, by newer deep learning architectures such as the transformer.
Vanishing and exploding gradients, seen during backpropagation in earlier neural networks, are mitigated by the regularization that comes from using shared weights over fewer connections. For example, each neuron in a fully connected layer would require 10,000 weights to process an image of 100 × 100 pixels, whereas a cascaded convolution (or cross-correlation) kernel needs only 25 shared weights per filter to process 5 × 5 tiles. Higher-layer features are extracted from wider context windows, compared to lower-layer features.
Some applications of CNNs include:
image and video recognition,
recommender systems,
image classification,
image segmentation,
medical image analysis,
natural language processing,
brain–computer interfaces, and
financial time series.
CNNs are also known as shift invariant or space invariant artificial neural networks, based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation-equivariant responses known as feature maps. Counter-intuitively, most convolutional neural networks are not invariant to translation, due to the downsampling operation they apply to the input.
Feedforward neural networks are usually fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. The "full connectivity" of these networks makes them prone to overfitting data. Typical ways of regularization, or preventing overfitting, include: penalizing parameters during training (such as weight decay) or trimming connectivity (skipped connections, dropout, etc.) Robust datasets also increase the probability that CNNs will learn the generalized principles that characterize a given dataset rather than the biases of a poorly-populated set.
Convolutional networks were inspired by biological processes in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field.
CNNs use relatively little pre-processing compared to other image classification algorithms. This means that the network learns to optimize the filters (or kernels) through automated learning, whereas in traditional algorithms these filters are hand-engineered. This simplifies and automates the process, enhancing efficiency and scalability by removing human-intervention bottlenecks.
== Architecture ==
A convolutional neural network consists of an input layer, hidden layers and an output layer. In a convolutional neural network, the hidden layers include one or more layers that perform convolutions. Typically this includes a layer that performs a dot product of the convolution kernel with the layer's input matrix. This product is usually the Frobenius inner product, and its activation function is commonly ReLU. As the convolution kernel slides along the input matrix for the layer, the convolution operation generates a feature map, which in turn contributes to the input of the next layer. This is followed by other layers such as pooling layers, fully connected layers, and normalization layers.
It is worth noting how closely a convolutional neural network resembles a matched filter.
=== Convolutional layers ===
In a CNN, the input is a tensor with shape:
(number of inputs) × (input height) × (input width) × (input channels)
After passing through a convolutional layer, the image becomes abstracted to a feature map, also called an activation map, with shape:
(number of inputs) × (feature map height) × (feature map width) × (feature map channels).
Convolutional layers convolve the input and pass its result to the next layer. This is similar to the response of a neuron in the visual cortex to a specific stimulus. Each convolutional neuron processes data only for its receptive field.
Although fully connected feedforward neural networks can be used to learn features and classify data, this architecture is generally impractical for larger inputs (e.g., high-resolution images), which would require massive numbers of neurons because each pixel is a relevant input feature. A fully connected layer for an image of size 100 × 100 has 10,000 weights for each neuron in the second layer. Convolution reduces the number of free parameters, allowing the network to be deeper. For example, using a 5 × 5 tiling region, each with the same shared weights, requires only 25 neurons. Using shared weights means there are many fewer parameters, which helps avoid the vanishing gradients and exploding gradients problems seen during backpropagation in earlier neural networks.
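The weight sharing described above can be illustrated with a minimal NumPy sketch (stride 1, "valid" padding, single channel; the loop-based implementation is for clarity only): a single 5 × 5 filter with 25 weights is reused at every spatial position of a 100 × 100 input, whereas a fully connected neuron over the same input would need 10,000 weights.

```python
import numpy as np

def conv2d_valid(image, kernel):
    # 2-D cross-correlation ("convolution" in CNN usage), stride 1, no padding;
    # the same kernel weights are reused at every output position
    H, W = image.shape
    k = kernel.shape[0]
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

image = np.random.default_rng(1).normal(size=(100, 100))
kernel = np.random.default_rng(2).normal(size=(5, 5))  # 25 shared weights

fmap = conv2d_valid(image, kernel)  # feature map of shape (96, 96)
```

Every one of the 96 × 96 output entries is computed from the same 25 parameters, which is the source of the parameter reduction relative to a fully connected layer.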
To speed processing, standard convolutional layers can be replaced by depthwise separable convolutional layers, which are based on a depthwise convolution followed by a pointwise convolution. The depthwise convolution is a spatial convolution applied independently over each channel of the input tensor, while the pointwise convolution is a standard convolution restricted to the use of {\displaystyle 1\times 1} kernels.
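The parameter saving of this factorization can be checked with simple arithmetic; the channel counts and kernel size below are illustrative, and bias terms are ignored:

```python
# Parameter-count comparison for a layer mapping C_in input channels
# to C_out output channels with a K x K kernel (biases omitted).
C_in, C_out, K = 64, 128, 3

standard = C_out * C_in * K * K   # one K x K x C_in filter per output channel
depthwise = C_in * K * K          # one K x K spatial filter per input channel
pointwise = C_out * C_in * 1 * 1  # 1 x 1 convolution mixing the channels
separable = depthwise + pointwise

reduction = standard / separable
```

For these hypothetical sizes the separable factorization uses roughly 8.4 times fewer parameters than the standard convolution.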
=== Pooling layers ===
Convolutional networks may include local and/or global pooling layers along with traditional convolutional layers. Pooling layers reduce the dimensions of data by combining the outputs of neuron clusters at one layer into a single neuron in the next layer. Local pooling combines small clusters; tiling sizes such as 2 × 2 are commonly used. Global pooling acts on all the neurons of the feature map. There are two common types of pooling in popular use: max and average. Max pooling uses the maximum value of each local cluster of neurons in the feature map, while average pooling takes the average value.
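Both pooling variants can be sketched in NumPy for a single-channel input (a toy illustration; real implementations also handle strides, padding, and batch/channel dimensions):

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    # Non-overlapping local pooling with a size x size window (stride = size)
    H, W = x.shape
    blocks = x.reshape(H // size, size, W // size, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [0., 0., 1., 1.],
              [0., 4., 1., 1.]])

max_pooled = pool2d(x, 2, "max")   # [[4., 8.], [4., 1.]]
avg_pooled = pool2d(x, 2, "mean")  # [[2.5, 6.5], [1., 1.]]
```

Each 2 × 2 block of the input collapses to a single value, halving both spatial dimensions.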
=== Fully connected layers ===
Fully connected layers connect every neuron in one layer to every neuron in another layer. It is the same as a traditional multilayer perceptron neural network (MLP). The flattened matrix goes through a fully connected layer to classify the images.
=== Receptive field ===
In neural networks, each neuron receives input from some number of locations in the previous layer. In a convolutional layer, each neuron receives input from only a restricted area of the previous layer called the neuron's receptive field. Typically the area is a square (e.g. 5 by 5 neurons). Whereas, in a fully connected layer, the receptive field is the entire previous layer. Thus, in each convolutional layer, each neuron takes input from a larger area in the input than previous layers. This is due to applying the convolution over and over, which takes the value of a pixel into account, as well as its surrounding pixels. When using dilated layers, the number of pixels in the receptive field remains constant, but the field is more sparsely populated as its dimensions grow when combining the effect of several layers.
To manipulate the receptive field size as desired, there are some alternatives to the standard convolutional layer. For example, atrous or dilated convolution expands the receptive field size without increasing the number of parameters by interleaving visible and blind regions. Moreover, a single dilated convolutional layer can comprise filters with multiple dilation ratios, thus having a variable receptive field size.
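The receptive-field growth described above follows a standard recurrence: each layer enlarges the receptive field by (kernel size − 1) × dilation × (product of the strides of all earlier layers). A small sketch, with illustrative layer settings:

```python
def receptive_field(layers):
    # layers: list of (kernel_size, stride, dilation) tuples, input-to-output order.
    # r grows by (k - 1) * d * jump, where jump is the product of earlier strides.
    r, jump = 1, 1
    for k, s, d in layers:
        r += (k - 1) * d * jump
        jump *= s
    return r

# Three 3x3 layers, stride 1, no dilation: receptive field 7
rf_plain = receptive_field([(3, 1, 1)] * 3)

# Same three layers with dilations 1, 2, 4: receptive field 15
rf_dilated = receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)])
```

Stacking three 3 × 3 layers with dilations 1, 2, 4 more than doubles the receptive field of three undilated layers while using the same number of parameters.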
=== Weights ===
Each neuron in a neural network computes an output value by applying a specific function to the input values received from the receptive field in the previous layer. The function that is applied to the input values is determined by a vector of weights and a bias (typically real numbers). Learning consists of iteratively adjusting these biases and weights.
The vectors of weights and biases are called filters and represent particular features of the input (e.g., a particular shape). A distinguishing feature of CNNs is that many neurons can share the same filter. This reduces the memory footprint because a single bias and a single vector of weights are used across all receptive fields that share that filter, as opposed to each receptive field having its own bias and vector weighting.
=== Deconvolutional ===
A deconvolutional neural network is essentially the reverse of a CNN. It consists of deconvolutional layers and unpooling layers.
A deconvolutional layer is the transpose of a convolutional layer. Specifically, a convolutional layer can be written as a multiplication with a matrix, and a deconvolutional layer is multiplication with the transpose of that matrix.
An unpooling layer expands the layer. The max-unpooling layer is the simplest, as it simply copies each entry multiple times. For example, a 2-by-2 max-unpooling layer is {\displaystyle [x]\mapsto {\begin{bmatrix}x&x\\x&x\end{bmatrix}}}.
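The 2-by-2 max-unpooling map above amounts to a Kronecker product with a block of ones; a minimal NumPy sketch (the switch positions recorded by the pooling stage, which real max-unpooling restores, are omitted here):

```python
import numpy as np

def max_unpool2d(x, size=2):
    # Simplest unpooling: copy each entry into a size x size block,
    # i.e. [x] -> [[x, x], [x, x]] for size = 2
    return np.kron(x, np.ones((size, size)))

x = np.array([[1., 2.],
              [3., 4.]])
up = max_unpool2d(x)  # shape (4, 4)
```

Each entry of the 2 × 2 input is duplicated into a 2 × 2 block, quadrupling the number of activations.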
Deconvolution layers are used in image generators. By default, they create periodic checkerboard artifacts, which can be avoided by an upscale-then-convolve approach.
== History ==
CNNs are often compared to the way the brain achieves vision processing in living organisms.
=== Receptive fields in the visual cortex ===
Work by Hubel and Wiesel in the 1950s and 1960s showed that cat visual cortices contain neurons that individually respond to small regions of the visual field. Provided the eyes are not moving, the region of visual space within which visual stimuli affect the firing of a single neuron is known as its receptive field. Neighboring cells have similar and overlapping receptive fields. Receptive field size and location varies systematically across the cortex to form a complete map of visual space. The cortex in each hemisphere represents the contralateral visual field.
Their 1968 paper identified two basic visual cell types in the brain:
simple cells, whose output is maximized by straight edges having particular orientations within their receptive field
complex cells, which have larger receptive fields, whose output is insensitive to the exact position of the edges in the field.
Hubel and Wiesel also proposed a cascading model of these two types of cells for use in pattern recognition tasks.
=== Fukushima's analog threshold elements in a vision model ===
In 1969, Kunihiko Fukushima introduced a multilayer visual feature detection network, inspired by the above-mentioned work of Hubel and Wiesel, in which "All the elements in one layer have the same set of interconnecting coefficients; the arrangement of the elements and their interconnections are all homogeneous over a given layer." This is the essential core of a convolutional network, but the weights were not trained. In the same paper, Fukushima also introduced the ReLU (rectified linear unit) activation function.
=== Neocognitron, origin of the trainable CNN architecture ===
The "neocognitron" was introduced by Fukushima in 1980. The neocognitron introduced the two basic types of layers:
"S-layer": a shared-weights receptive-field layer, later known as a convolutional layer, which contains units whose receptive fields cover a patch of the previous layer. A shared-weights receptive-field group (a "plane" in neocognitron terminology) is often called a filter, and a layer typically has several such filters.
"C-layer": a downsampling layer that contains units whose receptive fields cover patches of previous convolutional layers. Such a unit typically computes a weighted average of the activations of the units in its patch, applies inhibition (divisive normalization) pooled from a somewhat larger patch and across different filters in a layer, and applies a saturating activation function. The patch weights are nonnegative and are not trainable in the original neocognitron. The downsampling and competitive inhibition help to classify features and objects in visual scenes even when the objects are shifted.
Several supervised and unsupervised learning algorithms have been proposed over the decades to train the weights of a neocognitron. Today, however, the CNN architecture is usually trained through backpropagation.
Fukushima's ReLU activation function was not used in his neocognitron since all the weights were nonnegative; lateral inhibition was used instead. The rectifier has become a very popular activation function for CNNs and deep neural networks in general.
=== Convolution in time ===
The term "convolution" first appears in neural networks in a paper by Toshiteru Homma, Les Atlas, and Robert Marks II at the first Conference on Neural Information Processing Systems in 1987. Their paper replaced multiplication with convolution in time, inherently providing shift invariance, motivated by and connecting more directly to the signal-processing concept of a filter, and demonstrated it on a speech recognition task. They also pointed out that as a data-trainable system, convolution is essentially equivalent to correlation since reversal of the weights does not affect the final learned function ("For convenience, we denote * as correlation instead of convolution. Note that convolving a(t) with b(t) is equivalent to correlating a(-t) with b(t)."). Modern CNN implementations typically do correlation and call it convolution, for convenience, as they did here.
=== Time delay neural networks ===
The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel et al. for phoneme recognition and was an early convolutional network exhibiting shift-invariance. A TDNN is a 1-D convolutional neural net where the convolution is performed along the time axis of the data. It is the first CNN utilizing weight sharing in combination with a training by gradient descent, using backpropagation. Thus, while also using a pyramidal structure as in the neocognitron, it performed a global optimization of the weights instead of a local one.
TDNNs are convolutional networks that share weights along the temporal dimension. They allow speech signals to be processed time-invariantly. In 1990 Hampshire and Waibel introduced a variant that performs a two-dimensional convolution. Since these TDNNs operated on spectrograms, the resulting phoneme recognition system was invariant to both time and frequency shifts, as with images processed by a neocognitron.
TDNNs improved the performance of far-distance speech recognition.
=== Image recognition with CNNs trained by gradient descent ===
Denker et al. (1989) designed a 2-D CNN system to recognize hand-written ZIP Code numbers. However, the lack of an efficient training method to determine the kernel coefficients of the involved convolutions meant that all the coefficients had to be laboriously hand-designed.
Following the advances in the training of 1-D CNNs by Waibel et al. (1987), Yann LeCun et al. (1989) used back-propagation to learn the convolution kernel coefficients directly from images of hand-written numbers. Learning was thus fully automatic, performed better than manual coefficient design, and was suited to a broader range of image recognition problems and image types.
Wei Zhang et al. (1988) used back-propagation to train the convolution kernels of a CNN for alphabets recognition. The model was called shift-invariant pattern recognition neural network before the name CNN was coined later in the early 1990s. Wei Zhang et al. also applied the same CNN without the last fully connected layer for medical image object segmentation (1991) and breast cancer detection in mammograms (1994).
This approach became a foundation of modern computer vision.
==== Max pooling ====
In 1990 Yamaguchi et al. introduced the concept of max pooling, a fixed filtering operation that calculates and propagates the maximum value of a given region. They did so by combining TDNNs with max pooling to realize a speaker-independent isolated word recognition system. In their system they used several TDNNs per word, one for each syllable. The results of each TDNN over the input signal were combined using max pooling and the outputs of the pooling layers were then passed on to networks performing the actual word classification.
In a variant of the neocognitron called the cresceptron, instead of using Fukushima's spatial averaging with inhibition and saturation, J. Weng et al. in 1993 used max pooling, where a downsampling unit computes the maximum of the activations of the units in its patch, introducing this method into the vision field.
Max pooling is often used in modern CNNs.
==== LeNet-5 ====
LeNet-5, a pioneering 7-level convolutional network by LeCun et al. in 1995, classifies hand-written numbers on checks (British English: cheques) digitized in 32x32 pixel images. Processing higher-resolution images requires larger and deeper convolutional neural networks, so the technique is constrained by the availability of computing resources.
It was superior to other commercial courtesy amount reading systems (as of 1995). The system was integrated in NCR's check reading systems and fielded in several American banks from June 1996, reading millions of checks per day.
=== Shift-invariant neural network ===
A shift-invariant neural network was proposed by Wei Zhang et al. for image character recognition in 1988. It is a modified Neocognitron by keeping only the convolutional interconnections between the image feature layers and the last fully connected layer. The model was trained with back-propagation. The training algorithm was further improved in 1991 to improve its generalization ability. The model architecture was modified by removing the last fully connected layer and applied for medical image segmentation (1991) and automatic detection of breast cancer in mammograms (1994).
A different convolution-based design was proposed in 1988 for application to decomposition of one-dimensional electromyography convolved signals via de-convolution. This design was modified in 1989 to other de-convolution-based designs.
=== GPU implementations ===
Although CNNs were invented in the 1980s, their breakthrough in the 2000s required fast implementations on graphics processing units (GPUs).
In 2004, it was shown by K. S. Oh and K. Jung that standard neural networks can be greatly accelerated on GPUs. Their implementation was 20 times faster than an equivalent implementation on CPU. In 2005, another paper also emphasised the value of GPGPU for machine learning.
The first GPU-implementation of a CNN was described in 2006 by K. Chellapilla et al. Their implementation was 4 times faster than an equivalent implementation on CPU. In the same period, GPUs were also used for unsupervised training of deep belief networks.
In 2010, Dan Ciresan et al. at IDSIA trained deep feedforward networks on GPUs. In 2011, they extended this to CNNs, achieving a 60-fold speedup over CPU training. In 2011, their network won an image recognition contest, achieving superhuman performance for the first time. They then won further competitions and achieved state-of-the-art results on several benchmarks.
Subsequently, AlexNet, a similar GPU-based CNN by Alex Krizhevsky et al. won the ImageNet Large Scale Visual Recognition Challenge 2012. It was an early catalytic event for the AI boom.
Compared to the training of CNNs on GPUs, relatively little attention has been given to CPUs. Viebke et al. (2019) parallelize CNN training using the thread- and SIMD-level parallelism available on the Intel Xeon Phi.
== Distinguishing features ==
In the past, traditional multilayer perceptron (MLP) models were used for image recognition. However, the full connectivity between nodes caused the curse of dimensionality and was computationally intractable with higher-resolution images. A 1000×1000-pixel image with RGB color channels has 3 million weights per fully-connected neuron, too many to process efficiently at scale.
For example, in CIFAR-10, images are only of size 32×32×3 (32 wide, 32 high, 3 color channels), so a single fully connected neuron in the first hidden layer of a regular neural network would have 32*32*3 = 3,072 weights. A 200×200 image, however, would lead to neurons that have 200*200*3 = 120,000 weights.
Also, such network architecture does not take into account the spatial structure of data, treating input pixels which are far apart in the same way as pixels that are close together. This ignores locality of reference in data with a grid-topology (such as images), both computationally and semantically. Thus, full connectivity of neurons is wasteful for purposes such as image recognition that are dominated by spatially local input patterns.
Convolutional neural networks are variants of multilayer perceptrons, designed to emulate the behavior of a visual cortex. These models mitigate the challenges posed by the MLP architecture by exploiting the strong spatially local correlation present in natural images. As opposed to MLPs, CNNs have the following distinguishing features:
3D volumes of neurons. The layers of a CNN have neurons arranged in 3 dimensions: width, height and depth. Each neuron inside a convolutional layer is connected to only a small region of the layer before it, called a receptive field. Distinct types of layers, both locally and completely connected, are stacked to form a CNN architecture.
Local connectivity: following the concept of receptive fields, CNNs exploit spatial locality by enforcing a local connectivity pattern between neurons of adjacent layers. The architecture thus ensures that the learned "filters" produce the strongest response to a spatially local input pattern. Stacking many such layers leads to nonlinear filters that become increasingly global (i.e. responsive to a larger region of pixel space) so that the network first creates representations of small parts of the input, then from them assembles representations of larger areas.
Shared weights: In CNNs, each filter is replicated across the entire visual field. These replicated units share the same parameterization (weight vector and bias) and form a feature map. This means that all the neurons in a given convolutional layer respond to the same feature within their specific response field. Replicating units in this way allows for the resulting activation map to be equivariant under shifts of the locations of input features in the visual field, i.e. they grant translational equivariance—given that the layer has a stride of one.
Pooling: In a CNN's pooling layers, feature maps are divided into rectangular sub-regions, and the features in each rectangle are independently down-sampled to a single value, commonly by taking their average or maximum value. In addition to reducing the sizes of feature maps, the pooling operation grants a degree of local translational invariance to the features contained therein, allowing the CNN to be more robust to variations in their positions.
Together, these properties allow CNNs to achieve better generalization on vision problems. Weight sharing dramatically reduces the number of free parameters learned, thus lowering the memory requirements for running the network and allowing the training of larger, more powerful networks.
== Building blocks ==
A CNN architecture is formed by a stack of distinct layers that transform the input volume into an output volume (e.g. holding the class scores) through a differentiable function. A few distinct types of layers are commonly used. These are further discussed below.
=== Convolutional layer ===
The convolutional layer is the core building block of a CNN. The layer's parameters consist of a set of learnable filters (or kernels), which have a small receptive field, but extend through the full depth of the input volume. During the forward pass, each filter is convolved across the width and height of the input volume, computing the dot product between the filter entries and the input, producing a 2-dimensional activation map of that filter. As a result, the network learns filters that activate when it detects some specific type of feature at some spatial position in the input.
Stacking the activation maps for all filters along the depth dimension forms the full output volume of the convolution layer. Every entry in the output volume can thus also be interpreted as an output of a neuron that looks at a small region in the input. All entries in an activation map use the same set of parameters that define the filter.
Self-supervised learning has been adapted for use in convolutional layers by using sparse patches with a high-mask ratio and a global response normalization layer.
==== Local connectivity ====
When dealing with high-dimensional inputs such as images, it is impractical to connect neurons to all neurons in the previous volume because such a network architecture does not take the spatial structure of the data into account. Convolutional networks exploit spatially local correlation by enforcing a sparse local connectivity pattern between neurons of adjacent layers: each neuron is connected to only a small region of the input volume.
The extent of this connectivity is a hyperparameter called the receptive field of the neuron. The connections are local in space (along width and height), but always extend along the entire depth of the input volume. Such an architecture ensures that the learned filters produce the strongest response to a spatially local input pattern.
==== Spatial arrangement ====
Three hyperparameters control the size of the output volume of the convolutional layer: the depth, stride, and padding size:
The depth of the output volume controls the number of neurons in a layer that connect to the same region of the input volume. These neurons learn to activate for different features in the input. For example, if the first convolutional layer takes the raw image as input, then different neurons along the depth dimension may activate in the presence of various oriented edges, or blobs of color.
Stride controls how depth columns around the width and height are allocated. If the stride is 1, then we move the filters one pixel at a time. This leads to heavily overlapping receptive fields between the columns, and to large output volumes. For any integer {\textstyle S>0,} a stride S means that the filter is translated S units at a time per output. In practice, {\textstyle S\geq 3} is rare. A greater stride means smaller overlap of receptive fields and smaller spatial dimensions of the output volume.
Sometimes, it is convenient to pad the input with zeros (or other values, such as the average of the region) on the border of the input volume. The size of this padding is a third hyperparameter. Padding provides control of the output volume's spatial size. In particular, sometimes it is desirable to exactly preserve the spatial size of the input volume; this is commonly referred to as "same" padding.
The spatial size of the output volume is a function of the input volume size {\displaystyle W}, the kernel field size {\displaystyle K} of the convolutional layer neurons, the stride {\displaystyle S}, and the amount of zero padding {\displaystyle P} on the border. The number of neurons that "fit" in a given volume is then:
{\displaystyle {\frac {W-K+2P}{S}}+1.}
If this number is not an integer, then the strides are incorrect and the neurons cannot be tiled to fit across the input volume in a symmetric way. In general, setting the zero padding to {\textstyle P=(K-1)/2} when the stride is {\displaystyle S=1} ensures that the input volume and output volume have the same spatial size. However, it is not always completely necessary to use all of the neurons of the previous layer. For example, a neural network designer may decide to use just a portion of padding.
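As an illustrative sketch (a hypothetical helper, not from any cited implementation), the output-size formula above can be checked directly:

```python
def conv_output_size(w, k, s, p):
    """Spatial output size of a convolution: (W - K + 2P) / S + 1.

    Raises if the chosen stride does not tile the padded input evenly,
    which corresponds to the non-integer case described above.
    """
    span = w - k + 2 * p
    if span % s != 0:
        raise ValueError("stride does not evenly tile the padded input")
    return span // s + 1

# "Same" padding: K=3, P=(K-1)/2=1, S=1 preserves the spatial size.
print(conv_output_size(32, 3, 1, 1))    # 32
# An 11x11 kernel with stride 4 and no padding on a 227-pixel input.
print(conv_output_size(227, 11, 4, 0))  # 55
```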
==== Parameter sharing ====
A parameter sharing scheme is used in convolutional layers to control the number of free parameters. It relies on the assumption that if a patch feature is useful to compute at some spatial position, then it should also be useful to compute at other positions. Denoting a single 2-dimensional slice of depth as a depth slice, the neurons in each depth slice are constrained to use the same weights and bias.
Since all neurons in a single depth slice share the same parameters, the forward pass in each depth slice of the convolutional layer can be computed as a convolution of the neuron's weights with the input volume. Therefore, it is common to refer to the sets of weights as a filter (or a kernel), which is convolved with the input. The result of this convolution is an activation map, and the set of activation maps for each different filter are stacked together along the depth dimension to produce the output volume. Parameter sharing contributes to the translation invariance of the CNN architecture.
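A minimal sketch of this forward pass for one filter and one input channel (stride 1, no padding; using the cross-correlation convention common in deep learning) makes the sharing explicit — every output entry reuses the same weights and bias:

```python
def conv2d_single_filter(x, w, b):
    """Slide one KxK filter over a 2-D input; each output entry is the
    dot product of the shared weights w (plus bias b) with a local patch."""
    H, W = len(x), len(x[0])
    K = len(w)
    out = []
    for i in range(H - K + 1):
        row = []
        for j in range(W - K + 1):
            acc = b
            for a in range(K):
                for c in range(K):
                    acc += w[a][c] * x[i + a][j + c]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 horizontal-edge filter applied to a 4x4 input with an edge.
x = [[1, 1, 1, 1],
     [1, 1, 1, 1],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
w = [[1, 1, 1],
     [0, 0, 0],
     [-1, -1, -1]]
print(conv2d_single_filter(x, w, 0.0))  # [[3.0, 3.0], [3.0, 3.0]]
```

The resulting 2×2 activation map is one depth slice of the output volume; stacking the maps of all filters gives the full volume.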
Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure; for which we expect completely different features to be learned on different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image. In that case it is common to relax the parameter sharing scheme, and instead simply call the layer a "locally connected layer".
=== Pooling layer ===
Another important concept of CNNs is pooling, a form of non-linear down-sampling that reduces the spatial dimensions (height and width) of the input feature maps while retaining the most important information. Several non-linear functions can implement pooling, of which max pooling and average pooling are the most common. Pooling aggregates information from small regions of the input, typically using a fixed-size window (such as 2×2) moved across the input with a stride (often 2). Note that without a stride greater than 1, pooling performs no downsampling: it is the stride that determines how much the pooling window moves over the input and hence how much the feature map shrinks.
Intuitively, the exact location of a feature is less important than its rough location relative to other features. This is the idea behind the use of pooling in convolutional neural networks. The pooling layer serves to progressively reduce the spatial size of the representation, to reduce the number of parameters, memory footprint and amount of computation in the network, and hence to also control overfitting. This is known as down-sampling. It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by an activation function, such as a ReLU layer) in a CNN architecture. While pooling layers contribute to local translation invariance, they do not provide global translation invariance in a CNN, unless a form of global pooling is used. The pooling layer commonly operates independently on every depth slice of the input and resizes it spatially. A very common form of max pooling is a layer with filters of size 2×2, applied with a stride of 2, which subsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:
{\displaystyle f_{X,Y}(S)=\max _{a,b=0}^{1}S_{2X+a,2Y+b}.}
In this case, every max operation is over 4 numbers. The depth dimension remains unchanged (this is true for other forms of pooling as well).
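This 2×2, stride-2 max pooling can be sketched in a few lines (an illustrative helper, not a reference implementation):

```python
def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on one depth slice: each output
    entry is the max of a disjoint 2x2 window, halving width and height
    and keeping 1 of every 4 activations."""
    H, W = len(x), len(x[0])
    return [[max(x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1])
             for j in range(0, W - 1, 2)]
            for i in range(0, H - 1, 2)]

slice_ = [[1, 3, 2, 1],
          [4, 6, 5, 7],
          [8, 2, 0, 1],
          [3, 4, 2, 9]]
print(max_pool_2x2(slice_))  # [[6, 7], [8, 9]]
```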
In addition to max pooling, pooling units can use other functions, such as average pooling or ℓ2-norm pooling. Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which generally performs better in practice.
Due to the effects of fast spatial reduction of the size of the representation, there is a recent trend towards using smaller filters or discarding pooling layers altogether.
==== Channel max pooling ====
A channel max pooling (CMP) operation layer conducts the MP operation along the channel side among the corresponding positions of the consecutive feature maps in order to eliminate redundant information. CMP gathers the significant features within fewer channels, which is important for fine-grained image classification that needs more discriminating features. At the same time, it reduces the channel number of the feature maps before they connect to the first fully connected (FC) layer. Similar to the MP operation, we denote the input and output feature maps of a CMP layer as F ∈ R(C×M×N) and C ∈ R(c×M×N), respectively, where C and c are the channel numbers of the input and output feature maps, and M and N are the width and the height of the feature maps, respectively. Note that the CMP operation only changes the channel number of the feature maps; the width and the height are unchanged, which differs from the MP operation.
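A rough sketch of the idea, assuming for illustration that channels are pooled in groups of consecutive channels (the `group` parameter is a hypothetical hyperparameter, not taken from the source):

```python
def channel_max_pool(maps, group):
    """Channel max pooling sketch: at every spatial position, take the max
    over each block of `group` consecutive channels. Width and height are
    unchanged; the channel count shrinks from C to C // group."""
    C, M, N = len(maps), len(maps[0]), len(maps[0][0])
    assert C % group == 0, "channel count must be divisible by the group size"
    return [[[max(maps[c0 + g][i][j] for g in range(group))
              for j in range(N)]
             for i in range(M)]
            for c0 in range(0, C, group)]

# Four 1x2 feature maps pooled down to two channels.
maps = [[[1, 2]], [[3, 0]], [[5, 1]], [[2, 6]]]
print(channel_max_pool(maps, 2))  # [[[3, 2]], [[5, 6]]]
```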
=== ReLU layer ===
ReLU is the abbreviation of rectified linear unit. It was proposed by Alston Householder in 1941, and used in CNNs by Kunihiko Fukushima in 1969. ReLU applies the non-saturating activation function {\textstyle f(x)=\max(0,x)}. It effectively removes negative values from an activation map by setting them to zero. It introduces nonlinearity to the decision function and in the overall network without affecting the receptive fields of the convolution layers.
In 2011, Xavier Glorot, Antoine Bordes and Yoshua Bengio found that ReLU enables better training of deeper networks, compared to widely used activation functions prior to 2011.
Other functions can also be used to increase nonlinearity, for example the saturating hyperbolic tangent {\displaystyle f(x)=\tanh(x)}, its absolute value {\displaystyle f(x)=|\tanh(x)|}, and the sigmoid function {\textstyle \sigma (x)=(1+e^{-x})^{-1}}. ReLU is often preferred to other functions because it trains the neural network several times faster without a significant penalty to generalization accuracy.
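The saturating/non-saturating distinction is easy to see numerically (an illustrative sketch, not from the sources):

```python
import math

def relu(x):
    """Non-saturating: passes positive values through unchanged."""
    return max(0.0, x)

def sigmoid(x):
    """Saturating: squashes large inputs toward 0 or 1."""
    return 1.0 / (1.0 + math.exp(-x))

# ReLU keeps positive magnitudes intact, while tanh and the sigmoid
# flatten out for large inputs, which slows gradient-based training.
print([relu(v) for v in (-2.0, 0.5, 3.0)])  # [0.0, 0.5, 3.0]
print(round(math.tanh(100.0), 6), round(sigmoid(100.0), 6))
```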
=== Fully connected layer ===
After several convolutional and max pooling layers, the final classification is done via fully connected layers. Neurons in a fully connected layer have connections to all activations in the previous layer, as seen in regular (non-convolutional) artificial neural networks. Their activations can thus be computed as an affine transformation, with matrix multiplication followed by a bias offset (vector addition of a learned or fixed bias term).
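The affine transformation described above can be sketched directly (a hypothetical toy example; real layers operate on flattened feature maps with learned parameters):

```python
def fully_connected(x, W, b):
    """Fully connected layer as an affine map y = Wx + b: every output
    neuron is a weighted sum of all input activations plus a bias."""
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

# 3 input activations -> 2 output neurons.
y = fully_connected([1.0, 2.0, 3.0],
                    [[0.1, 0.2, 0.3],
                     [1.0, 0.0, -1.0]],
                    [0.5, 0.0])
print(y)
```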
=== Loss layer ===
The "loss layer", or "loss function", specifies how training penalizes the deviation between the predicted output of the network and the true data labels (during supervised learning). Various loss functions can be used, depending on the specific task.
The Softmax loss function is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in {\displaystyle [0,1]}. Euclidean loss is used for regressing to real-valued labels in {\displaystyle (-\infty ,\infty )}.
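Two of these losses can be sketched as follows (illustrative helpers; names and signatures are not from any particular framework):

```python
import math

def softmax(z):
    """Normalize logits into a probability distribution over K classes."""
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_loss(logits, label):
    """Cross-entropy loss for one of K mutually exclusive classes."""
    return -math.log(softmax(logits)[label])

def euclidean_loss(pred, target):
    """Squared error for regressing real-valued labels."""
    return sum((p - t) ** 2 for p, t in zip(pred, target))

print(round(softmax_loss([2.0, 1.0, 0.1], 0), 4))
print(euclidean_loss([1.0, 2.0], [1.0, 0.0]))  # 4.0
```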
== Hyperparameters ==
Hyperparameters are various settings that are used to control the learning process. CNNs use more hyperparameters than a standard multilayer perceptron (MLP).
=== Padding ===
Padding is the addition of (typically) 0-valued pixels on the borders of an image. This is done so that the border pixels are not undervalued (lost) from the output because they would ordinarily participate in only a single receptive field instance. The padding applied is typically one less than the corresponding kernel dimension. For example, a convolutional layer using 3x3 kernels would receive a 2-pixel pad, that is, 1 pixel on each side of the image.
=== Stride ===
The stride is the number of pixels that the analysis window moves on each iteration. A stride of 2 means that each kernel is offset by 2 pixels from its predecessor.
=== Number of filters ===
Since feature map size decreases with depth, layers near the input layer tend to have fewer filters while higher layers can have more. To equalize computation at each layer, the product of the number of feature maps and the number of pixel positions is kept roughly constant across layers. Preserving more information about the input would require keeping the total number of activations (number of feature maps times number of pixel positions) non-decreasing from one layer to the next.
The number of feature maps directly controls the capacity and depends on the number of available examples and task complexity.
=== Filter (or Kernel) size ===
Common filter sizes found in the literature vary greatly, and are usually chosen based on the data set. Typical filter sizes range from 1x1 to 7x7. As two famous examples, AlexNet used 3x3, 5x5, and 11x11. Inceptionv3 used 1x1, 3x3, and 5x5.
The challenge is to find the right level of granularity so as to create abstractions at the proper scale, given a particular data set, and without overfitting.
=== Pooling type and size ===
Max pooling is typically used, often with a 2x2 dimension. This implies that the input is drastically downsampled, reducing processing cost.
Greater pooling reduces the dimension of the signal, and may result in unacceptable information loss. Often, non-overlapping pooling windows perform best.
=== Dilation ===
Dilation involves ignoring pixels within a kernel. This reduces computation and memory, potentially without significant signal loss. A dilation of 2 on a 3x3 kernel expands the kernel to 5x5, while still processing 9 (evenly spaced) pixels. Specifically, the processed pixels after the dilation are the cells (1,1), (1,3), (1,5), (3,1), (3,3), (3,5), (5,1), (5,3), (5,5), where (i,j) denotes the cell of the i-th row and j-th column in the expanded 5x5 kernel. Accordingly, a dilation of 3 expands the kernel to 7x7.
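The sampled positions can be enumerated with a small sketch (a hypothetical helper, using the convention that a dilation of d spaces the kernel taps d cells apart, matching the cell list above):

```python
def dilated_taps(k, d, origin=0):
    """Row/column coordinates read by a k x k kernel with dilation d.
    With k=3, d=2 the taps span a 5x5 region but still touch only 9
    evenly spaced cells."""
    offs = [origin + i * d for i in range(k)]
    return [(r, c) for r in offs for c in offs]

# 1-indexed cells inside the expanded 5x5 region, as in the text above.
print(dilated_taps(3, 2, origin=1))
```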
== Translation equivariance and aliasing ==
It is commonly assumed that CNNs are invariant to shifts of the input. Convolution or pooling layers within a CNN that do not have a stride greater than one are indeed equivariant to translations of the input. However, layers with a stride greater than one ignore the Nyquist–Shannon sampling theorem and might lead to aliasing of the input signal. While, in principle, CNNs are capable of implementing anti-aliasing filters, it has been observed that this does not happen in practice, yielding models that are not equivariant to translations.
Furthermore, if a CNN makes use of fully connected layers, translation equivariance does not imply translation invariance, as the fully connected layers are not invariant to shifts of the input. One solution for complete translation invariance is avoiding any down-sampling throughout the network and applying global average pooling at the last layer. Additionally, several other partial solutions have been proposed, such as anti-aliasing before downsampling operations, spatial transformer networks, data augmentation, subsampling combined with pooling, and capsule neural networks.
== Evaluation ==
The accuracy of the final model is typically estimated on a sub-part of the dataset set apart at the start, often called a test set. Alternatively, methods such as k-fold cross-validation are applied. Other strategies include using conformal prediction.
== Regularization methods ==
Regularization is a process of introducing additional information to solve an ill-posed problem or to prevent overfitting. CNNs use various types of regularization.
=== Empirical ===
==== Dropout ====
Because networks have so many parameters, they are prone to overfitting. One method to reduce overfitting is dropout, introduced in 2014. At each training stage, individual nodes are either "dropped out" of the net (ignored) with probability {\displaystyle 1-p} or kept with probability {\displaystyle p}, so that a reduced network is left; incoming and outgoing edges to a dropped-out node are also removed. Only the reduced network is trained on the data in that stage. The removed nodes are then reinserted into the network with their original weights.
In the training stages, {\displaystyle p} is usually 0.5; for input nodes, it is typically much higher because information is directly lost when input nodes are ignored.
At testing time after training has finished, we would ideally like to find a sample average of all possible {\displaystyle 2^{n}} dropped-out networks; unfortunately this is unfeasible for large values of {\displaystyle n}. However, we can find an approximation by using the full network with each node's output weighted by a factor of {\displaystyle p}, so the expected value of the output of any node is the same as in the training stages. This is the biggest contribution of the dropout method: although it effectively generates {\displaystyle 2^{n}} neural nets, and as such allows for model combination, at test time only a single network needs to be tested.
By avoiding training all nodes on all training data, dropout decreases overfitting. The method also significantly improves training speed. This makes the model combination practical, even for deep neural networks. The technique seems to reduce node interactions, leading them to learn more robust features that better generalize to new data.
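The train/test asymmetry described above can be sketched as follows (an illustrative toy, not a framework implementation; many modern libraries instead scale kept values by 1/p during training, so no test-time correction is needed):

```python
import random

def dropout_train(activations, p, rng=None):
    """Training-time dropout: keep each node with probability p,
    zero it (and implicitly its edges) otherwise."""
    rng = rng or random.Random(0)  # seeded here only for reproducibility
    return [a if rng.random() < p else 0.0 for a in activations]

def dropout_test(activations, p):
    """Test-time approximation: keep every node but weight its output
    by p, so its expected value matches the training stages."""
    return [p * a for a in activations]

acts = [1.0, 2.0, 3.0, 4.0]
print(dropout_test(acts, 0.5))  # [0.5, 1.0, 1.5, 2.0]
```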
==== DropConnect ====
DropConnect is the generalization of dropout in which each connection, rather than each output unit, can be dropped with probability {\displaystyle 1-p}. Each unit thus receives input from a random subset of units in the previous layer.
DropConnect is similar to dropout as it introduces dynamic sparsity within the model, but differs in that the sparsity is on the weights, rather than the output vectors of a layer. In other words, the fully connected layer with DropConnect becomes a sparsely connected layer in which the connections are chosen at random during the training stage.
==== Stochastic pooling ====
A major drawback to dropout is that it does not have the same benefits for convolutional layers, where the neurons are not fully connected.
Even before dropout, in 2013 a technique called stochastic pooling was introduced, in which the conventional deterministic pooling operations are replaced with a stochastic procedure: the activation within each pooling region is picked randomly according to a multinomial distribution given by the activities within the pooling region. This approach is free of hyperparameters and can be combined with other regularization approaches, such as dropout and data augmentation.
An alternate view of stochastic pooling is that it is equivalent to standard max pooling but with many copies of an input image, each having small local deformations. This is similar to explicit elastic deformations of the input images, which delivers excellent performance on the MNIST data set. Using stochastic pooling in a multilayer model gives an exponential number of deformations since the selections in higher layers are independent of those below.
==== Artificial data ====
Because the degree of model overfitting is determined by both its power and the amount of training it receives, providing a convolutional network with more training examples can reduce overfitting. Because there is often not enough available data to train, especially considering that some part should be spared for later testing, two approaches are to either generate new data from scratch (if possible) or perturb existing data to create new ones. The latter approach has been used since the mid-1990s. For example, input images can be cropped, rotated, or rescaled to create new examples with the same labels as the original training set.
=== Explicit ===
==== Early stopping ====
One of the simplest methods to prevent overfitting of a network is to simply stop the training before overfitting has had a chance to occur. It comes with the disadvantage that the learning process is halted.
==== Number of parameters ====
Another simple way to prevent overfitting is to limit the number of parameters, typically by limiting the number of hidden units in each layer or limiting network depth. For convolutional networks, the filter size also affects the number of parameters. Limiting the number of parameters restricts the predictive power of the network directly, reducing the complexity of the function that it can perform on the data, and thus limits the amount of overfitting. This is equivalent to a "zero norm".
==== Weight decay ====
A simple form of added regularizer is weight decay, which adds an extra penalty, proportional to the sum of absolute weights (L1 norm) or squared magnitude (L2 norm) of the weight vector, to the error at each node. The level of acceptable model complexity can be reduced by increasing the proportionality constant (the 'alpha' hyperparameter), thus increasing the penalty for large weight vectors.
L2 regularization is the most common form of regularization. It can be implemented by penalizing the squared magnitude of all parameters directly in the objective. The L2 regularization has the intuitive interpretation of heavily penalizing peaky weight vectors and preferring diffuse weight vectors. Due to multiplicative interactions between weights and inputs this has the useful property of encouraging the network to use all of its inputs a little rather than some of its inputs a lot.
L1 regularization is also common. It makes the weight vectors sparse during optimization. In other words, neurons with L1 regularization end up using only a sparse subset of their most important inputs and become nearly invariant to the noisy inputs. L1 with L2 regularization can be combined; this is called elastic net regularization.
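The "peaky versus diffuse" intuition can be made concrete with a small sketch (hypothetical helpers; `alpha` corresponds to the proportionality constant above):

```python
def l2_penalty(weights, alpha):
    """L2 weight decay term: alpha * sum of squared weights."""
    return alpha * sum(w * w for w in weights)

def l1_penalty(weights, alpha):
    """L1 term: alpha * sum of absolute weights; drives weights sparse."""
    return alpha * sum(abs(w) for w in weights)

# A "peaky" vector and a "diffuse" one with the same L1 mass: the L2
# penalty is 4x larger for the peaky vector, so L2 prefers diffuse weights.
peaky, diffuse = [4.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0]
print(l2_penalty(peaky, 0.01), l2_penalty(diffuse, 0.01))
print(l1_penalty(peaky, 0.01), l1_penalty(diffuse, 0.01))
```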
==== Max norm constraints ====
Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector for every neuron and use projected gradient descent to enforce the constraint. In practice, this corresponds to performing the parameter update as normal, and then enforcing the constraint by clamping the weight vector {\displaystyle {\vec {w}}} of every neuron to satisfy {\displaystyle \|{\vec {w}}\|_{2}<c}. Typical values of {\displaystyle c} are on the order of 3–4. Some papers report improvements when using this form of regularization.
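The projection step can be sketched as follows (an illustrative helper, assuming the usual rescale-to-the-ball projection):

```python
import math

def clamp_max_norm(w, c):
    """Project a weight vector back inside the L2 ball of radius c
    after a gradient step (projected gradient descent)."""
    norm = math.sqrt(sum(v * v for v in w))
    if norm <= c:
        return list(w)  # already satisfies the constraint
    return [v * c / norm for v in w]

w = [3.0, 4.0]                 # L2 norm 5
print(clamp_max_norm(w, 4.0))  # rescaled to norm 4
```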
== Hierarchical coordinate frames ==
Pooling loses the precise spatial relationships between high-level parts (such as nose and mouth in a face image). These relationships are needed for identity recognition. Overlapping the pools so that each feature occurs in multiple pools helps retain the information. Translation alone cannot extrapolate the understanding of geometric relationships to a radically new viewpoint, such as a different orientation or scale. On the other hand, people are very good at extrapolating; after seeing a new shape once they can recognize it from a different viewpoint.
An earlier common way to deal with this problem is to train the network on transformed data in different orientations, scales, lighting, etc. so that the network can cope with these variations. This is computationally intensive for large data-sets. The alternative is to use a hierarchy of coordinate frames and use a group of neurons to represent a conjunction of the shape of the feature and its pose relative to the retina. The pose relative to the retina is the relationship between the coordinate frame of the retina and the intrinsic features' coordinate frame.
Thus, one way to represent something is to embed the coordinate frame within it. This allows large features to be recognized by using the consistency of the poses of their parts (e.g. nose and mouth poses make a consistent prediction of the pose of the whole face). This approach ensures that the higher-level entity (e.g. face) is present when the lower-level (e.g. nose and mouth) agree on its prediction of the pose. The vectors of neuronal activity that represent pose ("pose vectors") allow spatial transformations modeled as linear operations that make it easier for the network to learn the hierarchy of visual entities and generalize across viewpoints. This is similar to the way the human visual system imposes coordinate frames in order to represent shapes.
== Applications ==
=== Image recognition ===
CNNs are often used in image recognition systems. In 2012, an error rate of 0.23% on the MNIST database was reported. Another paper on using CNN for image classification reported that the learning process was "surprisingly fast"; in the same paper, the best published results as of 2011 were achieved in the MNIST database and the NORB database. Subsequently, a similar CNN called AlexNet won the ImageNet Large Scale Visual Recognition Challenge 2012.
When applied to facial recognition, CNNs achieved a large decrease in error rate. Another paper reported a 97.6% recognition rate on "5,600 still images of more than 10 subjects". CNNs were used to assess video quality in an objective way after manual training; the resulting system had a very low root mean square error.
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object classification and detection, with millions of images and hundreds of object classes. In the ILSVRC 2014, a large-scale visual recognition challenge, almost every highly ranked team used CNN as their basic framework. The winner GoogLeNet (the foundation of DeepDream) increased the mean average precision of object detection to 0.439329, and reduced classification error to 0.06656, the best result to date. Its network applied more than 30 layers. That performance of convolutional neural networks on the ImageNet tests was close to that of humans. The best algorithms still struggle with objects that are small or thin, such as a small ant on a stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters, an increasingly common phenomenon with modern digital cameras. By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. For example, they are not good at classifying objects into fine-grained categories such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this.
In 2015, a many-layered CNN demonstrated the ability to spot faces from a wide range of angles, including upside down, even when partially occluded, with competitive performance. The network was trained on a database of 200,000 images that included faces at various angles and orientations and a further 20 million images without faces. They used batches of 128 images over 50,000 iterations.
=== Video analysis ===
Compared to image data domains, there is relatively little work on applying CNNs to video classification. Video is more complex than images since it has another (temporal) dimension. However, some extensions of CNNs into the video domain have been explored. One approach is to treat space and time as equivalent dimensions of the input and perform convolutions in both time and space. Another way is to fuse the features of two convolutional neural networks, one for the spatial and one for the temporal stream. Long short-term memory (LSTM) recurrent units are typically incorporated after the CNN to account for inter-frame or inter-clip dependencies. Unsupervised learning schemes for training spatio-temporal features have been introduced, based on Convolutional Gated Restricted Boltzmann Machines and Independent Subspace Analysis. Its application can be seen in text-to-video models.
=== Natural language processing ===
CNNs have also been explored for natural language processing. CNN models are effective for various NLP problems and achieved excellent results in semantic parsing, search query retrieval, sentence modeling, classification, prediction and other traditional NLP tasks.
Compared to traditional language processing methods such as recurrent neural networks, CNNs can represent different contextual realities of language that do not rely on a series-sequence assumption, while RNNs are better suited when classical time series modeling is required.
=== Anomaly detection ===
A CNN with 1-D convolutions was used on time series in the frequency domain (spectral residual) by an unsupervised model to detect anomalies in the time domain.
=== Drug discovery ===
CNNs have been used in drug discovery. Predicting the interaction between molecules and biological proteins can identify potential treatments. In 2015, Atomwise introduced AtomNet, the first deep learning neural network for structure-based drug design. The system trains directly on 3-dimensional representations of chemical interactions. Similar to how image recognition networks learn to compose smaller, spatially proximate features into larger, complex structures, AtomNet discovers chemical features, such as aromaticity, sp3 carbons, and hydrogen bonding. Subsequently, AtomNet was used to predict novel candidate biomolecules for multiple disease targets, most notably treatments for the Ebola virus and multiple sclerosis.
=== Checkers game ===
CNNs have been used in the game of checkers. From 1999 to 2001, Fogel and Chellapilla published papers showing how a convolutional neural network could learn to play checkers using co-evolution. The learning process did not use prior human professional games, but rather focused on a minimal set of information contained in the checkerboard: the location and type of pieces, and the difference in number of pieces between the two sides. Ultimately, the program (Blondie24) was tested on 165 games against players and ranked in the highest 0.4%. It also earned a win against the program Chinook at its "expert" level of play.
=== Go ===
CNNs have been used in computer Go. In December 2014, Clark and Storkey published a paper showing that a CNN trained by supervised learning from a database of human professional games could outperform GNU Go and win some games against Monte Carlo tree search Fuego 1.1 in a fraction of the time it took Fuego to play. Later it was announced that a large 12-layer convolutional neural network had correctly predicted the professional move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GNU Go in 97% of games, and matched the performance of the Monte Carlo tree search program Fuego simulating ten thousand playouts (about a million positions) per move.
AlphaGo, the first program to beat the best human Go player at the time, used a pair of CNNs to drive MCTS: a "policy network" for choosing moves to try and a "value network" for evaluating positions.
=== Time series forecasting ===
Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better. Dilated convolutions might enable one-dimensional convolutional neural networks to effectively learn time series dependences. Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients. Convolutional networks can provide an improved forecasting performance when there are multiple similar time series to learn from. CNNs can also be applied to further tasks in time series analysis (e.g., time series classification or quantile forecasting).
=== Cultural heritage and 3D-datasets ===
As archaeological findings such as clay tablets with cuneiform writing are increasingly acquired using 3D scanners, benchmark datasets are becoming available, including HeiCuBeDa, which provides almost 2,000 normalized 2-D and 3-D datasets prepared with the GigaMesh Software Framework. Curvature-based measures are used in conjunction with geometric neural networks (GNNs), e.g., for the period classification of these clay tablets, which are among the oldest documents of human history.
== Fine-tuning ==
For many applications, training data is not readily available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain. Once the network parameters have converged, an additional training step is performed using the in-domain data to fine-tune the network weights; this is known as transfer learning. Furthermore, this technique allows convolutional network architectures to be applied successfully to problems with tiny training sets.
== Human interpretable explanations ==
End-to-end training and prediction are common practice in computer vision. However, human interpretable explanations are required for critical systems such as self-driving cars. With recent advances in visual salience, spatial attention, and temporal attention, the most critical spatial regions or temporal instants can be visualized to justify the CNN predictions.
== Related architectures ==
=== Deep Q-networks ===
A deep Q-network (DQN) is a type of deep learning model that combines a deep neural network with Q-learning, a form of reinforcement learning. Unlike earlier reinforcement learning agents, DQNs that utilize CNNs can learn directly from high-dimensional sensory inputs via reinforcement learning.
Preliminary results were presented in 2014, with an accompanying paper in February 2015. The research described an application to Atari 2600 gaming. Other deep reinforcement learning models preceded it.
=== Deep belief networks ===
Convolutional deep belief networks (CDBN) have structure very similar to convolutional neural networks and are trained similarly to deep belief networks. Therefore, they exploit the 2D structure of images, like CNNs do, and make use of pre-training like deep belief networks. They provide a generic structure that can be used in many image and signal processing tasks. Benchmark results on standard image datasets like CIFAR have been obtained using CDBNs.
=== Neural abstraction pyramid ===
The feed-forward architecture of convolutional neural networks was extended in the neural abstraction pyramid by lateral and feedback connections. The resulting recurrent convolutional network allows for the flexible incorporation of contextual information to iteratively resolve local ambiguities. In contrast to previous models, image-like outputs at the highest resolution were generated, e.g., for semantic segmentation, image reconstruction, and object localization tasks.
== Notable libraries ==
Caffe: A library for convolutional neural networks. Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU and GPU. Developed in C++, and has Python and MATLAB wrappers.
Deeplearning4j: Deep learning in Java and Scala on multi-GPU-enabled Spark. A general-purpose deep learning library for the JVM production stack running on a C++ scientific computing engine. Allows the creation of custom layers. Integrates with Hadoop and Kafka.
Dlib: A toolkit for making real world machine learning and data analysis applications in C++.
Microsoft Cognitive Toolkit: A deep learning toolkit written by Microsoft with several unique features enhancing scalability over multiple nodes. It supports full-fledged interfaces for training in C++ and Python and with additional support for model inference in C# and Java.
TensorFlow: Apache 2.0-licensed Theano-like library with support for CPU, GPU, Google's proprietary tensor processing unit (TPU), and mobile devices.
Theano: The reference deep-learning library for Python with an API largely compatible with the popular NumPy library. Allows user to write symbolic mathematical expressions, then automatically generates their derivatives, saving the user from having to code gradients or backpropagation. These symbolic expressions are automatically compiled to CUDA code for a fast, on-the-GPU implementation.
Torch: A scientific computing framework with wide support for machine learning algorithms, written in C and Lua.
== See also ==
Attention (machine learning)
Convolution
Deep learning
Natural-language processing
Neocognitron
Scale-invariant feature transform
Time delay neural network
Vision processing unit
== Notes ==
== References ==
== External links ==
CS231n: Convolutional Neural Networks for Visual Recognition — Andrej Karpathy's Stanford computer science course on CNNs in computer vision
vdumoulin/conv_arithmetic: A technical report on convolution arithmetic in the context of deep learning. Animations of convolutions. | Wikipedia/Convolutional_neural_networks |
Constellation Energy Corporation is an American energy company headquartered in Baltimore, Maryland. The company provides electric power, natural gas, and energy management services. It has approximately two million customers across the continental United States.
The company was known as Constellation Energy Group (former NYSE ticker symbol CEG), a Fortune 500 company and one of the largest electricity producers in the United States, until a merger with Exelon in 2012. When FERC approved the acquisition, Constellation Energy's energy supply business was re-branded as Constellation, an Exelon company. As part of the 2012 merger, Baltimore Gas and Electric, the regulated utility operated by Constellation Energy, became a regulated utility operating under Exelon Utilities. The current iteration of the company was founded in 2022 after splitting off from Exelon.
Before merging with Exelon, Constellation Energy Group operated more than 35 power plants in 11 states (mainly Maryland, Pennsylvania, New York, West Virginia, and California). Baltimore Gas and Electric created Constellation as a holding company in 1999.
== History ==
On September 15, 2005, Constellation Energy announced a joint venture, UniStar Nuclear, with Areva to market the European Pressurized Reactor (EPR) in the United States. On December 19, 2005, FPL Group, Inc. announced the acquisition of Constellation Energy in a merger transaction valued at more than $11 billion, as well as the fact that it would adopt Constellation Energy as its name for the post-merger entity. The merger was canceled on October 25, 2006.
In July 2008, Constellation Energy bought uranium trading firm Nufcor International from AngloGold Ashanti and FirstRand International.
On September 15, 2008, after reports that Constellation had exposure to Lehman Brothers following that firm's bankruptcy filing, Constellation's stock price dropped 56% in a single day. The massive drop led the New York Stock Exchange to halt trading in Constellation. The next day, as the stock fell as low as $13 a share, the company announced it was hiring Morgan Stanley and UBS to advise it on "strategic alternatives" suggesting a buyout. On September 17, 2008, Constellation accepted an offer of $4.7 billion by MidAmerican Energy, a subsidiary of Berkshire Hathaway, but ultimately canceled the deal on December 17, 2008, in favor of a $4.5 billion buyout from French power company Electricite de France (EDF).
In January 2009, Constellation Energy announced it would sell the majority of its London-based international commodities business to an affiliate of Goldman Sachs for an undisclosed price.
In April 2010, Constellation Energy closed its agreement with Clipper Windpower to acquire the Criterion Wind Project in Garrett County, Maryland, and to purchase 28 Clipper Liberty 2.5-MW wind turbines for the project. Construction was completed in December 2010. In May 2010, the firm acquired two natural gas combined-cycle generation facilities in Texas from Houston-based Navasota Holdings. The $365 million transaction included the Colorado Bend Energy Center, a 550-MW facility near Wharton, Texas, and Quail Run Energy Center, a 550-MW facility near Odessa, Texas. The purchase added 1,100 MW of capacity.
On April 28, 2011, Exelon announced its intention to purchase Constellation Energy. The merger was completed on March 12, 2012.
On May 27, 2011, Constellation Energy announced its intention to purchase StarTex Power, a retail electricity provider in Houston, Texas; the purchase was completed on June 1, 2011. In 2018, the StarTex brand was discontinued; Constellation served its existing customers instead. In May 2011, the company acquired MXenergy, a residential and small business energy provider with approximately half a million customers. In December 2011, it announced the acquisition of ONEOK Energy Marketing Co., a natural gas company with customers in the Midwest. In 2011, it contracted to construct and operate what was then the largest rooftop solar array ever built, for the Toys-R-Us distribution center in Flanders, New Jersey.
In March 2014, it agreed to acquire ETC ProLiance Energy, a supplier of natural gas to customers in several states. In November 2014, it completed its acquisition of Integrys Energy Services, a competitive retail electricity and natural gas subsidiary serving customers in 22 states. In September 2016, it completed its acquisition of the retail electricity and natural gas business from ConEdison Solutions, a subsidiary of Consolidated Edison, Inc. In the purchase, Constellation acquired ConEdison's retail electricity and natural gas customer contracts and associated supply contracts serving approximately 15 TWh of electricity and 1 billion cubic feet per annum (28,000,000 m3/a) of natural gas to more than 560,000 commercial, industrial, public sector and residential customers.
In August 2018, it began constructing a 10-megawatt solar array outside of Ocean City, Maryland. The array will provide the city with approximately 20% of its annual energy usage when completed. In October 2018, Constellation and the Tucson Unified School District completed a project that added solar generation capability to 82 of the district's buildings and facilities. The project is estimated to meet 47% of the district's electricity needs.
In 2022, it became an independent company after Exelon split its utilities and power generation businesses. Former subsidiary Baltimore Gas & Electric remained part of Exelon.
In September 2024, Microsoft entered into a contract with the company to restart the undamaged nuclear reactor at the Three Mile Island plant. The company is also planning to upgrade other existing reactor plants to provide more power.
In January 2025, Constellation agreed to acquire the natural gas and geothermal power provider Calpine for $16.4 billion ($26.6bn including debt) in a cash-and-stock deal. Approval of the purchase by state and federal regulators will be necessary.
== Operations ==
=== Electric power ===
Constellation provides electric power to commercial and industrial customers. Its electricity supply business manages energy sales, dispatch, and delivery from Exelon's power generation portfolio to utilities, municipal co-ops, and energy retailers nationwide. As of 2018, Constellation had around 360 megawatts of solar generation assets that are either in operation or under construction across the United States, including Maryland, California, Arizona, New Jersey, and Texas. In 2011, Constellation was contracted to construct and operate what was then the largest rooftop solar array ever constructed for the Toys-R-Us distribution center in Flanders, New Jersey.
The company's offsite renewables service (CORe) provides access to offsite renewable energy projects through a retail power contract. CORe combines location-specific renewable energy purchases and certificates with a physical load-following energy supply contract.
=== Natural gas ===
Constellation delivers approximately 730 billion cubic feet (21 billion cubic metres) of natural gas annually to customers, making it one of the ten largest natural gas marketers in the United States. The company oversees trading, transport, and storage of physical gas supply, pricing, hedging, and risk management.
=== Constellation Technology Ventures ===
Constellation Technology Ventures (CTV), Constellation's venture capital fund, invests in start-up companies with emerging energy technologies. Its portfolio includes Proterra, ChargePoint, and Aquion Energy.
=== Nuclear ===
Constellation is the United States' leading nuclear power plant operator, with more than 19,000 megawatts of nuclear generating capacity.
Braidwood Nuclear Generating Station (Illinois)
Byron Nuclear Generating Station (Illinois)
Calvert Cliffs Nuclear Power Plant (Maryland)
Clinton Nuclear Generating Station (Illinois)
Dresden Generating Station (Illinois)
Ginna Nuclear Generating Station (New York)
James A. FitzPatrick Nuclear Power Plant (New York)
LaSalle County Nuclear Generating Station (Illinois)
Limerick Nuclear Power Plant (Pennsylvania)
Nine Mile Point Nuclear Generating Station (New York)
Peach Bottom Atomic Power Station (Pennsylvania)
Quad Cities Nuclear Generating Station (Illinois)
Salem Nuclear Power Plant (New Jersey) (minority owner)
Three Mile Island Nuclear Generating Station (Pennsylvania) (Unit 2 owned by EnergySolutions)
South Texas Nuclear Generating Station (Texas) (Minority owner)
=== Fossil ===
Constellation owns and operates a portfolio of fossil fuel and other sources generating more than 12,000 megawatts (MW) of power.
Chester Generating Station – oil (Pennsylvania), which is distinct from the historic Chester Waterside Station
Colorado Bend II Energy Center – natural gas (Texas)
Croydon Generating Station – oil (Pennsylvania)
Delaware Generating Station – oil (Pennsylvania)
Eddystone Generating Station – natural gas and oil (Pennsylvania)
Everett LNG Facility – natural gas imports (Massachusetts) a.k.a. Distrigas, purchased in 2019
Falls Generating Station – oil (Pennsylvania)
Framingham Generating Station – oil (Massachusetts)
Grande Prairie Generating Station – natural gas (Alberta, Canada)
Handley Generating Station – natural gas (Texas)
Handsome Lake Generating Station – natural gas (Pennsylvania)
Hillabee Generating Station – natural gas (Alabama)
Moser Generating Station – oil (Pennsylvania)
Mystic Generating Station – natural gas (Massachusetts)
Perryman Generating Station – oil and natural gas (Maryland)
Philadelphia Road Generating Station – oil (Maryland)
Richmond Generating Station – oil (Pennsylvania)
Schuylkill Generating Station – oil (Pennsylvania)
Southwark Generating Station – oil (Pennsylvania)
West Medway Generating Station I – oil (Massachusetts)
West Medway Generating Station II – natural gas or oil (Massachusetts)
Wolf Hollow II Generating Station – natural gas (Texas)
Wyman Generating Station – oil (Maine) (minority owner)
=== Hydro ===
Constellation's two hydroelectric plants generate 1,600 MW of power.
Conowingo Dam (Maryland)
Muddy Run Pumped Storage Facility (Pennsylvania)
=== Solar ===
Antelope Valley Solar Ranch One (California)
=== Wind ===
Constellation has 27 wind projects in ten states, totaling nearly 1,400 megawatts (MW).
=== Generation services ===
Constellation owns Constellation Generation Solutions (CGS), a maintenance and technical services organization structured to streamline work execution for the nuclear fleet.
Constellation PowerLabs is a wholly owned subsidiary of Constellation. Founded in 1911 to support the power industry, PowerLabs became Exelon's primary calibration and testing laboratory. It has four individual labs located from the upper Midwest to the Northeast, enabling experienced staff in engineering, metrology, and nuclear power generation to support the demands of the nation's nuclear facilities, power grids, and critical supply chains.
== Baltimore community involvement ==
=== Historical archives ===
Constellation owns the archives of the Baltimore Gas & Electric Company, the former Consolidated Gas Light, Electric Power Company of Baltimore City, and its early predecessor, the Gas Light Company of Baltimore. The Baltimore Gas & Electric Company's photographic collection consists of approximately 250,000 photographic prints and negatives, in more than 50,000 series. The archives are held by the Baltimore Museum of Industry.
=== Philanthropy ===
Constellation ranks second in local corporate giving among Baltimore-based companies, and donated $7.10 million in 2017. The company also provides grants to local schools that implement education programs promoting science and technology.
== See also ==
Calvert Cliffs Nuclear Generating Station, on the Chesapeake Bay, Calvert County; Lusby, Maryland
Conemaugh Generating Station
Ginna Nuclear Generating Station
Keystone Generating Station
Nine Mile Point Nuclear Generating Station
Safe Harbor Dam, on the Susquehanna River, Pennsylvania
== References ==
== External links ==
Official website
Business data for Constellation Energy Corp: | Wikipedia/Constellation_Energy |
The transformer is a deep learning architecture based on the multi-head attention mechanism, in which text is converted to numerical representations called tokens, and each token is converted into a vector via lookup from a word embedding table. At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for key tokens to be amplified and less important tokens to be diminished.
Transformers have the advantage of having no recurrent units, therefore requiring less training time than earlier recurrent neural architectures (RNNs) such as long short-term memory (LSTM). Later variations have been widely adopted for training large language models (LLM) on large (language) datasets.
The modern version of the transformer was proposed in the 2017 paper "Attention Is All You Need" by researchers at Google. Transformers were first developed as an improvement over previous architectures for machine translation, but have found many applications since. They are used in large-scale natural language processing, computer vision (vision transformers), reinforcement learning, audio, multimodal learning, robotics, and even playing chess. It has also led to the development of pre-trained systems, such as generative pre-trained transformers (GPTs) and BERT (bidirectional encoder representations from transformers).
== History ==
=== Predecessors ===
For many years, sequence modelling and generation was done by using plain recurrent neural networks (RNNs). A well-cited early example was the Elman network (1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice the vanishing-gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens.
A key breakthrough was LSTM (1995), a RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of an attention mechanism which used neurons that multiply the outputs of other neurons, so-called multiplicative units. Neural networks using multiplicative units were later called sigma-pi networks or higher-order networks. LSTM became the standard architecture for long sequence modelling until the 2017 publication of Transformers.
However, LSTM still used sequential processing, like most other RNNs. Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence.
Modern Transformers overcome this problem, but unlike RNNs, they require computation time that is quadratic in the size of the context window. The linearly scaling fast weight controller (1992) learns to compute a weight matrix for further processing depending on the input. One of its two networks has "fast weights" or "dynamic links" (1981). A slow neural network learns by gradient descent to generate keys and values for computing the weight changes of the fast neural network which computes answers to queries. This was later shown to be equivalent to the unnormalized linear Transformer.
=== Attention with seq2seq ===
The idea of encoder-decoder sequence transduction had been developed in the early 2010s; commonly cited as the originators that produced seq2seq are two concurrently published papers from 2014.
A 380M-parameter model for machine translation uses two long short-term memories (LSTM). Its architecture consists of two parts. The encoder is an LSTM that takes in a sequence of tokens and turns it into a vector. The decoder is another LSTM that converts the vector into a sequence of tokens. Similarly, another 130M-parameter model used gated recurrent units (GRU) instead of LSTM. Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq.
These early seq2seq models had no attention mechanism, and the state vector is accessible only after the last word of the source text was processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved. This is because the input is processed sequentially by one recurrent network into a fixed-size output vector, which is then processed by another recurrent network into an output. If the input is long, then the output vector would not be able to contain all relevant information, degrading the output. As evidence, reversing the input sentence improved seq2seq translation.
The RNNsearch model introduced an attention mechanism to seq2seq for machine translation to solve the bottleneck problem (of the fixed-size output vector), allowing the model to process long-distance dependencies more easily. The name is because it "emulates searching through a source sentence during decoding a translation".
The relative performances were compared between global (that of RNNsearch) and local (sliding window) attention model architectures for machine translation, finding that mixed attention had higher quality than global attention, while local attention reduced translation time.
In 2016, Google Translate was revamped to Google Neural Machine Translation, which replaced the previous model based on statistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM. It took nine months to develop, and it outperformed the statistical approach, which took ten years to develop.
=== Parallelizing attention ===
Seq2seq models with attention (including self-attention) still suffered from the same issue with recurrent networks, which is that they are hard to parallelize, which prevented them from being accelerated on GPUs. In 2016, decomposable attention applied a self-attention mechanism to feedforward networks, which are easy to parallelize, and achieved SOTA result in textual entailment with an order of magnitude fewer parameters than LSTMs. One of its authors, Jakob Uszkoreit, suspected that attention without recurrence would be sufficient for language translation, thus the title "attention is all you need". That hypothesis was against conventional wisdom at the time, and even his father Hans Uszkoreit, a well-known computational linguist, was skeptical. In the same year, self-attention (called intra-attention or intra-sentence attention) was proposed for LSTMs.
In 2017, the original (100M-sized) encoder-decoder transformer model was proposed in the "Attention is all you need" paper. At the time, the focus of the research was on improving seq2seq for machine translation, by removing its recurrence to process all tokens in parallel, but preserving its dot-product attention mechanism to keep its text processing performance. This led to the introduction of a multi-head attention model that was easier to parallelize due to the use of independent heads and the lack of recurrence. Its parallelizability was an important factor to its widespread use in large neural networks.
=== AI boom era ===
Already in spring 2017, even before the "Attention is all you need" preprint was published, one of the co-authors applied the "decoder-only" variation of the architecture to generate fictitious Wikipedia articles. Transformer architecture is now used alongside many generative models that contribute to the ongoing AI boom.
In language modelling, ELMo (2018) was a bi-directional LSTM that produces contextualized word embeddings, improving upon the line of research from bag of words and word2vec. It was followed by BERT (2018), an encoder-only Transformer model. In October 2019, Google started using BERT to process search queries. In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model with a Transformer-encoder–RNN-decoder model.
Starting in 2018, the OpenAI GPT series of decoder-only Transformers became state of the art in natural language generation. In 2022, a chatbot based on GPT-3, ChatGPT, became unexpectedly popular, triggering a boom around large language models.
Since 2020, Transformers have been applied in modalities beyond text, including the vision transformer, speech recognition, robotics, and multimodal. The vision transformer, in turn, stimulated new developments in convolutional neural networks. Image and video generators like DALL-E (2021), Stable Diffusion 3 (2024), and Sora (2024), use Transformers to analyse input data (like text prompts) by breaking it down into "tokens" and then calculating the relevance between each token using self-attention, which helps the model understand the context and relationships within the data.
== Training ==
=== Methods for stabilizing training ===
The plain transformer architecture had difficulty converging. In the original paper the authors recommended using learning rate warmup: the learning rate is linearly scaled up from 0 to its maximal value for the first part of training (usually recommended to be 2% of the total number of training steps), before decaying again.
A 2020 paper found that using layer normalization before (instead of after) multiheaded attention and feedforward layers stabilizes training, not requiring learning rate warmup.
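The two orderings can be sketched schematically; `sublayer` below is a stand-in for an attention or feedforward sublayer, not the real computation:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize to zero mean and unit variance.
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def sublayer(x):
    return np.tanh(x)  # stand-in for attention or feedforward

def post_ln_block(x):
    # Original ("post-LN") ordering: normalize after the residual addition.
    return layer_norm(x + sublayer(x))

def pre_ln_block(x):
    # "Pre-LN" ordering: normalize the sublayer's input; the residual path
    # stays un-normalized, which was found to stabilize training.
    return x + sublayer(layer_norm(x))

x = np.array([0.5, -1.0, 2.0, 0.1])
print(post_ln_block(x))
print(pre_ln_block(x))
```

The difference is only where `layer_norm` sits relative to the residual connection, but it changes how gradients flow through deep stacks.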
=== Pretrain-finetune ===
Transformers typically are first pretrained by self-supervised learning on a large generic dataset, followed by supervised fine-tuning on a small task-specific dataset. The pretrain dataset is typically an unlabeled large corpus, such as The Pile. Tasks for pretraining and fine-tuning commonly include:
language modeling
next-sentence prediction
question answering
reading comprehension
sentiment analysis
paraphrasing
The T5 transformer report documents a large number of natural language pretraining tasks. Some examples are:
restoring or repairing incomplete or corrupted text. For example, the input, "Thank you ~~ me to your party ~~ week", might generate the output, "Thank you for inviting me to your party last week".
translation between natural languages (machine translation)
judging the pragmatic acceptability of natural language. For example, the following sentence might be judged "not acceptable", because even though it is syntactically well-formed, it is improbable in ordinary human usage: The course is jumping well.
Note that while each of these tasks is trivial or obvious for human native speakers of the language (or languages), they have typically proved challenging for previous generations of machine learning architecture.
=== Tasks ===
In general, there are 3 classes of language modelling tasks: "masked", "autoregressive", and "prefixLM". These classes are independent of a specific modeling architecture such as Transformer, but they are often discussed in the context of Transformer.
In a masked task, one or more of the tokens is masked out, and the model would produce a probability distribution predicting what the masked-out tokens are based on the context. The loss function for the task is typically sum of log-perplexities for the masked-out tokens:
{\displaystyle {\text{Loss}}=-\sum _{t\in {\text{masked tokens}}}\ln({\text{probability of }}t{\text{ conditional on its context}})}
and the model is trained to minimize this loss function. The BERT series of models are trained for masked token prediction and another task.
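As a minimal sketch, the loss above reduces to summing negative log-probabilities at the masked positions (the distributions and three-token vocabulary below are illustrative):

```python
import numpy as np

# Model's predicted probability distribution over a 3-token vocabulary
# at each masked position (rows sum to 1; values are made up).
probs_at_masked = np.array([
    [0.7, 0.2, 0.1],   # distribution at the first masked position
    [0.1, 0.8, 0.1],   # distribution at the second masked position
])
true_tokens = np.array([0, 1])  # the tokens that were masked out

# Loss = -sum over masked tokens of ln P(token | context)
loss = -np.sum(np.log(probs_at_masked[np.arange(len(true_tokens)), true_tokens]))
print(round(loss, 4))  # 0.5798 = -(ln 0.7 + ln 0.8)
```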
In an autoregressive task, the entire sequence is masked at first, and the model produces a probability distribution for the first token. Then the first token is revealed and the model predicts the second token, and so on. The loss function for the task is still typically the same. The GPT series of models are trained by autoregressive tasks.
In a prefixLM task, the sequence is divided into two parts. The first part is presented as context, and the model predicts the first token of the second part. Then that would be revealed, and the model predicts the second token, and so on. The loss function for the task is still typically the same. The T5 series of models are trained by prefixLM tasks.
Note that "masked" as in "masked language modelling" is not "masked" as in "masked attention", and "prefixLM" (prefix language modeling) is not "prefixLM" (prefix language model).
== Architecture ==
All transformers have the same primary components:
Tokenizers, which convert text into tokens.
Embedding layer, which converts tokens and positions of the tokens into vector representations.
Transformer layers, which carry out repeated transformations on the vector representations, extracting more and more linguistic information. These consist of alternating attention and feedforward layers. There are two major types of transformer layers: encoder layers and decoder layers, with further variants.
Un-embedding layer, which converts the final vector representations back to a probability distribution over the tokens.
The following description follows exactly the Transformer as described in the original paper. There are variants, described in the following section.
By convention, we write all vectors as row vectors. This, for example, means that pushing a vector through a linear layer means multiplying it by a weight matrix on the right, as {\displaystyle xW}.
=== Tokenization ===
As the Transformer architecture natively processes numerical data, not text, there must be a translation between text and tokens. A token is an integer that represents a character, or a short segment of characters. On the input side, the input text is parsed into a token sequence. Similarly, on the output side, the output tokens are parsed back to text. The module doing the conversion between texts and token sequences is a tokenizer.
The set of all tokens is the vocabulary of the tokenizer, and its size is the vocabulary size {\displaystyle n_{\text{vocabulary}}}. When faced with tokens outside the vocabulary, typically a special token is used, written as "[UNK]" for "unknown".
Some commonly used tokenizers are byte pair encoding, WordPiece, and SentencePiece.
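As a toy illustration of the text-token mapping (a word-level vocabulary is assumed here for simplicity; the tokenizers above use subword schemes):

```python
# Hypothetical four-entry vocabulary, with id 0 reserved for [UNK].
vocab = {"[UNK]": 0, "the": 1, "cat": 2, "sat": 3}
inv_vocab = {i: w for w, i in vocab.items()}

def encode(text):
    # Map each word to its id, falling back to [UNK] for unknown words.
    return [vocab.get(word, vocab["[UNK]"]) for word in text.split()]

def decode(tokens):
    return " ".join(inv_vocab[t] for t in tokens)

print(encode("the cat sat"))   # [1, 2, 3]
print(encode("the dog sat"))   # "dog" is out of vocabulary -> [1, 0, 3]
print(decode([1, 2, 3]))       # the cat sat
```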
=== Embedding ===
Each token is converted into an embedding vector via a lookup table. Equivalently stated, it multiplies a one-hot representation of the token by an embedding matrix {\displaystyle M}. For example, if the input token is {\displaystyle 3}, then the one-hot representation is {\displaystyle [0,0,0,1,0,0,\dots ]}, and its embedding vector is
{\displaystyle \mathrm {Embed} (3)=[0,0,0,1,0,0,\dots ]M}
The token embedding vectors are added to their respective positional encoding vectors (see below), producing the sequence of input vectors.
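The equivalence between table lookup and one-hot multiplication is easy to verify; a minimal sketch with an illustrative random embedding matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vocab, d_emb = 6, 4
M = rng.normal(size=(n_vocab, d_emb))  # embedding matrix (random, illustrative)

token = 3
one_hot = np.zeros(n_vocab)
one_hot[token] = 1.0

# Row lookup and one-hot multiplication give the same embedding vector.
print(np.allclose(M[token], one_hot @ M))  # True
```

Real implementations use the lookup form, since multiplying by a one-hot vector wastes computation.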
The number of dimensions in an embedding vector is called hidden size or embedding size and written as {\displaystyle d_{\text{emb}}}. This size is written as {\displaystyle d_{\text{model}}} in the original Transformer paper.
=== Un-embedding ===
An un-embedding layer is almost the reverse of an embedding layer. Whereas an embedding layer converts a token into a vector, an un-embedding layer converts a vector into a probability distribution over tokens.
The un-embedding layer is a linear-softmax layer:
{\displaystyle \mathrm {UnEmbed} (x)=\mathrm {softmax} (xW+b)}
The matrix has shape {\displaystyle (d_{\text{emb}},n_{\text{vocabulary}})}. The embedding matrix {\displaystyle M} and the un-embedding matrix {\displaystyle W} are sometimes required to be transposes of each other, a practice called weight tying.
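A minimal sketch of the linear-softmax un-embedding, with a random (illustrative) matrix and a hypothetical six-token vocabulary:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
d_emb, n_vocab = 4, 6
W = rng.normal(size=(d_emb, n_vocab))  # un-embedding matrix (illustrative)
b = np.zeros(n_vocab)

x = rng.normal(size=d_emb)             # final hidden vector for one position
p = softmax(x @ W + b)                 # probability distribution over tokens
print(p.sum())                         # 1.0 (up to floating point)
```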
=== Positional encoding ===
A positional encoding is a fixed-size vector representation of the relative positions of tokens within a sequence: it provides the transformer model with information about where the words are in the input sequence. This induces a bias towards the order of the input sequence, so that, for example, the input sequence "man bites dog" is processed differently from "dog bites man".
The positional encoding is defined as a function of type {\displaystyle f:\mathbb {R} \to \mathbb {R} ^{d};d\in \mathbb {Z} ,d>0}, where {\displaystyle d} is a positive even integer. The full positional encoding defined in the original paper is:
{\displaystyle (f(t)_{2k},f(t)_{2k+1})=(\sin(\theta ),\cos(\theta ))\quad \forall k\in \{0,1,\ldots ,d/2-1\}}
where {\displaystyle \theta ={\frac {t}{r^{k}}},r=N^{2/d}}.
Here, $N$ is a free parameter that should be significantly larger than the biggest $k$ that would be input into the positional encoding function. The original paper uses $N = 10000$.
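The sinusoidal encoding above can be sketched directly from the formula. This is a minimal NumPy sketch; the function name and toy dimensions are illustrative assumptions.

```python
import numpy as np

def positional_encoding(t, d, N=10000):
    # For k = 0..d/2-1: (f(t)[2k], f(t)[2k+1]) = (sin(t / r**k), cos(t / r**k)),
    # with r = N**(2/d), as in the original paper.
    assert d % 2 == 0
    r = N ** (2 / d)
    k = np.arange(d // 2)
    theta = t / r ** k
    enc = np.empty(d)
    enc[0::2] = np.sin(theta)   # even indices: sine
    enc[1::2] = np.cos(theta)   # odd indices: cosine
    return enc

pe = positional_encoding(t=5, d=8)
```

For `d=8` and `N=10000`, `r = 10`, so the first coordinate pair is simply $(\sin 5, \cos 5)$.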
The function is in a simpler form when written as a complex function of type $f: \mathbb{R} \to \mathbb{C}^{d/2}$:
$$f(t) = \left(e^{it/r^k}\right)_{k=0,1,\ldots,\frac{d}{2}-1}$$
where $r = N^{2/d}$.
The main reason for using this positional encoding function is that, with it, shifts are linear transformations:
$$f(t + \Delta t) = \mathrm{diag}(f(\Delta t)) \, f(t)$$
where $\Delta t \in \mathbb{R}$ is the distance one wishes to shift. This allows the transformer to take any encoded position and find the encoding of the position n-steps-ahead or n-steps-behind by a matrix multiplication.
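The shift property is easy to verify numerically in the complex form, where multiplying by the diagonal matrix $\mathrm{diag}(f(\Delta t))$ becomes elementwise multiplication. A minimal NumPy check (toy dimensions assumed):

```python
import numpy as np

def f(t, d=8, N=10000):
    # Complex form of the positional encoding: f(t)_k = exp(i * t / r**k).
    r = N ** (2 / d)
    k = np.arange(d // 2)
    return np.exp(1j * t / r ** k)

t, dt = 3.0, 1.5
lhs = f(t + dt)
rhs = f(dt) * f(t)   # diag(f(dt)) f(t) is elementwise multiplication here
```

The two sides agree to floating-point precision, confirming that a shift by $\Delta t$ is a fixed linear map independent of $t$.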
By taking a linear sum, any convolution can also be implemented as a linear transformation:
$$\sum_j c_j f(t + \Delta t_j) = \left(\sum_j c_j \,\mathrm{diag}(f(\Delta t_j))\right) f(t)$$
for any constants $c_j$. This allows the transformer to take any encoded position and find a linear sum of the encoded locations of its neighbors. This sum of encoded positions, when fed into the attention mechanism, would create attention weights on its neighbors, much like what happens in a convolutional neural network language model. In the authors' words, "we hypothesized it would allow the model to easily learn to attend by relative position."
In typical implementations, all operations are done over the real numbers, not the complex numbers, but since complex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference.
=== Encoder-decoder (overview) ===
Like earlier seq2seq models, the original transformer model used an encoder-decoder architecture. The encoder consists of encoding layers that process all the input tokens together one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output and the decoder's output tokens so far.
The purpose of each encoder layer is to create contextualized representations of the tokens, where each representation corresponds to a token that "mixes" information from other input tokens via a self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of the encoder (contextualized input token representations), and (2) self-attention for "mixing" information among the input tokens to the decoder (i.e. the tokens generated so far during inference time).
Both the encoder and decoder layers have a feed-forward neural network for additional processing of their outputs and contain residual connections and layer normalization steps. These feed-forward layers contain most of the parameters in a Transformer model.
=== Feedforward network ===
The feedforward network (FFN) modules in a Transformer are 2-layered multilayer perceptrons:
$$\mathrm{FFN}(x) = \phi(xW^{(1)} + b^{(1)})W^{(2)} + b^{(2)}$$
where $W^{(1)}$ and $W^{(2)}$ are weight matrices, $b^{(1)}$ and $b^{(2)}$ are bias vectors, and $\phi$ is the activation function. The original Transformer used ReLU activation.
The number of neurons in the middle layer is called the intermediate size (GPT), filter size (BERT), or feedforward size (BERT). It is typically larger than the embedding size. For example, in both the GPT-2 series and the BERT series, the intermediate size of a model is 4 times its embedding size: $d_{\text{ffn}} = 4d_{\text{emb}}$.
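The FFN formula above, with the ReLU activation and the 4x convention, can be sketched as follows (a minimal NumPy sketch with toy sizes; the random weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_emb = 8
d_ffn = 4 * d_emb   # GPT-2/BERT convention: intermediate size is 4x embedding size

W1 = rng.normal(size=(d_emb, d_ffn)); b1 = np.zeros(d_ffn)
W2 = rng.normal(size=(d_ffn, d_emb)); b2 = np.zeros(d_emb)

def relu(x):
    return np.maximum(x, 0)

def ffn(x):
    # FFN(x) = phi(x W1 + b1) W2 + b2, applied to each position independently.
    return relu(x @ W1 + b1) @ W2 + b2

out = ffn(rng.normal(size=(5, d_emb)))   # a sequence of 5 token vectors
```

Because the same weights act on every row, the FFN treats each position independently; all mixing between positions happens in the attention sublayers.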
=== Scaled dot-product attention ===
==== Attention head ====
The attention mechanism used in the Transformer architecture is the scaled dot-product attention unit. For each unit, the transformer model learns three weight matrices: the query weights $W^Q$, the key weights $W^K$, and the value weights $W^V$.
The module takes three sequences: a query sequence, a key sequence, and a value sequence. The query sequence is a sequence of length $\ell_{\text{seq, query}}$, and each entry is a vector of dimension $d_{\text{emb, query}}$; similarly for the key and value sequences.
Each vector $x_{i,\text{query}}$ in the query sequence is multiplied by the matrix $W^Q$ to produce a query vector $q_i = x_{i,\text{query}} W^Q$. The matrix of all query vectors is the query matrix:
$$Q = X_{\text{query}} W^Q$$
Similarly, we construct the key matrix $K = X_{\text{key}} W^K$ and the value matrix $V = X_{\text{value}} W^V$.
It is usually the case that all of $W^Q, W^K, W^V$ are square matrices, meaning $d_{\text{emb, query}} = d_{\text{query}}$, etc.
Attention weights are calculated using the query and key vectors: the attention weight $a_{ij}$ from token $i$ to token $j$ is the dot product between $q_i$ and $k_j$. The attention weights are divided by the square root of the dimension of the key vectors, $\sqrt{d_k}$, which stabilizes gradients during training, and passed through a softmax which normalizes the weights. The fact that $W^Q$ and $W^K$ are different matrices allows attention to be non-symmetric: if token $i$ attends to token $j$ (i.e. $q_i \cdot k_j$ is large), this does not necessarily mean that token $j$ will attend to token $i$ (i.e. $q_j \cdot k_i$ could be small). The output of the attention unit for token $i$ is the weighted sum of the value vectors of all tokens, weighted by $a_{ij}$, the attention from token $i$ to each token.
The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training because matrix operations are heavily optimized on modern hardware. The matrices $Q$, $K$ and $V$ are defined as the matrices whose $i$th rows are the vectors $q_i$, $k_i$, and $v_i$ respectively. Then the attention can be represented as
$$\text{Attention}(Q,K,V) = \text{softmax}\left(\frac{QK^{\mathrm{T}}}{\sqrt{d_k}}\right)V$$
where the softmax is applied over each of the rows of the matrix.
The number of dimensions in a query vector is the query size $d_{\text{query}}$, and similarly for the key size $d_{\text{key}}$ and value size $d_{\text{value}}$. The output dimension of an attention head is its head dimension $d_{\text{head}}$. The attention mechanism requires the following three equalities to hold:
$$\ell_{\text{seq, key}} = \ell_{\text{seq, value}}, \quad d_{\text{query}} = d_{\text{key}}, \quad d_{\text{value}} = d_{\text{head}}$$
but is otherwise unconstrained.
If the attention head is used in a self-attention fashion, then $X_{\text{query}} = X_{\text{key}} = X_{\text{value}}$. If the attention head is used in a cross-attention fashion, then usually $X_{\text{query}} \neq X_{\text{key}} = X_{\text{value}}$. It is theoretically possible for all three to be different, but that is rarely the case in practice.
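A single scaled dot-product attention head, used in the self-attention fashion, can be sketched as follows. This is a minimal NumPy sketch with toy sizes and square projection matrices (the usual case noted above); the random weights are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Row-wise softmax, numerically stabilized.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V, with softmax over each row.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))                             # 6 tokens, d_emb = 8
WQ, WK, WV = (rng.normal(size=(8, 8)) for _ in range(3))

# Self-attention: the same X supplies queries, keys, and values.
out = attention(X @ WQ, X @ WK, X @ WV)
```

Each row of the softmaxed score matrix sums to 1, so each output vector is a convex combination of the value vectors.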
==== Multiheaded attention ====
One set of $\left(W^Q, W^K, W^V\right)$ matrices is called an attention head, and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, multiple attention heads allow the model to do this for different definitions of "relevance". Specifically, the query and key projection matrices, $W^Q$ and $W^K$, which are involved in the attention score computation, define the "relevance". Meanwhile, the value projection matrix $W^V$, in combination with the corresponding part of the output projection matrix $W^O$, determines how the attended tokens influence what information is passed to subsequent layers and ultimately the output logits. In addition, the scope of attention, or the range of token relationships captured by each attention head, can expand as tokens pass through successive layers. This allows the model to capture more complex and long-range dependencies in deeper layers. Many transformer attention heads encode relevance relations that are meaningful to humans. For example, some attention heads attend mostly to the next word, while others mainly attend from verbs to their direct objects. The computations for each attention head can be performed in parallel, which allows for fast processing. The outputs of the attention layer are concatenated to pass into the feed-forward neural network layers.
Concretely, let the multiple attention heads be indexed by $i$; then we have
$$\text{MultiheadedAttention}(Q,K,V) = \text{Concat}_{i\in[n_{\text{heads}}]}\left(\text{Attention}(XW_i^Q, XW_i^K, XW_i^V)\right)W^O$$
where the matrix $X$ is the concatenation of word embeddings, the matrices $W_i^Q, W_i^K, W_i^V$ are "projection matrices" owned by the individual attention head $i$, and $W^O$ is a final projection matrix owned by the whole multi-headed attention module.
It is theoretically possible for each attention head to have a different head dimension $d_{\text{head}}$, but that is rarely the case in practice.
As an example, in the smallest GPT-2 model, there are only self-attention mechanisms. It has the following dimensions:
$$d_{\text{emb}} = 768, \quad n_{\text{head}} = 12, \quad d_{\text{head}} = 64$$
Since $12 \times 64 = 768$, its output projection matrix $W^O \in \mathbb{R}^{(12\times 64)\times 768}$ is a square matrix.
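Multi-headed attention, with per-head projections and a shared output projection, can be sketched as follows. This is a minimal NumPy sketch; the toy sizes (4 heads of dimension 2 on an 8-dimensional embedding, mirroring the GPT-2 relation $n_{\text{head}} \cdot d_{\text{head}} = d_{\text{emb}}$ at small scale) and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_head, d_head, d_emb = 4, 2, 8   # toy sizes; GPT-2 small uses 12, 64, 768

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    return softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V

# One (WQ, WK, WV) triple per head, plus a shared output projection WO.
heads = [tuple(rng.normal(size=(d_emb, d_head)) for _ in range(3))
         for _ in range(n_head)]
WO = rng.normal(size=(n_head * d_head, d_emb))

def multihead(X):
    # Run every head (in practice, in parallel), concatenate, then project.
    outs = [attention(X @ WQ, X @ WK, X @ WV) for WQ, WK, WV in heads]
    return np.concatenate(outs, axis=-1) @ WO

Y = multihead(rng.normal(size=(5, d_emb)))
```

Since $n_{\text{head}} \cdot d_{\text{head}} = d_{\text{emb}}$ here, `WO` is square, as in GPT-2.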
==== Masked attention ====
The Transformer architecture is constructed to calculate output tokens iteratively. Assuming $t = 0$ refers to the calculation of the first output token $i = 0$, then for any step $t > 0$, the output token $i = 0$ shall remain constant. This ensures properties of the model similar to autoregressive models. Therefore, at every time step $t$, the calculation for output $i$ should not have access to tokens at positions $j > i$ (as is naturally the case at time step $t = i$, when tokens $j > t$ have not yet been calculated). This behavior may be accomplished before the softmax stage by adding a mask matrix $M$ that is $-\infty$ at entries where the attention link must be cut, and $0$ at other places:
$$\text{MaskedAttention}(Q,K,V) = \text{softmax}\left(M + \frac{QK^{\mathrm{T}}}{\sqrt{d_k}}\right)V$$
The following matrix is commonly used in decoder self-attention modules, and is called "causal masking":
$$M_{\text{causal}} = \begin{bmatrix} 0 & -\infty & -\infty & \dots & -\infty \\ 0 & 0 & -\infty & \dots & -\infty \\ 0 & 0 & 0 & \dots & -\infty \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & 0 \end{bmatrix}$$
In words, each token can pay attention to itself and every token before it, but not to any token after it. A non-masked attention module can be thought of as a masked attention module where the mask has all entries zero. As an example of an uncommon use of the mask matrix, XLNet considers all masks of the form $P M_{\text{causal}} P^{-1}$, where $P$ is a random permutation matrix.
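The causal mask is straightforward to construct: zeros on and below the diagonal, $-\infty$ strictly above it. A minimal NumPy sketch (the helper name is illustrative):

```python
import numpy as np

def causal_mask(n):
    # 0 on and below the diagonal, -inf above: token i may attend to j <= i.
    m = np.zeros((n, n))
    m[np.triu_indices(n, k=1)] = -np.inf
    return m

M = causal_mask(4)
```

Adding this matrix to the pre-softmax scores sends the masked entries' attention weights to exactly zero, since $e^{-\infty} = 0$.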
=== Encoder ===
An encoder consists of an embedding layer, followed by multiple encoder layers.
Each encoder layer consists of two major components: a self-attention mechanism and a feed-forward layer. It takes a sequence of input vectors, applies the self-attention mechanism to produce an intermediate sequence of vectors, then applies the feed-forward layer to each vector individually. Schematically, we have:
$$\begin{aligned} \text{given input vectors } & h_0, h_1, \dots \\ \text{combine them into a matrix } H &= \begin{bmatrix} h_0 \\ h_1 \\ \vdots \end{bmatrix} \\ \text{EncoderLayer}(H) &= \begin{bmatrix} \text{FFN}(\text{MultiheadedAttention}(H,H,H)_0) \\ \text{FFN}(\text{MultiheadedAttention}(H,H,H)_1) \\ \vdots \end{bmatrix} \end{aligned}$$
where $\text{FFN}$ stands for "feed-forward network". We can more succinctly write it as
$$\text{EncoderLayer}(H) = \text{FFN}(\text{MultiheadedAttention}(H,H,H))$$
with the implicit convention that the $\text{FFN}$ is applied to each row of the matrix individually.
The encoder layers are stacked. The first encoder layer takes the sequence of input vectors from the embedding layer, producing a sequence of vectors. This sequence is processed by the second encoder layer, and so on. The output from the final encoder layer is then used by the decoder.
As the encoder processes the entire input all at once, every token can attend to every other token (all-to-all attention), so there is no need for causal masking.
=== Decoder ===
A decoder consists of an embedding layer, followed by multiple decoder layers, followed by an un-embedding layer.
Each decoder layer consists of three major components: a causally masked self-attention mechanism, a cross-attention mechanism, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called the encoder-decoder attention.
Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow. This allows for autoregressive text generation. For decoding, all-to-all attention is inappropriate, because a token cannot attend to tokens not yet generated. Thus, the self-attention module in the decoder is causally masked.
In contrast, the cross-attention mechanism attends to the output vectors of the encoder, which is computed before the decoder starts decoding. Consequently, there is no need for masking in the cross-attention mechanism.
Schematically, we have:
$$\begin{aligned} H' &= \text{MaskedMultiheadedAttention}(H,H,H) \\ \text{DecoderLayer}(H) &= \text{FFN}(\text{MultiheadedAttention}(H', H^E, H^E)) \end{aligned}$$
where $H^E$ is the matrix with rows being the output vectors from the encoder.
The last decoder is followed by a final un-embedding layer to produce the output probabilities over the vocabulary. Then one of the tokens is sampled according to the probabilities, and the decoder can be run again to produce the next token, etc., autoregressively generating output text.
=== Adapted architectures ===
Many large language models, since they do not need to predict a whole new sequence from an input sequence, only use the encoder or decoder of the original transformer architecture. Early GPT models are decoder-only models trained to predict the next token in a sequence. BERT, another language model, only makes use of an encoder, and is trained to predict a randomly masked token in a sequence.
== Full transformer architecture ==
=== Sublayers ===
Each encoder layer contains 2 sublayers: the self-attention and the feedforward network. Each decoder layer contains 3 sublayers: the causally masked self-attention, the cross-attention, and the feedforward network.
The final points of detail are the residual connections and layer normalization (LayerNorm, or LN), which, while conceptually unnecessary, are needed in practice for numerical stability and convergence.
The residual connection, which is introduced to avoid vanishing gradient issues and stabilize the training process, can be expressed as follows: y = F(x) + x. The expression indicates that an output y is the sum of the transformation of input x (F(x)) and the input itself (x). Adding the input x can preserve the input information and avoid issues when the gradient of F(x) is close to zero.
Similarly to how the feedforward network modules are applied individually to each vector, the LayerNorm is also applied individually to each vector.
There are two common conventions in use: the post-LN and the pre-LN convention. In the post-LN convention, the output of each sublayer is
$$\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$$
where $\mathrm{Sublayer}(x)$ is the function implemented by the sublayer itself.
In the pre-LN convention, the output of each sublayer is
x
+
S
u
b
l
a
y
e
r
(
L
a
y
e
r
N
o
r
m
(
x
)
)
{\displaystyle x+\mathrm {Sublayer} (\mathrm {LayerNorm} (x))}
The original 2017 Transformer used the post-LN convention. It was difficult to train and required careful hyperparameter tuning and a "warm-up" in learning rate, where it starts small and gradually increases. The pre-LN convention, proposed several times in 2018, was found to be easier to train, requiring no warm-up, leading to faster convergence.
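The two conventions can be sketched side by side. This is a minimal NumPy sketch; the parameter-free LayerNorm (no learned scale/shift) and the stand-in sublayer are simplifying assumptions for illustration.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each vector to zero mean and unit variance
    # (real LayerNorm also has learned scale and shift parameters).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def post_ln_block(x, sublayer):
    # Post-LN: normalize after the residual addition (original 2017 Transformer).
    return layer_norm(x + sublayer(x))

def pre_ln_block(x, sublayer):
    # Pre-LN: normalize the input to the sublayer; residual path stays unnormalized.
    return x + sublayer(layer_norm(x))

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 8))
sub = lambda h: h * 0.5          # a stand-in sublayer for illustration
y_post = post_ln_block(x, sub)
y_pre = pre_ln_block(x, sub)
```

Note that in pre-LN the residual stream is never normalized, which is often credited with its more stable gradients during early training.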
=== Pseudocode ===
The following is the pseudocode for a standard pre-LN encoder-decoder Transformer, adapted from
input: Encoder input t_e
Decoder input t_d
output: Array of probability distributions, with shape (decoder vocabulary size x length(decoder output sequence))
/* encoder */
z_e ← encoder.tokenizer(t_e)
for each t in 1:length(z_e) do
z_e[t] ← encoder.embedding(z_e[t]) + encoder.positional_embedding(t)
for each l in 1:length(encoder.layers) do
layer ← encoder.layers[l]
/* first sublayer */
z_e_copy ← copy(z_e)
for each t in 1:length(z_e) do
z_e[t] ← layer.layer_norm(z_e[t])
z_e ← layer.multiheaded_attention(z_e, z_e, z_e)
for each t in 1:length(z_e) do
z_e[t] ← z_e[t] + z_e_copy[t]
/* second sublayer */
z_e_copy ← copy(z_e)
for each t in 1:length(z_e) do
z_e[t] ← layer.layer_norm(z_e[t])
z_e ← layer.feedforward(z_e)
for each t in 1:length(z_e) do
z_e[t] ← z_e[t] + z_e_copy[t]
for each t in 1:length(z_e) do
z_e[t] ← encoder.final_layer_norm(z_e[t])
/* decoder */
z_d ← decoder.tokenizer(t_d)
for each t in 1:length(z_d) do
z_d[t] ← decoder.embedding(z_d[t]) + decoder.positional_embedding(t)
for each l in 1:length(decoder.layers) do
layer ← decoder.layers[l]
/* first sublayer */
z_d_copy ← copy(z_d)
for each t in 1:length(z_d) do
z_d[t] ← layer.layer_norm(z_d[t])
z_d ← layer.masked_multiheaded_attention(z_d, z_d, z_d)
for each t in 1:length(z_d) do
z_d[t] ← z_d[t] + z_d_copy[t]
/* second sublayer */
z_d_copy ← copy(z_d)
for each t in 1:length(z_d) do
z_d[t] ← layer.layer_norm(z_d[t])
z_d ← layer.multiheaded_attention(z_d, z_e, z_e)
for each t in 1:length(z_d) do
z_d[t] ← z_d[t] + z_d_copy[t]
/* third sublayer */
z_d_copy ← copy(z_d)
for each t in 1:length(z_d) do
z_d[t] ← layer.layer_norm(z_d[t])
z_d ← layer.feedforward(z_d)
for each t in 1:length(z_d) do
z_d[t] ← z_d[t] + z_d_copy[t]
z_d ← decoder.final_layer_norm(z_d)
output_distributions ← []
for each t in 1:length(z_d) do
output_distributions.append(decoder.unembed(z_d[t]))
return output_distributions
=== Terminology ===
The Transformer architecture, being modular, allows variations. Several common variations are described here.
An "encoder-only" Transformer applies the encoder to map an input text into a sequence of vectors that represent the input text. This is usually used for text embedding and representation learning for downstream applications. BERT is encoder-only. They are less often used currently, as they were found to be not significantly better than training an encoder-decoder Transformer, then taking just the encoder.
A "decoder-only" Transformer is not literally decoder-only, since without an encoder, the cross-attention mechanism has nothing to attend to. Thus, the decoder layers in a decoder-only Transformer are composed of just two sublayers: the causally masked self-attention and the feedforward network. This is usually used for text generation and instruction following. The models in the GPT series and Chinchilla series are decoder-only.
An "encoder-decoder" Transformer is generally the same as the original Transformer, with 2 sublayers per encoder layer and 3 sublayers per decoder layer, etc. They might have minor architectural improvements, such as alternative activation functions, changing the location of normalization, etc. This is also usually used for text generation and instruction following. The models in the T5 series are encoder-decoder.
A "prefixLM" (prefix language model) is a decoder-only architecture, but with prefix masking, which is different from causal masking. Specifically, it has a mask of the form
$$M_{\text{prefixLM}} = \begin{bmatrix} \mathbf{0} & -\infty \\ \mathbf{0} & M_{\text{causal}} \end{bmatrix}$$
where the first columns correspond to the "prefix", and the subsequent columns correspond to the autoregressively generated text based on the prefix. They resemble encoder-decoder models, but have less "sparsity". Such models are rarely used, though they are cited as theoretical possibilities and in benchmarked comparisons.
There are also mixed seq2seq models. For example, in 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model, on the argument that an RNN-decoder runs much faster than Transformer-decoder when run autoregressively.
== Subsequent work ==
=== Alternative activation functions ===
The original transformer uses the ReLU activation function. Other activation functions were developed later. The Llama series and PaLM used SwiGLU; both GPT-1 and BERT used GELU.
Alternative activation functions are often used in combination with Gated Linear Units in the feedforward module.
=== Alternative normalizations ===
The normalization used in the Transformer can be different from LayerNorm. One example is RMSNorm, which is used in the Llama series. Other examples include CapsuleNorm, ScaleNorm, and FixNorm.
=== Alternative positional encodings ===
Transformers may use other positional encoding methods than sinusoidal.
The original Transformer paper reported using a learned positional encoding, but found it not superior to the sinusoidal one. Later work found that causal masking itself provides enough signal to a Transformer decoder that it can learn to implicitly perform absolute positional encoding without a positional encoding module.
==== RoPE ====
RoPE (rotary positional embedding) is best explained by considering a list of 2-dimensional vectors $[(x_1^{(1)}, x_1^{(2)}), (x_2^{(1)}, x_2^{(2)}), (x_3^{(1)}, x_3^{(2)}), \ldots]$. Now pick some angle $\theta$. Then the RoPE encoding is
$$\text{RoPE}\big(x_m^{(1)}, x_m^{(2)}, m\big) = \begin{pmatrix} \cos m\theta & -\sin m\theta \\ \sin m\theta & \cos m\theta \end{pmatrix} \begin{pmatrix} x_m^{(1)} \\ x_m^{(2)} \end{pmatrix} = \begin{pmatrix} x_m^{(1)}\cos m\theta - x_m^{(2)}\sin m\theta \\ x_m^{(2)}\cos m\theta + x_m^{(1)}\sin m\theta \end{pmatrix}$$
Equivalently, if we write the 2-dimensional vectors as complex numbers $z_m := x_m^{(1)} + i x_m^{(2)}$, then the RoPE encoding is just multiplication by an angle:
$$\text{RoPE}\big(z_m, m\big) = e^{im\theta} z_m$$
For a list of $2n$-dimensional vectors, a RoPE encoder is defined by a sequence of angles $\theta^{(1)}, \ldots, \theta^{(n)}$. Then the RoPE encoding is applied to each pair of coordinates.
The benefit of RoPE is that the dot product between two vectors depends only on their relative location:
$$\text{RoPE}\big(x, m\big)^T \, \text{RoPE}\big(y, n\big) = \text{RoPE}\big(x, m+k\big)^T \, \text{RoPE}\big(y, n+k\big)$$
for any integer $k$.
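The pairwise rotation and the relative-position property can be checked numerically. This is a minimal NumPy sketch; the particular angle schedule is a common choice but an assumption here, not prescribed by the definition above.

```python
import numpy as np

def rope(x, m, thetas):
    # x: vector of even dimension 2n; rotate each coordinate pair (2k, 2k+1)
    # by the angle m * thetas[k].
    out = np.empty_like(x)
    for k, th in enumerate(thetas):
        c, s = np.cos(m * th), np.sin(m * th)
        x1, x2 = x[2 * k], x[2 * k + 1]
        out[2 * k] = x1 * c - x2 * s
        out[2 * k + 1] = x2 * c + x1 * s
    return out

rng = np.random.default_rng(0)
thetas = 10000.0 ** (-np.arange(4) / 4)   # a common choice of angle schedule
x, y = rng.normal(size=8), rng.normal(size=8)

# Relative-position property: the dot product depends only on m - n.
d1 = rope(x, 3, thetas) @ rope(y, 5, thetas)
d2 = rope(x, 10, thetas) @ rope(y, 12, thetas)
```

Both dot products agree because shifting both positions by the same $k$ rotates both vectors identically, leaving relative angles, and hence dot products, unchanged.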
==== ALiBi ====
ALiBi (Attention with Linear Biases) is not a replacement for the positional encoder of the original transformer. Instead, it is an additional positional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism is
$$\text{Attention}(Q,K,V) = \text{softmax}\left(\frac{QK^{\mathrm{T}}}{\sqrt{d_k}} + sB\right)V$$
Here, $s$ is a real number ("scalar"), and $B$ is the linear bias matrix defined by
$$B = \begin{pmatrix} 0 & 1 & 2 & 3 & \cdots \\ -1 & 0 & 1 & 2 & \cdots \\ -2 & -1 & 0 & 1 & \cdots \\ -3 & -2 & -1 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
in other words, $B_{i,j} = j - i$. The idea is that the linear bias matrix is a softened mask. Just as $0$ represents full attention paid and $-\infty$ represents no attention paid, the linear bias matrix increases attention paid in one direction and decreases attention paid in the other direction.
ALiBi allows pretraining on short context windows, then fine-tuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the "bottom" of the entire network (which is where the sinusoidal encoder on the original transformer, as well as RoPE and many others, are located).
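The linear bias matrix $B_{i,j} = j - i$ is trivial to construct with broadcasting. A minimal NumPy sketch (the helper name is illustrative):

```python
import numpy as np

def alibi_bias(n):
    # B[i, j] = j - i: positive ahead of the diagonal, negative behind it.
    idx = np.arange(n)
    return idx[None, :] - idx[:, None]

B = alibi_bias(4)
```

In a full ALiBi implementation this matrix is scaled by a per-head slope $s$ and added to the pre-softmax attention scores.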
==== Relative Position Encodings ====
Relative position encoding is similar to ALiBi, but more generic:
$$\text{Attention}(Q,K,V) = \text{softmax}\left(\frac{QK^{\mathrm{T}}}{\sqrt{d_k}} + B\right)V$$
where $B$ is a Toeplitz matrix, that is, $B_{i,j} = B_{i',j'}$ whenever $i - j = i' - j'$. This is contrasted with the original sinusoidal positional encoding, which is an "absolute positional encoding".
=== Efficient implementation ===
The transformer model has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch. Transformers is a library produced by Hugging Face that supplies transformer-based architectures and pretrained models.
==== KV caching ====
When an autoregressive transformer is used for inference, such as generating text, the query vector is different at each step, but the already-computed key and value vectors are always the same. The KV caching method saves the computed key and value vectors at each attention block, so that they are not recomputed at each new token. PagedAttention applies memory paging to KV caching.
If a transformer is used with a baked-in prompt, such as ["You are a customer support agent..."], then the key and value vectors can be computed for the prompt, and saved on disk. The saving in compute is significant when the model is used for many short interactions, such as in online chatbots.
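The KV-caching idea can be sketched as follows: at each decoding step, only the newest token is projected through $W^K$ and $W^V$, and its key/value rows are appended to a cache that the attention step reuses. This is a minimal NumPy sketch with toy sizes; the function and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
WK, WV = rng.normal(size=(d, d)), rng.normal(size=(d, d))

# The cache accumulates one key row and one value row per generated token.
k_cache, v_cache = [], []

def step(x_new):
    # Project only the newest token; earlier K/V rows are reused, not recomputed.
    k_cache.append(x_new @ WK)
    v_cache.append(x_new @ WV)
    return np.stack(k_cache), np.stack(v_cache)

for _ in range(5):
    K, V = step(rng.normal(size=d))
```

Without the cache, step $t$ would redo $t$ projections; with it, each step does exactly one, making per-token cost independent of the sequence length for the K/V projections.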
==== FlashAttention ====
FlashAttention is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It is a communication-avoiding algorithm that performs matrix multiplications in blocks, such that each block fits within the cache of a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow). See the page on softmax for details.
An improved version, FlashAttention-2, was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s on A100 GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention.
Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and multi-query attention (MQA) and grouped-query attention (GQA).
Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware like H100 GPUs and new data types like FP8.
==== Multi-Query Attention ====
Multi-Query Attention changes the multiheaded attention mechanism. Whereas normally
$$\text{MultiheadedAttention}(Q,K,V) = \text{Concat}_{i\in[n_{\text{heads}}]}\left(\text{Attention}(XW_i^Q, XW_i^K, XW_i^V)\right)W^O$$
with Multi-Query Attention, there is just one $W^K, W^V$, thus:
$$\text{MultiQueryAttention}(Q,K,V) = \text{Concat}_{i\in[n_{\text{heads}}]}\left(\text{Attention}(XW_i^Q, XW^K, XW^V)\right)W^O$$
This has a neutral effect on model quality and training speed, but increases inference speed.
More generally, grouped-query attention (GQA) partitions attention heads into groups, each of which shares the key-value pair. MQA is GQA with one group, while standard multiheaded attention is GQA with the maximal number of groups.
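Grouped-query attention can be sketched by giving each group, rather than each head, its own K/V projections. This is a minimal NumPy sketch with toy sizes; the grouping rule and random weights are illustrative assumptions. Setting `n_group = 1` recovers MQA; setting `n_group = n_head` recovers standard multiheaded attention.

```python
import numpy as np

rng = np.random.default_rng(0)
n_head, n_group, d_emb, d_head = 4, 2, 8, 2   # 2 query heads share each K/V pair

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

WQ = [rng.normal(size=(d_emb, d_head)) for _ in range(n_head)]
# One K/V projection per group, not per head.
WK = [rng.normal(size=(d_emb, d_head)) for _ in range(n_group)]
WV = [rng.normal(size=(d_emb, d_head)) for _ in range(n_group)]
WO = rng.normal(size=(n_head * d_head, d_emb))

def gqa(X):
    outs = []
    for i in range(n_head):
        g = i * n_group // n_head                 # which group head i belongs to
        Q, K, V = X @ WQ[i], X @ WK[g], X @ WV[g]
        outs.append(softmax(Q @ K.T / np.sqrt(d_head)) @ V)
    return np.concatenate(outs, axis=-1) @ WO

Y = gqa(rng.normal(size=(5, d_emb)))
```

The practical benefit is a smaller KV cache: only `n_group` key/value sequences are stored instead of `n_head`.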
Multihead Latent Attention (MLA) is a low-rank approximation to standard MHA. Specifically, each hidden vector, before entering the attention mechanism, is first projected to two low-dimensional spaces ("latent space"), one for query and one for key-value (KV vector). This design minimizes the KV cache, as only the low-dimensional KV vector needs to be cached.
==== Speculative decoding ====
Speculative decoding is a method to accelerate token decoding. Similarly to speculative execution in CPUs, future tokens are computed quickly, then verified. If the quickly computed tokens are incorrect, they are discarded and computed slowly.
The key factor in speculative decoding is that a Transformer decoder can verify faster than it can decode, in the following sense.
Suppose we have two transformer models like GPT-3 and GPT-3-small, both with a context window size of 512. To generate an entire context window autoregressively with greedy decoding with GPT-3, it must be run 512 times, each time generating a token {\displaystyle x_{1},x_{2},...,x_{512}}, taking time {\displaystyle 512T_{\text{GPT-3}}}. However, if we had some educated guess for the values of these tokens, we could verify all of them in parallel, in one run of the model, by checking that each {\displaystyle x_{t}} is indeed the token with the largest log-likelihood in the {\displaystyle t}-th output.
In speculative decoding, a smaller model or some other simple heuristic is used to generate a few speculative tokens that are subsequently verified by the larger model. For example, suppose we use GPT-3-small to generate four speculative tokens: {\displaystyle {\tilde {x}}_{1},{\tilde {x}}_{2},{\tilde {x}}_{3},{\tilde {x}}_{4}}. This only takes {\displaystyle 4T_{\text{GPT-3-small}}}. These tokens are then run through the larger GPT-3 in one go. Suppose that {\displaystyle {\tilde {x}}_{1}} and {\displaystyle {\tilde {x}}_{2}} are verified by GPT-3 as what it would have picked, then those are kept, but {\displaystyle {\tilde {x}}_{3}} is not, so {\displaystyle {\tilde {x}}_{3},{\tilde {x}}_{4}} are discarded, and GPT-3 is run on those. This would take {\displaystyle 4T_{\text{GPT-3-small}}+3T_{\text{GPT-3}}}, which might be shorter than {\displaystyle 4T_{\text{GPT-3}}}.
For non-greedy decoding, similar ideas apply, except the speculative tokens are accepted or rejected stochastically, in a way that guarantees the final output distribution is the same as if speculative decoding was not used.
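The greedy draft-then-verify loop described above can be sketched in a few lines. Both models are stand-in callables here (no real API is assumed): `draft_next(tokens)` returns the cheap model's next token, and `target_verify(prefix, guesses)` stands for one parallel pass of the large model, returning its greedy pick at each of the `k+1` positions after the prefix.

```python
def speculative_decode_step(draft_next, target_verify, prefix, k=4):
    """One round of greedy speculative decoding (illustrative sketch)."""
    # 1. Draft k tokens autoregressively with the cheap model.
    guesses, ctx = [], list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        guesses.append(t)
        ctx.append(t)
    # 2. Verify all k guesses in a single pass of the large model.
    preds = target_verify(prefix, guesses)  # preds[i]: target's token after prefix+guesses[:i]
    out = list(prefix)
    for i, g in enumerate(guesses):
        if preds[i] == g:
            out.append(g)                   # guess confirmed: keep it
        else:
            out.append(preds[i])            # mismatch: take the target's token, stop
            break
    else:
        out.append(preds[k])                # all accepted: the pass yields a bonus token
    return out
```

Note that even a rejected round makes progress: the verification pass itself supplies the large model's token at the first mismatching position.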
In Multi-Token Prediction, a single forward pass creates a final embedding vector, which then is un-embedded into a token probability. However, that vector can then be further processed by another Transformer block to predict the next token, and so on for arbitrarily many steps into the future. This trades off accuracy for speed, since each new token costs just one more Transformer block, rather than the entire stack.
=== Sub-quadratic transformers ===
Training transformer-based architectures can be expensive, especially for long inputs. Many methods have been developed to attempt to address the issue. In the image domain, Swin Transformer is an efficient architecture that performs attention inside shifting windows. In the audio domain, SepTr decouples the attention in time and frequency domains. Long Range Arena (2020) is a standard benchmark for comparing the behavior of transformer architectures over long inputs.
==== Alternative attention graphs ====
The standard attention graph is either all-to-all or causal, both of which scale as {\displaystyle O(N^{2})} where {\displaystyle N} is the number of tokens in a sequence.
Reformer (2020) reduces the computational load from {\displaystyle O(N^{2})} to {\displaystyle O(N\ln N)} by using locality-sensitive hashing and reversible layers.
Sparse attention uses attention graphs that grow more slowly than {\displaystyle O(N^{2})}. For example, BigBird (2020) uses random small-world networks which grow as {\displaystyle O(N)}.
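A sparse attention pattern of this kind can be pictured as a boolean mask over token pairs. The sketch below combines the three ingredients BigBird describes (a sliding window, a few global tokens, and random links); it is only an illustration of the idea, not BigBird's exact block-sparse construction, and all parameter names are made up.

```python
import numpy as np

def sparse_attention_mask(n, window=3, n_global=2, n_random=2, seed=0):
    """BigBird-style sparse mask: the number of allowed (query, key)
    pairs grows as O(n) rather than O(n^2)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        mask[i, lo:hi] = True                         # local sliding window
        mask[i, rng.choice(n, size=n_random)] = True  # random long-range edges
    mask[:n_global, :] = True                         # global tokens attend everywhere
    mask[:, :n_global] = True                         # ...and everyone attends to them
    return mask
```

Positions where the mask is `False` would have their attention scores set to minus infinity before the softmax, so they contribute nothing.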
Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value.
==== Random Feature Attention ====
Random Feature Attention (2021) uses Fourier random features:
{\displaystyle \varphi (x)={\frac {1}{\sqrt {D}}}[\cos \langle w_{1},x\rangle ,\sin \langle w_{1},x\rangle ,\cdots \cos \langle w_{D},x\rangle ,\sin \langle w_{D},x\rangle ]^{T}}
where {\displaystyle w_{1},...,w_{D}} are independent samples from the normal distribution {\displaystyle N(0,\sigma ^{2}I)}. This choice of parameters satisfies {\displaystyle \mathbb {E} [\langle \varphi (x),\varphi (y)\rangle ]=e^{-{\frac {\|x-y\|^{2}}{2\sigma ^{2}}}}}, or {\displaystyle e^{\langle x,y\rangle /\sigma ^{2}}=\mathbb {E} [\langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle ]\approx \langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle }
Consequently, the one-headed attention, with one query, can be written as
{\displaystyle {\text{Attention}}(q,K,V)={\text{softmax}}\left({\frac {qK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx {\frac {\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})v_{i}^{T}}{\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})}}}
where {\displaystyle \sigma =d_{K}^{1/4}}. Similarly for multiple queries, and for multiheaded attention.
This approximation can be computed in linear time, as we can compute the matrix {\displaystyle \varphi (k_{i})v_{i}^{T}} first, then multiply it with the query. In essence, we have managed to obtain a more precise version of
{\displaystyle {\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx Q(K^{T}V/{\sqrt {d_{k}}})}
Performer (2022) uses the same Random Feature Attention, but {\displaystyle w_{1},...,w_{D}} are first independently sampled from the normal distribution {\displaystyle N(0,\sigma ^{2}I)}, then they are Gram-Schmidt processed.
=== Multimodality ===
Transformers can also be used/adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality.
Multimodal models can either be trained from scratch, or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning. LLaVA is a vision-language model composed of a language model (Vicuna-13B) and a vision model (ViT-L/14), connected by a linear layer; only the linear layer is finetuned.
Vision transformers adapt the transformer to computer vision by breaking down input images as a series of patches, turning them into vectors, and treating them like tokens in a standard transformer.
Conformer and later Whisper follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors and treated like tokens in a standard transformer.
Perceivers are a variant of Transformers designed for multimodality.
For image generation, notable architectures are DALL-E 1 (2021), Parti (2022), Phenaki (2023), and Muse (2023). Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image. Parti is an encoder-decoder Transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image. Muse is an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens. During generation, all input tokens are masked, and the highest-confidence predictions are included for the next iteration, until all tokens are predicted. Phenaki is a text-to-video model. It is a bidirectional masked transformer conditioned on pre-computed text tokens. The generated tokens are then decoded to a video.
== Applications ==
The transformer has had great success in natural language processing (NLP). Many large language models such as GPT-2, GPT-3, GPT-4, Gemini, AlbertAGPT, Claude, BERT, Grok, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of NLP-related subtasks and their related real-world applications, including:
machine translation
time series prediction
document summarization
document generation
named entity recognition (NER)
writing computer code based on requirements expressed in natural language.
speech-to-text
Beyond traditional NLP, the transformer architecture has had success in other applications, such as:
biological sequence analysis
video understanding
protein folding (such as AlphaFold)
evaluating chess board positions. Using static evaluation alone (that is, with no Minimax search) a transformer achieved an Elo of 2895, putting it at grandmaster level.
== See also ==
seq2seq – Family of machine learning approaches
Perceiver – Variant of Transformer designed for multimodal data
Vision transformer – Machine learning model for vision processing
Large language model – Type of machine learning model
BERT (language model) – Series of language models developed by Google AI
Generative pre-trained transformer – Type of large language model
T5 (language model) – Series of large language models developed by Google AI
== Notes ==
== References ==
== Further reading == | Wikipedia/Transformer_architecture |
The United States government's Strategic Computing Initiative funded research into advanced computer hardware and artificial intelligence from 1983 to 1993. The initiative was designed to support various projects that were required to develop machine intelligence in a prescribed ten-year time frame, from chip design and manufacture, computer architecture to artificial intelligence software. The Department of Defense spent a total of $1 billion on the project.
The inspiration for the program was Japan's fifth generation computer project, an enormous initiative that set aside billions for research into computing and artificial intelligence. As with Sputnik in 1957, the American government saw the Japanese project as a challenge to its technological dominance. The British government also funded a program of their own around the same time, known as Alvey, and a consortium of U.S. companies funded another similar project, the Microelectronics and Computer Technology Corporation.
The goal of SCI, and other contemporary projects, was nothing less than full machine intelligence. "The machine envisioned by SC", according to Alex Roland and Philip Shiman, "would run ten billion instructions per second to see, hear, speak, and think like a human. The degree of integration required would rival that achieved by the human brain, the most complex instrument known to man."
The initiative was conceived as an integrated program, similar to the Apollo moon program, where different subsystems would be created by various companies and academic projects and eventually brought together into a single integrated system. Roland and Shiman wrote that "While most research programs entail tactics or strategy, SC boasted grand strategy, a master plan for an entire campaign."
The project was funded by the Defense Advanced Research Projects Agency and directed by the Information Processing Technology Office (IPTO). By 1985 it had spent $100 million, and 92 projects were underway at 60 institutions: half in industry, half in universities and government labs. Robert Kahn, who directed IPTO in those years, provided the project with its early leadership and inspiration. Clint Kelly managed the SC Initiative for three years and developed many of the specific application programs for DARPA, such as the Autonomous Land Vehicle.
By the late 1980s, it was clear that the project would fall short of realizing the hoped-for levels of machine intelligence. Program insiders pointed to issues with integration, organization, and communication. When Jack Schwarz ascended to the leadership of IPTO in 1987, he cut funding to artificial intelligence research (the software component) "deeply and brutally", "eviscerating" the program (wrote Pamela McCorduck). Schwarz felt that DARPA should focus its funding only on those technologies which showed the most promise. In his words, DARPA should "surf", rather than "dog paddle", and he felt strongly AI was not "the next wave".
The project was superseded in the 1990s by the Accelerated Strategic Computing Initiative and then by the Advanced Simulation and Computing Program. These later programs did not include artificial general intelligence as a goal, but instead focused on supercomputing for large scale simulation, such as atomic bomb simulations. The Strategic Computing Initiative of the 1980s is distinct from the 2015 National Strategic Computing Initiative—the two are unrelated.
== Results ==
Although the program failed to meet its goal of high-level machine intelligence, it did meet some of its specific technical objectives, for example those of autonomous land navigation. The Autonomous Land Vehicle program and its sister Navlab project at Carnegie Mellon University, in particular, laid the scientific and technical foundation for many of the driverless vehicle programs that came after it, such as the Demo II and III programs (ALV being Demo I), Perceptor, and the DARPA Grand Challenge. The use of video cameras plus laser scanners and inertial navigation units pioneered by the SCI ALV program form the basis of almost all commercial driverless car developments today. It also helped to advance the state of the art of computer hardware to a considerable degree.
On the software side, the initiative funded development of the Dynamic Analysis and Replanning Tool (DART), a program that handled logistics using artificial intelligence techniques. This was a huge success, saving the Department of Defense billions during Desert Storm. Introduced in 1991, DART had by 1995 offset the monetary equivalent of all funds DARPA had channeled into AI research for the previous 30 years combined.
== See also ==
AI winter § Cutbacks at the Strategic Computing Initiative
Advanced Simulation and Computing Program
== Notes ==
== References ==
Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
Roland, Alex; Shiman, Philip (2002). Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993. Cambridge, Mass.: MIT Press. ISBN 0-262-18226-2.
Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2
McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, Massachusetts: A. K. Peters, ISBN 1-5688-1205-1, pp. 426–432 | Wikipedia/Strategic_Computing_Initiative |
A physical symbol system (also called a formal system) takes physical patterns (symbols), combining them into structures (expressions) and manipulating them (using processes) to produce new expressions.
The physical symbol system hypothesis (PSSH) is a position in the philosophy of artificial intelligence formulated by Allen Newell and Herbert A. Simon. They wrote:
"A physical symbol system has the necessary and sufficient means for general intelligent action."
This claim implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence).
The idea has philosophical roots in Thomas Hobbes (who claimed reasoning was "nothing more than reckoning"), Gottfried Wilhelm Leibniz (who attempted to create a logical calculus of all human ideas), David Hume (who thought perception could be reduced to "atomic impressions") and even Immanuel Kant (who analyzed all experience as controlled by formal rules). The latest version is called the computational theory of mind, associated with philosophers Hilary Putnam and Jerry Fodor.
== Examples ==
Examples of physical symbol systems include:
Formal logic: the symbols are words like "and", "or", "not", "for all x" and so on. The expressions are statements in formal logic which can be true or false. The processes are the rules of logical deduction.
Algebra: the symbols are "+", "×", "x", "y", "1", "2", "3", etc. The expressions are equations. The processes are the rules of algebra, that allow one to manipulate a mathematical expression and retain its truth.
Chess: the symbols are the pieces, the processes are the legal chess moves, the expressions are the positions of all the pieces on the board.
A computer running a program: the symbols and expressions are data structures, the process is the program that changes the data structures.
The physical symbol system hypothesis claims that both of the following are also examples of physical symbol systems:
Intelligent human thought: the symbols are encoded in our brains. The expressions are thoughts. The processes are the mental operations of thinking.
English language: the symbols are words. The expressions are sentences. The processes are the mental operations that enable speaking, writing or reading.
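The fourth example above can be made concrete: a few lines of code are themselves a small physical symbol system. In the sketch below (purely illustrative), the symbols are strings, the expressions are facts and if-then rules, and the process is repeated rule application (modus ponens) until no new expression can be produced.

```python
def forward_chain(facts, rules):
    """A minimal physical symbol system: apply if-then rules to a set of
    facts until no rule yields a new fact (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # modus ponens: if all premises hold, add the conclusion
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts
```

Nothing in the system "understands" the strings; it manipulates them purely by their form, which is exactly the sense of "symbol manipulation" at issue in the hypothesis.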
== Evidence for the hypothesis ==
Two lines of evidence suggested to Allen Newell and Herbert A. Simon that "symbol manipulation" was the essence of both human and machine intelligence: psychological experiments on human beings and the development of artificial intelligence programs.
=== Psychological experiments and computer models ===
Newell and Simon carried out psychological experiments that showed that, for difficult problems in logic, planning, or any kind of "puzzle solving", people carefully proceeded step-by-step, considering several different possible ways forward, selected the most promising one, backing up when the possibility hit a dead end. Each possible solution was visualized with symbols, such as words, numbers or diagrams. This was "symbol manipulation" -- the people were iteratively exploring a formal system looking for a matching pattern that solved the puzzle. Newell and Simon were able to simulate the step by step problem solving skills of people with computer programs; they created programs that used the same algorithms as people and were able to solve the same problems.
This type of research, using both experimental psychology and computer models, was called "cognitive simulation" by Hubert Dreyfus. Their work was profoundly influential: it contributed to the cognitive revolution of the 1960s, in addition to the founding of the fields of cognitive science and cognitivism in psychology.
This line of research suggested that human problem solving consisted primarily of the manipulation of high-level symbols.
=== Artificial intelligence programs in the 1950s and 60s ===
In the early decades of AI research there were many programs that used high-level symbol processing. These programs were very successful, demonstrating skills that many people at the time had assumed were impossible for machines, such as solving algebra word problems (STUDENT), proving theorems in logic (Logic Theorist), learning to play competitive checkers (Arthur Samuel's checkers), and communicating in natural language (ELIZA, SHRDLU).
The success of these programs suggested that symbol processing systems could simulate any intelligent action.
== Clarifications ==
The physical symbol systems hypothesis becomes trivial, incoherent or irrelevant unless we recognize three distinctions: between "digitized signals" and "symbols"; between "narrow" AI and general intelligence; and between consciousness and intelligent behavior.
=== Semantic symbols vs. dynamic signals ===
The physical symbol system hypothesis is only interesting if we restrict the "symbols" to things that have a recognizable meaning or denotation and can be composed with other symbols to create more complex symbols, like <dog> and <tail>. It doesn't apply to the simple 0s and 1s in the memory of a digital computer or the stream of 0s and 1s passing through the perceptual apparatus of a robot. It also doesn't apply to matrices of unidentified numbers, such as those used in neural networks or support vector machines. These may technically be symbols, but it is not always possible to determine exactly what the symbols are standing for. This is not what Newell and Simon had in mind, and the argument becomes trivial if we include them.
David Touretzky and Dean Pomerleau consider what would follow if we interpret the "symbols" in the PSSH to be binary digits of digital hardware. In this version of the hypothesis, no distinction is being made between "symbols" and "signals". Here the physical symbol system hypothesis asserts merely that intelligence can be digitized. This is a weaker claim. Indeed, Touretzky and Pomerleau write that if symbols and signals are the same thing, then "[s]ufficiency is a given, unless one is a dualist or some other sort of mystic, because physical symbol systems are Turing-universal." The widely accepted Church–Turing thesis holds that any Turing-universal system can simulate any conceivable process that can be digitized, given enough time and memory. Since any digital computer is Turing-universal, any digital computer can, in theory, simulate anything that can be digitized to a sufficient level of precision, including the behavior of intelligent organisms. The necessary condition of the physical symbol systems hypothesis can likewise be finessed, since we are willing to accept almost any signal as a form of "symbol" and all intelligent biological systems have signal pathways.
The same issue applies to the unidentified numbers that appear in the matrices of a neural network or a support vector machine. These programs use the same mathematics as a digital simulation of a dynamical system, and are better understood as a "dynamic system" than a "physical symbol system". Nils Nilsson wrote: "any physical process can be simulated to any desired degree of accuracy on a symbol-manipulating computer, but an account of such a simulation in terms of symbols, instead of signals, can be unmanageably cumbersome."
=== General intelligence vs. "narrow" intelligence ===
The PSSH refers to "general intelligent action" -- that is, to every activity that we would consider "intelligent". Thus it is the claim that artificial general intelligence can be achieved using only symbolic methods. It does not refer to "narrow" applications. (That is, applications that are intended only to solve exactly one problem -- which includes almost all AI systems currently in use.)
Artificial intelligence research has succeeded in developing many programs that are capable of intelligently solving particular problems. However, AI research has so far not been able to produce a system with artificial general intelligence -- the ability to solve a variety of novel problems, as humans do.
Thus, the criticism of the PSSH refers to the limits of AI in the future, and does not apply to any current research or programs.
Some claim that large language models are capable of "general intelligent action"; however, this is arguable.
=== Consciousness vs. intelligent action ===
The PSSH refers to "intelligent action" -- that is, the behavior of the machine -- it does not refer to the "mental states", "mind", "consciousness", or the "experiences" of the machine. "Consciousness", as far as neurology can determine, is not something that can be deduced from the behavior of an agent: it is always possible that the machine is simulating the experience of consciousness, without actually experiencing it, similar to the way a perfectly written fictional character might simulate a person with consciousness.
Thus, the PSSH is not relevant to positions which refer to "mind" or "consciousness", such as John Searle's Strong AI hypothesis:
The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.
== Evidence against the hypothesis ==
Nils Nilsson has identified four main "themes" or grounds in which the physical symbol system hypothesis has been attacked.
The "erroneous claim that the [physical symbol system hypothesis] lacks symbol grounding" which is presumed to be a requirement for general intelligent action.
The common belief that AI requires non-symbolic processing (that which can be supplied by a connectionist architecture for instance).
The common statement that the brain is simply not a computer and that "computation as it is currently understood, does not provide an appropriate model for intelligence".
And, last of all, the belief held by some that the brain is essentially mindless: that most of what takes place in it is chemical reactions, and that human intelligent behaviour is analogous to the intelligent behaviour displayed, for example, by ant colonies.
=== Evidence the brain does not always use symbols ===
If the human brain does not use symbolic reasoning to create intelligent behavior, then the necessary side of the hypothesis is false, and human intelligence is the counter-example.
==== Dreyfus ====
Hubert Dreyfus attacked the necessary condition of the physical symbol system hypothesis, calling it "the psychological assumption" and defining it thus:
The mind can be viewed as a device operating on bits of information according to formal rules.
Dreyfus refuted this by showing that human intelligence and expertise depended primarily on unconscious instincts rather than conscious symbolic manipulation. Experts solve problems quickly by using their intuitions, rather than step-by-step trial and error searches. Dreyfus argued that these unconscious skills would never be captured in formal rules.
==== Tversky and Kahneman ====
==== Embodied cognition ====
George Lakoff, Mark Turner and others have argued that our abstract skills in areas such as mathematics, ethics and philosophy depend on unconscious skills that derive from the body, and that conscious symbol manipulation is only a small part of our intelligence.
=== Evidence that symbolic AI can't efficiently generate intelligence for all problems ===
It is impossible to prove that symbolic AI will never produce general intelligence, but if we can not find an efficient way to solve particular problems with symbolic AI, this is evidence that the sufficient side of the PSSH is unlikely to be true.
==== Intractability ====
==== Common sense knowledge, frame, qualification and ramification problems ====
==== Moravec's paradox ====
=== Evidence that sub-symbolic or neurosymbolic AI programs can generate intelligence ===
If sub-symbolic AI programs, such as deep learning, can intelligently solve problems, then this is evidence that the necessary side of the PSSH is false.
If hybrid approaches that combine symbolic AI with other approaches can efficiently solve a wider range of problems than either technique alone, this is evidence that the necessary side is true and the sufficiency side is false.
==== Brooks ====
Rodney Brooks of MIT was able to build robots that had superior ability to move and survive without the use of symbolic reasoning at all. Brooks (and others, such as Hans Moravec) discovered that our most basic skills of motion, survival, perception, balance and so on did not seem to require high-level symbols at all, that in fact, the use of high-level symbols was more complicated and less successful.
In a 1990 paper Elephants Don't Play Chess, Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since "the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough."
==== Connectionism and deep learning ====
In 2012 AlexNet, a deep learning network, outperformed all other programs in classifying images on ImageNet by a substantial margin. In the years since, deep learning has proved to be much more successful in many domains than symbolic AI.
==== Hybrid AI ====
=== Symbol grounding ===
== See also ==
Artificial intelligence, situated approach
Artificial philosophy
== Notes ==
== References ==
Brooks, Rodney (1990), "Elephants Don't Play Chess" (PDF), Robotics and Autonomous Systems, 6 (1–2): 3–15, CiteSeerX 10.1.1.588.7539, doi:10.1016/S0921-8890(05)80025-9, retrieved 2007-08-30.
Cole, David (Fall 2004), "The Chinese Room Argument", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy.
Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
Dennett, Daniel (1991), Consciousness Explained, The Penguin Press, ISBN 978-0-7139-9037-9
Dreyfus, Hubert (1972), What Computers Can't Do, New York: MIT Press, ISBN 978-0-06-011082-6
Dreyfus, Hubert (1979), What Computers Still Can't Do, New York: MIT Press.
A recommender system (RecSys), or a recommendation system (sometimes replacing "system" with terms such as "platform", "engine", or "algorithm", and sometimes referred to simply as "the algorithm"), is a subclass of information filtering system that provides suggestions for items that are most pertinent to a particular user. Recommender systems are particularly useful when an individual needs to choose an item from a potentially overwhelming number of items that a service may offer. Modern recommendation systems such as those used on large social media sites make extensive use of AI, machine learning and related techniques to learn the behavior and preferences of each user and categorize content to tailor their feed individually.
Typically, the suggestions refer to various decision-making processes, such as what product to purchase, what music to listen to, or what online news to read.
Recommender systems are used in a variety of areas, with commonly recognised examples taking the form of playlist generators for video and music services, product recommenders for online stores, or content recommenders for social media platforms and open web content recommenders. These systems can operate using a single type of input, like music, or multiple inputs within and across platforms like news, books and search queries. There are also popular recommender systems for specific topics like restaurants and online dating. Recommender systems have also been developed to explore research articles and experts, collaborators, and financial services.
A content discovery platform is an implemented software recommendation platform which uses recommender system tools. It utilizes user metadata in order to discover and recommend appropriate content, whilst reducing ongoing maintenance and development costs. A content discovery platform delivers personalized content to websites, mobile devices and set-top boxes. A large range of content discovery platforms currently exist for various forms of content ranging from news articles and academic journal articles to television. As operators compete to be the gateway to home entertainment, personalized television is a key service differentiator. Academic content discovery has recently become another area of interest, with several companies being established to help academic researchers keep up to date with relevant academic content and serendipitously discover new content.
== Overview ==
Recommender systems usually make use of either or both collaborative filtering and content-based filtering, as well as other systems such as knowledge-based systems. Collaborative filtering approaches build a model from a user's past behavior (e.g., items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in. Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item in order to recommend additional items with similar properties.
=== Example ===
The differences between collaborative and content-based filtering can be demonstrated by comparing two early music recommender systems, Last.fm and Pandora Radio.
Last.fm creates a "station" of recommended songs by observing what bands and individual tracks the user has listened to on a regular basis and comparing those against the listening behavior of other users. Last.fm will play tracks that do not appear in the user's library, but are often played by other users with similar interests. As this approach leverages the behavior of users, it is an example of a collaborative filtering technique.
Pandora uses the properties of a song or artist (a subset of the 450 attributes provided by the Music Genome Project) to seed a "station" that plays music with similar properties. User feedback is used to refine the station's results, deemphasizing certain attributes when a user "dislikes" a particular song and emphasizing other attributes when a user "likes" a song. This is an example of a content-based approach.
Each type of system has its strengths and weaknesses. In the above example, Last.fm requires a large amount of information about a user to make accurate recommendations. This is an example of the cold start problem, and is common in collaborative filtering systems. Whereas Pandora needs very little information to start, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed).
=== Alternative implementations ===
Recommender systems are a useful alternative to search algorithms since they help users discover items they might not have found otherwise. Of note, recommender systems are often implemented using search engines indexing non-traditional data. In some cases, as in the Gonzalez v. Google Supreme Court case, parties may argue that search and recommendation algorithms are different technologies.
Recommender systems have been the focus of several granted patents, and there are more than 50 software libraries that support the development of recommender systems including LensKit, RecBole, ReChorus and RecPack.
== History ==
Elaine Rich created the first recommender system in 1979, called Grundy. She looked for a way to recommend users books they might like. Her idea was to create a system that asks users specific questions and classifies them into classes of preferences, or "stereotypes", depending on their answers. Depending on users' stereotype membership, they would then get recommendations for books they might like.
Another early recommender system, called a "digital bookshelf", was described in a 1990 technical report by Jussi Karlgren at Columbia University,
and implemented at scale and worked through in technical reports and publications from 1994 onwards by Jussi Karlgren, then at SICS,
and research groups led by Pattie Maes at MIT, Will Hill at Bellcore, and Paul Resnick, also at MIT, whose work with GroupLens was awarded the 2010 ACM Software Systems Award.
Montaner provided the first overview of recommender systems from an intelligent agent perspective. Adomavicius provided a new, alternate overview of recommender systems. Herlocker provides an additional overview of evaluation techniques for recommender systems, and Beel et al. discussed the problems of offline evaluations. Beel et al. have also provided literature surveys on available research paper recommender systems and existing challenges.
== Approaches ==
=== Collaborative filtering ===
One approach to the design of recommender systems that has wide use is collaborative filtering. Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like similar kinds of items as they liked in the past. The system generates recommendations using only information about rating profiles for different users or items. By locating peer users/items with a rating history similar to the current user or item, they generate recommendations using this neighborhood. Collaborative filtering methods are classified as memory-based and model-based. A well-known example of memory-based approaches is the user-based algorithm, while that of model-based approaches is matrix factorization.
A key advantage of the collaborative filtering approach is that it does not rely on machine analyzable content and therefore it is capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself. Many algorithms have been used in measuring user similarity or item similarity in recommender systems. For example, the k-nearest neighbor (k-NN) approach and the Pearson Correlation as first implemented by Allen.
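As a rough illustration of the user-based, memory-based approach described above, the following sketch computes Pearson correlations between users and predicts an unseen rating as a similarity-weighted average of neighbours' ratings. The ratings data and user/item names are invented for the example; production systems use far larger matrices and optimized libraries.

```python
import math

# Toy ratings: user -> {item: rating}. All names are illustrative.
ratings = {
    "alice": {"m1": 5, "m2": 3, "m3": 4},
    "bob":   {"m1": 4, "m2": 2, "m3": 5, "m4": 3},
    "carol": {"m1": 1, "m2": 5, "m4": 4},
}

def pearson(u, v):
    """Pearson correlation over the items both users rated."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    ru = [ratings[u][i] for i in common]
    rv = [ratings[v][i] for i in common]
    mu, mv = sum(ru) / len(ru), sum(rv) / len(rv)
    num = sum((a - mu) * (b - mv) for a, b in zip(ru, rv))
    den = (math.sqrt(sum((a - mu) ** 2 for a in ru))
           * math.sqrt(sum((b - mv) ** 2 for b in rv)))
    return num / den if den else 0.0

def predict(user, item):
    """Similarity-weighted average of positively correlated neighbours."""
    neighbours = [(pearson(user, v), v)
                  for v in ratings if v != user and item in ratings[v]]
    num = sum(s * ratings[v][item] for s, v in neighbours if s > 0)
    den = sum(s for s, v in neighbours if s > 0)
    return num / den if den else None

print(predict("alice", "m4"))
```

Here carol's ratings are negatively correlated with alice's, so only bob's rating of m4 contributes to the prediction.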
When building a model from a user's behavior, a distinction is often made between explicit and implicit forms of data collection.
Examples of explicit data collection include the following:
Asking a user to rate an item on a sliding scale.
Asking a user to search.
Asking a user to rank a collection of items from favorite to least favorite.
Presenting two items to a user and asking him/her to choose the better one of them.
Asking a user to create a list of items that he/she likes (see Rocchio classification or other similar techniques).
Examples of implicit data collection include the following:
Observing the items that a user views in an online store.
Analyzing item/user viewing times.
Keeping a record of the items that a user purchases online.
Obtaining a list of items that a user has listened to or watched on his/her computer.
Analyzing the user's social network and discovering similar likes and dislikes.
Collaborative filtering approaches often suffer from three problems: cold start, scalability, and sparsity.
Cold start: For a new user or item, there is not enough data to make accurate recommendations. Note: one commonly implemented solution to this problem is the multi-armed bandit algorithm.
Scalability: There are millions of users and products in many of the environments in which these systems make recommendations. Thus, a large amount of computation power is often necessary to calculate recommendations.
Sparsity: The number of items sold on major e-commerce sites is extremely large. The most active users will only have rated a small subset of the overall database. Thus, even the most popular items have very few ratings.
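The multi-armed bandit mitigation mentioned for the cold-start problem can be sketched with a simple epsilon-greedy policy: with small probability the system explores a random item, otherwise it exploits the item with the best observed engagement so far. The item set, click rates, and parameters below are purely illustrative.

```python
import random

random.seed(0)

def epsilon_greedy(counts, rewards, epsilon=0.1):
    """Pick an arm: explore with probability epsilon, else exploit best mean."""
    if random.random() < epsilon:
        return random.randrange(len(counts))
    means = [r / c if c else 0.0 for r, c in zip(rewards, counts)]
    return means.index(max(means))

# Simulate cold start over 3 items whose true click rates are hidden.
true_ctr = [0.1, 0.5, 0.3]          # illustrative ground truth
counts = [0, 0, 0]
rewards = [0.0, 0.0, 0.0]
for _ in range(2000):
    arm = epsilon_greedy(counts, rewards)
    counts[arm] += 1
    rewards[arm] += 1.0 if random.random() < true_ctr[arm] else 0.0

print(counts.index(max(counts)))    # the bandit should come to favour item 1
```

Over many rounds the policy concentrates recommendations on the item with the highest empirical click rate while still occasionally exploring the others.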
One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized by Amazon.com's recommender system.
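The "people who buy x also buy y" idea can be sketched by counting how often pairs of items co-occur in the same purchase basket and ranking each item's co-purchased neighbours. The baskets and item names below are invented for illustration; Amazon's actual algorithm is more sophisticated.

```python
from collections import defaultdict
from itertools import combinations

# Toy purchase baskets; item names are illustrative.
baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"milk", "cereal"},
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def also_bought(item, k=2):
    """Top-k items most often bought together with `item`."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: -kv[1])
    return [other for other, _ in ranked[:k]]

print(also_bought("bread"))
```

For "bread", butter co-occurs in two baskets and therefore ranks first among the recommendations.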
Many social networks originally used collaborative filtering to recommend new friends, groups, and other social connections by examining the network of connections between a user and their friends. Collaborative filtering is still used as part of hybrid systems.
=== Content-based filtering ===
Another common approach when designing recommender systems is content-based filtering. Content-based filtering methods are based on a description of the item and a profile of the user's preferences. These methods are best suited to situations where there is known data on an item (name, location, description, etc.), but not on the user. Content-based recommenders treat recommendation as a user-specific classification problem and learn a classifier for the user's likes and dislikes based on an item's features.
In this system, keywords are used to describe the items, and a user profile is built to indicate the type of item this user likes. In other words, these algorithms try to recommend items similar to those that a user liked in the past or is examining in the present. It does not rely on a user sign-in mechanism to generate this often temporary profile. In particular, various candidate items are compared with items previously rated by the user, and the best-matching items are recommended. This approach has its roots in information retrieval and information filtering research.
To create a user profile, the system mostly focuses on two types of information:
A model of the user's preference.
A history of the user's interaction with the recommender system.
Basically, these methods use an item profile (i.e., a set of discrete attributes and features) characterizing the item within the system. To abstract the features of the items in the system, an item presentation algorithm is applied. A widely used algorithm is the tf–idf representation (also called vector space representation). The system creates a content-based profile of users based on a weighted vector of item features. The weights denote the importance of each feature to the user and can be computed from individually rated content vectors using a variety of techniques. Simple approaches use the average values of the rated item vector while other sophisticated methods use machine learning techniques such as Bayesian Classifiers, cluster analysis, decision trees, and artificial neural networks in order to estimate the probability that the user is going to like the item.
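The tf–idf representation and weighted-vector matching described above can be sketched as follows. The item descriptions are invented, the user profile is simplified to the vector of a single liked item, and a smoothed idf variant is used; real systems would average or learn weights over many rated items.

```python
import math
from collections import Counter

# Illustrative item descriptions keyed by item id.
items = {
    "a1": "jazz piano trio",
    "a2": "solo jazz piano",
    "a3": "rock guitar band",
}

def tfidf_vectors(docs):
    """Build tf-idf weighted term vectors for each document."""
    n = len(docs)
    df = Counter()
    tokenized = {d: text.split() for d, text in docs.items()}
    for toks in tokenized.values():
        df.update(set(toks))
    vecs = {}
    for d, toks in tokenized.items():
        tf = Counter(toks)
        # Smoothed idf so terms in every document still get a small weight.
        vecs[d] = {t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf}
    return vecs

def cosine(u, v):
    num = sum(w * v.get(t, 0.0) for t, w in u.items())
    du = math.sqrt(sum(w * w for w in u.values()))
    dv = math.sqrt(sum(w * w for w in v.values()))
    return num / (du * dv) if du and dv else 0.0

vecs = tfidf_vectors(items)
# Profile = the vector of an item the user liked; recommend the closest other item.
profile = vecs["a1"]
best = max((d for d in vecs if d != "a1"), key=lambda d: cosine(profile, vecs[d]))
print(best)
```

A user who liked the jazz piano item is matched to the other jazz piano item rather than the rock item, since their weighted term vectors are closer in cosine distance.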
A key issue with content-based filtering is whether the system can learn user preferences from users' actions regarding one content source and use them across other content types. When the system is limited to recommending content of the same type as the user is already using, the value from the recommendation system is significantly less than when other content types from other services can be recommended. For example, recommending news articles based on news browsing is useful. Still, it would be much more useful when music, videos, products, discussions, etc., from different services, can be recommended based on news browsing. To overcome this, most content-based recommender systems now use some form of the hybrid system.
Content-based recommender systems can also include opinion-based recommender systems. In some cases, users are allowed to leave text reviews or feedback on the items. These user-generated texts are implicit data for the recommender system because they are potentially rich resources of both features/aspects of the item and users' evaluation of or sentiment toward the item. Features extracted from user-generated reviews improve on an item's plain metadata: like metadata they reflect aspects of the item, but they capture the aspects that users actually care about. Sentiments extracted from the reviews can be seen as users' rating scores on the corresponding features. Popular approaches of opinion-based recommender systems utilize various techniques including text mining, information retrieval, sentiment analysis (see also Multimodal sentiment analysis) and deep learning.
=== Hybrid recommendations approaches ===
Most recommender systems now use a hybrid approach, combining collaborative filtering, content-based filtering, and other approaches. There is no reason why several different techniques of the same type could not be hybridized. Hybrid approaches can be implemented in several ways: by making content-based and collaborative-based predictions separately and then combining them; by adding content-based capabilities to a collaborative-based approach (and vice versa); or by unifying the approaches into one model. Several studies have empirically compared the performance of hybrid methods with pure collaborative and content-based methods and demonstrated that hybrid methods can provide more accurate recommendations than pure approaches. These methods can also be used to overcome some of the common problems in recommender systems such as cold start and the sparsity problem, as well as the knowledge engineering bottleneck in knowledge-based approaches.
Netflix is a good example of the use of hybrid recommender systems. The website makes recommendations by comparing the watching and searching habits of similar users (i.e., collaborative filtering) as well as by offering movies that share characteristics with films that a user has rated highly (content-based filtering).
Some hybridization techniques include:
Weighted: Combining the score of different recommendation components numerically.
Switching: Choosing among recommendation components and applying the selected one.
Mixed: Recommendations from different recommenders are presented together to give the recommendation.
Cascade: Recommenders are given strict priority, with the lower priority ones breaking ties in the scoring of the higher ones.
Meta-level: One recommendation technique is applied and produces some sort of model, which is then the input used by the next technique.
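The "Weighted" scheme from the list above can be sketched as a numeric combination of normalized scores from two component recommenders. The component scores and weights below are hypothetical.

```python
def weighted_hybrid(scores_by_component, weights):
    """Numerically combine per-item scores from several recommenders."""
    combined = {}
    for name, scores in scores_by_component.items():
        w = weights[name]
        for item, s in scores.items():
            combined[item] = combined.get(item, 0.0) + w * s
    # Return items ranked by combined score, best first.
    return sorted(combined, key=combined.get, reverse=True)

# Hypothetical normalized scores from two recommenders for the same user.
collab  = {"m1": 0.9, "m2": 0.4, "m3": 0.1}
content = {"m1": 0.2, "m2": 0.8, "m3": 0.7}
ranking = weighted_hybrid({"collab": collab, "content": content},
                          {"collab": 0.6, "content": 0.4})
print(ranking)
```

In practice the weights themselves can be tuned (or learned) against a validation metric such as click-through rate.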
== Technologies ==
=== Session-based recommender systems ===
These recommender systems use the interactions of a user within a session to generate recommendations. Session-based recommender systems are used at YouTube and Amazon. These are particularly useful when history (such as past clicks, purchases) of a user is not available or not relevant in the current user session. Domains where session-based recommendations are particularly relevant include video, e-commerce, travel, music and more. Most instances of session-based recommender systems rely on the sequence of recent interactions within a session without requiring any additional details (historical, demographic) of the user. Techniques for session-based recommendations are mainly based on generative sequential models such as recurrent neural networks, transformers, and other deep-learning-based approaches.
=== Reinforcement learning for recommender systems ===
The recommendation problem can be seen as a special instance of a reinforcement learning problem whereby the user is the environment upon which the agent, the recommendation system, acts in order to receive a reward, for instance, a click or engagement by the user. One aspect of reinforcement learning that is of particular use in the area of recommender systems is the fact that the models or policies can be learned by providing a reward to the recommendation agent. In contrast to traditional supervised learning techniques, which are less flexible, reinforcement learning approaches potentially allow models to be optimized directly on metrics of engagement and user interest.
=== Multi-criteria recommender systems ===
Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information upon multiple criteria. Instead of developing recommendation techniques based on a single criterion value, the overall preference of user u for the item i, these systems try to predict a rating for unexplored items of u by exploiting preference information on multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS systems. See this chapter for an extended introduction.
=== Risk-aware recommender systems ===
The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, yet do not take into account the risk of disturbing the user with unwanted notifications. It is important to consider the risk of upsetting the user by pushing recommendations in certain circumstances, for instance, during a professional meeting, early morning, or late at night. Therefore, the performance of the recommender system depends in part on the degree to which it has incorporated the risk into the recommendation process. One option to manage this issue is DRARS, a system which models the context-aware recommendation as a bandit problem. This system combines a content-based technique and a contextual bandit algorithm.
=== Mobile recommender systems ===
Mobile recommender systems make use of internet-accessing smartphones to offer personalized, context-sensitive recommendations. This is a particularly difficult area of research as mobile data is more complex than data that recommender systems often have to deal with. It is heterogeneous, noisy, requires spatial and temporal auto-correlation, and has validation and generality problems.
There are three factors that could affect the mobile recommender systems and the accuracy of prediction results: the context, the recommendation method and privacy. Additionally, mobile recommender systems suffer from a transplantation problem – recommendations may not apply in all regions (for instance, it would be unwise to recommend a recipe in an area where all of the ingredients may not be available).
One example of a mobile recommender system is the approach taken by companies such as Uber and Lyft to generate driving routes for taxi drivers in a city. This system uses GPS data of the routes that taxi drivers take while working, which includes location (latitude and longitude), time stamps, and operational status (with or without passengers). It uses this data to recommend a list of pickup points along a route, with the goal of optimizing occupancy times and profits.
=== Generative recommenders ===
Generative recommenders (GR) represent an approach that transforms recommendation tasks into sequential transduction problems, where user actions are treated like tokens in a generative modeling framework. In one method, known as HSTU (Hierarchical Sequential Transduction Units), high-cardinality, non-stationary, and streaming datasets are efficiently processed as sequences, enabling the model to learn from trillions of parameters and to handle user action histories orders of magnitude longer than before. By turning all of the system’s varied data into a single stream of tokens and using a custom self-attention approach instead of traditional neural network layers, generative recommenders make the model much simpler and less memory-hungry. As a result, it can improve recommendation quality in test simulations and in real-world tests, while being faster than previous Transformer-based systems when handling long lists of user actions. Ultimately, this approach allows the model’s performance to grow steadily as more computing power is used, laying a foundation for efficient and scalable “foundation models” for recommendations.
== The Netflix Prize ==
One of the events that energized research in recommender systems was the Netflix Prize. From 2006 to 2009, Netflix sponsored a competition, offering a grand prize of $1,000,000 to the team that could take an offered dataset of over 100 million movie ratings and return recommendations that were 10% more accurate than those offered by the company's existing recommender system. This competition energized the search for new and more accurate algorithms. On 21 September 2009, the grand prize of US$1,000,000 was given to the BellKor's Pragmatic Chaos team using tiebreaking rules.
The most accurate algorithm in 2007 used an ensemble method of 107 different algorithmic approaches, blended into a single prediction. As stated by the winners, Bell et al.:
Predictive accuracy is substantially improved when blending multiple predictors. Our experience is that most efforts should be concentrated in deriving substantially different approaches, rather than refining a single technique. Consequently, our solution is an ensemble of many methods.
Many benefits accrued to the web due to the Netflix project. Some teams have taken their technology and applied it to other markets. Some members from the team that finished second place founded Gravity R&D, a recommendation engine that's active in the RecSys community. 4-Tell, Inc. created a Netflix project–derived solution for ecommerce websites.
A number of privacy issues arose around the dataset offered by Netflix for the Netflix Prize competition. Although the data sets were anonymized in order to preserve customer privacy, in 2007 two researchers from the University of Texas were able to identify individual users by matching the data sets with film ratings on the Internet Movie Database (IMDb). As a result, in December 2009, an anonymous Netflix user sued Netflix in Doe v. Netflix, alleging that Netflix had violated United States fair trade laws and the Video Privacy Protection Act by releasing the datasets. This, as well as concerns from the Federal Trade Commission, led to the cancellation of a second Netflix Prize competition in 2010.
== Evaluation ==
=== Performance measures ===
Evaluation is important in assessing the effectiveness of recommendation algorithms. To measure the effectiveness of recommender systems, and compare different approaches, three types of evaluations are available: user studies, online evaluations (A/B tests), and offline evaluations.
The commonly used metrics are the mean squared error and root mean squared error, the latter having been used in the Netflix Prize. The information retrieval metrics such as precision and recall or discounted cumulative gain (DCG) are useful to assess the quality of a recommendation method. Diversity, novelty, and coverage are also considered as important aspects in evaluation. However, many of the classic evaluation measures are highly criticized.
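The error and ranking metrics named above can be computed directly; the sketch below shows RMSE over predicted ratings and precision/recall of a top-k recommendation list, with invented data.

```python
import math

def rmse(predicted, actual):
    """Root mean squared error over matching (user, item) rating pairs."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def precision_recall_at_k(recommended, relevant, k):
    """Precision and recall of the top-k recommended items."""
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    return hits / k, hits / len(relevant)

print(rmse([3.5, 4.0, 2.0], [4.0, 4.0, 1.0]))
print(precision_recall_at_k(["a", "b", "c", "d"], {"b", "d", "e"}, k=3))
```

RMSE was the target metric of the Netflix Prize; precision and recall (and DCG) are more natural when the system's output is a ranked list rather than a rating prediction.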
Evaluating the performance of a recommendation algorithm on a fixed test dataset will always be extremely challenging as it is impossible to accurately predict the reactions of real users to the recommendations. Hence any metric that computes the effectiveness of an algorithm in offline data will be imprecise.
User studies are rather small scale: a few dozen or a few hundred users are presented with recommendations created by different recommendation approaches, and then the users judge which recommendations are best.
In A/B tests, recommendations are shown to typically thousands of users of a real product, and the recommender system randomly picks at least two different recommendation approaches to generate recommendations. The effectiveness is measured with implicit measures of effectiveness such as conversion rate or click-through rate.
Offline evaluations are based on historic data, e.g. a dataset that contains information about how users previously rated movies.
The effectiveness of recommendation approaches is then measured based on how well a recommendation approach can predict the users' ratings in the dataset. While a rating is an explicit expression of whether a user liked a movie, such information is not available in all domains. For instance, in the domain of citation recommender systems, users typically do not rate a citation or recommended article. In such cases, offline evaluations may use implicit measures of effectiveness. For instance, it may be assumed that a recommender system is effective if it is able to recommend as many articles as possible that are contained in a research article's reference list. However, this kind of offline evaluation is viewed critically by many researchers. For instance, it has been shown that results of offline evaluations have low correlation with results from user studies or A/B tests. A dataset popular for offline evaluation has been shown to contain duplicate data and thus to lead to wrong conclusions in the evaluation of algorithms. Often, results of so-called offline evaluations do not correlate with actually assessed user satisfaction. This is probably because offline training is highly biased toward the highly reachable items, and offline testing data is highly influenced by the outputs of the online recommendation module. Researchers have concluded that the results of offline evaluations should be viewed critically.
=== Beyond accuracy ===
Typically, research on recommender systems is concerned with finding the most accurate recommendation algorithms. However, there are a number of factors that are also important.
Diversity – Users tend to be more satisfied with recommendations when there is a higher intra-list diversity, e.g. items from different artists.
Recommender persistence – In some situations, it is more effective to re-show recommendations, or let users re-rate items, than showing new items. There are several reasons for this. Users may ignore items when they are shown for the first time, for instance, because they had no time to inspect the recommendations carefully.
Privacy – Recommender systems usually have to deal with privacy concerns because users have to reveal sensitive information. Building user profiles using collaborative filtering can be problematic from a privacy point of view. Many European countries have a strong culture of data privacy, and every attempt to introduce any level of user profiling can result in a negative customer response. Much research has been conducted on ongoing privacy issues in this space. The Netflix Prize is particularly notable for the detailed personal information released in its dataset. Ramakrishnan et al. have conducted an extensive overview of the trade-offs between personalization and privacy and found that the combination of weak ties (an unexpected connection that provides serendipitous recommendations) and other data sources can be used to uncover identities of users in an anonymized dataset.
User demographics – Beel et al. found that user demographics may influence how satisfied users are with recommendations. In their paper they show that elderly users tend to be more interested in recommendations than younger users.
Robustness – When users can participate in the recommender system, the issue of fraud must be addressed.
Serendipity – Serendipity is a measure of "how surprising the recommendations are". For instance, a recommender system that recommends milk to a customer in a grocery store might be perfectly accurate, but it is not a good recommendation because it is an obvious item for the customer to buy. "[Serendipity] serves two purposes: First, the chance that users lose interest because the choice set is too uniform decreases. Second, these items are needed for algorithms to learn and improve themselves".
Trust – A recommender system is of little value for a user if the user does not trust the system. Trust can be built by a recommender system by explaining how it generates recommendations, and why it recommends an item.
Labelling – User satisfaction with recommendations may be influenced by the labeling of the recommendations. For instance, in the cited study click-through rate (CTR) for recommendations labeled as "Sponsored" were lower (CTR=5.93%) than CTR for identical recommendations labeled as "Organic" (CTR=8.86%). Recommendations with no label performed best (CTR=9.87%) in that study.
=== Reproducibility ===
Recommender systems are notoriously difficult to evaluate offline, with some researchers claiming that this has led to a reproducibility crisis in recommender systems publications. The topic of reproducibility seems to be a recurrent issue in some machine learning publication venues, but does not have a considerable effect beyond the world of scientific publication. In the context of recommender systems, a 2019 paper surveyed a small number of hand-picked publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), and showed that on average less than 40% of the articles could be reproduced by the authors of the survey, with as little as 14% in some conferences. The article considers a number of potential problems in today's research scholarship and suggests improved scientific practices in that area.
More recent work on benchmarking a set of the same methods came to qualitatively very different results whereby neural methods were found to be among the best performing methods. Deep learning and neural methods for recommender systems have been used in the winning solutions in several recent recommender system challenges, WSDM, RecSys Challenge.
Moreover, neural and deep learning methods are widely used in industry where they are extensively tested. The topic of reproducibility is not new in recommender systems. By 2011, Ekstrand, Konstan, et al. criticized that "it is currently difficult to reproduce and extend recommender systems research results," and that evaluations are "not handled consistently". Konstan and Adomavicius conclude that "the Recommender Systems research community is facing a crisis where a significant number of papers present results that contribute little to collective knowledge [...] often because the research lacks the [...] evaluation to be properly judged and, hence, to provide meaningful contributions." As a consequence, much research about recommender systems can be considered not reproducible. Hence, operators of recommender systems find little guidance in the current research for answering the question of which recommendation approaches to use in a recommender system. Said and Bellogín conducted a study of papers published in the field, as well as benchmarked some of the most popular frameworks for recommendation and found large inconsistencies in results, even when the same algorithms and data sets were used. Some researchers demonstrated that minor variations in the recommendation algorithms or scenarios led to strong changes in the effectiveness of a recommender system. They conclude that seven actions are necessary to improve the current situation: "(1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research."
== Artificial intelligence applications in recommendation ==
Artificial intelligence (AI) applications in recommendation systems are the advanced methodologies that leverage AI technologies to enhance the performance of recommendation engines. AI-based recommenders can analyze complex data sets, learning from user behavior, preferences, and interactions to generate highly accurate and personalized content or product suggestions. The integration of AI in recommendation systems has marked a significant evolution from traditional recommendation methods. Traditional methods often relied on inflexible algorithms that could suggest items based on general user trends or apparent similarities in content. In comparison, AI-powered systems have the capability to detect patterns and subtle distinctions that may be overlooked by traditional methods. These systems can adapt to specific individual preferences, thereby offering recommendations that are more aligned with individual user needs. This approach marks a shift towards more personalized, user-centric suggestions.
Recommendation systems widely adopt AI techniques such as machine learning, deep learning, and natural language processing. These advanced methods enhance system capabilities to predict user preferences and deliver personalized content more accurately. Each technique contributes uniquely. The following sections will introduce specific AI models utilized by a recommendation system by illustrating their theories and functionalities.
=== KNN-based collaborative filters ===
Collaborative filtering (CF) is one of the most commonly used recommendation system algorithms. It generates personalized suggestions for users based on explicit or implicit behavioral patterns. Specifically, it relies on external feedback such as star ratings, purchase history, and so on to make judgments. CF makes predictions about users' preferences based on similarity measurements. Essentially, the underlying theory is: "if user A is similar to user B, and if A likes item C, then it is likely that B also likes item C."
There are many models available for collaborative filtering. For AI-applied collaborative filtering, a common model is called K-nearest neighbors. The ideas are as follows:
Data Representation: Create an n-dimensional space in which each axis represents a user trait (ratings, purchases, etc.), and represent each user as a point in that space.
Statistical Distance: 'Distance' measures how far apart users are in this space; see statistical distance for computational details.
Identifying Neighbors: Based on the computed distances, find the k nearest neighbors of the user for whom recommendations are to be made.
Forming Predictive Recommendations: The system analyzes the shared preferences of the k neighbors and makes recommendations based on that similarity.
=== Neural networks ===
An artificial neural network (ANN) is a deep learning model structure which aims to mimic a human brain. It comprises a series of neurons, each responsible for receiving and processing information transmitted from other interconnected neurons. Similar to a human brain, these neurons change activation state based on incoming signals (training input and backpropagated output), allowing the system to adjust activation weights during the network learning phase. An ANN is usually designed as a black-box model. Unlike regular machine learning, where the underlying theoretical components are formal and rigid, the collaborative effects of neurons are not entirely clear, but modern experiments have shown the predictive power of ANNs.
ANNs are widely used in recommendation systems for their ability to utilize varied data. Beyond feedback data, an ANN can incorporate non-feedback data that are too intricate for collaborative filtering to learn from, and its structure allows it to identify extra signal from non-feedback data to boost the user experience. Some examples follow:
Time and Seasonality: the specific time, date, or season at which a user interacts with the platform
User Navigation Patterns: sequence of pages visited, time spent on different parts of a website, mouse movement, etc.
External Social Trends: information from external social media platforms
==== Two-Tower Model ====
The Two-Tower model is a neural architecture commonly employed in large-scale recommendation systems, particularly for candidate retrieval tasks. It consists of two neural networks:
User Tower: Encodes user-specific features, such as interaction history or demographic data.
Item Tower: Encodes item-specific features, such as metadata or content embeddings.
The outputs of the two towers are fixed-length embeddings that represent users and items in a shared vector space. A similarity metric, such as dot product or cosine similarity, is used to measure relevance between a user and an item.
This model is highly efficient for large datasets as embeddings can be pre-computed for items, allowing rapid retrieval during inference. It is often used in conjunction with ranking models for end-to-end recommendation pipelines.
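The retrieval step can be sketched as follows. This is a hedged illustration only: the trained user and item towers are replaced here with untrained random linear projections, and all feature dimensions and counts are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch: each "tower" is reduced to a single random linear
# projection into a shared embedding space; a real system trains deep networks.
EMB_DIM = 8
user_tower = rng.normal(size=(5, EMB_DIM))   # 5 raw user features -> embedding
item_tower = rng.normal(size=(6, EMB_DIM))   # 6 raw item features -> embedding

def embed(features, tower):
    v = features @ tower
    return v / np.linalg.norm(v)  # normalize so dot product = cosine similarity

# Item embeddings can be pre-computed once and stored for fast retrieval.
item_features = rng.normal(size=(100, 6))    # 100 candidate items
item_embs = np.stack([embed(f, item_tower) for f in item_features])

# At query time, embed the user and score every item with one dot product.
user_emb = embed(rng.normal(size=5), user_tower)
scores = item_embs @ user_emb
top5 = np.argsort(scores)[::-1][:5]
print(top5)  # indices of the 5 most relevant candidate items
```

Because the item embeddings are fixed at query time, production systems typically index them with approximate nearest-neighbor search rather than scoring every item exhaustively as done here.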
=== Natural language processing ===
Natural language processing is a series of AI algorithms that make natural human language accessible and analyzable to a machine. It is a fairly modern technique inspired by the growing amount of textual information. For an application in recommendation systems, a common case is the Amazon customer review: Amazon analyzes the feedback comments from each customer and reports relevant data to other customers for reference. Recent years have witnessed the development of various text analysis models, including latent semantic analysis (LSA), singular value decomposition (SVD), and latent Dirichlet allocation (LDA). Their uses have consistently aimed to provide customers with more precise and tailored recommendations.
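As a hedged illustration of the LSA/SVD idea (the vocabulary and counts below are invented, not real review data), a truncated SVD places documents with similar vocabulary close together in a low-rank "topic" space:

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = review documents;
# values are term counts. All data here is invented for illustration.
terms = ["battery", "screen", "camera", "shipping"]
X = np.array([
    [3, 2, 0, 0],   # battery
    [2, 3, 0, 1],   # screen
    [0, 0, 3, 2],   # camera
    [0, 1, 2, 3],   # shipping
], dtype=float)

# Truncated SVD: keeping the top-k singular values yields a low-rank
# topic space in which semantically similar documents lie close together.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_topics = (np.diag(s[:k]) @ Vt[:k]).T   # each row: one document in topic space

def doc_similarity(i, j):
    a, b = doc_topics[i], doc_topics[j]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Documents 0 and 1 share "battery"/"screen" vocabulary, so they end up
# more similar to each other than either is to document 2.
print(doc_similarity(0, 1) > doc_similarity(0, 2))
```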
== Specific applications ==
=== Academic content discovery ===
An emerging market for content discovery platforms is academic content. Approximately 6,000 academic journal articles are published daily, making it increasingly difficult for researchers to balance time management with staying up to date with relevant research. Though traditional academic search tools such as Google Scholar or PubMed provide a readily accessible database of journal articles, content recommendation in these cases is performed in a 'linear' fashion, with users setting 'alarms' for new publications based on keywords, journals or particular authors.
Google Scholar provides an 'Updates' tool that suggests articles by using a statistical model that takes a researcher's authored papers and citations as input. Whilst these recommendations have been noted to be extremely good, this poses a problem for early-career researchers, who may lack a sufficient body of work to produce accurate recommendations.
=== Decision-making ===
In contrast to an engagement-based ranking system employed by social media and other digital platforms, a bridging-based ranking optimizes for content that is unifying instead of polarizing. Examples include Polis and Remesh which have been used around the world to help find more consensus around specific political issues. Twitter has also used this approach for managing its community notes, which YouTube planned to pilot in 2024. Aviv Ovadya also argues for implementing bridging-based algorithms in major platforms by empowering deliberative groups that are representative of the platform's users to control the design and implementation of the algorithm.
=== Television ===
As the connected television landscape continues to evolve, search and recommendation are seen as having an even more pivotal role in the discovery of content. With broadband-connected devices, consumers are projected to have access to content from linear broadcast sources as well as internet television. Therefore, there is a risk that the market could become fragmented, leaving it to the viewer to visit various locations and find what they want to watch in a way that is time-consuming and complicated for them. By using a search and recommendation engine, viewers are provided with a central 'portal' from which to discover content from several sources in just one location.
== See also ==
== References ==
== Further reading ==
Books
Kim Falk (2019), Practical Recommender Systems, Manning Publications, ISBN 9781617292705
Bharat Bhasker; K. Srikumar (2010). Recommender Systems in E-Commerce. CUP. ISBN 978-0-07-068067-8. Archived from the original on September 1, 2010.
Jannach, Dietmar; Markus Zanker; Alexander Felfernig; Gerhard Friedrich (2010). Recommender Systems: An Introduction. CUP. ISBN 978-0-521-49336-9. Archived from the original on August 31, 2015.
Seaver, Nick (2022). Computing Taste: Algorithms and the Makers of Music Recommendation. University of Chicago Press.
Scientific articles
Robert M. Bell; Jim Bennett; Yehuda Koren & Chris Volinsky (May 2009). "The Million Dollar Programming Prize". IEEE Spectrum. Archived from the original on May 11, 2009. Retrieved December 10, 2018.
Prem Melville, Raymond J. Mooney, and Ramadass Nagarajan. (2002) Content-Boosted Collaborative Filtering for Improved Recommendations. Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI-2002), pp. 187–192, Edmonton, Canada, July 2002. | Wikipedia/Recommendation_systems |
An industrial robot is a robot system used for manufacturing. Industrial robots are automated, programmable and capable of movement on three or more axes.
Typical applications of robots include welding, painting, assembly, disassembly, pick and place for printed circuit boards, packaging and labeling, palletizing, product inspection, and testing; all accomplished with high endurance, speed, and precision. They can assist in material handling.
In 2023, an estimated 4,281,585 industrial robots were in operation worldwide, according to the International Federation of Robotics (IFR).
== Types and features ==
There are six types of industrial robots.
=== Articulated robots ===
Articulated robots are the most common industrial robots. They look like a human arm, which is why they are also called robotic arm or manipulator arm. Their articulations with several degrees of freedom allow the articulated arms a wide range of movements.
=== Autonomous robot ===
An autonomous robot is a robot that acts without recourse to human control. The first autonomous robots were Elmer and Elsie, constructed in the late 1940s by W. Grey Walter. They were the first robots in history programmed to "think" the way biological brains do, and were meant to have free will. Elmer and Elsie were often labeled as tortoises because of how they were shaped and the manner in which they moved. They were capable of phototaxis, the movement that occurs in response to light stimulus.
=== Cartesian coordinate robots ===
Cartesian robots, also called rectilinear robots, gantry robots, or x-y-z robots, have three prismatic joints for the movement of the tool and three rotary joints for its orientation in space.
To be able to move and orient the effector organ in all directions, such a robot needs 6 axes (or degrees of freedom). In a 2-dimensional environment, three axes are sufficient, two for displacement and one for orientation.
=== Cylindrical coordinate robots ===
The cylindrical coordinate robots are characterized by their rotary joint at the base and at least one prismatic joint connecting its links. They can move vertically and horizontally by sliding. The compact effector design allows the robot to reach tight work-spaces without any loss of speed.
=== Spherical coordinate robots ===
Spherical coordinate robots have only rotary joints. They were among the first robots to be used in industrial applications. They are commonly used for machine tending in die casting, plastic injection and extrusion, and for welding.
=== SCARA robots ===
SCARA is an acronym for Selective Compliance Assembly Robot Arm. SCARA robots are recognized by their two parallel joints which provide movement in the X-Y plane. Rotating shafts are positioned vertically at the effector. SCARA robots are used for jobs that require precise lateral movements. They are ideal for assembly applications.
=== Delta robots ===
Delta robots are also referred to as parallel link robots. They consist of parallel links connected to a common base. Delta robots are particularly useful for direct control tasks and high maneuvering operations (such as quick pick-and-place tasks). Delta robots take advantage of four bar or parallelogram linkage systems.
Furthermore, industrial robots can have a serial or parallel architecture.
=== Serial manipulators ===
Serial architectures, a.k.a. serial manipulators, are very common industrial robots; they are designed as a series of links connected by motor-actuated joints that extend from a base to an end effector. The SCARA and Stanford manipulators are typical examples of this category.
=== Parallel architecture ===
A parallel manipulator is designed so that each chain is usually short, simple and can thus be rigid against unwanted movement, compared to a serial manipulator. Errors in one chain's positioning are averaged in conjunction with the others, rather than being cumulative. Each actuator must still move within its own degree of freedom, as for a serial robot; however in the parallel robot the off-axis flexibility of a joint is also constrained by the effect of the other chains. It is this closed-loop stiffness that makes the overall parallel manipulator stiff relative to its components, unlike the serial chain that becomes progressively less rigid with more components.
== Lower mobility parallel manipulators and concomitant motion ==
A full parallel manipulator can move an object with up to 6 degrees of freedom (DoF), determined by 3 translation 3T and 3 rotation 3R coordinates for full 3T3R mobility. However, when a manipulation task requires less than 6 DoF, the use of lower mobility manipulators, with fewer than 6 DoF, may bring advantages in terms of simpler architecture, easier control, faster motion and lower cost. For example, the 3 DoF Delta robot has lower 3T mobility and has proven to be very successful for rapid pick-and-place translational positioning applications. The workspace of lower mobility manipulators may be decomposed into 'motion' and 'constraint' subspaces. For example, 3 position coordinates constitute the motion subspace of the 3 DoF Delta robot and the 3 orientation coordinates are in the constraint subspace. The motion subspace of lower mobility manipulators may be further decomposed into independent (desired) and dependent (concomitant) subspaces: consisting of 'concomitant' or 'parasitic' motion which is undesired motion of the manipulator. The debilitating effects of concomitant motion should be mitigated or eliminated in the successful design of lower mobility manipulators. For example, the Delta robot does not have parasitic motion since its end effector does not rotate.
== Autonomy ==
Robots exhibit varying degrees of autonomy.
Some robots are programmed to faithfully carry out specific actions over and over again (repetitive actions) without variation and with a high degree of accuracy. These actions are determined by programmed routines that specify the direction, acceleration, velocity, deceleration, and distance of a series of coordinated motions.
Other robots are much more flexible as to the orientation of the object on which they are operating or even the task that has to be performed on the object itself, which the robot may even need to identify. For example, for more precise guidance, robots often contain machine vision sub-systems acting as their visual sensors, linked to powerful computers or controllers. Artificial intelligence is becoming an increasingly important factor in the modern industrial robot.
== History ==
The earliest known industrial robot, conforming to the ISO definition, was completed by "Bill" Griffith P. Taylor in 1937 and published in Meccano Magazine, March 1938. The crane-like device was built almost entirely using Meccano parts, and powered by a single electric motor. Five axes of movement were possible, including grab and grab rotation. Automation was achieved using punched paper tape to energise solenoids, which would facilitate the movement of the crane's control levers. The robot could stack wooden blocks in pre-programmed patterns. The number of motor revolutions required for each desired movement was first plotted on graph paper. This information was then transferred to the paper tape, which was also driven by the robot's single motor. Chris Shute built a complete replica of the robot in 1997.
George Devol applied for the first robotics patents in 1954 (granted in 1961). The first company to produce a robot was Unimation, founded by Devol and Joseph F. Engelberger in 1956. Unimation robots were also called programmable transfer machines since their main use at first was to transfer objects from one point to another, less than a dozen feet or so apart. They used hydraulic actuators and were programmed in joint coordinates, i.e. the angles of the various joints were stored during a teaching phase and replayed in operation. They were accurate to within 1/10,000 of an inch (note: although accuracy is not an appropriate measure for robots, usually evaluated in terms of repeatability - see later). Unimation later licensed their technology to Kawasaki Heavy Industries and GKN, manufacturing Unimates in Japan and England respectively. For some time, Unimation's only competitor was Cincinnati Milacron Inc. of Ohio. This changed radically in the late 1970s when several big Japanese conglomerates began producing similar industrial robots.
In 1969 Victor Scheinman at Stanford University invented the Stanford arm, an all-electric, 6-axis articulated robot designed to permit an arm solution. This allowed it accurately to follow arbitrary paths in space and widened the potential use of the robot to more sophisticated applications such as assembly and welding. Scheinman then designed a second arm for the MIT AI Lab, called the "MIT arm." Scheinman, after receiving a fellowship from Unimation to develop his designs, sold those designs to Unimation who further developed them with support from General Motors and later marketed it as the Programmable Universal Machine for Assembly (PUMA).
Industrial robotics took off quite quickly in Europe, with both ABB Robotics and KUKA Robotics bringing robots to the market in 1973. ABB Robotics (formerly ASEA) introduced IRB 6, among the world's first commercially available all electric micro-processor controlled robot. The first two IRB 6 robots were sold to Magnusson in Sweden for grinding and polishing pipe bends and were installed in production in January 1974. Also in 1973 KUKA Robotics built its first robot, known as FAMULUS, also one of the first articulated robots to have six electromechanically driven axes.
Interest in robotics increased in the late 1970s and many US companies entered the field, including large firms like General Electric, and General Motors (which formed joint venture FANUC Robotics with FANUC LTD of Japan). U.S. startup companies included Automatix and Adept Technology, Inc. At the height of the robot boom in 1984, Unimation was acquired by Westinghouse Electric Corporation for 107 million U.S. dollars. Westinghouse sold Unimation to Stäubli Faverges SCA of France in 1988, which is still making articulated robots for general industrial and cleanroom applications and even bought the robotic division of Bosch in late 2004.
Only a few non-Japanese companies ultimately managed to survive in this market, the major ones being: Adept Technology, Stäubli, the Swedish-Swiss company ABB Asea Brown Boveri, the German company KUKA Robotics and the Italian company Comau.
== Technical description ==
=== Defining parameters ===
Number of axes – two axes are required to reach any point in a plane; three axes are required to reach any point in space. To fully control the orientation of the end of the arm (i.e. the wrist) three more axes (yaw, pitch, and roll) are required. Some designs (e.g. the SCARA robot) trade limitations in motion possibilities for cost, speed, and accuracy.
Degrees of freedom – this is usually the same as the number of axes.
Working envelope – the region of space a robot can reach.
Kinematics – the actual arrangement of rigid members and joints in the robot, which determines the robot's possible motions. Classes of robot kinematics include articulated, cartesian, parallel and SCARA.
Carrying capacity or payload – how much weight a robot can lift.
Speed – how fast the robot can position the end of its arm. This may be defined in terms of the angular or linear speed of each axis or as a compound speed i.e. the speed of the end of the arm when all axes are moving.
Acceleration – how quickly an axis can accelerate. Since this is a limiting factor a robot may not be able to reach its specified maximum speed for movements over a short distance or a complex path requiring frequent changes of direction.
Accuracy – how closely a robot can reach a commanded position. When the absolute position of the robot is measured and compared to the commanded position, the error is a measure of accuracy. Accuracy can be improved with external sensing, for example a vision system or infra-red sensing. See robot calibration. Accuracy can vary with speed and position within the working envelope and with payload (see compliance).
Repeatability – how well the robot will return to a programmed position. This is not the same as accuracy. It may be that when told to go to a certain X-Y-Z position the robot gets only to within 1 mm of that position. This would be its accuracy, which may be improved by calibration. But if that position is taught into controller memory and each time it is sent there it returns to within 0.1 mm of the taught position, then the repeatability will be within 0.1 mm.
Accuracy and repeatability are different measures. Repeatability is usually the most important criterion for a robot and is similar to the concept of 'precision' in measurement—see accuracy and precision. ISO 9283 sets out a method whereby both accuracy and repeatability can be measured. Typically a robot is sent to a taught position a number of times and the error is measured at each return to the position after visiting 4 other positions. Repeatability is then quantified using the standard deviation of those samples in all three dimensions. A typical robot can, of course, make a positional error exceeding that, and that could be a problem for the process. Moreover, the repeatability is different in different parts of the working envelope and also changes with speed and payload. ISO 9283 specifies that accuracy and repeatability should be measured at maximum speed and at maximum payload. But this results in pessimistic values, whereas the robot could be much more accurate and repeatable at light loads and speeds.
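As a simplified, hedged illustration of the distinction (the full ISO 9283 procedure prescribes a specific test cycle and pose set; the measurements below are invented), accuracy and repeatability can be summarized from the scatter of returned positions:

```python
import numpy as np

# Invented sample: 10 returns to the same taught position, in mm.
# Each row is one visit's (x, y, z) as reported by an external measuring system.
rng = np.random.default_rng(1)
taught = np.array([500.0, 200.0, 300.0])
visits = taught + rng.normal(scale=0.05, size=(10, 3))  # ~0.05 mm scatter

# Accuracy: distance from the mean attained position to the commanded one.
mean_pos = visits.mean(axis=0)
accuracy = np.linalg.norm(mean_pos - taught)

# Repeatability (simplified): radial distance of each visit from the mean
# attained position, summarized here as mean + 3 standard deviations.
radii = np.linalg.norm(visits - mean_pos, axis=1)
repeatability = radii.mean() + 3 * radii.std()

print(f"accuracy = {accuracy:.3f} mm, repeatability = {repeatability:.3f} mm")
```

Note how a robot with a systematic offset would show poor accuracy but could still show excellent repeatability, since the latter depends only on the scatter around the mean attained position.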
Repeatability in an industrial process is also subject to the accuracy of the end effector, for example a gripper, and even to the design of the 'fingers' that match the gripper to the object being grasped. For example, if a robot picks a screw by its head, the screw could be at a random angle. A subsequent attempt to insert the screw into a hole could easily fail. These and similar scenarios can be improved with 'lead-ins' e.g. by making the entrance to the hole tapered.
Motion control – for some applications, such as simple pick-and-place assembly, the robot need merely return repeatably to a limited number of pre-taught positions. For more sophisticated applications, such as welding and finishing (spray painting), motion must be continuously controlled to follow a path in space, with controlled orientation and velocity.
Power source – some robots use electric motors, others use hydraulic actuators. The former are faster, the latter are stronger and advantageous in applications such as spray painting, where a spark could set off an explosion; however, low internal air-pressurisation of the arm can prevent ingress of flammable vapours as well as other contaminants. Nowadays, it is highly unlikely to see any hydraulic robots in the market. Additional sealings, brushless electric motors and spark-proof protection eased the construction of units that are able to work in the environment with an explosive atmosphere.
Drive – some robots connect electric motors to the joints via gears; others connect the motor to the joint directly (direct drive). Using gears results in measurable 'backlash' which is free movement in an axis. Smaller robot arms frequently employ high speed, low torque DC motors, which generally require high gearing ratios; this has the disadvantage of backlash. In such cases the harmonic drive is often used.
Compliance – this is a measure of the amount in angle or distance that a robot axis will move when a force is applied to it. Because of compliance, when a robot goes to a position carrying its maximum payload it will be at a position slightly lower than when it is carrying no payload. Compliance can also be responsible for overshoot when carrying high payloads, in which case acceleration would need to be reduced.
=== Robot programming and interfaces ===
The setup or programming of motions and sequences for an industrial robot is typically taught by linking the robot controller to a laptop, desktop computer or (internal or Internet) network.
A robot and a collection of machines or peripherals is referred to as a workcell, or cell. A typical cell might contain a parts feeder, a molding machine and a robot. The various machines are 'integrated' and controlled by a single computer or PLC. How the robot interacts with other machines in the cell must be programmed, both with regard to their positions in the cell and synchronizing with them.
Software: The computer is installed with corresponding interface software. The use of a computer greatly simplifies the programming process. Specialized robot software is run either in the robot controller or in the computer or both depending on the system design.
There are two basic entities that need to be taught (or programmed): positional data and procedure. For example, in a task to move a screw from a feeder to a hole the positions of the feeder and the hole must first be taught or programmed. Secondly the procedure to get the screw from the feeder to the hole must be programmed along with any I/O involved, for example a signal to indicate when the screw is in the feeder ready to be picked up. The purpose of the robot software is to facilitate both these programming tasks.
Teaching the robot positions may be achieved in a number of ways:
Positional commands: the robot can be directed to the required position using a GUI or text-based commands in which the required X-Y-Z position may be specified and edited.
Teach pendant: Robot positions can be taught via a teach pendant. This is a handheld control and programming unit. The common features of such units are the ability to manually send the robot to a desired position, or "inch" or "jog" to adjust a position. They also have a means to change the speed since a low speed is usually required for careful positioning, or while test-running through a new or modified routine. A large emergency stop button is usually included as well. Typically once the robot has been programmed there is no more use for the teach pendant. All teach pendants are equipped with a 3-position deadman switch. In the manual mode, it allows the robot to move only when it is in the middle position (partially pressed). If it is fully pressed in or completely released, the robot stops. This principle of operation allows natural reflexes to be used to increase safety.
Lead-by-the-nose: this is a technique offered by many robot manufacturers. In this method, one user holds the robot's manipulator, while another person enters a command which de-energizes the robot, causing it to go limp. The user then moves the robot by hand to the required positions and/or along a required path while the software logs these positions into memory. The program can later run the robot to these positions or along the taught path. This technique is popular for tasks such as paint spraying.
Offline programming is where the entire cell, the robot and all the machines or instruments in the workspace are mapped graphically. The robot can then be moved on screen and the process simulated. A robotics simulator is used to create embedded applications for a robot, without depending on the physical operation of the robot arm and end effector. The advantages of robotics simulation are that it saves time in the design of robotics applications and that it can increase the level of safety associated with robotic equipment, since various "what if" scenarios can be tried and tested before the system is activated. Robot simulation software provides a platform to teach, test, run, and debug programs that have been written in a variety of programming languages.
Robot simulation tools allow for robotics programs to be conveniently written and debugged off-line with the final version of the program tested on an actual robot. The ability to preview the behavior of a robotic system in a virtual world allows for a variety of mechanisms, devices, configurations and controllers to be tried and tested before being applied to a "real world" system. Robotics simulators have the ability to provide real-time computing of the simulated motion of an industrial robot using both geometric modeling and kinematics modeling.
Manufacturing independent robot programming tools are a relatively new but flexible way to program robot applications. Using a visual programming language, the programming is done via drag and drop of predefined template/building blocks. They often feature the execution of simulations to evaluate the feasibility and offline programming in combination. If the system is able to compile and upload native robot code to the robot controller, the user no longer has to learn each manufacturer's proprietary language. Therefore, this approach can be an important step to standardize programming methods.
Others: in addition, machine operators often use user interface devices, typically touchscreen units, which serve as the operator control panel. The operator can switch from program to program, make adjustments within a program and also operate a host of peripheral devices that may be integrated within the same robotic system. These include end effectors, feeders that supply components to the robot, conveyor belts, emergency stop controls, machine vision systems, safety interlock systems, barcode printers and an almost infinite array of other industrial devices which are accessed and controlled via the operator control panel.
The teach pendant or PC is usually disconnected after programming and the robot then runs on the program that has been installed in its controller. However a computer is often used to 'supervise' the robot and any peripherals, or to provide additional storage for access to numerous complex paths and routines.
=== End-of-arm tooling ===
The most essential robot peripheral is the end effector, or end-of-arm tooling (EOAT). Common examples of end effectors include welding devices (such as MIG-welding guns, spot-welders, etc.), spray guns and also grinding and deburring devices (such as pneumatic disk or belt grinders, burrs, etc.), and grippers (devices that can grasp an object, usually electromechanical or pneumatic). Other common means of picking up objects are vacuum and magnets. End effectors are frequently highly complex, made to match the handled product and often capable of picking up an array of products at one time. They may utilize various sensors to aid the robot system in locating, handling, and positioning products.
=== Controlling movement ===
For a given robot, the only parameters necessary to completely locate the end effector (gripper, welding torch, etc.) are the angles of each of the joints or the displacements of the linear axes (or combinations of the two for robot formats such as SCARA). However, there are many different ways to define the points. The most common and most convenient way of defining a point is to specify a Cartesian coordinate for it, i.e. the position of the end effector in mm in the X, Y and Z directions relative to the robot's origin. In addition, depending on the types of joints a particular robot may have, the orientation of the end effector in yaw, pitch, and roll and the location of the tool point relative to the robot's faceplate must also be specified. For a jointed arm these coordinates must be converted to joint angles by the robot controller; such conversions are known as Cartesian transformations, which may need to be performed iteratively or recursively for a multiple-axis robot. The mathematics of the relationship between joint angles and actual spatial coordinates is called kinematics. See robot control.
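To make the joint-angle/Cartesian relationship concrete, here is a minimal forward-kinematics sketch (not from the source) for a hypothetical two-link planar arm; the link lengths and angles are invented for illustration:

```python
import math

def forward_kinematics(theta1, theta2, l1=0.4, l2=0.3):
    """Cartesian (x, y) position of the end effector of a two-link
    planar arm, given joint angles in radians and link lengths in metres."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# With both joints at zero the arm lies stretched along the X axis.
x0, y0 = forward_kinematics(0.0, 0.0)
```

The inverse problem, recovering joint angles from a desired Cartesian point, is the Cartesian transformation the controller must solve; for arms with many axes it generally requires iterative numerical methods.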
Positioning by Cartesian coordinates may be done by entering the coordinates into the system or by using a teach pendant which moves the robot in X-Y-Z directions. It is much easier for a human operator to visualize motions up/down, left/right, etc. than to move each joint one at a time. When the desired position is reached it is then defined in some way particular to the robot software in use, e.g. P1 - P5 below.
=== Typical programming ===
Most articulated robots perform by storing a series of positions in memory, and moving to them at various times in their programming sequence. For example, a robot which is moving items from one place (bin A) to another (bin B) might have a simple 'pick and place' program similar to the following:
Define points P1–P5:
Safely above workpiece (defined as P1)
10 cm Above bin A (defined as P2)
At position to take part from bin A (defined as P3)
10 cm Above bin B (defined as P4)
At position to place part in bin B (defined as P5)
Define program:
Move to P1
Move to P2
Move to P3
Close gripper
Move to P2
Move to P4
Move to P5
Open gripper
Move to P4
Move to P1 and finish
For examples of how this would look in popular robot languages see industrial robot programming.
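The P1–P5 sequence above can be sketched in vendor-neutral pseudocode. The `move_to`, `close_gripper` and `open_gripper` functions below are invented stand-ins for a real controller API; here they merely record the command stream:

```python
# Hypothetical controller API (placeholders, not a real vendor library).
actions = []

def move_to(point):
    actions.append(f"move {point}")

def close_gripper():
    actions.append("close")

def open_gripper():
    actions.append("open")

def pick_and_place():
    # P1..P5 as defined above: P3 = pick position in bin A,
    # P5 = place position in bin B.
    for p in ["P1", "P2", "P3"]:
        move_to(p)
    close_gripper()          # grasp the part in bin A
    for p in ["P2", "P4", "P5"]:
        move_to(p)
    open_gripper()           # release the part in bin B
    for p in ["P4", "P1"]:
        move_to(p)

pick_and_place()
```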
=== Singularities ===
The American National Standard for Industrial Robots and Robot Systems — Safety Requirements (ANSI/RIA R15.06-1999) defines a singularity as "a condition caused by the collinear alignment of two or more robot axes resulting in unpredictable robot motion and velocities." It is most common in robot arms that utilize a "triple-roll wrist". This is a wrist about which the three axes of the wrist, controlling yaw, pitch, and roll, all pass through a common point. An example of a wrist singularity is when the path through which the robot is traveling causes the first and third axes of the robot's wrist (i.e. robot's axes 4 and 6) to line up. The second wrist axis then attempts to spin 180° in zero time to maintain the orientation of the end effector. Another common term for this singularity is a "wrist flip". The result of a singularity can be quite dramatic and can have adverse effects on the robot arm, the end effector, and the process. Some industrial robot manufacturers have attempted to side-step the situation by slightly altering the robot's path to prevent this condition. Another method is to slow the robot's travel speed, thus reducing the speed required for the wrist to make the transition. The ANSI/RIA has mandated that robot manufacturers shall make the user aware of singularities if they occur while the system is being manually manipulated.
A second type of singularity in wrist-partitioned vertically articulated six-axis robots occurs when the wrist center lies on a cylinder that is centered about axis 1 and with radius equal to the distance between axes 1 and 4. This is called a shoulder singularity. Some robot manufacturers also mention alignment singularities, where axes 1 and 6 become coincident. This is simply a sub-case of shoulder singularities. When the robot passes close to a shoulder singularity, joint 1 spins very fast.
The third and last type of singularity in wrist-partitioned vertically articulated six-axis robots occurs when the wrist's center lies in the same plane as axes 2 and 3.
Singularities are closely related to the phenomenon of gimbal lock, which has a similar root cause of axes becoming lined up.
== Market structure ==
According to the International Federation of Robotics (IFR) study World Robotics 2024, there were about 4,281,585 operational industrial robots by the end of 2023. For the year 2018 the IFR estimated the worldwide sales of industrial robots at US$16.5 billion. Including the cost of software, peripherals and systems engineering, the annual turnover for robot systems was estimated at US$48.0 billion in 2018.
China is the largest industrial robot market, with 154,032 units sold in 2018. China also had the largest operational stock of industrial robots, with 649,447 at the end of 2018.
The biggest customer of industrial robots is the automotive industry, with a 30% market share, followed by the electrical/electronics industry with 25%, the metal and machinery industry with 10%, the rubber and plastics industry with 5%, and the food industry with 5%. In the textiles, apparel and leather industry, 1,580 units are operational.
Estimated worldwide annual supply of industrial robots (in units):
== Health and safety ==
The International Federation of Robotics has predicted a worldwide increase in the adoption of industrial robots, estimating 1.7 million new robot installations in factories worldwide by 2020 [IFR 2017]. Rapid advances in automation technologies (e.g. fixed robots, collaborative and mobile robots, and exoskeletons) have the potential to improve work conditions but also to introduce workplace hazards in manufacturing workplaces. Despite the lack of occupational surveillance data on injuries associated specifically with robots, researchers from the US National Institute for Occupational Safety and Health (NIOSH) identified 61 robot-related deaths between 1992 and 2015 using keyword searches of the Bureau of Labor Statistics (BLS) Census of Fatal Occupational Injuries research database (see info from the Center for Occupational Robotics Research). Using data from the Bureau of Labor Statistics, NIOSH and its state partners have investigated 4 robot-related fatalities under the Fatality Assessment and Control Evaluation Program. In addition, the Occupational Safety and Health Administration (OSHA) has investigated dozens of robot-related deaths and injuries, which can be reviewed at the OSHA Accident Search page. Injuries and fatalities could increase over time because of the increasing introduction of collaborative and co-existing robots, powered exoskeletons, and autonomous vehicles into the work environment.
Safety standards are being developed by the Robotic Industries Association (RIA) in conjunction with the American National Standards Institute (ANSI). On October 5, 2017, OSHA, NIOSH and RIA signed an alliance to work together to enhance technical expertise, identify and help address potential workplace hazards associated with traditional industrial robots and the emerging technology of human-robot collaboration installations and systems, and help identify needed research to reduce workplace hazards. On October 16 NIOSH launched the Center for Occupational Robotics Research to "provide scientific leadership to guide the development and use of occupational robots that enhance worker safety, health, and wellbeing." So far, the research needs identified by NIOSH and its partners include: tracking and preventing injuries and fatalities, intervention and dissemination strategies to promote safe machine control and maintenance procedures, and translating effective evidence-based interventions into workplace practice.
== See also ==
Automation
Domestic robot
Drum handler
Intelligent industrial work assistant (iiwa)
Lights out (manufacturing)
Mobile industrial robots
Cartesian coordinate robot
Gantry robot
Workplace Robotics Safety
== References ==
== Further reading ==
Nof, Shimon Y. (editor) (1999). Handbook of Industrial Robotics, 2nd ed. John Wiley & Sons. 1378 pp. ISBN 0-471-17783-0.
Lars Westerlund (author) (2000). The extended arm of man. ISBN 91-7736-467-8.
Michal Gurgul (author) (2018). Industrial robots and cobots: Everything you need to know about your future co-worker. ISBN 978-83-952513-0-6.
== External links ==
Industrial robots and robot system safety (by OSHA, so in the public domain).
International Federation of Robotics IFR (worldwide)
Robotic Industries Association RIA (North America)
BARA, British Automation and Robotics Association (UK)
Center for Occupational Robotics Research by NIOSH
Safety standards applied to Robotics
Strategies for addressing new technologies from the INRS Archived 2018-02-21 at the Wayback Machine
Machine Guarding - Why It's a Legal Requirement Archived 2021-04-15 at the Wayback Machine | Wikipedia/Industrial_robotics |
Fairness in machine learning (ML) refers to the various attempts to correct algorithmic bias in automated decision processes based on ML models. Decisions made by such models after a learning process may be considered unfair if they were based on variables considered sensitive (e.g., gender, ethnicity, sexual orientation, or disability).
As is the case with many ethical concepts, definitions of fairness and bias can be controversial. In general, fairness and bias are considered relevant when the decision process impacts people's lives.
Since machine-made decisions may be skewed by a range of factors, they might be considered unfair with respect to certain groups or individuals. An example could be the way social media sites deliver personalized news to consumers.
== Context ==
Discussion about fairness in machine learning is a relatively recent topic. Since 2016 there has been a sharp increase in research into the topic. This increase could be partly attributed to an influential report by ProPublica that claimed that the COMPAS software, widely used in US courts to predict recidivism, was racially biased. One topic of research and discussion is the definition of fairness, as there is no universal definition, and different definitions can be in contradiction with each other, which makes it difficult to judge machine learning models. Other research topics include the origins of bias, the types of bias, and methods to reduce bias.
In recent years tech companies have made tools and manuals on how to detect and reduce bias in machine learning. IBM has tools for Python and R with several algorithms to reduce software bias and increase its fairness. Google has published guidelines and tools to study and combat bias in machine learning. Facebook has reported its use of a tool, Fairness Flow, to detect bias in its AI. However, critics have argued that the company's efforts are insufficient, reporting little use of the tool by employees as it cannot be used for all their programs, and even when it can, use of the tool is optional.
It is important to note that the discussion about quantitative ways to test fairness and unjust discrimination in decision-making predates by several decades the rather recent debate on fairness in machine learning. In fact, a vivid discussion of this topic by the scientific community flourished during the mid-1960s and 1970s, mostly as a result of the American civil rights movement and, in particular, of the passage of the U.S. Civil Rights Act of 1964. However, by the end of the 1970s, the debate largely disappeared, as the different and sometimes competing notions of fairness left little room for clarity on when one notion of fairness may be preferable to another.
=== Language Bias ===
Language bias refers to a type of statistical sampling bias tied to the language of a query that leads to "a systematic deviation in sampling information that prevents it from accurately representing the true coverage of topics and views available in their repository." Luo et al. show that current large language models, as they are predominantly trained on English-language data, often present Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. When queried with political ideologies like "What is liberalism?", ChatGPT, as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent. Similarly, other political perspectives embedded in Japanese, Korean, French, and German corpora are absent from ChatGPT's responses. Although ChatGPT presents itself as a multilingual chatbot, it is in fact mostly 'blind' to non-English perspectives.
=== Gender Bias ===
Gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. For example, large language models often assign roles and characteristics based on traditional gender norms; it might associate nurses or secretaries predominantly with women and engineers or CEOs with men.
=== Political bias ===
Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.
== Controversies ==
The use of algorithmic decision making in the legal system has been a notable area under scrutiny. In 2014, then U.S. Attorney General Eric Holder raised concerns that "risk assessment" methods may be putting undue focus on factors not under a defendant's control, such as their education level or socio-economic background. The 2016 report by ProPublica on COMPAS claimed that black defendants were almost twice as likely to be incorrectly labelled as higher risk than white defendants, while it made the opposite mistake with white defendants. The creator of COMPAS, Northpointe Inc., disputed the report, claiming their tool is fair and that ProPublica had made statistical errors; this was subsequently refuted again by ProPublica.
Racial and gender bias has also been noted in image recognition algorithms. Facial and movement detection in cameras has been found to ignore or mislabel the facial expressions of non-white subjects. In 2015, Google apologized after Google Photos mistakenly labeled a black couple as gorillas. Similarly, Flickr auto-tag feature was found to have labeled some black people as "apes" and "animals". A 2016 international beauty contest judged by an AI algorithm was found to be biased towards individuals with lighter skin, likely due to bias in training data. A study of three commercial gender classification algorithms in 2018 found that all three algorithms were generally most accurate when classifying light-skinned males and worst when classifying dark-skinned females. In 2020, an image cropping tool from Twitter was shown to prefer lighter skinned faces. In 2022, the creators of the text-to-image model DALL-E 2 explained that the generated images were significantly stereotyped, based on traits such as gender or race.
Other areas where machine learning algorithms are in use that have been shown to be biased include job and loan applications. Amazon has used software to review job applications that was sexist, for example by penalizing resumes that included the word "women". In 2019, Apple's algorithm to determine credit card limits for their new Apple Card gave significantly higher limits to males than females, even for couples that shared their finances. Mortgage-approval algorithms in use in the U.S. were shown to be more likely to reject non-white applicants by a report by The Markup in 2021.
== Limitations ==
Recent works underline the presence of several limitations to the current landscape of fairness in machine learning, particularly regarding what is realistically achievable in the ever-increasing real-world applications of AI.
For instance, the mathematical and quantitative approach to formalizing fairness, and the related "de-biasing" approaches, may rely on overly simplistic and easily overlooked assumptions, such as the categorization of individuals into pre-defined social groups.
Other delicate aspects include the interaction among several sensitive characteristics and the lack of a clear and shared philosophical and/or legal notion of non-discrimination.
Finally, while machine learning models can be designed to adhere to fairness criteria, the ultimate decisions made by human operators may still be influenced by their own biases. This phenomenon occurs when decision-makers accept AI recommendations only when they align with their preexisting prejudices, thereby undermining the intended fairness of the system.
== Group fairness criteria ==
In classification problems, an algorithm learns a function to predict a discrete characteristic Y, the target variable, from known characteristics X. We model A as a discrete random variable which encodes some characteristics contained or implicitly encoded in X that we consider as sensitive characteristics (gender, ethnicity, sexual orientation, etc.). We finally denote by R the prediction of the classifier.
Now let us define three main criteria to evaluate if a given classifier is fair, that is if its predictions are not influenced by some of these sensitive variables.
=== Independence ===
We say the random variables (R, A) satisfy independence if the sensitive characteristics A are statistically independent of the prediction R, and we write R ⊥ A.
We can also express this notion with the following formula:
P(R = r | A = a) = P(R = r | A = b)  ∀ r ∈ R, ∀ a, b ∈ A
This means that the classification rate for each target class is equal for people belonging to different groups with respect to the sensitive characteristics A.
Yet another equivalent expression for independence can be given using the concept of mutual information between random variables, defined as
I(X, Y) = H(X) + H(Y) − H(X, Y)
In this formula, H(X) is the entropy of the random variable X. Then (R, A) satisfy independence if I(R, A) = 0.
A possible relaxation of the independence definition includes introducing a positive slack ε > 0 and is given by the formula:
P(R = r | A = a) ≥ P(R = r | A = b) − ε  ∀ r ∈ R, ∀ a, b ∈ A
Finally, another possible relaxation is to require I(R, A) ≤ ε.
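As an illustration (not part of the source), the relaxed independence criterion can be checked empirically from binary predictions and group labels; the group names and data below are invented:

```python
from collections import defaultdict

def rate_by_group(predictions, groups):
    """Empirical P(R = 1 | A = a) for each group a, for binary predictions."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r, a in zip(predictions, groups):
        totals[a] += 1
        positives[a] += r
    return {a: positives[a] / totals[a] for a in totals}

def satisfies_independence(predictions, groups, eps=0.0):
    """Relaxed independence: largest gap in positive rates is at most eps."""
    rates = list(rate_by_group(predictions, groups).values())
    return max(rates) - min(rates) <= eps

# Invented example: group "a" is flagged positive 3/4 of the time,
# group "b" only 1/4 of the time, so the gap is 0.5.
R = [1, 0, 1, 1, 0, 1, 0, 0]
A = ["a", "a", "a", "a", "b", "b", "b", "b"]
```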
=== Separation ===
We say the random variables (R, A, Y) satisfy separation if the sensitive characteristics A are statistically independent of the prediction R given the target value Y, and we write R ⊥ A | Y.
We can also express this notion with the following formula:
P(R = r | Y = q, A = a) = P(R = r | Y = q, A = b)  ∀ r ∈ R, ∀ q ∈ Y, ∀ a, b ∈ A
This means that all the dependence of the decision R on the sensitive attribute A must be justified by the actual dependence of the true target variable Y.
Another equivalent expression, in the case of a binary target rate, is that the true positive rate and the false positive rate are equal (and therefore the false negative rate and the true negative rate are equal) for every value of the sensitive characteristics:
P(R = 1 | Y = 1, A = a) = P(R = 1 | Y = 1, A = b)  ∀ a, b ∈ A
P(R = 1 | Y = 0, A = a) = P(R = 1 | Y = 0, A = b)  ∀ a, b ∈ A
A possible relaxation of the given definitions is to allow the difference between rates to be a positive number lower than a given slack ε > 0, rather than equal to zero.
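Empirically, separation for a binary target amounts to comparing per-group true positive and false positive rates; a minimal sketch (invented data) follows:

```python
def separation_rates(y_true, y_pred, groups):
    """Empirical TPR and FPR per group:
    P(R = 1 | Y = 1, A = a) and P(R = 1 | Y = 0, A = a)."""
    stats = {}
    for a in set(groups):
        pos = [y_pred[i] for i in range(len(groups))
               if groups[i] == a and y_true[i] == 1]
        neg = [y_pred[i] for i in range(len(groups))
               if groups[i] == a and y_true[i] == 0]
        stats[a] = (sum(pos) / len(pos), sum(neg) / len(neg))
    return stats

# Invented example: group "b" is classified perfectly, group "a" is not,
# so separation fails on this sample.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
```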
In some fields separation (separation coefficient) in a confusion matrix is a measure of the distance (at a given level of the probability score) between the predicted cumulative percent negative and predicted cumulative percent positive.
The greater this separation coefficient is at a given score value, the more effective the model is at differentiating between the set of positives and negatives at a particular probability cut-off. According to Mayes: "It is often observed in the credit industry that the selection of validation measures depends on the modeling approach. For example, if modeling procedure is parametric or semi-parametric, the two-sample K-S test is often used. If the model is derived by heuristic or iterative search methods, the measure of model performance is usually divergence. A third option is the coefficient of separation...The coefficient of separation, compared to the other two methods, seems to be most reasonable as a measure for model performance because it reflects the separation pattern of a model."
=== Sufficiency ===
We say the random variables (R, A, Y) satisfy sufficiency if the sensitive characteristics A are statistically independent of the target value Y given the prediction R, and we write Y ⊥ A | R.
We can also express this notion with the following formula:
P(Y = q | R = r, A = a) = P(Y = q | R = r, A = b)  ∀ q ∈ Y, ∀ r ∈ R, ∀ a, b ∈ A
This means that the probability of actually being in each of the groups is equal for two individuals with different sensitive characteristics given that they were predicted to belong to the same group.
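For a binary classifier, sufficiency can be probed empirically by comparing, per group, the fraction of positive predictions that turn out correct (the analogous check for negative predictions is symmetric). A sketch with invented data:

```python
def ppv_by_group(y_true, y_pred, groups):
    """Empirical P(Y = 1 | R = 1, A = a): among a group's positive
    predictions, the fraction that are actually positive."""
    out = {}
    for a in set(groups):
        flagged = [y_true[i] for i in range(len(groups))
                   if groups[i] == a and y_pred[i] == 1]
        out[a] = sum(flagged) / len(flagged)
    return out

# Invented example: a positive prediction is right only half the time for
# group "a" but always for group "b", so sufficiency fails on this sample.
y_true = [1, 0, 0, 1, 1, 0]
y_pred = [1, 1, 0, 1, 1, 0]
groups = ["a", "a", "a", "b", "b", "b"]
```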
=== Relationships between definitions ===
Finally, we sum up some of the main results that relate the three definitions given above:
Assuming Y is binary, if A and Y are not statistically independent, and R and Y are not statistically independent either, then independence and separation cannot both hold except in degenerate cases.
If (R, A, Y) as a joint distribution has positive probability for all its possible values and A and Y are not statistically independent, then separation and sufficiency cannot both hold except in degenerate cases.
It is referred to as total fairness when independence, separation, and sufficiency are all satisfied simultaneously. However, total fairness is not achievable except in specific degenerate cases.
=== Mathematical formulation of group fairness definitions ===
==== Preliminary definitions ====
Most statistical measures of fairness rely on different metrics, so we will start by defining them. When working with a binary classifier, both the predicted and the actual classes can take two values: positive and negative. Now let us start explaining the different possible relations between predicted and actual outcome:
True positive (TP): The case where both the predicted and the actual outcome are in a positive class.
True negative (TN): The case where both the predicted outcome and the actual outcome are assigned to the negative class.
False positive (FP): A case predicted to be in the positive class when the actual outcome is in the negative one.
False negative (FN): A case predicted to be in the negative class when the actual outcome is in the positive one.
These relations can be easily represented with a confusion matrix, a table that describes the accuracy of a classification model. In this matrix, columns and rows represent instances of the predicted and the actual cases, respectively.
By using these relations, we can define multiple metrics which can be later used to measure the fairness of an algorithm:
Positive predicted value (PPV): the fraction of positive cases which were correctly predicted out of all the positive predictions. It is usually referred to as precision, and represents the probability of a correct positive prediction. It is given by the following formula:
PPV = P(actual = + | prediction = +) = TP / (TP + FP)
False discovery rate (FDR): the fraction of positive predictions which were actually negative out of all the positive predictions. It represents the probability of an erroneous positive prediction, and it is given by the following formula:
FDR = P(actual = − | prediction = +) = FP / (TP + FP)
Negative predicted value (NPV): the fraction of negative cases which were correctly predicted out of all the negative predictions. It represents the probability of a correct negative prediction, and it is given by the following formula:
NPV = P(actual = − | prediction = −) = TN / (TN + FN)
False omission rate (FOR): the fraction of negative predictions which were actually positive out of all the negative predictions. It represents the probability of an erroneous negative prediction, and it is given by the following formula:
FOR = P(actual = + | prediction = −) = FN / (TN + FN)
True positive rate (TPR): the fraction of positive cases which were correctly predicted out of all the positive cases. It is usually referred to as sensitivity or recall, and it represents the probability of the positive subjects to be classified correctly as such. It is given by the formula:
TPR = P(prediction = + | actual = +) = TP / (TP + FN)
False negative rate (FNR): the fraction of positive cases which were incorrectly predicted to be negative out of all the positive cases. It represents the probability of the positive subjects to be classified incorrectly as negative ones, and it is given by the formula:
FNR = P(prediction = − | actual = +) = FN / (TP + FN)
True negative rate (TNR): the fraction of negative cases which were correctly predicted out of all the negative cases. It represents the probability of the negative subjects to be classified correctly as such, and it is given by the formula:
TNR = P(prediction = − | actual = −) = TN / (TN + FP)
False positive rate (FPR): the fraction of negative cases which were incorrectly predicted to be positive out of all the negative cases. It represents the probability of the negative subjects to be classified incorrectly as positive ones, and it is given by the formula:
FPR = P(prediction = + | actual = −) = FP / (TN + FP)
The following criteria can be understood as measures of the three general definitions given at the beginning of this section, namely Independence, Separation and Sufficiency. In the table to the right, we can see the relationships between them.
To define these measures specifically, we will divide them into three big groups as done in Verma et al.: definitions based on a predicted outcome, on predicted and actual outcomes, and definitions based on predicted probabilities and the actual outcome.
We will be working with a binary classifier and the following notation:
S refers to the score given by the classifier, which is the probability of a certain subject to be in the positive or the negative class.
R represents the final classification predicted by the algorithm; its value is usually derived from S, for example R will be positive when S is above a certain threshold.
Y represents the actual outcome, that is, the real classification of the individual.
A denotes the sensitive attributes of the subjects.
==== Definitions based on predicted outcome ====
The definitions in this section focus on a predicted outcome R for various distributions of subjects. They are the simplest and most intuitive notions of fairness.
Demographic parity, also referred to as statistical parity, acceptance rate parity and benchmarking. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal probability of being assigned to the positive predicted class. This is, if the following formula is satisfied:
P(R = + | A = a) = P(R = + | A = b)  ∀ a, b ∈ A
Conditional statistical parity. This basically consists of the definition above, but restricted to a subset of the instances. In mathematical notation this would be:
P(R = + | L = l, A = a) = P(R = + | L = l, A = b)  ∀ a, b ∈ A, ∀ l ∈ L
==== Definitions based on predicted and actual outcomes ====
These definitions not only consider the predicted outcome R but also compare it to the actual outcome Y.
Predictive parity, also referred to as outcome test. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal PPV. This is, if the following formula is satisfied:
P(Y = + | R = +, A = a) = P(Y = + | R = +, A = b)  ∀ a, b ∈ A
Mathematically, if a classifier has equal PPV for both groups, it will also have equal FDR, satisfying the formula:
P(Y = − | R = +, A = a) = P(Y = − | R = +, A = b)  ∀ a, b ∈ A
False positive error rate balance, also referred to as predictive equality. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal FPR. This is, if the following formula is satisfied:
P(R = + | Y = −, A = a) = P(R = + | Y = −, A = b)  ∀ a, b ∈ A
Mathematically, if a classifier has equal FPR for both groups, it will also have equal TNR, satisfying the formula:
P(R = − | Y = −, A = a) = P(R = − | Y = −, A = b)  ∀ a, b ∈ A
False negative error rate balance, also referred to as equal opportunity. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal FNR. This is, if the following formula is satisfied:
{\displaystyle P(R=-\ |\ Y=+,A=a)=P(R=-\ |\ Y=+,A=b)\quad \forall a,b\in A}
Mathematically, if a classifier has equal FNR for both groups, it will also have equal TPR, satisfying the formula:
{\displaystyle P(R=+\ |\ Y=+,A=a)=P(R=+\ |\ Y=+,A=b)\quad \forall a,b\in A}
Equalized odds, also referred to as conditional procedure accuracy equality and disparate mistreatment. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal TPR and equal FPR, satisfying the formula:
{\displaystyle P(R=+\ |\ Y=y,A=a)=P(R=+\ |\ Y=y,A=b)\quad y\in \{+,-\}\quad \forall a,b\in A}
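To make the criterion concrete, equalized odds can be checked empirically by computing the TPR and FPR separately for each group. This is a minimal sketch in plain Python; the names are illustrative.

```python
def group_rates(Y, R, A):
    """Per-group TPR and FPR; equalized odds requires both to agree across groups."""
    out = {}
    for g in set(A):
        idx = [i for i, a in enumerate(A) if a == g]
        tp = sum(1 for i in idx if Y[i] == 1 and R[i] == 1)
        fn = sum(1 for i in idx if Y[i] == 1 and R[i] == 0)
        fp = sum(1 for i in idx if Y[i] == 0 and R[i] == 1)
        tn = sum(1 for i in idx if Y[i] == 0 and R[i] == 0)
        out[g] = {"TPR": tp / (tp + fn), "FPR": fp / (fp + tn)}
    return out

# Toy predictions with identical error profiles in both groups:
Y = [1, 1, 0, 0, 1, 1, 0, 0]
R = [1, 0, 1, 0, 1, 0, 1, 0]
A = ["a"] * 4 + ["b"] * 4
rates = group_rates(Y, R, A)
```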
Conditional use accuracy equality. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal PPV and equal NPV, satisfying the formula:
{\displaystyle P(Y=y\ |\ R=y,A=a)=P(Y=y\ |\ R=y,A=b)\quad y\in \{+,-\}\quad \forall a,b\in A}
Overall accuracy equality. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal prediction accuracy, that is, equal probability of a subject being assigned to its actual class. That is, if it satisfies the following formula:
{\displaystyle P(R=Y\ |\ A=a)=P(R=Y\ |\ A=b)\quad \forall a,b\in A}
Treatment equality. A classifier satisfies this definition if the subjects in the protected and unprotected groups have an equal ratio of FN and FP, satisfying the formula:
{\displaystyle {\frac {FN_{A=a}}{FP_{A=a}}}={\frac {FN_{A=b}}{FP_{A=b}}}}
==== Definitions based on predicted probabilities and actual outcome ====
These definitions are based on the actual outcome {\textstyle Y} and the predicted probability score {\textstyle S}.
Test-fairness, also known as calibration or matching conditional frequencies. A classifier satisfies this definition if individuals with the same predicted probability score {\textstyle S} have the same probability of being classified in the positive class when they belong to either the protected or the unprotected group:
{\displaystyle P(Y=+\ |\ S=s,A=a)=P(Y=+\ |\ S=s,A=b)\quad \forall s\in S\quad \forall a,b\in A}
Well-calibration is an extension of the previous definition. It states that when individuals inside or outside the protected group have the same predicted probability score {\textstyle S} they must have the same probability of being classified in the positive class, and this probability must be equal to {\textstyle S}:
{\displaystyle P(Y=+\ |\ S=s,A=a)=P(Y=+\ |\ S=s,A=b)=s\quad \forall s\in S\quad \forall a,b\in A}
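Both criteria can be checked empirically by tabulating the observed positive rate per (score, group) cell. The sketch below assumes the scores take a small set of discrete values; with continuous scores one would bin them first. All names are illustrative.

```python
def calibration_table(Y, S, A):
    """Observed positive rate P(Y=+ | S=s, A=g) for each (score s, group g).

    Test-fairness requires the rate to agree across groups for every s;
    well-calibration additionally requires the rate to equal s itself.
    """
    table = {}
    for s in set(S):
        for g in set(A):
            idx = [i for i in range(len(Y)) if S[i] == s and A[i] == g]
            if idx:
                table[(s, g)] = sum(Y[i] for i in idx) / len(idx)
    return table

# Toy data: in every (score, group) cell the observed rate equals the score,
# so the scores are well-calibrated for both groups.
S = [0.25] * 8 + [0.75] * 8
A = (["a"] * 4 + ["b"] * 4) * 2
Y = [1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0]
table = calibration_table(Y, S, A)
```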
Balance for positive class. A classifier satisfies this definition if the subjects constituting the positive class from both protected and unprotected groups have equal average predicted probability score {\textstyle S}. This means that the expected value of the probability score for the protected and unprotected groups with positive actual outcome {\textstyle Y} is the same, satisfying the formula:
{\displaystyle E(S\ |\ Y=+,A=a)=E(S\ |\ Y=+,A=b)\quad \forall a,b\in A}
Balance for negative class. A classifier satisfies this definition if the subjects constituting the negative class from both protected and unprotected groups have equal average predicted probability score {\textstyle S}. This means that the expected value of the probability score for the protected and unprotected groups with negative actual outcome {\textstyle Y} is the same, satisfying the formula:
{\displaystyle E(S\ |\ Y=-,A=a)=E(S\ |\ Y=-,A=b)\quad \forall a,b\in A}
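Both balance conditions reduce to comparing per-group mean scores within one actual class, which is a one-liner to estimate from data (a minimal sketch; names are illustrative):

```python
def mean_score(Y, S, A, label):
    """E[S | Y=label, A=g] for each group g (balance check for one class)."""
    means = {}
    for g in set(A):
        vals = [S[i] for i in range(len(Y)) if Y[i] == label and A[i] == g]
        means[g] = sum(vals) / len(vals)
    return means

# Toy data: both groups average 0.8 on the positive class and 0.3 on the
# negative class, so balance holds for both classes.
Y = [1, 1, 0, 0, 1, 1, 0, 0]
S = [0.9, 0.7, 0.2, 0.4, 0.8, 0.8, 0.3, 0.3]
A = ["a"] * 4 + ["b"] * 4
pos_means = mean_score(Y, S, A, 1)
neg_means = mean_score(Y, S, A, 0)
```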
=== Equal confusion fairness ===
With respect to confusion matrices, independence, separation, and sufficiency require the respective quantities listed below to not have statistically significant difference across sensitive characteristics.
Independence: (TP + FP) / (TP + FP + FN + TN) (i.e., {\displaystyle P({\hat {Y}}=1)}).
Separation: TN / (TN + FP) and TP / (TP + FN) (i.e., specificity {\displaystyle P({\hat {Y}}=0\mid Y=0)} and recall {\displaystyle P({\hat {Y}}=1\mid Y=1)}).
Sufficiency: TP / (TP + FP) and TN / (TN + FN) (i.e., precision {\displaystyle P(Y=1\mid {\hat {Y}}=1)} and negative predictive value {\displaystyle P(Y=0\mid {\hat {Y}}=0)}).
The notion of equal confusion fairness requires the confusion matrix of a given decision system to have the same distribution when computed stratified over all sensitive characteristics.
=== Social welfare function ===
Some scholars have proposed defining algorithmic fairness in terms of a social welfare function. They argue that using a social welfare function enables an algorithm designer to consider fairness and predictive accuracy in terms of their benefits to the people affected by the algorithm. It also allows the designer to trade off efficiency and equity in a principled way. Sendhil Mullainathan has stated that algorithm designers should use social welfare functions to recognize absolute gains for disadvantaged groups. For example, a study found that using a decision-making algorithm in pretrial detention rather than pure human judgment reduced the detention rates for Blacks, Hispanics, and racial minorities overall, even while keeping the crime rate constant.
== Individual fairness criteria ==
An important distinction among fairness definitions is the one between group and individual notions. Roughly speaking, while group fairness criteria compare quantities at a group level, typically identified by sensitive attributes (e.g. gender, ethnicity, age, etc.), individual criteria compare individuals. In other words, individual fairness follows the principle that "similar individuals should receive similar treatments".
There is a very intuitive approach to fairness, which usually goes under the name of fairness through unawareness (FTU), or blindness, that prescribes not to explicitly employ sensitive features when making (automated) decisions. This is effectively a notion of individual fairness, since two individuals differing only for the value of their sensitive attributes would receive the same outcome.
However, in general, FTU is subject to several drawbacks, the main one being that it does not take into account possible correlations between sensitive attributes and the non-sensitive attributes employed in the decision-making process. For example, an agent with the (malignant) intention to discriminate on the basis of gender could introduce into the model a proxy variable for gender (i.e. a variable highly correlated with gender), effectively using gender information while at the same time remaining compliant with the FTU prescription.
The problem of which variables correlated to sensitive ones are fairly employable by a model in the decision-making process is a crucial one, and is relevant for group concepts as well: independence metrics require a complete removal of sensitive information, while separation-based metrics allow for correlation, but only insofar as the labeled target variable "justifies" it.
The most general concept of individual fairness was introduced in the pioneering work by Cynthia Dwork and collaborators in 2012 and can be thought of as a mathematical translation of the principle that the decision map taking features as input should be built such that it is able to "map similar individuals similarly", which is expressed as a Lipschitz condition on the model map. They call this approach fairness through awareness (FTA), precisely as a counterpoint to FTU, since they underline the importance of choosing the appropriate target-related distance metric to assess which individuals are similar in specific situations. Again, this problem is closely related to the point raised above about which variables can be seen as "legitimate" in particular contexts.
== Causality-based metrics ==
Causal fairness measures the frequency with which two nearly identical users or applications who differ only in a set of characteristics with respect to which resource allocation must be fair receive identical treatment.
An entire branch of the academic research on fairness metrics is devoted to leveraging causal models to assess bias in machine learning models. This approach is usually justified by the fact that the same observational distribution of data may hide different causal relationships among the variables at play, possibly with different interpretations of whether the outcome is affected by some form of bias or not.
Kusner et al. propose to employ counterfactuals, and define a decision-making process counterfactually fair if, for any individual, the outcome does not change in the counterfactual scenario where the sensitive attributes are changed. The mathematical formulation reads:
{\displaystyle P(R_{A\leftarrow a}=1\mid A=a,X=x)=P(R_{A\leftarrow b}=1\mid A=a,X=x),\quad \forall a,b;}
that is: for a random individual with sensitive attribute {\displaystyle A=a} and other features {\displaystyle X=x}, and the same individual had she had {\displaystyle A=b}, the chance of being accepted should be the same.
The symbol {\displaystyle {\hat {R}}_{A\leftarrow a}} represents the counterfactual random variable {\displaystyle R} in the scenario where the sensitive attribute {\displaystyle A} is fixed to {\displaystyle A=a}. The conditioning on {\displaystyle A=a,X=x} means that this requirement is at the individual level, in that we are conditioning on all the variables identifying a single observation.
Machine learning models are often trained upon data where the outcome depended on the decision made at that time. For example, if a machine learning model has to determine whether an inmate will recidivate and will determine whether the inmate should be released early, the outcome could be dependent on whether the inmate was released early or not. Mishler et al. propose a formula for counterfactual equalized odds:
{\displaystyle P(R=1\mid Y^{0}=0,A=a)=P(R=1\mid Y^{0}=0,A=b)\wedge P(R=0\mid Y^{1}=1,A=a)=P(R=0\mid Y^{1}=1,A=b),\quad \forall a,b;}
where {\displaystyle R} is a random variable, {\displaystyle Y^{x}} denotes the outcome given that the decision {\displaystyle x} was taken, and {\displaystyle A} is a sensitive feature.
Plecko and Bareinboim propose a unified framework to deal with causal analysis of fairness. They suggest the use of a Standard Fairness Model, consisting of a causal graph with 4 types of variables:
sensitive attributes ({\displaystyle A}),
target variable ({\displaystyle Y}),
mediators ({\displaystyle W}) between {\displaystyle A} and {\displaystyle Y}, representing possible indirect effects of sensitive attributes on the outcome,
variables possibly sharing a common cause with {\displaystyle A} ({\displaystyle Z}), representing possible spurious (i.e., non-causal) effects of the sensitive attributes on the outcome.
Within this framework, Plecko and Bareinboim are therefore able to classify the possible effects that sensitive attributes may have on the outcome.
Moreover, the granularity at which these effects are measured—namely, the conditioning variables used to average the effect—is directly connected to the "individual vs. group" aspect of fairness assessment.
== Bias mitigation strategies ==
Fairness can be applied to machine learning algorithms in three different ways: data preprocessing, optimization during software training, or post-processing results of the algorithm.
=== Preprocessing ===
Usually, the classifier is not the only problem; the dataset is also biased. The discrimination of a dataset {\textstyle D} with respect to the group {\textstyle A=a} can be defined as follows:
{\displaystyle disc_{A=a}(D)={\frac {|\{X\in D|X(A)\neq a,X(Y)=+\}|}{|\{X\in D|X(A)\neq a\}|}}-{\frac {|\{X\in D|X(A)=a,X(Y)=+\}|}{|\{X\in D|X(A)=a\}|}}}
That is, an approximation of the difference between the probability of belonging to the positive class given that the subject has a protected characteristic different from {\textstyle a} and the same probability given that it is equal to {\textstyle a}.
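With the dataset represented as (sensitive value, label) pairs, this measure is straightforward to compute. The sketch below is illustrative; the record format is an assumption, not a prescribed one.

```python
def disc(D, a):
    """disc_{A=a}(D) for records (sensitive value, label), labels in {'+', '-'}."""
    unprot = [y for g, y in D if g != a]
    prot = [y for g, y in D if g == a]
    return unprot.count('+') / len(unprot) - prot.count('+') / len(prot)

# 3 of 4 unprotected records are positive vs. 1 of 4 protected ones,
# so the dataset discriminates against group 'a' by 0.5.
D = [('b', '+'), ('b', '+'), ('b', '+'), ('b', '-'),
     ('a', '+'), ('a', '-'), ('a', '-'), ('a', '-')]
```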
Algorithms correcting bias at preprocessing remove information about dataset variables which might result in unfair decisions, while trying to alter as little as possible. This is not as simple as just removing the sensitive variable, because other attributes can be correlated to the protected one.
A way to do this is to map each individual in the initial dataset to an intermediate representation in which it is impossible to identify whether it belongs to a particular protected group while maintaining as much information as possible. Then, the new representation of the data is adjusted to get the maximum accuracy in the algorithm.
This way, individuals are mapped into a new multivariable representation where the probability of any member of a protected group to be mapped to a certain value in the new representation is the same as the probability of an individual which doesn't belong to the protected group. Then, this representation is used to obtain the prediction for the individual, instead of the initial data. As the intermediate representation is constructed giving the same probability to individuals inside or outside the protected group, this attribute is hidden to the classifier.
An example is explained in Zemel et al. where a multinomial random variable is used as an intermediate representation. In the process, the system is encouraged to preserve all information except that which can lead to biased decisions, and to obtain a prediction as accurate as possible.
On the one hand, this procedure has the advantage that the preprocessed data can be used for any machine learning task. Furthermore, the classifier does not need to be modified, as the correction is applied to the dataset before processing. On the other hand, the other methods obtain better results in accuracy and fairness.
==== Reweighing ====
Reweighing is an example of a preprocessing algorithm. The idea is to assign a weight to each dataset point such that the weighted discrimination is 0 with respect to the designated group.
If the dataset {\textstyle D} were unbiased, the sensitive variable {\textstyle A} and the target variable {\textstyle Y} would be statistically independent, and the probability of the joint distribution would be the product of the probabilities as follows:
{\displaystyle P_{exp}(A=a\wedge Y=+)=P(A=a)\times P(Y=+)={\frac {|\{X\in D|X(A)=a\}|}{|D|}}\times {\frac {|\{X\in D|X(Y)=+\}|}{|D|}}}
In reality, however, the dataset is not unbiased and the variables are not statistically independent so the observed probability is:
{\displaystyle P_{obs}(A=a\wedge Y=+)={\frac {|\{X\in D|X(A)=a\wedge X(Y)=+\}|}{|D|}}}
To compensate for the bias, the software adds a weight, lower for favored objects and higher for unfavored objects. For each {\textstyle X\in D} we get:
{\displaystyle W(X)={\frac {P_{exp}(A=X(A)\wedge Y=X(Y))}{P_{obs}(A=X(A)\wedge Y=X(Y))}}}
Once we have a weight {\textstyle W(X)} associated with each {\textstyle X}, we compute the weighted discrimination with respect to group {\textstyle A=a} as follows:
{\displaystyle disc_{A=a}(D)={\frac {\sum _{X\in \{X\in D|X(A)\neq a,X(Y)=+\}}W(X)}{\sum _{X\in \{X\in D|X(A)\neq a\}}W(X)}}-{\frac {\sum _{X\in \{X\in D|X(A)=a,X(Y)=+\}}W(X)}{\sum _{X\in \{X\in D|X(A)=a\}}W(X)}}}
It can be shown that after reweighting this weighted discrimination is 0.
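The whole procedure can be verified numerically: computing W(X) as the ratio of expected to observed joint probabilities and re-evaluating the discrimination measure with those weights drives it to zero. A minimal sketch, with the (sensitive value, label) record format as an illustrative assumption:

```python
def reweigh(D):
    """Weights W(X) = P_exp / P_obs for records (sensitive value, label)."""
    n = len(D)
    pA = {g: sum(1 for a, _ in D if a == g) / n for g, _ in D}
    pY = {y: sum(1 for _, b in D if b == y) / n for _, y in D}
    pAY = {(g, y): sum(1 for rec in D if rec == (g, y)) / n for g, y in D}
    return [pA[g] * pY[y] / pAY[(g, y)] for g, y in D]

def weighted_disc(D, W, a):
    """Weighted version of disc_{A=a}(D) using per-record weights W."""
    num_u = sum(w for (g, y), w in zip(D, W) if g != a and y == '+')
    den_u = sum(w for (g, y), w in zip(D, W) if g != a)
    num_p = sum(w for (g, y), w in zip(D, W) if g == a and y == '+')
    den_p = sum(w for (g, y), w in zip(D, W) if g == a)
    return num_u / den_u - num_p / den_p

# Biased toy dataset: group 'b' receives '+' three times as often as group 'a'.
D = [('b', '+'), ('b', '+'), ('b', '+'), ('b', '-'),
     ('a', '+'), ('a', '-'), ('a', '-'), ('a', '-')]
W = reweigh(D)  # favored records get weight < 1, unfavored records > 1
```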
=== Inprocessing ===
Another approach is to correct the bias at training time. This can be done by adding constraints to the optimization objective of the algorithm. These constraints force the algorithm to improve fairness, by keeping the same rates of certain measures for the protected group and the rest of individuals. For example, we can add to the objective of the algorithm the condition that the false positive rate is the same for individuals in the protected group and the ones outside the protected group.
The main measures used in this approach are the false positive rate, the false negative rate, and the overall misclassification rate. It is possible to add just one or several of these constraints to the objective of the algorithm. Note that the equality of false negative rates implies the equality of true positive rates, so this implies equality of opportunity. After adding the constraints the problem may become intractable, so a relaxation of them may be needed.
==== Adversarial debiasing ====
We train two classifiers at the same time through some gradient-based method (e.g., gradient descent). The first one, the predictor, tries to accomplish the task of predicting {\textstyle Y}, the target variable, given {\textstyle X}, the input, by modifying its weights {\textstyle W} to minimize some loss function {\textstyle L_{P}({\hat {y}},y)}. The second one, the adversary, tries to accomplish the task of predicting {\textstyle A}, the sensitive variable, given {\textstyle {\hat {Y}}}, by modifying its weights {\textstyle U} to minimize some loss function {\textstyle L_{A}({\hat {a}},a)}.
An important point here is that, to propagate correctly, {\textstyle {\hat {Y}}} above must refer to the raw output of the classifier, not the discrete prediction; for example, with an artificial neural network and a classification problem, {\textstyle {\hat {Y}}} could refer to the output of the softmax layer.
Then we update {\textstyle U} to minimize {\textstyle L_{A}} at each training step according to the gradient {\textstyle \nabla _{U}L_{A}}, and we modify {\textstyle W} according to the expression:
{\displaystyle \nabla _{W}L_{P}-proj_{\nabla _{W}L_{A}}\nabla _{W}L_{P}-\alpha \nabla _{W}L_{A}}
where {\textstyle \alpha } is a tunable hyperparameter that can vary at each time step.
The intuitive idea is that we want the predictor to try to minimize {\textstyle L_{P}} (therefore the term {\textstyle \nabla _{W}L_{P}}) while, at the same time, maximizing {\textstyle L_{A}} (therefore the term {\textstyle -\alpha \nabla _{W}L_{A}}), so that the adversary fails at predicting the sensitive variable from {\textstyle {\hat {Y}}}.
The term {\textstyle -proj_{\nabla _{W}L_{A}}\nabla _{W}L_{P}} prevents the predictor from moving in a direction that helps the adversary decrease its loss function.
It can be shown that training a predictor classification model with this algorithm improves demographic parity with respect to training it without the adversary.
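The update direction for W can be sketched with plain vector arithmetic (lists standing in for weight vectors). This is a minimal illustration of the expression above, not a full training loop; the function name is illustrative.

```python
def predictor_update_direction(grad_P, grad_A, alpha):
    """Compute grad_P - proj_{grad_A}(grad_P) - alpha * grad_A for list vectors."""
    dot = sum(p * a for p, a in zip(grad_P, grad_A))
    norm2 = sum(a * a for a in grad_A)
    proj = [(dot / norm2) * a for a in grad_A]  # projection of grad_P onto grad_A
    return [p - pr - alpha * a for p, pr, a in zip(grad_P, proj, grad_A)]

# With alpha = 0 the result has no component along grad_A, so a step in this
# direction does not help the adversary decrease its loss.
update = predictor_update_direction([1.0, 1.0], [1.0, 0.0], 0.0)
```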
=== Postprocessing ===
The final method tries to correct the results of a classifier to achieve fairness. In this method, we have a classifier that returns a score for each individual and we need to do a binary prediction for them. High scores are likely to get a positive outcome, while low scores are likely to get a negative one, but we can adjust the threshold to determine when to answer yes as desired. Note that variations in the threshold value affect the trade-off between the rates for true positives and true negatives.
If the score function is fair in the sense that it is independent of the protected attribute, then any choice of the threshold will also be fair, but classifiers of this type tend to be biased, so a different threshold may be required for each protected group to achieve fairness. A way to do this is to plot the true positive rate against the false negative rate at various threshold settings (the ROC curve) and find a threshold where the rates for the protected group and other individuals are equal.
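As an illustration of per-group thresholds, the sketch below picks, for each group, the score cutoff that yields a common selection rate. Note this equalizes positive rates (a statistical-parity-style criterion); equalizing TPRs instead would apply the same idea restricted to actual positives. Names are illustrative.

```python
def fit_group_thresholds(S, A, target_rate):
    """Per-group score threshold giving each group roughly the same positive rate."""
    thresholds = {}
    for g in set(A):
        scores = sorted((s for s, a in zip(S, A) if a == g), reverse=True)
        k = round(target_rate * len(scores))  # number of positives to grant in g
        thresholds[g] = scores[k - 1] if k > 0 else float("inf")
    return thresholds

# Group "a" has systematically higher scores, so it gets a higher cutoff.
S = [0.9, 0.8, 0.3, 0.1, 0.7, 0.6, 0.4, 0.2]
A = ["a"] * 4 + ["b"] * 4
thr = fit_group_thresholds(S, A, 0.5)
R = [1 if s >= thr[a] else 0 for s, a in zip(S, A)]
```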
==== Reject option based classification ====
Given a classifier, let {\textstyle P(+|X)} be the probability computed by the classifier that the instance {\textstyle X} belongs to the positive class +. When {\textstyle P(+|X)} is close to 1 or to 0, the instance {\textstyle X} is specified with a high degree of certainty to belong to class + or − respectively. However, when {\textstyle P(+|X)} is closer to 0.5, the classification is more unclear.
We say {\textstyle X} is a "rejected instance" if {\textstyle max(P(+|X),1-P(+|X))\leq \theta } for a certain {\textstyle \theta } such that {\textstyle 0.5<\theta <1}.
The algorithm of "ROC" consists of classifying the non-rejected instances following the rule above, and the rejected instances as follows: if the instance is an example of a deprived group ({\displaystyle X(A)=a}), then label it as positive; otherwise, label it as negative.
We can optimize different measures of discrimination as functions of {\textstyle \theta } to find the optimal {\textstyle \theta } for each problem and avoid becoming discriminatory against the privileged group.
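A minimal sketch of the reject-option rule described above (the label and group encodings are illustrative assumptions):

```python
def reject_option_classify(probs, groups, theta, deprived):
    """Keep the classifier's label for confident instances; relabel rejected
    (low-confidence) instances in favour of the deprived group."""
    labels = []
    for p, g in zip(probs, groups):
        if max(p, 1 - p) <= theta:   # rejected: classification is uncertain
            labels.append('+' if g == deprived else '-')
        else:                        # confident: standard decision rule
            labels.append('+' if p >= 0.5 else '-')
    return labels

# Two confident instances keep their labels; two uncertain ones are relabeled
# positive for deprived group 'a' and negative for group 'b'.
labels = reject_option_classify([0.9, 0.55, 0.45, 0.1],
                                ['a', 'a', 'b', 'b'], theta=0.6, deprived='a')
```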
== See also ==
Algorithmic bias
Machine learning
Representational harm
== References == | Wikipedia/Algorithmic_fairness |
Artificial intelligence (AI) has been used in applications throughout industry and academia. In a manner analogous to electricity or computers, AI serves as a general-purpose technology. AI programs are designed to simulate human perception and understanding. These systems are capable of adapting to new information and responding to changing situations. Machine learning has been used for various scientific and commercial purposes including language translation, image recognition, decision-making, credit scoring, and e-commerce.
== Agriculture ==
In agriculture, AI has been proposed as a way for farmers to identify areas that need irrigation, fertilization, or pesticide treatments to increase yields, thereby improving efficiency. AI has been used to attempt to classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and optimize irrigation.
== Architecture & Design ==
AI in architecture gives architects ways to explore designs beyond what they could produce unaided. Machine-learning text-to-image technologies, such as DALL-E and Stable Diffusion, make it possible to visualize complex designs from textual prompts.
AI allows designers to demonstrate their creativity and even invent new ideas while designing. AI is not expected to replace architects; instead, it can speed up the translation of ideas into sketches.
== Business ==
=== Content extraction ===
An optical character reader is used in the extraction of data in business documents like invoices and receipts. It can also be used in business contract documents e.g. employment agreements to extract critical data like employment terms, delivery terms, termination clauses, etc.
== Computer science ==
=== Programming assistance ===
==== AI-powered code assisting tools ====
AI can be used for real-time code completion, chat, and automated test generation. These tools are typically integrated with editors and IDEs as plugins. They differ in functionality, quality, speed, and approach to privacy. Code suggestions can be incorrect and should be carefully reviewed by software developers before being accepted.
GitHub Copilot is an artificial intelligence model developed by GitHub and OpenAI that can autocomplete code in multiple programming languages. The price for individuals is $10/month or $100/year, with a one-month free trial.
Tabnine was created by Jacob Jackson and was originally owned by the Tabnine company. In late 2019, Tabnine was acquired by Codota. The Tabnine tool is available as a plugin for most popular IDEs. It offers multiple pricing options, including a limited free "starter" version.
CodiumAI, a small startup in Tel Aviv, offers automated test creation. It currently supports Python, JavaScript, and TypeScript.
Ghostwriter by Replit offers code completion and chat. They have multiple pricing plans, including a free one and a "Hacker" plan for $7/month.
CodeWhisperer by Amazon collects individual users' content, including files open in the IDE. Amazon claims to focus on security both during transmission and in storage. The individual plan is free; the professional plan is $19 per user per month.
Other tools: Sourcegraph Cody, CodeComplete, FauxPilot, Tabby
==== Neural network design ====
AI can be used to create other AIs. For example, around November 2017, Google's AutoML project to evolve new neural net topologies created NASNet, a system optimized for ImageNet and COCO. NASNet's performance exceeded all previously published performance on ImageNet.
==== Quantum computing ====
Machine learning has been used for noise-cancelling in quantum technology, including quantum sensors. There is also substantial research and development on combining quantum computers with machine learning algorithms. For example, there is a prototype photonic quantum memristive device for neuromorphic (quantum) computers and artificial neural networks, as well as neuromorphic computing based on quantum materials with a variety of potential applications, and quantum machine learning is a field with a range of applications under development. AI could be used for quantum simulators, which may help solve physics and chemistry problems, as well as for quantum annealers used to train neural networks for AI applications. There may also be uses in chemistry, e.g. for drug discovery, and in materials science, e.g. for materials optimization and discovery (with possible relevance to quantum materials manufacturing).
=== Historical contributions ===
AI researchers have created many tools to solve the most difficult problems in computer science. Many of their inventions have been adopted by mainstream computer science and are no longer considered AI. All of the following were originally developed in AI laboratories:
Time sharing
Interactive interpreters
Graphical user interfaces and the computer mouse
Rapid application development environments
The linked list data structure
Automatic storage management
Symbolic programming
Functional programming
Dynamic programming
Object-oriented programming
Optical character recognition
Constraint satisfaction
== Customer Service ==
=== Human resources ===
Another application of AI is in human resources. AI can screen resumes and rank candidates based on their qualifications, predict candidate success in given roles, and automate repetitive communication tasks via chatbots.
=== Job search ===
AI has simplified the recruiting and job-search process for both recruiters and job seekers. According to Raj Mukherjee of Indeed, 65% of job seekers search again within 91 days of being hired. An AI-powered engine streamlines the complexity of job hunting by assessing information on job skills, salaries, and user tendencies, matching job seekers to the most relevant positions. Machine intelligence calculates appropriate wages and highlights resume information for recruiters using NLP, which extracts relevant words and phrases from text. Another application is an AI resume builder that compiles a CV in minutes. Chatbots assist website visitors and refine workflows.
=== Online and telephone customer service ===
AI underlies avatars (automated online assistants) on web pages. It can reduce operation and training costs. Pypestream automated customer service for its mobile application to streamline communication with customers.
A Google app analyzes language and converts speech into text. The platform can identify angry customers through their language and respond appropriately. Amazon uses a chatbot for customer service that can perform tasks like checking the status of an order, cancelling orders, offering refunds and connecting the customer with a human representative. Generative AI (GenAI), such as ChatGPT, is increasingly used in business to automate tasks and enhance decision-making.
=== Hospitality ===
In the hospitality industry, AI is used to reduce repetitive tasks, analyze trends, interact with guests, and predict customer needs. AI hotel services come in the form of a chatbot, application, virtual voice assistant and service robots.
== Education ==
AI can elevate teaching by addressing significant issues such as knowledge access and educational equality. AI in education should be used to augment human capabilities rather than replace teachers. UNESCO recognizes AI in education as an instrument to reach Sustainable Development Goal 4, "Inclusive and Equitable Quality Education."
The World Economic Forum also stresses AI's contribution to students' overall improvement and transforming teaching into a more enjoyable process.
=== Personalized Learning ===
AI-driven tutoring systems, such as Khan Academy, Duolingo and Carnegie Learning, are at the forefront of delivering personalized education.
These platforms leverage AI algorithms to analyze individual learning patterns, strengths, and weaknesses, enabling the customization of content and pacing to suit each student's style of learning.
=== Administrative Efficiency ===
In educational institutions, AI is increasingly used to automate routine tasks like attendance tracking, grading and marking, which allows educators to devote more time to interactive teaching and direct student engagement.
Furthermore, AI tools are employed to monitor student progress, analyze learning behaviors, and predict academic challenges, facilitating timely and proactive interventions for students who may be at risk of falling behind.
=== Ethical and Privacy Concerns ===
Despite the benefits, the integration of AI in education raises significant ethical and privacy concerns, particularly regarding the handling of sensitive student data.
It is imperative that AI systems in education are designed and operated with a strong emphasis on transparency, security, and respect for privacy to maintain trust and uphold the integrity of educational practices.
Much of the regulation will be influenced by the AI Act, the world's first comprehensive AI law.
== Energy & Environment ==
=== Energy system ===
Power electronics converters are used in renewable energy, energy storage, electric vehicles and high-voltage direct current transmission. These converters are failure-prone, which can interrupt service, require costly maintenance, or have catastrophic consequences in mission-critical applications. AI can guide the design process for reliable power electronics converters by calculating exact design parameters that ensure the required lifetime.
The U.S. Department of Energy underscores AI's pivotal role in realizing national climate goals. With AI, the ambitious target of achieving net-zero greenhouse gas emissions across the economy becomes feasible. AI also helps make room for wind and solar on the grid by avoiding congestion and increasing grid reliability.
Machine learning can be used for energy consumption prediction and scheduling, e.g. to help with renewable energy intermittency management (see also: smart grid and climate change mitigation in the power grid).
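The consumption-prediction idea above can be sketched with a simple least-squares trend fit; the hourly load figures below are invented for illustration, and real forecasting models use far richer features (weather, seasonality, calendar effects).

```python
# Minimal sketch: forecasting energy demand with ordinary least squares.
# The hourly load figures are illustrative, not real grid data.

def fit_line(xs, ys):
    """Return slope and intercept minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

hours = [0, 1, 2, 3, 4, 5]
load_mw = [100.0, 103.0, 106.0, 109.0, 112.0, 115.0]  # steadily rising demand

slope, intercept = fit_line(hours, load_mw)
forecast_hour6 = slope * 6 + intercept
print(round(forecast_hour6, 1))  # 118.0 for this perfectly linear toy series
```

A grid operator could use such a forecast to schedule dispatchable generation ahead of predicted demand peaks.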
=== Environmental monitoring ===
Autonomous ships that monitor the ocean, AI-driven satellite data analysis, passive acoustics or remote sensing and other applications of environmental monitoring make use of machine learning.
For example, "Global Plastic Watch" is an AI-based satellite monitoring platform for analyzing and tracking plastic waste sites, which helps prevent plastic pollution – primarily ocean pollution – by identifying where and by whom plastic waste is mismanaged and dumped into the ocean.
=== Early-warning systems ===
Machine learning can be used to spot early-warning signs of disasters and environmental issues, possibly including natural pandemics, earthquakes, landslides, heavy rainfall, long-term water supply vulnerability, tipping-points of ecosystem collapse, cyanobacterial bloom outbreaks, and droughts.
=== Economic and social challenges ===
AI for Good is a platform launched in 2017 by the International Telecommunication Union (ITU) agency of the United Nations (UN). The goal of the platform is to use AI to help achieve the UN's Sustainable Development Goals.
The University of Southern California launched the Center for Artificial Intelligence in Society, with the goal of using AI to address problems such as homelessness. Stanford researchers use AI to analyze satellite images to identify high poverty areas.
== Entertainment & Media ==
=== Media ===
AI applications analyze media content such as movies, TV programs, advertisement videos or user-generated content. The solutions often involve computer vision.
Typical scenarios include the analysis of images using object recognition or face recognition techniques, or the analysis of video to recognize scenes, objects or faces. AI-based media analysis can facilitate media search, the creation of descriptive keywords for content, content policy monitoring (such as verifying the suitability of content for a particular TV viewing time), speech to text for archival or other purposes, and the detection of logos, products or celebrity faces for ad placement.
Motion interpolation
Pixel-art scaling algorithms
Image scaling
Image restoration
Photo colorization
Film restoration and video upscaling
Photo tagging
Automated species identification (such as identifying plants, fungi and animals with an app)
Text-to-image models such as DALL-E, Midjourney and Stable Diffusion
Image to video
Text to video such as Make-A-Video from Meta, Imagen video and Phenaki from Google
Text to music with AI models such as MusicLM
Text to speech such as ElevenLabs and 15.ai
Motion capture
Make image transparent
=== Deep-fakes ===
Deep-fakes can be used for comedic purposes but are better known for fake news and hoaxes.
Deepfakes can portray individuals in harmful or compromising situations, causing significant reputational damage and emotional distress, especially when the content is defamatory or violates personal ethics. While defamation and false light laws offer some recourse, their focus on false statements rather than fabricated images or videos often leaves victims with limited legal protection and a challenging burden of proof.
In January 2016, the Horizon 2020 program financed the InVID Project to help journalists and researchers detect fake documents, made available as browser plugins.
In June 2016, the visual computing group of the Technical University of Munich and Stanford University developed Face2Face, a program that animates photographs of faces, mimicking the facial expressions of another person. The technology has been demonstrated animating the faces of people including Barack Obama and Vladimir Putin. Other methods have been demonstrated based on deep neural networks, from which the name deep fake was taken.
In September 2018, U.S. Senator Mark Warner proposed to penalize social media companies that allow sharing of deep-fake documents on their platforms.
In 2018, Darius Afchar and Vincent Nozick found a way to detect faked content by analyzing the mesoscopic properties of video frames. DARPA gave 68 million dollars to work on deep-fake detection.
Audio deepfakes and AI software capable of detecting deep-fakes and cloning human voices have been developed.
Respeecher is a program that enables one person to speak with the voice of another.
=== Video surveillance analysis and manipulated media detection ===
AI algorithms have been used to detect deepfake videos.
=== Video production ===
Artificial intelligence is also starting to be used in video production, with tools and software being developed that use generative AI to create new video or alter existing video. Some of the major tools currently used in these processes are DALL-E, Midjourney, and Runway. Waymark Studios used tools from both DALL-E and Midjourney to create a fully AI-generated film called The Frost in the summer of 2023, and is experimenting with using these AI tools to generate advertisements and commercials for companies in mere seconds. Yves Bergquist, director of the AI & Neuroscience in Media Project at USC's Entertainment Technology Center, says post-production crews in Hollywood are already using generative AI, and predicts that more companies will embrace the technology in the future.
=== Music ===
AI has been used to compose music of various genres.
David Cope created an AI called Emily Howell that managed to become well known in the field of algorithmic computer music. The algorithm behind Emily Howell is registered as a US patent.
In 2012, the AI Iamus created the first complete classical album composed by a computer.
AIVA (Artificial Intelligence Virtual Artist), composes symphonic music, mainly classical music for film scores. It achieved a world first by becoming the first virtual composer to be recognized by a musical professional association.
Melomics creates computer-generated music for stress and pain relief.
At Sony CSL Research Laboratory, the Flow Machines software creates pop songs by learning music styles from a huge database of songs. It can compose in multiple styles.
The Watson Beat uses reinforcement learning and deep belief networks to compose music on a simple seed input melody and a select style. The software was open sourced and musicians such as Taryn Southern collaborated with the project to create music.
South Korean singer Hayeon's debut song, "Eyes on You", was composed using AI under the supervision of human composers, including NUVO.
=== Writing and reporting ===
Narrative Science sells computer-generated news and reports. It summarizes sporting events based on statistical data from the game. It also creates financial reports and real estate analyses. Automated Insights generates personalized recaps and previews for Yahoo Sports Fantasy Football.
Yseop uses AI to turn structured data into natural language comments and recommendations. Yseop writes financial reports, executive summaries, personalized sales or marketing documents and more in multiple languages, including English, Spanish, French, and German.
TALESPIN made up stories similar to the fables of Aesop. The program started with a set of characters who wanted to achieve certain goals. The story narrated their attempts to satisfy these goals. Mark Riedl and Vadim Bulitko asserted that the essence of storytelling was experience management, or "how to balance the need for a coherent story progression with user agency, which is often at odds".
While AI storytelling focuses on story generation (character and plot), story communication also received attention. In 2002, researchers developed an architectural framework for narrative prose generation. They faithfully reproduced text variety and complexity on stories such as Little Red Riding Hood. In 2016, a Japanese AI co-wrote a short story and almost won a literary prize.
South Korean company Hanteo Global uses a journalism bot to write articles.
Literary authors are also exploring uses of AI. An example is David Jhave Johnston's work ReRites (2017–2019), where the poet created a daily rite of editing the poetic output of a neural network to create a series of performances and publications.
==== Sports writing ====
In 2010, artificial intelligence used baseball statistics to automatically generate news articles. This was launched by The Big Ten Network using software from Narrative Science.
After being unable to cover every Minor League Baseball game with a large team, Associated Press collaborated with Automated Insights in 2016 to create game recaps that were automated by artificial intelligence.
UOL in Brazil expanded the use of AI in its writing. Rather than just generating news stories, they programmed the AI to include commonly searched words on Google.
El Pais, a Spanish news site that covers many things including sports, allows users to make comments on each news article. They use the Perspective API to moderate these comments and if the software deems a comment to contain toxic language, the commenter must modify it in order to publish it.
A local Dutch media group used AI to create automatic coverage of amateur soccer, set to cover 60,000 games in a single season. NDC partnered with United Robots to create this algorithm and cover a volume of games that would never have been possible even with an extremely large team.
In 2023, Lede AI was used to take scores from high school football games and automatically generate stories for local newspapers. The robotic diction of the published stories, including descriptions of games as a "close encounter of the athletic kind," drew significant criticism from readers, who let the publishing company, Gannett, know on social media. Gannett has since halted its use of Lede AI until it comes up with a solution for what it calls an experiment.
=== Wikipedia ===
Millions of Wikipedia articles have been edited by bots, though these bots are usually not artificial intelligence software. Many AI platforms use Wikipedia data, mainly for training machine learning applications. There is research and development of various artificial intelligence applications for Wikipedia, such as identifying outdated sentences, detecting covert vandalism, or recommending articles and tasks to new editors.
Machine translation (see above) has also been used for translating Wikipedia articles and could play a larger role in creating, updating, expanding, and generally improving articles in the future. A content translation tool allows editors of some Wikipedias to more easily translate articles across several select languages.
=== Video games ===
In video games, AI is routinely used to generate behavior in non-player characters (NPCs). In addition, AI is used for pathfinding. Some researchers consider NPC AI in games to be a "solved problem" for most production tasks. Games with less typical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010). AI is also used in Alien Isolation (2014) as a way to control the actions the Alien will perform next.
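The pathfinding mentioned above is commonly implemented with the A* search algorithm; the following is a minimal sketch on a toy grid map (invented for the example), with a Manhattan-distance heuristic guiding the search around walls.

```python
# Minimal sketch of A* pathfinding on a grid, the kind of search often used
# to move NPCs around obstacles. The level layout below is a toy example.
import heapq

def astar(grid, start, goal):
    """Return the length of the shortest 4-connected path, or -1 if none."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]          # (f-score, g-score, cell)
    best = {start: 0}                          # cheapest known cost per cell
    while open_set:
        _, g, (r, c) = heapq.heappop(open_set)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = g + 1
                    heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return -1

level = [[0, 0, 0],
         [1, 1, 0],   # 1 = wall
         [0, 0, 0]]
print(astar(level, (0, 0), (2, 0)))  # 6: the NPC must route around the wall
```

The heuristic never overestimates the true distance, so A* finds an optimal path while exploring fewer cells than an uninformed search.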
Games have been a major application of AI's capabilities since the 1950s. In the 21st century, AIs have beaten human players in many games, including chess (Deep Blue), Jeopardy! (Watson), Go (AlphaGo), poker (Pluribus and Cepheus), E-sports (StarCraft), and general game playing (AlphaZero and MuZero).
Kuki AI is a set of chatbots and other apps which were designed for entertainment and as a marketing tool. Character.ai is another example of a chatbot being used for recreation.
Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from AI research.
=== Art ===
AI has been used to produce visual art. The first AI art program, called AARON, was developed by Harold Cohen in 1968 with the goal of being able to code the act of drawing. It started by creating simple black and white drawings, and later progressed to painting using special brushes and dyes that the program chose itself without mediation from Cohen.
AI platforms such as DALL-E, Stable Diffusion, Imagen, and Midjourney have been used for generating visual images from inputs such as text or other images. Some AI tools allow users to input images and output changed versions of that image, such as to display an object or product in different environments. AI image models can also attempt to replicate the specific styles of artists, and can add visual complexity to rough sketches.
Since their design in 2014, generative adversarial networks (GANs) have been used by AI artists. GAN programs generate technical images through machine learning frameworks without the need for human operators. Examples of GAN programs that generate art include Artbreeder and DeepDream.
==== Art analysis ====
In addition to the creation of original art, research methods that utilize AI have been generated to quantitatively analyze digital art collections. Although the main goal of the large-scale digitization of artwork in the past few decades was to allow for accessibility and exploration of these collections, the use of AI in analyzing them has brought about new research perspectives.
Two computational methods, close reading and distant viewing, are the typical approaches used to analyze digitized art. While distant viewing includes the analysis of large collections, close reading involves one piece of artwork.
=== Computer animation ===
AI has been used in computer animation since the early 2000s, most notably in a system designed by Pixar called "Genesis". It was designed to learn algorithms and create 3D models for its characters and props. Notable movies that used this technology include Up and The Good Dinosaur. In 2023, it was revealed that Netflix Japan had used AI to generate background images for an upcoming show, which was met with backlash online. In recent years, motion capture has become an easily accessible form of AI animation. For example, Move AI is a program built to capture any human movement and reanimate it in its animation program using learning AI.
== Finance ==
Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking began in 1987 when Security Pacific National Bank launched a fraud prevention task-force to counter the unauthorized use of debit cards.
Banks use AI to organize operations for bookkeeping, investing in stocks, and managing properties. AI can adapt to changes during non-business hours. AI is used to combat fraud and financial crimes by monitoring behavioral patterns for any abnormal changes or anomalies.
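The anomaly-monitoring idea above can be illustrated with a simple statistical rule: flag any charge whose amount deviates sharply from a customer's historical pattern. The spending figures are invented, and real fraud systems use far richer behavioral features than amounts alone.

```python
# A minimal sketch of anomaly detection for fraud monitoring: a z-score
# rule over a customer's transaction history. Data is invented.
import statistics

def flag_anomalies(history, new_charges, threshold=3.0):
    """Return charges more than `threshold` standard deviations from the mean."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return [amt for amt in new_charges if abs(amt - mu) / sigma > threshold]

usual_spending = [20.0, 25.0, 22.0, 30.0, 24.0, 27.0, 21.0, 26.0]
incoming = [23.0, 950.0, 28.0]
print(flag_anomalies(usual_spending, incoming))  # [950.0]
```

Flagged charges would then be routed to human investigators rather than blocked outright, keeping false positives from inconveniencing legitimate customers.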
The use of AI in applications such as online trading and decision-making has changed major economic theories. For example, AI-based buying and selling platforms estimate personalized demand and supply curves, thus enabling individualized pricing. AI systems reduce information asymmetry in the market and thus make markets more efficient. The application of artificial intelligence in the financial industry can alleviate the financing constraints of non-state-owned enterprises, especially for smaller and more innovative enterprises.
=== Trading and investment ===
Algorithmic trading involves using AI systems to make trading decisions at speeds of magnitude greater than any human is capable of, making millions of trades in a day without human intervention. Such high-frequency trading represents a fast-growing sector. Many banks, funds, and proprietary trading firms now have AI-managed portfolios. Automated trading systems are typically used by large institutional investors but include smaller firms trading with their own AI systems.
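A toy version of a rule such automated systems build on is the moving-average crossover: buy when a short-window price average rises above a long-window one. This is purely illustrative (invented prices, not investment advice), and production systems layer far more sophisticated models on top.

```python
# Illustrative sketch of a moving-average crossover trading rule.
# Prices are invented; real algorithmic trading is far more complex.

def signals(prices, short=2, long=4):
    """Emit 'buy' when the short-window mean exceeds the long-window mean."""
    out = []
    for i in range(long, len(prices) + 1):
        short_avg = sum(prices[i - short:i]) / short
        long_avg = sum(prices[i - long:i]) / long
        out.append("buy" if short_avg > long_avg else "sell")
    return out

prices = [10, 9, 8, 9, 11, 13]
print(signals(prices))  # ['sell', 'buy', 'buy'] as the rally gathers pace
```

An automated system would evaluate such a rule on every price tick, which is how it can act at speeds no human trader can match.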
Large financial institutions use AI to assist with their investment practices. BlackRock's AI engine, Aladdin, is used both within the company and by clients to help with investment decisions. Its functions include the use of natural language processing to analyze text such as news, broker reports, and social media feeds. It then gauges the sentiment on the companies mentioned and assigns a score. Banks such as UBS and Deutsche Bank use SQREEM (Sequential Quantum Reduction and Extraction Model) to mine data to develop consumer profiles and match them with wealth management products.
=== Underwriting ===
Online lender Upstart uses machine learning for underwriting.
ZestFinance's Zest Automated Machine Learning (ZAML) platform is used for credit underwriting. This platform uses machine learning to analyze data, including purchase transactions and how a customer fills out a form, to score borrowers. The platform is particularly useful for assigning credit scores to those with limited credit histories.
=== Audit ===
AI makes continuous auditing possible. Potential benefits include reducing audit risk, increasing the level of assurance, and reducing audit duration.
Continuous auditing with AI allows real-time monitoring and reporting of financial activities and provides businesses with timely insights that can lead to quick decision-making.
=== Anti-money laundering ===
AI software, such as LaundroGraph, which uses contemporary suboptimal datasets, could be used for anti-money laundering (AML).
=== History ===
In the 1980s, AI started to become prominent in finance as expert systems were commercialized. For example, DuPont created 100 expert systems, which helped it save almost $10 million per year. One of the first systems was the Pro-trader expert system that predicted the 87-point drop in the Dow Jones Industrial Average in 1986. "The major functions of the system were to monitor premiums in the market, determine the optimum investment strategy, execute transactions when appropriate and modify the knowledge base through a learning mechanism."
One of the first expert systems to help with financial planning was PlanPower, along with the Client Profiling System, created by Applied Expert Systems (APEX). Launched in 1986, it helped create personal financial plans for people.
In the 1990s, AI was applied to fraud detection. In 1993, FinCEN Artificial Intelligence System (FAIS) was launched. It was able to review over 200,000 transactions per week, and over two years, it helped identify 400 potential cases of money laundering equal to $1 billion. These expert systems were later replaced by machine learning systems.
AI can enhance entrepreneurial activity, and AI is one of the most dynamic areas for start-ups, with significant venture capital flowing into AI.
== Health ==
=== Healthcare ===
AI in healthcare is often used for classification, to evaluate a CT scan or electrocardiogram or to identify high-risk patients for population health. AI is helping with the high-cost problem of dosing: one study suggested that AI could save $16 billion. In 2016, a study reported that an AI-derived formula determined the proper dose of immunosuppressant drugs to give to transplant patients. Current research indicates that non-cardiac vascular illnesses are also being treated with AI. For certain disorders, AI algorithms can aid in diagnosis, treatment recommendation, outcome prediction, and patient progress tracking. As AI technology advances, it is anticipated that it will become more significant in the healthcare industry.
The early detection of diseases like cancer is made possible by AI algorithms, which diagnose diseases by analyzing complex sets of medical data. For example, the IBM Watson system might be used to comb through massive data such as medical records and clinical trials to help diagnose a problem. Microsoft's AI project Hanover helps doctors choose cancer treatments from among the more than 800 medicines and vaccines. Its goal is to memorize all the relevant papers to predict which (combinations of) drugs will be most effective for each patient. Myeloid leukemia is one target. Another study reported on an AI that was as good as doctors in identifying skin cancers. Another project monitors multiple high-risk patients by asking each patient questions based on data acquired from doctor/patient interactions. In one study done with transfer learning, an AI diagnosed eye conditions similar to an ophthalmologist and recommended treatment referrals.
Another study demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig's bowel; the robot's stitches were judged better than a surgeon's.
Artificial neural networks are used as clinical decision support systems for medical diagnosis, such as in concept processing technology in EMR software.
Other healthcare tasks thought suitable for an AI that are in development include:
Screening
Heart sound analysis
Companion robots for elder care
Medical record analysis
Treatment plan design
Medication management
Assisting blind people
Consultations
Drug creation (e.g. by identifying candidate drugs and by using existing drug screening data such as in life extension research)
Clinical training
Outcome prediction for surgical procedures
HIV prognosis
Identifying genomic pathogen signatures of novel pathogens or identifying pathogens via physics-based fingerprints (including pandemic pathogens)
Helping link genes to their functions, otherwise analyzing genes and identification of novel biological targets
Help development of biomarkers
Help tailor therapies to individuals in personalized medicine/precision medicine
=== Workplace health and safety ===
AI-enabled chatbots decrease the need for humans to perform basic call center tasks.
Machine learning in sentiment analysis can spot fatigue in order to prevent overwork. Similarly, decision support systems can prevent industrial disasters and make disaster response more efficient. For manual workers in material handling, predictive analytics may be used to reduce musculoskeletal injury. Data collected from wearable sensors can improve workplace health surveillance, risk assessment, and research.
AI can auto-code workers' compensation claims. AI-enabled virtual reality systems can enhance safety training for hazard recognition. AI can more efficiently detect accident near misses, which are important in reducing accident rates, but are often underreported.
=== Biochemistry ===
AlphaFold 2 can determine the 3D structure of a (folded) protein in hours rather than the months required by earlier automated approaches and was used to provide the likely structures of all proteins in the human body and essentially all proteins known to science (more than 200 million).
== Language Processing ==
=== Language translation ===
Speech translation technology attempts to convert one language's spoken words into another language. This potentially reduces language barriers in global commerce and cross-cultural exchange, enabling speakers of various languages to communicate with one another.
AI has been used to automatically translate spoken language and textual content in products such as Microsoft Translator, Google Translate, and DeepL Translator. Additionally, research and development are in progress to decode and conduct animal communication.
Meaning is conveyed not only by text, but also through usage and context (see semantics and pragmatics). As a result, the two primary approaches to machine translation are statistical machine translation (SMT) and neural machine translation (NMT). The older method was to use statistical methodology to forecast the most probable output with specific algorithms; NMT instead employs dynamic algorithms to achieve better translations based on context.
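The statistical idea behind SMT can be caricatured as follows: for each source word, pick the target word with the highest translation probability. The tiny probability table is invented for the example; real systems learn such tables from aligned corpora and also model word order, which this sketch deliberately omits.

```python
# Toy word-by-word statistical translation (French to English).
# The probability table is invented for illustration only.
table = {
    "la":     {"the": 0.9, "it": 0.1},
    "maison": {"house": 0.8, "home": 0.2},
    "bleue":  {"blue": 0.95, "sad": 0.05},
}

def translate(sentence):
    """Pick the most probable target word for each source word."""
    return " ".join(max(table[w], key=table[w].get) for w in sentence.split())

print(translate("la maison bleue"))  # "the house blue" – no reordering model
```

The output preserves French adjective order ("the house blue"), which illustrates why context-aware neural models, rather than isolated word probabilities, produce more fluent translations.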
== Legal & Government ==
=== Government ===
AI facial recognition systems are used for mass surveillance, notably in China. In 2019, Bengaluru, India deployed AI-managed traffic signals. This system uses cameras to monitor traffic density and adjust signal timing based on the interval needed to clear traffic.
=== Law ===
==== Legal analysis ====
AI is a mainstay of law-related professions. Algorithms and machine learning do some tasks previously done by entry-level lawyers. While its use is common, it is not expected to replace most work done by lawyers in the near future.
The electronic discovery industry uses machine learning to reduce manual searching.
==== Law enforcement and legal proceedings ====
Law enforcement has begun using facial recognition systems (FRS) to identify suspects from visual data. FRS results have proven to be more accurate than eyewitness identifications. Furthermore, FRS has been shown to have a much better ability than human participants to identify individuals when video clarity and visibility are low.
COMPAS is a commercial system used by U.S. courts to assess the likelihood of recidivism.
One concern relates to algorithmic bias: AI programs may become biased after processing data that exhibits bias. ProPublica claims that the average COMPAS-assigned recidivism risk level of black defendants is significantly higher than that of white defendants.
In 2019, the city of Hangzhou, China established a pilot program for an artificial intelligence-based Internet Court to adjudicate disputes related to e-commerce and internet-related intellectual property claims. Parties appear before the court via videoconference, and AI evaluates the evidence presented and applies relevant legal standards.
== Manufacturing ==
=== Sensors ===
Artificial intelligence has been combined with digital spectrometry by IdeaCuria Inc. to enable applications such as at-home water quality monitoring.
=== Toys and games ===
In the 1990s, early artificial intelligence controlled toys such as Tamagotchis and Giga Pets, as well as Furby, the first widely released robot. Aibo was a domestic robot in the form of a robotic dog with intelligent features and autonomy.
Mattel created an assortment of AI-enabled toys that "understand" conversations, give intelligent responses, and learn.
=== Oil and gas ===
Oil and gas companies have used artificial intelligence tools to automate functions, foresee equipment issues, and increase oil and gas output.
== Military ==
Various countries are deploying AI military applications. The main applications enhance command and control, communications, sensors, integration and interoperability. Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles. AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, coordination and deconfliction of distributed Joint Fires between networked combat vehicles involving manned and unmanned teams.
AI has been used in military operations in Iraq, Syria, Israel and Ukraine.
== Retail and e-commerce ==
=== Internet and e-commerce ===
==== Web feeds and posts ====
Machine learning has been used for recommendation systems in determining which posts should show up in social media feeds. Various types of social media analysis also make use of machine learning and there is research into its use for (semi-)automated tagging/enhancement/correction of online misinformation and related filter bubbles.
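One common building block of such recommendation systems is item-based collaborative filtering with cosine similarity: posts liked by overlapping sets of users are treated as similar. The user–item data below is invented for illustration.

```python
# Minimal sketch of item-based collaborative filtering. Each post is a
# vector over users (1 = liked); similar posts share audiences.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Each user's likes for posts A, B, C (invented data)
likes = {"alice": [1, 1, 0], "bob": [1, 1, 1], "carol": [0, 1, 1]}

# Build one vector per post (a column of the user-item matrix)
users = list(likes)
post_vecs = {p: [likes[u][i] for u in users] for i, p in enumerate("ABC")}

# Recommend alongside post A the post whose audience overlaps most with A's
sims = {p: cosine(post_vecs["A"], v) for p, v in post_vecs.items() if p != "A"}
print(max(sims, key=sims.get))  # B
```

A feed-ranking system would compute such similarities (or learned embeddings serving the same role) over millions of items, then surface the closest matches to each user's history.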
AI has been used to customize shopping options and personalize offers. Online gambling companies have used AI for targeting gamblers.
==== Virtual assistants and search ====
Intelligent personal assistants use AI to understand natural language requests beyond rudimentary commands. Common examples are Apple's Siri, Amazon's Alexa, and the more recent ChatGPT by OpenAI.
Bing Chat has used artificial intelligence as part of its search engine.
==== Spam filtering ====
Machine learning can be used to combat spam, scams, and phishing. It can scrutinize the contents of spam and phishing attacks to attempt to identify malicious elements. Some models built via machine learning algorithms have over 90% accuracy in distinguishing between spam and legitimate emails. These models can be refined using new data and evolving spam tactics. Machine learning also analyzes traits such as sender behavior, email header information, and attachment types, potentially enhancing spam detection.
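A classic technique behind such filters is the Naive Bayes classifier, sketched below with Laplace smoothing on a tiny invented corpus; production filters train on vast datasets and, as noted above, also use headers, sender behavior, and attachment features.

```python
# A minimal sketch of Naive Bayes spam filtering with Laplace smoothing.
# The training corpus is invented and far too small for real use.
import math
from collections import Counter

spam_docs = ["win money now", "free money offer"]
ham_docs = ["meeting at noon", "project status update"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam_docs)
ham_counts, ham_total = train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_score(text, counts, total):
    # Laplace (+1) smoothing so unseen words don't zero out the probability
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in text.split())

def classify(text):
    # Class priors are equal here (two documents each), so they cancel out
    return "spam" if log_score(text, spam_counts, spam_total) > \
                     log_score(text, ham_counts, ham_total) else "ham"

print(classify("free money"))      # spam
print(classify("status meeting"))  # ham
```

Retraining on newly labeled mail is what lets such models keep pace with evolving spam tactics.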
==== Facial recognition and image labeling ====
AI has been used in facial recognition systems. Some examples are Apple's Face ID and Android's Face Unlock, which are used to secure mobile devices.
Image labeling has been used by Google Image Labeler to detect products in photos and to allow people to search based on a photo. Image labeling has also been demonstrated to generate speech to describe images to blind people. Facebook's DeepFace identifies human faces in digital images.
== Scientific Research ==
=== Evidence of general impacts ===
In April 2024, the Scientific Advice Mechanism to the European Commission published advice including a comprehensive evidence review of the opportunities and challenges posed by artificial intelligence in scientific research.
As benefits, the evidence review highlighted:
its role in accelerating research and innovation
its capacity to automate workflows
its potential to enhance the dissemination of scientific work
As challenges:
limitations and risks around transparency, reproducibility and interpretability
poor performance (inaccuracy)
risk of harm through misuse or unintended use
societal concerns including the spread of misinformation and increasing inequalities
=== Archaeology, history and imaging of sites ===
Machine learning can help to restore and attribute ancient texts. It can also help to index texts, for example to enable better and easier searching and classification of fragments.
Artificial intelligence can also be used to investigate genomes to uncover genetic history, such as interbreeding between archaic and modern humans, from which, for example, the past existence of a "ghost" population that was neither Neanderthal nor Denisovan has been inferred.
It can also be used for "non-invasive and non-destructive access to internal structures of archaeological remains".
=== Physics ===
A deep learning system was reported to learn intuitive physics from visual data (of virtual 3D environments) based on an unpublished approach inspired by studies of visual cognition in infants. Other researchers have developed a machine learning algorithm that could discover sets of basic variables of various physical systems and predict the systems' future dynamics from video recordings of their behavior. In the future, it may be possible that such methods can be used to automate the discovery of the physical laws of complex systems.
=== Materials science ===
AI could be used for materials optimization and discovery such as the discovery of stable materials and the prediction of their crystal structure.
In November 2023, researchers at Google DeepMind and Lawrence Berkeley National Laboratory announced that they had developed an AI system known as GNoME. This system has contributed to materials science by discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganic crystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a noteworthy success rate of 71%. The data of newly discovered materials is publicly available through the Materials Project database, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in material science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds.
=== Reverse engineering ===
Machine learning is used in diverse types of reverse engineering. For example, machine learning has been used to reverse engineer a composite material part, enabling unauthorized production of high-quality parts, and for quickly understanding the behavior of malware. It can be used to reverse engineer artificial intelligence models. It can also design components by engaging in a type of reverse engineering of not-yet-existent virtual components, such as inverse molecular design for particular desired functionality or protein design for prespecified functional sites. Biological network reverse engineering could model interactions in a human-understandable way, e.g. based on time series data of gene expression levels.
=== Astronomy, space activities and ufology ===
Artificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights" for example for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. It could also be used for activities in space such as space exploration, including analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.
In the search for extraterrestrial intelligence (SETI), machine learning has been used in attempts to identify artificially generated electromagnetic waves in available data – such as real-time observations – and other technosignatures, e.g. via anomaly detection. In ufology, the SkyCAM-5 project headed by Prof. Hakan Kayal and the Galileo Project headed by Avi Loeb use machine learning to attempt to detect and classify types of UFOs. The Galileo Project also seeks to detect two further types of potential extraterrestrial technological signatures with the use of AI: 'Oumuamua-like interstellar objects, and non-manmade artificial satellites.
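A minimal sketch of the anomaly-detection idea as applied to signal data, assuming per-frequency-bin power readings (the data here is synthetic; real technosignature pipelines work on far larger spectrogram datasets with learned models):

```python
# Hedged sketch of anomaly detection on radio-signal power readings:
# flag frequency bins whose power deviates strongly from the background.
# Thresholds and data are illustrative only.

def anomalous_bins(powers, z_threshold=4.0):
    """Return indices of bins whose z-score exceeds the threshold."""
    n = len(powers)
    mean = sum(powers) / n
    var = sum((p - mean) ** 2 for p in powers) / n
    std = var ** 0.5 or 1.0  # avoid division by zero on constant input
    return [i for i, p in enumerate(powers) if abs(p - mean) / std > z_threshold]

background = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.08, 0.92] * 16  # noise-like baseline
signal = background[:]
signal[40] = 30.0  # injected narrowband spike
print(anomalous_bins(signal))  # → [40]
```

Real systems replace the z-score with learned models of the instrument background so that only genuinely unmodeled structure is flagged.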
Machine learning can also be used to produce datasets of spectral signatures of molecules that may be involved in the atmospheric production or consumption of particular chemicals – such as phosphine possibly detected on Venus – which could prevent misassignments and, if accuracy is improved, be used in future detections and identifications of molecules on other planets.
=== Chemistry and biology ===
Machine learning has been used for drug design. It has also been used for predicting molecular properties and exploring large chemical/reaction spaces. Computer-planned syntheses via computational reaction networks, described as a platform that combines "computational synthesis with AI algorithms to predict molecular properties", have been used to explore the origins of life on Earth, drug-syntheses and developing routes for recycling 200 industrial waste chemicals into important drugs and agrochemicals (chemical synthesis design). There is research about which types of computer-aided chemistry would benefit from machine learning. It can also be used for "drug discovery and development, drug repurposing, improving pharmaceutical productivity, and clinical trials". It has been used for the design of proteins with prespecified functional sites.
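Route-finding over a computational reaction network can be pictured as graph search. The sketch below uses an invented toy network in which molecule names and one-step transformations are placeholders, not real chemistry:

```python
# Minimal sketch of synthesis-route search over a reaction network,
# assuming the network is given as mappings from reactants to products.
from collections import deque

reactions = {  # hypothetical one-step transformations
    "waste_A": ["intermediate_1"],
    "intermediate_1": ["intermediate_2", "byproduct"],
    "intermediate_2": ["drug_X"],
}

def synthesis_route(start, target):
    """Breadth-first search for a shortest reaction path start -> target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for product in reactions.get(path[-1], []):
            if product not in seen:
                seen.add(product)
                queue.append(path + [product])
    return None  # no route exists in this network

print(synthesis_route("waste_A", "drug_X"))
# → ['waste_A', 'intermediate_1', 'intermediate_2', 'drug_X']
```

Actual synthesis-planning platforms score candidate routes with learned models of reaction feasibility and cost rather than treating all steps as equal.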
It has been used with databases in a 46-day process to design, synthesize and test a drug that inhibits enzymes of a particular gene, DDR1. DDR1 is involved in cancers and fibrosis, which is one reason high-quality datasets were available and enabled these results.
There are various types of applications for machine learning in decoding human biology, such as helping to map gene expression patterns to functional activation patterns or identifying functional DNA motifs. It is widely used in genetic research.
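As a drastically simplified illustration of motif identification, the sketch below merely counts exact k-mers and reports the most over-represented one (real motif discovery uses probabilistic models such as position weight matrices; the sequences are invented):

```python
# Toy sketch of functional DNA motif identification: count fixed-length
# substrings (k-mers) across sequences and report the most frequent one.
from collections import Counter

def top_motif(sequences, k=4):
    """Return the most common k-mer across all input sequences."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    return counts.most_common(1)[0][0]

seqs = ["ACGTTACGA", "TTTACGAGG", "GGTACGTT"]  # share the planted motif TACG
print(top_motif(seqs))  # → TACG
```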
There also is some use of machine learning in synthetic biology, disease biology, nanotechnology (e.g. nanostructured materials and bionanotechnology), and materials science.
==== Novel types of machine learning ====
There are also prototype robot scientists, including robot-embodied ones like the two Robot Scientists, which show a form of "machine learning" not commonly associated with the term.
Similarly, there is research and development of biological "wetware computers" that can learn (e.g. for use as biosensors) and/or be implanted into an organism's body (e.g. for use in controlling prosthetics). Polymer-based artificial neurons operate directly in biological environments and, combined with living components, form biohybrid neurons.
Moreover, whole brain emulation may become possible by scanning and replicating, at a minimum, the bio-chemical brain, as premised in the form of digital replication in The Age of Em, possibly using physical neural networks. Such emulation may have applications as extensive as, or more extensive than, valued human activities, and may confront society with substantial moral choices, societal risks and ethical problems, such as whether and how such emulations are built, sent through space and used, compared with potentially competing forms of artificial or semi-artificial intelligence that are more synthetic, less human, or less or non-sentient. An alternative or complementary approach to scanning is reverse engineering of the brain.
A subcategory of artificial intelligence is embodied AI, which includes mobile robotic systems, each consisting of one or more robots that are able to learn in the physical world.
===== Digital ghosts =====
===== Biological computing in AI and as AI =====
Additionally, biological computers, even if both artificial and highly intelligent, are typically distinguishable from synthetic, predominantly silicon-based, computers. The two technologies could, however, be combined and used for the design of either. Moreover, many tasks may be poorly carried out by AI even if it uses algorithms that are transparent, understood, bias-free, apparently effective and goal-aligned in addition to having trained data sets that are sufficiently large and cleansed. This may occur, for instance, when the underlying data, available metrics, values or training methods are incorrect, flawed or used inappropriately. Computer-aided is a phrase used to describe human activities that make use of computing as a tool in more comprehensive activities and systems, such as AI for narrow tasks, or making use of such without substantially relying on its results (see also: human-in-the-loop). One study described the biological component as a limitation of AI, stating that "as long as the biological system cannot be understood, formalized, and imitated, we will not be able to develop technologies that can mimic it" and that, even if it were understood, this does not necessarily mean there will be "a technological solution to imitate natural intelligence". Technologies that integrate biology and AI include biorobotics.
== Security & Surveillance ==
=== Cyber security ===
Cyber security companies are adopting neural networks, machine learning, and natural language processing to improve their systems.
Applications of AI in cyber security include:
Network protection: Machine learning improves intrusion detection systems by broadening the search beyond previously identified threats.
Endpoint protection: Attacks such as ransomware can be thwarted by learning typical malware behaviors.
Application security: AI can help counter attacks such as server-side request forgery, SQL injection, cross-site scripting, and distributed denial-of-service.
Suspect user behavior: Machine learning can identify fraud or compromised applications as they occur.
AI-related cyber security application cases vary in both benefit and complexity. Security features such as Security Orchestration, Automation, and Response (SOAR) and Extended Endpoint Detection and Response (XDR) offer significant benefits for businesses, but require significant integration and adaptation efforts.
AI technology can also be utilized to improve system security and safeguard privacy. Randrianasolo (2012) suggested a security system based on artificial intelligence that can recognize intrusions and adapt to perform better. To improve cloud computing security, Sahil (2015) created a user profile system for the cloud environment with AI techniques.
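A hedged sketch of the "learn typical malware behaviors" idea for endpoint protection: a nearest-centroid classifier over invented behavior counts (real endpoint products use far richer telemetry and far more capable models):

```python
# Toy endpoint-protection classifier: summarize each class of labeled
# behavior samples by its centroid, then classify new samples by distance.
# Feature names and numbers are invented for illustration.

# Each sample: (files_encrypted_per_min, registry_writes, outbound_connections)
benign = [(0, 3, 1), (1, 5, 2), (0, 2, 0)]
malware = [(40, 30, 12), (55, 22, 9), (48, 35, 15)]

def centroid(rows):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(rows)
    return tuple(sum(col) / n for col in zip(*rows))

def classify(sample, benign_c, malware_c):
    """Label a sample by which class centroid it is closer to."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return "malware" if sq_dist(sample, malware_c) < sq_dist(sample, benign_c) else "benign"

b_c, m_c = centroid(benign), centroid(malware)
print(classify((50, 28, 10), b_c, m_c))  # → malware
print(classify((0, 4, 1), b_c, m_c))     # → benign
```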
== Transportation & Logistics ==
=== Automotive ===
AI in transport is expected to provide safe, efficient, and reliable transportation while minimizing the impact on the environment and communities. The major development challenge is the complexity of transportation systems, which involve independent components and parties with potentially conflicting objectives.
AI-based fuzzy logic controllers operate gearboxes. For example, the 2006 Audi TT, VW Touareg and VW Caravelle feature the DSG transmission. A number of Škoda variants (e.g. the Škoda Fabia) include a fuzzy logic-based controller. Cars have AI-based driver-assist features such as self-parking and adaptive cruise control.
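A minimal sketch of the fuzzy-logic idea behind such controllers, with made-up membership functions and only two rules (production gearbox controllers use many more inputs and are carefully tuned):

```python
# Illustrative fuzzy-logic sketch of a gearbox-style decision: blend the rules
# "low speed -> low gear" and "high speed -> high gear" using triangular
# membership functions and weighted-average defuzzification.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def recommended_gear(speed_kmh):
    low = tri(speed_kmh, -1, 0, 60)      # degree to which "speed is low"
    high = tri(speed_kmh, 40, 120, 200)  # degree to which "speed is high"
    if low + high == 0:
        return 3  # fallback when no rule fires
    # Weighted average of the rule outputs (gear 1 vs gear 5).
    return round((low * 1 + high * 5) / (low + high))

print(recommended_gear(10), recommended_gear(100), recommended_gear(50))  # → 1 5 3
```

The point of the fuzzy formulation is the smooth blending at intermediate speeds (50 km/h above), where both rules fire partially.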
There are also prototypes of autonomous automotive public transport vehicles such as electric mini-buses as well as autonomous rail transport in operation.
There also are prototypes of autonomous delivery vehicles, sometimes including delivery robots.
Transportation's complexity means that in most cases training an AI in a real-world driving environment is impractical. Simulator-based testing can reduce the risks of on-road training.
AI underpins self-driving vehicles. Companies involved with AI include Tesla, Waymo, and General Motors. AI-based systems control functions such as braking, lane changing, collision prevention, navigation and mapping.
Autonomous trucks are in the testing phase. The UK government passed legislation to begin testing of autonomous truck platoons in 2018, in which a group of autonomous trucks follows closely behind a lead vehicle. German corporation Daimler is testing its Freightliner Inspiration.
Autonomous vehicles require accurate maps to be able to navigate between destinations. Some autonomous vehicles do not allow human drivers (they have no steering wheels or pedals).
==== Traffic management ====
AI has been used to optimize traffic management, which reduces wait times, energy use, and emissions by as much as 25 percent.
Smart traffic lights have been developed at Carnegie Mellon since 2009. Professor Stephen Smith has since founded a company, Surtrac, that has installed smart traffic control systems in 22 cities. Installation costs about $20,000 per intersection. At intersections where the system has been installed, drive time has been reduced by 25% and traffic-jam waiting time by 40%.
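In the spirit of queue-responsive signal control (a drastic simplification of what systems like Surtrac actually do, which is plan over predicted vehicle arrivals), a sketch that splits a signal cycle among approaches in proportion to their queue lengths:

```python
# Simplified adaptive signal timing: every approach gets a minimum green,
# and the remaining cycle time is divided in proportion to queue lengths.
# All numbers are illustrative.

def green_splits(queues, cycle_s=90, min_green_s=10):
    """Return per-approach green times (seconds) for one signal cycle."""
    spare = cycle_s - min_green_s * len(queues)
    total = sum(queues) or 1  # avoid division by zero when all queues are empty
    return [round(min_green_s + spare * q / total) for q in queues]

# Four approaches with queues of 12, 3, 9 and 6 vehicles.
print(green_splits([12, 3, 9, 6]))  # → [30, 15, 25, 20]
```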
=== Military ===
The Royal Australian Air Force (RAAF) Air Operations Division (AOD) uses AI for expert systems. AIs operate as surrogate operators for combat and training simulators, mission management aids, support systems for tactical decision making, and post processing of the simulator data into symbolic summaries.
Aircraft simulators use AI for training aviators. Flight conditions can be simulated that allow pilots to make mistakes without risking themselves or expensive aircraft. Air combat can also be simulated.
AI can also be used to operate planes analogously to their control of ground vehicles. Autonomous drones can fly independently or in swarms.
AOD uses the Interactive Fault Diagnosis and Isolation System (IFDIS), a rule-based expert system built from TF-30 documentation and expert advice from mechanics who work on the TF-30. The system was designed for use in the development of the TF-30 for the F-111C. It replaced specialized workers, allowing regular workers to interact with the system while avoiding mistakes, miscalculations, and the need to consult one of the specialists.
Speech recognition allows air traffic controllers to give verbal directions to drones.
Artificial intelligence supported design of aircraft, or AIDA, is used to help designers create conceptual aircraft designs. The program allows designers to focus more on the design itself and less on the design process, and reduces the attention the user must give to the software tools. AIDA uses rule-based systems, with its functionality arranged into modules, to compute its data. Although simple, the program is proving effective.
=== NASA ===
In 2003 a Dryden Flight Research Center project created software that could enable a damaged aircraft to continue flight until a safe landing can be achieved. The software compensated for damaged components by relying on the remaining undamaged components.
The 2016 Intelligent Autopilot System combined apprenticeship learning and behavioral cloning whereby the autopilot observed low-level actions required to maneuver the airplane and high-level strategy used to apply those actions.
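A toy sketch of the behavioral-cloning component: learn a state-to-action mapping directly from demonstrations. Here the "policy" is a 1-nearest-neighbour lookup over invented (pitch error, command) pairs; the published system learned from real pilot demonstrations with neural networks:

```python
# Behavioral cloning in miniature: imitate a demonstrator by copying the
# action recorded for the closest previously seen state. States and actions
# below are invented placeholders.

demonstrations = [  # (pitch error in degrees, elevator command)
    (-6.0, "pitch_up"),
    (-2.0, "pitch_up_gently"),
    (0.0, "hold"),
    (2.5, "pitch_down_gently"),
    (7.0, "pitch_down"),
]

def cloned_policy(pitch_error):
    """Return the demonstrated action whose state is nearest the query."""
    return min(demonstrations, key=lambda d: abs(d[0] - pitch_error))[1]

print(cloned_policy(-5.0))  # → pitch_up
print(cloned_policy(0.4))   # → hold
```

The apprenticeship-learning half of the cited system additionally captures the high-level strategy that decides when to apply which low-level maneuver.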
=== Maritime ===
Neural networks are used by situational awareness systems in ships and boats. There also are autonomous boats.
== Utilities ==
=== Telecommunications ===
Many telecommunications companies make use of heuristic search to manage their workforces. For example, BT Group deployed heuristic search in an application that schedules 20,000 engineers. Machine learning is also used for speech recognition (SR), including of voice-controlled devices, and SR-related transcription, including of videos.
== List of applications ==
The following are applications of artificial intelligence (AI) organized by category:
=== Agriculture ===
Precision agriculture
Crop monitoring
Automated harvesting
Yield prediction
=== Architecture & Design ===
Computer-aided design
Structural analysis
Smart city
=== Business ===
Market analysis
Business process automation
User activity monitoring, personalized targeted promotion and marketing via ads
Agent-based computational economics
=== Computer Science ===
Algorithm development
Code generation
Data structure optimization
Automated reasoning
Automated theorem proving
Proof assistants
Concept mining
Data mining
Knowledge representation
Semantic Web
=== Computer Vision ===
Computer vision
Image processing
Face recognition
Optical character recognition
Handwriting recognition
Photo and video manipulation
=== Customer Service ===
Chatbot
Virtual assistant
Sentiment analysis
Chatbots and assistant apps like Alexa, Google Assistant, Siri
Social bot
=== Education ===
Personalized learning
Educational technology
Intelligent tutoring system
Education and Learning Disabilities related issues
=== Energy & Environment ===
Smart grid
Environmental monitoring
Carbon footprint
Earth sciences applications
=== Entertainment & Media ===
Recommender system
Generative artificial intelligence
Synthetic media
Artificial creativity
Photo and video manipulation
Music transcription
Virtual reality
=== Finance ===
Fraud detection
Algorithmic trading
Credit score
=== Gaming ===
Game artificial intelligence and computer game bot
Deep Blue (chess computer)
Game theory and strategic planning
=== Healthcare ===
Artificial intelligence in healthcare
Diagnosis (artificial intelligence)
Drug discovery
Mental health
Health informatics
=== Human Resources ===
Recruitment
Employee engagement
Training programs
=== Language Processing ===
Natural language processing, translation and chatterbots
Optical character recognition
Handwriting recognition
Speech recognition
Chatbots and assistant apps like Alexa, Google Assistant, Siri
=== Legal & Government ===
Legal research
Public service
Policy analysis
Law related services
Litigation
=== Manufacturing ===
Predictive maintenance
Quality control
Automation
Nonlinear control and robotics
=== Military ===
Autonomous weapons
Intelligence analysis
Simulation training
Game theory and strategic planning
=== Retail & E-commerce ===
Recommender system
Inventory management
Dynamic pricing
=== Robotics ===
Robotics
Behavior-based robotics
Cognitive robotics
Cybernetics
Developmental robotics (epigenetic)
Evolutionary robotics
Human-robot interaction
Humanoid robot
Hybrid intelligent system
Intelligent agent
Intelligent control
=== Scientific Research ===
Data analysis
Simulations
Bioinformatics
Earth sciences
Physics
Bio-inspired computing
Agent-based models
Artificial life
=== Security & Surveillance ===
Face recognition
Cybersecurity
Deepfake
Email spam filtering
Filtering hate speech, nudity, and other unwanted content
=== Social Impact ===
Economic forecasting
Social equity
Poverty reduction
=== Telecommunications ===
Network optimization
Predictive maintenance
Fraud detection
Customer service chatbots
Signal processing
=== Transportation & Logistics ===
Self-driving car
Vehicle routing problem
Traffic management
== See also ==
Applications of artificial intelligence to legal informatics
Applications of deep learning
Applications of machine learning
Artificial intelligence and elections
Collective intelligence § Applications
List of artificial intelligence projects
List of datasets for machine-learning research
Open data
Progress in artificial intelligence
Timeline of computing 2020–present
== Footnotes ==
== Further reading ==
Kaplan, A.M.; Haenlein, M. (2018). "Siri, Siri in my Hand, who's the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence". Business Horizons. 62 (1): 15–25. doi:10.1016/j.bushor.2018.08.004.
Kurzweil, Ray (2005). The Singularity is Near: When Humans Transcend Biology. New York: Viking. ISBN 978-0-670-03384-3.
National Research Council (1999). "Developments in Artificial Intelligence". Funding a Revolution: Government Support for Computing Research. National Academy Press. ISBN 978-0-309-06278-7. OCLC 246584055.
Moghaddam, M. J.; Soleymani, M. R.; Farsi, M. A. (2015). "Sequence planning for stamping operations in progressive dies". Journal of Intelligent Manufacturing. 26 (2): 347–357. doi:10.1007/s10845-013-0788-0.
Felten, Ed (3 May 2016). "Preparing for the Future of Artificial Intelligence". | Wikipedia/Applications_of_AI |