Labeled data : After obtaining a labeled dataset, machine learning models can be applied to the data so that new unlabeled data can be presented to the model and a likely label can be guessed or predicted for that piece of unlabeled data.
Ni1000 : The Ni1000 is an artificial neural network chip developed by Nestor Corporation and Intel in the 1990s. It is Intel's second-generation neural network chip, but the first all-digital one. The chip, which contains more than 3 million transistors, is aimed at image analysis applications and can analyze ...
Ni1000 : Intel/Nestor Ni1000 Recognition Accelerator Technical Specification Perrone, Michael P.; Cooper, Leon N. (1995). "The Ni1000: High Speed Parallel VLSI for Implementing Multilayer Perceptrons". In Leen, Todd K.; Tesauro, Gerald; Touretzky, David S. (eds.). Advances in Neural Information Processing Systems 7 (PD...
GigaChat : GigaChat is a generative artificial intelligence chatbot developed by the Russian financial services corporation Sberbank and launched in April 2023. It is positioned as a Russian alternative to ChatGPT. The artificial intelligence software can handle a diverse range of complex cognitive and daily tasks, suc...
GigaChat : List of artificial intelligence projects List of chatbots Alice (virtual assistant) Chatbot OpenAI
BLOOM (language model) : BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) is a 176-billion-parameter transformer-based autoregressive large language model (LLM). The model, as well as the code base and the data used to train it, are distributed under free licences. BLOOM was trained on appr...
Interactive activation and competition networks : Interactive activation and competition (IAC) networks are artificial neural networks used to model memory and intuitive generalizations. They are made up of nodes or artificial neurons which are arrayed and activated in ways that emulate the behaviors of human memory. T...
Interactive activation and competition networks : A tribute to interactive activation Video overview of IAC networks and a description of how to build them using free software.
Example-based machine translation : Example-based machine translation (EBMT) is a method of machine translation often characterized by its use of a bilingual corpus with parallel texts as its main knowledge base at run-time. It is essentially a translation by analogy and can be viewed as an implementation of a case-bas...
Example-based machine translation : At the foundation of example-based machine translation is the idea of translation by analogy. When applied to the process of human translation, the idea that translation takes place by analogy is a rejection of the idea that people translate sentences by doing deep linguistic analysi...
Example-based machine translation : Example-based machine translation was first suggested by Makoto Nagao in 1984. He pointed out that it is especially adapted to translation between two totally different languages, such as English and Japanese. In this case, one sentence can be translated into several well-structured ...
Example-based machine translation : Example-based machine translation systems are trained from bilingual parallel corpora containing sentence pairs like the example shown in the table above. Sentence pairs contain sentences in one language with their translations into another. The particular example shows an example of...
Example-based machine translation : Example-based machine translation is best suited for sub-language phenomena like phrasal verbs. Phrasal verbs have highly context-dependent meanings. They are common in English, where they comprise a verb followed by an adverb and/or a preposition, which are called the particle to th...
Example-based machine translation : Programming by example Translation memory Natural Language Processing
Example-based machine translation : Carl, Michael; Way, Andy (2003). Recent Advances in Example-Based Machine Translation. Netherlands: Springer. doi:10.1007/978-94-010-0181-6. ISBN 978-1-4020-1400-0.
Example-based machine translation : Cunei - an open source platform for data-driven machine translation that grew out of research in EBMT, but also includes recent advances from the SMT field
DALL-E : DALL-E, DALL-E 2, and DALL-E 3 (stylised DALL·E, and pronounced DOLL-E) are text-to-image models developed by OpenAI using deep learning methodologies to generate digital images from natural language descriptions known as prompts. The first version of DALL-E was announced in January 2021. In the following year...
DALL-E : DALL-E was revealed by OpenAI in a blog post on 5 January 2021, and uses a version of GPT-3 modified to generate images. On 6 April 2022, OpenAI announced DALL-E 2, a successor designed to generate more realistic images at higher resolutions that "can combine concepts, attributes, and styles". On 20 July 2022,...
DALL-E : The first generative pre-trained transformer (GPT) model was initially developed by OpenAI in 2018, using a Transformer architecture. The first iteration, GPT-1, was scaled up to produce GPT-2 in 2019; in 2020, it was scaled up again to produce GPT-3, with 175 billion parameters.
DALL-E : DALL-E can generate imagery in multiple styles, including photorealistic imagery, paintings, and emoji. It can "manipulate and rearrange" objects in its images, and can correctly place design elements in novel compositions without explicit instruction. Thom Dunn writing for BoingBoing remarked that "For exampl...
DALL-E : DALL-E 2's reliance on public datasets influences its results and leads to algorithmic bias in some cases, such as generating higher numbers of men than women for requests that do not mention gender. DALL-E 2's training data was filtered to remove violent and sexual imagery, but this was found to increase bias...
DALL-E : Most coverage of DALL-E focuses on a small subset of "surreal" or "quirky" outputs. DALL-E's output for "an illustration of a baby daikon radish in a tutu walking a dog" was mentioned in pieces from Input, NBC, Nature, and other publications. Its output for "an armchair in the shape of an avocado" was also wid...
DALL-E : Since OpenAI has not released source code for any of the three models, there have been several attempts to create open-source models offering similar capabilities. Released in 2022 on Hugging Face's Spaces platform, Craiyon (formerly DALL-E Mini until a name change was requested by OpenAI in June 2022) is an A...
DALL-E : Artificial intelligence art DeepDream Imagen Midjourney Stable Diffusion Prompt engineering
DALL-E : Ramesh, Aditya; Pavlov, Mikhail; Goh, Gabriel; Gray, Scott; Voss, Chelsea; Radford, Alec; Chen, Mark; Sutskever, Ilya (26 February 2021). "Zero-Shot Text-to-Image Generation". arXiv:2102.12092 [cs.CV]. The original report on DALL-E. DALL-E 3 System Card DALL-E 3 paper by OpenAI DALL-E 2 website Craiyon websit...
Recursive neural network : A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input, to produce a structured prediction over variable-size input structures, or a scalar prediction on it, by traversing a given structure in topological ord...
Recursive neural network : The universal approximation capability of RNNs over trees has been proved in the literature.
Multi-agent reinforcement learning : Multi-agent reinforcement learning (MARL) is a sub-field of reinforcement learning. It focuses on studying the behavior of multiple learning agents that coexist in a shared environment. Each agent is motivated by its own rewards, and does actions to advance its own interests; in som...
Multi-agent reinforcement learning : Similarly to single-agent reinforcement learning, multi-agent reinforcement learning is modeled as some form of a Markov decision process (MDP). Fix a set of agents I = {1, …, N}. We then define: A set S of environment states. One set A_i of actions for each of the agents i ∈ I. P ...
Multi-agent reinforcement learning : When multiple agents are acting in a shared environment their interests might be aligned or misaligned. MARL allows exploring all the different alignments and how they affect the agents' behavior: In pure competition settings, the agents' rewards are exactly opposite to each other, ...
Multi-agent reinforcement learning : As in game theory, much of the research in MARL revolves around social dilemmas, such as prisoner's dilemma, chicken and stag hunt. While game theory research might focus on Nash equilibria and what an ideal policy for an agent would be, MARL research focuses on how the agents would...
Multi-agent reinforcement learning : An autocurriculum (plural: autocurricula) is a reinforcement learning concept that is salient in multi-agent experiments. As agents improve their performance, they change their environment; this change in the environment affects themselves and the other agents. The feedback loop resu...
Multi-agent reinforcement learning : Multi-agent reinforcement learning has been applied to a variety of use cases in science and industry:
Multi-agent reinforcement learning : There are some inherent difficulties in multi-agent deep reinforcement learning. The environment is no longer stationary, so the Markov property is violated: transitions and rewards do not depend only on the current state of an agent.
Multi-agent reinforcement learning : Stefano V. Albrecht, Filippos Christianos, Lukas Schäfer. Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. MIT Press, 2024. https://www.marl-book.com Kaiqing Zhang, Zhuoran Yang, Tamer Basar. Multi-agent reinforcement learning: A selective overview of theories ...
History of natural language processing : The history of natural language processing describes the advances of natural language processing. There is some overlap with the history of machine translation, the history of speech recognition, and the history of artificial intelligence.
History of natural language processing : The history of machine translation dates back to the seventeenth century, when philosophers such as Leibniz and Descartes put forward proposals for codes which would relate words between languages. All of these proposals remained theoretical, and none resulted in the development...
History of natural language processing : In 1950, Alan Turing published his famous article "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written co...
History of natural language processing : Up to the 1980s, most NLP systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of machine learning algorithms for language processing. This was due both to the steady increase in comput...
History of natural language processing : In 1990, the Elman network, using a recurrent neural network, encoded each word in a training set as a vector, called a word embedding, and the whole vocabulary as a vector database, allowing it to perform such tasks as sequence prediction that are beyond the power of a simple ...
History of natural language processing : Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3. McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 978-1-56881-205-2, OCLC 52197627. Russell, Stuart J.; Norvig,...
Promoter based genetic algorithm : The promoter based genetic algorithm (PBGA) is a genetic algorithm for neuroevolution developed by F. Bellas and R.J. Duro in the Integrated Group for Engineering Research (GII) at the University of Coruña, in Spain. It evolves variable size feedforward artificial neural networks (ANN...
Promoter based genetic algorithm : The basic unit in the PBGA is a neuron with all of its inbound connections as represented in the following figure: The genotype of a basic unit is a set of real-valued weights followed by the parameters of the neuron and preceded by an integer-valued field that determines the promote...
Promoter based genetic algorithm : The PBGA was originally presented within the field of autonomous robotics, in particular in the real time learning of environment models of the robot. It has been used inside the Multilevel Darwinist Brain (MDB) cognitive mechanism developed in the GII for real robots on-line learning...
Promoter based genetic algorithm : Grupo Integrado de Ingeniería Francisco Bellas’ website Richard J. Duro’s website
Lesk algorithm : The Lesk algorithm is a classical algorithm for word sense disambiguation introduced by Michael E. Lesk in 1986. It operates on the premise that words within a given context are likely to share a common meaning. This algorithm compares the dictionary definitions of an ambiguous word with the words in its s...
Lesk algorithm : The Lesk algorithm is based on the assumption that words in a given "neighborhood" (section of text) will tend to share a common topic. A simplified version of the Lesk algorithm is to compare the dictionary definition of an ambiguous word with the terms contained in its neighborhood. Versions have bee...
Lesk algorithm : In Simplified Lesk algorithm, the correct meaning of each word in a given context is determined individually by locating the sense that overlaps the most between its dictionary definition and the given context. Rather than simultaneously determining the meanings of all words in a given context, this ap...
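The sense-selection procedure just described can be sketched in a few lines of Python. The toy `glosses` dictionary, the whitespace tokenization, and the two-sense entry for "bank" are illustrative assumptions, not part of Lesk's original setup (which used machine-readable dictionary glosses):

```python
# Minimal sketch of the Simplified Lesk algorithm: for one ambiguous word,
# pick the sense whose gloss shares the most words with the context.
def simplified_lesk(word, context, dictionary):
    """dictionary maps word -> {sense_name: gloss_string}."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in dictionary[word].items():
        overlap = len(set(gloss.lower().split()) & context_words)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# Hypothetical two-sense dictionary entry for "bank".
glosses = {"bank": {
    "finance": "institution that accepts deposits and lends money",
    "river": "sloping land beside a body of water",
}}
print(simplified_lesk("bank", "he sat on the bank of the river and watched the water", glosses))
# -> "river" (its gloss shares "of" and "water" with the context)
```

Note that each word is disambiguated independently against its own context, matching the one-word-at-a-time character of the simplified variant.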
Lesk algorithm : Unfortunately, Lesk’s approach is very sensitive to the exact wording of definitions, so the absence of a certain word can radically change the results. Further, the algorithm determines overlaps only among the glosses of the senses being considered. This is a significant limitation in that dictionary ...
Lesk algorithm : Original Lesk (Lesk, 1986) Adapted/Extended Lesk (Banerjee and Pedersen, 2002/2003): In the adapted Lesk algorithm, a word vector is created corresponding to every content word in the WordNet gloss. Concatenating glosses of related concepts in WordNet can be used to augment this vector. The vector conta...
Lesk algorithm : Word-sense disambiguation
ADALINE : ADALINE (Adaptive Linear Neuron or later Adaptive Linear Element) is an early single-layer artificial neural network and the name of the physical device that implemented it. It was developed by professor Bernard Widrow and his doctoral student Marcian Hoff at Stanford University in 1960. It is based on the pe...
ADALINE : Adaline is a single-layer neural network with multiple nodes, where each node accepts multiple inputs and generates one output. Given the following variables: x, the input vector; w, the weight vector; n, the number of inputs; θ, some constant; and y, the output of the model, the output is: y = ∑_{j=1}^{n} x_j w_j + θ
ADALINE : The learning rule used by ADALINE is the LMS ("least mean squares") algorithm, a special case of gradient descent. Given the following: η, the learning rate; y, the model output; o, the target (desired) output; and E = (o − y)², the square of the error, the LMS algorithm updates the weights as follows: w ← w + η(o − y)x
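The LMS update above can be sketched as follows; the specific target weights (2, −1), learning rate, and iteration count are illustrative choices, not values from Widrow and Hoff:

```python
import numpy as np

# Minimal sketch of the ADALINE/LMS rule: y = w.x + theta, then
# w <- w + eta * (o - y) * x, driving the squared error (o - y)^2 down.
rng = np.random.default_rng(0)
w = np.zeros(2)
theta = 0.0
eta = 0.1

# Hypothetical linear target o = 2*x1 - 1*x2 that the rule should recover.
for _ in range(2000):
    x = rng.uniform(-1, 1, size=2)
    o = 2.0 * x[0] - 1.0 * x[1]
    y = w @ x + theta          # ADALINE output
    err = o - y
    w += eta * err * x         # LMS weight update
    theta += eta * err         # bias updated the same way (input of 1)

print(np.round(w, 3), round(theta, 3))  # w approaches [2, -1], theta approaches 0
```

Because the target is noiseless and linear, the weights converge to the generating values; with noisy targets LMS instead converges in the mean to the least-squares solution.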
ADALINE : MADALINE (Many ADALINE) is a three-layer (input, hidden, output), fully connected, feedforward neural network architecture for classification that uses ADALINE units in its hidden and output layers. I.e., its activation function is the sign function. The three-layer network uses memistors. As the sign functio...
ADALINE : Multilayer perceptron
ADALINE : widrowlms (2012-07-29). The LMS algorithm and ADALINE. Part II - ADALINE and memistor ADALINE. Retrieved 2024-08-17 – via YouTube. Widrow demonstrating both a working knobby ADALINE machine and a memistor ADALINE machine. "Delta Learning Rule: ADALINE". Artificial Neural Networks. Universidad Politécnica de M...
Just This Once : Just This Once is a 1993 romance novel written in the style of Jacqueline Susann by a Macintosh IIcx computer named "Hal" in collaboration with its programmer, Scott French. French reportedly spent $40,000 and 8 years developing an artificial intelligence program to analyze Susann's works and attempt t...
Just This Once : The novel's creation spanned the fields of artificial intelligence, expert systems, and natural language processing. Scott French first scanned and analyzed portions of two books by Jacqueline Susann, Valley of the Dolls and Once Is Not Enough, to determine constituents of Susann's writing style, which...
Just This Once : Jacqueline Susann's publisher was skeptical of the legality of Just This Once, although French doubted that an author's thought processes could be copyrighted. Susann's estate reportedly threatened to sue Scott French but the parties settled out of court; the settlement involved splitting profits betwe...
Just This Once : The book's publisher Steven Shragis of Carol Group said of the novel, "I'm not going to say this is a great literary work, but it's every bit as good as anything out in this field, and better than an awful lot." The novel received some positive early reviews. In USA Today, novelist Thomas Gifford compa...
Just This Once : Procedural generation The Policeman's Beard is Half Constructed
Extremal optimization : Extremal optimization (EO) is an optimization heuristic inspired by the Bak–Sneppen model of self-organized criticality from the field of statistical physics. This heuristic was designed initially to address combinatorial optimization problems such as the travelling salesman problem and spin gla...
Extremal optimization : Self-organized criticality (SOC) is a statistical physics concept to describe a class of dynamical systems that have a critical point as an attractor. Specifically, these are non-equilibrium systems that evolve through avalanches of change and dissipations that reach up to the highest scales of ...
Extremal optimization : Another piece in the puzzle is work on computational complexity, specifically that critical points have been shown to exist in NP-complete problems, where near-optimum solutions are widely dispersed and separated by barriers in the search space causing local search algorithms to get stuck or sev...
Extremal optimization : EO was designed as a local search algorithm for combinatorial optimization problems. Unlike genetic algorithms, which work with a population of candidate solutions, EO evolves a single solution and makes local modifications to the worst components. This requires that a suitable representation be...
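The single-solution, worst-component dynamic can be illustrated on a deliberately simple toy problem; the bit-matching objective below is an assumption for illustration, and real EO (in particular τ-EO) selects among ranked components probabilistically rather than always taking the worst:

```python
import random

# Minimal sketch of basic extremal optimization: each bit's "fitness" is
# 1 if it matches a hidden target and 0 otherwise; EO keeps a single
# solution and repeatedly makes a local modification to a worst component.
random.seed(1)
n = 32
target = [random.randint(0, 1) for _ in range(n)]  # hypothetical instance
state = [random.randint(0, 1) for _ in range(n)]   # single candidate solution

def component_fitness(i):
    return 1 if state[i] == target[i] else 0

for step in range(10_000):
    worst = min(range(n), key=component_fitness)   # a worst-fit component
    if component_fitness(worst) == 1:
        break                                      # every component optimal
    state[worst] = 1 - state[worst]                # local modification

print(sum(component_fitness(i) for i in range(n)))  # n once converged
```

On this toy objective the dynamic converges in at most n flips; on hard instances such as spin glasses, the same worst-component pressure instead produces the avalanche-like search behavior EO was designed for.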
Extremal optimization : Generalised extremal optimization (GEO) was developed to operate on bit strings, where component quality is determined by the absolute rate of change of the bit, or the bit's contribution to holistic solution quality. This work includes application to standard function optimisation problems as wel...
Extremal optimization : Genetic algorithm Simulated annealing
Extremal optimization : Bak, Per; Tang, Chao; Wiesenfeld, Kurt (1987-07-27). "Self-organized criticality: An explanation of the 1/f noise". Physical Review Letters. 59 (4). American Physical Society (APS): 381–384. Bibcode:1987PhRvL..59..381B. doi:10.1103/physrevlett.59.381. ISSN 0031-9007. PMID 10035754. S2CID 7674321....
Extremal optimization : Stefan Boettcher – Physics Department, Emory University Allon Percus – Claremont Graduate University Global Optimization Algorithms – Theory and Application – Archived 2008-09-11 at the Wayback Machine – Thomas Weise
Softmax function : The softmax function, also known as softargmax or normalized exponential function, converts a vector of K real numbers into a probability distribution of K possible outcomes. It is a generalization of the logistic function to multiple dimensions, and is used in multinomial logistic regressi...
Softmax function : The softmax function takes as input a vector z of K real numbers, and normalizes it into a probability distribution consisting of K probabilities proportional to the exponentials of the input numbers. That is, prior to applying softmax, some vector components could be negative, or greater than one; a...
Softmax function : The softmax function is used in various multiclass classification methods, such as multinomial logistic regression (also known as softmax regression), multiclass linear discriminant analysis, naive Bayes classifiers, and artificial neural networks. Specifically, in multinomial logistic regre...
Softmax function : In neural network applications, the number K of possible outcomes is often large, e.g. in case of neural language models that predict the most likely outcome out of a vocabulary which might contain millions of possible words. This can make the calculations for the softmax layer (i.e. the matrix multi...
Softmax function : The standard softmax is numerically unstable because of large exponentiations. The safe softmax method calculates instead σ(z)_i = e^{β(z_i − m)} / ∑_{j=1}^{K} e^{β(z_j − m)}, where m = max_i z_i is the largest factor involved. Subtracting it guarantees that the exponentiations re...
Softmax function : Geometrically the softmax function maps the vector space ℝ^K to the boundary of the standard (K − 1)-simplex, cutting the dimension by one (the range is a (K − 1)-dimensional simplex in K-dimensional space), due to the linear constraint that all outputs sum to 1, meaning it lies on a hyper...
Softmax function : The softmax function was used in statistical mechanics as the Boltzmann distribution in the foundational paper Boltzmann (1868), formalized and popularized in the influential textbook Gibbs (1902). The use of the softmax in decision theory is credited to R. Duncan Luce, who used the axiom of indep...
Softmax function : With an input of (1, 2, 3, 4, 1, 2, 3), the softmax is approximately (0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175). The output has most of its weight where the "4" was in the original input. This is what the function is normally used for: to highlight the largest values and suppress values which ...
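The worked example above can be reproduced with a few lines of Python, using the max-subtraction ("safe softmax") form with β = 1:

```python
import math

# Numerically stable softmax: subtracting the maximum leaves the result
# unchanged (the factor e^{-m} cancels) but bounds every exponent at 0,
# avoiding overflow for large inputs.
def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

print([round(p, 3) for p in softmax([1, 2, 3, 4, 1, 2, 3])])
# [0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175]
```

Without the subtraction, an input like `[1000, 1001]` would overflow `math.exp`; with it, the same call returns a valid distribution.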
Softmax function : The softmax function generates probability predictions densely distributed over its support. Other functions like sparsemax or α-entmax can be used when sparse probability predictions are desired. Also the Gumbel-softmax reparametrization trick can be used when sampling from a discrete distr...
Softmax function : Softplus Multinomial logistic regression Dirichlet distribution – an alternative way to sample categorical distributions Partition function Exponential tilting – a generalization of Softmax to more general probability distributions
Bayesian interpretation of kernel regularization : Within Bayesian statistics for machine learning, kernel methods arise from the assumption of an inner product space or similarity structure on inputs. For some such methods, such as support vector machines (SVMs), the original formulation and its regularization were no...
Bayesian interpretation of kernel regularization : The classical supervised learning problem requires estimating the output for some new input point x′ by learning a scalar-valued estimator f̂(x′) on the basis of a training set S consisting of n input-output pairs, S = (X, Y) = ((x_1, y_1), ...
Bayesian interpretation of kernel regularization : The main assumption in the regularization perspective is that the set of functions F is assumed to belong to a reproducing kernel Hilbert space H_k.
Bayesian interpretation of kernel regularization : The notion of a kernel plays a crucial role in Bayesian probability as the covariance function of a stochastic process called the Gaussian process.
Bayesian interpretation of kernel regularization : A connection between regularization theory and Bayesian theory can only be achieved in the case of finite dimensional RKHS. Under this assumption, regularization theory and Bayesian theory are connected through Gaussian process prediction. In the finite dimensional cas...
Bayesian interpretation of kernel regularization : Regularized least squares Bayesian linear regression Bayesian interpretation of Tikhonov regularization
Feature hashing : In machine learning, feature hashing, also known as the hashing trick (by analogy to the kernel trick), is a fast and space-efficient way of vectorizing features, i.e. turning arbitrary features into indices in a vector or matrix. It works by applying a hash function to the features and using their ha...
Feature hashing : Ganchev and Dredze showed that in text classification applications with random hash functions and several tens of thousands of columns in the output vectors, feature hashing need not have an adverse effect on classification performance, even without the signed hash function. Weinberger et al. (2009) a...
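A minimal sketch of the signed hashing trick follows. Using `hashlib.md5` as the hash function and 16 buckets are illustrative choices (production implementations such as scikit-learn's `FeatureHasher` use faster non-cryptographic hashes and far more buckets); the second hash byte supplies the ±1 sign discussed above, so collisions tend to cancel in expectation:

```python
import hashlib

# Signed feature hashing: map each feature string to a fixed-length
# vector index via a hash, with a hash-derived sign for each update.
def hash_vectorize(tokens, n_buckets=16):
    vec = [0.0] * n_buckets
    for tok in tokens:
        digest = hashlib.md5(tok.encode()).digest()  # stand-in hash function
        idx = digest[0] % n_buckets                  # bucket index
        sign = 1.0 if digest[1] % 2 == 0 else -1.0   # signed hash
        vec[idx] += sign
    return vec

v = hash_vectorize("the quick brown fox jumps over the lazy dog".split())
print(len(v))  # fixed-length output regardless of vocabulary size
```

The key property is that no vocabulary dictionary is stored: any token, seen before or not, maps deterministically to the same index and sign.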
Feature hashing : Implementations of the hashing trick are present in: Apache Mahout Gensim scikit-learn sofia-ml Vowpal Wabbit Apache Spark R TensorFlow Dask-ML
Feature hashing : Bloom filter – Data structure for approximate set membership Count–min sketch – Probabilistic data structure in computer science Heaps' law – Heuristic for distinct words in a document Locality-sensitive hashing – Algorithmic technique using hashing MinHash – Data mining technique
Feature hashing : Hashing Representations for Machine Learning on John Langford's website What is the "hashing trick"? - MetaOptimize Q+A
Feature scaling : Feature scaling is a method used to normalize the range of independent variables or features of data. In data processing, it is also known as data normalization and is generally performed during the data preprocessing step.
Feature scaling : Since the range of values of raw data varies widely, in some machine learning algorithms, objective functions will not work properly without normalization. For example, many classifiers calculate the distance between two points by the Euclidean distance. If one of the features has a broad range of val...
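Two common scaling schemes implied by the passage above can be sketched without external libraries; the income figures are a hypothetical broad-range feature, chosen only to show how one feature could otherwise dominate a Euclidean distance:

```python
# Min-max rescaling maps a feature to [0, 1]; standardization (z-scoring)
# maps it to zero mean and unit variance. Either keeps one broad-range
# feature from dominating Euclidean distances.
def min_max_scale(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def standardize(xs):
    mean = sum(xs) / len(xs)
    std = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mean) / std for x in xs]

incomes = [30_000, 45_000, 60_000, 120_000]  # hypothetical raw feature
print(min_max_scale(incomes))  # values now lie in [0, 1]
```

In practice the scaling parameters (min/max or mean/std) are computed on the training set during preprocessing and then reused to transform new data.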
Feature scaling : Normalization (machine learning) Normalization (statistics) Standard score fMLLR, Feature space Maximum Likelihood Linear Regression
Feature scaling : Han, Jiawei; Kamber, Micheline; Pei, Jian (2011). "Data Transformation and Data Discretization". Data Mining: Concepts and Techniques. Elsevier. pp. 111–118. ISBN 9780123814807.
Feature scaling : Lecture by Andrew Ng on feature scaling
Uniform convergence in probability : Uniform convergence in probability is a form of convergence in probability in statistical asymptotic theory and probability theory. It means that, under certain conditions, the empirical frequencies of all events in a certain event-family converge to their theoretical probabilities....
Uniform convergence in probability : For a class of predicates H defined on a set X and a set of samples x = (x_1, x_2, …, x_m), where x_i ∈ X, the empirical frequency of h ∈ H on x is Q̂_x(h) = (1/m) |{i : 1 ≤ i ≤ m, h(x_i) = 1}|. The theoretical probability of h ∈ H is defined as Q_P(h) = ...
Uniform convergence in probability : The statement of the uniform convergence theorem is as follows: If H is a set of {0, 1}-valued functions defined on a set X and P is a probability distribution on X, then for ε > 0 and m a positive integer, we have: P^m { |Q_P(h) − Q̂_x(h)| ≥ ε for some h ∈ H } ≤ 4 Π_H(2m) e^{−ε²m/8}.