MeCab : Official website
Grammar systems theory : Grammar systems theory is a field of theoretical computer science that studies systems of finite collections of formal grammars generating a formal language. Each grammar works on a string, a so-called sequential form that represents an environment. Grammar systems can thus be used as a formali...
Grammar systems theory : Artificial life Agent-based model Distributed artificial intelligence Multi-agent system == References ==
Restricted Boltzmann machine : A restricted Boltzmann machine (RBM) (also called a restricted Sherrington–Kirkpatrick model with external field or restricted stochastic Ising–Lenz–Little model) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs. RBMs we...
Restricted Boltzmann machine : The standard type of RBM has binary-valued (Boolean) hidden and visible units, and consists of a matrix of weights W of size m × n. Each weight element w_{i,j} of the matrix is associated with the connection between the visible (input) unit v_i and the hidden unit h_j. In addi...
Restricted Boltzmann machine : Restricted Boltzmann machines are trained to maximize the product of probabilities assigned to some training set V (a matrix, each row of which is treated as a visible vector v), arg max_W ∏_{v ∈ V} P(v), or equivalently, to maximize the expected log probability of a trai...
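In practice this maximum-likelihood objective is usually approximated with contrastive divergence. The following is a minimal NumPy sketch of one CD-1 update on a toy binary RBM; the unit counts, learning rate, and variable names are illustrative, not taken from any reference implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy RBM: m visible units, n hidden units (sizes chosen arbitrarily).
m, n = 6, 4
W = rng.normal(0, 0.1, size=(m, n))
b = np.zeros(m)   # visible biases
c = np.zeros(n)   # hidden biases

def cd1_step(v0, lr=0.1):
    """One contrastive-divergence (CD-1) update on a single binary visible vector."""
    global W, b, c
    # Positive phase: sample hidden units given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(n) < ph0).astype(float)
    # Negative phase: reconstruct the visible layer, then recompute hidden probabilities.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(m) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Gradient approximation: data correlations minus reconstruction correlations.
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)

v = rng.integers(0, 2, size=m).astype(float)
for _ in range(10):
    cd1_step(v)
```

The bipartite structure is what makes the positive and negative phases cheap: with no intra-layer connections, each layer's units are conditionally independent given the other layer.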
Restricted Boltzmann machine : The difference between Stacked Restricted Boltzmann Machines and RBMs is that in an RBM, lateral connections within a layer are prohibited to make analysis tractable. On the other hand, the Stacked Boltzmann consists of a combination of an unsupervised three-layer network with symmet...
Restricted Boltzmann machine : Fischer, Asja; Igel, Christian (2012), "An Introduction to Restricted Boltzmann Machines", Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Lecture Notes in Computer Science, vol. 7441, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 14–36, doi:10.10...
Restricted Boltzmann machine : Autoencoder Helmholtz machine
Restricted Boltzmann machine : Chen, Edwin (2011-07-18). "Introduction to Restricted Boltzmann Machines". Edwin Chen's blog. Nicholson, Chris; Gibson, Adam. "A Beginner's Tutorial for Restricted Boltzmann Machines". Deeplearning4j Documentation. Archived from the original on 2017-02-11. Retrieved 2018-11-15.: CS1 maint...
Restricted Boltzmann machine : Python implementation of Bernoulli RBM and tutorial. SimpleRBM is a very small RBM implementation (24 kB) useful for learning how RBMs learn and work. Julia implementation of Restricted Boltzmann machines: https://github.com/cossio/RestrictedBoltzmannMachines.jl
Foundation model : A foundation model, also known as large X model (LxM), is a machine learning or deep learning model that is trained on vast datasets so it can be applied across a wide range of use cases. Generative AI applications like Large Language Models are common examples of foundation models. Building foundati...
Foundation model : The Stanford Institute for Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM) coined the term "foundation model" in August 2021 to mean "any model that is trained on broad data (generally using self-supervision at scale) that can be adapted (e.g., fine-tune...
Foundation model : Technologically, foundation models are built using established machine learning techniques like deep neural networks, transfer learning, and self-supervised learning. Foundation models differ from previous techniques in that they are general-purpose models that function as reusable infrastructure, instead of...
Foundation model : Foundation models' general capabilities allow them to fulfill a unique role in the AI ecosystem, fueled by many upstream and downstream technologies. Training a foundation model requires several resources (e.g. data, compute, labor, hardware, code), with foundation models often involving immense amou...
Foundation model : After a foundation model is built, it can be released in one of many ways. There are many facets to a release: the asset itself, who has access, how access changes over time, and the conditions on use. All these factors contribute to how a foundation model will affect downstream applications. In part...
Memtransistor : The memtransistor (a blend word from Memory Transfer Resistor) is an experimental multi-terminal passive electronic component that might be used in the construction of artificial neural networks. It is a combination of the memristor and transistor technology. This technology is different from the 1T-1R ...
Memtransistor : These types of devices would allow for a synapse model that could realise a learning rule, by which the synaptic efficacy is altered by voltages applied to the terminals of the device. An example of such a learning rule is spike-timing-dependent plasticity, by which the weight of the synapse, in this case...
Memtransistor : Researchers at Northwestern University have fabricated a seven-terminal device on molybdenum disulfide (MoS2). One terminal controls the current between the other six. It has been shown that the I_D / V_D characteristics of the transistor can be modified even after fabrication. Subsequently, ...
Contrastive Language-Image Pre-training : Contrastive Language-Image Pre-training (CLIP) is a technique for training a pair of neural network models, one for image understanding and one for text understanding, using a contrastive objective. This method has enabled broad applications across multiple domains, including c...
Contrastive Language-Image Pre-training : It was first announced on OpenAI's official blog on January 5, 2021, with a report served directly through OpenAI's CDN, and a GitHub repository. The paper was delivered on arXiv on 26 February 2021. The report (with some details removed, and its appendix cut out to a "Suppleme...
Contrastive Language-Image Pre-training : The CLIP method trains a pair of models contrastively. One model takes in a piece of text as input and outputs a single vector representing its semantic content. The other model takes in an image and similarly outputs a single vector representing its visual content. The models ...
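The contrastive objective over paired outputs can be sketched as a symmetric cross-entropy over a batch similarity matrix. This NumPy version assumes the two encoders have already produced the embedding vectors; the temperature value and function names are illustrative, not CLIP's actual code:

```python
import numpy as np

def clip_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss over paired image/text embeddings.

    Matching image-text pairs occupy the same row index; the loss pushes their
    similarity above all mismatched pairs in the batch, in both directions.
    """
    # L2-normalize so dot products are cosine similarities.
    I = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    T = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = I @ T.T / temperature  # (batch, batch) similarity matrix

    def xent(lg):
        # Cross-entropy with the diagonal (matching pair) as the target class.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    return 0.5 * (xent(logits) + xent(logits.T))
```

A batch where each image embedding matches its paired text embedding yields a near-zero loss, while shuffled pairings yield a large one.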
Contrastive Language-Image Pre-training : While the original model was developed by OpenAI, subsequent models have been trained by other organizations as well.
Contrastive Language-Image Pre-training : In the original OpenAI CLIP report, they reported training 5 ResNet models and 3 ViT models (ViT-B/32, ViT-B/16, ViT-L/14). Each was trained for 32 epochs. The largest ResNet model took 18 days to train on 592 V100 GPUs. The largest ViT model took 12 days on 256 V100 GPUs. All ViT models wer...
Contrastive Language-Image Pre-training : OpenAI's CLIP webpage OpenCLIP: An open source implementation of CLIP Arora, Aman (2023-03-11). "The Annotated CLIP (Part-2)". amaarora.github.io. Retrieved 2024-09-11.
Gödel machine : A Gödel machine is a hypothetical self-improving computer program that solves problems in an optimal way. It uses a recursive self-improvement protocol in which it rewrites its own code when it can prove the new code provides a better strategy. The machine was invented by Jürgen Schmidhuber (first propo...
Gödel machine : Traditional problems solved by a computer require only one input and provide some output. Computers of this sort had their initial algorithm hardwired. This does not account for a dynamic natural environment, a limitation the Gödel machine was designed to overcome. The Gödel machine has limitation...
Gödel machine : There are three variables that are particularly useful in the run time of the Gödel machine. At some time t , the variable time will have the binary equivalent of t . This is incremented steadily throughout the run time of the machine. Any input meant for the Gödel machine from the natural environmen...
Gödel machine : The nature of the six proof-modifying instructions below makes it impossible to insert an incorrect theorem into the proof, thus trivializing proof verification.
Gödel machine : Gödel's incompleteness theorems
Gödel machine : Gödel Machine Home Page
Ball tree : In computer science, a ball tree (also balltree or metric tree) is a space-partitioning data structure for organizing points in a multi-dimensional space. A ball tree partitions data points into a nested set of balls. The resulting data structure has characteristics that make it useful for a number of applicatio...
Ball tree : A ball tree is a binary tree in which every node defines a D-dimensional ball containing a subset of the points to be searched. Each internal node of the tree partitions the data points into two disjoint sets which are associated with different balls. While the balls themselves may intersect, each point is ...
Ball tree : A number of ball tree construction algorithms are available. The goal of such an algorithm is to produce a tree that will efficiently support queries of the desired type (e.g. nearest-neighbor) in the average case. The specific criteria of an ideal tree will depend on the type of question being answered and...
Ball tree : An important application of ball trees is expediting nearest neighbor search queries, in which the objective is to find the k points in the tree that are closest to a given test point by some distance metric (e.g. Euclidean distance). A simple search algorithm, sometimes called KNS1, exploits the distance p...
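The pruning idea behind such a search can be sketched in a few lines: a ball whose closest possible point (center distance minus radius) is already farther than the current best candidate can be skipped entirely. This is a minimal illustration assuming Euclidean distance and a greatest-spread split heuristic, not a reproduction of the KNS1 algorithm's exact details:

```python
import numpy as np

class Ball:
    """A node of a simple ball tree built over a (num_points, dim) array."""
    def __init__(self, points):
        self.center = points.mean(axis=0)
        self.radius = np.max(np.linalg.norm(points - self.center, axis=1))
        if len(points) <= 2:
            self.points, self.left, self.right = points, None, None
        else:
            # Split along the dimension of greatest spread (a common heuristic).
            d = np.argmax(points.max(0) - points.min(0))
            order = np.argsort(points[:, d])
            mid = len(points) // 2
            self.points = None
            self.left = Ball(points[order[:mid]])
            self.right = Ball(points[order[mid:]])

def nearest(node, q, best=(np.inf, None)):
    """Return (distance, point) of the nearest neighbor of q in the tree."""
    # Prune: no point in this ball can beat the current best distance.
    if np.linalg.norm(q - node.center) - node.radius >= best[0]:
        return best
    if node.points is not None:  # leaf: check points directly
        for p in node.points:
            d = np.linalg.norm(q - p)
            if d < best[0]:
                best = (d, p)
        return best
    # Visit the child whose center is closer first, to tighten the bound early.
    kids = sorted((node.left, node.right),
                  key=lambda c: np.linalg.norm(q - c.center))
    for child in kids:
        best = nearest(child, q, best)
    return best
```

Descending into the closer child first tends to shrink the best-distance bound quickly, so more of the other subtree can be pruned by the triangle inequality.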
Matrix regularization : In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive func...
Matrix regularization : Consider a matrix W to be learned from a set of examples S = (X_i^t, y_i^t), where i goes from 1 to n, and t goes from 1 to T. Let each input matrix X_i^t be in R^{D×T}, and let W be of size D × T. A general model for the output y can be posed as y_i^t = ⟨W, X_i^t⟩_F ...
Matrix regularization : Regularization by spectral filtering has been used to find stable solutions to problems such as those discussed above by addressing ill-posed matrix inversions (see for example Filter function for Tikhonov regularization). In many cases the regularization function acts on the input (or kernel) t...
Matrix regularization : Sparse optimization has become the focus of much research interest as a way to find solutions that depend on a small number of variables (see e.g. the Lasso method). In principle, entry-wise sparsity can be enforced by penalizing the entry-wise ℓ0-norm of the matrix, but the ℓ0-norm is not...
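The standard convex relaxation replaces the ℓ0 penalty with the entry-wise ℓ1-norm, whose proximal operator is element-wise soft-thresholding; proximal-gradient methods for sparse matrix learning apply it once per iteration. A minimal sketch:

```python
import numpy as np

def soft_threshold(W, tau):
    """Proximal operator of tau * ||W||_1 (entry-wise l1 norm).

    Shrinks each entry toward zero by tau and sets entries with magnitude
    below tau exactly to zero -- this hard zeroing is the source of sparsity.
    """
    return np.sign(W) * np.maximum(np.abs(W) - tau, 0.0)
```

For example, `soft_threshold(np.array([3.0, -0.5, 1.0]), 1.0)` shrinks the large entry and zeroes out the two small ones.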
Matrix regularization : The ideas of structured sparsity and feature selection can be extended to the nonparametric case of multiple kernel learning. This can be useful when there are multiple types of input data (color and texture, for example) with different appropriate kernels for each, or when the appropriate kerne...
Matrix regularization : Regularization (mathematics) == References ==
VGGNet : The VGGNets are a series of convolutional neural networks (CNNs) developed by the Visual Geometry Group (VGG) at the University of Oxford. The VGG family includes various configurations with different depths, denoted by the letter "VGG" followed by the number of weight layers. The most common ones are VGG-16 (...
VGGNet : The key architectural principle of VGG models is the consistent use of small 3 × 3 convolutional filters throughout the network. This contrasts with earlier CNN architectures that employed larger filters, such as 11 × 11 in AlexNet. For example, two 3 × 3 convolutions stacked together have the same receptive...
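The receptive-field arithmetic behind this design choice is easy to check: each additional stride-1 convolution grows the receptive field by (kernel − 1). A short sketch (the function name is illustrative):

```python
def stacked_receptive_field(num_layers, kernel=3, stride=1):
    """Receptive field of `num_layers` stacked stride-1 convolutions."""
    rf = 1
    for _ in range(num_layers):
        rf += (kernel - 1) * stride
    return rf

# Two stacked 3x3 convs see the same 5x5 window as one 5x5 conv,
# and three see a 7x7 window, while using fewer parameters
# (e.g. 2 * 9 * C^2 = 18C^2 weights vs. 25C^2 for a single 5x5 layer)
# and interleaving more nonlinearities.
assert stacked_receptive_field(2) == 5
assert stacked_receptive_field(3) == 7
```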
Convolutional layer : In artificial neural networks, a convolutional layer is a type of network layer that applies a convolution operation to the input. Convolutional layers are some of the primary building blocks of convolutional neural networks (CNNs), a class of neural network most commonly applied to images, video,...
Convolutional layer : The concept of convolution in neural networks was inspired by the visual cortex in biological brains. Early work by Hubel and Wiesel in the 1960s on the cat's visual system laid the groundwork for artificial convolutional networks. An early convolutional neural network was developed by Kunihiko Fukush...
Convolutional layer : Convolutional neural network Pooling layer Feature learning Deep learning Computer vision == References ==
Graphics processing unit : A graphics processing unit (GPU) is a specialized electronic circuit initially designed for digital image processing and to accelerate computer graphics, being present either as a discrete video card or embedded on motherboards, mobile phones, personal computers, workstations, and game consol...
Graphics processing unit : Many companies have produced GPUs under a number of brand names. In 2009, Intel, Nvidia, and AMD/ATI were the market share leaders, with 49.4%, 27.8%, and 20.6% market share respectively. In addition, Matrox produces GPUs. Chinese companies such as Jingjia Micro have also produced GPUs for th...
Graphics processing unit : Modern GPUs have traditionally used most of their transistors to do calculations related to 3D computer graphics. In addition to the 3D hardware, today's GPUs include basic 2D acceleration and framebuffer capabilities (usually with a VGA compatibility mode). Newer cards such as AMD/ATI HD5000...
Graphics processing unit : In 2013, 438.3 million GPUs were shipped globally and the forecast for 2014 was 414.2 million. However, by the third quarter of 2022, shipments of integrated GPUs totaled around 75.5 million units, down 19% year-over-year.
Graphics processing unit : Peddie, Jon (1 January 2023). The History of the GPU – New Developments. Springer Nature. ISBN 978-3-03-114047-1. OCLC 1356877844. == External links ==
Zeuthen strategy : The Zeuthen strategy in cognitive science is a negotiation strategy used by some artificial agents. Its purpose is to measure the willingness to risk conflict. An agent will be more willing to risk conflict if it does not have much to lose in case that the negotiation fails. In contrast, an agent is ...
Zeuthen strategy : The Zeuthen strategy answers three open questions that arise when using the monotonic concession protocol, namely: Which deal should be proposed at first? On any given round, who should concede? In case of a concession, how much should the agent concede? The answer to the first question is that any a...
Zeuthen strategy : Risk(i,t) = 1 if U_i(δ(i,t)) = 0, and Risk(i,t) = (U_i(δ(i,t)) − U_i(δ(j,t))) / U_i(δ(i,t)) otherwise. Risk(i,t) is a measurement of agent i's willingness to risk conflict. The risk function formalizes the notion that an agent's willingness to risk conflict is the ratio of the utility that agent would lose ...
Zeuthen strategy : If agent i makes a sufficient concession in the next step, then, assuming that agent j is using a rational negotiation strategy, if agent j does not concede in the next step, he must do so in the step after that. The set of all sufficient concessions of agent i at step t is denoted SC(i, t).
Zeuthen strategy : δ′ = arg max_{δ ∈ SC(A,t)} U_A(δ) is the minimal sufficient concession of agent A in step t. Agent A begins the negotiation by proposing δ(A,0) = arg max_{δ ∈ NS} U_A(δ) and will make the minimal sufficient concession in step t + 1 if and only if Risk(A,t) ≤ Risk(B,t...
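The risk comparison at the heart of the strategy can be sketched directly from the formula above; the utility numbers below are made up purely for illustration:

```python
def risk(u_own_own, u_own_other):
    """Zeuthen risk for one agent at the current round.

    u_own_own:   the agent's utility for its own current proposal.
    u_own_other: the agent's utility for the opponent's current proposal.
    Returns 1 when the agent's own deal is worthless to it, otherwise the
    fraction of utility it would lose by accepting the opponent's proposal.
    """
    if u_own_own == 0:
        return 1.0
    return (u_own_own - u_own_other) / u_own_own

# Hypothetical round: A values its own proposal at 0.8 and B's at 0.2;
# B values its own proposal at 0.9 and A's at 0.6.
risk_a = risk(0.8, 0.2)   # A would lose 75% of its utility by accepting B's deal
risk_b = risk(0.9, 0.6)   # B would lose about 33%
# The agent with less to lose (lower risk) should concede this round.
concedes = "A" if risk_a <= risk_b else "B"
```

Here B's lower risk value means B has less to lose from conceding, so under the strategy B makes the next (minimal sufficient) concession.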
OpenAI Five : OpenAI Five is a computer program by OpenAI that plays the five-on-five video game Dota 2. Its first public appearance occurred in 2017, where it was demonstrated in a live one-on-one game against the professional player Dendi, who lost to it. The following year, the system had advanced to the point of pe...
OpenAI Five : Development on the algorithms used for the bots began in November 2016. OpenAI decided to use Dota 2, a competitive five-on-five video game, as a base due to it being popular on the live streaming platform Twitch, having native support for Linux, and having an application programming interface (API) availabl...
OpenAI Five : Each OpenAI Five bot is a neural network containing a single layer with a 4096-unit LSTM that observes the current game state extracted from the Dota developer's API. The neural network conducts actions via numerous possible action heads (no human data involved), and every head has meaning. For instance, ...
OpenAI Five : Prior to OpenAI Five, other AI versus human experiments and systems have been successfully used before, such as Jeopardy! with Watson, chess with Deep Blue, and Go with AlphaGo. In comparison with other games that have used AI systems to play against human players, Dota 2 differs as explained below: Long ...
OpenAI Five : OpenAI Five has received acknowledgement from the AI, tech, and video game community at large. Microsoft co-founder Bill Gates called it a "big deal", as their victories "required teamwork and collaboration". Chess champion Garry Kasparov, who lost against the Deep Blue AI in 1997, stated that despite their...
OpenAI Five : Official website Official blog
Learning vector quantization : In computer science, learning vector quantization (LVQ) is a prototype-based supervised classification algorithm. LVQ is the supervised counterpart of vector quantization systems.
Learning vector quantization : LVQ can be understood as a special case of an artificial neural network, more precisely, it applies a winner-take-all Hebbian learning-based approach. It is a precursor to self-organizing maps (SOM) and related to neural gas and the k-nearest neighbor algorithm (k-NN). LVQ was invented by...
Learning vector quantization : Below follows an informal description. The algorithm consists of three basic steps. The algorithm's input is: how many neurons the system will have, M (in the simplest case it is equal to the number of classes); what weight each neuron has, w⃗_i for i = 0, 1, ..., M − 1; the correspo...
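The core update of the basic LVQ1 scheme can be sketched in a few lines: find the winning prototype, then move it toward the sample if the class labels agree and away otherwise. This is a minimal NumPy illustration with made-up names, not Kohonen's reference code:

```python
import numpy as np

def lvq1_step(weights, labels, x, y, lr=0.1):
    """One LVQ1 update.

    weights: (M, D) array of prototype vectors, one per neuron.
    labels:  class label assigned to each prototype.
    x, y:    a training sample and its class label.
    Moves the winner toward x on a match, away from x on a mismatch.
    Returns the index of the winning prototype.
    """
    i = np.argmin(np.linalg.norm(weights - x, axis=1))  # winner-take-all
    sign = 1.0 if labels[i] == y else -1.0
    weights[i] += sign * lr * (x - weights[i])
    return i
```

For example, a sample of class 1 near the class-1 prototype pulls that prototype slightly closer, sharpening the decision boundary between the prototypes over many such steps.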
Learning vector quantization : Self-Organizing Maps and Learning Vector Quantization for Feature Sequences, Somervuo and Kohonen. 2004 (pdf)
Learning vector quantization : lvq_pak official release (1996) by Kohonen and his team
Colloquis : Colloquis, previously known as ActiveBuddy and Conversagent, was a company that created conversation-based interactive agents originally distributed via instant messaging platforms. The company had offices in New York, New York, and Sunnyvale, California.
Colloquis : Founded in 2000, the company was the brainchild of Robert Hoffer, Timothy Kay, and Peter Levitan. The idea for interactive agents (also known as Internet bots) came from the team's vision to add functionality to increasingly popular instant messaging services. The original implementation took shape as a wor...
Deep learning : Deep learning is a subset of machine learning that focuses on utilizing neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and is centered around stacking artificial neurons into layers and "training"...
Deep learning : Most modern deep learning models are based on multi-layered neural networks such as convolutional neural networks and transformers, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep ...
Deep learning : Deep neural networks are generally interpreted in terms of the universal approximation theorem or probabilistic inference. The classic universal approximation theorem concerns the capacity of feedforward neural networks with a single hidden layer of finite size to approximate continuous functions. In 19...
Deep learning : Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For exampl...
Deep learning : Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer. By 2019, graphics processing units (GPUs), often with AI-specific en...
Deep learning : Deep learning is closely related to a class of theories of brain development (specifically, neocortical development) proposed by cognitive neuroscientists in the early 1990s. These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These ...
Deep learning : Facebook's AI lab performs tasks such as automatically tagging uploaded pictures with the names of the people in them. Google's DeepMind Technologies developed a system capable of learning how to play Atari video games using only pixels as data input. In 2015 they demonstrated their AlphaGo system, whic...
Deep learning : Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science.
Deep learning : Applications of artificial intelligence Comparison of deep learning software Compressed sensing Differentiable programming Echo state network List of artificial intelligence projects Liquid state machine List of datasets for machine-learning research Reservoir computing Scale space and deep learning Spa...
Deep learning : == Further reading ==
Character computing : Character computing is a trans-disciplinary field of research at the intersection of computer science and psychology. It is any computing that incorporates the human character within its context. Character is defined as all features or characteristics defining an individual and guiding their behav...
Character computing : Character computing can be viewed as an extension of the well-established field of affective computing. Based on the foundations of the different psychology branches, it advocates defining behavior as a compound attribute that is not driven by either personality, emotions, situation or cognition a...
Character computing : The term character computing was first coined at its first workshop in 2017. Since then it has had three international workshops and numerous publications. Despite its young age, it has already drawn some interest in the research community, leading to the publication of the first book under the same ti...
Character computing : The word character originates from the Greek word meaning “stamping tool”, referring to distinctive features and traits. Over the years it has been given many different connotations, like the moral character in philosophy, the temperament in psychology, a person in literature or an avatar in vario...
Character computing : Research into character computing can be divided into three areas, which complement each other but can each be investigated separately. The first area is sensing and predicting character states and traits or ensuing behavior. The second area is adapting applications to certain character states or ...
Character computing : Character computing is based on a holistic psychologically driven model of human behavior. Human behavior is modeled and predicted based on the relationships between a situation and a human's character. To further define character in a more formal or holistic manner, we represent it in light of th...
Open information extraction : In natural language processing, open information extraction (OIE) is the task of generating a structured, machine-readable representation of the information in text, usually in the form of triples or n-ary propositions.
Open information extraction : A proposition can be understood as a truth-bearer, a textual expression of a potential fact (e.g., "Dante wrote the Divine Comedy"), represented in a structure amenable to computers [e.g., ("Dante", "wrote", "Divine Comedy")]. An OIE extraction normally consists of a relation and a set of ...
Open information extraction : ReVerb suggested the necessity of producing meaningful relations to more accurately capture the information in the input text. For instance, given the sentence "Faust made a pact with the devil", it would be erroneous to just produce the extraction ("Faust", "made", "a pact"), since it would ...
Content determination : Content determination is the subtask of natural language generation (NLG) that involves deciding on the information to be communicated in a generated text. It is closely related to the task of document structuring.
Content determination : Consider an NLG system which summarises information about sick babies. Suppose this system has four pieces of information it can communicate: the baby is being given morphine via an IV drip; the baby's heart rate shows bradycardias (temporary drops); the baby's temperature is normal; the baby is cr...
Content determination : There are three general issues which almost always impact the content determination task, and can be illustrated with the above example. Perhaps the most fundamental issue is the communicative goal of the text, i.e. its purpose and reader. In the above example, for instance, a doctor who wants t...
Content determination : There are three basic approaches to document structuring: schemas (content templates), statistical approaches, and explicit reasoning. Schemas are templates which explicitly specify the content of a generated text (as well as document structuring information). Typically, they are constructed by ...
Hidden layer : In artificial neural networks, a hidden layer is a layer of artificial neurons that is neither an input layer nor an output layer. The simplest examples appear in multilayer perceptrons (MLP), as illustrated in the diagram. An MLP without any hidden layer is essentially just a linear model. With hidden l...
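The classic illustration of why a hidden layer matters is XOR: no linear model can represent it, but a single hidden layer of two units can. The weights below are hand-chosen for illustration (not learned):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Hand-chosen weights: an MLP with one hidden layer of two ReLU units
# computes XOR, which no purely linear (hidden-layer-free) model can.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])

def mlp_xor(x):
    h = relu(x @ W1 + b1)  # hidden layer: [a+b, max(0, a+b-1)]
    return h @ W2          # output layer: (a+b) - 2*max(0, a+b-1)
```

The second hidden unit fires only when both inputs are 1, and the output layer subtracts it off twice, producing 0, 1, 1, 0 for the four binary inputs.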
Studies in Natural Language Processing : Studies in Natural Language Processing is the book series of the Association for Computational Linguistics, published by Cambridge University Press. Steven Bird is the series editor. The editorial board has the following members: Chu-Ren Huang, Chair Professor of Applied Chinese...
AI alignment : In the field of artificial intelligence (AI), alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives. It is often ch...
AI alignment : Programmers provide an AI system such as AlphaZero with an "objective function", in which they intend to encapsulate the goal(s) the AI is configured to accomplish. Such a system later populates a (possibly implicit) internal "model" of its environment. This model encapsulates all the agent's beliefs abo...
AI alignment : In 1960, AI pioneer Norbert Wiener described the AI alignment problem as follows: If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively ... we had better be quite sure that the purpose put into the machine is the purpose which we really desire. AI al...
AI alignment : Governmental and treaty organizations have made statements emphasizing the importance of AI alignment. In September 2021, the Secretary-General of the United Nations issued a declaration that included a call to regulate AI to ensure it is "aligned with shared global values". That same month, the PRC publ...
AI alignment : AI alignment is often perceived as a fixed objective, but some researchers argue it would be more appropriate to view alignment as an evolving process. One view is that as AI technologies advance and human values and preferences change, alignment solutions must also adapt dynamically. Another is that alignm...
AI alignment : Brockman, John, ed. (2019). Possible Minds: Twenty-five Ways of Looking at AI (Kindle ed.). Penguin Press. ISBN 978-0525557999.: CS1 maint: ref duplicates default (link) Ngo, Richard; et al. (2023). "The Alignment Problem from a Deep Learning Perspective". arXiv:2209.00626 [cs.AI]. Ji, Jiaming; et al. (2...