Removal of Sam Altman from OpenAI : Microsoft's internal memos regarding OpenAI
Explanation-based learning : Explanation-based learning (EBL) is a form of machine learning that exploits a very strong, or even perfect, domain theory (i.e. a formal theory of an application domain akin to a domain model in ontology engineering, not to be confused with Scott's domain theory) in order to make generaliz...
Explanation-based learning : An example of EBL using a perfect domain theory is a program that learns to play chess through example. A specific chess position that contains an important feature such as "Forced loss of black queen in two moves" includes many irrelevant features, such as the specific scattering of pawns ...
Explanation-based learning : An especially good application domain for EBL is natural language processing (NLP). Here a rich domain theory, i.e., a natural language grammar (although neither perfect nor complete), is tuned to a particular application or particular language usage, using a treebank (training examples). ...
Explanation-based learning : One-shot learning in computer vision Zero-shot learning
Observational Health Data Sciences and Informatics : The Observational Health Data Sciences and Informatics, or OHDSI (pronounced "Odyssey") is an international collaborative effort aimed at improving health outcomes through large-scale analytics of health data. The OHDSI effort includes diverse researchers and health ...
KAoS : KAoS is a policy and domain services framework created by the Florida Institute for Human and Machine Cognition. It uses W3C's Web Ontology Language (OWL) standard for policy representation and reasoning, and a software guard technology for efficient enforcement of a compiled version of its policies. It has been...
KAoS : KAoS homepage
Llama (language model) : Llama (Large Language Model Meta AI, formerly stylized as LLaMA) is a family of large language models (LLMs) released by Meta AI starting in February 2023. The latest version is Llama 3.3, released in December 2024. Llama models are trained at different parameter sizes, ranging between 1B and 4...
Llama (language model) : After the release of large language models such as GPT-3, a focus of research was up-scaling models which in some instances showed major increases in emergent capabilities. The release of ChatGPT and its surprise success caused an increase in attention to large language models. Compared with ot...
Llama (language model) : LLaMA was announced on February 24, 2023, via a blog post and a paper describing the model's training, architecture, and performance. The inference code used to run the model was publicly released under the open-source GPLv3 license. Access to the model's weights was managed by an application p...
Llama (language model) : On July 18, 2023, in partnership with Microsoft, Meta announced Llama 2, the next generation of Llama. Meta trained and released Llama 2 in three model sizes: 7, 13, and 70 billion parameters. The model architecture remains largely unchanged from that of LLaMA-1 models, but 40% more data was us...
Llama (language model) : On April 18, 2024, Meta released Llama-3 with two sizes: 8B and 70B parameters. The models have been pre-trained on approximately 15 trillion tokens of text gathered from “publicly available sources” with the instruct models fine-tuned on “publicly available instruction datasets, as well as ove...
Llama (language model) : For the training cost column, only the largest model's cost is written. So, for example, "21,000" is the training cost of Llama 2 70B in units of petaFLOP-days. Also, 1 petaFLOP-day = 1 petaFLOP/sec × 1 day = 8.64E19 FLOP. "T" means "trillion" and "B" means "billion".
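The unit conversion above is easy to verify with a few lines of arithmetic. The sketch below uses the 21,000 petaFLOP-day figure quoted in this section; the variable names are illustrative.

```python
# 1 petaFLOP-day = 1e15 FLOP/sec sustained for one day.
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 s
PETAFLOP = 1e15                  # FLOP per second

petaflop_day = PETAFLOP * SECONDS_PER_DAY
print(f"1 petaFLOP-day = {petaflop_day:.2e} FLOP")  # 8.64e+19 FLOP

# Total compute for a model quoted at 21,000 petaFLOP-days:
total = 21_000 * petaflop_day
print(f"21,000 petaFLOP-days = {total:.2e} FLOP")   # 1.81e+24 FLOP
```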
Llama (language model) : The Stanford University Institute for Human-Centered Artificial Intelligence (HAI) Center for Research on Foundation Models (CRFM) released Alpaca, a training recipe based on the LLaMA 7B model that uses the "Self-Instruct" method of instruction tuning to acquire capabilities comparable to the ...
Llama (language model) : Wired describes the 8B parameter version of Llama 3 as being "surprisingly capable" given its size. The response to Meta's integration of Llama into Facebook was mixed, with some users confused after Meta AI told a parental group that it had a child. According to the Q4 2023 Earnings transcript...
Llama (language model) : GPT-4o; IBM Granite, an open-source LLM made by IBM; Mistral AI, a French open-source AI company
Llama (language model) : Official website Official Hugging Face organization for Llama, Llama Guard, and Prompt Guard models
Early stopping : In machine learning, early stopping is a form of regularization used to avoid overfitting when training a model with an iterative method, such as gradient descent. Such methods update the model to make it better fit the training data with each iteration. Up to a point, this improves the model's perform...
Early stopping : This section presents some of the basic machine-learning concepts required for a description of early stopping methods.
Early stopping : These early stopping rules work by splitting the original training set into a new training set and a validation set. The error on the validation set is used as a proxy for the generalization error in determining when overfitting has begun. These methods are employed in the training of many iterative ma...
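The stopping rule described above can be sketched as a short routine. This is a minimal illustration of "patience"-based early stopping, not any particular library's implementation; the function names (`train_with_early_stopping`, `train_step`, `val_error`) are hypothetical.

```python
def train_with_early_stopping(train_step, val_error, max_epochs=100, patience=5):
    """Stop training when validation error fails to improve for `patience` epochs.

    train_step(epoch) performs one training pass; val_error() returns the
    current error on the held-out validation set. Both are supplied by the
    caller -- this sketch implements only the stopping rule itself.
    """
    best_err, best_epoch, epochs_since_best = float("inf"), 0, 0
    for epoch in range(max_epochs):
        train_step(epoch)
        err = val_error()
        if err < best_err:
            best_err, best_epoch, epochs_since_best = err, epoch, 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:
                break  # validation error stopped improving: assume overfitting
    return best_epoch, best_err

# Illustrative run: validation error falls, then rises (overfitting sets in).
errors = [0.9, 0.7, 0.5, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8]
epoch, err = train_with_early_stopping(lambda e: None,
                                       iter(errors).__next__,
                                       max_epochs=len(errors), patience=3)
print(epoch, err)  # best epoch 3, error 0.4
```

Training halts three epochs after the minimum at epoch 3, and the caller would keep the parameters from that best epoch.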
Early stopping : Overfitting, early stopping is one of the methods used to prevent overfitting Generalization error Regularization (mathematics) Statistical learning theory Boosting (machine learning) Cross-validation, in particular using a "validation set" Neural networks
Artificial neuron : An artificial neuron is a mathematical function conceived as a model of a biological neuron in a neural network. The artificial neuron is the elementary unit of an artificial neural network. The design of the artificial neuron was inspired by biological neural circuitry. Its inputs are analogous to ...
Artificial neuron : For a given artificial neuron k, let there be m + 1 inputs with signals x_0 through x_m and weights w_k0 through w_km. Usually, the input x_0 is assigned the value +1, which makes it a bias input with w_k0 = b_k. This leaves only m actual inputs to the neuron: x_1 to x_m. The ou...
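The definition above amounts to a weighted sum passed through an activation function. A minimal sketch, with a sigmoid chosen here purely for illustration:

```python
import math

def neuron_output(weights, inputs,
                  activation=lambda v: 1.0 / (1.0 + math.exp(-v))):
    """Output of artificial neuron k: y_k = phi(sum_j w_kj * x_j).

    `weights` is [w_k0, ..., w_km] and `inputs` is [x_1, ..., x_m];
    the bias input x_0 = +1 is prepended here, so w_k0 plays the role
    of the bias b_k.
    """
    x = [1.0] + list(inputs)  # x_0 = +1 (bias input)
    v = sum(w * xi for w, xi in zip(weights, x))
    return activation(v)

# Bias w_k0 = -1.5 with unit weights on two inputs that are both active:
y = neuron_output([-1.5, 1.0, 1.0], [1.0, 1.0])
print(round(y, 3))  # sigmoid(0.5) = 0.622
```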
Artificial neuron : An MCP neuron is a kind of restricted artificial neuron which operates in discrete time-steps. Each has zero or more inputs, written as x_1, ..., x_n. It has one output, written as y. Each input can be either excitatory or inhibitory. The output can either be quiet or firing. ...
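One time-step of such a neuron can be sketched as below. This follows the common presentation in which any active inhibitory input vetoes firing; the function name is illustrative.

```python
def mcp_neuron(excitatory, inhibitory, threshold):
    """One time-step of a McCulloch-Pitts neuron (0 = quiet, 1 = firing).

    Any active inhibitory input silences the neuron outright; otherwise it
    fires iff the number of active excitatory inputs reaches the threshold.
    """
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Two excitatory inputs with threshold 2: behaves as logical AND.
print(mcp_neuron([1, 1], [], threshold=2))   # 1 (fires)
# An active inhibitory input overrides any amount of excitation.
print(mcp_neuron([1, 1], [1], threshold=2))  # 0 (quiet)
```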
Artificial neuron : Artificial neurons are designed to mimic aspects of their biological counterparts. However, a significant performance gap exists between biological and artificial neural networks. In particular, single biological neurons in the human brain with oscillating activation function capable of learning the X...
Artificial neuron : There is research and development into physical artificial neurons – organic and inorganic. For example, some artificial neurons can receive and release dopamine (chemical signals rather than electrical signals) and communicate with natural rat muscle and brain cells, with potential for use in BCIs/...
Artificial neuron : The first artificial neuron was the Threshold Logic Unit (TLU), or Linear Threshold Unit, first proposed by Warren McCulloch and Walter Pitts in 1943 in A logical calculus of the ideas immanent in nervous activity. The model was specifically targeted as a computational model of the "nerve net" in th...
Artificial neuron : The activation function of a neuron is chosen to have a number of properties which either enhance or simplify the network containing the neuron. Crucially, for instance, any multilayer perceptron using a linear activation function has an equivalent single-layer network; a non-linear function is ther...
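The collapse of linear layers mentioned above can be demonstrated directly: composing two linear (activation-free) layers is the same as one layer whose weight matrix is the product of the two. A small sketch in the 2-dimensional case, ignoring biases:

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(m, v):
    """Apply a 2x2 matrix to a length-2 vector."""
    return [sum(m[i][j] * v[j] for j in range(2)) for i in range(2)]

W1 = [[1.0, 2.0], [3.0, 4.0]]   # first layer weights
W2 = [[0.5, -1.0], [2.0, 0.0]]  # second layer weights
x = [1.0, -2.0]

# Two linear layers applied in sequence...
two_layer = matvec(W2, matvec(W1, x))
# ...equal a single layer whose weight matrix is the product W2 * W1.
one_layer = matvec(matmul(W2, W1), x)
print(two_layer == one_layer)  # True
```

With a non-linear activation between the layers, this reduction no longer holds, which is exactly why non-linearity is needed for multilayer networks to gain expressive power.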
Artificial neuron : The following is a simple pseudocode implementation of a single Threshold Logic Unit (TLU) which takes Boolean inputs (true or false), and returns a single Boolean output when activated. An object-oriented model is used. No method of training is defined, since several exist. If a purely functional m...
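An object-oriented TLU of the kind described above might look like the following Python rendering. It is a sketch, not the article's exact pseudocode: the class and method names are illustrative, and, as noted, no training method is defined, so weights and threshold are fixed at construction time.

```python
class TLU:
    """Threshold Logic Unit: Boolean inputs, a single Boolean output."""

    def __init__(self, threshold, weights):
        self.threshold = threshold
        self.weights = weights

    def fire(self, inputs):
        """Return True iff the summed weights of active inputs reach the threshold."""
        total = sum(w for w, x in zip(self.weights, inputs) if x)
        return total >= self.threshold

# A TLU computing the logical AND of two Boolean inputs.
and_unit = TLU(threshold=2, weights=[1, 1])
print(and_unit.fire([True, True]))   # True
print(and_unit.fire([True, False]))  # False
```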
Artificial neuron : Binding neuron Connectionism
Artificial neuron : Artifical [sic] neuron mimicks function of human cells McCulloch-Pitts Neurons (Overview)
Machine learning : Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data, and thus perform tasks without explicit instructions. Within a subdiscipline in machine learning, advances in...
Machine learning : The term machine learning was coined in 1959 by Arthur Samuel, an IBM employee and pioneer in the field of computer gaming and artificial intelligence. The synonym self-teaching computers was also used in this time period. Although the earliest machine learning model was introduced in the 1950s when ...
Machine learning : A core objective of a learner is to generalize from its experience. Generalization in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. The training examples come from some generally unknown probability ...
Machine learning : Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system: Supervised learning: The computer is presented with example inputs and their desired outpu...
Machine learning : A machine learning model is a type of mathematical model that, once "trained" on a given dataset, can be used to make predictions or classifications on new data. During training, a learning algorithm iteratively adjusts the model's internal parameters to minimize errors in its predictions. By extensi...
Machine learning : There are many applications for machine learning. For example, in 2006 the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. ...
Machine learning : Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results. Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and peopl...
Machine learning : Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data in a training and test set (conventionally 2/3 training set and 1/3 test set designation) and evaluates the performance of the training model on the test set. In...
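The 2/3 / 1/3 holdout convention described above can be sketched as follows. The function names (`holdout_accuracy`, `train_fn`, `majority_classifier`) are hypothetical; real projects would typically reach for a library utility such as a train/test splitter instead.

```python
import random

def holdout_accuracy(data, labels, train_fn, seed=0):
    """Estimate accuracy via the holdout method: 2/3 train, 1/3 test.

    `train_fn(train_x, train_y)` must return a predict(x) callable;
    the split ratio follows the conventional 2/3 / 1/3 designation.
    """
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = (2 * len(idx)) // 3
    train_idx, test_idx = idx[:cut], idx[cut:]
    predict = train_fn([data[i] for i in train_idx],
                       [labels[i] for i in train_idx])
    correct = sum(predict(data[i]) == labels[i] for i in test_idx)
    return correct / len(test_idx)

# Toy "model": always predict the majority class of the training labels.
def majority_classifier(xs, ys):
    majority = max(set(ys), key=ys.count)
    return lambda x: majority

data = list(range(12))
labels = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0]
acc = holdout_accuracy(data, labels, majority_classifier)
print(acc)
```

The resulting accuracy depends on the random split, which is why the surrounding text goes on to discuss repeated splits and cross-validation as less variable alternatives.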
Machine learning : Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of nonlinear hidden units. By 2019, graphics processing units (GPUs), o...
Machine learning : Software suites containing a variety of machine learning algorithms include the following:
Machine learning : Journal of Machine Learning Research Machine Learning Nature Machine Intelligence Neural Computation IEEE Transactions on Pattern Analysis and Machine Intelligence
Machine learning : AAAI Conference on Artificial Intelligence Association for Computational Linguistics (ACL) European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) International Conference on Computational Intelligence Methods for Bioinformatics and Biostati...
Machine learning : Automated machine learning – Process of automating the application of machine learning Big data – Extremely large or complex datasets Deep learning – branch of ML concerned with artificial neural networks Differentiable programming – Programming paradigm List of datasets for machine-learning research...
Machine learning : Domingos, Pedro (September 22, 2015). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books. ISBN 978-0465065707. Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann. ISBN 978-1-55860-467-4. Archived from the original on 2...
Machine learning : International Machine Learning Society mloss is an academic database of open-source machine learning software.
Automation in construction : Automation in construction is the combination of methods, processes, and systems that allow for greater machine autonomy in construction activities. Construction automation may have multiple goals, including but not limited to, reducing jobsite injuries, decreasing activity completion times...
Automation in construction : Kratos Defense & Security Solutions fielded the world’s first Autonomous Truck-Mounted Attenuator (ATMA) in 2017, in conjunction with Royal Truck & Equipment.
Automation in construction : Equipment control and management: Automation can be used to control and monitor construction equipment, such as cranes, excavators, and bulldozers. Material handling: Automated systems can be used to handle, transport, and place materials such as concrete, bricks, and stones. Surveying: Aut...
Automation in construction : The use of automation in construction has become increasingly prevalent in recent years due to its numerous benefits. Automation in construction refers to the use of machinery, software, and other technologies to perform tasks that were previously done manually by workers. One of the most s...
80 Million Tiny Images : 80 Million Tiny Images is a dataset intended for training machine learning systems constructed by Antonio Torralba, Rob Fergus, and William T. Freeman in a collaboration between MIT and New York University. It was published in 2008. The dataset has size 760 GB. It contains 79,302,017 32×32 pixe...
80 Million Tiny Images : It was first reported in a technical report in April 2007, during the middle of the construction process, when there were only 73 million images. The full dataset was published in 2008. They began with all 75,846 nonabstract nouns in WordNet, and then for each of these nouns, they scraped 7 Ima...
80 Million Tiny Images : The 80 Million Tiny Images dataset was retired from use by its creators in 2020, after a paper by researchers Abeba Birhane and Vinay Prabhu found that some of the labeling of several publicly available image datasets, including 80 Million Tiny Images, contained racist and misogynistic slurs wh...
80 Million Tiny Images : List of datasets in computer vision and image processing
Ernie Bot : Ernie Bot (Chinese: 文心一言, Pinyin: wénxīn yīyán), full name Enhanced Representation through Knowledge Integration, is an AI chatbot service product of Baidu, released in 2023. It is built on a large language model called ERNIE, which has been in development since 2019. The latest version, ERNIE 4.0, was announced on Oc...
Ernie Bot : Ernie Bot was initially released for invited testing on March 16, 2023, based on "Ernie 3.0", a large language model that had been in development since 2019. Ernie's so-called live release demo was reported to have been prerecorded, which caused Baidu's stock to drop 10 percent the same day. The company's s...
Ernie Bot : Ernie Bot is based on particular Ernie foundation models, including Ernie 3.0, Ernie 3.5, and Ernie 4.0. The training process starts from pre-training, learning from trillions of data points and billions of knowledge pieces. This was followed by refinement through supervised fine-tuning, reinforcement learn...
Ernie Bot : In its subscription options, the professional plan gives users access to Ernie 4.0, paid either month by month or at a reduced monthly rate with auto-renewal. Meanwhile, Ernie 3.5 is free of charge. Ernie 4.0, the language model for Ernie Bot, has information updated to April 2023.
Ernie Bot : Ernie Bot is subject to the Chinese government's censorship regime. In public tests with journalists, Ernie Bot refused to answer questions about Xi Jinping, the 1989 Tiananmen Square protests and massacre, the persecution of Uyghurs in China in Xinjiang, and the 2019–2020 Hong Kong protests. When queried a...
Ernie Bot : Artificial intelligence industry in China ChatGPT Google Gemini
Ernie Bot : Official website Media related to ERNIE Bot at Wikimedia Commons
GOLOG : GOLOG is a high-level logic programming language for the specification and execution of complex actions in dynamical domains. It is based on the situation calculus. It is a first-order logical language for reasoning about action and change. GOLOG was developed at the University of Toronto.
GOLOG : The concept of situation calculus on which the GOLOG programming language is based was first proposed by John McCarthy in 1963.
GOLOG : A GOLOG interpreter automatically maintains a direct characterization of the dynamic world being modeled, on the basis of user supplied axioms about preconditions, effects of actions and the initial state of the world. This allows the application to reason about the condition of the world and consider the impac...
GOLOG : Golog has been used to model the behavior of autonomous agents. In addition to a logic-based action formalism for describing the environment and the effects of basic actions, it enables the construction of complex actions using typical programming language constructs. It is also used for applications in high l...
GOLOG : In contrast to the Planning Domain Definition Language, Golog supports planning and scripting as well. Planning means that a goal state in the world model is defined, and the solver brings a logical system into this state. Behavior scripting implements reactive procedures, which are running as a computer progra...
International Conference on Language Resources and Evaluation : The International Conference on Language Resources and Evaluation is an international conference organised by the ELRA Language Resources Association every other year (on even years) with the support of institutions and organisations involved in Natural la...
International Conference on Language Resources and Evaluation : The survey of the LREC conferences over the period 1998-2013 was presented during the 2014 conference in Reykjavik as a closing session. It appears that the number of papers and signatures is increasing over time. The average number of authors per paper is...
International Conference on Language Resources and Evaluation : The LRE Map was introduced at LREC 2010 and is now a regular feature of the LREC submission process for both the conference papers and the workshop papers. At the submission stage, the authors are asked to provide some basic information about all the resou...
International Conference on Language Resources and Evaluation : Conference website European Language Resources Association web site
Deep belief network : In machine learning, a deep belief network (DBN) is a generative graphical model, or alternatively a class of deep neural network, composed of multiple layers of latent variables ("hidden units"), with connections between the layers but not between units within each layer. When trained on a set of...
Deep belief network : The training method for RBMs proposed by Geoffrey Hinton for use with training "Product of Experts" models is called contrastive divergence (CD). CD provides an approximation to the maximum likelihood method that would ideally be applied for learning the weights. In training a single RBM, weight u...
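A single CD-1 weight update of the kind described above can be sketched for a tiny RBM. This is an illustrative, bias-free simplification (real implementations also learn visible and hidden biases and process data in batches); the function name `cd1_update` is hypothetical.

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def cd1_update(W, v0, lr, rng):
    """One CD-1 weight update for a tiny RBM (no bias terms, for brevity).

    W[i][j] connects visible unit i to hidden unit j. CD-1 contrasts the
    data-driven statistics <v0 h0> with the statistics <v1 h1> obtained
    after a single Gibbs step, approximating the maximum-likelihood gradient.
    """
    n_v, n_h = len(W), len(W[0])
    # Up pass: sample binary hidden units from the data vector v0.
    h0 = [1 if rng.random() < sigmoid(sum(v0[i] * W[i][j] for i in range(n_v)))
          else 0 for j in range(n_h)]
    # Down pass: reconstruct visible probabilities, then hidden probabilities.
    v1 = [sigmoid(sum(h0[j] * W[i][j] for j in range(n_h))) for i in range(n_v)]
    h1 = [sigmoid(sum(v1[i] * W[i][j] for i in range(n_v))) for j in range(n_h)]
    # Contrastive divergence update: positive phase minus negative phase.
    for i in range(n_v):
        for j in range(n_h):
            W[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])
    return W

rng = random.Random(0)
W = [[rng.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(3)]
W = cd1_update(W, [1, 0, 1], lr=0.1, rng=rng)
```

Greedy DBN training repeats this layer by layer: once one RBM is trained, its hidden activations become the visible data for the next.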
Deep belief network : Bayesian network Convolutional deep belief network Deep learning Energy based model Stacked Restricted Boltzmann Machine
Deep belief network : Hinton, Geoffrey E. (2009-05-31). "Deep belief networks". Scholarpedia. 4 (5): 5947. Bibcode:2009SchpJ...4.5947H. doi:10.4249/scholarpedia.5947. ISSN 1941-6016. "Deep Belief Networks". Deep Learning Tutorials. "Deep Belief Network Example". Deeplearning4j Tutorials. Archived from the original on 2...
List of text corpora : Text corpora (singular: text corpus) are large and structured sets of texts, which have been systematically collected. Text corpora are used by AI developers to train large language models, and by corpus linguists and researchers in other branches of linguistics for statistical analysis, hypothesis test...
List of text corpora : American National Corpus Bank of English BookCorpus British National Corpus Bergen Corpus of London Teenage Language (COLT) Brown Corpus, forming part of the "Brown Family" of corpora, together with LOB, Frown and F-LOB Corpus of Contemporary American English (COCA) 425 million words, 1990–2011. ...
List of text corpora : CETENFolha Basque: The Corpus of Electronic Texts Corpus Inscriptionum Insularum Celticarum (CIIC), covering Primitive Irish inscriptions in Ogham Google Books Ngram Corpus The Georgian Language Corpus Thesaurus Linguae Graecae (Ancient Greek) Eastern Armenian National Corpus (EANC) 110 million w...
List of text corpora : Corpus Inscriptionum Semiticarum Kanaanäische und Aramäische Inschriften Hamshahri Corpus (Persian) Persian in MULTEXT-EAST corpus (Persian) Amarna letters (for Akkadian, Egyptian, Sumerogram's, etc.) TEP: Tehran English-Persian Parallel Corpus TMC: Tehran Monolingual Corpus, Standard corpus for ...
List of text corpora : Uzbek national corpus (20 million words)
List of text corpora : Nepali Text Corpus (90+ million running words/6.5+ million sentences)
List of text corpora : Kotonoha Japanese language corpus LIVAC Synchronous Corpus (Chinese)
List of text corpora : SinMin dataset (Sinhala)
List of text corpora : Chinese/English Political Interpreting Corpus (CEPIC) consists of transcripts of speeches delivered by top political figures from Hong Kong, Beijing, Washington DC and London, as well as their translated/interpreted texts. Developed by Jun Pan and HKBU Library. Europarl Corpus - proceedings of th...
List of text corpora : Corpus of Political Speeches contains four collections of political speeches in English and Chinese from The Corpus of U.S. Presidential Speeches (1789–2015), The Corpus of Policy Address by Hong Kong Governors (1984–1996) and Hong Kong Chief Executives (1997–2014), The Corpus of Speeches given o...
List of text corpora : Cambridge Learner Corpus Corpus of Academic Written and Spoken English (CAWSE), a collection of Chinese students’ English language samples in academic settings. Freely downloadable online. English as a Lingua Franca in Academic Settings (ELFA), an academic ELF corpus. International Corpus of Lear...
Inductive bias : The inductive bias (also known as learning bias) of a learning algorithm is the set of assumptions that the learner uses to predict outputs for inputs that it has not encountered. Inductive bias is anything which makes the algorithm learn one pattern instead of another pattern (e.g., step-function...
Inductive bias : The following is a list of common inductive biases in machine learning algorithms. Maximum conditional independence: if the hypothesis can be cast in a Bayesian framework, try to maximize conditional independence. This is the bias used in the Naive Bayes classifier. Minimum cross-validation error: when...
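One bias from such lists, nearest neighbors, assumes that cases close together in feature space share a class. A minimal 1-nearest-neighbor sketch (the function name is illustrative):

```python
def nearest_neighbor_classify(train, query):
    """Classify `query` under the nearest-neighbors inductive bias.

    `train` is a list of ((x, y), label) pairs; the bias is that the
    query shares the label of its closest training example, measured
    here with squared Euclidean distance.
    """
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    _, label = min(train, key=lambda pair: dist2(pair[0], query))
    return label

train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((5.0, 5.0), "B")]
print(nearest_neighbor_classify(train, (0.2, 0.1)))  # A
print(nearest_neighbor_classify(train, (4.8, 5.1)))  # B
```

The distance metric itself encodes the assumption: change it, and the learner generalizes differently from the same data.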
Inductive bias : Although most learning algorithms have a static bias, some algorithms are designed to shift their bias as they acquire more data. This does not avoid bias, since the bias shifting process itself must have a bias.
Inductive bias : Algorithmic bias Cognitive bias No free lunch theorem No free lunch in search and optimization
LangChain : LangChain is a software framework that helps facilitate the integration of large language models (LLMs) into applications. As a language model integration framework, LangChain's use-cases largely overlap with those of language models in general, including document analysis and summarization, chatbots, and c...
LangChain : LangChain was launched in October 2022 as an open source project by Harrison Chase, while working at machine learning startup Robust Intelligence. The project quickly garnered popularity, with improvements from hundreds of contributors on GitHub, trending discussions on Twitter, lively activity on the proje...
LangChain : LangChain's developers highlight the framework's applicability to use-cases including chatbots, retrieval-augmented generation, document summarization, and synthetic data generation. As of March 2023, LangChain included integrations with systems including Amazon, Google, and Microsoft Azure cloud storage; A...
LangChain : Official website Discord server support hub Langchain-ai on GitHub
Intelligent agent : In artificial intelligence, an intelligent agent is an entity that perceives its environment, takes actions autonomously to achieve goals, and may improve its performance through machine learning or by acquiring knowledge. Leading AI textbooks define artificial intelligence as the "study and design ...
Intelligent agent : The concept of intelligent agents provides a foundational lens through which to define and understand artificial intelligence. For instance, the influential textbook Artificial Intelligence: A Modern Approach (Russell & Norvig) describes: Agent: Anything that perceives its environment (using sensors...
Intelligent agent : An objective function (or goal function) specifies the goals of an intelligent agent. An agent is deemed more intelligent if it consistently selects actions that yield outcomes better aligned with its objective function. In effect, the objective function serves as a measure of success. The objective...
Intelligent agent : An intelligent agent's behavior can be described mathematically by an agent function. This function determines what the agent does based on what it has seen. A percept refers to the agent's sensory inputs at a single point in time. For example, a self-driving car's percepts might include camera imag...
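An agent function of the kind described above maps the percept sequence seen so far to an action. A deliberately trivial sketch, a thermostat whose action depends only on the latest percept (a simple reflex agent); the names and thresholds are illustrative:

```python
def thermostat_agent(percept_history):
    """Agent function: map the percept sequence to an action.

    Percepts are temperature readings; only the most recent one matters
    here, which is what makes this a simple reflex agent.
    """
    latest = percept_history[-1]
    if latest < 18.0:
        return "heat_on"
    if latest > 24.0:
        return "heat_off"
    return "no_op"

percepts = []
for reading in (15.0, 21.0, 26.0):
    percepts.append(reading)
    print(reading, "->", thermostat_agent(percepts))
# 15.0 -> heat_on, 21.0 -> no_op, 26.0 -> heat_off
```

A more capable agent would use the whole history, e.g. to estimate a trend, which is exactly where the agent-function abstraction separates what the agent does from how it is implemented.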
Intelligent agent : Intelligent agents can be organized hierarchically into multiple "sub-agents." These sub-agents handle lower-level functions, and together with the main agent, they form a complete system capable of executing complex tasks and achieving challenging goals. Typically, an agent is structured by dividin...